Linux-PM Archive on lore.kernel.org
* [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface
@ 2020-02-11 22:51 Rafael J. Wysocki
  2020-02-11 22:52 ` [PATCH 01/28] PM: QoS: Drop debugfs interface Rafael J. Wysocki
                   ` (31 more replies)
  0 siblings, 32 replies; 74+ messages in thread
From: Rafael J. Wysocki @ 2020-02-11 22:51 UTC (permalink / raw)
  To: Linux PM; +Cc: LKML, Amit Kucheria

Hi All,

This series of patches is based on the observation that after commit
c3082a674f46 ("PM: QoS: Get rid of unused flags") the only global PM QoS class
still in use is PM_QOS_CPU_DMA_LATENCY, but a significant amount of code
dedicated to the handling of global PM QoS classes in general remains.  That
code takes up space and adds needless overhead, so it is better to get rid of
it.

Moreover, with that superfluous code removed, the remaining interface for
adding CPU latency QoS requests stands out as inelegant and confusing, so it
is better to clean it up.

Patches [01/28-12/28] carry out the first part described above, which also
includes some assorted cleanups of the core PM QoS code that remains in place.

Patches [13/28-25/28] rework the CPU latency QoS interface (in the classic
"define stubs, migrate users, change the API proper" manner), patches
[26/28-27/28] update the general comments and documentation to match the code
after the previous changes, and the last patch makes the CPU latency QoS
depend on CPU_IDLE (because cpuidle is the only user of its target value
today).

The majority of the patches in this series don't change the functionality of
the code at all (at least not intentionally).

Please refer to the changelogs of individual patches for details.

Thanks!




^ permalink raw reply	[flat|nested] 74+ messages in thread

* [PATCH 01/28] PM: QoS: Drop debugfs interface
  2020-02-11 22:51 [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface Rafael J. Wysocki
@ 2020-02-11 22:52 ` Rafael J. Wysocki
  2020-02-11 22:58 ` [PATCH 02/28] PM: QoS: Drop pm_qos_update_request_timeout() Rafael J. Wysocki
                   ` (30 subsequent siblings)
  31 siblings, 0 replies; 74+ messages in thread
From: Rafael J. Wysocki @ 2020-02-11 22:52 UTC (permalink / raw)
  To: Linux PM; +Cc: LKML, Amit Kucheria

From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>

After commit c3082a674f46 ("PM: QoS: Get rid of unused flags") the
only global PM QoS class in use is PM_QOS_CPU_DMA_LATENCY, and the
existing PM QoS debugfs interface has become overly complicated, as
it still takes into account other possible PM QoS classes that are
not there any more.  It is also not particularly useful: the "type"
of PM_QOS_CPU_DMA_LATENCY is known, its aggregate value can be read
from /dev/cpu_dma_latency, and the number of requests in the queue
does not really matter.  There are no known users depending on it.
Moreover, there are dedicated trace events that can be used for
tracking PM QoS usage with much higher precision.

For these reasons, drop the PM QoS debugfs interface altogether.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---
 kernel/power/qos.c | 73 ++----------------------------------------------------
 1 file changed, 2 insertions(+), 71 deletions(-)

diff --git a/kernel/power/qos.c b/kernel/power/qos.c
index 83edf8698118..d932fa42e8e4 100644
--- a/kernel/power/qos.c
+++ b/kernel/power/qos.c
@@ -137,69 +137,6 @@ static inline void pm_qos_set_value(struct pm_qos_constraints *c, s32 value)
 	c->target_value = value;
 }
 
-static int pm_qos_debug_show(struct seq_file *s, void *unused)
-{
-	struct pm_qos_object *qos = (struct pm_qos_object *)s->private;
-	struct pm_qos_constraints *c;
-	struct pm_qos_request *req;
-	char *type;
-	unsigned long flags;
-	int tot_reqs = 0;
-	int active_reqs = 0;
-
-	if (IS_ERR_OR_NULL(qos)) {
-		pr_err("%s: bad qos param!\n", __func__);
-		return -EINVAL;
-	}
-	c = qos->constraints;
-	if (IS_ERR_OR_NULL(c)) {
-		pr_err("%s: Bad constraints on qos?\n", __func__);
-		return -EINVAL;
-	}
-
-	/* Lock to ensure we have a snapshot */
-	spin_lock_irqsave(&pm_qos_lock, flags);
-	if (plist_head_empty(&c->list)) {
-		seq_puts(s, "Empty!\n");
-		goto out;
-	}
-
-	switch (c->type) {
-	case PM_QOS_MIN:
-		type = "Minimum";
-		break;
-	case PM_QOS_MAX:
-		type = "Maximum";
-		break;
-	case PM_QOS_SUM:
-		type = "Sum";
-		break;
-	default:
-		type = "Unknown";
-	}
-
-	plist_for_each_entry(req, &c->list, node) {
-		char *state = "Default";
-
-		if ((req->node).prio != c->default_value) {
-			active_reqs++;
-			state = "Active";
-		}
-		tot_reqs++;
-		seq_printf(s, "%d: %d: %s\n", tot_reqs,
-			   (req->node).prio, state);
-	}
-
-	seq_printf(s, "Type=%s, Value=%d, Requests: active=%d / total=%d\n",
-		   type, pm_qos_get_value(c), active_reqs, tot_reqs);
-
-out:
-	spin_unlock_irqrestore(&pm_qos_lock, flags);
-	return 0;
-}
-
-DEFINE_SHOW_ATTRIBUTE(pm_qos_debug);
-
 /**
  * pm_qos_update_target - manages the constraints list and calls the notifiers
  *  if needed
@@ -529,15 +466,12 @@ int pm_qos_remove_notifier(int pm_qos_class, struct notifier_block *notifier)
 EXPORT_SYMBOL_GPL(pm_qos_remove_notifier);
 
 /* User space interface to PM QoS classes via misc devices */
-static int register_pm_qos_misc(struct pm_qos_object *qos, struct dentry *d)
+static int register_pm_qos_misc(struct pm_qos_object *qos)
 {
 	qos->pm_qos_power_miscdev.minor = MISC_DYNAMIC_MINOR;
 	qos->pm_qos_power_miscdev.name = qos->name;
 	qos->pm_qos_power_miscdev.fops = &pm_qos_power_fops;
 
-	debugfs_create_file(qos->name, S_IRUGO, d, (void *)qos,
-			    &pm_qos_debug_fops);
-
 	return misc_register(&qos->pm_qos_power_miscdev);
 }
 
@@ -631,14 +565,11 @@ static int __init pm_qos_power_init(void)
 {
 	int ret = 0;
 	int i;
-	struct dentry *d;
 
 	BUILD_BUG_ON(ARRAY_SIZE(pm_qos_array) != PM_QOS_NUM_CLASSES);
 
-	d = debugfs_create_dir("pm_qos", NULL);
-
 	for (i = PM_QOS_CPU_DMA_LATENCY; i < PM_QOS_NUM_CLASSES; i++) {
-		ret = register_pm_qos_misc(pm_qos_array[i], d);
+		ret = register_pm_qos_misc(pm_qos_array[i]);
 		if (ret < 0) {
 			pr_err("%s: %s setup failed\n",
 			       __func__, pm_qos_array[i]->name);
-- 
2.16.4







* [PATCH 02/28] PM: QoS: Drop pm_qos_update_request_timeout()
  2020-02-11 22:51 [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface Rafael J. Wysocki
  2020-02-11 22:52 ` [PATCH 01/28] PM: QoS: Drop debugfs interface Rafael J. Wysocki
@ 2020-02-11 22:58 ` Rafael J. Wysocki
  2020-02-11 22:58 ` [PATCH 03/28] PM: QoS: Drop the PM_QOS_SUM QoS type Rafael J. Wysocki
                   ` (29 subsequent siblings)
  31 siblings, 0 replies; 74+ messages in thread
From: Rafael J. Wysocki @ 2020-02-11 22:58 UTC (permalink / raw)
  To: Linux PM; +Cc: LKML, Amit Kucheria, Steven Rostedt

From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>

The pm_qos_update_request_timeout() function is not called from
anywhere, so drop it along with the work member in struct
pm_qos_request needed by it.

Also drop the pm_qos_update_request_timeout trace event, which is
triggered only by that function (so it never fires at all), and
update the trace events documentation accordingly.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---
 Documentation/trace/events-power.rst |  2 --
 include/linux/pm_qos.h               |  4 ---
 include/trace/events/power.h         | 24 ------------------
 kernel/power/qos.c                   | 48 ------------------------------------
 4 files changed, 78 deletions(-)

diff --git a/Documentation/trace/events-power.rst b/Documentation/trace/events-power.rst
index 2ef318962e29..eec7453a168e 100644
--- a/Documentation/trace/events-power.rst
+++ b/Documentation/trace/events-power.rst
@@ -78,11 +78,9 @@ target/flags update.
   pm_qos_add_request                 "pm_qos_class=%s value=%d"
   pm_qos_update_request              "pm_qos_class=%s value=%d"
   pm_qos_remove_request              "pm_qos_class=%s value=%d"
-  pm_qos_update_request_timeout      "pm_qos_class=%s value=%d, timeout_us=%ld"
 
 The first parameter gives the QoS class name (e.g. "CPU_DMA_LATENCY").
 The second parameter is value to be added/updated/removed.
-The third parameter is timeout value in usec.
 ::
 
   pm_qos_update_target               "action=%s prev_value=%d curr_value=%d"
diff --git a/include/linux/pm_qos.h b/include/linux/pm_qos.h
index 19eafca5680e..4747bdb6bed2 100644
--- a/include/linux/pm_qos.h
+++ b/include/linux/pm_qos.h
@@ -8,7 +8,6 @@
 #include <linux/plist.h>
 #include <linux/notifier.h>
 #include <linux/device.h>
-#include <linux/workqueue.h>
 
 enum {
 	PM_QOS_RESERVED = 0,
@@ -43,7 +42,6 @@ enum pm_qos_flags_status {
 struct pm_qos_request {
 	struct plist_node node;
 	int pm_qos_class;
-	struct delayed_work work; /* for pm_qos_update_request_timeout */
 };
 
 struct pm_qos_flags_request {
@@ -149,8 +147,6 @@ void pm_qos_add_request(struct pm_qos_request *req, int pm_qos_class,
 			s32 value);
 void pm_qos_update_request(struct pm_qos_request *req,
 			   s32 new_value);
-void pm_qos_update_request_timeout(struct pm_qos_request *req,
-				   s32 new_value, unsigned long timeout_us);
 void pm_qos_remove_request(struct pm_qos_request *req);
 
 int pm_qos_request(int pm_qos_class);
diff --git a/include/trace/events/power.h b/include/trace/events/power.h
index 7457e238e1b7..ecf39daabf16 100644
--- a/include/trace/events/power.h
+++ b/include/trace/events/power.h
@@ -404,30 +404,6 @@ DEFINE_EVENT(pm_qos_request, pm_qos_remove_request,
 	TP_ARGS(pm_qos_class, value)
 );
 
-TRACE_EVENT(pm_qos_update_request_timeout,
-
-	TP_PROTO(int pm_qos_class, s32 value, unsigned long timeout_us),
-
-	TP_ARGS(pm_qos_class, value, timeout_us),
-
-	TP_STRUCT__entry(
-		__field( int,                    pm_qos_class   )
-		__field( s32,                    value          )
-		__field( unsigned long,          timeout_us     )
-	),
-
-	TP_fast_assign(
-		__entry->pm_qos_class = pm_qos_class;
-		__entry->value = value;
-		__entry->timeout_us = timeout_us;
-	),
-
-	TP_printk("pm_qos_class=%s value=%d, timeout_us=%ld",
-		  __print_symbolic(__entry->pm_qos_class,
-			{ PM_QOS_CPU_DMA_LATENCY,	"CPU_DMA_LATENCY" }),
-		  __entry->value, __entry->timeout_us)
-);
-
 DECLARE_EVENT_CLASS(pm_qos_update,
 
 	TP_PROTO(enum pm_qos_req_action action, int prev_value, int curr_value),
diff --git a/kernel/power/qos.c b/kernel/power/qos.c
index d932fa42e8e4..67dab7f330e4 100644
--- a/kernel/power/qos.c
+++ b/kernel/power/qos.c
@@ -295,21 +295,6 @@ static void __pm_qos_update_request(struct pm_qos_request *req,
 			&req->node, PM_QOS_UPDATE_REQ, new_value);
 }
 
-/**
- * pm_qos_work_fn - the timeout handler of pm_qos_update_request_timeout
- * @work: work struct for the delayed work (timeout)
- *
- * This cancels the timeout request by falling back to the default at timeout.
- */
-static void pm_qos_work_fn(struct work_struct *work)
-{
-	struct pm_qos_request *req = container_of(to_delayed_work(work),
-						  struct pm_qos_request,
-						  work);
-
-	__pm_qos_update_request(req, PM_QOS_DEFAULT_VALUE);
-}
-
 /**
  * pm_qos_add_request - inserts new qos request into the list
  * @req: pointer to a preallocated handle
@@ -334,7 +319,6 @@ void pm_qos_add_request(struct pm_qos_request *req,
 		return;
 	}
 	req->pm_qos_class = pm_qos_class;
-	INIT_DELAYED_WORK(&req->work, pm_qos_work_fn);
 	trace_pm_qos_add_request(pm_qos_class, value);
 	pm_qos_update_target(pm_qos_array[pm_qos_class]->constraints,
 			     &req->node, PM_QOS_ADD_REQ, value);
@@ -362,40 +346,10 @@ void pm_qos_update_request(struct pm_qos_request *req,
 		return;
 	}
 
-	cancel_delayed_work_sync(&req->work);
 	__pm_qos_update_request(req, new_value);
 }
 EXPORT_SYMBOL_GPL(pm_qos_update_request);
 
-/**
- * pm_qos_update_request_timeout - modifies an existing qos request temporarily.
- * @req : handle to list element holding a pm_qos request to use
- * @new_value: defines the temporal qos request
- * @timeout_us: the effective duration of this qos request in usecs.
- *
- * After timeout_us, this qos request is cancelled automatically.
- */
-void pm_qos_update_request_timeout(struct pm_qos_request *req, s32 new_value,
-				   unsigned long timeout_us)
-{
-	if (!req)
-		return;
-	if (WARN(!pm_qos_request_active(req),
-		 "%s called for unknown object.", __func__))
-		return;
-
-	cancel_delayed_work_sync(&req->work);
-
-	trace_pm_qos_update_request_timeout(req->pm_qos_class,
-					    new_value, timeout_us);
-	if (new_value != req->node.prio)
-		pm_qos_update_target(
-			pm_qos_array[req->pm_qos_class]->constraints,
-			&req->node, PM_QOS_UPDATE_REQ, new_value);
-
-	schedule_delayed_work(&req->work, usecs_to_jiffies(timeout_us));
-}
-
 /**
  * pm_qos_remove_request - modifies an existing qos request
  * @req: handle to request list element
@@ -415,8 +369,6 @@ void pm_qos_remove_request(struct pm_qos_request *req)
 		return;
 	}
 
-	cancel_delayed_work_sync(&req->work);
-
 	trace_pm_qos_remove_request(req->pm_qos_class, PM_QOS_DEFAULT_VALUE);
 	pm_qos_update_target(pm_qos_array[req->pm_qos_class]->constraints,
 			     &req->node, PM_QOS_REMOVE_REQ,
-- 
2.16.4







* [PATCH 03/28] PM: QoS: Drop the PM_QOS_SUM QoS type
  2020-02-11 22:51 [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface Rafael J. Wysocki
  2020-02-11 22:52 ` [PATCH 01/28] PM: QoS: Drop debugfs interface Rafael J. Wysocki
  2020-02-11 22:58 ` [PATCH 02/28] PM: QoS: Drop pm_qos_update_request_timeout() Rafael J. Wysocki
@ 2020-02-11 22:58 ` Rafael J. Wysocki
  2020-02-11 22:58 ` [PATCH 04/28] PM: QoS: Clean up pm_qos_update_target() and pm_qos_update_flags() Rafael J. Wysocki
                   ` (28 subsequent siblings)
  31 siblings, 0 replies; 74+ messages in thread
From: Rafael J. Wysocki @ 2020-02-11 22:58 UTC (permalink / raw)
  To: Linux PM; +Cc: LKML, Amit Kucheria

From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>

The PM_QOS_SUM QoS type is not used, so drop it along with the
code referring to it in pm_qos_get_value() and the related local
variables in there.

No intentional functional impact.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---
 include/linux/pm_qos.h | 1 -
 kernel/power/qos.c     | 9 ---------
 2 files changed, 10 deletions(-)

diff --git a/include/linux/pm_qos.h b/include/linux/pm_qos.h
index 4747bdb6bed2..48bfb96a9360 100644
--- a/include/linux/pm_qos.h
+++ b/include/linux/pm_qos.h
@@ -53,7 +53,6 @@ enum pm_qos_type {
 	PM_QOS_UNITIALIZED,
 	PM_QOS_MAX,		/* return the largest value */
 	PM_QOS_MIN,		/* return the smallest value */
-	PM_QOS_SUM		/* return the sum */
 };
 
 /*
diff --git a/kernel/power/qos.c b/kernel/power/qos.c
index 67dab7f330e4..a6be7faa1974 100644
--- a/kernel/power/qos.c
+++ b/kernel/power/qos.c
@@ -101,9 +101,6 @@ static const struct file_operations pm_qos_power_fops = {
 /* unlocked internal variant */
 static inline int pm_qos_get_value(struct pm_qos_constraints *c)
 {
-	struct plist_node *node;
-	int total_value = 0;
-
 	if (plist_head_empty(&c->list))
 		return c->no_constraint_value;
 
@@ -114,12 +111,6 @@ static inline int pm_qos_get_value(struct pm_qos_constraints *c)
 	case PM_QOS_MAX:
 		return plist_last(&c->list)->prio;
 
-	case PM_QOS_SUM:
-		plist_for_each(node, &c->list)
-			total_value += node->prio;
-
-		return total_value;
-
 	default:
 		/* runtime check for not using enum */
 		BUG();
-- 
2.16.4







* [PATCH 04/28] PM: QoS: Clean up pm_qos_update_target() and pm_qos_update_flags()
  2020-02-11 22:51 [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface Rafael J. Wysocki
                   ` (2 preceding siblings ...)
  2020-02-11 22:58 ` [PATCH 03/28] PM: QoS: Drop the PM_QOS_SUM QoS type Rafael J. Wysocki
@ 2020-02-11 22:58 ` Rafael J. Wysocki
  2020-02-11 22:58 ` [PATCH 05/28] PM: QoS: Clean up pm_qos_read_value() and pm_qos_get/set_value() Rafael J. Wysocki
                   ` (27 subsequent siblings)
  31 siblings, 0 replies; 74+ messages in thread
From: Rafael J. Wysocki @ 2020-02-11 22:58 UTC (permalink / raw)
  To: Linux PM; +Cc: LKML, Amit Kucheria

From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>

Clean up the pm_qos_update_target() function:
 * Update its kerneldoc comment.
 * Drop the redundant ret local variable from it.
 * Reorder definitions of local variables in it.
 * Update a comment in it.

Also update the kerneldoc comment of pm_qos_update_flags() (e.g.
notifiers are not called by it any more) and add one empty line
to its body (for more visual clarity).

No intentional functional impact.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---
 kernel/power/qos.c | 56 ++++++++++++++++++++++++++++--------------------------
 1 file changed, 29 insertions(+), 27 deletions(-)

diff --git a/kernel/power/qos.c b/kernel/power/qos.c
index a6be7faa1974..6a36809d6160 100644
--- a/kernel/power/qos.c
+++ b/kernel/power/qos.c
@@ -129,24 +129,30 @@ static inline void pm_qos_set_value(struct pm_qos_constraints *c, s32 value)
 }
 
 /**
- * pm_qos_update_target - manages the constraints list and calls the notifiers
- *  if needed
- * @c: constraints data struct
- * @node: request to add to the list, to update or to remove
- * @action: action to take on the constraints list
- * @value: value of the request to add or update
+ * pm_qos_update_target - Update a list of PM QoS constraint requests.
+ * @c: List of PM QoS requests.
+ * @node: Target list entry.
+ * @action: Action to carry out (add, update or remove).
+ * @value: New request value for the target list entry.
  *
- * This function returns 1 if the aggregated constraint value has changed, 0
- *  otherwise.
+ * Update the given list of PM QoS constraint requests, @c, by carrying an
+ * @action involving the @node list entry and @value on it.
+ *
+ * The recognized values of @action are PM_QOS_ADD_REQ (store @value in @node
+ * and add it to the list), PM_QOS_UPDATE_REQ (remove @node from the list, store
+ * @value in it and add it to the list again), and PM_QOS_REMOVE_REQ (remove
+ * @node from the list, ignore @value).
+ *
+ * Return: 1 if the aggregate constraint value has changed, 0  otherwise.
  */
 int pm_qos_update_target(struct pm_qos_constraints *c, struct plist_node *node,
 			 enum pm_qos_req_action action, int value)
 {
-	unsigned long flags;
 	int prev_value, curr_value, new_value;
-	int ret;
+	unsigned long flags;
 
 	spin_lock_irqsave(&pm_qos_lock, flags);
+
 	prev_value = pm_qos_get_value(c);
 	if (value == PM_QOS_DEFAULT_VALUE)
 		new_value = c->default_value;
@@ -159,9 +165,8 @@ int pm_qos_update_target(struct pm_qos_constraints *c, struct plist_node *node,
 		break;
 	case PM_QOS_UPDATE_REQ:
 		/*
-		 * to change the list, we atomically remove, reinit
-		 * with new value and add, then see if the extremal
-		 * changed
+		 * To change the list, atomically remove, reinit with new value
+		 * and add, then see if the aggregate has changed.
 		 */
 		plist_del(node, &c->list);
 		/* fall through */
@@ -180,16 +185,14 @@ int pm_qos_update_target(struct pm_qos_constraints *c, struct plist_node *node,
 	spin_unlock_irqrestore(&pm_qos_lock, flags);
 
 	trace_pm_qos_update_target(action, prev_value, curr_value);
-	if (prev_value != curr_value) {
-		ret = 1;
-		if (c->notifiers)
-			blocking_notifier_call_chain(c->notifiers,
-						     (unsigned long)curr_value,
-						     NULL);
-	} else {
-		ret = 0;
-	}
-	return ret;
+
+	if (prev_value == curr_value)
+		return 0;
+
+	if (c->notifiers)
+		blocking_notifier_call_chain(c->notifiers, curr_value, NULL);
+
+	return 1;
 }
 
 /**
@@ -211,14 +214,12 @@ static void pm_qos_flags_remove_req(struct pm_qos_flags *pqf,
 
 /**
  * pm_qos_update_flags - Update a set of PM QoS flags.
- * @pqf: Set of flags to update.
+ * @pqf: Set of PM QoS flags to update.
  * @req: Request to add to the set, to modify, or to remove from the set.
  * @action: Action to take on the set.
  * @val: Value of the request to add or modify.
  *
- * Update the given set of PM QoS flags and call notifiers if the aggregate
- * value has changed.  Returns 1 if the aggregate constraint value has changed,
- * 0 otherwise.
+ * Return: 1 if the aggregate constraint value has changed, 0 otherwise.
  */
 bool pm_qos_update_flags(struct pm_qos_flags *pqf,
 			 struct pm_qos_flags_request *req,
@@ -254,6 +255,7 @@ bool pm_qos_update_flags(struct pm_qos_flags *pqf,
 	spin_unlock_irqrestore(&pm_qos_lock, irqflags);
 
 	trace_pm_qos_update_flags(action, prev_value, curr_value);
+
 	return prev_value != curr_value;
 }
 
-- 
2.16.4







* [PATCH 05/28] PM: QoS: Clean up pm_qos_read_value() and pm_qos_get/set_value()
  2020-02-11 22:51 [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface Rafael J. Wysocki
                   ` (3 preceding siblings ...)
  2020-02-11 22:58 ` [PATCH 04/28] PM: QoS: Clean up pm_qos_update_target() and pm_qos_update_flags() Rafael J. Wysocki
@ 2020-02-11 22:58 ` Rafael J. Wysocki
  2020-02-11 22:59 ` [PATCH 06/28] PM: QoS: Drop iterations over global QoS classes Rafael J. Wysocki
                   ` (26 subsequent siblings)
  31 siblings, 0 replies; 74+ messages in thread
From: Rafael J. Wysocki @ 2020-02-11 22:58 UTC (permalink / raw)
  To: Linux PM; +Cc: LKML, Amit Kucheria

From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>

Move the definition of pm_qos_read_value() before the one of
pm_qos_get_value() and add a kerneldoc comment to it (as it is
not static).

Also replace the BUG() in pm_qos_get_value() with WARN() (to
prevent the kernel from crashing if an unknown PM QoS type is
used by mistake) and drop the comment next to it, which is no
longer necessary.

Additionally, drop the unnecessary inline modifier from the header
of pm_qos_set_value().

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---
 kernel/power/qos.c | 22 ++++++++++++----------
 1 file changed, 12 insertions(+), 10 deletions(-)

diff --git a/kernel/power/qos.c b/kernel/power/qos.c
index 6a36809d6160..f09eca5ffe07 100644
--- a/kernel/power/qos.c
+++ b/kernel/power/qos.c
@@ -98,8 +98,16 @@ static const struct file_operations pm_qos_power_fops = {
 	.llseek = noop_llseek,
 };
 
-/* unlocked internal variant */
-static inline int pm_qos_get_value(struct pm_qos_constraints *c)
+/**
+ * pm_qos_read_value - Return the current effective constraint value.
+ * @c: List of PM QoS constraint requests.
+ */
+s32 pm_qos_read_value(struct pm_qos_constraints *c)
+{
+	return c->target_value;
+}
+
+static int pm_qos_get_value(struct pm_qos_constraints *c)
 {
 	if (plist_head_empty(&c->list))
 		return c->no_constraint_value;
@@ -112,18 +120,12 @@ static inline int pm_qos_get_value(struct pm_qos_constraints *c)
 		return plist_last(&c->list)->prio;
 
 	default:
-		/* runtime check for not using enum */
-		BUG();
+		WARN(1, "Unknown PM QoS type in %s\n", __func__);
 		return PM_QOS_DEFAULT_VALUE;
 	}
 }
 
-s32 pm_qos_read_value(struct pm_qos_constraints *c)
-{
-	return c->target_value;
-}
-
-static inline void pm_qos_set_value(struct pm_qos_constraints *c, s32 value)
+static void pm_qos_set_value(struct pm_qos_constraints *c, s32 value)
 {
 	c->target_value = value;
 }
-- 
2.16.4







* [PATCH 06/28] PM: QoS: Drop iterations over global QoS classes
  2020-02-11 22:51 [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface Rafael J. Wysocki
                   ` (4 preceding siblings ...)
  2020-02-11 22:58 ` [PATCH 05/28] PM: QoS: Clean up pm_qos_read_value() and pm_qos_get/set_value() Rafael J. Wysocki
@ 2020-02-11 22:59 ` Rafael J. Wysocki
  2020-02-11 23:00 ` [PATCH 07/28] PM: QoS: Clean up misc device file operations Rafael J. Wysocki
                   ` (25 subsequent siblings)
  31 siblings, 0 replies; 74+ messages in thread
From: Rafael J. Wysocki @ 2020-02-11 22:59 UTC (permalink / raw)
  To: Linux PM; +Cc: LKML, Amit Kucheria

From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>

After commit c3082a674f46 ("PM: QoS: Get rid of unused flags") the
only global PM QoS class in use is PM_QOS_CPU_DMA_LATENCY, so it
does not really make sense to iterate over global QoS classes
anywhere, since there is only one.

Remove iterations over global QoS classes from the code and use
PM_QOS_CPU_DMA_LATENCY as the target class directly where needed.

No intentional functional impact.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---
 kernel/power/qos.c | 52 ++++++++++++++--------------------------------------
 1 file changed, 14 insertions(+), 38 deletions(-)

diff --git a/kernel/power/qos.c b/kernel/power/qos.c
index f09eca5ffe07..57ff542a4f9d 100644
--- a/kernel/power/qos.c
+++ b/kernel/power/qos.c
@@ -412,7 +412,8 @@ int pm_qos_remove_notifier(int pm_qos_class, struct notifier_block *notifier)
 }
 EXPORT_SYMBOL_GPL(pm_qos_remove_notifier);
 
-/* User space interface to PM QoS classes via misc devices */
+/* User space interface to global PM QoS via misc device. */
+
 static int register_pm_qos_misc(struct pm_qos_object *qos)
 {
 	qos->pm_qos_power_miscdev.minor = MISC_DYNAMIC_MINOR;
@@ -422,35 +423,18 @@ static int register_pm_qos_misc(struct pm_qos_object *qos)
 	return misc_register(&qos->pm_qos_power_miscdev);
 }
 
-static int find_pm_qos_object_by_minor(int minor)
-{
-	int pm_qos_class;
-
-	for (pm_qos_class = PM_QOS_CPU_DMA_LATENCY;
-		pm_qos_class < PM_QOS_NUM_CLASSES; pm_qos_class++) {
-		if (minor ==
-			pm_qos_array[pm_qos_class]->pm_qos_power_miscdev.minor)
-			return pm_qos_class;
-	}
-	return -1;
-}
-
 static int pm_qos_power_open(struct inode *inode, struct file *filp)
 {
-	long pm_qos_class;
+	struct pm_qos_request *req;
 
-	pm_qos_class = find_pm_qos_object_by_minor(iminor(inode));
-	if (pm_qos_class >= PM_QOS_CPU_DMA_LATENCY) {
-		struct pm_qos_request *req = kzalloc(sizeof(*req), GFP_KERNEL);
-		if (!req)
-			return -ENOMEM;
+	req = kzalloc(sizeof(*req), GFP_KERNEL);
+	if (!req)
+		return -ENOMEM;
 
-		pm_qos_add_request(req, pm_qos_class, PM_QOS_DEFAULT_VALUE);
-		filp->private_data = req;
+	pm_qos_add_request(req, PM_QOS_CPU_DMA_LATENCY, PM_QOS_DEFAULT_VALUE);
+	filp->private_data = req;
 
-		return 0;
-	}
-	return -EPERM;
+	return 0;
 }
 
 static int pm_qos_power_release(struct inode *inode, struct file *filp)
@@ -464,7 +448,6 @@ static int pm_qos_power_release(struct inode *inode, struct file *filp)
 	return 0;
 }
 
-
 static ssize_t pm_qos_power_read(struct file *filp, char __user *buf,
 		size_t count, loff_t *f_pos)
 {
@@ -507,26 +490,19 @@ static ssize_t pm_qos_power_write(struct file *filp, const char __user *buf,
 	return count;
 }
 
-
 static int __init pm_qos_power_init(void)
 {
-	int ret = 0;
-	int i;
+	int ret;
 
 	BUILD_BUG_ON(ARRAY_SIZE(pm_qos_array) != PM_QOS_NUM_CLASSES);
 
-	for (i = PM_QOS_CPU_DMA_LATENCY; i < PM_QOS_NUM_CLASSES; i++) {
-		ret = register_pm_qos_misc(pm_qos_array[i]);
-		if (ret < 0) {
-			pr_err("%s: %s setup failed\n",
-			       __func__, pm_qos_array[i]->name);
-			return ret;
-		}
-	}
+	ret = register_pm_qos_misc(pm_qos_array[PM_QOS_CPU_DMA_LATENCY]);
+	if (ret < 0)
+		pr_err("%s: %s setup failed\n", __func__,
+		       pm_qos_array[PM_QOS_CPU_DMA_LATENCY]->name);
 
 	return ret;
 }
-
 late_initcall(pm_qos_power_init);
 
 /* Definitions related to the frequency QoS below. */
-- 
2.16.4







* [PATCH 07/28] PM: QoS: Clean up misc device file operations
  2020-02-11 22:51 [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface Rafael J. Wysocki
                   ` (5 preceding siblings ...)
  2020-02-11 22:59 ` [PATCH 06/28] PM: QoS: Drop iterations over global QoS classes Rafael J. Wysocki
@ 2020-02-11 23:00 ` Rafael J. Wysocki
  2020-02-11 23:01 ` [PATCH 08/28] PM: QoS: Redefine struct pm_qos_request and drop struct pm_qos_object Rafael J. Wysocki
                   ` (24 subsequent siblings)
  31 siblings, 0 replies; 74+ messages in thread
From: Rafael J. Wysocki @ 2020-02-11 23:00 UTC (permalink / raw)
  To: Linux PM; +Cc: LKML, Amit Kucheria

From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>

Reorder the code to avoid the need for separate forward declarations
of the pm_qos_power_*() family of functions and drop those
declarations.

Also clean up the internals of those functions to consolidate checks,
avoid redundant local variables, and so on.

No intentional functional impact.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---
 kernel/power/qos.c | 62 +++++++++++++++++++++++-------------------------------
 1 file changed, 26 insertions(+), 36 deletions(-)

diff --git a/kernel/power/qos.c b/kernel/power/qos.c
index 57ff542a4f9d..9f67584d4466 100644
--- a/kernel/power/qos.c
+++ b/kernel/power/qos.c
@@ -83,21 +83,6 @@ static struct pm_qos_object *pm_qos_array[] = {
 	&cpu_dma_pm_qos,
 };
 
-static ssize_t pm_qos_power_write(struct file *filp, const char __user *buf,
-		size_t count, loff_t *f_pos);
-static ssize_t pm_qos_power_read(struct file *filp, char __user *buf,
-		size_t count, loff_t *f_pos);
-static int pm_qos_power_open(struct inode *inode, struct file *filp);
-static int pm_qos_power_release(struct inode *inode, struct file *filp);
-
-static const struct file_operations pm_qos_power_fops = {
-	.write = pm_qos_power_write,
-	.read = pm_qos_power_read,
-	.open = pm_qos_power_open,
-	.release = pm_qos_power_release,
-	.llseek = noop_llseek,
-};
-
 /**
  * pm_qos_read_value - Return the current effective constraint value.
  * @c: List of PM QoS constraint requests.
@@ -414,15 +399,6 @@ EXPORT_SYMBOL_GPL(pm_qos_remove_notifier);
 
 /* User space interface to global PM QoS via misc device. */
 
-static int register_pm_qos_misc(struct pm_qos_object *qos)
-{
-	qos->pm_qos_power_miscdev.minor = MISC_DYNAMIC_MINOR;
-	qos->pm_qos_power_miscdev.name = qos->name;
-	qos->pm_qos_power_miscdev.fops = &pm_qos_power_fops;
-
-	return misc_register(&qos->pm_qos_power_miscdev);
-}
-
 static int pm_qos_power_open(struct inode *inode, struct file *filp)
 {
 	struct pm_qos_request *req;
@@ -439,9 +415,10 @@ static int pm_qos_power_open(struct inode *inode, struct file *filp)
 
 static int pm_qos_power_release(struct inode *inode, struct file *filp)
 {
-	struct pm_qos_request *req;
+	struct pm_qos_request *req = filp->private_data;
+
+	filp->private_data = NULL;
 
-	req = filp->private_data;
 	pm_qos_remove_request(req);
 	kfree(req);
 
@@ -449,15 +426,13 @@ static int pm_qos_power_release(struct inode *inode, struct file *filp)
 }
 
 static ssize_t pm_qos_power_read(struct file *filp, char __user *buf,
-		size_t count, loff_t *f_pos)
+				 size_t count, loff_t *f_pos)
 {
-	s32 value;
-	unsigned long flags;
 	struct pm_qos_request *req = filp->private_data;
+	unsigned long flags;
+	s32 value;
 
-	if (!req)
-		return -EINVAL;
-	if (!pm_qos_request_active(req))
+	if (!req || !pm_qos_request_active(req))
 		return -EINVAL;
 
 	spin_lock_irqsave(&pm_qos_lock, flags);
@@ -468,10 +443,9 @@ static ssize_t pm_qos_power_read(struct file *filp, char __user *buf,
 }
 
 static ssize_t pm_qos_power_write(struct file *filp, const char __user *buf,
-		size_t count, loff_t *f_pos)
+				  size_t count, loff_t *f_pos)
 {
 	s32 value;
-	struct pm_qos_request *req;
 
 	if (count == sizeof(s32)) {
 		if (copy_from_user(&value, buf, sizeof(s32)))
@@ -484,12 +458,28 @@ static ssize_t pm_qos_power_write(struct file *filp, const char __user *buf,
 			return ret;
 	}
 
-	req = filp->private_data;
-	pm_qos_update_request(req, value);
+	pm_qos_update_request(filp->private_data, value);
 
 	return count;
 }
 
+static const struct file_operations pm_qos_power_fops = {
+	.write = pm_qos_power_write,
+	.read = pm_qos_power_read,
+	.open = pm_qos_power_open,
+	.release = pm_qos_power_release,
+	.llseek = noop_llseek,
+};
+
+static int register_pm_qos_misc(struct pm_qos_object *qos)
+{
+	qos->pm_qos_power_miscdev.minor = MISC_DYNAMIC_MINOR;
+	qos->pm_qos_power_miscdev.name = qos->name;
+	qos->pm_qos_power_miscdev.fops = &pm_qos_power_fops;
+
+	return misc_register(&qos->pm_qos_power_miscdev);
+}
+
 static int __init pm_qos_power_init(void)
 {
 	int ret;
-- 
2.16.4






^ permalink raw reply	[flat|nested] 74+ messages in thread

* [PATCH 08/28] PM: QoS: Redefine struct pm_qos_request and drop struct pm_qos_object
  2020-02-11 22:51 [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface Rafael J. Wysocki
                   ` (6 preceding siblings ...)
  2020-02-11 23:00 ` [PATCH 07/28] PM: QoS: Clean up misc device file operations Rafael J. Wysocki
@ 2020-02-11 23:01 ` Rafael J. Wysocki
  2020-02-11 23:02 ` [PATCH 09/28] PM: QoS: Drop PM_QOS_CPU_DMA_LATENCY notifier chain Rafael J. Wysocki
                   ` (23 subsequent siblings)
  31 siblings, 0 replies; 74+ messages in thread
From: Rafael J. Wysocki @ 2020-02-11 23:01 UTC (permalink / raw)
  To: Linux PM; +Cc: LKML, Amit Kucheria

From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>

First, change the definition of struct pm_qos_request so that it
contains a struct pm_qos_constraints pointer (called "qos") instead
of a PM QoS class number (in preparation for dropping the PM QoS
classes concept altogether going forward) and move its definition
(along with the definition of struct pm_qos_flags_request that does
not change) after the definition of struct pm_qos_constraints.

Next, drop the definition of struct pm_qos_object, the null_pm_qos
and cpu_dma_pm_qos variables of that type, and pm_qos_array[] holding
pointers to them.  Change the code to refer to the pm_qos_constraints
structure directly, or to use the new qos pointer in struct
pm_qos_request, instead of going through pm_qos_array[].  Also update
kerneldoc comments that mention pm_qos_class to refer to
PM_QOS_CPU_DMA_LATENCY directly instead.

Finally, drop register_pm_qos_misc(), introduce cpu_latency_qos_miscdev
(with the name field set to "cpu_dma_latency") to implement the
CPU latency QoS interface in /dev/ and register it directly from
pm_qos_power_init().

After these changes the notion of PM QoS classes remains only in the
API (in the form of redundant function parameters that are ignored)
and in the definitions of PM QoS trace events.

While at it, some redundant local variables are dropped and a few
other minor cleanups are made.

No intentional functional impact.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---
 include/linux/pm_qos.h |  20 ++++----
 kernel/power/qos.c     | 121 +++++++++++++++++--------------------------------
 2 files changed, 51 insertions(+), 90 deletions(-)

diff --git a/include/linux/pm_qos.h b/include/linux/pm_qos.h
index 48bfb96a9360..bef110aa80cc 100644
--- a/include/linux/pm_qos.h
+++ b/include/linux/pm_qos.h
@@ -39,16 +39,6 @@ enum pm_qos_flags_status {
 
 #define PM_QOS_FLAG_NO_POWER_OFF	(1 << 0)
 
-struct pm_qos_request {
-	struct plist_node node;
-	int pm_qos_class;
-};
-
-struct pm_qos_flags_request {
-	struct list_head node;
-	s32 flags;	/* Do not change to 64 bit */
-};
-
 enum pm_qos_type {
 	PM_QOS_UNITIALIZED,
 	PM_QOS_MAX,		/* return the largest value */
@@ -69,6 +59,16 @@ struct pm_qos_constraints {
 	struct blocking_notifier_head *notifiers;
 };
 
+struct pm_qos_request {
+	struct plist_node node;
+	struct pm_qos_constraints *qos;
+};
+
+struct pm_qos_flags_request {
+	struct list_head node;
+	s32 flags;	/* Do not change to 64 bit */
+};
+
 struct pm_qos_flags {
 	struct list_head list;
 	s32 effective_flags;	/* Do not change to 64 bit */
diff --git a/kernel/power/qos.c b/kernel/power/qos.c
index 9f67584d4466..952c5f55e23c 100644
--- a/kernel/power/qos.c
+++ b/kernel/power/qos.c
@@ -54,16 +54,8 @@
  * or pm_qos_object list and pm_qos_objects need to happen with pm_qos_lock
  * held, taken with _irqsave.  One lock to rule them all
  */
-struct pm_qos_object {
-	struct pm_qos_constraints *constraints;
-	struct miscdevice pm_qos_power_miscdev;
-	char *name;
-};
-
 static DEFINE_SPINLOCK(pm_qos_lock);
 
-static struct pm_qos_object null_pm_qos;
-
 static BLOCKING_NOTIFIER_HEAD(cpu_dma_lat_notifier);
 static struct pm_qos_constraints cpu_dma_constraints = {
 	.list = PLIST_HEAD_INIT(cpu_dma_constraints.list),
@@ -73,15 +65,6 @@ static struct pm_qos_constraints cpu_dma_constraints = {
 	.type = PM_QOS_MIN,
 	.notifiers = &cpu_dma_lat_notifier,
 };
-static struct pm_qos_object cpu_dma_pm_qos = {
-	.constraints = &cpu_dma_constraints,
-	.name = "cpu_dma_latency",
-};
-
-static struct pm_qos_object *pm_qos_array[] = {
-	&null_pm_qos,
-	&cpu_dma_pm_qos,
-};
 
 /**
  * pm_qos_read_value - Return the current effective constraint value.
@@ -248,46 +231,34 @@ bool pm_qos_update_flags(struct pm_qos_flags *pqf,
 
 /**
  * pm_qos_request - returns current system wide qos expectation
- * @pm_qos_class: identification of which qos value is requested
+ * @pm_qos_class: Ignored.
  *
  * This function returns the current target value.
  */
 int pm_qos_request(int pm_qos_class)
 {
-	return pm_qos_read_value(pm_qos_array[pm_qos_class]->constraints);
+	return pm_qos_read_value(&cpu_dma_constraints);
 }
 EXPORT_SYMBOL_GPL(pm_qos_request);
 
 int pm_qos_request_active(struct pm_qos_request *req)
 {
-	return req->pm_qos_class != 0;
+	return req->qos == &cpu_dma_constraints;
 }
 EXPORT_SYMBOL_GPL(pm_qos_request_active);
 
-static void __pm_qos_update_request(struct pm_qos_request *req,
-			   s32 new_value)
-{
-	trace_pm_qos_update_request(req->pm_qos_class, new_value);
-
-	if (new_value != req->node.prio)
-		pm_qos_update_target(
-			pm_qos_array[req->pm_qos_class]->constraints,
-			&req->node, PM_QOS_UPDATE_REQ, new_value);
-}
-
 /**
  * pm_qos_add_request - inserts new qos request into the list
  * @req: pointer to a preallocated handle
- * @pm_qos_class: identifies which list of qos request to use
+ * @pm_qos_class: Ignored.
  * @value: defines the qos request
  *
- * This function inserts a new entry in the pm_qos_class list of requested qos
- * performance characteristics.  It recomputes the aggregate QoS expectations
- * for the pm_qos_class of parameters and initializes the pm_qos_request
+ * This function inserts a new entry in the PM_QOS_CPU_DMA_LATENCY list of
+ * requested QoS performance characteristics.  It recomputes the aggregate QoS
+ * expectations for the PM_QOS_CPU_DMA_LATENCY list and initializes the @req
  * handle.  Caller needs to save this handle for later use in updates and
  * removal.
  */
-
 void pm_qos_add_request(struct pm_qos_request *req,
 			int pm_qos_class, s32 value)
 {
@@ -298,10 +269,11 @@ void pm_qos_add_request(struct pm_qos_request *req,
 		WARN(1, KERN_ERR "pm_qos_add_request() called for already added request\n");
 		return;
 	}
-	req->pm_qos_class = pm_qos_class;
-	trace_pm_qos_add_request(pm_qos_class, value);
-	pm_qos_update_target(pm_qos_array[pm_qos_class]->constraints,
-			     &req->node, PM_QOS_ADD_REQ, value);
+
+	trace_pm_qos_add_request(PM_QOS_CPU_DMA_LATENCY, value);
+
+	req->qos = &cpu_dma_constraints;
+	pm_qos_update_target(req->qos, &req->node, PM_QOS_ADD_REQ, value);
 }
 EXPORT_SYMBOL_GPL(pm_qos_add_request);
 
@@ -310,13 +282,12 @@ EXPORT_SYMBOL_GPL(pm_qos_add_request);
  * @req : handle to list element holding a pm_qos request to use
  * @value: defines the qos request
  *
- * Updates an existing qos request for the pm_qos_class of parameters along
- * with updating the target pm_qos_class value.
+ * Updates an existing qos request for the PM_QOS_CPU_DMA_LATENCY list along
+ * with updating the target PM_QOS_CPU_DMA_LATENCY value.
  *
  * Attempts are made to make this code callable on hot code paths.
  */
-void pm_qos_update_request(struct pm_qos_request *req,
-			   s32 new_value)
+void pm_qos_update_request(struct pm_qos_request *req, s32 new_value)
 {
 	if (!req) /*guard against callers passing in null */
 		return;
@@ -326,7 +297,12 @@ void pm_qos_update_request(struct pm_qos_request *req,
 		return;
 	}
 
-	__pm_qos_update_request(req, new_value);
+	trace_pm_qos_update_request(PM_QOS_CPU_DMA_LATENCY, new_value);
+
+	if (new_value == req->node.prio)
+		return;
+
+	pm_qos_update_target(req->qos, &req->node, PM_QOS_UPDATE_REQ, new_value);
 }
 EXPORT_SYMBOL_GPL(pm_qos_update_request);
 
@@ -335,7 +311,7 @@ EXPORT_SYMBOL_GPL(pm_qos_update_request);
  * @req: handle to request list element
  *
  * Will remove pm qos request from the list of constraints and
- * recompute the current target value for the pm_qos_class.  Call this
+ * recompute the current target value for PM_QOS_CPU_DMA_LATENCY.  Call this
  * on slow code paths.
  */
 void pm_qos_remove_request(struct pm_qos_request *req)
@@ -349,9 +325,9 @@ void pm_qos_remove_request(struct pm_qos_request *req)
 		return;
 	}
 
-	trace_pm_qos_remove_request(req->pm_qos_class, PM_QOS_DEFAULT_VALUE);
-	pm_qos_update_target(pm_qos_array[req->pm_qos_class]->constraints,
-			     &req->node, PM_QOS_REMOVE_REQ,
+	trace_pm_qos_remove_request(PM_QOS_CPU_DMA_LATENCY, PM_QOS_DEFAULT_VALUE);
+
+	pm_qos_update_target(req->qos, &req->node, PM_QOS_REMOVE_REQ,
 			     PM_QOS_DEFAULT_VALUE);
 	memset(req, 0, sizeof(*req));
 }
@@ -359,41 +335,31 @@ EXPORT_SYMBOL_GPL(pm_qos_remove_request);
 
 /**
  * pm_qos_add_notifier - sets notification entry for changes to target value
- * @pm_qos_class: identifies which qos target changes should be notified.
+ * @pm_qos_class: Ignored.
  * @notifier: notifier block managed by caller.
  *
  * will register the notifier into a notification chain that gets called
- * upon changes to the pm_qos_class target value.
+ * upon changes to the PM_QOS_CPU_DMA_LATENCY target value.
  */
 int pm_qos_add_notifier(int pm_qos_class, struct notifier_block *notifier)
 {
-	int retval;
-
-	retval = blocking_notifier_chain_register(
-			pm_qos_array[pm_qos_class]->constraints->notifiers,
-			notifier);
-
-	return retval;
+	return blocking_notifier_chain_register(cpu_dma_constraints.notifiers,
+						notifier);
 }
 EXPORT_SYMBOL_GPL(pm_qos_add_notifier);
 
 /**
  * pm_qos_remove_notifier - deletes notification entry from chain.
- * @pm_qos_class: identifies which qos target changes are notified.
+ * @pm_qos_class: Ignored.
  * @notifier: notifier block to be removed.
  *
  * will remove the notifier from the notification chain that gets called
- * upon changes to the pm_qos_class target value.
+ * upon changes to the PM_QOS_CPU_DMA_LATENCY target value.
  */
 int pm_qos_remove_notifier(int pm_qos_class, struct notifier_block *notifier)
 {
-	int retval;
-
-	retval = blocking_notifier_chain_unregister(
-			pm_qos_array[pm_qos_class]->constraints->notifiers,
-			notifier);
-
-	return retval;
+	return blocking_notifier_chain_unregister(cpu_dma_constraints.notifiers,
+						  notifier);
 }
 EXPORT_SYMBOL_GPL(pm_qos_remove_notifier);
 
@@ -436,7 +402,7 @@ static ssize_t pm_qos_power_read(struct file *filp, char __user *buf,
 		return -EINVAL;
 
 	spin_lock_irqsave(&pm_qos_lock, flags);
-	value = pm_qos_get_value(pm_qos_array[req->pm_qos_class]->constraints);
+	value = pm_qos_get_value(&cpu_dma_constraints);
 	spin_unlock_irqrestore(&pm_qos_lock, flags);
 
 	return simple_read_from_buffer(buf, count, f_pos, &value, sizeof(s32));
@@ -471,25 +437,20 @@ static const struct file_operations pm_qos_power_fops = {
 	.llseek = noop_llseek,
 };
 
-static int register_pm_qos_misc(struct pm_qos_object *qos)
-{
-	qos->pm_qos_power_miscdev.minor = MISC_DYNAMIC_MINOR;
-	qos->pm_qos_power_miscdev.name = qos->name;
-	qos->pm_qos_power_miscdev.fops = &pm_qos_power_fops;
-
-	return misc_register(&qos->pm_qos_power_miscdev);
-}
+static struct miscdevice cpu_latency_qos_miscdev = {
+	.minor = MISC_DYNAMIC_MINOR,
+	.name = "cpu_dma_latency",
+	.fops = &pm_qos_power_fops,
+};
 
 static int __init pm_qos_power_init(void)
 {
 	int ret;
 
-	BUILD_BUG_ON(ARRAY_SIZE(pm_qos_array) != PM_QOS_NUM_CLASSES);
-
-	ret = register_pm_qos_misc(pm_qos_array[PM_QOS_CPU_DMA_LATENCY]);
+	ret = misc_register(&cpu_latency_qos_miscdev);
 	if (ret < 0)
 		pr_err("%s: %s setup failed\n", __func__,
-		       pm_qos_array[PM_QOS_CPU_DMA_LATENCY]->name);
+		       cpu_latency_qos_miscdev.name);
 
 	return ret;
 }
-- 
2.16.4






^ permalink raw reply	[flat|nested] 74+ messages in thread

* [PATCH 09/28] PM: QoS: Drop PM_QOS_CPU_DMA_LATENCY notifier chain
  2020-02-11 22:51 [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface Rafael J. Wysocki
                   ` (7 preceding siblings ...)
  2020-02-11 23:01 ` [PATCH 08/28] PM: QoS: Redefine struct pm_qos_request and drop struct pm_qos_object Rafael J. Wysocki
@ 2020-02-11 23:02 ` Rafael J. Wysocki
  2020-02-11 23:04 ` [PATCH 10/28] PM: QoS: Rename things related to the CPU latency QoS Rafael J. Wysocki
                   ` (22 subsequent siblings)
  31 siblings, 0 replies; 74+ messages in thread
From: Rafael J. Wysocki @ 2020-02-11 23:02 UTC (permalink / raw)
  To: Linux PM; +Cc: LKML, Amit Kucheria, Daniel Lezcano

From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>

Notice that pm_qos_remove_notifier() is not used at all and the only
caller of pm_qos_add_notifier() is the cpuidle core, which only needs
the PM_QOS_CPU_DMA_LATENCY notifier to invoke wake_up_all_idle_cpus()
upon changes of the PM_QOS_CPU_DMA_LATENCY target value.

First, to ensure that wake_up_all_idle_cpus() will be called
whenever the PM_QOS_CPU_DMA_LATENCY target value changes, modify the
pm_qos_add/update/remove_request() family of functions to check if
the effective PM_QOS_CPU_DMA_LATENCY constraint has changed
and call wake_up_all_idle_cpus() directly in that case.

Next, drop the PM_QOS_CPU_DMA_LATENCY notifier from cpuidle as it is
not necessary any more.

Finally, drop both pm_qos_add_notifier() and pm_qos_remove_notifier(),
as they have no callers now, along with cpu_dma_lat_notifier which is
only used by them.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---
 drivers/cpuidle/cpuidle.c | 40 +---------------------------------------
 include/linux/pm_qos.h    |  2 --
 kernel/power/qos.c        | 47 +++++++++++------------------------------------
 3 files changed, 12 insertions(+), 77 deletions(-)

diff --git a/drivers/cpuidle/cpuidle.c b/drivers/cpuidle/cpuidle.c
index de81298051b3..c149d9e20dfd 100644
--- a/drivers/cpuidle/cpuidle.c
+++ b/drivers/cpuidle/cpuidle.c
@@ -736,53 +736,15 @@ int cpuidle_register(struct cpuidle_driver *drv,
 }
 EXPORT_SYMBOL_GPL(cpuidle_register);
 
-#ifdef CONFIG_SMP
-
-/*
- * This function gets called when a part of the kernel has a new latency
- * requirement.  This means we need to get all processors out of their C-state,
- * and then recalculate a new suitable C-state. Just do a cross-cpu IPI; that
- * wakes them all right up.
- */
-static int cpuidle_latency_notify(struct notifier_block *b,
-		unsigned long l, void *v)
-{
-	wake_up_all_idle_cpus();
-	return NOTIFY_OK;
-}
-
-static struct notifier_block cpuidle_latency_notifier = {
-	.notifier_call = cpuidle_latency_notify,
-};
-
-static inline void latency_notifier_init(struct notifier_block *n)
-{
-	pm_qos_add_notifier(PM_QOS_CPU_DMA_LATENCY, n);
-}
-
-#else /* CONFIG_SMP */
-
-#define latency_notifier_init(x) do { } while (0)
-
-#endif /* CONFIG_SMP */
-
 /**
  * cpuidle_init - core initializer
  */
 static int __init cpuidle_init(void)
 {
-	int ret;
-
 	if (cpuidle_disabled())
 		return -ENODEV;
 
-	ret = cpuidle_add_interface(cpu_subsys.dev_root);
-	if (ret)
-		return ret;
-
-	latency_notifier_init(&cpuidle_latency_notifier);
-
-	return 0;
+	return cpuidle_add_interface(cpu_subsys.dev_root);
 }
 
 module_param(off, int, 0444);
diff --git a/include/linux/pm_qos.h b/include/linux/pm_qos.h
index bef110aa80cc..cb57e5918a25 100644
--- a/include/linux/pm_qos.h
+++ b/include/linux/pm_qos.h
@@ -149,8 +149,6 @@ void pm_qos_update_request(struct pm_qos_request *req,
 void pm_qos_remove_request(struct pm_qos_request *req);
 
 int pm_qos_request(int pm_qos_class);
-int pm_qos_add_notifier(int pm_qos_class, struct notifier_block *notifier);
-int pm_qos_remove_notifier(int pm_qos_class, struct notifier_block *notifier);
 int pm_qos_request_active(struct pm_qos_request *req);
 s32 pm_qos_read_value(struct pm_qos_constraints *c);
 
diff --git a/kernel/power/qos.c b/kernel/power/qos.c
index 952c5f55e23c..201b43bc6457 100644
--- a/kernel/power/qos.c
+++ b/kernel/power/qos.c
@@ -56,14 +56,12 @@
  */
 static DEFINE_SPINLOCK(pm_qos_lock);
 
-static BLOCKING_NOTIFIER_HEAD(cpu_dma_lat_notifier);
 static struct pm_qos_constraints cpu_dma_constraints = {
 	.list = PLIST_HEAD_INIT(cpu_dma_constraints.list),
 	.target_value = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE,
 	.default_value = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE,
 	.no_constraint_value = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE,
 	.type = PM_QOS_MIN,
-	.notifiers = &cpu_dma_lat_notifier,
 };
 
 /**
@@ -247,6 +245,14 @@ int pm_qos_request_active(struct pm_qos_request *req)
 }
 EXPORT_SYMBOL_GPL(pm_qos_request_active);
 
+static void cpu_latency_qos_update(struct pm_qos_request *req,
+				   enum pm_qos_req_action action, s32 value)
+{
+	int ret = pm_qos_update_target(req->qos, &req->node, action, value);
+	if (ret > 0)
+		wake_up_all_idle_cpus();
+}
+
 /**
  * pm_qos_add_request - inserts new qos request into the list
  * @req: pointer to a preallocated handle
@@ -273,7 +279,7 @@ void pm_qos_add_request(struct pm_qos_request *req,
 	trace_pm_qos_add_request(PM_QOS_CPU_DMA_LATENCY, value);
 
 	req->qos = &cpu_dma_constraints;
-	pm_qos_update_target(req->qos, &req->node, PM_QOS_ADD_REQ, value);
+	cpu_latency_qos_update(req, PM_QOS_ADD_REQ, value);
 }
 EXPORT_SYMBOL_GPL(pm_qos_add_request);
 
@@ -302,7 +308,7 @@ void pm_qos_update_request(struct pm_qos_request *req, s32 new_value)
 	if (new_value == req->node.prio)
 		return;
 
-	pm_qos_update_target(req->qos, &req->node, PM_QOS_UPDATE_REQ, new_value);
+	cpu_latency_qos_update(req, PM_QOS_UPDATE_REQ, new_value);
 }
 EXPORT_SYMBOL_GPL(pm_qos_update_request);
 
@@ -327,42 +333,11 @@ void pm_qos_remove_request(struct pm_qos_request *req)
 
 	trace_pm_qos_remove_request(PM_QOS_CPU_DMA_LATENCY, PM_QOS_DEFAULT_VALUE);
 
-	pm_qos_update_target(req->qos, &req->node, PM_QOS_REMOVE_REQ,
-			     PM_QOS_DEFAULT_VALUE);
+	cpu_latency_qos_update(req, PM_QOS_REMOVE_REQ, PM_QOS_DEFAULT_VALUE);
 	memset(req, 0, sizeof(*req));
 }
 EXPORT_SYMBOL_GPL(pm_qos_remove_request);
 
-/**
- * pm_qos_add_notifier - sets notification entry for changes to target value
- * @pm_qos_class: Ignored.
- * @notifier: notifier block managed by caller.
- *
- * will register the notifier into a notification chain that gets called
- * upon changes to the PM_QOS_CPU_DMA_LATENCY target value.
- */
-int pm_qos_add_notifier(int pm_qos_class, struct notifier_block *notifier)
-{
-	return blocking_notifier_chain_register(cpu_dma_constraints.notifiers,
-						notifier);
-}
-EXPORT_SYMBOL_GPL(pm_qos_add_notifier);
-
-/**
- * pm_qos_remove_notifier - deletes notification entry from chain.
- * @pm_qos_class: Ignored.
- * @notifier: notifier block to be removed.
- *
- * will remove the notifier from the notification chain that gets called
- * upon changes to the PM_QOS_CPU_DMA_LATENCY target value.
- */
-int pm_qos_remove_notifier(int pm_qos_class, struct notifier_block *notifier)
-{
-	return blocking_notifier_chain_unregister(cpu_dma_constraints.notifiers,
-						  notifier);
-}
-EXPORT_SYMBOL_GPL(pm_qos_remove_notifier);
-
 /* User space interface to global PM QoS via misc device. */
 
 static int pm_qos_power_open(struct inode *inode, struct file *filp)
-- 
2.16.4






^ permalink raw reply	[flat|nested] 74+ messages in thread

* [PATCH 10/28] PM: QoS: Rename things related to the CPU latency QoS
  2020-02-11 22:51 [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface Rafael J. Wysocki
                   ` (8 preceding siblings ...)
  2020-02-11 23:02 ` [PATCH 09/28] PM: QoS: Drop PM_QOS_CPU_DMA_LATENCY notifier chain Rafael J. Wysocki
@ 2020-02-11 23:04 ` Rafael J. Wysocki
  2020-02-12 10:34   ` Rafael J. Wysocki
  2020-02-12 19:13   ` Greg Kroah-Hartman
  2020-02-11 23:06 ` [PATCH 11/28] PM: QoS: Simplify definitions of CPU latency QoS trace events Rafael J. Wysocki
                   ` (21 subsequent siblings)
  31 siblings, 2 replies; 74+ messages in thread
From: Rafael J. Wysocki @ 2020-02-11 23:04 UTC (permalink / raw)
  To: Linux PM; +Cc: LKML, Amit Kucheria, Greg Kroah-Hartman, linux-serial

From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>

First, rename PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE to
PM_QOS_CPU_LATENCY_DEFAULT_VALUE and update all of the code
referring to it accordingly.

Next, rename cpu_dma_constraints to cpu_latency_constraints, move
the definition of it closer to the functions referring to it and
update all of them accordingly.  [While at it, add a comment to mark
the start of the code related to the CPU latency QoS.]

Finally, rename the pm_qos_power_*() family of functions and
pm_qos_power_fops to cpu_latency_qos_*() and cpu_latency_qos_fops,
respectively, and update the definition of cpu_latency_qos_miscdev.
[While at it, update the miscdev interface code start comment.]

No intentional functional impact.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---
 drivers/tty/serial/8250/8250_omap.c |  6 ++--
 drivers/tty/serial/omap-serial.c    |  6 ++--
 include/linux/pm_qos.h              |  2 +-
 kernel/power/qos.c                  | 56 +++++++++++++++++++------------------
 4 files changed, 36 insertions(+), 34 deletions(-)

diff --git a/drivers/tty/serial/8250/8250_omap.c b/drivers/tty/serial/8250/8250_omap.c
index 6f343ca08440..19f8d2f9e7ba 100644
--- a/drivers/tty/serial/8250/8250_omap.c
+++ b/drivers/tty/serial/8250/8250_omap.c
@@ -1222,8 +1222,8 @@ static int omap8250_probe(struct platform_device *pdev)
 			 DEFAULT_CLK_SPEED);
 	}
 
-	priv->latency = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE;
-	priv->calc_latency = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE;
+	priv->latency = PM_QOS_CPU_LATENCY_DEFAULT_VALUE;
+	priv->calc_latency = PM_QOS_CPU_LATENCY_DEFAULT_VALUE;
 	pm_qos_add_request(&priv->pm_qos_request, PM_QOS_CPU_DMA_LATENCY,
 			   priv->latency);
 	INIT_WORK(&priv->qos_work, omap8250_uart_qos_work);
@@ -1445,7 +1445,7 @@ static int omap8250_runtime_suspend(struct device *dev)
 	if (up->dma && up->dma->rxchan)
 		omap_8250_rx_dma_flush(up);
 
-	priv->latency = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE;
+	priv->latency = PM_QOS_CPU_LATENCY_DEFAULT_VALUE;
 	schedule_work(&priv->qos_work);
 
 	return 0;
diff --git a/drivers/tty/serial/omap-serial.c b/drivers/tty/serial/omap-serial.c
index 48017cec7f2f..ce2558767eee 100644
--- a/drivers/tty/serial/omap-serial.c
+++ b/drivers/tty/serial/omap-serial.c
@@ -1722,8 +1722,8 @@ static int serial_omap_probe(struct platform_device *pdev)
 			 DEFAULT_CLK_SPEED);
 	}
 
-	up->latency = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE;
-	up->calc_latency = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE;
+	up->latency = PM_QOS_CPU_LATENCY_DEFAULT_VALUE;
+	up->calc_latency = PM_QOS_CPU_LATENCY_DEFAULT_VALUE;
 	pm_qos_add_request(&up->pm_qos_request,
 		PM_QOS_CPU_DMA_LATENCY, up->latency);
 	INIT_WORK(&up->qos_work, serial_omap_uart_qos_work);
@@ -1869,7 +1869,7 @@ static int serial_omap_runtime_suspend(struct device *dev)
 
 	serial_omap_enable_wakeup(up, true);
 
-	up->latency = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE;
+	up->latency = PM_QOS_CPU_LATENCY_DEFAULT_VALUE;
 	schedule_work(&up->qos_work);
 
 	return 0;
diff --git a/include/linux/pm_qos.h b/include/linux/pm_qos.h
index cb57e5918a25..a3e0bfc6c470 100644
--- a/include/linux/pm_qos.h
+++ b/include/linux/pm_qos.h
@@ -28,7 +28,7 @@ enum pm_qos_flags_status {
 #define PM_QOS_LATENCY_ANY	S32_MAX
 #define PM_QOS_LATENCY_ANY_NS	((s64)PM_QOS_LATENCY_ANY * NSEC_PER_USEC)
 
-#define PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE	(2000 * USEC_PER_SEC)
+#define PM_QOS_CPU_LATENCY_DEFAULT_VALUE	(2000 * USEC_PER_SEC)
 #define PM_QOS_RESUME_LATENCY_DEFAULT_VALUE	PM_QOS_LATENCY_ANY
 #define PM_QOS_RESUME_LATENCY_NO_CONSTRAINT	PM_QOS_LATENCY_ANY
 #define PM_QOS_RESUME_LATENCY_NO_CONSTRAINT_NS	PM_QOS_LATENCY_ANY_NS
diff --git a/kernel/power/qos.c b/kernel/power/qos.c
index 201b43bc6457..a6bf53e9db17 100644
--- a/kernel/power/qos.c
+++ b/kernel/power/qos.c
@@ -56,14 +56,6 @@
  */
 static DEFINE_SPINLOCK(pm_qos_lock);
 
-static struct pm_qos_constraints cpu_dma_constraints = {
-	.list = PLIST_HEAD_INIT(cpu_dma_constraints.list),
-	.target_value = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE,
-	.default_value = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE,
-	.no_constraint_value = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE,
-	.type = PM_QOS_MIN,
-};
-
 /**
  * pm_qos_read_value - Return the current effective constraint value.
  * @c: List of PM QoS constraint requests.
@@ -227,6 +219,16 @@ bool pm_qos_update_flags(struct pm_qos_flags *pqf,
 	return prev_value != curr_value;
 }
 
+/* Definitions related to the CPU latency QoS. */
+
+static struct pm_qos_constraints cpu_latency_constraints = {
+	.list = PLIST_HEAD_INIT(cpu_latency_constraints.list),
+	.target_value = PM_QOS_CPU_LATENCY_DEFAULT_VALUE,
+	.default_value = PM_QOS_CPU_LATENCY_DEFAULT_VALUE,
+	.no_constraint_value = PM_QOS_CPU_LATENCY_DEFAULT_VALUE,
+	.type = PM_QOS_MIN,
+};
+
 /**
  * pm_qos_request - returns current system wide qos expectation
  * @pm_qos_class: Ignored.
@@ -235,13 +237,13 @@ bool pm_qos_update_flags(struct pm_qos_flags *pqf,
  */
 int pm_qos_request(int pm_qos_class)
 {
-	return pm_qos_read_value(&cpu_dma_constraints);
+	return pm_qos_read_value(&cpu_latency_constraints);
 }
 EXPORT_SYMBOL_GPL(pm_qos_request);
 
 int pm_qos_request_active(struct pm_qos_request *req)
 {
-	return req->qos == &cpu_dma_constraints;
+	return req->qos == &cpu_latency_constraints;
 }
 EXPORT_SYMBOL_GPL(pm_qos_request_active);
 
@@ -278,7 +280,7 @@ void pm_qos_add_request(struct pm_qos_request *req,
 
 	trace_pm_qos_add_request(PM_QOS_CPU_DMA_LATENCY, value);
 
-	req->qos = &cpu_dma_constraints;
+	req->qos = &cpu_latency_constraints;
 	cpu_latency_qos_update(req, PM_QOS_ADD_REQ, value);
 }
 EXPORT_SYMBOL_GPL(pm_qos_add_request);
@@ -338,9 +340,9 @@ void pm_qos_remove_request(struct pm_qos_request *req)
 }
 EXPORT_SYMBOL_GPL(pm_qos_remove_request);
 
-/* User space interface to global PM QoS via misc device. */
+/* User space interface to the CPU latency QoS via misc device. */
 
-static int pm_qos_power_open(struct inode *inode, struct file *filp)
+static int cpu_latency_qos_open(struct inode *inode, struct file *filp)
 {
 	struct pm_qos_request *req;
 
@@ -354,7 +356,7 @@ static int pm_qos_power_open(struct inode *inode, struct file *filp)
 	return 0;
 }
 
-static int pm_qos_power_release(struct inode *inode, struct file *filp)
+static int cpu_latency_qos_release(struct inode *inode, struct file *filp)
 {
 	struct pm_qos_request *req = filp->private_data;
 
@@ -366,8 +368,8 @@ static int pm_qos_power_release(struct inode *inode, struct file *filp)
 	return 0;
 }
 
-static ssize_t pm_qos_power_read(struct file *filp, char __user *buf,
-				 size_t count, loff_t *f_pos)
+static ssize_t cpu_latency_qos_read(struct file *filp, char __user *buf,
+				    size_t count, loff_t *f_pos)
 {
 	struct pm_qos_request *req = filp->private_data;
 	unsigned long flags;
@@ -377,14 +379,14 @@ static ssize_t pm_qos_power_read(struct file *filp, char __user *buf,
 		return -EINVAL;
 
 	spin_lock_irqsave(&pm_qos_lock, flags);
-	value = pm_qos_get_value(&cpu_dma_constraints);
+	value = pm_qos_get_value(&cpu_latency_constraints);
 	spin_unlock_irqrestore(&pm_qos_lock, flags);
 
 	return simple_read_from_buffer(buf, count, f_pos, &value, sizeof(s32));
 }
 
-static ssize_t pm_qos_power_write(struct file *filp, const char __user *buf,
-				  size_t count, loff_t *f_pos)
+static ssize_t cpu_latency_qos_write(struct file *filp, const char __user *buf,
+				     size_t count, loff_t *f_pos)
 {
 	s32 value;
 
@@ -404,21 +406,21 @@ static ssize_t pm_qos_power_write(struct file *filp, const char __user *buf,
 	return count;
 }
 
-static const struct file_operations pm_qos_power_fops = {
-	.write = pm_qos_power_write,
-	.read = pm_qos_power_read,
-	.open = pm_qos_power_open,
-	.release = pm_qos_power_release,
+static const struct file_operations cpu_latency_qos_fops = {
+	.write = cpu_latency_qos_write,
+	.read = cpu_latency_qos_read,
+	.open = cpu_latency_qos_open,
+	.release = cpu_latency_qos_release,
 	.llseek = noop_llseek,
 };
 
 static struct miscdevice cpu_latency_qos_miscdev = {
 	.minor = MISC_DYNAMIC_MINOR,
 	.name = "cpu_dma_latency",
-	.fops = &pm_qos_power_fops,
+	.fops = &cpu_latency_qos_fops,
 };
 
-static int __init pm_qos_power_init(void)
+static int __init cpu_latency_qos_init(void)
 {
 	int ret;
 
@@ -429,7 +431,7 @@ static int __init pm_qos_power_init(void)
 
 	return ret;
 }
-late_initcall(pm_qos_power_init);
+late_initcall(cpu_latency_qos_init);
 
 /* Definitions related to the frequency QoS below. */
 
-- 
2.16.4






^ permalink raw reply	[flat|nested] 74+ messages in thread

* [PATCH 11/28] PM: QoS: Simplify definitions of CPU latency QoS trace events
  2020-02-11 22:51 [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface Rafael J. Wysocki
                   ` (9 preceding siblings ...)
  2020-02-11 23:04 ` [PATCH 10/28] PM: QoS: Rename things related to the CPU latency QoS Rafael J. Wysocki
@ 2020-02-11 23:06 ` Rafael J. Wysocki
  2020-02-11 23:07 ` [PATCH 12/28] PM: QoS: Adjust pm_qos_request() signature and reorder pm_qos.h Rafael J. Wysocki
                   ` (20 subsequent siblings)
  31 siblings, 0 replies; 74+ messages in thread
From: Rafael J. Wysocki @ 2020-02-11 23:06 UTC (permalink / raw)
  To: Linux PM; +Cc: LKML, Amit Kucheria, Steven Rostedt

From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>

Modify the definitions of the CPU latency QoS trace events to take
one argument (since PM_QOS_CPU_DMA_LATENCY is always passed as the
pm_qos_class argument to them) and update their documentation
accordingly (while at it, make it explicitly mention CPU latency QoS
and relocate it after the device PM QoS trace events documentation).

The names and output format of the trace events do not change to
preserve user space compatibility.

No intentional functional impact.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---
 Documentation/trace/events-power.rst | 19 ++++++++++---------
 include/trace/events/power.h         | 35 +++++++++++++++++------------------
 kernel/power/qos.c                   | 16 ++++++++--------
 3 files changed, 35 insertions(+), 35 deletions(-)

diff --git a/Documentation/trace/events-power.rst b/Documentation/trace/events-power.rst
index eec7453a168e..f45bf11fa88d 100644
--- a/Documentation/trace/events-power.rst
+++ b/Documentation/trace/events-power.rst
@@ -73,14 +73,6 @@ The second parameter is the power domain target state.
 ================
 The PM QoS events are used for QoS add/update/remove request and for
 target/flags update.
-::
-
-  pm_qos_add_request                 "pm_qos_class=%s value=%d"
-  pm_qos_update_request              "pm_qos_class=%s value=%d"
-  pm_qos_remove_request              "pm_qos_class=%s value=%d"
-
-The first parameter gives the QoS class name (e.g. "CPU_DMA_LATENCY").
-The second parameter is value to be added/updated/removed.
 ::
 
   pm_qos_update_target               "action=%s prev_value=%d curr_value=%d"
@@ -90,7 +82,7 @@ The first parameter gives the QoS action name (e.g. "ADD_REQ").
 The second parameter is the previous QoS value.
 The third parameter is the current QoS value to update.
 
-And, there are also events used for device PM QoS add/update/remove request.
+There are also events used for device PM QoS add/update/remove request.
 ::
 
   dev_pm_qos_add_request             "device=%s type=%s new_value=%d"
@@ -101,3 +93,12 @@ The first parameter gives the device name which tries to add/update/remove
 QoS requests.
 The second parameter gives the request type (e.g. "DEV_PM_QOS_RESUME_LATENCY").
 The third parameter is value to be added/updated/removed.
+
+And, there are events used for CPU latency QoS add/update/remove request.
+::
+
+  pm_qos_add_request        "value=%d"
+  pm_qos_update_request     "value=%d"
+  pm_qos_remove_request     "value=%d"
+
+The parameter is the value to be added/updated/removed.
diff --git a/include/trace/events/power.h b/include/trace/events/power.h
index ecf39daabf16..af5018aa9517 100644
--- a/include/trace/events/power.h
+++ b/include/trace/events/power.h
@@ -359,51 +359,50 @@ DEFINE_EVENT(power_domain, power_domain_target,
 );
 
 /*
- * The pm qos events are used for pm qos update
+ * CPU latency QoS events used for global CPU latency QoS list updates
  */
-DECLARE_EVENT_CLASS(pm_qos_request,
+DECLARE_EVENT_CLASS(cpu_latency_qos_request,
 
-	TP_PROTO(int pm_qos_class, s32 value),
+	TP_PROTO(s32 value),
 
-	TP_ARGS(pm_qos_class, value),
+	TP_ARGS(value),
 
 	TP_STRUCT__entry(
-		__field( int,                    pm_qos_class   )
 		__field( s32,                    value          )
 	),
 
 	TP_fast_assign(
-		__entry->pm_qos_class = pm_qos_class;
 		__entry->value = value;
 	),
 
-	TP_printk("pm_qos_class=%s value=%d",
-		  __print_symbolic(__entry->pm_qos_class,
-			{ PM_QOS_CPU_DMA_LATENCY,	"CPU_DMA_LATENCY" }),
+	TP_printk("CPU_DMA_LATENCY value=%d",
 		  __entry->value)
 );
 
-DEFINE_EVENT(pm_qos_request, pm_qos_add_request,
+DEFINE_EVENT(cpu_latency_qos_request, pm_qos_add_request,
 
-	TP_PROTO(int pm_qos_class, s32 value),
+	TP_PROTO(s32 value),
 
-	TP_ARGS(pm_qos_class, value)
+	TP_ARGS(value)
 );
 
-DEFINE_EVENT(pm_qos_request, pm_qos_update_request,
+DEFINE_EVENT(cpu_latency_qos_request, pm_qos_update_request,
 
-	TP_PROTO(int pm_qos_class, s32 value),
+	TP_PROTO(s32 value),
 
-	TP_ARGS(pm_qos_class, value)
+	TP_ARGS(value)
 );
 
-DEFINE_EVENT(pm_qos_request, pm_qos_remove_request,
+DEFINE_EVENT(cpu_latency_qos_request, pm_qos_remove_request,
 
-	TP_PROTO(int pm_qos_class, s32 value),
+	TP_PROTO(s32 value),
 
-	TP_ARGS(pm_qos_class, value)
+	TP_ARGS(value)
 );
 
+/*
+ * General PM QoS events used for updates of PM QoS request lists
+ */
 DECLARE_EVENT_CLASS(pm_qos_update,
 
 	TP_PROTO(enum pm_qos_req_action action, int prev_value, int curr_value),
diff --git a/kernel/power/qos.c b/kernel/power/qos.c
index a6bf53e9db17..afac7010e0f2 100644
--- a/kernel/power/qos.c
+++ b/kernel/power/qos.c
@@ -247,8 +247,8 @@ int pm_qos_request_active(struct pm_qos_request *req)
 }
 EXPORT_SYMBOL_GPL(pm_qos_request_active);
 
-static void cpu_latency_qos_update(struct pm_qos_request *req,
-				   enum pm_qos_req_action action, s32 value)
+static void cpu_latency_qos_apply(struct pm_qos_request *req,
+				  enum pm_qos_req_action action, s32 value)
 {
 	int ret = pm_qos_update_target(req->qos, &req->node, action, value);
 	if (ret > 0)
@@ -278,10 +278,10 @@ void pm_qos_add_request(struct pm_qos_request *req,
 		return;
 	}
 
-	trace_pm_qos_add_request(PM_QOS_CPU_DMA_LATENCY, value);
+	trace_pm_qos_add_request(value);
 
 	req->qos = &cpu_latency_constraints;
-	cpu_latency_qos_update(req, PM_QOS_ADD_REQ, value);
+	cpu_latency_qos_apply(req, PM_QOS_ADD_REQ, value);
 }
 EXPORT_SYMBOL_GPL(pm_qos_add_request);
 
@@ -305,12 +305,12 @@ void pm_qos_update_request(struct pm_qos_request *req, s32 new_value)
 		return;
 	}
 
-	trace_pm_qos_update_request(PM_QOS_CPU_DMA_LATENCY, new_value);
+	trace_pm_qos_update_request(new_value);
 
 	if (new_value == req->node.prio)
 		return;
 
-	cpu_latency_qos_update(req, PM_QOS_UPDATE_REQ, new_value);
+	cpu_latency_qos_apply(req, PM_QOS_UPDATE_REQ, new_value);
 }
 EXPORT_SYMBOL_GPL(pm_qos_update_request);
 
@@ -333,9 +333,9 @@ void pm_qos_remove_request(struct pm_qos_request *req)
 		return;
 	}
 
-	trace_pm_qos_remove_request(PM_QOS_CPU_DMA_LATENCY, PM_QOS_DEFAULT_VALUE);
+	trace_pm_qos_remove_request(PM_QOS_DEFAULT_VALUE);
 
-	cpu_latency_qos_update(req, PM_QOS_REMOVE_REQ, PM_QOS_DEFAULT_VALUE);
+	cpu_latency_qos_apply(req, PM_QOS_REMOVE_REQ, PM_QOS_DEFAULT_VALUE);
 	memset(req, 0, sizeof(*req));
 }
 EXPORT_SYMBOL_GPL(pm_qos_remove_request);
-- 
2.16.4






^ permalink raw reply	[flat|nested] 74+ messages in thread

* [PATCH 12/28] PM: QoS: Adjust pm_qos_request() signature and reorder pm_qos.h
  2020-02-11 22:51 [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface Rafael J. Wysocki
                   ` (10 preceding siblings ...)
  2020-02-11 23:06 ` [PATCH 11/28] PM: QoS: Simplify definitions of CPU latency QoS trace events Rafael J. Wysocki
@ 2020-02-11 23:07 ` Rafael J. Wysocki
  2020-02-11 23:07 ` [PATCH 13/28] PM: QoS: Add CPU latency QoS API wrappers Rafael J. Wysocki
                   ` (19 subsequent siblings)
  31 siblings, 0 replies; 74+ messages in thread
From: Rafael J. Wysocki @ 2020-02-11 23:07 UTC (permalink / raw)
  To: Linux PM; +Cc: LKML, Amit Kucheria

From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>

Change the return type of pm_qos_request() to match that of
pm_qos_read_value(), which it calls internally, and stop exporting
it to modules (because its only caller, cpuidle, is not modular).

Also move the pm_qos_read_value() declaration away from the CPU
latency QoS API function declarations in pm_qos.h (because it
technically does not belong to that API).

No intentional functional impact.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---
 include/linux/pm_qos.h | 6 +++---
 kernel/power/qos.c     | 3 +--
 2 files changed, 4 insertions(+), 5 deletions(-)

diff --git a/include/linux/pm_qos.h b/include/linux/pm_qos.h
index a3e0bfc6c470..3c4bee29ecda 100644
--- a/include/linux/pm_qos.h
+++ b/include/linux/pm_qos.h
@@ -137,20 +137,20 @@ static inline int dev_pm_qos_request_active(struct dev_pm_qos_request *req)
 	return req->dev != NULL;
 }
 
+s32 pm_qos_read_value(struct pm_qos_constraints *c);
 int pm_qos_update_target(struct pm_qos_constraints *c, struct plist_node *node,
 			 enum pm_qos_req_action action, int value);
 bool pm_qos_update_flags(struct pm_qos_flags *pqf,
 			 struct pm_qos_flags_request *req,
 			 enum pm_qos_req_action action, s32 val);
+
 void pm_qos_add_request(struct pm_qos_request *req, int pm_qos_class,
 			s32 value);
 void pm_qos_update_request(struct pm_qos_request *req,
 			   s32 new_value);
 void pm_qos_remove_request(struct pm_qos_request *req);
-
-int pm_qos_request(int pm_qos_class);
+s32 pm_qos_request(int pm_qos_class);
 int pm_qos_request_active(struct pm_qos_request *req);
-s32 pm_qos_read_value(struct pm_qos_constraints *c);
 
 #ifdef CONFIG_PM
 enum pm_qos_flags_status __dev_pm_qos_flags(struct device *dev, s32 mask);
diff --git a/kernel/power/qos.c b/kernel/power/qos.c
index afac7010e0f2..7bb55aca03bb 100644
--- a/kernel/power/qos.c
+++ b/kernel/power/qos.c
@@ -235,11 +235,10 @@ static struct pm_qos_constraints cpu_latency_constraints = {
  *
  * This function returns the current target value.
  */
-int pm_qos_request(int pm_qos_class)
+s32 pm_qos_request(int pm_qos_class)
 {
 	return pm_qos_read_value(&cpu_latency_constraints);
 }
-EXPORT_SYMBOL_GPL(pm_qos_request);
 
 int pm_qos_request_active(struct pm_qos_request *req)
 {
-- 
2.16.4







* [PATCH 13/28] PM: QoS: Add CPU latency QoS API wrappers
  2020-02-11 22:51 [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface Rafael J. Wysocki
                   ` (11 preceding siblings ...)
  2020-02-11 23:07 ` [PATCH 12/28] PM: QoS: Adjust pm_qos_request() signature and reorder pm_qos.h Rafael J. Wysocki
@ 2020-02-11 23:07 ` Rafael J. Wysocki
  2020-02-11 23:08 ` [PATCH 14/28] cpuidle: Call cpu_latency_qos_limit() instead of pm_qos_request() Rafael J. Wysocki
                   ` (18 subsequent siblings)
  31 siblings, 0 replies; 74+ messages in thread
From: Rafael J. Wysocki @ 2020-02-11 23:07 UTC (permalink / raw)
  To: Linux PM; +Cc: LKML, Amit Kucheria

From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>

Introduce (temporary) wrappers around pm_qos_request(),
pm_qos_request_active() and pm_qos_add/update/remove_request() with
the function signatures that the final CPU latency QoS API will use,
so that its users can be switched over to the new arrangement one by
one before the API is finally set.

No intentional functional impact.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---
 include/linux/pm_qos.h | 27 +++++++++++++++++++++++++++
 1 file changed, 27 insertions(+)

diff --git a/include/linux/pm_qos.h b/include/linux/pm_qos.h
index 3c4bee29ecda..63d39e66f95d 100644
--- a/include/linux/pm_qos.h
+++ b/include/linux/pm_qos.h
@@ -152,6 +152,33 @@ void pm_qos_remove_request(struct pm_qos_request *req);
 s32 pm_qos_request(int pm_qos_class);
 int pm_qos_request_active(struct pm_qos_request *req);
 
+static inline void cpu_latency_qos_add_request(struct pm_qos_request *req,
+					       s32 value)
+{
+	pm_qos_add_request(req, PM_QOS_CPU_DMA_LATENCY, value);
+}
+
+static inline void cpu_latency_qos_update_request(struct pm_qos_request *req,
+						  s32 new_value)
+{
+	pm_qos_update_request(req, new_value);
+}
+
+static inline void cpu_latency_qos_remove_request(struct pm_qos_request *req)
+{
+	pm_qos_remove_request(req);
+}
+
+static inline bool cpu_latency_qos_request_active(struct pm_qos_request *req)
+{
+	return pm_qos_request_active(req);
+}
+
+static inline s32 cpu_latency_qos_limit(void)
+{
+	return pm_qos_request(PM_QOS_CPU_DMA_LATENCY);
+}
+
 #ifdef CONFIG_PM
 enum pm_qos_flags_status __dev_pm_qos_flags(struct device *dev, s32 mask);
 enum pm_qos_flags_status dev_pm_qos_flags(struct device *dev, s32 mask);
-- 
2.16.4







* [PATCH 14/28] cpuidle: Call cpu_latency_qos_limit() instead of pm_qos_request()
  2020-02-11 22:51 [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface Rafael J. Wysocki
                   ` (12 preceding siblings ...)
  2020-02-11 23:07 ` [PATCH 13/28] PM: QoS: Add CPU latency QoS API wrappers Rafael J. Wysocki
@ 2020-02-11 23:08 ` Rafael J. Wysocki
  2020-02-11 23:10 ` [PATCH 15/28] x86: platform: iosf_mbi: Call cpu_latency_qos_*() instead of pm_qos_*() Rafael J. Wysocki
                   ` (17 subsequent siblings)
  31 siblings, 0 replies; 74+ messages in thread
From: Rafael J. Wysocki @ 2020-02-11 23:08 UTC (permalink / raw)
  To: Linux PM; +Cc: LKML, Amit Kucheria, Daniel Lezcano

From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>

Call cpu_latency_qos_limit() instead of pm_qos_request(), because the
latter is going to be dropped.

No intentional functional impact.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---
 drivers/cpuidle/governor.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/cpuidle/governor.c b/drivers/cpuidle/governor.c
index e48271e117a3..29acaf48e575 100644
--- a/drivers/cpuidle/governor.c
+++ b/drivers/cpuidle/governor.c
@@ -109,9 +109,9 @@ int cpuidle_register_governor(struct cpuidle_governor *gov)
  */
 s64 cpuidle_governor_latency_req(unsigned int cpu)
 {
-	int global_req = pm_qos_request(PM_QOS_CPU_DMA_LATENCY);
 	struct device *device = get_cpu_device(cpu);
 	int device_req = dev_pm_qos_raw_resume_latency(device);
+	int global_req = cpu_latency_qos_limit();
 
 	if (device_req > global_req)
 		device_req = global_req;
-- 
2.16.4







* [PATCH 15/28] x86: platform: iosf_mbi: Call cpu_latency_qos_*() instead of pm_qos_*()
  2020-02-11 22:51 [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface Rafael J. Wysocki
                   ` (13 preceding siblings ...)
  2020-02-11 23:08 ` [PATCH 14/28] cpuidle: Call cpu_latency_qos_limit() instead of pm_qos_request() Rafael J. Wysocki
@ 2020-02-11 23:10 ` Rafael J. Wysocki
  2020-02-12 10:14   ` Andy Shevchenko
  2020-02-11 23:12 ` [PATCH 16/28] drm: i915: " Rafael J. Wysocki
                   ` (16 subsequent siblings)
  31 siblings, 1 reply; 74+ messages in thread
From: Rafael J. Wysocki @ 2020-02-11 23:10 UTC (permalink / raw)
  To: Linux PM; +Cc: LKML, Amit Kucheria, Andy Shevchenko, David Box, x86 Maintainers

From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>

Call cpu_latency_qos_add/update/remove_request() instead of
pm_qos_add/update/remove_request(), respectively, because the
latter are going to be dropped.

No intentional functional impact.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---
 arch/x86/platform/intel/iosf_mbi.c | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/arch/x86/platform/intel/iosf_mbi.c b/arch/x86/platform/intel/iosf_mbi.c
index 9e2444500428..526f70f27c1c 100644
--- a/arch/x86/platform/intel/iosf_mbi.c
+++ b/arch/x86/platform/intel/iosf_mbi.c
@@ -265,7 +265,7 @@ static void iosf_mbi_reset_semaphore(void)
 			    iosf_mbi_sem_address, 0, PUNIT_SEMAPHORE_BIT))
 		dev_err(&mbi_pdev->dev, "Error P-Unit semaphore reset failed\n");
 
-	pm_qos_update_request(&iosf_mbi_pm_qos, PM_QOS_DEFAULT_VALUE);
+	cpu_latency_qos_update_request(&iosf_mbi_pm_qos, PM_QOS_DEFAULT_VALUE);
 
 	blocking_notifier_call_chain(&iosf_mbi_pmic_bus_access_notifier,
 				     MBI_PMIC_BUS_ACCESS_END, NULL);
@@ -301,8 +301,8 @@ static void iosf_mbi_reset_semaphore(void)
  * 4) When CPU cores enter C6 or C7 the P-Unit needs to talk to the PMIC
  *    if this happens while the kernel itself is accessing the PMIC I2C bus
  *    the SoC hangs.
- *    As the third step we call pm_qos_update_request() to disallow the CPU
- *    to enter C6 or C7.
+ *    As the third step we call cpu_latency_qos_update_request() to disallow the
+ *    CPU to enter C6 or C7.
  *
  * 5) The P-Unit has a PMIC bus semaphore which we can request to stop
  *    autonomous P-Unit tasks from accessing the PMIC I2C bus while we hold it.
@@ -338,7 +338,7 @@ int iosf_mbi_block_punit_i2c_access(void)
 	 * requires the P-Unit to talk to the PMIC and if this happens while
 	 * we're holding the semaphore, the SoC hangs.
 	 */
-	pm_qos_update_request(&iosf_mbi_pm_qos, 0);
+	cpu_latency_qos_update_request(&iosf_mbi_pm_qos, 0);
 
 	/* host driver writes to side band semaphore register */
 	ret = iosf_mbi_write(BT_MBI_UNIT_PMC, MBI_REG_WRITE,
@@ -547,8 +547,7 @@ static int __init iosf_mbi_init(void)
 {
 	iosf_debugfs_init();
 
-	pm_qos_add_request(&iosf_mbi_pm_qos, PM_QOS_CPU_DMA_LATENCY,
-			   PM_QOS_DEFAULT_VALUE);
+	cpu_latency_qos_add_request(&iosf_mbi_pm_qos, PM_QOS_DEFAULT_VALUE);
 
 	return pci_register_driver(&iosf_mbi_pci_driver);
 }
@@ -561,7 +560,7 @@ static void __exit iosf_mbi_exit(void)
 	pci_dev_put(mbi_pdev);
 	mbi_pdev = NULL;
 
-	pm_qos_remove_request(&iosf_mbi_pm_qos);
+	cpu_latency_qos_remove_request(&iosf_mbi_pm_qos);
 }
 
 module_init(iosf_mbi_init);
-- 
2.16.4







* [PATCH 16/28] drm: i915: Call cpu_latency_qos_*() instead of pm_qos_*()
  2020-02-11 22:51 [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface Rafael J. Wysocki
                   ` (14 preceding siblings ...)
  2020-02-11 23:10 ` [PATCH 15/28] x86: platform: iosf_mbi: Call cpu_latency_qos_*() instead of pm_qos_*() Rafael J. Wysocki
@ 2020-02-11 23:12 ` " Rafael J. Wysocki
  2020-02-12 10:32   ` Rafael J. Wysocki
  2020-02-14  7:42   ` Jani Nikula
  2020-02-11 23:13 ` [PATCH 17/28] drivers: hsi: " Rafael J. Wysocki
                   ` (15 subsequent siblings)
  31 siblings, 2 replies; 74+ messages in thread
From: Rafael J. Wysocki @ 2020-02-11 23:12 UTC (permalink / raw)
  To: Linux PM
  Cc: LKML, Amit Kucheria, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi,
	intel-gfx

From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>

Call cpu_latency_qos_add/update/remove_request() instead of
pm_qos_add/update/remove_request(), respectively, because the
latter are going to be dropped.

No intentional functional impact.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---
 drivers/gpu/drm/i915/display/intel_dp.c |  4 ++--
 drivers/gpu/drm/i915/i915_drv.c         | 12 +++++-------
 drivers/gpu/drm/i915/intel_sideband.c   |  5 +++--
 3 files changed, 10 insertions(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index c7424e2a04a3..208457005a11 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -1360,7 +1360,7 @@ intel_dp_aux_xfer(struct intel_dp *intel_dp,
 	 * lowest possible wakeup latency and so prevent the cpu from going into
 	 * deep sleep states.
 	 */
-	pm_qos_update_request(&i915->pm_qos, 0);
+	cpu_latency_qos_update_request(&i915->pm_qos, 0);
 
 	intel_dp_check_edp(intel_dp);
 
@@ -1488,7 +1488,7 @@ intel_dp_aux_xfer(struct intel_dp *intel_dp,
 
 	ret = recv_bytes;
 out:
-	pm_qos_update_request(&i915->pm_qos, PM_QOS_DEFAULT_VALUE);
+	cpu_latency_qos_update_request(&i915->pm_qos, PM_QOS_DEFAULT_VALUE);
 
 	if (vdd)
 		edp_panel_vdd_off(intel_dp, false);
diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
index f7385abdd74b..74481a189cfc 100644
--- a/drivers/gpu/drm/i915/i915_drv.c
+++ b/drivers/gpu/drm/i915/i915_drv.c
@@ -502,8 +502,7 @@ static int i915_driver_early_probe(struct drm_i915_private *dev_priv)
 	mutex_init(&dev_priv->backlight_lock);
 
 	mutex_init(&dev_priv->sb_lock);
-	pm_qos_add_request(&dev_priv->sb_qos,
-			   PM_QOS_CPU_DMA_LATENCY, PM_QOS_DEFAULT_VALUE);
+	cpu_latency_qos_add_request(&dev_priv->sb_qos, PM_QOS_DEFAULT_VALUE);
 
 	mutex_init(&dev_priv->av_mutex);
 	mutex_init(&dev_priv->wm.wm_mutex);
@@ -568,7 +567,7 @@ static void i915_driver_late_release(struct drm_i915_private *dev_priv)
 	vlv_free_s0ix_state(dev_priv);
 	i915_workqueues_cleanup(dev_priv);
 
-	pm_qos_remove_request(&dev_priv->sb_qos);
+	cpu_latency_qos_remove_request(&dev_priv->sb_qos);
 	mutex_destroy(&dev_priv->sb_lock);
 }
 
@@ -1226,8 +1225,7 @@ static int i915_driver_hw_probe(struct drm_i915_private *dev_priv)
 		}
 	}
 
-	pm_qos_add_request(&dev_priv->pm_qos, PM_QOS_CPU_DMA_LATENCY,
-			   PM_QOS_DEFAULT_VALUE);
+	cpu_latency_qos_add_request(&dev_priv->pm_qos, PM_QOS_DEFAULT_VALUE);
 
 	intel_gt_init_workarounds(dev_priv);
 
@@ -1273,7 +1271,7 @@ static int i915_driver_hw_probe(struct drm_i915_private *dev_priv)
 err_msi:
 	if (pdev->msi_enabled)
 		pci_disable_msi(pdev);
-	pm_qos_remove_request(&dev_priv->pm_qos);
+	cpu_latency_qos_remove_request(&dev_priv->pm_qos);
 err_mem_regions:
 	intel_memory_regions_driver_release(dev_priv);
 err_ggtt:
@@ -1296,7 +1294,7 @@ static void i915_driver_hw_remove(struct drm_i915_private *dev_priv)
 	if (pdev->msi_enabled)
 		pci_disable_msi(pdev);
 
-	pm_qos_remove_request(&dev_priv->pm_qos);
+	cpu_latency_qos_remove_request(&dev_priv->pm_qos);
 }
 
 /**
diff --git a/drivers/gpu/drm/i915/intel_sideband.c b/drivers/gpu/drm/i915/intel_sideband.c
index cbfb7171d62d..0648eda309e4 100644
--- a/drivers/gpu/drm/i915/intel_sideband.c
+++ b/drivers/gpu/drm/i915/intel_sideband.c
@@ -60,7 +60,7 @@ static void __vlv_punit_get(struct drm_i915_private *i915)
 	 * to the Valleyview P-unit and not all sideband communications.
 	 */
 	if (IS_VALLEYVIEW(i915)) {
-		pm_qos_update_request(&i915->sb_qos, 0);
+		cpu_latency_qos_update_request(&i915->sb_qos, 0);
 		on_each_cpu(ping, NULL, 1);
 	}
 }
@@ -68,7 +68,8 @@ static void __vlv_punit_get(struct drm_i915_private *i915)
 static void __vlv_punit_put(struct drm_i915_private *i915)
 {
 	if (IS_VALLEYVIEW(i915))
-		pm_qos_update_request(&i915->sb_qos, PM_QOS_DEFAULT_VALUE);
+		cpu_latency_qos_update_request(&i915->sb_qos,
+					       PM_QOS_DEFAULT_VALUE);
 
 	iosf_mbi_punit_release();
 }
-- 
2.16.4







* [PATCH 17/28] drivers: hsi: Call cpu_latency_qos_*() instead of pm_qos_*()
  2020-02-11 22:51 [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface Rafael J. Wysocki
                   ` (15 preceding siblings ...)
  2020-02-11 23:12 ` [PATCH 16/28] drm: i915: " Rafael J. Wysocki
@ 2020-02-11 23:13 ` " Rafael J. Wysocki
  2020-02-13 21:06   ` Sebastian Reichel
  2020-02-11 23:17 ` [PATCH 18/28] drivers: media: " Rafael J. Wysocki
                   ` (14 subsequent siblings)
  31 siblings, 1 reply; 74+ messages in thread
From: Rafael J. Wysocki @ 2020-02-11 23:13 UTC (permalink / raw)
  To: Linux PM; +Cc: LKML, Amit Kucheria, Sebastian Reichel

From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>

Call cpu_latency_qos_add/remove_request() and
cpu_latency_qos_request_active() instead of
pm_qos_add/remove_request() and pm_qos_request_active(),
respectively, because the latter are going to be dropped.

No intentional functional impact.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---
 drivers/hsi/clients/cmt_speech.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/drivers/hsi/clients/cmt_speech.c b/drivers/hsi/clients/cmt_speech.c
index 9eec970cdfa5..89869c66fb9d 100644
--- a/drivers/hsi/clients/cmt_speech.c
+++ b/drivers/hsi/clients/cmt_speech.c
@@ -965,14 +965,13 @@ static int cs_hsi_buf_config(struct cs_hsi_iface *hi,
 
 	if (old_state != hi->iface_state) {
 		if (hi->iface_state == CS_STATE_CONFIGURED) {
-			pm_qos_add_request(&hi->pm_qos_req,
-				PM_QOS_CPU_DMA_LATENCY,
+			cpu_latency_qos_add_request(&hi->pm_qos_req,
 				CS_QOS_LATENCY_FOR_DATA_USEC);
 			local_bh_disable();
 			cs_hsi_read_on_data(hi);
 			local_bh_enable();
 		} else if (old_state == CS_STATE_CONFIGURED) {
-			pm_qos_remove_request(&hi->pm_qos_req);
+			cpu_latency_qos_remove_request(&hi->pm_qos_req);
 		}
 	}
 	return r;
@@ -1075,8 +1074,8 @@ static void cs_hsi_stop(struct cs_hsi_iface *hi)
 	WARN_ON(!cs_state_idle(hi->control_state));
 	WARN_ON(!cs_state_idle(hi->data_state));
 
-	if (pm_qos_request_active(&hi->pm_qos_req))
-		pm_qos_remove_request(&hi->pm_qos_req);
+	if (cpu_latency_qos_request_active(&hi->pm_qos_req))
+		cpu_latency_qos_remove_request(&hi->pm_qos_req);
 
 	spin_lock_bh(&hi->lock);
 	cs_hsi_free_data(hi);
-- 
2.16.4







* [PATCH 18/28] drivers: media: Call cpu_latency_qos_*() instead of pm_qos_*()
  2020-02-11 22:51 [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface Rafael J. Wysocki
                   ` (16 preceding siblings ...)
  2020-02-11 23:13 ` [PATCH 17/28] drivers: hsi: " Rafael J. Wysocki
@ 2020-02-11 23:17 ` " Rafael J. Wysocki
  2020-02-12  5:37   ` Mauro Carvalho Chehab
  2020-02-11 23:21 ` [PATCH 19/28] drivers: mmc: " Rafael J. Wysocki
                   ` (13 subsequent siblings)
  31 siblings, 1 reply; 74+ messages in thread
From: Rafael J. Wysocki @ 2020-02-11 23:17 UTC (permalink / raw)
  To: Linux PM; +Cc: LKML, Amit Kucheria, Mauro Carvalho Chehab, linux-media

From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>

Call cpu_latency_qos_add/remove_request() instead of
pm_qos_add/remove_request(), respectively, because the
latter are going to be dropped.

No intentional functional impact.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---
 drivers/media/pci/saa7134/saa7134-video.c | 5 ++---
 drivers/media/platform/via-camera.c       | 4 ++--
 2 files changed, 4 insertions(+), 5 deletions(-)

diff --git a/drivers/media/pci/saa7134/saa7134-video.c b/drivers/media/pci/saa7134/saa7134-video.c
index 342cabf48064..a8ac94fadc14 100644
--- a/drivers/media/pci/saa7134/saa7134-video.c
+++ b/drivers/media/pci/saa7134/saa7134-video.c
@@ -1008,8 +1008,7 @@ int saa7134_vb2_start_streaming(struct vb2_queue *vq, unsigned int count)
 	 */
 	if ((dmaq == &dev->video_q && !vb2_is_streaming(&dev->vbi_vbq)) ||
 	    (dmaq == &dev->vbi_q && !vb2_is_streaming(&dev->video_vbq)))
-		pm_qos_add_request(&dev->qos_request,
-			PM_QOS_CPU_DMA_LATENCY, 20);
+		cpu_latency_qos_add_request(&dev->qos_request, 20);
 	dmaq->seq_nr = 0;
 
 	return 0;
@@ -1024,7 +1023,7 @@ void saa7134_vb2_stop_streaming(struct vb2_queue *vq)
 
 	if ((dmaq == &dev->video_q && !vb2_is_streaming(&dev->vbi_vbq)) ||
 	    (dmaq == &dev->vbi_q && !vb2_is_streaming(&dev->video_vbq)))
-		pm_qos_remove_request(&dev->qos_request);
+		cpu_latency_qos_remove_request(&dev->qos_request);
 }
 
 static const struct vb2_ops vb2_qops = {
diff --git a/drivers/media/platform/via-camera.c b/drivers/media/platform/via-camera.c
index 78841b9015ce..1cd4f7be88dd 100644
--- a/drivers/media/platform/via-camera.c
+++ b/drivers/media/platform/via-camera.c
@@ -646,7 +646,7 @@ static int viacam_vb2_start_streaming(struct vb2_queue *vq, unsigned int count)
 	 * requirement which will keep the CPU out of the deeper sleep
 	 * states.
 	 */
-	pm_qos_add_request(&cam->qos_request, PM_QOS_CPU_DMA_LATENCY, 50);
+	cpu_latency_qos_add_request(&cam->qos_request, 50);
 	viacam_start_engine(cam);
 	return 0;
 out:
@@ -662,7 +662,7 @@ static void viacam_vb2_stop_streaming(struct vb2_queue *vq)
 	struct via_camera *cam = vb2_get_drv_priv(vq);
 	struct via_buffer *buf, *tmp;
 
-	pm_qos_remove_request(&cam->qos_request);
+	cpu_latency_qos_remove_request(&cam->qos_request);
 	viacam_stop_engine(cam);
 
 	list_for_each_entry_safe(buf, tmp, &cam->buffer_queue, queue) {
-- 
2.16.4







* [PATCH 19/28] drivers: mmc: Call cpu_latency_qos_*() instead of pm_qos_*()
  2020-02-11 22:51 [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface Rafael J. Wysocki
                   ` (17 preceding siblings ...)
  2020-02-11 23:17 ` [PATCH 18/28] drivers: media: " Rafael J. Wysocki
@ 2020-02-11 23:21 ` " Rafael J. Wysocki
  2020-02-11 23:24 ` [PATCH 20/28] drivers: net: " Rafael J. Wysocki
                   ` (12 subsequent siblings)
  31 siblings, 0 replies; 74+ messages in thread
From: Rafael J. Wysocki @ 2020-02-11 23:21 UTC (permalink / raw)
  To: Linux PM; +Cc: LKML, Amit Kucheria, Ulf Hansson, linux-mmc

From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>

Call cpu_latency_qos_add/remove_request() instead of
pm_qos_add/remove_request(), respectively, because the
latter are going to be dropped.

No intentional functional impact.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---
 drivers/mmc/host/sdhci-esdhc-imx.c | 14 ++++++--------
 1 file changed, 6 insertions(+), 8 deletions(-)

diff --git a/drivers/mmc/host/sdhci-esdhc-imx.c b/drivers/mmc/host/sdhci-esdhc-imx.c
index 382f25b2fa45..b2bdf5012c55 100644
--- a/drivers/mmc/host/sdhci-esdhc-imx.c
+++ b/drivers/mmc/host/sdhci-esdhc-imx.c
@@ -1452,8 +1452,7 @@ static int sdhci_esdhc_imx_probe(struct platform_device *pdev)
 						  pdev->id_entry->driver_data;
 
 	if (imx_data->socdata->flags & ESDHC_FLAG_PMQOS)
-		pm_qos_add_request(&imx_data->pm_qos_req,
-			PM_QOS_CPU_DMA_LATENCY, 0);
+		cpu_latency_qos_add_request(&imx_data->pm_qos_req, 0);
 
 	imx_data->clk_ipg = devm_clk_get(&pdev->dev, "ipg");
 	if (IS_ERR(imx_data->clk_ipg)) {
@@ -1572,7 +1571,7 @@ static int sdhci_esdhc_imx_probe(struct platform_device *pdev)
 	clk_disable_unprepare(imx_data->clk_per);
 free_sdhci:
 	if (imx_data->socdata->flags & ESDHC_FLAG_PMQOS)
-		pm_qos_remove_request(&imx_data->pm_qos_req);
+		cpu_latency_qos_remove_request(&imx_data->pm_qos_req);
 	sdhci_pltfm_free(pdev);
 	return err;
 }
@@ -1595,7 +1594,7 @@ static int sdhci_esdhc_imx_remove(struct platform_device *pdev)
 	clk_disable_unprepare(imx_data->clk_ahb);
 
 	if (imx_data->socdata->flags & ESDHC_FLAG_PMQOS)
-		pm_qos_remove_request(&imx_data->pm_qos_req);
+		cpu_latency_qos_remove_request(&imx_data->pm_qos_req);
 
 	sdhci_pltfm_free(pdev);
 
@@ -1667,7 +1666,7 @@ static int sdhci_esdhc_runtime_suspend(struct device *dev)
 	clk_disable_unprepare(imx_data->clk_ahb);
 
 	if (imx_data->socdata->flags & ESDHC_FLAG_PMQOS)
-		pm_qos_remove_request(&imx_data->pm_qos_req);
+		cpu_latency_qos_remove_request(&imx_data->pm_qos_req);
 
 	return ret;
 }
@@ -1680,8 +1679,7 @@ static int sdhci_esdhc_runtime_resume(struct device *dev)
 	int err;
 
 	if (imx_data->socdata->flags & ESDHC_FLAG_PMQOS)
-		pm_qos_add_request(&imx_data->pm_qos_req,
-			PM_QOS_CPU_DMA_LATENCY, 0);
+		cpu_latency_qos_add_request(&imx_data->pm_qos_req, 0);
 
 	err = clk_prepare_enable(imx_data->clk_ahb);
 	if (err)
@@ -1714,7 +1712,7 @@ static int sdhci_esdhc_runtime_resume(struct device *dev)
 	clk_disable_unprepare(imx_data->clk_ahb);
 remove_pm_qos_request:
 	if (imx_data->socdata->flags & ESDHC_FLAG_PMQOS)
-		pm_qos_remove_request(&imx_data->pm_qos_req);
+		cpu_latency_qos_remove_request(&imx_data->pm_qos_req);
 	return err;
 }
 #endif
-- 
2.16.4







* [PATCH 20/28] drivers: net: Call cpu_latency_qos_*() instead of pm_qos_*()
  2020-02-11 22:51 [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface Rafael J. Wysocki
                   ` (18 preceding siblings ...)
  2020-02-11 23:21 ` [PATCH 19/28] drivers: mmc: " Rafael J. Wysocki
@ 2020-02-11 23:24 ` " Rafael J. Wysocki
  2020-02-11 23:48   ` Jeff Kirsher
  2020-02-12  5:49   ` Kalle Valo
  2020-02-11 23:26 ` [PATCH 21/28] drivers: spi: " Rafael J. Wysocki
                   ` (11 subsequent siblings)
  31 siblings, 2 replies; 74+ messages in thread
From: Rafael J. Wysocki @ 2020-02-11 23:24 UTC (permalink / raw)
  To: Linux PM
  Cc: LKML, Amit Kucheria, Jeff Kirsher, intel-wired-lan, Kalle Valo,
	linux-wireless

From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>

Call cpu_latency_qos_add/update/remove_request() instead of
pm_qos_add/update/remove_request(), respectively, because the
latter are going to be dropped.

No intentional functional impact.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---
 drivers/net/ethernet/intel/e1000e/netdev.c   | 13 ++++++-------
 drivers/net/wireless/ath/ath10k/core.c       |  4 ++--
 drivers/net/wireless/intel/ipw2x00/ipw2100.c | 10 +++++-----
 3 files changed, 13 insertions(+), 14 deletions(-)

diff --git a/drivers/net/ethernet/intel/e1000e/netdev.c b/drivers/net/ethernet/intel/e1000e/netdev.c
index db4ea58bac82..0f02c7a5ee9b 100644
--- a/drivers/net/ethernet/intel/e1000e/netdev.c
+++ b/drivers/net/ethernet/intel/e1000e/netdev.c
@@ -3280,10 +3280,10 @@ static void e1000_configure_rx(struct e1000_adapter *adapter)
 
 		dev_info(&adapter->pdev->dev,
 			 "Some CPU C-states have been disabled in order to enable jumbo frames\n");
-		pm_qos_update_request(&adapter->pm_qos_req, lat);
+		cpu_latency_qos_update_request(&adapter->pm_qos_req, lat);
 	} else {
-		pm_qos_update_request(&adapter->pm_qos_req,
-				      PM_QOS_DEFAULT_VALUE);
+		cpu_latency_qos_update_request(&adapter->pm_qos_req,
+					       PM_QOS_DEFAULT_VALUE);
 	}
 
 	/* Enable Receives */
@@ -4636,8 +4636,7 @@ int e1000e_open(struct net_device *netdev)
 		e1000_update_mng_vlan(adapter);
 
 	/* DMA latency requirement to workaround jumbo issue */
-	pm_qos_add_request(&adapter->pm_qos_req, PM_QOS_CPU_DMA_LATENCY,
-			   PM_QOS_DEFAULT_VALUE);
+	cpu_latency_qos_add_request(&adapter->pm_qos_req, PM_QOS_DEFAULT_VALUE);
 
 	/* before we allocate an interrupt, we must be ready to handle it.
 	 * Setting DEBUG_SHIRQ in the kernel makes it fire an interrupt
@@ -4679,7 +4678,7 @@ int e1000e_open(struct net_device *netdev)
 	return 0;
 
 err_req_irq:
-	pm_qos_remove_request(&adapter->pm_qos_req);
+	cpu_latency_qos_remove_request(&adapter->pm_qos_req);
 	e1000e_release_hw_control(adapter);
 	e1000_power_down_phy(adapter);
 	e1000e_free_rx_resources(adapter->rx_ring);
@@ -4743,7 +4742,7 @@ int e1000e_close(struct net_device *netdev)
 	    !test_bit(__E1000_TESTING, &adapter->state))
 		e1000e_release_hw_control(adapter);
 
-	pm_qos_remove_request(&adapter->pm_qos_req);
+	cpu_latency_qos_remove_request(&adapter->pm_qos_req);
 
 	pm_runtime_put_sync(&pdev->dev);
 
diff --git a/drivers/net/wireless/ath/ath10k/core.c b/drivers/net/wireless/ath/ath10k/core.c
index 5ec16ce19b69..a202a4eea76a 100644
--- a/drivers/net/wireless/ath/ath10k/core.c
+++ b/drivers/net/wireless/ath/ath10k/core.c
@@ -1052,11 +1052,11 @@ static int ath10k_download_fw(struct ath10k *ar)
 	}
 
 	memset(&latency_qos, 0, sizeof(latency_qos));
-	pm_qos_add_request(&latency_qos, PM_QOS_CPU_DMA_LATENCY, 0);
+	cpu_latency_qos_add_request(&latency_qos, 0);
 
 	ret = ath10k_bmi_fast_download(ar, address, data, data_len);
 
-	pm_qos_remove_request(&latency_qos);
+	cpu_latency_qos_remove_request(&latency_qos);
 
 	return ret;
 }
diff --git a/drivers/net/wireless/intel/ipw2x00/ipw2100.c b/drivers/net/wireless/intel/ipw2x00/ipw2100.c
index 536cd729c086..5dfcce77d094 100644
--- a/drivers/net/wireless/intel/ipw2x00/ipw2100.c
+++ b/drivers/net/wireless/intel/ipw2x00/ipw2100.c
@@ -1730,7 +1730,7 @@ static int ipw2100_up(struct ipw2100_priv *priv, int deferred)
 	/* the ipw2100 hardware really doesn't want power management delays
 	 * longer than 175usec
 	 */
-	pm_qos_update_request(&ipw2100_pm_qos_req, 175);
+	cpu_latency_qos_update_request(&ipw2100_pm_qos_req, 175);
 
 	/* If the interrupt is enabled, turn it off... */
 	spin_lock_irqsave(&priv->low_lock, flags);
@@ -1875,7 +1875,8 @@ static void ipw2100_down(struct ipw2100_priv *priv)
 	ipw2100_disable_interrupts(priv);
 	spin_unlock_irqrestore(&priv->low_lock, flags);
 
-	pm_qos_update_request(&ipw2100_pm_qos_req, PM_QOS_DEFAULT_VALUE);
+	cpu_latency_qos_update_request(&ipw2100_pm_qos_req,
+				       PM_QOS_DEFAULT_VALUE);
 
 	/* We have to signal any supplicant if we are disassociating */
 	if (associated)
@@ -6566,8 +6567,7 @@ static int __init ipw2100_init(void)
 	printk(KERN_INFO DRV_NAME ": %s, %s\n", DRV_DESCRIPTION, DRV_VERSION);
 	printk(KERN_INFO DRV_NAME ": %s\n", DRV_COPYRIGHT);
 
-	pm_qos_add_request(&ipw2100_pm_qos_req, PM_QOS_CPU_DMA_LATENCY,
-			   PM_QOS_DEFAULT_VALUE);
+	cpu_latency_qos_add_request(&ipw2100_pm_qos_req, PM_QOS_DEFAULT_VALUE);
 
 	ret = pci_register_driver(&ipw2100_pci_driver);
 	if (ret)
@@ -6594,7 +6594,7 @@ static void __exit ipw2100_exit(void)
 			   &driver_attr_debug_level);
 #endif
 	pci_unregister_driver(&ipw2100_pci_driver);
-	pm_qos_remove_request(&ipw2100_pm_qos_req);
+	cpu_latency_qos_remove_request(&ipw2100_pm_qos_req);
 }
 
 module_init(ipw2100_init);
-- 
2.16.4

^ permalink raw reply	[flat|nested] 74+ messages in thread
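The driver conversions in the patch above all follow the same request lifecycle: add a request on open, update it at runtime, remove it on close. The sketch below models that lifecycle in plain user-space C. The `cpu_latency_qos_*()` stubs and the `adapter_*()` helpers are hypothetical stand-ins written for illustration only, not the kernel implementation in kernel/power/qos.c, and the latency value used is an arbitrary example (e1000e computes it from link speed).

```c
#include <assert.h>

/* Hypothetical single-request model of the CPU latency QoS calls;
 * the real kernel code aggregates many requests and takes locks. */

#define PM_QOS_DEFAULT_VALUE (-1)

struct pm_qos_request { int active; int value; };

static void cpu_latency_qos_add_request(struct pm_qos_request *req, int value)
{
	req->active = 1;
	req->value = value;
}

static void cpu_latency_qos_update_request(struct pm_qos_request *req, int value)
{
	if (req->active)
		req->value = value;
}

static void cpu_latency_qos_remove_request(struct pm_qos_request *req)
{
	req->active = 0;
}

struct adapter { struct pm_qos_request pm_qos_req; };

static void adapter_open(struct adapter *a)
{
	/* like e1000e_open(): start with no constraint */
	cpu_latency_qos_add_request(&a->pm_qos_req, PM_QOS_DEFAULT_VALUE);
}

static void adapter_configure(struct adapter *a, int jumbo)
{
	/* like e1000_configure_rx(): tighten the bound for jumbo frames;
	 * 55 is an arbitrary example latency in microseconds */
	cpu_latency_qos_update_request(&a->pm_qos_req,
				       jumbo ? 55 : PM_QOS_DEFAULT_VALUE);
}

static void adapter_close(struct adapter *a)
{
	/* like e1000e_close(): drop the constraint entirely */
	cpu_latency_qos_remove_request(&a->pm_qos_req);
}
```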

* [PATCH 21/28] drivers: spi: Call cpu_latency_qos_*() instead of pm_qos_*()
  2020-02-11 22:51 [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface Rafael J. Wysocki
                   ` (19 preceding siblings ...)
  2020-02-11 23:24 ` [PATCH 20/28] drivers: net: " Rafael J. Wysocki
@ 2020-02-11 23:26 ` " Rafael J. Wysocki
  2020-02-11 23:27 ` [PATCH 22/28] drivers: tty: " Rafael J. Wysocki
                   ` (10 subsequent siblings)
  31 siblings, 0 replies; 74+ messages in thread
From: Rafael J. Wysocki @ 2020-02-11 23:26 UTC (permalink / raw)
  To: Linux PM; +Cc: LKML, Amit Kucheria, Han Xu, linux-spi

From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>

Call cpu_latency_qos_add/remove_request() instead of
pm_qos_add/remove_request(), respectively, because the
latter are going to be dropped.

No intentional functional impact.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---
 drivers/spi/spi-fsl-qspi.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/spi/spi-fsl-qspi.c b/drivers/spi/spi-fsl-qspi.c
index e8a499cd1f13..02e5cba0a5bb 100644
--- a/drivers/spi/spi-fsl-qspi.c
+++ b/drivers/spi/spi-fsl-qspi.c
@@ -484,7 +484,7 @@ static int fsl_qspi_clk_prep_enable(struct fsl_qspi *q)
 	}
 
 	if (needs_wakeup_wait_mode(q))
-		pm_qos_add_request(&q->pm_qos_req, PM_QOS_CPU_DMA_LATENCY, 0);
+		cpu_latency_qos_add_request(&q->pm_qos_req, 0);
 
 	return 0;
 }
@@ -492,7 +492,7 @@ static int fsl_qspi_clk_prep_enable(struct fsl_qspi *q)
 static void fsl_qspi_clk_disable_unprep(struct fsl_qspi *q)
 {
 	if (needs_wakeup_wait_mode(q))
-		pm_qos_remove_request(&q->pm_qos_req);
+		cpu_latency_qos_remove_request(&q->pm_qos_req);
 
 	clk_disable_unprepare(q->clk);
 	clk_disable_unprepare(q->clk_en);
-- 
2.16.4

^ permalink raw reply	[flat|nested] 74+ messages in thread

* [PATCH 22/28] drivers: tty: Call cpu_latency_qos_*() instead of pm_qos_*()
  2020-02-11 22:51 [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface Rafael J. Wysocki
                   ` (20 preceding siblings ...)
  2020-02-11 23:26 ` [PATCH 21/28] drivers: spi: " Rafael J. Wysocki
@ 2020-02-11 23:27 ` " Rafael J. Wysocki
  2020-02-12 10:35   ` Rafael J. Wysocki
  2020-02-12 19:13   ` Greg Kroah-Hartman
  2020-02-11 23:28 ` [PATCH 23/28] drivers: usb: " Rafael J. Wysocki
                   ` (9 subsequent siblings)
  31 siblings, 2 replies; 74+ messages in thread
From: Rafael J. Wysocki @ 2020-02-11 23:27 UTC (permalink / raw)
  To: Linux PM; +Cc: LKML, Amit Kucheria, Greg Kroah-Hartman, linux-serial

From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>

Call cpu_latency_qos_add/update/remove_request() instead of
pm_qos_add/update/remove_request(), respectively, because the
latter are going to be dropped.

No intentional functional impact.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---
 drivers/tty/serial/8250/8250_omap.c | 7 +++----
 drivers/tty/serial/omap-serial.c    | 9 ++++-----
 2 files changed, 7 insertions(+), 9 deletions(-)

diff --git a/drivers/tty/serial/8250/8250_omap.c b/drivers/tty/serial/8250/8250_omap.c
index 19f8d2f9e7ba..76fe72bfb8bb 100644
--- a/drivers/tty/serial/8250/8250_omap.c
+++ b/drivers/tty/serial/8250/8250_omap.c
@@ -569,7 +569,7 @@ static void omap8250_uart_qos_work(struct work_struct *work)
 	struct omap8250_priv *priv;
 
 	priv = container_of(work, struct omap8250_priv, qos_work);
-	pm_qos_update_request(&priv->pm_qos_request, priv->latency);
+	cpu_latency_qos_update_request(&priv->pm_qos_request, priv->latency);
 }
 
 #ifdef CONFIG_SERIAL_8250_DMA
@@ -1224,8 +1224,7 @@ static int omap8250_probe(struct platform_device *pdev)
 
 	priv->latency = PM_QOS_CPU_LATENCY_DEFAULT_VALUE;
 	priv->calc_latency = PM_QOS_CPU_LATENCY_DEFAULT_VALUE;
-	pm_qos_add_request(&priv->pm_qos_request, PM_QOS_CPU_DMA_LATENCY,
-			   priv->latency);
+	cpu_latency_qos_add_request(&priv->pm_qos_request, priv->latency);
 	INIT_WORK(&priv->qos_work, omap8250_uart_qos_work);
 
 	spin_lock_init(&priv->rx_dma_lock);
@@ -1295,7 +1294,7 @@ static int omap8250_remove(struct platform_device *pdev)
 	pm_runtime_put_sync(&pdev->dev);
 	pm_runtime_disable(&pdev->dev);
 	serial8250_unregister_port(priv->line);
-	pm_qos_remove_request(&priv->pm_qos_request);
+	cpu_latency_qos_remove_request(&priv->pm_qos_request);
 	device_init_wakeup(&pdev->dev, false);
 	return 0;
 }
diff --git a/drivers/tty/serial/omap-serial.c b/drivers/tty/serial/omap-serial.c
index ce2558767eee..e0b720ac754b 100644
--- a/drivers/tty/serial/omap-serial.c
+++ b/drivers/tty/serial/omap-serial.c
@@ -831,7 +831,7 @@ static void serial_omap_uart_qos_work(struct work_struct *work)
 	struct uart_omap_port *up = container_of(work, struct uart_omap_port,
 						qos_work);
 
-	pm_qos_update_request(&up->pm_qos_request, up->latency);
+	cpu_latency_qos_update_request(&up->pm_qos_request, up->latency);
 }
 
 static void
@@ -1724,8 +1724,7 @@ static int serial_omap_probe(struct platform_device *pdev)
 
 	up->latency = PM_QOS_CPU_LATENCY_DEFAULT_VALUE;
 	up->calc_latency = PM_QOS_CPU_LATENCY_DEFAULT_VALUE;
-	pm_qos_add_request(&up->pm_qos_request,
-		PM_QOS_CPU_DMA_LATENCY, up->latency);
+	cpu_latency_qos_add_request(&up->pm_qos_request, up->latency);
 	INIT_WORK(&up->qos_work, serial_omap_uart_qos_work);
 
 	platform_set_drvdata(pdev, up);
@@ -1759,7 +1758,7 @@ static int serial_omap_probe(struct platform_device *pdev)
 	pm_runtime_dont_use_autosuspend(&pdev->dev);
 	pm_runtime_put_sync(&pdev->dev);
 	pm_runtime_disable(&pdev->dev);
-	pm_qos_remove_request(&up->pm_qos_request);
+	cpu_latency_qos_remove_request(&up->pm_qos_request);
 	device_init_wakeup(up->dev, false);
 err_rs485:
 err_port_line:
@@ -1777,7 +1776,7 @@ static int serial_omap_remove(struct platform_device *dev)
 	pm_runtime_dont_use_autosuspend(up->dev);
 	pm_runtime_put_sync(up->dev);
 	pm_runtime_disable(up->dev);
-	pm_qos_remove_request(&up->pm_qos_request);
+	cpu_latency_qos_remove_request(&up->pm_qos_request);
 	device_init_wakeup(&dev->dev, false);
 
 	return 0;
-- 
2.16.4

^ permalink raw reply	[flat|nested] 74+ messages in thread

* [PATCH 23/28] drivers: usb: Call cpu_latency_qos_*() instead of pm_qos_*()
  2020-02-11 22:51 [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface Rafael J. Wysocki
                   ` (21 preceding siblings ...)
  2020-02-11 23:27 ` [PATCH 22/28] drivers: tty: " Rafael J. Wysocki
@ 2020-02-11 23:28 ` " Rafael J. Wysocki
  2020-02-12 18:38   ` Greg KH
  2020-02-19  1:09   ` Peter Chen
  2020-02-11 23:34 ` [PATCH 24/28] sound: " Rafael J. Wysocki
                   ` (8 subsequent siblings)
  31 siblings, 2 replies; 74+ messages in thread
From: Rafael J. Wysocki @ 2020-02-11 23:28 UTC (permalink / raw)
  To: Linux PM; +Cc: LKML, Amit Kucheria, Peter Chen, linux-usb

From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>

Call cpu_latency_qos_add/remove_request() instead of
pm_qos_add/remove_request(), respectively, because the
latter are going to be dropped.

No intentional functional impact.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---
 drivers/usb/chipidea/ci_hdrc_imx.c | 12 +++++-------
 1 file changed, 5 insertions(+), 7 deletions(-)

diff --git a/drivers/usb/chipidea/ci_hdrc_imx.c b/drivers/usb/chipidea/ci_hdrc_imx.c
index d8e7eb2f97b9..a479af3ae31d 100644
--- a/drivers/usb/chipidea/ci_hdrc_imx.c
+++ b/drivers/usb/chipidea/ci_hdrc_imx.c
@@ -393,8 +393,7 @@ static int ci_hdrc_imx_probe(struct platform_device *pdev)
 	}
 
 	if (pdata.flags & CI_HDRC_PMQOS)
-		pm_qos_add_request(&data->pm_qos_req,
-			PM_QOS_CPU_DMA_LATENCY, 0);
+		cpu_latency_qos_add_request(&data->pm_qos_req, 0);
 
 	ret = imx_get_clks(dev);
 	if (ret)
@@ -478,7 +477,7 @@ static int ci_hdrc_imx_probe(struct platform_device *pdev)
 		/* don't overwrite original ret (cf. EPROBE_DEFER) */
 		regulator_disable(data->hsic_pad_regulator);
 	if (pdata.flags & CI_HDRC_PMQOS)
-		pm_qos_remove_request(&data->pm_qos_req);
+		cpu_latency_qos_remove_request(&data->pm_qos_req);
 	data->ci_pdev = NULL;
 	return ret;
 }
@@ -499,7 +498,7 @@ static int ci_hdrc_imx_remove(struct platform_device *pdev)
 	if (data->ci_pdev) {
 		imx_disable_unprepare_clks(&pdev->dev);
 		if (data->plat_data->flags & CI_HDRC_PMQOS)
-			pm_qos_remove_request(&data->pm_qos_req);
+			cpu_latency_qos_remove_request(&data->pm_qos_req);
 		if (data->hsic_pad_regulator)
 			regulator_disable(data->hsic_pad_regulator);
 	}
@@ -527,7 +526,7 @@ static int __maybe_unused imx_controller_suspend(struct device *dev)
 
 	imx_disable_unprepare_clks(dev);
 	if (data->plat_data->flags & CI_HDRC_PMQOS)
-		pm_qos_remove_request(&data->pm_qos_req);
+		cpu_latency_qos_remove_request(&data->pm_qos_req);
 
 	data->in_lpm = true;
 
@@ -547,8 +546,7 @@ static int __maybe_unused imx_controller_resume(struct device *dev)
 	}
 
 	if (data->plat_data->flags & CI_HDRC_PMQOS)
-		pm_qos_add_request(&data->pm_qos_req,
-			PM_QOS_CPU_DMA_LATENCY, 0);
+		cpu_latency_qos_add_request(&data->pm_qos_req, 0);
 
 	ret = imx_prepare_enable_clks(dev);
 	if (ret)
-- 
2.16.4

^ permalink raw reply	[flat|nested] 74+ messages in thread

* [PATCH 24/28] sound: Call cpu_latency_qos_*() instead of pm_qos_*()
  2020-02-11 22:51 [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface Rafael J. Wysocki
                   ` (22 preceding siblings ...)
  2020-02-11 23:28 ` [PATCH 23/28] drivers: usb: " Rafael J. Wysocki
@ 2020-02-11 23:34 ` " Rafael J. Wysocki
  2020-02-12 10:08   ` Mark Brown
  2020-02-12 10:18   ` Mark Brown
  2020-02-11 23:35 ` [PATCH 25/28] PM: QoS: Drop PM_QOS_CPU_DMA_LATENCY and rename related functions Rafael J. Wysocki
                   ` (7 subsequent siblings)
  31 siblings, 2 replies; 74+ messages in thread
From: Rafael J. Wysocki @ 2020-02-11 23:34 UTC (permalink / raw)
  To: Linux PM
  Cc: LKML, Amit Kucheria, Takashi Iwai, alsa-devel, Liam Girdwood, Mark Brown

From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>

Call cpu_latency_qos_add/update/remove_request() and
cpu_latency_qos_request_active() instead of
pm_qos_add/update/remove_request() and pm_qos_request_active(),
respectively, because the latter are going to be dropped.

No intentional functional impact.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---
 sound/core/pcm_native.c               | 14 +++++++-------
 sound/soc/intel/atom/sst/sst.c        |  5 ++---
 sound/soc/intel/atom/sst/sst_loader.c |  4 ++--
 sound/soc/ti/omap-dmic.c              |  7 ++++---
 sound/soc/ti/omap-mcbsp.c             | 16 ++++++++--------
 sound/soc/ti/omap-mcpdm.c             | 16 ++++++++--------
 6 files changed, 31 insertions(+), 31 deletions(-)

diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c
index 336406bcb59e..151bac1bbd0b 100644
--- a/sound/core/pcm_native.c
+++ b/sound/core/pcm_native.c
@@ -748,11 +748,11 @@ static int snd_pcm_hw_params(struct snd_pcm_substream *substream,
 	snd_pcm_timer_resolution_change(substream);
 	snd_pcm_set_state(substream, SNDRV_PCM_STATE_SETUP);
 
-	if (pm_qos_request_active(&substream->latency_pm_qos_req))
-		pm_qos_remove_request(&substream->latency_pm_qos_req);
+	if (cpu_latency_qos_request_active(&substream->latency_pm_qos_req))
+		cpu_latency_qos_remove_request(&substream->latency_pm_qos_req);
 	if ((usecs = period_to_usecs(runtime)) >= 0)
-		pm_qos_add_request(&substream->latency_pm_qos_req,
-				   PM_QOS_CPU_DMA_LATENCY, usecs);
+		cpu_latency_qos_add_request(&substream->latency_pm_qos_req,
+					    usecs);
 	return 0;
  _error:
 	/* hardware might be unusable from this time,
@@ -821,7 +821,7 @@ static int snd_pcm_hw_free(struct snd_pcm_substream *substream)
 		return -EBADFD;
 	result = do_hw_free(substream);
 	snd_pcm_set_state(substream, SNDRV_PCM_STATE_OPEN);
-	pm_qos_remove_request(&substream->latency_pm_qos_req);
+	cpu_latency_qos_remove_request(&substream->latency_pm_qos_req);
 	return result;
 }
 
@@ -2598,8 +2598,8 @@ void snd_pcm_release_substream(struct snd_pcm_substream *substream)
 		substream->ops->close(substream);
 		substream->hw_opened = 0;
 	}
-	if (pm_qos_request_active(&substream->latency_pm_qos_req))
-		pm_qos_remove_request(&substream->latency_pm_qos_req);
+	if (cpu_latency_qos_request_active(&substream->latency_pm_qos_req))
+		cpu_latency_qos_remove_request(&substream->latency_pm_qos_req);
 	if (substream->pcm_release) {
 		substream->pcm_release(substream);
 		substream->pcm_release = NULL;
diff --git a/sound/soc/intel/atom/sst/sst.c b/sound/soc/intel/atom/sst/sst.c
index 68bcec5241f7..d6563985e008 100644
--- a/sound/soc/intel/atom/sst/sst.c
+++ b/sound/soc/intel/atom/sst/sst.c
@@ -325,8 +325,7 @@ int sst_context_init(struct intel_sst_drv *ctx)
 		ret = -ENOMEM;
 		goto do_free_mem;
 	}
-	pm_qos_add_request(ctx->qos, PM_QOS_CPU_DMA_LATENCY,
-				PM_QOS_DEFAULT_VALUE);
+	cpu_latency_qos_add_request(ctx->qos, PM_QOS_DEFAULT_VALUE);
 
 	dev_dbg(ctx->dev, "Requesting FW %s now...\n", ctx->firmware_name);
 	ret = request_firmware_nowait(THIS_MODULE, true, ctx->firmware_name,
@@ -364,7 +363,7 @@ void sst_context_cleanup(struct intel_sst_drv *ctx)
 	sysfs_remove_group(&ctx->dev->kobj, &sst_fw_version_attr_group);
 	flush_scheduled_work();
 	destroy_workqueue(ctx->post_msg_wq);
-	pm_qos_remove_request(ctx->qos);
+	cpu_latency_qos_remove_request(ctx->qos);
 	kfree(ctx->fw_sg_list.src);
 	kfree(ctx->fw_sg_list.dst);
 	ctx->fw_sg_list.list_len = 0;
diff --git a/sound/soc/intel/atom/sst/sst_loader.c b/sound/soc/intel/atom/sst/sst_loader.c
index ce11c36848c4..9b0e3739c738 100644
--- a/sound/soc/intel/atom/sst/sst_loader.c
+++ b/sound/soc/intel/atom/sst/sst_loader.c
@@ -412,7 +412,7 @@ int sst_load_fw(struct intel_sst_drv *sst_drv_ctx)
 		return -ENOMEM;
 
 	/* Prevent C-states beyond C6 */
-	pm_qos_update_request(sst_drv_ctx->qos, 0);
+	cpu_latency_qos_update_request(sst_drv_ctx->qos, 0);
 
 	sst_drv_ctx->sst_state = SST_FW_LOADING;
 
@@ -442,7 +442,7 @@ int sst_load_fw(struct intel_sst_drv *sst_drv_ctx)
 
 restore:
 	/* Re-enable Deeper C-states beyond C6 */
-	pm_qos_update_request(sst_drv_ctx->qos, PM_QOS_DEFAULT_VALUE);
+	cpu_latency_qos_update_request(sst_drv_ctx->qos, PM_QOS_DEFAULT_VALUE);
 	sst_free_block(sst_drv_ctx, block);
 	dev_dbg(sst_drv_ctx->dev, "fw load successful!!!\n");
 
diff --git a/sound/soc/ti/omap-dmic.c b/sound/soc/ti/omap-dmic.c
index 3f226be123d4..913579c43e9d 100644
--- a/sound/soc/ti/omap-dmic.c
+++ b/sound/soc/ti/omap-dmic.c
@@ -112,7 +112,7 @@ static void omap_dmic_dai_shutdown(struct snd_pcm_substream *substream,
 
 	mutex_lock(&dmic->mutex);
 
-	pm_qos_remove_request(&dmic->pm_qos_req);
+	cpu_latency_qos_remove_request(&dmic->pm_qos_req);
 
 	if (!dai->active)
 		dmic->active = 0;
@@ -230,8 +230,9 @@ static int omap_dmic_dai_prepare(struct snd_pcm_substream *substream,
 	struct omap_dmic *dmic = snd_soc_dai_get_drvdata(dai);
 	u32 ctrl;
 
-	if (pm_qos_request_active(&dmic->pm_qos_req))
-		pm_qos_update_request(&dmic->pm_qos_req, dmic->latency);
+	if (cpu_latency_qos_request_active(&dmic->pm_qos_req))
+		cpu_latency_qos_update_request(&dmic->pm_qos_req,
+					       dmic->latency);
 
 	/* Configure uplink threshold */
 	omap_dmic_write(dmic, OMAP_DMIC_FIFO_CTRL_REG, dmic->threshold);
diff --git a/sound/soc/ti/omap-mcbsp.c b/sound/soc/ti/omap-mcbsp.c
index 26b503bbdb5f..302d5c493c29 100644
--- a/sound/soc/ti/omap-mcbsp.c
+++ b/sound/soc/ti/omap-mcbsp.c
@@ -836,10 +836,10 @@ static void omap_mcbsp_dai_shutdown(struct snd_pcm_substream *substream,
 	int stream2 = tx ? SNDRV_PCM_STREAM_CAPTURE : SNDRV_PCM_STREAM_PLAYBACK;
 
 	if (mcbsp->latency[stream2])
-		pm_qos_update_request(&mcbsp->pm_qos_req,
-				      mcbsp->latency[stream2]);
+		cpu_latency_qos_update_request(&mcbsp->pm_qos_req,
+					       mcbsp->latency[stream2]);
 	else if (mcbsp->latency[stream1])
-		pm_qos_remove_request(&mcbsp->pm_qos_req);
+		cpu_latency_qos_remove_request(&mcbsp->pm_qos_req);
 
 	mcbsp->latency[stream1] = 0;
 
@@ -863,10 +863,10 @@ static int omap_mcbsp_dai_prepare(struct snd_pcm_substream *substream,
 	if (!latency || mcbsp->latency[stream1] < latency)
 		latency = mcbsp->latency[stream1];
 
-	if (pm_qos_request_active(pm_qos_req))
-		pm_qos_update_request(pm_qos_req, latency);
+	if (cpu_latency_qos_request_active(pm_qos_req))
+		cpu_latency_qos_update_request(pm_qos_req, latency);
 	else if (latency)
-		pm_qos_add_request(pm_qos_req, PM_QOS_CPU_DMA_LATENCY, latency);
+		cpu_latency_qos_add_request(pm_qos_req, latency);
 
 	return 0;
 }
@@ -1434,8 +1434,8 @@ static int asoc_mcbsp_remove(struct platform_device *pdev)
 	if (mcbsp->pdata->ops && mcbsp->pdata->ops->free)
 		mcbsp->pdata->ops->free(mcbsp->id);
 
-	if (pm_qos_request_active(&mcbsp->pm_qos_req))
-		pm_qos_remove_request(&mcbsp->pm_qos_req);
+	if (cpu_latency_qos_request_active(&mcbsp->pm_qos_req))
+		cpu_latency_qos_remove_request(&mcbsp->pm_qos_req);
 
 	if (mcbsp->pdata->buffer_size)
 		sysfs_remove_group(&mcbsp->dev->kobj, &additional_attr_group);
diff --git a/sound/soc/ti/omap-mcpdm.c b/sound/soc/ti/omap-mcpdm.c
index a726cd7a8252..d7ac4df6f2d9 100644
--- a/sound/soc/ti/omap-mcpdm.c
+++ b/sound/soc/ti/omap-mcpdm.c
@@ -281,10 +281,10 @@ static void omap_mcpdm_dai_shutdown(struct snd_pcm_substream *substream,
 	}
 
 	if (mcpdm->latency[stream2])
-		pm_qos_update_request(&mcpdm->pm_qos_req,
-				      mcpdm->latency[stream2]);
+		cpu_latency_qos_update_request(&mcpdm->pm_qos_req,
+					       mcpdm->latency[stream2]);
 	else if (mcpdm->latency[stream1])
-		pm_qos_remove_request(&mcpdm->pm_qos_req);
+		cpu_latency_qos_remove_request(&mcpdm->pm_qos_req);
 
 	mcpdm->latency[stream1] = 0;
 
@@ -386,10 +386,10 @@ static int omap_mcpdm_prepare(struct snd_pcm_substream *substream,
 	if (!latency || mcpdm->latency[stream1] < latency)
 		latency = mcpdm->latency[stream1];
 
-	if (pm_qos_request_active(pm_qos_req))
-		pm_qos_update_request(pm_qos_req, latency);
+	if (cpu_latency_qos_request_active(pm_qos_req))
+		cpu_latency_qos_update_request(pm_qos_req, latency);
 	else if (latency)
-		pm_qos_add_request(pm_qos_req, PM_QOS_CPU_DMA_LATENCY, latency);
+		cpu_latency_qos_add_request(pm_qos_req, latency);
 
 	if (!omap_mcpdm_active(mcpdm)) {
 		omap_mcpdm_start(mcpdm);
@@ -451,8 +451,8 @@ static int omap_mcpdm_remove(struct snd_soc_dai *dai)
 	free_irq(mcpdm->irq, (void *)mcpdm);
 	pm_runtime_disable(mcpdm->dev);
 
-	if (pm_qos_request_active(&mcpdm->pm_qos_req))
-		pm_qos_remove_request(&mcpdm->pm_qos_req);
+	if (cpu_latency_qos_request_active(&mcpdm->pm_qos_req))
+		cpu_latency_qos_remove_request(&mcpdm->pm_qos_req);
 
 	return 0;
 }
-- 
2.16.4

^ permalink raw reply	[flat|nested] 74+ messages in thread
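The OMAP audio prepare() callbacks converted above all share one idiom: update the request if one is already queued, otherwise add one only when a nonzero latency is wanted. A minimal user-space sketch of that idiom follows; the `cpu_latency_qos_*()` stubs and the `apply_latency()` helper are hypothetical stand-ins kept just detailed enough to exercise the control flow, not the real kernel API.

```c
#include <assert.h>

struct pm_qos_request { int active; int value; };

static int cpu_latency_qos_request_active(struct pm_qos_request *req)
{
	return req->active;
}

static void cpu_latency_qos_add_request(struct pm_qos_request *req, int value)
{
	req->active = 1;
	req->value = value;
}

static void cpu_latency_qos_update_request(struct pm_qos_request *req, int value)
{
	if (req->active)
		req->value = value;
}

/* The shape shared by omap_mcbsp_dai_prepare() and omap_mcpdm_prepare():
 * an active request is updated (even to 0), but a request is only ever
 * added for a nonzero latency. */
static void apply_latency(struct pm_qos_request *req, int latency)
{
	if (cpu_latency_qos_request_active(req))
		cpu_latency_qos_update_request(req, latency);
	else if (latency)
		cpu_latency_qos_add_request(req, latency);
}
```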

* [PATCH 25/28] PM: QoS: Drop PM_QOS_CPU_DMA_LATENCY and rename related functions
  2020-02-11 22:51 [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface Rafael J. Wysocki
                   ` (23 preceding siblings ...)
  2020-02-11 23:34 ` [PATCH 24/28] sound: " Rafael J. Wysocki
@ 2020-02-11 23:35 ` Rafael J. Wysocki
  2020-02-11 23:35 ` [PATCH 26/28] PM: QoS: Update file information comments Rafael J. Wysocki
                   ` (6 subsequent siblings)
  31 siblings, 0 replies; 74+ messages in thread
From: Rafael J. Wysocki @ 2020-02-11 23:35 UTC (permalink / raw)
  To: Linux PM; +Cc: LKML, Amit Kucheria

From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>

Drop the PM QoS classes enum including PM_QOS_CPU_DMA_LATENCY,
drop the wrappers around pm_qos_request(), pm_qos_request_active(),
and pm_qos_add/update/remove_request() introduced previously, rename
these functions, respectively, to cpu_latency_qos_limit(),
cpu_latency_qos_request_active(), and
cpu_latency_qos_add/update/remove_request(), and update their
kerneldoc comments.  [While at it, drop some useless comments from
these functions.]

No intentional functional impact.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---
 include/linux/pm_qos.h | 47 +++---------------------
 kernel/power/qos.c     | 98 +++++++++++++++++++++++++-------------------------
 2 files changed, 54 insertions(+), 91 deletions(-)

diff --git a/include/linux/pm_qos.h b/include/linux/pm_qos.h
index 63d39e66f95d..e0ca4d780457 100644
--- a/include/linux/pm_qos.h
+++ b/include/linux/pm_qos.h
@@ -9,14 +9,6 @@
 #include <linux/notifier.h>
 #include <linux/device.h>
 
-enum {
-	PM_QOS_RESERVED = 0,
-	PM_QOS_CPU_DMA_LATENCY,
-
-	/* insert new class ID */
-	PM_QOS_NUM_CLASSES,
-};
-
 enum pm_qos_flags_status {
 	PM_QOS_FLAGS_UNDEFINED = -1,
 	PM_QOS_FLAGS_NONE,
@@ -144,40 +136,11 @@ bool pm_qos_update_flags(struct pm_qos_flags *pqf,
 			 struct pm_qos_flags_request *req,
 			 enum pm_qos_req_action action, s32 val);
 
-void pm_qos_add_request(struct pm_qos_request *req, int pm_qos_class,
-			s32 value);
-void pm_qos_update_request(struct pm_qos_request *req,
-			   s32 new_value);
-void pm_qos_remove_request(struct pm_qos_request *req);
-s32 pm_qos_request(int pm_qos_class);
-int pm_qos_request_active(struct pm_qos_request *req);
-
-static inline void cpu_latency_qos_add_request(struct pm_qos_request *req,
-					       s32 value)
-{
-	pm_qos_add_request(req, PM_QOS_CPU_DMA_LATENCY, value);
-}
-
-static inline void cpu_latency_qos_update_request(struct pm_qos_request *req,
-						  s32 new_value)
-{
-	pm_qos_update_request(req, new_value);
-}
-
-static inline void cpu_latency_qos_remove_request(struct pm_qos_request *req)
-{
-	pm_qos_remove_request(req);
-}
-
-static inline bool cpu_latency_qos_request_active(struct pm_qos_request *req)
-{
-	return pm_qos_request_active(req);
-}
-
-static inline s32 cpu_latency_qos_limit(void)
-{
-	return pm_qos_request(PM_QOS_CPU_DMA_LATENCY);
-}
+s32 cpu_latency_qos_limit(void);
+bool cpu_latency_qos_request_active(struct pm_qos_request *req);
+void cpu_latency_qos_add_request(struct pm_qos_request *req, s32 value);
+void cpu_latency_qos_update_request(struct pm_qos_request *req, s32 new_value);
+void cpu_latency_qos_remove_request(struct pm_qos_request *req);
 
 #ifdef CONFIG_PM
 enum pm_qos_flags_status __dev_pm_qos_flags(struct device *dev, s32 mask);
diff --git a/kernel/power/qos.c b/kernel/power/qos.c
index 7bb55aca03bb..7374c76f409a 100644
--- a/kernel/power/qos.c
+++ b/kernel/power/qos.c
@@ -230,21 +230,25 @@ static struct pm_qos_constraints cpu_latency_constraints = {
 };
 
 /**
- * pm_qos_request - returns current system wide qos expectation
- * @pm_qos_class: Ignored.
- *
- * This function returns the current target value.
+ * cpu_latency_qos_limit - Return current system-wide CPU latency QoS limit.
  */
-s32 pm_qos_request(int pm_qos_class)
+s32 cpu_latency_qos_limit(void)
 {
 	return pm_qos_read_value(&cpu_latency_constraints);
 }
 
-int pm_qos_request_active(struct pm_qos_request *req)
+/**
+ * cpu_latency_qos_request_active - Check the given PM QoS request.
+ * @req: PM QoS request to check.
+ *
+ * Return: 'true' if @req has been added to the CPU latency QoS list, 'false'
+ * otherwise.
+ */
+bool cpu_latency_qos_request_active(struct pm_qos_request *req)
 {
 	return req->qos == &cpu_latency_constraints;
 }
-EXPORT_SYMBOL_GPL(pm_qos_request_active);
+EXPORT_SYMBOL_GPL(cpu_latency_qos_request_active);
 
 static void cpu_latency_qos_apply(struct pm_qos_request *req,
 				  enum pm_qos_req_action action, s32 value)
@@ -255,25 +259,24 @@ static void cpu_latency_qos_apply(struct pm_qos_request *req,
 }
 
 /**
- * pm_qos_add_request - inserts new qos request into the list
- * @req: pointer to a preallocated handle
- * @pm_qos_class: Ignored.
- * @value: defines the qos request
+ * cpu_latency_qos_add_request - Add new CPU latency QoS request.
+ * @req: Pointer to a preallocated handle.
+ * @value: Requested constraint value.
+ *
+ * Use @value to initialize the request handle pointed to by @req, insert it as
+ * a new entry to the CPU latency QoS list and recompute the effective QoS
+ * constraint for that list.
  *
- * This function inserts a new entry in the PM_QOS_CPU_DMA_LATENCY list of
- * requested QoS performance characteristics.  It recomputes the aggregate QoS
- * expectations for the PM_QOS_CPU_DMA_LATENCY list and initializes the @req
- * handle.  Caller needs to save this handle for later use in updates and
- * removal.
+ * Callers need to save the handle for later use in updates and removal of the
+ * QoS request represented by it.
  */
-void pm_qos_add_request(struct pm_qos_request *req,
-			int pm_qos_class, s32 value)
+void cpu_latency_qos_add_request(struct pm_qos_request *req, s32 value)
 {
-	if (!req) /*guard against callers passing in null */
+	if (!req)
 		return;
 
-	if (pm_qos_request_active(req)) {
-		WARN(1, KERN_ERR "pm_qos_add_request() called for already added request\n");
+	if (cpu_latency_qos_request_active(req)) {
+		WARN(1, KERN_ERR "%s called for already added request\n", __func__);
 		return;
 	}
 
@@ -282,25 +285,24 @@ void pm_qos_add_request(struct pm_qos_request *req,
 	req->qos = &cpu_latency_constraints;
 	cpu_latency_qos_apply(req, PM_QOS_ADD_REQ, value);
 }
-EXPORT_SYMBOL_GPL(pm_qos_add_request);
+EXPORT_SYMBOL_GPL(cpu_latency_qos_add_request);
 
 /**
- * pm_qos_update_request - modifies an existing qos request
- * @req : handle to list element holding a pm_qos request to use
- * @value: defines the qos request
- *
- * Updates an existing qos request for the PM_QOS_CPU_DMA_LATENCY list along
- * with updating the target PM_QOS_CPU_DMA_LATENCY value.
+ * cpu_latency_qos_update_request - Modify existing CPU latency QoS request.
+ * @req : QoS request to update.
+ * @new_value: New requested constraint value.
  *
- * Attempts are made to make this code callable on hot code paths.
+ * Use @new_value to update the QoS request represented by @req in the CPU
+ * latency QoS list along with updating the effective constraint value for that
+ * list.
  */
-void pm_qos_update_request(struct pm_qos_request *req, s32 new_value)
+void cpu_latency_qos_update_request(struct pm_qos_request *req, s32 new_value)
 {
-	if (!req) /*guard against callers passing in null */
+	if (!req)
 		return;
 
-	if (!pm_qos_request_active(req)) {
-		WARN(1, KERN_ERR "pm_qos_update_request() called for unknown object\n");
+	if (!cpu_latency_qos_request_active(req)) {
+		WARN(1, KERN_ERR "%s called for unknown object\n", __func__);
 		return;
 	}
 
@@ -311,24 +313,22 @@ void pm_qos_update_request(struct pm_qos_request *req, s32 new_value)
 
 	cpu_latency_qos_apply(req, PM_QOS_UPDATE_REQ, new_value);
 }
-EXPORT_SYMBOL_GPL(pm_qos_update_request);
+EXPORT_SYMBOL_GPL(cpu_latency_qos_update_request);
 
 /**
- * pm_qos_remove_request - modifies an existing qos request
- * @req: handle to request list element
+ * cpu_latency_qos_remove_request - Remove existing CPU latency QoS request.
+ * @req: QoS request to remove.
  *
- * Will remove pm qos request from the list of constraints and
- * recompute the current target value for PM_QOS_CPU_DMA_LATENCY.  Call this
- * on slow code paths.
+ * Remove the CPU latency QoS request represented by @req from the CPU latency
+ * QoS list along with updating the effective constraint value for that list.
  */
-void pm_qos_remove_request(struct pm_qos_request *req)
+void cpu_latency_qos_remove_request(struct pm_qos_request *req)
 {
-	if (!req) /*guard against callers passing in null */
+	if (!req)
 		return;
-		/* silent return to keep pcm code cleaner */
 
-	if (!pm_qos_request_active(req)) {
-		WARN(1, KERN_ERR "pm_qos_remove_request() called for unknown object\n");
+	if (!cpu_latency_qos_request_active(req)) {
+		WARN(1, KERN_ERR "%s called for unknown object\n", __func__);
 		return;
 	}
 
@@ -337,7 +337,7 @@ void pm_qos_remove_request(struct pm_qos_request *req)
 	cpu_latency_qos_apply(req, PM_QOS_REMOVE_REQ, PM_QOS_DEFAULT_VALUE);
 	memset(req, 0, sizeof(*req));
 }
-EXPORT_SYMBOL_GPL(pm_qos_remove_request);
+EXPORT_SYMBOL_GPL(cpu_latency_qos_remove_request);
 
 /* User space interface to the CPU latency QoS via misc device. */
 
@@ -349,7 +349,7 @@ static int cpu_latency_qos_open(struct inode *inode, struct file *filp)
 	if (!req)
 		return -ENOMEM;
 
-	pm_qos_add_request(req, PM_QOS_CPU_DMA_LATENCY, PM_QOS_DEFAULT_VALUE);
+	cpu_latency_qos_add_request(req, PM_QOS_DEFAULT_VALUE);
 	filp->private_data = req;
 
 	return 0;
@@ -361,7 +361,7 @@ static int cpu_latency_qos_release(struct inode *inode, struct file *filp)
 
 	filp->private_data = NULL;
 
-	pm_qos_remove_request(req);
+	cpu_latency_qos_remove_request(req);
 	kfree(req);
 
 	return 0;
@@ -374,7 +374,7 @@ static ssize_t cpu_latency_qos_read(struct file *filp, char __user *buf,
 	unsigned long flags;
 	s32 value;
 
-	if (!req || !pm_qos_request_active(req))
+	if (!req || !cpu_latency_qos_request_active(req))
 		return -EINVAL;
 
 	spin_lock_irqsave(&pm_qos_lock, flags);
@@ -400,7 +400,7 @@ static ssize_t cpu_latency_qos_write(struct file *filp, const char __user *buf,
 			return ret;
 	}
 
-	pm_qos_update_request(filp->private_data, value);
+	cpu_latency_qos_update_request(filp->private_data, value);
 
 	return count;
 }
-- 
2.16.4

^ permalink raw reply	[flat|nested] 74+ messages in thread
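After the patch above, the CPU latency QoS surface consists of five entry points. As a rough illustration of their contract, here is a hypothetical user-space mock: it keeps a single request slot rather than the kernel's plist-based minimum aggregation, and it silently ignores the misuse cases that the real code WARNs about, so it is a sketch of the semantics, not a reimplementation.

```c
#define PM_QOS_DEFAULT_VALUE (-1)

struct pm_qos_request { int active; int value; };

/* Single-slot stand-in for the aggregated target; the kernel recomputes
 * the minimum over all queued requests instead. */
static int qos_target = PM_QOS_DEFAULT_VALUE;

int cpu_latency_qos_request_active(struct pm_qos_request *req)
{
	return req && req->active;
}

void cpu_latency_qos_add_request(struct pm_qos_request *req, int value)
{
	if (!req || cpu_latency_qos_request_active(req))
		return;		/* real code WARNs on a double add */
	req->active = 1;
	req->value = value;
	qos_target = value;
}

void cpu_latency_qos_update_request(struct pm_qos_request *req, int new_value)
{
	if (!cpu_latency_qos_request_active(req))
		return;		/* real code WARNs on an unknown object */
	req->value = new_value;
	qos_target = new_value;
}

void cpu_latency_qos_remove_request(struct pm_qos_request *req)
{
	if (!cpu_latency_qos_request_active(req))
		return;
	req->active = 0;
	qos_target = PM_QOS_DEFAULT_VALUE;
}

int cpu_latency_qos_limit(void)
{
	return qos_target;
}
```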

* [PATCH 26/28] PM: QoS: Update file information comments
  2020-02-11 22:51 [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface Rafael J. Wysocki
                   ` (24 preceding siblings ...)
  2020-02-11 23:35 ` [PATCH 25/28] PM: QoS: Drop PM_QOS_CPU_DMA_LATENCY and rename related functions Rafael J. Wysocki
@ 2020-02-11 23:35 ` Rafael J. Wysocki
  2020-02-11 23:36 ` [PATCH 27/28] Documentation: PM: QoS: Update to reflect previous code changes Rafael J. Wysocki
                   ` (5 subsequent siblings)
  31 siblings, 0 replies; 74+ messages in thread
From: Rafael J. Wysocki @ 2020-02-11 23:35 UTC (permalink / raw)
  To: Linux PM; +Cc: LKML, Amit Kucheria

From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>

Update the file information comments in include/linux/pm_qos.h
and kernel/power/qos.c by adding titles along with copyright and
authors information to them and changing the qos.c description to
better reflect its contents (outdated information is dropped from
it in particular).

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---
 include/linux/pm_qos.h | 15 +++++++++++----
 kernel/power/qos.c     | 34 ++++++++++++----------------------
 2 files changed, 23 insertions(+), 26 deletions(-)

diff --git a/include/linux/pm_qos.h b/include/linux/pm_qos.h
index e0ca4d780457..df065db3f57a 100644
--- a/include/linux/pm_qos.h
+++ b/include/linux/pm_qos.h
@@ -1,10 +1,17 @@
 /* SPDX-License-Identifier: GPL-2.0 */
-#ifndef _LINUX_PM_QOS_H
-#define _LINUX_PM_QOS_H
-/* interface for the pm_qos_power infrastructure of the linux kernel.
+/*
+ * Definitions related to Power Management Quality of Service (PM QoS).
+ *
+ * Copyright (C) 2020 Intel Corporation
  *
- * Mark Gross <mgross@linux.intel.com>
+ * Authors:
+ *	Mark Gross <mgross@linux.intel.com>
+ *	Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  */
+
+#ifndef _LINUX_PM_QOS_H
+#define _LINUX_PM_QOS_H
+
 #include <linux/plist.h>
 #include <linux/notifier.h>
 #include <linux/device.h>
diff --git a/kernel/power/qos.c b/kernel/power/qos.c
index 7374c76f409a..ef73573db43d 100644
--- a/kernel/power/qos.c
+++ b/kernel/power/qos.c
@@ -1,31 +1,21 @@
 // SPDX-License-Identifier: GPL-2.0-only
 /*
- * This module exposes the interface to kernel space for specifying
- * QoS dependencies.  It provides infrastructure for registration of:
+ * Power Management Quality of Service (PM QoS) support base.
  *
- * Dependents on a QoS value : register requests
- * Watchers of QoS value : get notified when target QoS value changes
+ * Copyright (C) 2020 Intel Corporation
  *
- * This QoS design is best effort based.  Dependents register their QoS needs.
- * Watchers register to keep track of the current QoS needs of the system.
+ * Authors:
+ *	Mark Gross <mgross@linux.intel.com>
+ *	Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  *
- * There are 3 basic classes of QoS parameter: latency, timeout, throughput
- * each have defined units:
- * latency: usec
- * timeout: usec <-- currently not used.
- * throughput: kbs (kilo byte / sec)
+ * Provided here is an interface for specifying PM QoS dependencies.  It allows
+ * entities depending on QoS constraints to register their requests which are
+ * aggregated as appropriate to produce effective constraints (target values)
+ * that can be monitored by entities needing to respect them, either by polling
+ * or through a built-in notification mechanism.
  *
- * There are lists of pm_qos_objects each one wrapping requests, notifiers
- *
- * User mode requests on a QOS parameter register themselves to the
- * subsystem by opening the device node /dev/... and writing there request to
- * the node.  As long as the process holds a file handle open to the node the
- * client continues to be accounted for.  Upon file release the usermode
- * request is removed and a new qos target is computed.  This way when the
- * request that the application has is cleaned up when closes the file
- * pointer or exits the pm_qos_object will get an opportunity to clean up.
- *
- * Mark Gross <mgross@linux.intel.com>
+ * In addition to the basic functionality, more specific interfaces for managing
+ * global CPU latency QoS requests and frequency QoS requests are provided.
  */
 
 /*#define DEBUG*/
-- 
2.16.4






^ permalink raw reply	[flat|nested] 74+ messages in thread

* [PATCH 27/28] Documentation: PM: QoS: Update to reflect previous code changes
  2020-02-11 22:51 [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface Rafael J. Wysocki
                   ` (25 preceding siblings ...)
  2020-02-11 23:35 ` [PATCH 26/28] PM: QoS: Update file information comments Rafael J. Wysocki
@ 2020-02-11 23:36 ` Rafael J. Wysocki
  2020-02-11 23:37 ` [PATCH 28/28] PM: QoS: Make CPU latency QoS depend on CONFIG_CPU_IDLE Rafael J. Wysocki
                   ` (4 subsequent siblings)
  31 siblings, 0 replies; 74+ messages in thread
From: Rafael J. Wysocki @ 2020-02-11 23:36 UTC (permalink / raw)
  To: Linux PM; +Cc: LKML, Amit Kucheria, Linux Documentation

From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>

Update the PM QoS documentation to reflect the previous code changes
regarding the removal of PM QoS classes and the CPU latency QoS API
rework.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---
 Documentation/admin-guide/pm/cpuidle.rst | 73 +++++++++++++--------------
 Documentation/power/pm_qos_interface.rst | 86 +++++++++++++++-----------------
 2 files changed, 75 insertions(+), 84 deletions(-)

diff --git a/Documentation/admin-guide/pm/cpuidle.rst b/Documentation/admin-guide/pm/cpuidle.rst
index 6a06dc473dd6..5605cc6f9560 100644
--- a/Documentation/admin-guide/pm/cpuidle.rst
+++ b/Documentation/admin-guide/pm/cpuidle.rst
@@ -583,20 +583,17 @@ Power Management Quality of Service for CPUs
 The power management quality of service (PM QoS) framework in the Linux kernel
 allows kernel code and user space processes to set constraints on various
 energy-efficiency features of the kernel to prevent performance from dropping
-below a required level.  The PM QoS constraints can be set globally, in
-predefined categories referred to as PM QoS classes, or against individual
-devices.
+below a required level.
 
 CPU idle time management can be affected by PM QoS in two ways, through the
-global constraint in the ``PM_QOS_CPU_DMA_LATENCY`` class and through the
-resume latency constraints for individual CPUs.  Kernel code (e.g. device
-drivers) can set both of them with the help of special internal interfaces
-provided by the PM QoS framework.  User space can modify the former by opening
-the :file:`cpu_dma_latency` special device file under :file:`/dev/` and writing
-a binary value (interpreted as a signed 32-bit integer) to it.  In turn, the
-resume latency constraint for a CPU can be modified by user space by writing a
-string (representing a signed 32-bit integer) to the
-:file:`power/pm_qos_resume_latency_us` file under
+global CPU latency limit and through the resume latency constraints for
+individual CPUs.  Kernel code (e.g. device drivers) can set both of them with
+the help of special internal interfaces provided by the PM QoS framework.  User
+space can modify the former by opening the :file:`cpu_dma_latency` special
+device file under :file:`/dev/` and writing a binary value (interpreted as a
+signed 32-bit integer) to it.  In turn, the resume latency constraint for a CPU
+can be modified from user space by writing a string (representing a signed
+32-bit integer) to the :file:`power/pm_qos_resume_latency_us` file under
 :file:`/sys/devices/system/cpu/cpu<N>/` in ``sysfs``, where the CPU number
 ``<N>`` is allocated at the system initialization time.  Negative values
 will be rejected in both cases and, also in both cases, the written integer
@@ -605,32 +602,34 @@ number will be interpreted as a requested PM QoS constraint in microseconds.
 The requested value is not automatically applied as a new constraint, however,
 as it may be less restrictive (greater in this particular case) than another
 constraint previously requested by someone else.  For this reason, the PM QoS
-framework maintains a list of requests that have been made so far in each
-global class and for each device, aggregates them and applies the effective
-(minimum in this particular case) value as the new constraint.
+framework maintains a list of requests that have been made so far for the
+global CPU latency limit and for each individual CPU, aggregates them and
+applies the effective (minimum in this particular case) value as the new
+constraint.
 
 In fact, opening the :file:`cpu_dma_latency` special device file causes a new
-PM QoS request to be created and added to the priority list of requests in the
-``PM_QOS_CPU_DMA_LATENCY`` class and the file descriptor coming from the
-"open" operation represents that request.  If that file descriptor is then
-used for writing, the number written to it will be associated with the PM QoS
-request represented by it as a new requested constraint value.  Next, the
-priority list mechanism will be used to determine the new effective value of
-the entire list of requests and that effective value will be set as a new
-constraint.  Thus setting a new requested constraint value will only change the
-real constraint if the effective "list" value is affected by it.  In particular,
-for the ``PM_QOS_CPU_DMA_LATENCY`` class it only affects the real constraint if
-it is the minimum of the requested constraints in the list.  The process holding
-a file descriptor obtained by opening the :file:`cpu_dma_latency` special device
-file controls the PM QoS request associated with that file descriptor, but it
-controls this particular PM QoS request only.
+PM QoS request to be created and added to a global priority list of CPU latency
+limit requests and the file descriptor coming from the "open" operation
+represents that request.  If that file descriptor is then used for writing, the
+number written to it will be associated with the PM QoS request represented by
+it as a new requested limit value.  Next, the priority list mechanism will be
+used to determine the new effective value of the entire list of requests and
+that effective value will be set as a new CPU latency limit.  Thus requesting a
+new limit value will only change the real limit if the effective "list" value is
+affected by it, which is the case if it is the minimum of the requested values
+in the list.
+
+The process holding a file descriptor obtained by opening the
+:file:`cpu_dma_latency` special device file controls the PM QoS request
+associated with that file descriptor, but it controls this particular PM QoS
+request only.
 
 Closing the :file:`cpu_dma_latency` special device file or, more precisely, the
 file descriptor obtained while opening it, causes the PM QoS request associated
-with that file descriptor to be removed from the ``PM_QOS_CPU_DMA_LATENCY``
-class priority list and destroyed.  If that happens, the priority list mechanism
-will be used, again, to determine the new effective value for the whole list
-and that value will become the new real constraint.
+with that file descriptor to be removed from the global priority list of CPU
+latency limit requests and destroyed.  If that happens, the priority list
+mechanism will be used again, to determine the new effective value for the whole
+list and that value will become the new limit.
 
 In turn, for each CPU there is one resume latency PM QoS request associated with
 the :file:`power/pm_qos_resume_latency_us` file under
@@ -647,10 +646,10 @@ CPU in question every time the list of requests is updated this way or another
 (there may be other requests coming from kernel code in that list).
 
 CPU idle time governors are expected to regard the minimum of the global
-effective ``PM_QOS_CPU_DMA_LATENCY`` class constraint and the effective
-resume latency constraint for the given CPU as the upper limit for the exit
-latency of the idle states they can select for that CPU.  They should never
-select any idle states with exit latency beyond that limit.
+(effective) CPU latency limit and the effective resume latency constraint for
+the given CPU as the upper limit for the exit latency of the idle states that
+they are allowed to select for that CPU.  They should never select any idle
+states with exit latency beyond that limit.
 
 
 Idle States Control Via Kernel Command Line
diff --git a/Documentation/power/pm_qos_interface.rst b/Documentation/power/pm_qos_interface.rst
index 0d62d506caf0..064f668fbdab 100644
--- a/Documentation/power/pm_qos_interface.rst
+++ b/Documentation/power/pm_qos_interface.rst
@@ -7,86 +7,78 @@ performance expectations by drivers, subsystems and user space applications on
 one of the parameters.
 
 Two different PM QoS frameworks are available:
-1. PM QoS classes for cpu_dma_latency
+1. CPU latency QoS.
 2. The per-device PM QoS framework provides the API to manage the
    per-device latency constraints and PM QoS flags.
 
-Each parameters have defined units:
-
- * latency: usec
- * timeout: usec
- * throughput: kbs (kilo bit / sec)
- * memory bandwidth: mbs (mega bit / sec)
+The latency unit used in the PM QoS framework is the microsecond (usec).
 
 
 1. PM QoS framework
 ===================
 
-The infrastructure exposes multiple misc device nodes one per implemented
-parameter.  The set of parameters implement is defined by pm_qos_power_init()
-and pm_qos_params.h.  This is done because having the available parameters
-being runtime configurable or changeable from a driver was seen as too easy to
-abuse.
-
-For each parameter a list of performance requests is maintained along with
-an aggregated target value.  The aggregated target value is updated with
-changes to the request list or elements of the list.  Typically the
-aggregated target value is simply the max or min of the request values held
-in the parameter list elements.
+A global list of CPU latency QoS requests is maintained along with an aggregated
+(effective) target value.  The aggregated target value is updated with changes
+to the request list or elements of the list.  For CPU latency QoS, the
+aggregated target value is simply the min of the request values held in the list
+elements.
+
 Note: the aggregated target value is implemented as an atomic variable so that
 reading the aggregated value does not require any locking mechanism.
 
+From kernel space the use of this interface is simple:
 
-From kernel mode the use of this interface is simple:
-
-void pm_qos_add_request(handle, param_class, target_value):
-  Will insert an element into the list for that identified PM QoS class with the
-  target value.  Upon change to this list the new target is recomputed and any
-  registered notifiers are called only if the target value is now different.
-  Clients of pm_qos need to save the returned handle for future use in other
-  pm_qos API functions.
+void cpu_latency_qos_add_request(handle, target_value):
+  Will insert an element into the CPU latency QoS list with the target value.
+  Upon change to this list the new target is recomputed and any registered
+  notifiers are called only if the target value is now different.
+  Clients of PM QoS need to save the returned handle for future use in other
+  PM QoS API functions.
 
-void pm_qos_update_request(handle, new_target_value):
+void cpu_latency_qos_update_request(handle, new_target_value):
   Will update the list element pointed to by the handle with the new target
   value and recompute the new aggregated target, calling the notification tree
   if the target is changed.
 
-void pm_qos_remove_request(handle):
+void cpu_latency_qos_remove_request(handle):
   Will remove the element.  After removal it will update the aggregate target
   and call the notification tree if the target was changed as a result of
   removing the request.
 
-int pm_qos_request(param_class):
-  Returns the aggregated value for a given PM QoS class.
+int cpu_latency_qos_limit():
+  Returns the aggregated value for the CPU latency QoS.
+
+int cpu_latency_qos_request_active(handle):
+  Returns if the request is still active, i.e. it has not been removed from the
+  CPU latency QoS list.
 
-int pm_qos_request_active(handle):
-  Returns if the request is still active, i.e. it has not been removed from a
-  PM QoS class constraints list.
+int cpu_latency_qos_add_notifier(notifier):
+  Adds a notification callback function to the CPU latency QoS. The callback is
+  called when the aggregated value for the CPU latency QoS is changed.
 
-int pm_qos_add_notifier(param_class, notifier):
-  Adds a notification callback function to the PM QoS class. The callback is
-  called when the aggregated value for the PM QoS class is changed.
+int cpu_latency_qos_remove_notifier(notifier):
+  Removes the notification callback function from the CPU latency QoS.
 
-int pm_qos_remove_notifier(int param_class, notifier):
-  Removes the notification callback function for the PM QoS class.
 
+From user space:
 
-From user mode:
+The infrastructure exposes one device node, /dev/cpu_dma_latency, for the CPU
+latency QoS.
 
-Only processes can register a pm_qos request.  To provide for automatic
+Only processes can register a PM QoS request.  To provide for automatic
 cleanup of a process, the interface requires the process to register its
-parameter requests in the following way:
+parameter requests as follows.
 
-To register the default pm_qos target for the specific parameter, the process
-must open /dev/cpu_dma_latency
+To register the default PM QoS target for the CPU latency QoS, the process must
+open /dev/cpu_dma_latency.
 
 As long as the device node is held open that process has a registered
 request on the parameter.
 
-To change the requested target value the process needs to write an s32 value to
-the open device node.  Alternatively the user mode program could write a hex
-string for the value using 10 char long format e.g. "0x12345678".  This
-translates to a pm_qos_update_request call.
+To change the requested target value, the process needs to write an s32 value to
+the open device node.  Alternatively, it can write a hex string for the value
+using the 10 char long format e.g. "0x12345678".  This translates to a
+cpu_latency_qos_update_request() call.
 
 To remove the user mode request for a target value simply close the device
 node.
-- 
2.16.4
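[Editorial note: the documentation patch above states that the aggregated CPU latency target is simply the minimum of the request values held in the list. A toy userspace model of that aggregation rule — not the kernel's plist-based implementation, just the arithmetic it describes, with INT_MAX standing in for the "no constraint" default:]

```c
#include <limits.h>
#include <stddef.h>

/* "No constraint" default when no requests are active. */
#define LATENCY_NO_CONSTRAINT	INT_MAX

/* Recompute the effective CPU latency target: the minimum of all
 * active request values, per the aggregation rule documented above. */
static int effective_latency(const int *requests, size_t n)
{
	int target = LATENCY_NO_CONSTRAINT;
	size_t i;

	for (i = 0; i < n; i++)
		if (requests[i] < target)
			target = requests[i];
	return target;
}
```

This is why writing a new value to /dev/cpu_dma_latency only changes the real limit when it becomes the minimum of the list: any larger request leaves the aggregate untouched.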






^ permalink raw reply	[flat|nested] 74+ messages in thread

* [PATCH 28/28] PM: QoS: Make CPU latency QoS depend on CONFIG_CPU_IDLE
  2020-02-11 22:51 [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface Rafael J. Wysocki
                   ` (26 preceding siblings ...)
  2020-02-11 23:36 ` [PATCH 27/28] Documentation: PM: QoS: Update to reflect previous code changes Rafael J. Wysocki
@ 2020-02-11 23:37 ` Rafael J. Wysocki
  2020-02-12  8:37 ` [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface Ulf Hansson
                   ` (3 subsequent siblings)
  31 siblings, 0 replies; 74+ messages in thread
From: Rafael J. Wysocki @ 2020-02-11 23:37 UTC (permalink / raw)
  To: Linux PM; +Cc: LKML, Amit Kucheria, Daniel Lezcano

From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>

Because cpuidle is the only user of the effective constraint coming
from the CPU latency QoS, add #ifdef CONFIG_CPU_IDLE around that code
to avoid building it unnecessarily.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---
 include/linux/pm_qos.h | 13 +++++++++++++
 kernel/power/qos.c     |  2 ++
 2 files changed, 15 insertions(+)

diff --git a/include/linux/pm_qos.h b/include/linux/pm_qos.h
index df065db3f57a..4a69d4af3ff8 100644
--- a/include/linux/pm_qos.h
+++ b/include/linux/pm_qos.h
@@ -143,11 +143,24 @@ bool pm_qos_update_flags(struct pm_qos_flags *pqf,
 			 struct pm_qos_flags_request *req,
 			 enum pm_qos_req_action action, s32 val);
 
+#ifdef CONFIG_CPU_IDLE
 s32 cpu_latency_qos_limit(void);
 bool cpu_latency_qos_request_active(struct pm_qos_request *req);
 void cpu_latency_qos_add_request(struct pm_qos_request *req, s32 value);
 void cpu_latency_qos_update_request(struct pm_qos_request *req, s32 new_value);
 void cpu_latency_qos_remove_request(struct pm_qos_request *req);
+#else
+static inline s32 cpu_latency_qos_limit(void) { return INT_MAX; }
+static inline bool cpu_latency_qos_request_active(struct pm_qos_request *req)
+{
+	return false;
+}
+static inline void cpu_latency_qos_add_request(struct pm_qos_request *req,
+					       s32 value) {}
+static inline void cpu_latency_qos_update_request(struct pm_qos_request *req,
+						  s32 new_value) {}
+static inline void cpu_latency_qos_remove_request(struct pm_qos_request *req) {}
+#endif
 
 #ifdef CONFIG_PM
 enum pm_qos_flags_status __dev_pm_qos_flags(struct device *dev, s32 mask);
diff --git a/kernel/power/qos.c b/kernel/power/qos.c
index ef73573db43d..32927682bcc4 100644
--- a/kernel/power/qos.c
+++ b/kernel/power/qos.c
@@ -209,6 +209,7 @@ bool pm_qos_update_flags(struct pm_qos_flags *pqf,
 	return prev_value != curr_value;
 }
 
+#ifdef CONFIG_CPU_IDLE
 /* Definitions related to the CPU latency QoS. */
 
 static struct pm_qos_constraints cpu_latency_constraints = {
@@ -421,6 +422,7 @@ static int __init cpu_latency_qos_init(void)
 	return ret;
 }
 late_initcall(cpu_latency_qos_init);
+#endif /* CONFIG_CPU_IDLE */
 
 /* Definitions related to the frequency QoS below. */
 
-- 
2.16.4
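[Editorial note: patch 28 uses the standard kernel stub pattern — real declarations under `#ifdef CONFIG_CPU_IDLE`, inline no-ops otherwise, so callers compile unchanged either way. The pattern can be illustrated in plain userspace C; the `ENABLE_FEATURE` macro and `feature_*()` names are invented for this sketch, with the stubs returning the same "no constraint" defaults (INT_MAX, false) as the patch:]

```c
#include <limits.h>
#include <stdbool.h>

#ifdef ENABLE_FEATURE
/* Real implementations would live in a .c file; shown inline here
 * only to keep the sketch self-contained. */
static int feature_limit(void) { return 50; }
static bool feature_active(void) { return true; }
#else
/* Compiled-out variant: inline stubs with "no constraint" semantics,
 * so call sites need no #ifdefs of their own. */
static inline int feature_limit(void) { return INT_MAX; }
static inline bool feature_active(void) { return false; }
#endif
```

Since `ENABLE_FEATURE` is not defined here, the stub branch is what gets compiled, exactly as cpu_latency_qos_limit() falls back to returning INT_MAX when CONFIG_CPU_IDLE is unset.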






^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [PATCH 20/28] drivers: net: Call cpu_latency_qos_*() instead of pm_qos_*()
  2020-02-11 23:24 ` [PATCH 20/28] drivers: net: " Rafael J. Wysocki
@ 2020-02-11 23:48   ` Jeff Kirsher
  2020-02-12  5:49   ` Kalle Valo
  1 sibling, 0 replies; 74+ messages in thread
From: Jeff Kirsher @ 2020-02-11 23:48 UTC (permalink / raw)
  To: Rafael J. Wysocki, Linux PM
  Cc: LKML, Amit Kucheria, intel-wired-lan, Kalle Valo, linux-wireless

[-- Attachment #1: Type: text/plain, Size: 735 bytes --]

On Wed, 2020-02-12 at 00:24 +0100, Rafael J. Wysocki wrote:
> From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
> 
> Call cpu_latency_qos_add/update/remove_request() instead of
> pm_qos_add/update/remove_request(), respectively, because the
> latter are going to be dropped.
> 
> No intentional functional impact.
> 
> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

For the e1000e changes

> ---
>  drivers/net/ethernet/intel/e1000e/netdev.c   | 13 ++++++-------
>  drivers/net/wireless/ath/ath10k/core.c       |  4 ++--
>  drivers/net/wireless/intel/ipw2x00/ipw2100.c | 10 +++++-----
>  3 files changed, 13 insertions(+), 14 deletions(-)


[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [PATCH 18/28] drivers: media: Call cpu_latency_qos_*() instead of pm_qos_*()
  2020-02-11 23:17 ` [PATCH 18/28] drivers: media: " Rafael J. Wysocki
@ 2020-02-12  5:37   ` Mauro Carvalho Chehab
  0 siblings, 0 replies; 74+ messages in thread
From: Mauro Carvalho Chehab @ 2020-02-12  5:37 UTC (permalink / raw)
  To: Rafael J. Wysocki; +Cc: Linux PM, LKML, Amit Kucheria, linux-media

On Wed, 12 Feb 2020 00:17:51 +0100
"Rafael J. Wysocki" <rjw@rjwysocki.net> wrote:

> From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
> 
> Call cpu_latency_qos_add/remove_request() instead of
> pm_qos_add/remove_request(), respectively, because the
> latter are going to be dropped.
> 
> No intentional functional impact.
> 
> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

I'm assuming that this will be applied via your tree. So:

Acked-by: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
 
> ---
>  drivers/media/pci/saa7134/saa7134-video.c | 5 ++---
>  drivers/media/platform/via-camera.c       | 4 ++--
>  2 files changed, 4 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/media/pci/saa7134/saa7134-video.c b/drivers/media/pci/saa7134/saa7134-video.c
> index 342cabf48064..a8ac94fadc14 100644
> --- a/drivers/media/pci/saa7134/saa7134-video.c
> +++ b/drivers/media/pci/saa7134/saa7134-video.c
> @@ -1008,8 +1008,7 @@ int saa7134_vb2_start_streaming(struct vb2_queue *vq, unsigned int count)
>  	 */
>  	if ((dmaq == &dev->video_q && !vb2_is_streaming(&dev->vbi_vbq)) ||
>  	    (dmaq == &dev->vbi_q && !vb2_is_streaming(&dev->video_vbq)))
> -		pm_qos_add_request(&dev->qos_request,
> -			PM_QOS_CPU_DMA_LATENCY, 20);
> +		cpu_latency_qos_add_request(&dev->qos_request, 20);
>  	dmaq->seq_nr = 0;
>  
>  	return 0;
> @@ -1024,7 +1023,7 @@ void saa7134_vb2_stop_streaming(struct vb2_queue *vq)
>  
>  	if ((dmaq == &dev->video_q && !vb2_is_streaming(&dev->vbi_vbq)) ||
>  	    (dmaq == &dev->vbi_q && !vb2_is_streaming(&dev->video_vbq)))
> -		pm_qos_remove_request(&dev->qos_request);
> +		cpu_latency_qos_remove_request(&dev->qos_request);
>  }
>  
>  static const struct vb2_ops vb2_qops = {
> diff --git a/drivers/media/platform/via-camera.c b/drivers/media/platform/via-camera.c
> index 78841b9015ce..1cd4f7be88dd 100644
> --- a/drivers/media/platform/via-camera.c
> +++ b/drivers/media/platform/via-camera.c
> @@ -646,7 +646,7 @@ static int viacam_vb2_start_streaming(struct vb2_queue *vq, unsigned int count)
>  	 * requirement which will keep the CPU out of the deeper sleep
>  	 * states.
>  	 */
> -	pm_qos_add_request(&cam->qos_request, PM_QOS_CPU_DMA_LATENCY, 50);
> +	cpu_latency_qos_add_request(&cam->qos_request, 50);
>  	viacam_start_engine(cam);
>  	return 0;
>  out:
> @@ -662,7 +662,7 @@ static void viacam_vb2_stop_streaming(struct vb2_queue *vq)
>  	struct via_camera *cam = vb2_get_drv_priv(vq);
>  	struct via_buffer *buf, *tmp;
>  
> -	pm_qos_remove_request(&cam->qos_request);
> +	cpu_latency_qos_remove_request(&cam->qos_request);
>  	viacam_stop_engine(cam);
>  
>  	list_for_each_entry_safe(buf, tmp, &cam->buffer_queue, queue) {




Cheers,
Mauro

^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [PATCH 20/28] drivers: net: Call cpu_latency_qos_*() instead of pm_qos_*()
  2020-02-11 23:24 ` [PATCH 20/28] drivers: net: " Rafael J. Wysocki
  2020-02-11 23:48   ` Jeff Kirsher
@ 2020-02-12  5:49   ` Kalle Valo
  1 sibling, 0 replies; 74+ messages in thread
From: Kalle Valo @ 2020-02-12  5:49 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Linux PM, LKML, Amit Kucheria, Jeff Kirsher, intel-wired-lan,
	linux-wireless

"Rafael J. Wysocki" <rjw@rjwysocki.net> writes:

> From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
>
> Call cpu_latency_qos_add/update/remove_request() instead of
> pm_qos_add/update/remove_request(), respectively, because the
> latter are going to be dropped.
>
> No intentional functional impact.
>
> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> ---
>  drivers/net/ethernet/intel/e1000e/netdev.c   | 13 ++++++-------
>  drivers/net/wireless/ath/ath10k/core.c       |  4 ++--
>  drivers/net/wireless/intel/ipw2x00/ipw2100.c | 10 +++++-----
>  3 files changed, 13 insertions(+), 14 deletions(-)

For the wireless stuff:

Acked-by: Kalle Valo <kvalo@codeaurora.org>

-- 
https://wireless.wiki.kernel.org/en/developers/documentation/submittingpatches

^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface
  2020-02-11 22:51 [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface Rafael J. Wysocki
                   ` (27 preceding siblings ...)
  2020-02-11 23:37 ` [PATCH 28/28] PM: QoS: Make CPU latency QoS depend on CONFIG_CPU_IDLE Rafael J. Wysocki
@ 2020-02-12  8:37 ` Ulf Hansson
  2020-02-12  9:17   ` Rafael J. Wysocki
  2020-02-12  9:39 ` Rafael J. Wysocki
                   ` (2 subsequent siblings)
  31 siblings, 1 reply; 74+ messages in thread
From: Ulf Hansson @ 2020-02-12  8:37 UTC (permalink / raw)
  To: Rafael J. Wysocki; +Cc: Linux PM, LKML, Amit Kucheria

On Wed, 12 Feb 2020 at 00:39, Rafael J. Wysocki <rjw@rjwysocki.net> wrote:
>
> Hi All,
>
> This series of patches is based on the observation that after commit
> c3082a674f46 ("PM: QoS: Get rid of unused flags") the only global PM QoS class
> in use is PM_QOS_CPU_DMA_LATENCY, but there is still a significant amount of
> code dedicated to the handling of global PM QoS classes in general.  That code
> takes up space and adds overhead in vain, so it is better to get rid of it.
>
> Moreover, with that unuseful code removed, the interface for adding QoS
> requests for CPU latency becomes inelegant and confusing, so it is better to
> clean it up.
>
> Patches [01/28-12/28] do the first part described above, which also includes
> some assorted cleanups of the core PM QoS code that doesn't go away.
>
> Patches [13/28-25/28] rework the CPU latency QoS interface (in the classic
> "define stubs, migrate users, change the API proper" manner), patches
> [26-27/28] update the general comments and documentation to match the code
> after the previous changes and the last one makes the CPU latency QoS depend
> on CPU_IDLE (because cpuidle is the only user of its target value today).
>
> The majority of the patches in this series don't change the functionality of
> the code at all (at least not intentionally).
>
> Please refer to the changelogs of individual patches for details.
>
> Thanks!

A big thanks for cleaning this up! PM_QOS_CPU_DMA_LATENCY and
friends have been annoying me for a long time. This certainly makes
the code far better and more understandable!

I have looked through the series and couldn't find any obvious
mistakes, so feel free to add:

Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>

Note, the review tag also means that it's fine for you to pick the mmc
patch via your tree.

Kind regards
Uffe

^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface
  2020-02-12  8:37 ` [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface Ulf Hansson
@ 2020-02-12  9:17   ` Rafael J. Wysocki
  0 siblings, 0 replies; 74+ messages in thread
From: Rafael J. Wysocki @ 2020-02-12  9:17 UTC (permalink / raw)
  To: Ulf Hansson; +Cc: Rafael J. Wysocki, Linux PM, LKML, Amit Kucheria

On Wed, Feb 12, 2020 at 9:38 AM Ulf Hansson <ulf.hansson@linaro.org> wrote:
>
> On Wed, 12 Feb 2020 at 00:39, Rafael J. Wysocki <rjw@rjwysocki.net> wrote:
> >
> > Hi All,
> >
> > This series of patches is based on the observation that after commit
> > c3082a674f46 ("PM: QoS: Get rid of unused flags") the only global PM QoS class
> > in use is PM_QOS_CPU_DMA_LATENCY, but there is still a significant amount of
> > code dedicated to the handling of global PM QoS classes in general.  That code
> > takes up space and adds overhead in vain, so it is better to get rid of it.
> >
> > Moreover, with that unuseful code removed, the interface for adding QoS
> > requests for CPU latency becomes inelegant and confusing, so it is better to
> > clean it up.
> >
> > Patches [01/28-12/28] do the first part described above, which also includes
> > some assorted cleanups of the core PM QoS code that doesn't go away.
> >
> > Patches [13/28-25/28] rework the CPU latency QoS interface (in the classic
> > "define stubs, migrate users, change the API proper" manner), patches
> > [26-27/28] update the general comments and documentation to match the code
> > after the previous changes and the last one makes the CPU latency QoS depend
> > on CPU_IDLE (because cpuidle is the only user of its target value today).
> >
> > The majority of the patches in this series don't change the functionality of
> > the code at all (at least not intentionally).
> >
> > Please refer to the changelogs of individual patches for details.
> >
> > Thanks!
>
> A big thanks for cleaning this up! The PM_QOS_CPU_DMA_LATENCY and
> friends, has been annoying me for a long time. This certainly makes
> the code far better and more understandable!
>
> I have looked through the series and couldn't find any obvious
> mistakes, so feel free to add:
>
> Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>

Thanks for the review, much appreciated!

> Note, the review tag also means, that's fine for you to pick the mmc
> patch via your tree.

Thank you!

^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface
  2020-02-11 22:51 [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface Rafael J. Wysocki
                   ` (28 preceding siblings ...)
  2020-02-12  8:37 ` [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface Ulf Hansson
@ 2020-02-12  9:39 ` Rafael J. Wysocki
  2020-02-12 23:32 ` Francisco Jerez
  2020-02-13  7:10 ` Amit Kucheria
  31 siblings, 0 replies; 74+ messages in thread
From: Rafael J. Wysocki @ 2020-02-12  9:39 UTC (permalink / raw)
  To: Linux PM; +Cc: LKML, Amit Kucheria, Rafael J. Wysocki

On Wed, Feb 12, 2020 at 12:39 AM Rafael J. Wysocki <rjw@rjwysocki.net> wrote:
>
> Hi All,
>
> This series of patches is based on the observation that after commit
> c3082a674f46 ("PM: QoS: Get rid of unused flags") the only global PM QoS class
> in use is PM_QOS_CPU_DMA_LATENCY, but there is still a significant amount of
> code dedicated to the handling of global PM QoS classes in general.  That code
> takes up space and adds overhead in vain, so it is better to get rid of it.
>
> Moreover, with that unuseful code removed, the interface for adding QoS
> requests for CPU latency becomes inelegant and confusing, so it is better to
> clean it up.
>
> Patches [01/28-12/28] do the first part described above, which also includes
> some assorted cleanups of the core PM QoS code that doesn't go away.
>
> Patches [13/28-25/28] rework the CPU latency QoS interface (in the classic
> "define stubs, migrate users, change the API proper" manner), patches
> [26-27/28] update the general comments and documentation to match the code
> after the previous changes and the last one makes the CPU latency QoS depend
> on CPU_IDLE (because cpuidle is the only user of its target value today).
>
> The majority of the patches in this series don't change the functionality of
> the code at all (at least not intentionally).
>
> Please refer to the changelogs of individual patches for details.
>
> Thanks!

This patch series is available in the git branch at

 git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm.git
cpu-latency-qos

for easier access, but please note that it may be updated in response
to review comments etc.


* Re: [PATCH 24/28] sound: Call cpu_latency_qos_*() instead of pm_qos_*()
  2020-02-11 23:34 ` [PATCH 24/28] sound: " Rafael J. Wysocki
@ 2020-02-12 10:08   ` Mark Brown
  2020-02-12 10:16     ` Rafael J. Wysocki
  2020-02-12 10:18   ` Mark Brown
  1 sibling, 1 reply; 74+ messages in thread
From: Mark Brown @ 2020-02-12 10:08 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Linux PM, LKML, Amit Kucheria, Takashi Iwai, alsa-devel, Liam Girdwood


On Wed, Feb 12, 2020 at 12:34:15AM +0100, Rafael J. Wysocki wrote:
> From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
> 
> Call cpu_latency_qos_add/update/remove_request() and
> cpu_latency_qos_request_active() instead of
> pm_qos_add/update/remove_request() and pm_qos_request_active(),
> respectively, because the latter are going to be dropped.

What's the story with dependencies here, I only have this patch and not
the cover letter?



* Re: [PATCH 15/28] x86: platform: iosf_mbi: Call cpu_latency_qos_*() instead of pm_qos_*()
  2020-02-11 23:10 ` [PATCH 15/28] x86: platform: iosf_mbi: Call cpu_latency_qos_*() instead of pm_qos_*() Rafael J. Wysocki
@ 2020-02-12 10:14   ` Andy Shevchenko
  0 siblings, 0 replies; 74+ messages in thread
From: Andy Shevchenko @ 2020-02-12 10:14 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Linux PM, LKML, Amit Kucheria, David Box, x86 Maintainers

On Wed, Feb 12, 2020 at 12:10:00AM +0100, Rafael J. Wysocki wrote:
> From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
> 
> Call cpu_latency_qos_add/update/remove_request() instead of
> pm_qos_add/update/remove_request(), respectively, because the
> latter are going to be dropped.
> 
> No intentional functional impact.

Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>

> 
> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> ---
>  arch/x86/platform/intel/iosf_mbi.c | 13 ++++++-------
>  1 file changed, 6 insertions(+), 7 deletions(-)
> 
> diff --git a/arch/x86/platform/intel/iosf_mbi.c b/arch/x86/platform/intel/iosf_mbi.c
> index 9e2444500428..526f70f27c1c 100644
> --- a/arch/x86/platform/intel/iosf_mbi.c
> +++ b/arch/x86/platform/intel/iosf_mbi.c
> @@ -265,7 +265,7 @@ static void iosf_mbi_reset_semaphore(void)
>  			    iosf_mbi_sem_address, 0, PUNIT_SEMAPHORE_BIT))
>  		dev_err(&mbi_pdev->dev, "Error P-Unit semaphore reset failed\n");
>  
> -	pm_qos_update_request(&iosf_mbi_pm_qos, PM_QOS_DEFAULT_VALUE);
> +	cpu_latency_qos_update_request(&iosf_mbi_pm_qos, PM_QOS_DEFAULT_VALUE);
>  
>  	blocking_notifier_call_chain(&iosf_mbi_pmic_bus_access_notifier,
>  				     MBI_PMIC_BUS_ACCESS_END, NULL);
> @@ -301,8 +301,8 @@ static void iosf_mbi_reset_semaphore(void)
>   * 4) When CPU cores enter C6 or C7 the P-Unit needs to talk to the PMIC
>   *    if this happens while the kernel itself is accessing the PMIC I2C bus
>   *    the SoC hangs.
> - *    As the third step we call pm_qos_update_request() to disallow the CPU
> - *    to enter C6 or C7.
> + *    As the third step we call cpu_latency_qos_update_request() to disallow the
> + *    CPU to enter C6 or C7.
>   *
>   * 5) The P-Unit has a PMIC bus semaphore which we can request to stop
>   *    autonomous P-Unit tasks from accessing the PMIC I2C bus while we hold it.
> @@ -338,7 +338,7 @@ int iosf_mbi_block_punit_i2c_access(void)
>  	 * requires the P-Unit to talk to the PMIC and if this happens while
>  	 * we're holding the semaphore, the SoC hangs.
>  	 */
> -	pm_qos_update_request(&iosf_mbi_pm_qos, 0);
> +	cpu_latency_qos_update_request(&iosf_mbi_pm_qos, 0);
>  
>  	/* host driver writes to side band semaphore register */
>  	ret = iosf_mbi_write(BT_MBI_UNIT_PMC, MBI_REG_WRITE,
> @@ -547,8 +547,7 @@ static int __init iosf_mbi_init(void)
>  {
>  	iosf_debugfs_init();
>  
> -	pm_qos_add_request(&iosf_mbi_pm_qos, PM_QOS_CPU_DMA_LATENCY,
> -			   PM_QOS_DEFAULT_VALUE);
> +	cpu_latency_qos_add_request(&iosf_mbi_pm_qos, PM_QOS_DEFAULT_VALUE);
>  
>  	return pci_register_driver(&iosf_mbi_pci_driver);
>  }
> @@ -561,7 +560,7 @@ static void __exit iosf_mbi_exit(void)
>  	pci_dev_put(mbi_pdev);
>  	mbi_pdev = NULL;
>  
> -	pm_qos_remove_request(&iosf_mbi_pm_qos);
> +	cpu_latency_qos_remove_request(&iosf_mbi_pm_qos);
>  }
>  
>  module_init(iosf_mbi_init);
> -- 
> 2.16.4

-- 
With Best Regards,
Andy Shevchenko



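[Editorial note: the comment block in the iosf_mbi patch above explains why `cpu_latency_qos_update_request(&iosf_mbi_pm_qos, 0)` keeps the CPUs out of C6/C7: the CPU latency constraint list is of type PM_QOS_MIN, so the effective target value is the minimum over all active requests, and a bound of 0 us rules out every idle state with a nonzero exit latency. A toy Python model of that aggregation (illustrative names only; the real implementation uses a plist in kernel/power/qos.c and special-cases PM_QOS_DEFAULT_VALUE):

```python
USEC_PER_SEC = 1_000_000
# PM_QOS_CPU_LATENCY_DEFAULT_VALUE in the series is 2000 s in microseconds,
# i.e. effectively "no constraint".
CPU_LATENCY_DEFAULT_US = 2000 * USEC_PER_SEC

class CpuLatencyQos:
    """Toy model: the effective target value is the minimum over all
    active requests (the constraint list type is PM_QOS_MIN)."""

    def __init__(self):
        self._requests = {}  # owner -> requested latency bound, microseconds

    def add_request(self, owner, value_us):
        self._requests[owner] = value_us

    def update_request(self, owner, value_us):
        self._requests[owner] = value_us

    def remove_request(self, owner):
        self._requests.pop(owner, None)

    def target_value(self):
        if not self._requests:
            return CPU_LATENCY_DEFAULT_US
        return min(self._requests.values())

qos = CpuLatencyQos()
qos.add_request("iosf_mbi", CPU_LATENCY_DEFAULT_US)   # probe time: no limit yet
assert qos.target_value() == CPU_LATENCY_DEFAULT_US

qos.update_request("iosf_mbi", 0)   # like cpu_latency_qos_update_request(&req, 0)
assert qos.target_value() == 0      # states with nonzero exit latency are barred

qos.update_request("iosf_mbi", CPU_LATENCY_DEFAULT_US)  # semaphore released
assert qos.target_value() == CPU_LATENCY_DEFAULT_US
```

Because the aggregate is a minimum, any single requester can force the constraint down, which is exactly the property the P-Unit semaphore code relies on.]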

* Re: [PATCH 24/28] sound: Call cpu_latency_qos_*() instead of pm_qos_*()
  2020-02-12 10:08   ` Mark Brown
@ 2020-02-12 10:16     ` Rafael J. Wysocki
  2020-02-12 10:21       ` Takashi Iwai
  0 siblings, 1 reply; 74+ messages in thread
From: Rafael J. Wysocki @ 2020-02-12 10:16 UTC (permalink / raw)
  To: Mark Brown
  Cc: Rafael J. Wysocki, Linux PM, LKML, Amit Kucheria, Takashi Iwai,
	moderated list:SOUND - SOC LAYER / DYNAMIC AUDIO POWER MANAGEM...,
	Liam Girdwood

On Wed, Feb 12, 2020 at 11:08 AM Mark Brown <broonie@kernel.org> wrote:
>
> On Wed, Feb 12, 2020 at 12:34:15AM +0100, Rafael J. Wysocki wrote:
> > From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
> >
> > Call cpu_latency_qos_add/update/remove_request() and
> > cpu_latency_qos_request_active() instead of
> > pm_qos_add/update/remove_request() and pm_qos_request_active(),
> > respectively, because the latter are going to be dropped.
>
> What's the story with dependencies here, I only have this patch and not
> the cover letter?

The cover letter is here:

https://lore.kernel.org/linux-pm/CAJZ5v0h1z2p66J5KB3P0RjPkLE-DfDbcfhG_OrnDG_weir7HMA@mail.gmail.com/T/#m92ce7ffd743083e89e45c0a98da8c0140e44c70b

Generally speaking, this patch depends on the previous patches in the series.


* Re: [PATCH 24/28] sound: Call cpu_latency_qos_*() instead of pm_qos_*()
  2020-02-11 23:34 ` [PATCH 24/28] sound: " Rafael J. Wysocki
  2020-02-12 10:08   ` Mark Brown
@ 2020-02-12 10:18   ` Mark Brown
  1 sibling, 0 replies; 74+ messages in thread
From: Mark Brown @ 2020-02-12 10:18 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Linux PM, LKML, Amit Kucheria, Takashi Iwai, alsa-devel, Liam Girdwood


On Wed, Feb 12, 2020 at 12:34:15AM +0100, Rafael J. Wysocki wrote:
> From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
> 
> Call cpu_latency_qos_add/update/remove_request() and
> cpu_latency_qos_request_active() instead of
> pm_qos_add/update/remove_request() and pm_qos_request_active(),
> respectively, because the latter are going to be dropped.

Acked-by: Mark Brown <broonie@kernel.org>



* Re: [PATCH 24/28] sound: Call cpu_latency_qos_*() instead of pm_qos_*()
  2020-02-12 10:16     ` Rafael J. Wysocki
@ 2020-02-12 10:21       ` Takashi Iwai
  0 siblings, 0 replies; 74+ messages in thread
From: Takashi Iwai @ 2020-02-12 10:21 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Mark Brown, Rafael J. Wysocki, Linux PM, LKML, Amit Kucheria,
	Takashi Iwai,
	moderated list:SOUND - SOC LAYER / DYNAMIC AUDIO POWER MANAGEM...,
	Liam Girdwood

On Wed, 12 Feb 2020 11:16:27 +0100,
Rafael J. Wysocki wrote:
> 
> On Wed, Feb 12, 2020 at 11:08 AM Mark Brown <broonie@kernel.org> wrote:
> >
> > On Wed, Feb 12, 2020 at 12:34:15AM +0100, Rafael J. Wysocki wrote:
> > > From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
> > >
> > > Call cpu_latency_qos_add/update/remove_request() and
> > > cpu_latency_qos_request_active() instead of
> > > pm_qos_add/update/remove_request() and pm_qos_request_active(),
> > > respectively, because the latter are going to be dropped.
> >
> > What's the story with dependencies here, I only have this patch and not
> > the cover letter?
> 
> The cover letter is here:
> 
> https://lore.kernel.org/linux-pm/CAJZ5v0h1z2p66J5KB3P0RjPkLE-DfDbcfhG_OrnDG_weir7HMA@mail.gmail.com/T/#m92ce7ffd743083e89e45c0a98da8c0140e44c70b
> 
> Generally speaking, this patch depends on the previous patches in the series.

OK, so the changes in the sound tree are just an API rename / replacement.

Acked-by: Takashi Iwai <tiwai@suse.de>


thanks,

Takashi


* Re: [PATCH 16/28] drm: i915: Call cpu_latency_qos_*() instead of pm_qos_*()
  2020-02-11 23:12 ` [PATCH 16/28] drm: i915: " Rafael J. Wysocki
@ 2020-02-12 10:32   ` Rafael J. Wysocki
  2020-02-14  7:42   ` Jani Nikula
  1 sibling, 0 replies; 74+ messages in thread
From: Rafael J. Wysocki @ 2020-02-12 10:32 UTC (permalink / raw)
  To: intel-gfx
  Cc: LKML, Amit Kucheria, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi,
	Rafael J. Wysocki, Linux PM

On Wed, Feb 12, 2020 at 12:40 AM Rafael J. Wysocki <rjw@rjwysocki.net> wrote:
>
> From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
>
> Call cpu_latency_qos_add/update/remove_request() instead of
> pm_qos_add/update/remove_request(), respectively, because the
> latter are going to be dropped.
>
> No intentional functional impact.
>
> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

Please note that the whole series is available here:

https://lore.kernel.org/linux-pm/1654227.8mz0SueHsU@kreacher/

> ---
>  drivers/gpu/drm/i915/display/intel_dp.c |  4 ++--
>  drivers/gpu/drm/i915/i915_drv.c         | 12 +++++-------
>  drivers/gpu/drm/i915/intel_sideband.c   |  5 +++--
>  3 files changed, 10 insertions(+), 11 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> index c7424e2a04a3..208457005a11 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -1360,7 +1360,7 @@ intel_dp_aux_xfer(struct intel_dp *intel_dp,
>          * lowest possible wakeup latency and so prevent the cpu from going into
>          * deep sleep states.
>          */
> -       pm_qos_update_request(&i915->pm_qos, 0);
> +       cpu_latency_qos_update_request(&i915->pm_qos, 0);
>
>         intel_dp_check_edp(intel_dp);
>
> @@ -1488,7 +1488,7 @@ intel_dp_aux_xfer(struct intel_dp *intel_dp,
>
>         ret = recv_bytes;
>  out:
> -       pm_qos_update_request(&i915->pm_qos, PM_QOS_DEFAULT_VALUE);
> +       cpu_latency_qos_update_request(&i915->pm_qos, PM_QOS_DEFAULT_VALUE);
>
>         if (vdd)
>                 edp_panel_vdd_off(intel_dp, false);
> diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
> index f7385abdd74b..74481a189cfc 100644
> --- a/drivers/gpu/drm/i915/i915_drv.c
> +++ b/drivers/gpu/drm/i915/i915_drv.c
> @@ -502,8 +502,7 @@ static int i915_driver_early_probe(struct drm_i915_private *dev_priv)
>         mutex_init(&dev_priv->backlight_lock);
>
>         mutex_init(&dev_priv->sb_lock);
> -       pm_qos_add_request(&dev_priv->sb_qos,
> -                          PM_QOS_CPU_DMA_LATENCY, PM_QOS_DEFAULT_VALUE);
> +       cpu_latency_qos_add_request(&dev_priv->sb_qos, PM_QOS_DEFAULT_VALUE);
>
>         mutex_init(&dev_priv->av_mutex);
>         mutex_init(&dev_priv->wm.wm_mutex);
> @@ -568,7 +567,7 @@ static void i915_driver_late_release(struct drm_i915_private *dev_priv)
>         vlv_free_s0ix_state(dev_priv);
>         i915_workqueues_cleanup(dev_priv);
>
> -       pm_qos_remove_request(&dev_priv->sb_qos);
> +       cpu_latency_qos_remove_request(&dev_priv->sb_qos);
>         mutex_destroy(&dev_priv->sb_lock);
>  }
>
> @@ -1226,8 +1225,7 @@ static int i915_driver_hw_probe(struct drm_i915_private *dev_priv)
>                 }
>         }
>
> -       pm_qos_add_request(&dev_priv->pm_qos, PM_QOS_CPU_DMA_LATENCY,
> -                          PM_QOS_DEFAULT_VALUE);
> +       cpu_latency_qos_add_request(&dev_priv->pm_qos, PM_QOS_DEFAULT_VALUE);
>
>         intel_gt_init_workarounds(dev_priv);
>
> @@ -1273,7 +1271,7 @@ static int i915_driver_hw_probe(struct drm_i915_private *dev_priv)
>  err_msi:
>         if (pdev->msi_enabled)
>                 pci_disable_msi(pdev);
> -       pm_qos_remove_request(&dev_priv->pm_qos);
> +       cpu_latency_qos_remove_request(&dev_priv->pm_qos);
>  err_mem_regions:
>         intel_memory_regions_driver_release(dev_priv);
>  err_ggtt:
> @@ -1296,7 +1294,7 @@ static void i915_driver_hw_remove(struct drm_i915_private *dev_priv)
>         if (pdev->msi_enabled)
>                 pci_disable_msi(pdev);
>
> -       pm_qos_remove_request(&dev_priv->pm_qos);
> +       cpu_latency_qos_remove_request(&dev_priv->pm_qos);
>  }
>
>  /**
> diff --git a/drivers/gpu/drm/i915/intel_sideband.c b/drivers/gpu/drm/i915/intel_sideband.c
> index cbfb7171d62d..0648eda309e4 100644
> --- a/drivers/gpu/drm/i915/intel_sideband.c
> +++ b/drivers/gpu/drm/i915/intel_sideband.c
> @@ -60,7 +60,7 @@ static void __vlv_punit_get(struct drm_i915_private *i915)
>          * to the Valleyview P-unit and not all sideband communications.
>          */
>         if (IS_VALLEYVIEW(i915)) {
> -               pm_qos_update_request(&i915->sb_qos, 0);
> +               cpu_latency_qos_update_request(&i915->sb_qos, 0);
>                 on_each_cpu(ping, NULL, 1);
>         }
>  }
> @@ -68,7 +68,8 @@ static void __vlv_punit_get(struct drm_i915_private *i915)
>  static void __vlv_punit_put(struct drm_i915_private *i915)
>  {
>         if (IS_VALLEYVIEW(i915))
> -               pm_qos_update_request(&i915->sb_qos, PM_QOS_DEFAULT_VALUE);
> +               cpu_latency_qos_update_request(&i915->sb_qos,
> +                                              PM_QOS_DEFAULT_VALUE);
>
>         iosf_mbi_punit_release();
>  }
> --
> 2.16.4


* Re: [PATCH 10/28] PM: QoS: Rename things related to the CPU latency QoS
  2020-02-11 23:04 ` [PATCH 10/28] PM: QoS: Rename things related to the CPU latency QoS Rafael J. Wysocki
@ 2020-02-12 10:34   ` Rafael J. Wysocki
  2020-02-12 19:13   ` Greg Kroah-Hartman
  1 sibling, 0 replies; 74+ messages in thread
From: Rafael J. Wysocki @ 2020-02-12 10:34 UTC (permalink / raw)
  To: linux-serial
  Cc: Linux PM, LKML, Amit Kucheria, Greg Kroah-Hartman, Rafael J. Wysocki

On Wed, Feb 12, 2020 at 12:39 AM Rafael J. Wysocki <rjw@rjwysocki.net> wrote:
>
> From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
>
> First, rename PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE to
> PM_QOS_CPU_LATENCY_DEFAULT_VALUE and update all of the code
> referring to it accordingly.
>
> Next, rename cpu_dma_constraints to cpu_latency_constraints, move
> the definition of it closer to the functions referring to it and
> update all of them accordingly.  [While at it, add a comment to mark
> the start of the code related to the CPU latency QoS.]
>
> Finally, rename the pm_qos_power_*() family of functions and
> pm_qos_power_fops to cpu_latency_qos_*() and cpu_latency_qos_fops,
> respectively, and update the definition of cpu_latency_qos_miscdev.
> [While at it, update the miscdev interface code start comment.]
>
> No intentional functional impact.
>
> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>


Please note that the whole series is available here:

https://lore.kernel.org/linux-pm/1654227.8mz0SueHsU@kreacher/


> ---
>  drivers/tty/serial/8250/8250_omap.c |  6 ++--
>  drivers/tty/serial/omap-serial.c    |  6 ++--
>  include/linux/pm_qos.h              |  2 +-
>  kernel/power/qos.c                  | 56 +++++++++++++++++++------------------
>  4 files changed, 36 insertions(+), 34 deletions(-)
>
> diff --git a/drivers/tty/serial/8250/8250_omap.c b/drivers/tty/serial/8250/8250_omap.c
> index 6f343ca08440..19f8d2f9e7ba 100644
> --- a/drivers/tty/serial/8250/8250_omap.c
> +++ b/drivers/tty/serial/8250/8250_omap.c
> @@ -1222,8 +1222,8 @@ static int omap8250_probe(struct platform_device *pdev)
>                          DEFAULT_CLK_SPEED);
>         }
>
> -       priv->latency = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE;
> -       priv->calc_latency = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE;
> +       priv->latency = PM_QOS_CPU_LATENCY_DEFAULT_VALUE;
> +       priv->calc_latency = PM_QOS_CPU_LATENCY_DEFAULT_VALUE;
>         pm_qos_add_request(&priv->pm_qos_request, PM_QOS_CPU_DMA_LATENCY,
>                            priv->latency);
>         INIT_WORK(&priv->qos_work, omap8250_uart_qos_work);
> @@ -1445,7 +1445,7 @@ static int omap8250_runtime_suspend(struct device *dev)
>         if (up->dma && up->dma->rxchan)
>                 omap_8250_rx_dma_flush(up);
>
> -       priv->latency = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE;
> +       priv->latency = PM_QOS_CPU_LATENCY_DEFAULT_VALUE;
>         schedule_work(&priv->qos_work);
>
>         return 0;
> diff --git a/drivers/tty/serial/omap-serial.c b/drivers/tty/serial/omap-serial.c
> index 48017cec7f2f..ce2558767eee 100644
> --- a/drivers/tty/serial/omap-serial.c
> +++ b/drivers/tty/serial/omap-serial.c
> @@ -1722,8 +1722,8 @@ static int serial_omap_probe(struct platform_device *pdev)
>                          DEFAULT_CLK_SPEED);
>         }
>
> -       up->latency = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE;
> -       up->calc_latency = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE;
> +       up->latency = PM_QOS_CPU_LATENCY_DEFAULT_VALUE;
> +       up->calc_latency = PM_QOS_CPU_LATENCY_DEFAULT_VALUE;
>         pm_qos_add_request(&up->pm_qos_request,
>                 PM_QOS_CPU_DMA_LATENCY, up->latency);
>         INIT_WORK(&up->qos_work, serial_omap_uart_qos_work);
> @@ -1869,7 +1869,7 @@ static int serial_omap_runtime_suspend(struct device *dev)
>
>         serial_omap_enable_wakeup(up, true);
>
> -       up->latency = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE;
> +       up->latency = PM_QOS_CPU_LATENCY_DEFAULT_VALUE;
>         schedule_work(&up->qos_work);
>
>         return 0;
> diff --git a/include/linux/pm_qos.h b/include/linux/pm_qos.h
> index cb57e5918a25..a3e0bfc6c470 100644
> --- a/include/linux/pm_qos.h
> +++ b/include/linux/pm_qos.h
> @@ -28,7 +28,7 @@ enum pm_qos_flags_status {
>  #define PM_QOS_LATENCY_ANY     S32_MAX
>  #define PM_QOS_LATENCY_ANY_NS  ((s64)PM_QOS_LATENCY_ANY * NSEC_PER_USEC)
>
> -#define PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE       (2000 * USEC_PER_SEC)
> +#define PM_QOS_CPU_LATENCY_DEFAULT_VALUE       (2000 * USEC_PER_SEC)
>  #define PM_QOS_RESUME_LATENCY_DEFAULT_VALUE    PM_QOS_LATENCY_ANY
>  #define PM_QOS_RESUME_LATENCY_NO_CONSTRAINT    PM_QOS_LATENCY_ANY
>  #define PM_QOS_RESUME_LATENCY_NO_CONSTRAINT_NS PM_QOS_LATENCY_ANY_NS
> diff --git a/kernel/power/qos.c b/kernel/power/qos.c
> index 201b43bc6457..a6bf53e9db17 100644
> --- a/kernel/power/qos.c
> +++ b/kernel/power/qos.c
> @@ -56,14 +56,6 @@
>   */
>  static DEFINE_SPINLOCK(pm_qos_lock);
>
> -static struct pm_qos_constraints cpu_dma_constraints = {
> -       .list = PLIST_HEAD_INIT(cpu_dma_constraints.list),
> -       .target_value = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE,
> -       .default_value = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE,
> -       .no_constraint_value = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE,
> -       .type = PM_QOS_MIN,
> -};
> -
>  /**
>   * pm_qos_read_value - Return the current effective constraint value.
>   * @c: List of PM QoS constraint requests.
> @@ -227,6 +219,16 @@ bool pm_qos_update_flags(struct pm_qos_flags *pqf,
>         return prev_value != curr_value;
>  }
>
> +/* Definitions related to the CPU latency QoS. */
> +
> +static struct pm_qos_constraints cpu_latency_constraints = {
> +       .list = PLIST_HEAD_INIT(cpu_latency_constraints.list),
> +       .target_value = PM_QOS_CPU_LATENCY_DEFAULT_VALUE,
> +       .default_value = PM_QOS_CPU_LATENCY_DEFAULT_VALUE,
> +       .no_constraint_value = PM_QOS_CPU_LATENCY_DEFAULT_VALUE,
> +       .type = PM_QOS_MIN,
> +};
> +
>  /**
>   * pm_qos_request - returns current system wide qos expectation
>   * @pm_qos_class: Ignored.
> @@ -235,13 +237,13 @@ bool pm_qos_update_flags(struct pm_qos_flags *pqf,
>   */
>  int pm_qos_request(int pm_qos_class)
>  {
> -       return pm_qos_read_value(&cpu_dma_constraints);
> +       return pm_qos_read_value(&cpu_latency_constraints);
>  }
>  EXPORT_SYMBOL_GPL(pm_qos_request);
>
>  int pm_qos_request_active(struct pm_qos_request *req)
>  {
> -       return req->qos == &cpu_dma_constraints;
> +       return req->qos == &cpu_latency_constraints;
>  }
>  EXPORT_SYMBOL_GPL(pm_qos_request_active);
>
> @@ -278,7 +280,7 @@ void pm_qos_add_request(struct pm_qos_request *req,
>
>         trace_pm_qos_add_request(PM_QOS_CPU_DMA_LATENCY, value);
>
> -       req->qos = &cpu_dma_constraints;
> +       req->qos = &cpu_latency_constraints;
>         cpu_latency_qos_update(req, PM_QOS_ADD_REQ, value);
>  }
>  EXPORT_SYMBOL_GPL(pm_qos_add_request);
> @@ -338,9 +340,9 @@ void pm_qos_remove_request(struct pm_qos_request *req)
>  }
>  EXPORT_SYMBOL_GPL(pm_qos_remove_request);
>
> -/* User space interface to global PM QoS via misc device. */
> +/* User space interface to the CPU latency QoS via misc device. */
>
> -static int pm_qos_power_open(struct inode *inode, struct file *filp)
> +static int cpu_latency_qos_open(struct inode *inode, struct file *filp)
>  {
>         struct pm_qos_request *req;
>
> @@ -354,7 +356,7 @@ static int pm_qos_power_open(struct inode *inode, struct file *filp)
>         return 0;
>  }
>
> -static int pm_qos_power_release(struct inode *inode, struct file *filp)
> +static int cpu_latency_qos_release(struct inode *inode, struct file *filp)
>  {
>         struct pm_qos_request *req = filp->private_data;
>
> @@ -366,8 +368,8 @@ static int pm_qos_power_release(struct inode *inode, struct file *filp)
>         return 0;
>  }
>
> -static ssize_t pm_qos_power_read(struct file *filp, char __user *buf,
> -                                size_t count, loff_t *f_pos)
> +static ssize_t cpu_latency_qos_read(struct file *filp, char __user *buf,
> +                                   size_t count, loff_t *f_pos)
>  {
>         struct pm_qos_request *req = filp->private_data;
>         unsigned long flags;
> @@ -377,14 +379,14 @@ static ssize_t pm_qos_power_read(struct file *filp, char __user *buf,
>                 return -EINVAL;
>
>         spin_lock_irqsave(&pm_qos_lock, flags);
> -       value = pm_qos_get_value(&cpu_dma_constraints);
> +       value = pm_qos_get_value(&cpu_latency_constraints);
>         spin_unlock_irqrestore(&pm_qos_lock, flags);
>
>         return simple_read_from_buffer(buf, count, f_pos, &value, sizeof(s32));
>  }
>
> -static ssize_t pm_qos_power_write(struct file *filp, const char __user *buf,
> -                                 size_t count, loff_t *f_pos)
> +static ssize_t cpu_latency_qos_write(struct file *filp, const char __user *buf,
> +                                    size_t count, loff_t *f_pos)
>  {
>         s32 value;
>
> @@ -404,21 +406,21 @@ static ssize_t pm_qos_power_write(struct file *filp, const char __user *buf,
>         return count;
>  }
>
> -static const struct file_operations pm_qos_power_fops = {
> -       .write = pm_qos_power_write,
> -       .read = pm_qos_power_read,
> -       .open = pm_qos_power_open,
> -       .release = pm_qos_power_release,
> +static const struct file_operations cpu_latency_qos_fops = {
> +       .write = cpu_latency_qos_write,
> +       .read = cpu_latency_qos_read,
> +       .open = cpu_latency_qos_open,
> +       .release = cpu_latency_qos_release,
>         .llseek = noop_llseek,
>  };
>
>  static struct miscdevice cpu_latency_qos_miscdev = {
>         .minor = MISC_DYNAMIC_MINOR,
>         .name = "cpu_dma_latency",
> -       .fops = &pm_qos_power_fops,
> +       .fops = &cpu_latency_qos_fops,
>  };
>
> -static int __init pm_qos_power_init(void)
> +static int __init cpu_latency_qos_init(void)
>  {
>         int ret;
>
> @@ -429,7 +431,7 @@ static int __init pm_qos_power_init(void)
>
>         return ret;
>  }
> -late_initcall(pm_qos_power_init);
> +late_initcall(cpu_latency_qos_init);
>
>  /* Definitions related to the frequency QoS below. */
>
> --
> 2.16.4

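[Editorial note: the miscdev handlers renamed in the patch above are also the user space ABI: a process opens /dev/cpu_dma_latency, writes a latency bound (a native s32 in microseconds, or its ASCII decimal form), and the constraint stays in force until the file descriptor is closed, at which point the release handler removes the request. A hedged Python sketch of that usage; the helper names below are ours, actually opening the device needs root, and after the last patch in this series the device is only present when CPU_IDLE is enabled:

```python
import struct

def pack_latency_us(value_us: int) -> bytes:
    """Encode a latency bound as the native-endian s32 that the
    cpu_latency_qos_write() handler accepts when count == sizeof(s32)."""
    return struct.pack("=i", value_us)

def hold_cpu_latency(value_us: int):
    """Open the misc device and write a bound; the constraint lasts only
    as long as the returned file object stays open (closing it triggers
    the request removal in the release handler)."""
    f = open("/dev/cpu_dma_latency", "wb", buffering=0)
    f.write(pack_latency_us(value_us))
    return f  # caller closes it to drop the request
```

For example, `fd = hold_cpu_latency(0)` would keep the CPUs out of deep idle states until `fd.close()`.]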

* Re: [PATCH 22/28] drivers: tty: Call cpu_latency_qos_*() instead of pm_qos_*()
  2020-02-11 23:27 ` [PATCH 22/28] drivers: tty: " Rafael J. Wysocki
@ 2020-02-12 10:35   ` Rafael J. Wysocki
  2020-02-12 19:13   ` Greg Kroah-Hartman
  1 sibling, 0 replies; 74+ messages in thread
From: Rafael J. Wysocki @ 2020-02-12 10:35 UTC (permalink / raw)
  To: linux-serial
  Cc: Linux PM, LKML, Amit Kucheria, Greg Kroah-Hartman, Rafael J. Wysocki

On Wed, Feb 12, 2020 at 12:40 AM Rafael J. Wysocki <rjw@rjwysocki.net> wrote:
>
> From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
>
> Call cpu_latency_qos_add/update/remove_request() instead of
> pm_qos_add/update/remove_request(), respectively, because the
> latter are going to be dropped.
>
> No intentional functional impact.
>
> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

Please note that the whole series is available here:

https://lore.kernel.org/linux-pm/1654227.8mz0SueHsU@kreacher/

> ---
>  drivers/tty/serial/8250/8250_omap.c | 7 +++----
>  drivers/tty/serial/omap-serial.c    | 9 ++++-----
>  2 files changed, 7 insertions(+), 9 deletions(-)
>
> diff --git a/drivers/tty/serial/8250/8250_omap.c b/drivers/tty/serial/8250/8250_omap.c
> index 19f8d2f9e7ba..76fe72bfb8bb 100644
> --- a/drivers/tty/serial/8250/8250_omap.c
> +++ b/drivers/tty/serial/8250/8250_omap.c
> @@ -569,7 +569,7 @@ static void omap8250_uart_qos_work(struct work_struct *work)
>         struct omap8250_priv *priv;
>
>         priv = container_of(work, struct omap8250_priv, qos_work);
> -       pm_qos_update_request(&priv->pm_qos_request, priv->latency);
> +       cpu_latency_qos_update_request(&priv->pm_qos_request, priv->latency);
>  }
>
>  #ifdef CONFIG_SERIAL_8250_DMA
> @@ -1224,8 +1224,7 @@ static int omap8250_probe(struct platform_device *pdev)
>
>         priv->latency = PM_QOS_CPU_LATENCY_DEFAULT_VALUE;
>         priv->calc_latency = PM_QOS_CPU_LATENCY_DEFAULT_VALUE;
> -       pm_qos_add_request(&priv->pm_qos_request, PM_QOS_CPU_DMA_LATENCY,
> -                          priv->latency);
> +       cpu_latency_qos_add_request(&priv->pm_qos_request, priv->latency);
>         INIT_WORK(&priv->qos_work, omap8250_uart_qos_work);
>
>         spin_lock_init(&priv->rx_dma_lock);
> @@ -1295,7 +1294,7 @@ static int omap8250_remove(struct platform_device *pdev)
>         pm_runtime_put_sync(&pdev->dev);
>         pm_runtime_disable(&pdev->dev);
>         serial8250_unregister_port(priv->line);
> -       pm_qos_remove_request(&priv->pm_qos_request);
> +       cpu_latency_qos_remove_request(&priv->pm_qos_request);
>         device_init_wakeup(&pdev->dev, false);
>         return 0;
>  }
> diff --git a/drivers/tty/serial/omap-serial.c b/drivers/tty/serial/omap-serial.c
> index ce2558767eee..e0b720ac754b 100644
> --- a/drivers/tty/serial/omap-serial.c
> +++ b/drivers/tty/serial/omap-serial.c
> @@ -831,7 +831,7 @@ static void serial_omap_uart_qos_work(struct work_struct *work)
>         struct uart_omap_port *up = container_of(work, struct uart_omap_port,
>                                                 qos_work);
>
> -       pm_qos_update_request(&up->pm_qos_request, up->latency);
> +       cpu_latency_qos_update_request(&up->pm_qos_request, up->latency);
>  }
>
>  static void
> @@ -1724,8 +1724,7 @@ static int serial_omap_probe(struct platform_device *pdev)
>
>         up->latency = PM_QOS_CPU_LATENCY_DEFAULT_VALUE;
>         up->calc_latency = PM_QOS_CPU_LATENCY_DEFAULT_VALUE;
> -       pm_qos_add_request(&up->pm_qos_request,
> -               PM_QOS_CPU_DMA_LATENCY, up->latency);
> +       cpu_latency_qos_add_request(&up->pm_qos_request, up->latency);
>         INIT_WORK(&up->qos_work, serial_omap_uart_qos_work);
>
>         platform_set_drvdata(pdev, up);
> @@ -1759,7 +1758,7 @@ static int serial_omap_probe(struct platform_device *pdev)
>         pm_runtime_dont_use_autosuspend(&pdev->dev);
>         pm_runtime_put_sync(&pdev->dev);
>         pm_runtime_disable(&pdev->dev);
> -       pm_qos_remove_request(&up->pm_qos_request);
> +       cpu_latency_qos_remove_request(&up->pm_qos_request);
>         device_init_wakeup(up->dev, false);
>  err_rs485:
>  err_port_line:
> @@ -1777,7 +1776,7 @@ static int serial_omap_remove(struct platform_device *dev)
>         pm_runtime_dont_use_autosuspend(up->dev);
>         pm_runtime_put_sync(up->dev);
>         pm_runtime_disable(up->dev);
> -       pm_qos_remove_request(&up->pm_qos_request);
> +       cpu_latency_qos_remove_request(&up->pm_qos_request);
>         device_init_wakeup(&dev->dev, false);
>
>         return 0;
> --
> 2.16.4


* Re: [PATCH 23/28] drivers: usb: Call cpu_latency_qos_*() instead of pm_qos_*()
  2020-02-11 23:28 ` [PATCH 23/28] drivers: usb: " Rafael J. Wysocki
@ 2020-02-12 18:38   ` Greg KH
  2020-02-18  8:03     ` Peter Chen
  2020-02-19  1:09   ` Peter Chen
  1 sibling, 1 reply; 74+ messages in thread
From: Greg KH @ 2020-02-12 18:38 UTC (permalink / raw)
  To: Rafael J. Wysocki; +Cc: Linux PM, LKML, Amit Kucheria, Peter Chen, linux-usb

On Wed, Feb 12, 2020 at 12:28:44AM +0100, Rafael J. Wysocki wrote:
> From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
> 
> Call cpu_latency_qos_add/remove_request() instead of
> pm_qos_add/remove_request(), respectively, because the
> latter are going to be dropped.
> 
> No intentional functional impact.
> 
> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> ---
>  drivers/usb/chipidea/ci_hdrc_imx.c | 12 +++++-------
>  1 file changed, 5 insertions(+), 7 deletions(-)

Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>


* Re: [PATCH 10/28] PM: QoS: Rename things related to the CPU latency QoS
  2020-02-11 23:04 ` [PATCH 10/28] PM: QoS: Rename things related to the CPU latency QoS Rafael J. Wysocki
  2020-02-12 10:34   ` Rafael J. Wysocki
@ 2020-02-12 19:13   ` Greg Kroah-Hartman
  1 sibling, 0 replies; 74+ messages in thread
From: Greg Kroah-Hartman @ 2020-02-12 19:13 UTC (permalink / raw)
  To: Rafael J. Wysocki; +Cc: Linux PM, LKML, Amit Kucheria, linux-serial

On Wed, Feb 12, 2020 at 12:04:31AM +0100, Rafael J. Wysocki wrote:
> From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
> 
> First, rename PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE to
> PM_QOS_CPU_LATENCY_DEFAULT_VALUE and update all of the code
> referring to it accordingly.
> 
> Next, rename cpu_dma_constraints to cpu_latency_constraints, move
> the definition of it closer to the functions referring to it and
> update all of them accordingly.  [While at it, add a comment to mark
> the start of the code related to the CPU latency QoS.]
> 
> Finally, rename the pm_qos_power_*() family of functions and
> pm_qos_power_fops to cpu_latency_qos_*() and cpu_latency_qos_fops,
> respectively, and update the definition of cpu_latency_qos_miscdev.
> [While at it, update the miscdev interface code start comment.]
> 
> No intentional functional impact.
> 
> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> ---
>  drivers/tty/serial/8250/8250_omap.c |  6 ++--
>  drivers/tty/serial/omap-serial.c    |  6 ++--
>  include/linux/pm_qos.h              |  2 +-
>  kernel/power/qos.c                  | 56 +++++++++++++++++++------------------
>  4 files changed, 36 insertions(+), 34 deletions(-)

Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>


* Re: [PATCH 22/28] drivers: tty: Call cpu_latency_qos_*() instead of pm_qos_*()
  2020-02-11 23:27 ` [PATCH 22/28] drivers: tty: " Rafael J. Wysocki
  2020-02-12 10:35   ` Rafael J. Wysocki
@ 2020-02-12 19:13   ` Greg Kroah-Hartman
  1 sibling, 0 replies; 74+ messages in thread
From: Greg Kroah-Hartman @ 2020-02-12 19:13 UTC (permalink / raw)
  To: Rafael J. Wysocki; +Cc: Linux PM, LKML, Amit Kucheria, linux-serial

On Wed, Feb 12, 2020 at 12:27:04AM +0100, Rafael J. Wysocki wrote:
> From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
> 
> Call cpu_latency_qos_add/update/remove_request() instead of
> pm_qos_add/update/remove_request(), respectively, because the
> latter are going to be dropped.
> 
> No intentional functional impact.
> 
> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> ---
>  drivers/tty/serial/8250/8250_omap.c | 7 +++----
>  drivers/tty/serial/omap-serial.c    | 9 ++++-----
>  2 files changed, 7 insertions(+), 9 deletions(-)

Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>


* Re: [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface
  2020-02-11 22:51 [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface Rafael J. Wysocki
                   ` (29 preceding siblings ...)
  2020-02-12  9:39 ` Rafael J. Wysocki
@ 2020-02-12 23:32 ` Francisco Jerez
  2020-02-13  0:16   ` Rafael J. Wysocki
  2020-02-13  7:10 ` Amit Kucheria
  31 siblings, 1 reply; 74+ messages in thread
From: Francisco Jerez @ 2020-02-12 23:32 UTC (permalink / raw)
  To: Rafael J. Wysocki, Linux PM
  Cc: LKML, Amit Kucheria, Pandruvada, Srinivas, Rodrigo Vivi, Peter Zijlstra


"Rafael J. Wysocki" <rjw@rjwysocki.net> writes:

> Hi All,
>
> This series of patches is based on the observation that after commit
> c3082a674f46 ("PM: QoS: Get rid of unused flags") the only global PM QoS class
> in use is PM_QOS_CPU_DMA_LATENCY, but there is still a significant amount of
> code dedicated to the handling of global PM QoS classes in general.  That code
> takes up space and adds overhead in vain, so it is better to get rid of it.
>
> Moreover, with that unuseful code removed, the interface for adding QoS
> requests for CPU latency becomes inelegant and confusing, so it is better to
> clean it up.
>
> Patches [01/28-12/28] do the first part described above, which also includes
> some assorted cleanups of the core PM QoS code that doesn't go away.
>
> Patches [13/28-25/28] rework the CPU latency QoS interface (in the classic
> "define stubs, migrate users, change the API proper" manner), patches
> [26-27/28] update the general comments and documentation to match the code
> after the previous changes and the last one makes the CPU latency QoS depend
> on CPU_IDLE (because cpuidle is the only user of its target value today).
>
> The majority of the patches in this series don't change the functionality of
> the code at all (at least not intentionally).
>
> Please refer to the changelogs of individual patches for details.
>
> Thanks!

Hi Rafael,

I believe some of the interfaces removed here could be useful in the
near future.  It goes back to the energy efficiency- (and IGP graphics
performance-)improving series I submitted a while ago [1].  It relies on
some mechanism for the graphics driver to report an I/O bottleneck to
CPUFREQ, allowing it to make a more conservative trade-off between
energy efficiency and latency, which can greatly reduce the CPU package
energy usage of IO-bound applications (in some graphics benchmarks I've
seen it reduced by over 40% on my ICL laptop), and therefore also allows
TDP-bound applications to obtain a reciprocal improvement in throughput.

I'm not particularly fond of the global PM QoS interfaces TBH, it seems
like an excessively blunt hammer to me, so I can very much relate to the
purpose of this series.  However the finer-grained solution I've
implemented has seen some push-back from i915 and CPUFREQ devs due to
its complexity, since it relies on task scheduler changes in order to
track IO bottlenecks per-process (roughly as suggested by Peter Zijlstra
during our previous discussions), pretty much in the spirit of PELT but
applied to IO utilization.

With that in mind I was hoping we could take advantage of PM QoS as a
temporary solution [2], by introducing a global PM QoS class similar but
with roughly converse semantics to PM_QOS_CPU_DMA_LATENCY, allowing
device drivers to report a *lower* bound on CPU latency beyond which PM
shall not bother to reduce latency if doing so would have negative
consequences on the energy efficiency and/or parallelism of the system.
Of course one would expect the current PM_QOS_CPU_DMA_LATENCY upper
bound to take precedence over the new lower bound in cases where the
former is in conflict with the latter.

I can think of several alternatives to that which don't involve
temporarily holding off your clean-up, but none of them sound
particularly exciting:

 1/ Use an interface specific to CPUFREQ, pretty much like the one
    introduced in my original submission [1].
 
 2/ Use per-CPU PM QoS, which AFAICT would require the graphics driver
    to either place a request on every CPU of the system (which would
    cause a frequent operation to have O(N) complexity on the number of
    CPUs on the system), or play a cat-and-mouse game with the task
    scheduler.
 
 3/ Add a new global PM QoS mechanism roughly duplicating the
    cpu_latency_qos_* interfaces introduced in this series.  Drop your
    change making this available to CPU IDLE only.
 
 4/ Go straight to a scheduling-based approach, which is likely to
    greatly increase the review effort required to upstream this
    feature.  (Peter might disagree though?)
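The precedence rule described above can be sketched as a tiny model. To be clear about what is assumed here: the min-aggregation of PM_QOS_CPU_DMA_LATENCY requests matches existing kernel behavior, while the max-aggregated "floor" class and all function names are hypothetical:

```c
#include <assert.h>
#include <stdint.h>

/* "No constraint" default of the existing class (2000 s in microseconds). */
#define CPU_LATENCY_DEFAULT ((int32_t)2000 * 1000 * 1000)

/* Existing semantics: the effective cap is the minimum of all requests. */
static int32_t latency_cap(const int32_t *req, int n)
{
	int32_t v = CPU_LATENCY_DEFAULT;

	for (int i = 0; i < n; i++)
		if (req[i] < v)
			v = req[i];
	return v;
}

/* Proposed converse class: the effective floor is the maximum request. */
static int32_t latency_floor(const int32_t *req, int n)
{
	int32_t v = 0;

	for (int i = 0; i < n; i++)
		if (req[i] > v)
			v = req[i];
	return v;
}

/* The established cap takes precedence when the two bounds conflict. */
static int32_t effective_floor(int32_t cap, int32_t floor)
{
	return floor > cap ? cap : floor;
}
```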

Regards,
Francisco.

[1] https://lore.kernel.org/linux-pm/20180328063845.4884-1-currojerez@riseup.net/

[2] I've written the code to do this already, but I wasn't able to test
    and debug it extensively until this week due to the instability of
    i915 on recent v5.5 kernels that prevented any benchmark run from
    surviving more than a few hours on my ICL system, hopefully the
    required i915 fixes will flow back to stable branches soon enough.



* Re: [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface
  2020-02-12 23:32 ` Francisco Jerez
@ 2020-02-13  0:16   ` Rafael J. Wysocki
  2020-02-13  0:37     ` Rafael J. Wysocki
  2020-02-13  8:07     ` Francisco Jerez
  0 siblings, 2 replies; 74+ messages in thread
From: Rafael J. Wysocki @ 2020-02-13  0:16 UTC (permalink / raw)
  To: Francisco Jerez
  Cc: Rafael J. Wysocki, Linux PM, LKML, Amit Kucheria, Pandruvada,
	Srinivas, Rodrigo Vivi, Peter Zijlstra

On Thu, Feb 13, 2020 at 12:31 AM Francisco Jerez <currojerez@riseup.net> wrote:
>
> "Rafael J. Wysocki" <rjw@rjwysocki.net> writes:
>
> > Hi All,
> >
> > This series of patches is based on the observation that after commit
> > c3082a674f46 ("PM: QoS: Get rid of unused flags") the only global PM QoS class
> > in use is PM_QOS_CPU_DMA_LATENCY, but there is still a significant amount of
> > code dedicated to the handling of global PM QoS classes in general.  That code
> > takes up space and adds overhead in vain, so it is better to get rid of it.
> >
> > Moreover, with that unuseful code removed, the interface for adding QoS
> > requests for CPU latency becomes inelegant and confusing, so it is better to
> > clean it up.
> >
> > Patches [01/28-12/28] do the first part described above, which also includes
> > some assorted cleanups of the core PM QoS code that doesn't go away.
> >
> > Patches [13/28-25/28] rework the CPU latency QoS interface (in the classic
> > "define stubs, migrate users, change the API proper" manner), patches
> > [26-27/28] update the general comments and documentation to match the code
> > after the previous changes and the last one makes the CPU latency QoS depend
> > on CPU_IDLE (because cpuidle is the only user of its target value today).
> >
> > The majority of the patches in this series don't change the functionality of
> > the code at all (at least not intentionally).
> >
> > Please refer to the changelogs of individual patches for details.
> >
> > Thanks!
>
> Hi Rafael,
>
> I believe some of the interfaces removed here could be useful in the
> near future.

I disagree.

>  It goes back to the energy efficiency- (and IGP graphics
> performance-)improving series I submitted a while ago [1].  It relies on
> some mechanism for the graphics driver to report an I/O bottleneck to
> CPUFREQ, allowing it to make a more conservative trade-off between
> energy efficiency and latency, which can greatly reduce the CPU package
> energy usage of IO-bound applications (in some graphics benchmarks I've
> seen it reduced by over 40% on my ICL laptop), and therefore also allows
> TDP-bound applications to obtain a reciprocal improvement in throughput.
>
> I'm not particularly fond of the global PM QoS interfaces TBH, it seems
> like an excessively blunt hammer to me, so I can very much relate to the
> purpose of this series.  However the finer-grained solution I've
> implemented has seen some push-back from i915 and CPUFREQ devs due to
> its complexity, since it relies on task scheduler changes in order to
> track IO bottlenecks per-process (roughly as suggested by Peter Zijlstra
> during our previous discussions), pretty much in the spirit of PELT but
> applied to IO utilization.
>
> With that in mind I was hoping we could take advantage of PM QoS as a
> temporary solution [2], by introducing a global PM QoS class similar but
> with roughly converse semantics to PM_QOS_CPU_DMA_LATENCY, allowing
> device drivers to report a *lower* bound on CPU latency beyond which PM
> shall not bother to reduce latency if doing so would have negative
> consequences on the energy efficiency and/or parallelism of the system.

So I really don't quite see how that could be responded to, by cpuidle
say.  What exactly do you mean by "reducing latency" in particular?

> Of course one would expect the current PM_QOS_CPU_DMA_LATENCY upper
> bound to take precedence over the new lower bound in cases where the
> former is in conflict with the latter.

So that needs to be done on top of this series.

> I can think of several alternatives to that which don't involve
> temporarily holding off your clean-up,

The cleanup goes in.  Please work on top of it.

> but none of them sound particularly exciting:
>
>  1/ Use an interface specific to CPUFREQ, pretty much like the one
>     introduced in my original submission [1].

It uses frequency QoS already today, do you really need something else?

>  2/ Use per-CPU PM QoS, which AFAICT would require the graphics driver
>     to either place a request on every CPU of the system (which would
>     cause a frequent operation to have O(N) complexity on the number of
>     CPUs on the system), or play a cat-and-mouse game with the task
>     scheduler.

That's in place already too in the form of device PM QoS; see
drivers/base/power/qos.c.

>  3/ Add a new global PM QoS mechanism roughly duplicating the
>     cpu_latency_qos_* interfaces introduced in this series.  Drop your
>     change making this available to CPU IDLE only.

It sounds like you really want performance for energy efficiency and
CPU latency has little to do with that.

>  4/ Go straight to a scheduling-based approach, which is likely to
>     greatly increase the review effort required to upstream this
>     feature.  (Peter might disagree though?)

Are you familiar with the utilization clamps mechanism?

Thanks!
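For readers unfamiliar with the mechanism mentioned above: utilization clamps let a task bound the utilization signal frequency governors see for it. A minimal user-space sketch via sched_setattr(); the struct mirrors the kernel uapi layout up to the clamp fields, and whether a given kernel honors the call depends on CONFIG_UCLAMP_TASK:

```c
#define _GNU_SOURCE
#include <assert.h>
#include <stdint.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

/* struct sched_attr up to the utilization-clamp fields, mirroring the
 * kernel uapi layout (SCHED_ATTR_SIZE_VER1 == 56 bytes). */
struct sched_attr {
	uint32_t size;
	uint32_t sched_policy;
	uint64_t sched_flags;
	int32_t  sched_nice;
	uint32_t sched_priority;
	uint64_t sched_runtime;
	uint64_t sched_deadline;
	uint64_t sched_period;
	uint32_t sched_util_min;
	uint32_t sched_util_max;
};

#define SCHED_FLAG_KEEP_POLICY    0x08
#define SCHED_FLAG_KEEP_PARAMS    0x10
#define SCHED_FLAG_UTIL_CLAMP_MAX 0x40

/* Cap the utilization the scheduler reports for the calling task to
 * util_max (scale 0..1024); frequency governors then ramp up less
 * aggressively on its behalf.  Returns the raw syscall result. */
static long set_util_clamp_max(uint32_t util_max)
{
	struct sched_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.sched_flags = SCHED_FLAG_KEEP_POLICY | SCHED_FLAG_KEEP_PARAMS |
			   SCHED_FLAG_UTIL_CLAMP_MAX;
	attr.sched_util_max = util_max;

	return syscall(SYS_sched_setattr, 0, &attr, 0);
}
```

There is no glibc wrapper for sched_setattr(), hence the raw syscall.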


* Re: [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface
  2020-02-13  0:16   ` Rafael J. Wysocki
@ 2020-02-13  0:37     ` Rafael J. Wysocki
  2020-02-13  8:10       ` Francisco Jerez
  2020-02-13  8:07     ` Francisco Jerez
  1 sibling, 1 reply; 74+ messages in thread
From: Rafael J. Wysocki @ 2020-02-13  0:37 UTC (permalink / raw)
  To: Francisco Jerez
  Cc: Rafael J. Wysocki, Linux PM, LKML, Amit Kucheria, Pandruvada,
	Srinivas, Rodrigo Vivi, Peter Zijlstra

On Thu, Feb 13, 2020 at 1:16 AM Rafael J. Wysocki <rafael@kernel.org> wrote:
>
> On Thu, Feb 13, 2020 at 12:31 AM Francisco Jerez <currojerez@riseup.net> wrote:
> >
> > "Rafael J. Wysocki" <rjw@rjwysocki.net> writes:
> >
> > > Hi All,
> > >
> > > This series of patches is based on the observation that after commit
> > > c3082a674f46 ("PM: QoS: Get rid of unused flags") the only global PM QoS class
> > > in use is PM_QOS_CPU_DMA_LATENCY, but there is still a significant amount of
> > > code dedicated to the handling of global PM QoS classes in general.  That code
> > > takes up space and adds overhead in vain, so it is better to get rid of it.
> > >
> > > Moreover, with that unuseful code removed, the interface for adding QoS
> > > requests for CPU latency becomes inelegant and confusing, so it is better to
> > > clean it up.
> > >
> > > Patches [01/28-12/28] do the first part described above, which also includes
> > > some assorted cleanups of the core PM QoS code that doesn't go away.
> > >
> > > Patches [13/28-25/28] rework the CPU latency QoS interface (in the classic
> > > "define stubs, migrate users, change the API proper" manner), patches
> > > [26-27/28] update the general comments and documentation to match the code
> > > after the previous changes and the last one makes the CPU latency QoS depend
> > > on CPU_IDLE (because cpuidle is the only user of its target value today).
> > >
> > > The majority of the patches in this series don't change the functionality of
> > > the code at all (at least not intentionally).
> > >
> > > Please refer to the changelogs of individual patches for details.
> > >
> > > Thanks!
> >
> > Hi Rafael,
> >
> > I believe some of the interfaces removed here could be useful in the
> > near future.
>
> I disagree.
>
> >  It goes back to the energy efficiency- (and IGP graphics
> > performance-)improving series I submitted a while ago [1].  It relies on
> > some mechanism for the graphics driver to report an I/O bottleneck to
> > CPUFREQ, allowing it to make a more conservative trade-off between
> > energy efficiency and latency, which can greatly reduce the CPU package
> > energy usage of IO-bound applications (in some graphics benchmarks I've
> > seen it reduced by over 40% on my ICL laptop), and therefore also allows
> > TDP-bound applications to obtain a reciprocal improvement in throughput.
> >
> > I'm not particularly fond of the global PM QoS interfaces TBH, it seems
> > like an excessively blunt hammer to me, so I can very much relate to the
> > purpose of this series.  However the finer-grained solution I've
> > implemented has seen some push-back from i915 and CPUFREQ devs due to
> > its complexity, since it relies on task scheduler changes in order to
> > track IO bottlenecks per-process (roughly as suggested by Peter Zijlstra
> > during our previous discussions), pretty much in the spirit of PELT but
> > applied to IO utilization.
> >
> > With that in mind I was hoping we could take advantage of PM QoS as a
> > temporary solution [2], by introducing a global PM QoS class similar but
> > with roughly converse semantics to PM_QOS_CPU_DMA_LATENCY, allowing
> > device drivers to report a *lower* bound on CPU latency beyond which PM
> > shall not bother to reduce latency if doing so would have negative
> > consequences on the energy efficiency and/or parallelism of the system.
>
> So I really don't quite see how that could be responded to, by cpuidle
> say.  What exactly do you mean by "reducing latency" in particular?
>
> > Of course one would expect the current PM_QOS_CPU_DMA_LATENCY upper
> > bound to take precedence over the new lower bound in cases where the
> > former is in conflict with the latter.
>
> So that needs to be done on top of this series.
>
> > I can think of several alternatives to that which don't involve
> > temporarily holding off your clean-up,
>
> The cleanup goes in.  Please work on top of it.
>
> > but none of them sound particularly exciting:
> >
> >  1/ Use an interface specific to CPUFREQ, pretty much like the one
> >     introduced in my original submission [1].
>
> It uses frequency QoS already today, do you really need something else?
>
> >  2/ Use per-CPU PM QoS, which AFAICT would require the graphics driver
> >     to either place a request on every CPU of the system (which would
> >     cause a frequent operation to have O(N) complexity on the number of
> >     CPUs on the system), or play a cat-and-mouse game with the task
> >     scheduler.
>
> That's in place already too in the form of device PM QoS; see
> drivers/base/power/qos.c.
>
> >  3/ Add a new global PM QoS mechanism roughly duplicating the
> >     cpu_latency_qos_* interfaces introduced in this series.  Drop your
> >     change making this available to CPU IDLE only.
>
> It sounds like you really want performance for energy efficiency and
> CPU latency has little to do with that.
>
> >  4/ Go straight to a scheduling-based approach, which is likely to
> >     greatly increase the review effort required to upstream this
> >     feature.  (Peter might disagree though?)
>
> Are you familiar with the utilization clamps mechanism?

And BTW, posting patches as RFC is fine even if they have not been
tested.  At least you let people know that you work on something this
way, so if they work on changes in the same area, they may take that
into consideration.

Also if there are objections to your proposal, you may save quite a
bit of time by sending it early.

It is unfortunate that this series has clashed with the changes that
you were about to propose, but in this particular case in my view it
is better to clean up things and start over.

Thanks!


* Re: [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface
  2020-02-11 22:51 [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface Rafael J. Wysocki
                   ` (30 preceding siblings ...)
  2020-02-12 23:32 ` Francisco Jerez
@ 2020-02-13  7:10 ` Amit Kucheria
  2020-02-13 10:17   ` Rafael J. Wysocki
  31 siblings, 1 reply; 74+ messages in thread
From: Amit Kucheria @ 2020-02-13  7:10 UTC (permalink / raw)
  To: Rafael J. Wysocki; +Cc: Linux PM, LKML

On Wed, Feb 12, 2020 at 5:09 AM Rafael J. Wysocki <rjw@rjwysocki.net> wrote:
>
> Hi All,
>
> This series of patches is based on the observation that after commit
> c3082a674f46 ("PM: QoS: Get rid of unused flags") the only global PM QoS class
> in use is PM_QOS_CPU_DMA_LATENCY, but there is still a significant amount of
> code dedicated to the handling of global PM QoS classes in general.  That code
> takes up space and adds overhead in vain, so it is better to get rid of it.
>
> Moreover, with that unuseful code removed, the interface for adding QoS
> requests for CPU latency becomes inelegant and confusing, so it is better to
> clean it up.
>
> Patches [01/28-12/28] do the first part described above, which also includes
> some assorted cleanups of the core PM QoS code that doesn't go away.
>
> Patches [13/28-25/28] rework the CPU latency QoS interface (in the classic
> "define stubs, migrate users, change the API proper" manner), patches
> [26-27/28] update the general comments and documentation to match the code
> after the previous changes and the last one makes the CPU latency QoS depend
> on CPU_IDLE (because cpuidle is the only user of its target value today).
>
> The majority of the patches in this series don't change the functionality of
> the code at all (at least not intentionally).
>
> Please refer to the changelogs of individual patches for details.

Hi Rafael,

Nice cleanup to the code and docs.

I've reviewed the series, and briefly tested it by setting latencies
from userspace. Can we not remove the debugfs interface? It is a quick
way to check the global cpu latency clamp on the system from userspace
without setting up tracepoints or writing a program to read
/dev/cpu_dma_latency.
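For the record, the /dev/cpu_dma_latency usage mentioned above boils down to holding an open descriptor; a minimal helper follows. The path parameter exists only so the write path can be exercised against a regular file, the real device is /dev/cpu_dma_latency:

```c
#include <assert.h>
#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>

/* Hold a CPU wake-up latency request of `usec` microseconds by keeping
 * the returned descriptor open; the kernel drops the request when the
 * descriptor is closed.  A write of exactly sizeof(int32_t) bytes is
 * taken by the kernel as a binary value (an ASCII number is accepted
 * too), and reading the device returns the current 32-bit target. */
static int hold_cpu_latency(const char *path, int32_t usec)
{
	int fd = open(path, O_RDWR | O_CREAT, 0600);

	if (fd < 0)
		return -1;
	if (write(fd, &usec, sizeof(usec)) != (ssize_t)sizeof(usec)) {
		close(fd);
		return -1;
	}
	return fd;	/* keep open for as long as the constraint is needed */
}
```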

Except for patch 01/28 removing the debugfs interface, please feel free to add my

Reviewed-by: Amit Kucheria <amit.kucheria@linaro.org>
Tested-by: Amit Kucheria <amit.kucheria@linaro.org>

Regards,
Amit


* Re: [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface
  2020-02-13  0:16   ` Rafael J. Wysocki
  2020-02-13  0:37     ` Rafael J. Wysocki
@ 2020-02-13  8:07     ` Francisco Jerez
  2020-02-13 11:34       ` Rafael J. Wysocki
  1 sibling, 1 reply; 74+ messages in thread
From: Francisco Jerez @ 2020-02-13  8:07 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Rafael J. Wysocki, Linux PM, LKML, Amit Kucheria, Pandruvada,
	Srinivas, Rodrigo Vivi, Peter Zijlstra


"Rafael J. Wysocki" <rafael@kernel.org> writes:

> On Thu, Feb 13, 2020 at 12:31 AM Francisco Jerez <currojerez@riseup.net> wrote:
>>
>> "Rafael J. Wysocki" <rjw@rjwysocki.net> writes:
>>
>> > Hi All,
>> >
>> > This series of patches is based on the observation that after commit
>> > c3082a674f46 ("PM: QoS: Get rid of unused flags") the only global PM QoS class
>> > in use is PM_QOS_CPU_DMA_LATENCY, but there is still a significant amount of
>> > code dedicated to the handling of global PM QoS classes in general.  That code
>> > takes up space and adds overhead in vain, so it is better to get rid of it.
>> >
>> > Moreover, with that unuseful code removed, the interface for adding QoS
>> > requests for CPU latency becomes inelegant and confusing, so it is better to
>> > clean it up.
>> >
>> > Patches [01/28-12/28] do the first part described above, which also includes
>> > some assorted cleanups of the core PM QoS code that doesn't go away.
>> >
>> > Patches [13/28-25/28] rework the CPU latency QoS interface (in the classic
>> > "define stubs, migrate users, change the API proper" manner), patches
>> > [26-27/28] update the general comments and documentation to match the code
>> > after the previous changes and the last one makes the CPU latency QoS depend
>> > on CPU_IDLE (because cpuidle is the only user of its target value today).
>> >
>> > The majority of the patches in this series don't change the functionality of
>> > the code at all (at least not intentionally).
>> >
>> > Please refer to the changelogs of individual patches for details.
>> >
>> > Thanks!
>>
>> Hi Rafael,
>>
>> I believe some of the interfaces removed here could be useful in the
>> near future.
>
> I disagree.
>
>>  It goes back to the energy efficiency- (and IGP graphics
>> performance-)improving series I submitted a while ago [1].  It relies on
>> some mechanism for the graphics driver to report an I/O bottleneck to
>> CPUFREQ, allowing it to make a more conservative trade-off between
>> energy efficiency and latency, which can greatly reduce the CPU package
>> energy usage of IO-bound applications (in some graphics benchmarks I've
>> seen it reduced by over 40% on my ICL laptop), and therefore also allows
>> TDP-bound applications to obtain a reciprocal improvement in throughput.
>>
>> I'm not particularly fond of the global PM QoS interfaces TBH, it seems
>> like an excessively blunt hammer to me, so I can very much relate to the
>> purpose of this series.  However the finer-grained solution I've
>> implemented has seen some push-back from i915 and CPUFREQ devs due to
>> its complexity, since it relies on task scheduler changes in order to
>> track IO bottlenecks per-process (roughly as suggested by Peter Zijlstra
>> during our previous discussions), pretty much in the spirit of PELT but
>> applied to IO utilization.
>>
>> With that in mind I was hoping we could take advantage of PM QoS as a
>> temporary solution [2], by introducing a global PM QoS class similar but
>> with roughly converse semantics to PM_QOS_CPU_DMA_LATENCY, allowing
>> device drivers to report a *lower* bound on CPU latency beyond which PM
>> shall not bother to reduce latency if doing so would have negative
>> consequences on the energy efficiency and/or parallelism of the system.
>
> So I really don't quite see how that could be responded to, by cpuidle
> say.  What exactly do you mean by "reducing latency" in particular?
>

cpuidle wouldn't necessarily have to do anything about it since it would
be intended merely as a hint that a device in the system other than the
CPU has a bottleneck.  It could provide a lower bound for the wake-up
latency of the idle states that may be considered by cpuidle.  It seems
to me like it could be useful when a program can tell from the
characteristics of the workload that a latency reduction below a certain
time bound wouldn't materially affect the performance of the system
(e.g. if you have 20 ms to render a GPU-bound frame, you may not care at
all about the CPU taking a fraction of a millisecond more to wake up a
few times each frame).
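A toy model of what that could look like in a cpuidle governor follows; the state table and selection logic are made up for illustration, not the actual menu/teo governor code. The deepest state within the window is chosen, the proposed hint raises the window's floor, and the existing QoS cap still dominates:

```c
#include <assert.h>

struct idle_state { int exit_latency_us; };

/* Toy selection: pick the deepest state (largest exit latency) that
 * fits the window.  The window's ceiling is the predicted idle
 * duration, raised to at least `floor_us` by the proposed hint, but
 * never above the existing QoS cap `cap_us`, which always wins. */
static int select_idle_state(const struct idle_state *s, int n,
			     int cap_us, int predicted_us, int floor_us)
{
	int window = predicted_us > floor_us ? predicted_us : floor_us;
	int best = -1;

	if (window > cap_us)
		window = cap_us;
	for (int i = 0; i < n; i++)
		if (s[i].exit_latency_us <= window &&
		    (best < 0 || s[i].exit_latency_us > s[best].exit_latency_us))
			best = i;
	return best;
}
```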

For cpufreq I was planning to have it influence a time parameter of the
utilization averaging done by the governor, which would allow it to have
a more optimal response in the long term (in the sense of lowering the
energy cost of performing the same work in the specified timeframe),
even if such a large time parameter wouldn't normally be considered
appropriate for utilization averaging due to latency concerns.
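Concretely, the "time parameter" trade-off can be illustrated with a first-order filter; this is a toy stand-in for the governor's utilization averaging, not intel_pstate or schedutil code:

```c
#include <assert.h>

/* One update of a first-order low-pass over utilization samples: the
 * larger the time parameter tau_us, the slower (and, for an IO-bound
 * workload, cheaper in spurious frequency ramps) the response to an
 * oscillation of the utilization signal. */
static double util_avg_step(double avg, double sample,
			    double dt_us, double tau_us)
{
	double a = dt_us / (dt_us + tau_us);

	return avg + a * (sample - avg);
}
```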

>> Of course one would expect the current PM_QOS_CPU_DMA_LATENCY upper
>> bound to take precedence over the new lower bound in cases where the
>> former is in conflict with the latter.
>
> So that needs to be done on top of this series.
>
>> I can think of several alternatives to that which don't involve
>> temporarily holding off your clean-up,
>
> The cleanup goes in.  Please work on top of it.
>

Hopefully we can come up with an alternative in that case.  TBH I'd love
to see your clean-up go in too, but global PM QoS seemed fairly
appealing as a way to split up my work so it could be reviewed
incrementally, even though I'm aiming for a finer-grained solution than
that.

>> but none of them sound particularly exciting:
>>
>>  1/ Use an interface specific to CPUFREQ, pretty much like the one
>>     introduced in my original submission [1].
>
> It uses frequency QoS already today, do you really need something else?
>

Yes.  I don't see how frequency QoS could be useful for this as-is,
unless we're willing to introduce code in every device driver that takes
advantage of this and have them monitor the utilization of every CPU in
the system, so they can calculate an appropriate max frequency
constraint -- One which we can be reasonably certain won't hurt the
long-term performance of the CPU cores these constraints are being
placed on.

>>  2/ Use per-CPU PM QoS, which AFAICT would require the graphics driver
>>     to either place a request on every CPU of the system (which would
>>     cause a frequent operation to have O(N) complexity on the number of
>>     CPUs on the system), or play a cat-and-mouse game with the task
>>     scheduler.
>
> That's in place already too in the form of device PM QoS; see
> drivers/base/power/qos.c.

But wouldn't that have the drawbacks I was talking about above when
trying to use it in order to set this kind of constraints on CPU power
management?

>
>>  3/ Add a new global PM QoS mechanism roughly duplicating the
>>     cpu_latency_qos_* interfaces introduced in this series.  Drop your
>>     change making this available to CPU IDLE only.
>
> It sounds like you really want performance for energy efficiency and
> CPU latency has little to do with that.
>

The mechanism I've been working on isn't intended to sacrifice long-term
performance of the CPU (e.g. if a CPU core is 100% utilized in the
steady state by the same or an unrelated application the CPUFREQ
governor should still request the maximum turbo frequency for it), it's
only meant to affect the trade-off between energy efficiency and latency
(e.g. the time it takes for the CPUFREQ governor to respond to an
oscillation of the workload that chooses to opt in).

>>  4/ Go straight to a scheduling-based approach, which is likely to
>>     greatly increase the review effort required to upstream this
>>     feature.  (Peter might disagree though?)
>
> Are you familiar with the utilization clamps mechanism?
>

Sure, that would be a possibility as alternative to PM QoS, but it would
most likely involve task scheduler changes to get it working
effectively, which Srinivas and Rodrigo have asked me to leave out from
my next RFC submission in the interest of reviewability.  I wouldn't
mind plumbing comparable information through utilization clamps instead
or as follow-up if you think that's the way forward.

> Thanks!




* Re: [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface
  2020-02-13  0:37     ` Rafael J. Wysocki
@ 2020-02-13  8:10       ` Francisco Jerez
  2020-02-13 11:38         ` Rafael J. Wysocki
  0 siblings, 1 reply; 74+ messages in thread
From: Francisco Jerez @ 2020-02-13  8:10 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Rafael J. Wysocki, Linux PM, LKML, Amit Kucheria, Pandruvada,
	Srinivas, Rodrigo Vivi, Peter Zijlstra


"Rafael J. Wysocki" <rafael@kernel.org> writes:

> On Thu, Feb 13, 2020 at 1:16 AM Rafael J. Wysocki <rafael@kernel.org> wrote:
>>
>> On Thu, Feb 13, 2020 at 12:31 AM Francisco Jerez <currojerez@riseup.net> wrote:
>> >
>> > "Rafael J. Wysocki" <rjw@rjwysocki.net> writes:
>> >
>> > > Hi All,
>> > >
>> > > This series of patches is based on the observation that after commit
>> > > c3082a674f46 ("PM: QoS: Get rid of unused flags") the only global PM QoS class
>> > > in use is PM_QOS_CPU_DMA_LATENCY, but there is still a significant amount of
>> > > code dedicated to the handling of global PM QoS classes in general.  That code
>> > > takes up space and adds overhead in vain, so it is better to get rid of it.
>> > >
>> > > Moreover, with that unuseful code removed, the interface for adding QoS
>> > > requests for CPU latency becomes inelegant and confusing, so it is better to
>> > > clean it up.
>> > >
>> > > Patches [01/28-12/28] do the first part described above, which also includes
>> > > some assorted cleanups of the core PM QoS code that doesn't go away.
>> > >
>> > > Patches [13/28-25/28] rework the CPU latency QoS interface (in the classic
>> > > "define stubs, migrate users, change the API proper" manner), patches
>> > > [26-27/28] update the general comments and documentation to match the code
>> > > after the previous changes and the last one makes the CPU latency QoS depend
>> > > on CPU_IDLE (because cpuidle is the only user of its target value today).
>> > >
>> > > The majority of the patches in this series don't change the functionality of
>> > > the code at all (at least not intentionally).
>> > >
>> > > Please refer to the changelogs of individual patches for details.
>> > >
>> > > Thanks!
>> >
>> > Hi Rafael,
>> >
>> > I believe some of the interfaces removed here could be useful in the
>> > near future.
>>
>> I disagree.
>>
>> >  It goes back to the energy efficiency- (and IGP graphics
>> > performance-)improving series I submitted a while ago [1].  It relies on
>> > some mechanism for the graphics driver to report an I/O bottleneck to
>> > CPUFREQ, allowing it to make a more conservative trade-off between
>> > energy efficiency and latency, which can greatly reduce the CPU package
>> > energy usage of IO-bound applications (in some graphics benchmarks I've
>> > seen it reduced by over 40% on my ICL laptop), and therefore also allows
>> > TDP-bound applications to obtain a reciprocal improvement in throughput.
>> >
>> > I'm not particularly fond of the global PM QoS interfaces TBH, it seems
>> > like an excessively blunt hammer to me, so I can very much relate to the
>> > purpose of this series.  However the finer-grained solution I've
>> > implemented has seen some push-back from i915 and CPUFREQ devs due to
>> > its complexity, since it relies on task scheduler changes in order to
>> > track IO bottlenecks per-process (roughly as suggested by Peter Zijlstra
>> > during our previous discussions), pretty much in the spirit of PELT but
>> > applied to IO utilization.
>> >
>> > With that in mind I was hoping we could take advantage of PM QoS as a
>> > temporary solution [2], by introducing a global PM QoS class similar but
>> > with roughly converse semantics to PM_QOS_CPU_DMA_LATENCY, allowing
>> > device drivers to report a *lower* bound on CPU latency beyond which PM
>> > shall not bother to reduce latency if doing so would have negative
>> > consequences on the energy efficiency and/or parallelism of the system.
>>
>> So I really don't quite see how that could be responded to, by cpuidle
>> say.  What exactly do you mean by "reducing latency" in particular?
>>
>> > Of course one would expect the current PM_QOS_CPU_DMA_LATENCY upper
>> > bound to take precedence over the new lower bound in cases where the
>> > former is in conflict with the latter.
>>
>> So that needs to be done on top of this series.
>>
>> > I can think of several alternatives to that which don't involve
>> > temporarily holding off your clean-up,
>>
>> The cleanup goes in.  Please work on top of it.
>>
>> > but none of them sound particularly exciting:
>> >
>> >  1/ Use an interface specific to CPUFREQ, pretty much like the one
>> >     introduced in my original submission [1].
>>
>> It uses frequency QoS already today, do you really need something else?
>>
>> >  2/ Use per-CPU PM QoS, which AFAICT would require the graphics driver
>> >     to either place a request on every CPU of the system (which would
>> >     cause a frequent operation to have O(N) complexity on the number of
>> >     CPUs on the system), or play a cat-and-mouse game with the task
>> >     scheduler.
>>
>> That's in place already too in the form of device PM QoS; see
>> drivers/base/power/qos.c.
>>
>> >  3/ Add a new global PM QoS mechanism roughly duplicating the
>> >     cpu_latency_qos_* interfaces introduced in this series.  Drop your
>> >     change making this available to CPU IDLE only.
>>
>> It sounds like you really want performance for energy efficiency and
>> CPU latency has little to do with that.
>>
>>  4/ Go straight to a scheduling-based approach, which is likely to
>> >     greatly increase the review effort required to upstream this
>> >     feature.  (Peter might disagree though?)
>>
>> Are you familiar with the utilization clamps mechanism?
>
> And BTW, posting patches as RFC is fine even if they have not been
> tested.  At least you let people know that you work on something this
> way, so if they work on changes in the same area, they may take that
> into consideration.
>

Sure, that was going to be the first RFC.

> Also if there are objections to your proposal, you may save quite a
> bit of time by sending it early.
>
> It is unfortunate that this series has clashed with the changes that
> you were about to propose, but in this particular case in my view it
> is better to clean up things and start over.
>

Luckily it doesn't clash with the second RFC I was meaning to send,
maybe we should just skip the first?  Or maybe it's valuable as a
curiosity anyway?

> Thanks!

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 227 bytes --]

^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface
  2020-02-13  7:10 ` Amit Kucheria
@ 2020-02-13 10:17   ` Rafael J. Wysocki
  2020-02-13 10:22     ` Rafael J. Wysocki
  2020-02-13 10:49     ` Amit Kucheria
  0 siblings, 2 replies; 74+ messages in thread
From: Rafael J. Wysocki @ 2020-02-13 10:17 UTC (permalink / raw)
  To: Amit Kucheria; +Cc: Rafael J. Wysocki, Linux PM, LKML

On Thu, Feb 13, 2020 at 8:10 AM Amit Kucheria <amit.kucheria@linaro.org> wrote:
>
> On Wed, Feb 12, 2020 at 5:09 AM Rafael J. Wysocki <rjw@rjwysocki.net> wrote:
> >
> > Hi All,
> >
> > This series of patches is based on the observation that after commit
> > c3082a674f46 ("PM: QoS: Get rid of unused flags") the only global PM QoS class
> > in use is PM_QOS_CPU_DMA_LATENCY, but there is still a significant amount of
> > code dedicated to the handling of global PM QoS classes in general.  That code
> > takes up space and adds overhead in vain, so it is better to get rid of it.
> >
> > Moreover, with that unuseful code removed, the interface for adding QoS
> > requests for CPU latency becomes inelegant and confusing, so it is better to
> > clean it up.
> >
> > Patches [01/28-12/28] do the first part described above, which also includes
> > some assorted cleanups of the core PM QoS code that doesn't go away.
> >
> > Patches [13/28-25/28] rework the CPU latency QoS interface (in the classic
> > "define stubs, migrate users, change the API proper" manner), patches
> > [26-27/28] update the general comments and documentation to match the code
> > after the previous changes and the last one makes the CPU latency QoS depend
> > on CPU_IDLE (because cpuidle is the only user of its target value today).
> >
> > The majority of the patches in this series don't change the functionality of
> > the code at all (at least not intentionally).
> >
> > Please refer to the changelogs of individual patches for details.
>
> Hi Rafael,
>
> Nice cleanup to the code and docs.
>
> I've reviewed the series, and briefly tested it by setting latencies
> from userspace. Can we not remove the debugfs interface? It is a quick
> way to check the global cpu latency clamp on the system from userspace
> without setting up tracepoints or writing a program to read
> /dev/cpu_dma_latency.

Come on.

What about in Python?

#!/usr/bin/env python
import numpy as np

if __name__ == '__main__':
    f = open("/dev/cpu_dma_latency", "r")
    print(np.fromfile(f, dtype=np.int32, count=1))
    f.close()

And probably you can do it in at least 20 different ways. :-)
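For instance, with nothing but the standard library (the decode is shown on a
sample buffer here since the device node needs privileges; the 2000-second
default value is what I recall from the kernel headers of that era):

```python
import struct

def decode_latency(raw):
    # /dev/cpu_dma_latency returns a native-endian s32 latency in usec.
    (usec,) = struct.unpack("=i", raw)
    return usec

# Normally: raw = open("/dev/cpu_dma_latency", "rb").read(4)
# Demonstrated on a sample buffer holding the default (2000 s in usec):
sample = struct.pack("=i", 2000000000)
print(decode_latency(sample))  # -> 2000000000
```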

Also note that "echo the_debugfs_thing" does the equivalent, but the
conversion takes place in the kernel.  Is it really a good idea to
carry the whole debugfs interface because of that one conversion?

> Except for patch 01/28 removing the debugfs interface, please feel to add my
>
> Reviewed-by: Amit Kucheria <amit.kucheria@linaro.org>
> Tested-by: Amit Kucheria <amit.kucheria@linaro.org>

Thanks!

^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface
  2020-02-13 10:17   ` Rafael J. Wysocki
@ 2020-02-13 10:22     ` Rafael J. Wysocki
  2020-02-13 10:49     ` Amit Kucheria
  1 sibling, 0 replies; 74+ messages in thread
From: Rafael J. Wysocki @ 2020-02-13 10:22 UTC (permalink / raw)
  To: Amit Kucheria; +Cc: Rafael J. Wysocki, Linux PM, LKML

On Thu, Feb 13, 2020 at 11:17 AM Rafael J. Wysocki <rafael@kernel.org> wrote:
>
> On Thu, Feb 13, 2020 at 8:10 AM Amit Kucheria <amit.kucheria@linaro.org> wrote:
> >
> > On Wed, Feb 12, 2020 at 5:09 AM Rafael J. Wysocki <rjw@rjwysocki.net> wrote:
> > >
> > > Hi All,
> > >
> > > This series of patches is based on the observation that after commit
> > > c3082a674f46 ("PM: QoS: Get rid of unused flags") the only global PM QoS class
> > > in use is PM_QOS_CPU_DMA_LATENCY, but there is still a significant amount of
> > > code dedicated to the handling of global PM QoS classes in general.  That code
> > > takes up space and adds overhead in vain, so it is better to get rid of it.
> > >
> > > Moreover, with that unuseful code removed, the interface for adding QoS
> > > requests for CPU latency becomes inelegant and confusing, so it is better to
> > > clean it up.
> > >
> > > Patches [01/28-12/28] do the first part described above, which also includes
> > > some assorted cleanups of the core PM QoS code that doesn't go away.
> > >
> > > Patches [13/28-25/28] rework the CPU latency QoS interface (in the classic
> > > "define stubs, migrate users, change the API proper" manner), patches
> > > [26-27/28] update the general comments and documentation to match the code
> > > after the previous changes and the last one makes the CPU latency QoS depend
> > > on CPU_IDLE (because cpuidle is the only user of its target value today).
> > >
> > > The majority of the patches in this series don't change the functionality of
> > > the code at all (at least not intentionally).
> > >
> > > Please refer to the changelogs of individual patches for details.
> >
> > Hi Rafael,
> >
> > Nice cleanup to the code and docs.
> >
> > I've reviewed the series, and briefly tested it by setting latencies
> > from userspace. Can we not remove the debugfs interface? It is a quick
> > way to check the global cpu latency clamp on the system from userspace
> > without setting up tracepoints or writing a program to read
> > /dev/cpu_dma_latency.
>
> Come on.
>
> What about in Python?
>
> #!/usr/bin/env python
> import numpy as np
>
> if __name__ == '__main__':
>     f = open("/dev/cpu_dma_latency", "r")
>     print(np.fromfile(f, dtype=np.int32, count=1))
>     f.close()
>
> And probably you can do it in at least 20 different ways. :-)
>
> Also note that "echo the_debugfs_thing" does the equivalent, but the

I obviously meant "cat the_debugfs_thing" here, sorry.

^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface
  2020-02-13 10:17   ` Rafael J. Wysocki
  2020-02-13 10:22     ` Rafael J. Wysocki
@ 2020-02-13 10:49     ` Amit Kucheria
  2020-02-13 11:36       ` Rafael J. Wysocki
  1 sibling, 1 reply; 74+ messages in thread
From: Amit Kucheria @ 2020-02-13 10:49 UTC (permalink / raw)
  To: Rafael J. Wysocki; +Cc: Rafael J. Wysocki, Linux PM, LKML

On Thu, Feb 13, 2020 at 3:47 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
>
> On Thu, Feb 13, 2020 at 8:10 AM Amit Kucheria <amit.kucheria@linaro.org> wrote:
> >
> > On Wed, Feb 12, 2020 at 5:09 AM Rafael J. Wysocki <rjw@rjwysocki.net> wrote:
> > >
> > > Hi All,
> > >
> > > This series of patches is based on the observation that after commit
> > > c3082a674f46 ("PM: QoS: Get rid of unused flags") the only global PM QoS class
> > > in use is PM_QOS_CPU_DMA_LATENCY, but there is still a significant amount of
> > > code dedicated to the handling of global PM QoS classes in general.  That code
> > > takes up space and adds overhead in vain, so it is better to get rid of it.
> > >
> > > Moreover, with that unuseful code removed, the interface for adding QoS
> > > requests for CPU latency becomes inelegant and confusing, so it is better to
> > > clean it up.
> > >
> > > Patches [01/28-12/28] do the first part described above, which also includes
> > > some assorted cleanups of the core PM QoS code that doesn't go away.
> > >
> > > Patches [13/28-25/28] rework the CPU latency QoS interface (in the classic
> > > "define stubs, migrate users, change the API proper" manner), patches
> > > [26-27/28] update the general comments and documentation to match the code
> > > after the previous changes and the last one makes the CPU latency QoS depend
> > > on CPU_IDLE (because cpuidle is the only user of its target value today).
> > >
> > > The majority of the patches in this series don't change the functionality of
> > > the code at all (at least not intentionally).
> > >
> > > Please refer to the changelogs of individual patches for details.
> >
> > Hi Rafael,
> >
> > Nice cleanup to the code and docs.
> >
> > I've reviewed the series, and briefly tested it by setting latencies
> > from userspace. Can we not remove the debugfs interface? It is a quick
> > way to check the global cpu latency clamp on the system from userspace
> > without setting up tracepoints or writing a program to read
> > /dev/cpu_dma_latency.
>
> Come on.
>
> What about in Python?
>
> #!/usr/bin/env python
> import numpy as np
>
> if __name__ == '__main__':
>     f = open("/dev/cpu_dma_latency", "r")
>     print(np.fromfile(f, dtype=np.int32, count=1))
>     f.close()
>
> And probably you can do it in at least 20 different ways. :-)

Indeed, I can, just not as straightforward as "cat /debugfs/filename"
when you don't have python or perl in your buildroot initramfs.

Some hexdump/od acrobatics will yield the value, I guess.

> Also note that "echo the_debugfs_thing" does the equivalent, but the
> conversion takes place in the kernel.  Is it really a good idea to
> carry the whole debugfs interface because of that one conversion?
>
> > Except for patch 01/28 removing the debugfs interface, please feel to add my
> >
> > Reviewed-by: Amit Kucheria <amit.kucheria@linaro.org>
> > Tested-by: Amit Kucheria <amit.kucheria@linaro.org>
>
> Thanks!

^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface
  2020-02-13  8:07     ` Francisco Jerez
@ 2020-02-13 11:34       ` Rafael J. Wysocki
  2020-02-13 16:35         ` Rafael J. Wysocki
  2020-02-14  0:14         ` Francisco Jerez
  0 siblings, 2 replies; 74+ messages in thread
From: Rafael J. Wysocki @ 2020-02-13 11:34 UTC (permalink / raw)
  To: Francisco Jerez
  Cc: Rafael J. Wysocki, Rafael J. Wysocki, Linux PM, LKML,
	Amit Kucheria, Pandruvada, Srinivas, Rodrigo Vivi,
	Peter Zijlstra

On Thu, Feb 13, 2020 at 9:07 AM Francisco Jerez <currojerez@riseup.net> wrote:
>
> "Rafael J. Wysocki" <rafael@kernel.org> writes:
>
> > On Thu, Feb 13, 2020 at 12:31 AM Francisco Jerez <currojerez@riseup.net> wrote:
> >>
> >> "Rafael J. Wysocki" <rjw@rjwysocki.net> writes:
> >>
> >> > Hi All,
> >> >
> >> > This series of patches is based on the observation that after commit
> >> > c3082a674f46 ("PM: QoS: Get rid of unused flags") the only global PM QoS class
> >> > in use is PM_QOS_CPU_DMA_LATENCY, but there is still a significant amount of
> >> > code dedicated to the handling of global PM QoS classes in general.  That code
> >> > takes up space and adds overhead in vain, so it is better to get rid of it.
> >> >
> >> > Moreover, with that unuseful code removed, the interface for adding QoS
> >> > requests for CPU latency becomes inelegant and confusing, so it is better to
> >> > clean it up.
> >> >
> >> > Patches [01/28-12/28] do the first part described above, which also includes
> >> > some assorted cleanups of the core PM QoS code that doesn't go away.
> >> >
> >> > Patches [13/28-25/28] rework the CPU latency QoS interface (in the classic
> >> > "define stubs, migrate users, change the API proper" manner), patches
> >> > [26-27/28] update the general comments and documentation to match the code
> >> > after the previous changes and the last one makes the CPU latency QoS depend
> >> > on CPU_IDLE (because cpuidle is the only user of its target value today).
> >> >
> >> > The majority of the patches in this series don't change the functionality of
> >> > the code at all (at least not intentionally).
> >> >
> >> > Please refer to the changelogs of individual patches for details.
> >> >
> >> > Thanks!
> >>
> >> Hi Rafael,
> >>
> >> I believe some of the interfaces removed here could be useful in the
> >> near future.
> >
> > I disagree.
> >
> >>  It goes back to the energy efficiency- (and IGP graphics
> >> performance-)improving series I submitted a while ago [1].  It relies on
> >> some mechanism for the graphics driver to report an I/O bottleneck to
> >> CPUFREQ, allowing it to make a more conservative trade-off between
> >> energy efficiency and latency, which can greatly reduce the CPU package
> >> energy usage of IO-bound applications (in some graphics benchmarks I've
> >> seen it reduced by over 40% on my ICL laptop), and therefore also allows
> >> TDP-bound applications to obtain a reciprocal improvement in throughput.
> >>
> >> I'm not particularly fond of the global PM QoS interfaces TBH, it seems
> >> like an excessively blunt hammer to me, so I can very much relate to the
> >> purpose of this series.  However the finer-grained solution I've
> >> implemented has seen some push-back from i915 and CPUFREQ devs due to
> >> its complexity, since it relies on task scheduler changes in order to
> >> track IO bottlenecks per-process (roughly as suggested by Peter Zijlstra
> >> during our previous discussions), pretty much in the spirit of PELT but
> >> applied to IO utilization.
> >>
> >> With that in mind I was hoping we could take advantage of PM QoS as a
> >> temporary solution [2], by introducing a global PM QoS class similar but
> >> with roughly converse semantics to PM_QOS_CPU_DMA_LATENCY, allowing
> >> device drivers to report a *lower* bound on CPU latency beyond which PM
> >> shall not bother to reduce latency if doing so would have negative
> >> consequences on the energy efficiency and/or parallelism of the system.
> >
> > So I really don't quite see how that could be responded to, by cpuidle
> > say.  What exactly do you mean by "reducing latency" in particular?
> >
>
> cpuidle wouldn't necessarily have to do anything about it since it would
> be intended merely as a hint that a device in the system other than the
> CPU has a bottleneck.  It could provide a lower bound for the wake-up
> latency of the idle states that may be considered by cpuidle.  It seems
> to me like it could be useful when a program can tell from the
> characteristics of the workload that a latency reduction below a certain
> time bound wouldn't materially affect the performance of the system
> (e.g. if you have 20 ms to render a GPU-bound frame, you may not care at
> all about the CPU taking a fraction of a millisecond more to wake up a
> few times each frame).

Well, this is not how cpuidle works.

What it does is to try to find the deepest idle state that makes sense
to let the CPU go into given all of the constraints etc.  IOW it never
tries to reduce the latency; it looks at how far it can go with possible
energy savings given a specific latency limit (or no limit at all).
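A toy model of that selection loop (userspace Python with a made-up state
table, just to illustrate the shape of it; the real governors live in
drivers/cpuidle/governors/):

```python
# Toy cpuidle selection: pick the deepest state whose exit latency fits
# under the QoS latency limit and whose target residency fits the
# predicted idle duration.  State table values are invented.
STATES = [
    # (name, exit_latency_us, target_residency_us)
    ("C1",     2,     2),
    ("C6",   133,   400),
    ("C10", 1034,  8300),
]

def select_state(predicted_idle_us, latency_limit_us):
    best = 0  # the shallowest state is always allowed
    for i, (_, exit_lat, residency) in enumerate(STATES):
        if exit_lat <= latency_limit_us and residency <= predicted_idle_us:
            best = i
    return STATES[best][0]

print(select_state(10000, 2000000000))  # no effective limit -> C10
print(select_state(10000, 200))         # 200 us limit -> C6
```

Note that the limit only ever rules deep states out; nothing in there makes
the CPU wake up faster than the remaining states allow.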

> For cpufreq I was planning to have it influence a time parameter of the
> utilization averaging done by the governor, which would allow it to have
> a more optimal response in the long term (in the sense of lowering the
> energy cost of performing the same work in the specified timeframe),
> even if such a large time parameter wouldn't normally be considered
> appropriate for utilization averaging due to latency concerns.

So this is fine in the schedutil case in principle, but it would not
work with HWP, because that doesn't take the scheduler's utilization
metrics into account.

To cover the HWP case you need to influence the min and max frequency
limits, realistically.

> >> Of course one would expect the current PM_QOS_CPU_DMA_LATENCY upper
> >> bound to take precedence over the new lower bound in cases where the
> >> former is in conflict with the latter.
> >
> > So that needs to be done on top of this series.
> >
> >> I can think of several alternatives to that which don't involve
> >> temporarily holding off your clean-up,
> >
> > The cleanup goes in.  Please work on top of it.
> >
>
> Hopefully we can come up with an alternative in that case.  TBH I'd love
> to see your clean-up go in too, but global PM QoS seemed fairly
> appealing as a way to split up my work so it could be reviewed
> incrementally, even though I'm aiming for a finer-grained solution than
> that.

Well, so "global PM QoS" really means a struct pm_qos_constraints
object with a global reader of its target_value.

Of course, pm_qos_update_target() is not particularly convenient to
use, so you'd need to wrap it into an _add/update/remove_request()
family of functions along the lines of the cpu_latency_qos_*() ones,
I suppose, and you won't need the _apply() thing.

> >> but none of them sound particularly exciting:
> >>
> >>  1/ Use an interface specific to CPUFREQ, pretty much like the one
> >>     introduced in my original submission [1].
> >
> > It uses frequency QoS already today, do you really need something else?
> >
>
> Yes.  I don't see how frequency QoS could be useful for this as-is,
> unless we're willing to introduce code in every device driver that takes
> advantage of this and have them monitor the utilization of every CPU in
> the system, so they can calculate an appropriate max frequency
> constraint -- One which we can be reasonably certain won't hurt the
> long-term performance of the CPU cores these constraints are being
> placed on.

I'm not really sure if I understand you correctly.

The frequency QoS in cpufreq is a way to influence the min and max
freq limits used by it for each CPU.  That is done in a couple of
places like store_max/min_perf_pct() in intel_pstate or
processor_set_cur_state() (I guess the latter would be close to what
you think about, but the other way around - you seem to want to
influence the min and not the max).

Now, the question what request value(s) to put in there and how to
compute them is kind of a different one.

> >>  2/ Use per-CPU PM QoS, which AFAICT would require the graphics driver
> >>     to either place a request on every CPU of the system (which would
> >>     cause a frequent operation to have O(N) complexity on the number of
> >>     CPUs on the system), or play a cat-and-mouse game with the task
> >>     scheduler.
> >
> > That's in place already too in the form of device PM QoS; see
> > drivers/base/power/qos.c.
>
> But wouldn't that have the drawbacks I was talking about above when
> trying to use it in order to set this kind of constraints on CPU power
> management?

I guess so, but the alternatives have drawbacks too.

> >
> >>  3/ Add a new global PM QoS mechanism roughly duplicating the
> >>     cpu_latency_qos_* interfaces introduced in this series.  Drop your
> >>     change making this available to CPU IDLE only.
> >
> > It sounds like you really want performance for energy efficiency and
> > CPU latency has little to do with that.
> >
>
> The mechanism I've been working on isn't intended to sacrifice long-term
> performance of the CPU (e.g. if a CPU core is 100% utilized in the
> steady state by the same or an unrelated application the CPUFREQ
> governor should still request the maximum turbo frequency for it), it's
> only meant to affect the trade-off between energy efficiency and latency
> (e.g. the time it takes for the CPUFREQ governor to respond to an
> oscillation of the workload that chooses to opt in).

So the meaning of "latency" here is really different from the meaning
of "latency" in the cpuidle context and in RT.

I guess it would be better to call it "response time" in this case to
avoid confusion.

Let me ask if I understand you correctly: the problem is that for some
workloads the time it takes to ramp up the frequency to an acceptable
(or desirable, more generally) level is too high, so the approach
under consideration is to clamp the min frequency, either effectively
or directly, so as to reduce that time?

> >>  4/ Go straight to a scheduling-based approach, which is likely to
> >>     greatly increase the review effort required to upstream this
> >>     feature.  (Peter might disagree though?)
> >
> > Are you familiar with the utilization clamps mechanism?
> >
>
> Sure, that would be a possibility as alternative to PM QoS, but it would
> most likely involve task scheduler changes to get it working
> effectively, which Srinivas and Rodrigo have asked me to leave out from
> my next RFC submission in the interest of reviewability.  I wouldn't
> mind plumbing comparable information through utilization clamps instead
> or as follow-up if you think that's the way forward.

Well, like I said somewhere above (or previously), the main problem
with utilization clamps is that they have no effect on HWP at the
moment.  Currently, there is little connection between the scheduler
and the HWP algorithm running on the processor.  However, I would like
to investigate that path, because the utilization clamps provide a
good interface for applications to request a certain level of service
from the scheduler (they can really be regarded as a QoS mechanism
too) and connecting them to the HWP min and max limits somehow might
work.
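Purely as a hypothetical sketch of what that connection could look like (no
such kernel interface exists today; the linear mapping and the HWP range
values below are invented for illustration):

```python
SCHED_CAPACITY_SCALE = 1024  # scale used by the utilization clamps

def uclamp_to_hwp(uclamp_min, uclamp_max, hwp_lowest, hwp_highest):
    # Hypothetical linear mapping of a task's utilization clamps
    # ([0, 1024]) onto an HWP performance range read from
    # HWP_CAPABILITIES; not an existing kernel mechanism.
    span = hwp_highest - hwp_lowest
    def to_perf(u):
        return hwp_lowest + (u * span) // SCHED_CAPACITY_SCALE
    return to_perf(uclamp_min), to_perf(uclamp_max)

print(uclamp_to_hwp(256, 1024, 4, 40))  # -> (13, 40)
```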

Thanks!

^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface
  2020-02-13 10:49     ` Amit Kucheria
@ 2020-02-13 11:36       ` Rafael J. Wysocki
  0 siblings, 0 replies; 74+ messages in thread
From: Rafael J. Wysocki @ 2020-02-13 11:36 UTC (permalink / raw)
  To: Amit Kucheria; +Cc: Rafael J. Wysocki, Rafael J. Wysocki, Linux PM, LKML

On Thu, Feb 13, 2020 at 11:50 AM Amit Kucheria <amit.kucheria@linaro.org> wrote:
>
> On Thu, Feb 13, 2020 at 3:47 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
> >
> > On Thu, Feb 13, 2020 at 8:10 AM Amit Kucheria <amit.kucheria@linaro.org> wrote:
> > >
> > > On Wed, Feb 12, 2020 at 5:09 AM Rafael J. Wysocki <rjw@rjwysocki.net> wrote:
> > > >
> > > > Hi All,
> > > >
> > > > This series of patches is based on the observation that after commit
> > > > c3082a674f46 ("PM: QoS: Get rid of unused flags") the only global PM QoS class
> > > > in use is PM_QOS_CPU_DMA_LATENCY, but there is still a significant amount of
> > > > code dedicated to the handling of global PM QoS classes in general.  That code
> > > > takes up space and adds overhead in vain, so it is better to get rid of it.
> > > >
> > > > Moreover, with that unuseful code removed, the interface for adding QoS
> > > > requests for CPU latency becomes inelegant and confusing, so it is better to
> > > > clean it up.
> > > >
> > > > Patches [01/28-12/28] do the first part described above, which also includes
> > > > some assorted cleanups of the core PM QoS code that doesn't go away.
> > > >
> > > > Patches [13/28-25/28] rework the CPU latency QoS interface (in the classic
> > > > "define stubs, migrate users, change the API proper" manner), patches
> > > > [26-27/28] update the general comments and documentation to match the code
> > > > after the previous changes and the last one makes the CPU latency QoS depend
> > > > on CPU_IDLE (because cpuidle is the only user of its target value today).
> > > >
> > > > The majority of the patches in this series don't change the functionality of
> > > > the code at all (at least not intentionally).
> > > >
> > > > Please refer to the changelogs of individual patches for details.
> > >
> > > Hi Rafael,
> > >
> > > Nice cleanup to the code and docs.
> > >
> > > I've reviewed the series, and briefly tested it by setting latencies
> > > from userspace. Can we not remove the debugfs interface? It is a quick
> > > way to check the global cpu latency clamp on the system from userspace
> > > without setting up tracepoints or writing a program to read
> > > /dev/cpu_dma_latency.
> >
> > Come on.
> >
> > What about in Python?
> >
> > #!/usr/bin/env python
> > import numpy as np
> >
> > if __name__ == '__main__':
> >     f = open("/dev/cpu_dma_latency", "r")
> >     print(np.fromfile(f, dtype=np.int32, count=1))
> >     f.close()
> >
> > And probably you can do it in at least 20 different ways. :-)
>
> Indeed, I can, just not as straightforward as "cat /debugfs/filename"
> when you don't have python or perl in your buildroot initramfs.
>
> Some hexdump/od acrobatics will yield the value, I guess.

Right,

# hexdump --format '"%d\n"' /dev/cpu_dma_latency

works just fine, actually.

Thanks!

^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface
  2020-02-13  8:10       ` Francisco Jerez
@ 2020-02-13 11:38         ` Rafael J. Wysocki
  2020-02-21 22:10           ` Francisco Jerez
  0 siblings, 1 reply; 74+ messages in thread
From: Rafael J. Wysocki @ 2020-02-13 11:38 UTC (permalink / raw)
  To: Francisco Jerez
  Cc: Rafael J. Wysocki, Rafael J. Wysocki, Linux PM, LKML,
	Amit Kucheria, Pandruvada, Srinivas, Rodrigo Vivi,
	Peter Zijlstra

On Thu, Feb 13, 2020 at 9:09 AM Francisco Jerez <currojerez@riseup.net> wrote:
>
> "Rafael J. Wysocki" <rafael@kernel.org> writes:
>
> > On Thu, Feb 13, 2020 at 1:16 AM Rafael J. Wysocki <rafael@kernel.org> wrote:
> >>
> >> On Thu, Feb 13, 2020 at 12:31 AM Francisco Jerez <currojerez@riseup.net> wrote:
> >> >

[cut]

> >
> > And BTW, posting patches as RFC is fine even if they have not been
> > tested.  At least you let people know that you work on something this
> > way, so if they work on changes in the same area, they may take that
> > into consideration.
> >
>
> Sure, that was going to be the first RFC.
>
> > Also if there are objections to your proposal, you may save quite a
> > bit of time by sending it early.
> >
> > It is unfortunate that this series has clashed with the changes that
> > you were about to propose, but in this particular case in my view it
> > is better to clean up things and start over.
> >
>
> Luckily it doesn't clash with the second RFC I was meaning to send,
> maybe we should just skip the first?

Yes, please.

> Or maybe it's valuable as a curiosity anyway?

No, let's just focus on the latest one.

Thanks!

^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface
  2020-02-13 11:34       ` Rafael J. Wysocki
@ 2020-02-13 16:35         ` Rafael J. Wysocki
  2020-02-14  0:15           ` Francisco Jerez
  2020-02-14  0:14         ` Francisco Jerez
  1 sibling, 1 reply; 74+ messages in thread
From: Rafael J. Wysocki @ 2020-02-13 16:35 UTC (permalink / raw)
  To: Francisco Jerez
  Cc: Rafael J. Wysocki, Linux PM, LKML, Amit Kucheria, Pandruvada,
	Srinivas, Rodrigo Vivi, Peter Zijlstra

On Thu, Feb 13, 2020 at 12:34 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
>
> On Thu, Feb 13, 2020 at 9:07 AM Francisco Jerez <currojerez@riseup.net> wrote:
> >
> > "Rafael J. Wysocki" <rafael@kernel.org> writes:
> >
> > > On Thu, Feb 13, 2020 at 12:31 AM Francisco Jerez <currojerez@riseup.net> wrote:
> > >>
> > >> "Rafael J. Wysocki" <rjw@rjwysocki.net> writes:
> > >>
> > >> > Hi All,
> > >> >
> > >> > This series of patches is based on the observation that after commit
> > >> > c3082a674f46 ("PM: QoS: Get rid of unused flags") the only global PM QoS class
> > >> > in use is PM_QOS_CPU_DMA_LATENCY, but there is still a significant amount of
> > >> > code dedicated to the handling of global PM QoS classes in general.  That code
> > >> > takes up space and adds overhead in vain, so it is better to get rid of it.
> > >> >
> > >> > Moreover, with that unuseful code removed, the interface for adding QoS
> > >> > requests for CPU latency becomes inelegant and confusing, so it is better to
> > >> > clean it up.
> > >> >
> > >> > Patches [01/28-12/28] do the first part described above, which also includes
> > >> > some assorted cleanups of the core PM QoS code that doesn't go away.
> > >> >
> > >> > Patches [13/28-25/28] rework the CPU latency QoS interface (in the classic
> > >> > "define stubs, migrate users, change the API proper" manner), patches
> > >> > [26-27/28] update the general comments and documentation to match the code
> > >> > after the previous changes and the last one makes the CPU latency QoS depend
> > >> > on CPU_IDLE (because cpuidle is the only user of its target value today).
> > >> >
> > >> > The majority of the patches in this series don't change the functionality of
> > >> > the code at all (at least not intentionally).
> > >> >
> > >> > Please refer to the changelogs of individual patches for details.
> > >> >
> > >> > Thanks!
> > >>
> > >> Hi Rafael,
> > >>
> > >> I believe some of the interfaces removed here could be useful in the
> > >> near future.
> > >
> > > I disagree.
> > >
> > >>  It goes back to the energy efficiency- (and IGP graphics
> > >> performance-)improving series I submitted a while ago [1].  It relies on
> > >> some mechanism for the graphics driver to report an I/O bottleneck to
> > >> CPUFREQ, allowing it to make a more conservative trade-off between
> > >> energy efficiency and latency, which can greatly reduce the CPU package
> > >> energy usage of IO-bound applications (in some graphics benchmarks I've
> > >> seen it reduced by over 40% on my ICL laptop), and therefore also allows
> > >> TDP-bound applications to obtain a reciprocal improvement in throughput.
> > >>
> > >> I'm not particularly fond of the global PM QoS interfaces TBH, it seems
> > >> like an excessively blunt hammer to me, so I can very much relate to the
> > >> purpose of this series.  However the finer-grained solution I've
> > >> implemented has seen some push-back from i915 and CPUFREQ devs due to
> > >> its complexity, since it relies on task scheduler changes in order to
> > >> track IO bottlenecks per-process (roughly as suggested by Peter Zijlstra
> > >> during our previous discussions), pretty much in the spirit of PELT but
> > >> applied to IO utilization.
> > >>
> > >> With that in mind I was hoping we could take advantage of PM QoS as a
> > >> temporary solution [2], by introducing a global PM QoS class similar but
> > >> with roughly converse semantics to PM_QOS_CPU_DMA_LATENCY, allowing
> > >> device drivers to report a *lower* bound on CPU latency beyond which PM
> > >> shall not bother to reduce latency if doing so would have negative
> > >> consequences on the energy efficiency and/or parallelism of the system.
> > >
> > > So I really don't quite see how that could be responded to, by cpuidle
> > > say.  What exactly do you mean by "reducing latency" in particular?
> > >
> >
> > cpuidle wouldn't necessarily have to do anything about it since it would
> > be intended merely as a hint that a device in the system other than the
> > CPU has a bottleneck.  It could provide a lower bound for the wake-up
> > latency of the idle states that may be considered by cpuidle.  It seems
> > to me like it could be useful when a program can tell from the
> > characteristics of the workload that a latency reduction below a certain
> > time bound wouldn't materially affect the performance of the system
> > (e.g. if you have 20 ms to render a GPU-bound frame, you may not care at
> > all about the CPU taking a fraction of a millisecond more to wake up a
> > few times each frame).
>
> Well, this is not how cpuidle works.
>
> What it does is to try to find the deepest idle state that makes sense
> to let the CPU go into given all of the constraints etc.  IOW it never
> tries to reduce the latency, it looks how far it can go with possible
> energy savings given a specific latency limit (or no limit at all).
>
> > For cpufreq I was planning to have it influence a time parameter of the
> > utilization averaging done by the governor, which would allow it to have
> > a more optimal response in the long term (in the sense of lowering the
> > energy cost of performing the same work in the specified timeframe),
> > even if such a large time parameter wouldn't normally be considered
> > appropriate for utilization averaging due to latency concerns.
>
> So this is fine in the schedutil case in principle, but it would not
> work with HWP, because that doesn't take the scheduler's utilization
> metrics into account.
>
> To cover the HWP case you need to influence the min and max frequency
> limits, realistically.
>
> > >> Of course one would expect the current PM_QOS_CPU_DMA_LATENCY upper
> > >> bound to take precedence over the new lower bound in cases where the
> > >> former is in conflict with the latter.
> > >
> > > So that needs to be done on top of this series.
> > >
> > >> I can think of several alternatives to that which don't involve
> > >> temporarily holding off your clean-up,
> > >
> > > The cleanup goes in.  Please work on top of it.
> > >
> >
> > Hopefully we can come up with an alternative in that case.  TBH I'd love
> > to see your clean-up go in too, but global PM QoS seemed fairly
> > appealing as a way to split up my work so it could be reviewed
> > incrementally, even though I'm aiming for a finer-grained solution than
> > that.
>
> Well, so "global PM QoS" really means a struct
> pm_qos_constraints object with a global reader of its target_value.
>
> Of course, pm_qos_update_target() is not particularly convenient to
> use, so you'd need to wrap it into an _add/update/remove_request()
> family of functions along the lines of the cpu_latency_qos_*() ones I
> suppose and you won't need the _apply() thing.
>
> > >> but none of them sound particularly exciting:
> > >>
> > >>  1/ Use an interface specific to CPUFREQ, pretty much like the one
> > >>     introduced in my original submission [1].
> > >
> > > It uses frequency QoS already today, do you really need something else?
> > >
> >
> > Yes.  I don't see how frequency QoS could be useful for this as-is,
> > unless we're willing to introduce code in every device driver that takes
> > advantage of this and have them monitor the utilization of every CPU in
> > the system, so they can calculate an appropriate max frequency
> > constraint -- One which we can be reasonably certain won't hurt the
> > long-term performance of the CPU cores these constraints are being
> > placed on.
>
> I'm not really sure if I understand you correctly.
>
> The frequency QoS in cpufreq is a way to influence the min and max
> freq limits used by it for each CPU.  That is done in a couple of
> places like store_max/min_perf_pct() in intel_pstate or
> processor_set_cur_state() (I guess the latter would be close to what
> you think about, but the other way around - you seem to want to
> influence the min and not the max).

It looks like *I* got this part the other way around. :-/

I think that your use case is almost equivalent to the thermal
pressure one, so you'd want to limit the max and so that would be
something similar to store_max_perf_pct() with its input side hooked
up to a QoS list.

But it looks like that QoS list would rather be of a "reservation"
type, so a request added to it would mean something like "leave this
fraction of power that appears to be available to the CPU subsystem
unused, because I need it for a different purpose".  And in principle
there might be multiple requests in there at the same time and those
"reservations" would add up.  So that would be a kind of "limited sum"
QoS type which wasn't even there before my changes.

A user of that QoS list might then do something like

ret = cpu_power_reserve_add(1, 4);

meaning that it wants 25% of the "potential" CPU power to be not
utilized by CPU performance scaling and that could affect the
scheduler through load modifications (kind of along the thermal
pressure patchset discussed some time ago) and HWP (as well as the
non-HWP intel_pstate by preventing turbo frequencies from being used
etc).
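As a rough illustration of the "limited sum" aggregation described above (a hypothetical interface; neither cpu_power_reserve_add() nor this QoS type exists in the kernel), the requests could combine like this:

```c
/* One hypothetical "reservation" request: leave num/den of the
 * apparently available CPU power unused for another purpose. */
struct power_reserve_request {
	int num;
	int den;
};

/*
 * Aggregate reservations by summing the reserved fractions (in percent)
 * and capping the total at 100% -- the "limited sum" QoS type sketched
 * above, as opposed to the existing min/max aggregation types.
 */
int total_reserved_pct(const struct power_reserve_request *reqs, int n)
{
	int total = 0;

	for (int i = 0; i < n; i++)
		total += 100 * reqs[i].num / reqs[i].den;

	return total > 100 ? 100 : total;
}
```

So a cpu_power_reserve_add(1, 4) request would contribute 25% to the aggregate, and multiple requesters would add up until the cap is hit.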

^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [PATCH 17/28] drivers: hsi: Call cpu_latency_qos_*() instead of pm_qos_*()
  2020-02-11 23:13 ` [PATCH 17/28] drivers: hsi: " Rafael J. Wysocki
@ 2020-02-13 21:06   ` Sebastian Reichel
  0 siblings, 0 replies; 74+ messages in thread
From: Sebastian Reichel @ 2020-02-13 21:06 UTC (permalink / raw)
  To: Rafael J. Wysocki; +Cc: Linux PM, LKML, Amit Kucheria

[-- Attachment #1: Type: text/plain, Size: 1950 bytes --]

Hi,

On Wed, Feb 12, 2020 at 12:13:17AM +0100, Rafael J. Wysocki wrote:
> From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
> 
> Call cpu_latency_qos_add/remove_request() and
> cpu_latency_qos_request_active() instead of
> pm_qos_add/remove_request() and pm_qos_request_active(),
> respectively, because the latter are going to be dropped.
> 
> No intentional functional impact.
> 
> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> ---

Acked-by: Sebastian Reichel <sre@kernel.org>

-- Sebastian

>  drivers/hsi/clients/cmt_speech.c | 9 ++++-----
>  1 file changed, 4 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/hsi/clients/cmt_speech.c b/drivers/hsi/clients/cmt_speech.c
> index 9eec970cdfa5..89869c66fb9d 100644
> --- a/drivers/hsi/clients/cmt_speech.c
> +++ b/drivers/hsi/clients/cmt_speech.c
> @@ -965,14 +965,13 @@ static int cs_hsi_buf_config(struct cs_hsi_iface *hi,
>  
>  	if (old_state != hi->iface_state) {
>  		if (hi->iface_state == CS_STATE_CONFIGURED) {
> -			pm_qos_add_request(&hi->pm_qos_req,
> -				PM_QOS_CPU_DMA_LATENCY,
> +			cpu_latency_qos_add_request(&hi->pm_qos_req,
>  				CS_QOS_LATENCY_FOR_DATA_USEC);
>  			local_bh_disable();
>  			cs_hsi_read_on_data(hi);
>  			local_bh_enable();
>  		} else if (old_state == CS_STATE_CONFIGURED) {
> -			pm_qos_remove_request(&hi->pm_qos_req);
> +			cpu_latency_qos_remove_request(&hi->pm_qos_req);
>  		}
>  	}
>  	return r;
> @@ -1075,8 +1074,8 @@ static void cs_hsi_stop(struct cs_hsi_iface *hi)
>  	WARN_ON(!cs_state_idle(hi->control_state));
>  	WARN_ON(!cs_state_idle(hi->data_state));
>  
> -	if (pm_qos_request_active(&hi->pm_qos_req))
> -		pm_qos_remove_request(&hi->pm_qos_req);
> +	if (cpu_latency_qos_request_active(&hi->pm_qos_req))
> +		cpu_latency_qos_remove_request(&hi->pm_qos_req);
>  
>  	spin_lock_bh(&hi->lock);
>  	cs_hsi_free_data(hi);
> -- 
> 2.16.4
> 

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface
  2020-02-13 11:34       ` Rafael J. Wysocki
  2020-02-13 16:35         ` Rafael J. Wysocki
@ 2020-02-14  0:14         ` Francisco Jerez
  1 sibling, 0 replies; 74+ messages in thread
From: Francisco Jerez @ 2020-02-14  0:14 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Rafael J. Wysocki, Rafael J. Wysocki, Linux PM, LKML,
	Amit Kucheria, Pandruvada,
	Srinivas, Rodrigo Vivi, Peter Zijlstra

[-- Attachment #1.1: Type: text/plain, Size: 15375 bytes --]

"Rafael J. Wysocki" <rafael@kernel.org> writes:

> On Thu, Feb 13, 2020 at 9:07 AM Francisco Jerez <currojerez@riseup.net> wrote:
>>
>> "Rafael J. Wysocki" <rafael@kernel.org> writes:
>>
>> > On Thu, Feb 13, 2020 at 12:31 AM Francisco Jerez <currojerez@riseup.net> wrote:
>> >>
>> >> "Rafael J. Wysocki" <rjw@rjwysocki.net> writes:
>> >>
>> >> > Hi All,
>> >> >
>> >> > This series of patches is based on the observation that after commit
>> >> > c3082a674f46 ("PM: QoS: Get rid of unused flags") the only global PM QoS class
>> >> > in use is PM_QOS_CPU_DMA_LATENCY, but there is still a significant amount of
>> >> > code dedicated to the handling of global PM QoS classes in general.  That code
>> >> > takes up space and adds overhead in vain, so it is better to get rid of it.
>> >> >
>> >> > Moreover, with that unuseful code removed, the interface for adding QoS
>> >> > requests for CPU latency becomes inelegant and confusing, so it is better to
>> >> > clean it up.
>> >> >
>> >> > Patches [01/28-12/28] do the first part described above, which also includes
>> >> > some assorted cleanups of the core PM QoS code that doesn't go away.
>> >> >
>> >> > Patches [13/28-25/28] rework the CPU latency QoS interface (in the classic
>> >> > "define stubs, migrate users, change the API proper" manner), patches
>> >> > [26-27/28] update the general comments and documentation to match the code
>> >> > after the previous changes and the last one makes the CPU latency QoS depend
>> >> > on CPU_IDLE (because cpuidle is the only user of its target value today).
>> >> >
>> >> > The majority of the patches in this series don't change the functionality of
>> >> > the code at all (at least not intentionally).
>> >> >
>> >> > Please refer to the changelogs of individual patches for details.
>> >> >
>> >> > Thanks!
>> >>
>> >> Hi Rafael,
>> >>
>> >> I believe some of the interfaces removed here could be useful in the
>> >> near future.
>> >
>> > I disagree.
>> >
>> >>  It goes back to the energy efficiency- (and IGP graphics
>> >> performance-)improving series I submitted a while ago [1].  It relies on
>> >> some mechanism for the graphics driver to report an I/O bottleneck to
>> >> CPUFREQ, allowing it to make a more conservative trade-off between
>> >> energy efficiency and latency, which can greatly reduce the CPU package
>> >> energy usage of IO-bound applications (in some graphics benchmarks I've
>> >> seen it reduced by over 40% on my ICL laptop), and therefore also allows
>> >> TDP-bound applications to obtain a reciprocal improvement in throughput.
>> >>
>> >> I'm not particularly fond of the global PM QoS interfaces TBH, it seems
>> >> like an excessively blunt hammer to me, so I can very much relate to the
>> >> purpose of this series.  However the finer-grained solution I've
>> >> implemented has seen some push-back from i915 and CPUFREQ devs due to
>> >> its complexity, since it relies on task scheduler changes in order to
>> >> track IO bottlenecks per-process (roughly as suggested by Peter Zijlstra
>> >> during our previous discussions), pretty much in the spirit of PELT but
>> >> applied to IO utilization.
>> >>
>> >> With that in mind I was hoping we could take advantage of PM QoS as a
>> >> temporary solution [2], by introducing a global PM QoS class similar but
>> >> with roughly converse semantics to PM_QOS_CPU_DMA_LATENCY, allowing
>> >> device drivers to report a *lower* bound on CPU latency beyond which PM
>> >> shall not bother to reduce latency if doing so would have negative
>> >> consequences on the energy efficiency and/or parallelism of the system.
>> >
>> > So I really don't quite see how that could be responded to, by cpuidle
>> > say.  What exactly do you mean by "reducing latency" in particular?
>> >
>>
>> cpuidle wouldn't necessarily have to do anything about it since it would
>> be intended merely as a hint that a device in the system other than the
>> CPU has a bottleneck.  It could provide a lower bound for the wake-up
>> latency of the idle states that may be considered by cpuidle.  It seems
>> to me like it could be useful when a program can tell from the
>> characteristics of the workload that a latency reduction below a certain
>> time bound wouldn't materially affect the performance of the system
>> (e.g. if you have 20 ms to render a GPU-bound frame, you may not care at
>> all about the CPU taking a fraction of a millisecond more to wake up a
>> few times each frame).
>
> Well, this is not how cpuidle works.
>
> What it does is to try to find the deepest idle state that makes sense
> to let the CPU go into given all of the constraints etc.  IOW it never
> tries to reduce the latency, it looks how far it can go with possible
> energy savings given a specific latency limit (or no limit at all).
>

I didn't mean to say that cpuidle reduces latency except in relative
terms: If a sleep state is available but has too high exit latency to be
used under the current load conditions, an explicit hint from the
application or device driver saying "I'm okay with a wake-up+ramp-up
latency of the order of X nanoseconds" might allow it to do a better job
than any heuristic decision implemented in the idle governor.

>> For cpufreq I was planning to have it influence a time parameter of the
>> utilization averaging done by the governor, which would allow it to have
>> a more optimal response in the long term (in the sense of lowering the
>> energy cost of performing the same work in the specified timeframe),
>> even if such a large time parameter wouldn't normally be considered
>> appropriate for utilization averaging due to latency concerns.
>
>> So this is fine in the schedutil case in principle, but it would not
> work with HWP, because that doesn't take the scheduler's utilization
> metrics into account.
>

The code I've been working on lately targets HWP platforms specifically,
but I've gotten it to work on non-HWP too with some minor changes in the
governor.  The same kernel interfaces should work whether the CPUFREQ
governor is delegating frequency selection to HWP or doing it directly.

> To cover the HWP case you need to influence the min and max frequency
> limits, realistically.
>

Indeed, the constraint I was planning to introduce eventually influences
the calculation of the HWP min/max frequencies in order to make sure
that the P-code ends up selecting a reasonably optimal frequency,
without fully removing it out of the picture, it's simply meant to
assist its decisions whenever the applications running on that CPU core
have a non-CPU bottleneck known to the kernel.

>> >> Of course one would expect the current PM_QOS_CPU_DMA_LATENCY upper
>> >> bound to take precedence over the new lower bound in cases where the
>> >> former is in conflict with the latter.
>> >
>> > So that needs to be done on top of this series.
>> >
>> >> I can think of several alternatives to that which don't involve
>> >> temporarily holding off your clean-up,
>> >
>> > The cleanup goes in.  Please work on top of it.
>> >
>>
>> Hopefully we can come up with an alternative in that case.  TBH I'd love
>> to see your clean-up go in too, but global PM QoS seemed fairly
>> appealing as a way to split up my work so it could be reviewed
>> incrementally, even though I'm aiming for a finer-grained solution than
>> that.
>
> Well, so "global PM QoS" really means a struct
> pm_qos_constraints object with a global reader of its target_value.
>
> Of course, pm_qos_update_target() is not particularly convenient to
> use, so you'd need to wrap it into an _add/update/remove_request()
> family of functions along the lines of the cpu_latency_qos_*() ones I
> suppose and you won't need the _apply() thing.
>

Yeah, sounds about right.

>> >> but none of them sound particularly exciting:
>> >>
>> >>  1/ Use an interface specific to CPUFREQ, pretty much like the one
>> >>     introduced in my original submission [1].
>> >
>> > It uses frequency QoS already today, do you really need something else?
>> >
>>
>> Yes.  I don't see how frequency QoS could be useful for this as-is,
>> unless we're willing to introduce code in every device driver that takes
>> advantage of this and have them monitor the utilization of every CPU in
>> the system, so they can calculate an appropriate max frequency
>> constraint -- One which we can be reasonably certain won't hurt the
>> long-term performance of the CPU cores these constraints are being
>> placed on.
>
> I'm not really sure if I understand you correctly.
>
> The frequency QoS in cpufreq is a way to influence the min and max
> freq limits used by it for each CPU.  That is done in a couple of
> places like store_max/min_perf_pct() in intel_pstate or
> processor_set_cur_state() (I guess the latter would be close to what
> you think about, but the other way around - you seem to want to
> influence the min and not the max).
>
I do want to influence the max frequency primarily.

> Now, the question what request value(s) to put in there and how to
> compute them is kind of a different one.
>

And the question of what frequency request to put in there is the really
tricky one IMO, because it requires every user of this interface to
monitor CPU performance counters in order to guess what an appropriate
frequency constraint is (i.e. one which won't interfere with the work of
other applications and that won't cause the bottleneck of the same
application to shift from the IO device to the CPU).  That's why
shifting from a frequency constraint to a response latency constraint
seems valuable to me: Even though the optimal CPU frequency constraint
is highly variable in time (based on the instantaneous balance between
CPU and IO load), the optimal latency constraint is approximately
constant for any given workload as long as it continues to be IO-bound
(since the greatest acceptable latency constraint might be a simple
function of the monitor refresh rate, network protocol constraints, IO
device latency, etc.).
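For instance (a made-up helper, purely to illustrate why the latency bound can stay roughly constant while the optimal frequency cannot), a graphics driver could derive its bound once from the display refresh rate and leave it in place:

```c
/*
 * Derive an acceptable response-latency bound in microseconds as a
 * fixed fraction of the frame budget implied by the refresh rate.
 * Unlike a frequency constraint, this value does not depend on the
 * instantaneous balance between CPU and IO load, so it can be computed
 * once per mode set rather than re-sampled continuously.
 */
unsigned int frame_latency_budget_us(unsigned int refresh_hz,
				     unsigned int fraction_pct)
{
	unsigned int frame_us = 1000000 / refresh_hz;

	return frame_us * fraction_pct / 100;
}
```

E.g. at 60 Hz with a 10% fraction the bound comes out to roughly 1.7 ms, and it only changes if the refresh rate does.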

>> >>  2/ Use per-CPU PM QoS, which AFAICT would require the graphics driver
>> >>     to either place a request on every CPU of the system (which would
>> >>     cause a frequent operation to have O(N) complexity on the number of
>> >>     CPUs on the system), or play a cat-and-mouse game with the task
>> >>     scheduler.
>> >
>> > That's in place already too in the form of device PM QoS; see
>> > drivers/base/power/qos.c.
>>
>> But wouldn't that have the drawbacks I was talking about above when
>> trying to use it in order to set this kind of constraints on CPU power
>> management?
>
> I guess so, but the alternatives have drawbacks too.
>
>> >
>> >>  3/ Add a new global PM QoS mechanism roughly duplicating the
>> >>     cpu_latency_qos_* interfaces introduced in this series.  Drop your
>> >>     change making this available to CPU IDLE only.
>> >
>> > It sounds like you really want performance for energy efficiency and
>> > CPU latency has a little to do with that.
>> >
>>
>> The mechanism I've been working on isn't intended to sacrifice long-term
>> performance of the CPU (e.g. if a CPU core is 100% utilized in the
>> steady state by the same or an unrelated application the CPUFREQ
>> governor should still request the maximum turbo frequency for it), it's
>> only meant to affect the trade-off between energy efficiency and latency
>> (e.g. the time it takes for the CPUFREQ governor to respond to an
>> oscillation of the workload that chooses to opt in).
>
> So the meaning of "latency" here is really different from the meaning
> of "latency" in the cpuidle context and in RT.
>
> I guess it would be better to call it "response time" in this case to
> avoid confusion.

I'm fine with calling this time parameter response time instead.  What
is going on under the hood is indeed somewhat different from the cpuidle
case, but the interpretation is closely related: the latency it takes for
the CPU to reach some nominal (e.g. maximum) level of performance after
wake-up in response to a step-function utilization.

>
> Let me ask if I understand you correctly: the problem is that for some
> workloads the time it takes to ramp up the frequency to an acceptable
> (or desirable, more generally) level is too high, so the approach
> under consideration is to clamp the min frequency, either effectively
> or directly, so as to reduce that time?
>

Nope, the problem is precisely the opposite: PM is responding too
quickly to transient oscillations of the CPU load, even though the
actual latency requirements of the workload are far less stringent,
leading to energy-inefficient behavior which severely reduces the
throughput of the system under TDP-bound conditions.

>> >>  4/ Go straight to a scheduling-based approach, which is likely to
>> >>     greatly increase the review effort required to upstream this
>> >>     feature.  (Peter might disagree though?)
>> >
>> > Are you familiar with the utilization clamps mechanism?
>> >
>>
>> Sure, that would be a possibility as alternative to PM QoS, but it would
>> most likely involve task scheduler changes to get it working
>> effectively, which Srinivas and Rodrigo have asked me to leave out from
>> my next RFC submission in the interest of reviewability.  I wouldn't
>> mind plumbing comparable information through utilization clamps instead
>> or as follow-up if you think that's the way forward.
>
> Well, like I said somewhere above (or previously), the main problem
> with utilization clamps is that they have no effect on HWP at the
> moment.  Currently, there is a little connection between the scheduler
> and the HWP algorithm running on the processor.  However, I would like
> to investigate that path, because the utilization clamps provide a
> good interface for applications to request a certain level of service
> from the scheduler (they can really be regarded as a QoS mechanism
> too) and connecting them to the HWP min and max limits somehow might
> work.
>

Yeah, it would be really nice to have the utilization clamps influence
the HWP P-state range.  That said I think the most straightforward way
to achieve this via utilization clamps would be to add a third "response
latency" clamp which defaults to infinity (if the application doesn't
care to set a latency requirement) and is aggregated across tasks queued
to the same RQ by taking the minimum value (so the most stringent
latency request is honored).
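A sketch of that aggregation rule (hypothetical -- no such third clamp exists today), with "infinity" encoded as UINT_MAX for tasks that set no requirement:

```c
#include <limits.h>

/*
 * Aggregate per-task response-latency clamps over a runqueue by taking
 * the minimum, so the most stringent request is honored.  UINT_MAX
 * stands in for infinity, i.e. a task with no latency requirement,
 * which is the proposed default.
 */
unsigned int rq_response_latency_us(const unsigned int *task_clamp_us, int n)
{
	unsigned int rq_min = UINT_MAX;

	for (int i = 0; i < n; i++)
		if (task_clamp_us[i] < rq_min)
			rq_min = task_clamp_us[i];

	return rq_min;
}
```

A runqueue holding only "don't care" tasks would thus keep the default infinite bound, leaving PM behavior unchanged.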

It may be technically possible to implement this based on the MAX clamp
alone, but it would have similar or worse drawbacks than the per-CPU
frequency QoS alternative we were discussing earlier: In order to avoid
hurting the performance of the application, each bottlenecking device
driver would have to periodically monitor the CPU utilization of every
thread of every process talking to the device, and periodically adjust
their MAX utilization clamps in order to adapt to fluctuations of the
balance between CPU and IO load.  That's O(n*f) run-time overhead on the
number of threads and utilization sampling frequency.  In comparison a
latency constraint would be pretty much a fire-and-forget.

Or it might be possible, though likely just as controversial, to put all processes
talking to the same device under a single cgroup in order to manage them
with a single clamp -- Except I don't think that would easily scale to
multiple devices.

> Thanks!

Thanks for your feedback!

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 227 bytes --]

^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface
  2020-02-13 16:35         ` Rafael J. Wysocki
@ 2020-02-14  0:15           ` Francisco Jerez
  2020-02-14 10:42             ` Rafael J. Wysocki
  0 siblings, 1 reply; 74+ messages in thread
From: Francisco Jerez @ 2020-02-14  0:15 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Rafael J. Wysocki, Linux PM, LKML, Amit Kucheria, Pandruvada,
	Srinivas, Rodrigo Vivi, Peter Zijlstra

[-- Attachment #1.1: Type: text/plain, Size: 10080 bytes --]

"Rafael J. Wysocki" <rafael@kernel.org> writes:

> On Thu, Feb 13, 2020 at 12:34 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
>>
>> On Thu, Feb 13, 2020 at 9:07 AM Francisco Jerez <currojerez@riseup.net> wrote:
>> >
>> > "Rafael J. Wysocki" <rafael@kernel.org> writes:
>> >
>> > > On Thu, Feb 13, 2020 at 12:31 AM Francisco Jerez <currojerez@riseup.net> wrote:
>> > >>
>> > >> "Rafael J. Wysocki" <rjw@rjwysocki.net> writes:
>> > >>
>> > >> > Hi All,
>> > >> >
>> > >> > This series of patches is based on the observation that after commit
>> > >> > c3082a674f46 ("PM: QoS: Get rid of unused flags") the only global PM QoS class
>> > >> > in use is PM_QOS_CPU_DMA_LATENCY, but there is still a significant amount of
>> > >> > code dedicated to the handling of global PM QoS classes in general.  That code
>> > >> > takes up space and adds overhead in vain, so it is better to get rid of it.
>> > >> >
>> > >> > Moreover, with that unuseful code removed, the interface for adding QoS
>> > >> > requests for CPU latency becomes inelegant and confusing, so it is better to
>> > >> > clean it up.
>> > >> >
>> > >> > Patches [01/28-12/28] do the first part described above, which also includes
>> > >> > some assorted cleanups of the core PM QoS code that doesn't go away.
>> > >> >
>> > >> > Patches [13/28-25/28] rework the CPU latency QoS interface (in the classic
>> > >> > "define stubs, migrate users, change the API proper" manner), patches
>> > >> > [26-27/28] update the general comments and documentation to match the code
>> > >> > after the previous changes and the last one makes the CPU latency QoS depend
>> > >> > on CPU_IDLE (because cpuidle is the only user of its target value today).
>> > >> >
>> > >> > The majority of the patches in this series don't change the functionality of
>> > >> > the code at all (at least not intentionally).
>> > >> >
>> > >> > Please refer to the changelogs of individual patches for details.
>> > >> >
>> > >> > Thanks!
>> > >>
>> > >> Hi Rafael,
>> > >>
>> > >> I believe some of the interfaces removed here could be useful in the
>> > >> near future.
>> > >
>> > > I disagree.
>> > >
>> > >>  It goes back to the energy efficiency- (and IGP graphics
>> > >> performance-)improving series I submitted a while ago [1].  It relies on
>> > >> some mechanism for the graphics driver to report an I/O bottleneck to
>> > >> CPUFREQ, allowing it to make a more conservative trade-off between
>> > >> energy efficiency and latency, which can greatly reduce the CPU package
>> > >> energy usage of IO-bound applications (in some graphics benchmarks I've
>> > >> seen it reduced by over 40% on my ICL laptop), and therefore also allows
>> > >> TDP-bound applications to obtain a reciprocal improvement in throughput.
>> > >>
>> > >> I'm not particularly fond of the global PM QoS interfaces TBH, it seems
>> > >> like an excessively blunt hammer to me, so I can very much relate to the
>> > >> purpose of this series.  However the finer-grained solution I've
>> > >> implemented has seen some push-back from i915 and CPUFREQ devs due to
>> > >> its complexity, since it relies on task scheduler changes in order to
>> > >> track IO bottlenecks per-process (roughly as suggested by Peter Zijlstra
>> > >> during our previous discussions), pretty much in the spirit of PELT but
>> > >> applied to IO utilization.
>> > >>
>> > >> With that in mind I was hoping we could take advantage of PM QoS as a
>> > >> temporary solution [2], by introducing a global PM QoS class similar but
>> > >> with roughly converse semantics to PM_QOS_CPU_DMA_LATENCY, allowing
>> > >> device drivers to report a *lower* bound on CPU latency beyond which PM
>> > >> shall not bother to reduce latency if doing so would have negative
>> > >> consequences on the energy efficiency and/or parallelism of the system.
>> > >
>> > > So I really don't quite see how that could be responded to, by cpuidle
>> > > say.  What exactly do you mean by "reducing latency" in particular?
>> > >
>> >
>> > cpuidle wouldn't necessarily have to do anything about it since it would
>> > be intended merely as a hint that a device in the system other than the
>> > CPU has a bottleneck.  It could provide a lower bound for the wake-up
>> > latency of the idle states that may be considered by cpuidle.  It seems
>> > to me like it could be useful when a program can tell from the
>> > characteristics of the workload that a latency reduction below a certain
>> > time bound wouldn't materially affect the performance of the system
>> > (e.g. if you have 20 ms to render a GPU-bound frame, you may not care at
>> > all about the CPU taking a fraction of a millisecond more to wake up a
>> > few times each frame).
>>
>> Well, this is not how cpuidle works.
>>
>> What it does is to try to find the deepest idle state that makes sense
>> to let the CPU go into given all of the constraints etc.  IOW it never
>> tries to reduce the latency, it looks how far it can go with possible
>> energy savings given a specific latency limit (or no limit at all).
>>
>> > For cpufreq I was planning to have it influence a time parameter of the
>> > utilization averaging done by the governor, which would allow it to have
>> > a more optimal response in the long term (in the sense of lowering the
>> > energy cost of performing the same work in the specified timeframe),
>> > even if such a large time parameter wouldn't normally be considered
>> > appropriate for utilization averaging due to latency concerns.
>>
>> So this is fine in the schedutil case in principle, but it would not
>> work with HWP, because that doesn't take the scheduler's utilization
>> metrics into account.
>>
>> To cover the HWP case you need to influence the min and max frequency
>> limits, realistically.
>>
>> > >> Of course one would expect the current PM_QOS_CPU_DMA_LATENCY upper
>> > >> bound to take precedence over the new lower bound in cases where the
>> > >> former is in conflict with the latter.
>> > >
>> > > So that needs to be done on top of this series.
>> > >
>> > >> I can think of several alternatives to that which don't involve
>> > >> temporarily holding off your clean-up,
>> > >
>> > > The cleanup goes in.  Please work on top of it.
>> > >
>> >
>> > Hopefully we can come up with an alternative in that case.  TBH I'd love
>> > to see your clean-up go in too, but global PM QoS seemed fairly
>> > appealing as a way to split up my work so it could be reviewed
>> > incrementally, even though I'm aiming for a finer-grained solution than
>> > that.
>>
>> Well, so "global PM QoS" really means a struct pm_qos_constraints
>> object with a global reader of its target_value.
>>
>> Of course, pm_qos_update_target() is not particularly convenient to
>> use, so you'd need to wrap it into an _add/update/remove_request()
>> family of functions along the lines of the cpu_latency_qos_*() ones I
>> suppose and you won't need the _apply() thing.
>>
>> > >> but none of them sound particularly exciting:
>> > >>
>> > >>  1/ Use an interface specific to CPUFREQ, pretty much like the one
>> > >>     introduced in my original submission [1].
>> > >
>> > > It uses frequency QoS already today, do you really need something else?
>> > >
>> >
>> > Yes.  I don't see how frequency QoS could be useful for this as-is,
>> > unless we're willing to introduce code in every device driver that takes
>> > advantage of this and have them monitor the utilization of every CPU in
>> > the system, so they can calculate an appropriate max frequency
>> > constraint -- One which we can be reasonably certain won't hurt the
>> > long-term performance of the CPU cores these constraints are being
>> > placed on.
>>
>> I'm not really sure if I understand you correctly.
>>
>> The frequency QoS in cpufreq is a way to influence the min and max
>> freq limits used by it for each CPU.  That is done in a couple of
>> places like store_max/min_perf_pct() in intel_pstate or
>> processor_set_cur_state() (I guess the latter would be close to what
>> you think about, but the other way around - you seem to want to
>> influence the min and not the max).
>
> It looks like *I* got this part the other way around. :-/
>
> I think that your use case is almost equivalent to the thermal
> pressure one, so you'd want to limit the max and so that would be
> something similar to store_max_perf_pct() with its input side hooked
> up to a QoS list.
>
> But it looks like that QoS list would rather be of a "reservation"
> type, so a request added to it would mean something like "leave this
> fraction of power that appears to be available to the CPU subsystem
> unused, because I need it for a different purpose".  And in principle
> there might be multiple requests in there at the same time and those
> "reservations" would add up.  So that would be a kind of "limited sum"
> QoS type which wasn't even there before my changes.
>
> A user of that QoS list might then do something like
>
> ret = cpu_power_reserve_add(1, 4);
>
> meaning that it wants 25% of the "potential" CPU power to be not
> utilized by CPU performance scaling and that could affect the
> scheduler through load modifications (kind of along the thermal
> pressure patchset discussed some time ago) and HWP (as well as the
> non-HWP intel_pstate by preventing turbo frequencies from being used
> etc).

The problems with this are the same as with the per-CPU frequency QoS
approach: How does the device driver know what the appropriate fraction
of CPU power is?  Depending on the instantaneous behavior of the
workload it might take 1% or 95% of the CPU power in order to keep the
IO device busy.  Each user of this would need to monitor the performance
of every CPU in the system and update the constraints on each of them
periodically (whether or not they're talking to that IO device, which
would possibly negatively impact the latency of unrelated applications
running on other CPUs, unless we're willing to race with the task
scheduler).  A solution based on utilization clamps (with some
extensions) sounds more future-proof to me honestly.


^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [PATCH 16/28] drm: i915: Call cpu_latency_qos_*() instead of pm_qos_*()
  2020-02-11 23:12 ` [PATCH 16/28] drm: i915: " Rafael J. Wysocki
  2020-02-12 10:32   ` Rafael J. Wysocki
@ 2020-02-14  7:42   ` Jani Nikula
  1 sibling, 0 replies; 74+ messages in thread
From: Jani Nikula @ 2020-02-14  7:42 UTC (permalink / raw)
  To: Rafael J. Wysocki, Linux PM
  Cc: LKML, Amit Kucheria, Joonas Lahtinen, Rodrigo Vivi, intel-gfx

On Wed, 12 Feb 2020, "Rafael J. Wysocki" <rjw@rjwysocki.net> wrote:
> From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
>
> Call cpu_latency_qos_add/update/remove_request() instead of
> pm_qos_add/update/remove_request(), respectively, because the
> latter are going to be dropped.
>
> No intentional functional impact.

Heh, that's careful, I usually boldly claim "no functional changes" on
patches like this.

For merging via whichever tree suits you,

Acked-by: Jani Nikula <jani.nikula@intel.com>

>
> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_dp.c |  4 ++--
>  drivers/gpu/drm/i915/i915_drv.c         | 12 +++++-------
>  drivers/gpu/drm/i915/intel_sideband.c   |  5 +++--
>  3 files changed, 10 insertions(+), 11 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> index c7424e2a04a3..208457005a11 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -1360,7 +1360,7 @@ intel_dp_aux_xfer(struct intel_dp *intel_dp,
>  	 * lowest possible wakeup latency and so prevent the cpu from going into
>  	 * deep sleep states.
>  	 */
> -	pm_qos_update_request(&i915->pm_qos, 0);
> +	cpu_latency_qos_update_request(&i915->pm_qos, 0);
>  
>  	intel_dp_check_edp(intel_dp);
>  
> @@ -1488,7 +1488,7 @@ intel_dp_aux_xfer(struct intel_dp *intel_dp,
>  
>  	ret = recv_bytes;
>  out:
> -	pm_qos_update_request(&i915->pm_qos, PM_QOS_DEFAULT_VALUE);
> +	cpu_latency_qos_update_request(&i915->pm_qos, PM_QOS_DEFAULT_VALUE);
>  
>  	if (vdd)
>  		edp_panel_vdd_off(intel_dp, false);
> diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
> index f7385abdd74b..74481a189cfc 100644
> --- a/drivers/gpu/drm/i915/i915_drv.c
> +++ b/drivers/gpu/drm/i915/i915_drv.c
> @@ -502,8 +502,7 @@ static int i915_driver_early_probe(struct drm_i915_private *dev_priv)
>  	mutex_init(&dev_priv->backlight_lock);
>  
>  	mutex_init(&dev_priv->sb_lock);
> -	pm_qos_add_request(&dev_priv->sb_qos,
> -			   PM_QOS_CPU_DMA_LATENCY, PM_QOS_DEFAULT_VALUE);
> +	cpu_latency_qos_add_request(&dev_priv->sb_qos, PM_QOS_DEFAULT_VALUE);
>  
>  	mutex_init(&dev_priv->av_mutex);
>  	mutex_init(&dev_priv->wm.wm_mutex);
> @@ -568,7 +567,7 @@ static void i915_driver_late_release(struct drm_i915_private *dev_priv)
>  	vlv_free_s0ix_state(dev_priv);
>  	i915_workqueues_cleanup(dev_priv);
>  
> -	pm_qos_remove_request(&dev_priv->sb_qos);
> +	cpu_latency_qos_remove_request(&dev_priv->sb_qos);
>  	mutex_destroy(&dev_priv->sb_lock);
>  }
>  
> @@ -1226,8 +1225,7 @@ static int i915_driver_hw_probe(struct drm_i915_private *dev_priv)
>  		}
>  	}
>  
> -	pm_qos_add_request(&dev_priv->pm_qos, PM_QOS_CPU_DMA_LATENCY,
> -			   PM_QOS_DEFAULT_VALUE);
> +	cpu_latency_qos_add_request(&dev_priv->pm_qos, PM_QOS_DEFAULT_VALUE);
>  
>  	intel_gt_init_workarounds(dev_priv);
>  
> @@ -1273,7 +1271,7 @@ static int i915_driver_hw_probe(struct drm_i915_private *dev_priv)
>  err_msi:
>  	if (pdev->msi_enabled)
>  		pci_disable_msi(pdev);
> -	pm_qos_remove_request(&dev_priv->pm_qos);
> +	cpu_latency_qos_remove_request(&dev_priv->pm_qos);
>  err_mem_regions:
>  	intel_memory_regions_driver_release(dev_priv);
>  err_ggtt:
> @@ -1296,7 +1294,7 @@ static void i915_driver_hw_remove(struct drm_i915_private *dev_priv)
>  	if (pdev->msi_enabled)
>  		pci_disable_msi(pdev);
>  
> -	pm_qos_remove_request(&dev_priv->pm_qos);
> +	cpu_latency_qos_remove_request(&dev_priv->pm_qos);
>  }
>  
>  /**
> diff --git a/drivers/gpu/drm/i915/intel_sideband.c b/drivers/gpu/drm/i915/intel_sideband.c
> index cbfb7171d62d..0648eda309e4 100644
> --- a/drivers/gpu/drm/i915/intel_sideband.c
> +++ b/drivers/gpu/drm/i915/intel_sideband.c
> @@ -60,7 +60,7 @@ static void __vlv_punit_get(struct drm_i915_private *i915)
>  	 * to the Valleyview P-unit and not all sideband communications.
>  	 */
>  	if (IS_VALLEYVIEW(i915)) {
> -		pm_qos_update_request(&i915->sb_qos, 0);
> +		cpu_latency_qos_update_request(&i915->sb_qos, 0);
>  		on_each_cpu(ping, NULL, 1);
>  	}
>  }
> @@ -68,7 +68,8 @@ static void __vlv_punit_get(struct drm_i915_private *i915)
>  static void __vlv_punit_put(struct drm_i915_private *i915)
>  {
>  	if (IS_VALLEYVIEW(i915))
> -		pm_qos_update_request(&i915->sb_qos, PM_QOS_DEFAULT_VALUE);
> +		cpu_latency_qos_update_request(&i915->sb_qos,
> +					       PM_QOS_DEFAULT_VALUE);
>  
>  	iosf_mbi_punit_release();
>  }

-- 
Jani Nikula, Intel Open Source Graphics Center


* Re: [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface
  2020-02-14  0:15           ` Francisco Jerez
@ 2020-02-14 10:42             ` Rafael J. Wysocki
  2020-02-14 20:32               ` Francisco Jerez
  0 siblings, 1 reply; 74+ messages in thread
From: Rafael J. Wysocki @ 2020-02-14 10:42 UTC (permalink / raw)
  To: Francisco Jerez
  Cc: Rafael J. Wysocki, Rafael J. Wysocki, Linux PM, LKML,
	Amit Kucheria, Pandruvada, Srinivas, Rodrigo Vivi,
	Peter Zijlstra

On Fri, Feb 14, 2020 at 1:14 AM Francisco Jerez <currojerez@riseup.net> wrote:
>
> "Rafael J. Wysocki" <rafael@kernel.org> writes:
>
> > On Thu, Feb 13, 2020 at 12:34 PM Rafael J. Wysocki <rafael@kernel.org> wrote:

[cut]

> >
> > I think that your use case is almost equivalent to the thermal
> > pressure one, so you'd want to limit the max and so that would be
> > something similar to store_max_perf_pct() with its input side hooked
> > up to a QoS list.
> >
> > But it looks like that QoS list would rather be of a "reservation"
> > type, so a request added to it would mean something like "leave this
> > fraction of power that appears to be available to the CPU subsystem
> > unused, because I need it for a different purpose".  And in principle
> > there might be multiple requests in there at the same time and those
> > "reservations" would add up.  So that would be a kind of "limited sum"
> > QoS type which wasn't even there before my changes.
> >
> > A user of that QoS list might then do something like
> >
> > ret = cpu_power_reserve_add(1, 4);
> >
> > meaning that it wants 25% of the "potential" CPU power to be not
> > utilized by CPU performance scaling and that could affect the
> > scheduler through load modifications (kind of along the thermal
> > pressure patchset discussed some time ago) and HWP (as well as the
> > non-HWP intel_pstate by preventing turbo frequencies from being used
> > etc).
>
> The problems with this are the same as with the per-CPU frequency QoS
> approach: How does the device driver know what the appropriate fraction
> of CPU power is?

Of course it doesn't know and it may never know exactly, but it may guess.

Also, it may set up a feedback loop: request an aggressive
reservation, run for a while, measure something and refine if there's
headroom.  Then repeat.

> Depending on the instantaneous behavior of the
> workload it might take 1% or 95% of the CPU power in order to keep the
> IO device busy.  Each user of this would need to monitor the performance
> of every CPU in the system and update the constraints on each of them
> periodically (whether or not they're talking to that IO device, which
> would possibly negatively impact the latency of unrelated applications
> running on other CPUs, unless we're willing to race with the task
> scheduler).

No, it just needs to measure a signal representing how much power *it*
gets and decide whether or not it can let the CPU subsystem use more
power.

> A solution based on utilization clamps (with some
> extensions) sounds more future-proof to me honestly.

Except that it would be rather hard to connect it to something like
RAPL, which should be quite straightforward with the approach I'm
talking about.

The problem with all scheduler-based ways, again, is that there is no
direct connection between the scheduler and HWP, or even with whatever
the processor does with the P-states in the turbo range.  If any
P-state in the turbo range is requested, the processor has a license
to use whatever P-state it wants, so this pretty much means allowing
it to use as much power as it can.

So in the first place, if you want to limit the use of power in the
CPU subsystem through frequency control alone, you need to prevent it
from using turbo P-states at all.  However, with RAPL you can just
limit power which may still allow some (but not all) turbo P-states to
be used.


* Re: [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface
  2020-02-14 10:42             ` Rafael J. Wysocki
@ 2020-02-14 20:32               ` Francisco Jerez
  2020-02-24 10:39                 ` Rafael J. Wysocki
  0 siblings, 1 reply; 74+ messages in thread
From: Francisco Jerez @ 2020-02-14 20:32 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Rafael J. Wysocki, Rafael J. Wysocki, Linux PM, LKML,
	Amit Kucheria, Pandruvada, Srinivas, Rodrigo Vivi, Peter Zijlstra


"Rafael J. Wysocki" <rafael@kernel.org> writes:

> On Fri, Feb 14, 2020 at 1:14 AM Francisco Jerez <currojerez@riseup.net> wrote:
>>
>> "Rafael J. Wysocki" <rafael@kernel.org> writes:
>>
>> > On Thu, Feb 13, 2020 at 12:34 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
>
> [cut]
>
>> >
>> > I think that your use case is almost equivalent to the thermal
>> > pressure one, so you'd want to limit the max and so that would be
>> > something similar to store_max_perf_pct() with its input side hooked
>> > up to a QoS list.
>> >
>> > But it looks like that QoS list would rather be of a "reservation"
>> > type, so a request added to it would mean something like "leave this
>> > fraction of power that appears to be available to the CPU subsystem
>> > unused, because I need it for a different purpose".  And in principle
>> > there might be multiple requests in there at the same time and those
>> > "reservations" would add up.  So that would be a kind of "limited sum"
>> > QoS type which wasn't even there before my changes.
>> >
>> > A user of that QoS list might then do something like
>> >
>> > ret = cpu_power_reserve_add(1, 4);
>> >
>> > meaning that it wants 25% of the "potential" CPU power to be not
>> > utilized by CPU performance scaling and that could affect the
>> > scheduler through load modifications (kind of along the thermal
>> > pressure patchset discussed some time ago) and HWP (as well as the
>> > non-HWP intel_pstate by preventing turbo frequencies from being used
>> > etc).
>>
>> The problems with this are the same as with the per-CPU frequency QoS
>> approach: How does the device driver know what the appropriate fraction
>> of CPU power is?
>
> Of course it doesn't know and it may never know exactly, but it may guess.
>
> Also, it may set up a feedback loop: request an aggressive
> reservation, run for a while, measure something and refine if there's
> headroom.  Then repeat.
>

Yeah, of course, but that's obviously more computationally intensive and
less accurate than computing an approximately optimal constraint in a
single iteration (based on knowledge from performance counters and a
notion of the latency requirements of the application), since such a
feedback loop relies on repeatedly overshooting and undershooting the
optimal value (the latter causes an artificial CPU bottleneck, possibly
slowing down other applications too) in order to converge to and remain
in a neighborhood of the optimal value.

Incidentally people tested a power balancing solution with a feedback
loop very similar to the one you're describing side by side to the RFC
patch series I provided a link to earlier (which targeted Gen9 LP
parts), and the energy efficiency improvements they observed were
roughly half of the improvement obtained with my series unsurprisingly.

Not to speak about generalizing such a feedback loop to bottlenecks on
multiple I/O devices.

>> Depending on the instantaneous behavior of the
>> workload it might take 1% or 95% of the CPU power in order to keep the
>> IO device busy.  Each user of this would need to monitor the performance
>> of every CPU in the system and update the constraints on each of them
>> periodically (whether or not they're talking to that IO device, which
>> would possibly negatively impact the latency of unrelated applications
>> running on other CPUs, unless we're willing to race with the task
>> scheduler).
>
> No, it just needs to measure a signal representing how much power *it*
> gets and decide whether or not it can let the CPU subsystem use more
> power.
>

Well yes it's technically possible to set frequency constraints based on
trial-and-error without sampling utilization information from the CPU
cores, but don't we agree that this kind of information can be highly
valuable?

>> A solution based on utilization clamps (with some
>> extensions) sounds more future-proof to me honestly.
>
> Except that it would be rather hard to connect it to something like
> RAPL, which should be quite straightforward with the approach I'm
> talking about.
>

I think using RAPL as an additional control variable would be useful, but
fully orthogonal to the cap being set by some global mechanism or being
derived from the aggregation of a number of per-process power caps based
on the scheduler behavior.  The latter sounds like the more reasonable
fit for a multi-tasking, possibly virtualized environment honestly.
Either way RAPL is neither necessary nor sufficient in order to achieve
the energy efficiency improvement I'm working on.

> The problem with all scheduler-based ways, again, is that there is no
> direct connection between the scheduler and HWP,

I was planning to introduce such a connection in RFC part 2.  I have a
prototype for that based on a not particularly pretty custom interface,
I wouldn't mind trying to get it to use utilization clamps if you think
that's the way forward.

> or even with whatever the processor does with the P-states in the
> turbo range.  If any P-state in the turbo range is requested, the
> processor has a license to use whatever P-state it wants, so this
> pretty much means allowing it to use as much power as it can.
>
> So in the first place, if you want to limit the use of power in the
> CPU subsystem through frequency control alone, you need to prevent it
> from using turbo P-states at all.  However, with RAPL you can just
> limit power which may still allow some (but not all) turbo P-states to
> be used.

My goal is not to limit the use of power of the CPU (if it has enough
load to utilize 100% of the cycles at turbo frequency so be it), but to
get it to use it more efficiently.  If you are constrained by a given
power budget (e.g. the TDP or the one you want set via RAPL) you can do
more with it if you set a stable frequency rather than if you let the
CPU bounce back and forth between turbo and idle.  This can only be
achieved effectively if the frequency governor has a rough idea of the
latency requirements of the workload, since it involves a
latency/energy-efficiency trade-off.



* Re: [PATCH 23/28] drivers: usb: Call cpu_latency_qos_*() instead of pm_qos_*()
  2020-02-12 18:38   ` Greg KH
@ 2020-02-18  8:03     ` Peter Chen
  2020-02-18  8:08       ` Greg KH
  0 siblings, 1 reply; 74+ messages in thread
From: Peter Chen @ 2020-02-18  8:03 UTC (permalink / raw)
  To: Greg KH; +Cc: Rafael J. Wysocki, Linux PM, LKML, Amit Kucheria, linux-usb

On 20-02-12 10:38:27, Greg KH wrote:
> On Wed, Feb 12, 2020 at 12:28:44AM +0100, Rafael J. Wysocki wrote:
> > From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
> > 
> > Call cpu_latency_qos_add/remove_request() instead of
> > pm_qos_add/remove_request(), respectively, because the
> > latter are going to be dropped.
> > 
> > No intentional functional impact.
> > 
> > Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> > ---
> >  drivers/usb/chipidea/ci_hdrc_imx.c | 12 +++++-------
> >  1 file changed, 5 insertions(+), 7 deletions(-)
> 
> Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

Hi Greg,

With this patch applied, usb-next fails to compile.

-- 

Thanks,
Peter Chen


* Re: [PATCH 23/28] drivers: usb: Call cpu_latency_qos_*() instead of pm_qos_*()
  2020-02-18  8:03     ` Peter Chen
@ 2020-02-18  8:08       ` Greg KH
  2020-02-18  8:11         ` Peter Chen
  0 siblings, 1 reply; 74+ messages in thread
From: Greg KH @ 2020-02-18  8:08 UTC (permalink / raw)
  To: Peter Chen; +Cc: Rafael J. Wysocki, Linux PM, LKML, Amit Kucheria, linux-usb

On Tue, Feb 18, 2020 at 08:03:13AM +0000, Peter Chen wrote:
> On 20-02-12 10:38:27, Greg KH wrote:
> > On Wed, Feb 12, 2020 at 12:28:44AM +0100, Rafael J. Wysocki wrote:
> > > From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
> > > 
> > > Call cpu_latency_qos_add/remove_request() instead of
> > > pm_qos_add/remove_request(), respectively, because the
> > > latter are going to be dropped.
> > > 
> > > No intentional functional impact.
> > > 
> > > Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> > > ---
> > >  drivers/usb/chipidea/ci_hdrc_imx.c | 12 +++++-------
> > >  1 file changed, 5 insertions(+), 7 deletions(-)
> > 
> > Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
> 
> Hi Greg,
> 
> With this patch applied, usb-next fails to compile.

Did I apply this?  It looks to need to go through Rafael's tree which
introduces the new api, right?

thanks,

greg k-h


* RE: [PATCH 23/28] drivers: usb: Call cpu_latency_qos_*() instead of pm_qos_*()
  2020-02-18  8:08       ` Greg KH
@ 2020-02-18  8:11         ` Peter Chen
  0 siblings, 0 replies; 74+ messages in thread
From: Peter Chen @ 2020-02-18  8:11 UTC (permalink / raw)
  To: Greg KH; +Cc: Rafael J. Wysocki, Linux PM, LKML, Amit Kucheria, linux-usb

 
> 
> On Tue, Feb 18, 2020 at 08:03:13AM +0000, Peter Chen wrote:
> > On 20-02-12 10:38:27, Greg KH wrote:
> > > On Wed, Feb 12, 2020 at 12:28:44AM +0100, Rafael J. Wysocki wrote:
> > > > From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
> > > >
> > > > Call cpu_latency_qos_add/remove_request() instead of
> > > > pm_qos_add/remove_request(), respectively, because the latter are
> > > > going to be dropped.
> > > >
> > > > No intentional functional impact.
> > > >
> > > > Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> > > > ---
> > > >  drivers/usb/chipidea/ci_hdrc_imx.c | 12 +++++-------
> > > >  1 file changed, 5 insertions(+), 7 deletions(-)
> > >
> > > Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
> >
> > Hi Greg,
> >
> > With this patch applied, usb-next fails to compile.
> 
> Did I apply this?  It looks to need to go through Rafael's tree which introduces the
> new api, right?
> 

Not yet, I have just tried it locally on my chipidea tree.

Peter


* Re: [PATCH 23/28] drivers: usb: Call cpu_latency_qos_*() instead of pm_qos_*()
  2020-02-11 23:28 ` [PATCH 23/28] drivers: usb: " Rafael J. Wysocki
  2020-02-12 18:38   ` Greg KH
@ 2020-02-19  1:09   ` Peter Chen
  1 sibling, 0 replies; 74+ messages in thread
From: Peter Chen @ 2020-02-19  1:09 UTC (permalink / raw)
  To: Rafael J. Wysocki; +Cc: Linux PM, LKML, Amit Kucheria, linux-usb

On 20-02-12 00:28:44, Rafael J. Wysocki wrote:
> From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
> 
> Call cpu_latency_qos_add/remove_request() instead of
> pm_qos_add/remove_request(), respectively, because the
> latter are going to be dropped.
> 
> No intentional functional impact.
> 
> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> ---
>  drivers/usb/chipidea/ci_hdrc_imx.c | 12 +++++-------
>  1 file changed, 5 insertions(+), 7 deletions(-)
> 
> diff --git a/drivers/usb/chipidea/ci_hdrc_imx.c b/drivers/usb/chipidea/ci_hdrc_imx.c
> index d8e7eb2f97b9..a479af3ae31d 100644
> --- a/drivers/usb/chipidea/ci_hdrc_imx.c
> +++ b/drivers/usb/chipidea/ci_hdrc_imx.c
> @@ -393,8 +393,7 @@ static int ci_hdrc_imx_probe(struct platform_device *pdev)
>  	}
>  
>  	if (pdata.flags & CI_HDRC_PMQOS)
> -		pm_qos_add_request(&data->pm_qos_req,
> -			PM_QOS_CPU_DMA_LATENCY, 0);
> +		cpu_latency_qos_add_request(&data->pm_qos_req, 0);
>  
>  	ret = imx_get_clks(dev);
>  	if (ret)
> @@ -478,7 +477,7 @@ static int ci_hdrc_imx_probe(struct platform_device *pdev)
>  		/* don't overwrite original ret (cf. EPROBE_DEFER) */
>  		regulator_disable(data->hsic_pad_regulator);
>  	if (pdata.flags & CI_HDRC_PMQOS)
> -		pm_qos_remove_request(&data->pm_qos_req);
> +		cpu_latency_qos_remove_request(&data->pm_qos_req);
>  	data->ci_pdev = NULL;
>  	return ret;
>  }
> @@ -499,7 +498,7 @@ static int ci_hdrc_imx_remove(struct platform_device *pdev)
>  	if (data->ci_pdev) {
>  		imx_disable_unprepare_clks(&pdev->dev);
>  		if (data->plat_data->flags & CI_HDRC_PMQOS)
> -			pm_qos_remove_request(&data->pm_qos_req);
> +			cpu_latency_qos_remove_request(&data->pm_qos_req);
>  		if (data->hsic_pad_regulator)
>  			regulator_disable(data->hsic_pad_regulator);
>  	}
> @@ -527,7 +526,7 @@ static int __maybe_unused imx_controller_suspend(struct device *dev)
>  
>  	imx_disable_unprepare_clks(dev);
>  	if (data->plat_data->flags & CI_HDRC_PMQOS)
> -		pm_qos_remove_request(&data->pm_qos_req);
> +		cpu_latency_qos_remove_request(&data->pm_qos_req);
>  
>  	data->in_lpm = true;
>  
> @@ -547,8 +546,7 @@ static int __maybe_unused imx_controller_resume(struct device *dev)
>  	}
>  
>  	if (data->plat_data->flags & CI_HDRC_PMQOS)
> -		pm_qos_add_request(&data->pm_qos_req,
> -			PM_QOS_CPU_DMA_LATENCY, 0);
> +		cpu_latency_qos_add_request(&data->pm_qos_req, 0);
>  
>  	ret = imx_prepare_enable_clks(dev);
>  	if (ret)
> -- 
> 2.16.4
> 

Acked-by: Peter Chen <peter.chen@nxp.com>

-- 

Thanks,
Peter Chen


* Re: [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface
  2020-02-13 11:38         ` Rafael J. Wysocki
@ 2020-02-21 22:10           ` Francisco Jerez
  2020-02-24  0:29             ` Rafael J. Wysocki
  0 siblings, 1 reply; 74+ messages in thread
From: Francisco Jerez @ 2020-02-21 22:10 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Rafael J. Wysocki, Rafael J. Wysocki, Linux PM, LKML,
	Amit Kucheria, Pandruvada, Srinivas, Rodrigo Vivi, Peter Zijlstra


"Rafael J. Wysocki" <rafael@kernel.org> writes:

> On Thu, Feb 13, 2020 at 9:09 AM Francisco Jerez <currojerez@riseup.net> wrote:
>>
>> "Rafael J. Wysocki" <rafael@kernel.org> writes:
>>
>> > On Thu, Feb 13, 2020 at 1:16 AM Rafael J. Wysocki <rafael@kernel.org> wrote:
>> >>
>> >> On Thu, Feb 13, 2020 at 12:31 AM Francisco Jerez <currojerez@riseup.net> wrote:
>> >> >
>
> [cut]
>
>> >
>> > And BTW, posting patches as RFC is fine even if they have not been
>> > tested.  At least you let people know that you work on something this
>> > way, so if they work on changes in the same area, they may take that
>> > into consideration.
>> >
>>
>> Sure, that was going to be the first RFC.
>>
>> > Also if there are objections to your proposal, you may save quite a
>> > bit of time by sending it early.
>> >
>> > It is unfortunate that this series has clashed with the changes that
>> > you were about to propose, but in this particular case in my view it
>> > is better to clean up things and start over.
>> >
>>
>> Luckily it doesn't clash with the second RFC I was meaning to send,
>> maybe we should just skip the first?
>
> Yes, please.
>
>> Or maybe it's valuable as a curiosity anyway?
>
> No, let's just focus on the latest one.
>
> Thanks!

We don't seem to have reached much of an agreement on the general
direction of RFC2, so I can't really get started with it.  Here is RFC1
for the record:

https://github.com/curro/linux/commits/intel_pstate-lp-hwp-v10.8-alt

Specifically the following patch conflicts with this series:

https://github.com/curro/linux/commit/9a16f35531bbb76d38493da892ece088e31dc2e0

Series improves performance-per-watt of GfxBench gl_4 (AKA Car Chase) by
over 15% on my system with the branch above, actual FPS "only" improves
about 5.9% on ICL laptop due to it being very lightly TDP-bound with its
rather huge TDP.  The performance of almost every graphics benchmark
I've tried improves significantly with it (a number of SynMark
test-cases are improved by around 40% in perf-per-watt, Egypt
perf-per-watt improves by about 25%).

Hopefully we can come up with some alternative plan of action.



* Re: [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface
  2020-02-21 22:10           ` Francisco Jerez
@ 2020-02-24  0:29             ` Rafael J. Wysocki
  2020-02-24 21:06               ` Francisco Jerez
  0 siblings, 1 reply; 74+ messages in thread
From: Rafael J. Wysocki @ 2020-02-24  0:29 UTC (permalink / raw)
  To: Francisco Jerez
  Cc: Rafael J. Wysocki, Rafael J. Wysocki, Linux PM, LKML,
	Amit Kucheria, Pandruvada, Srinivas, Rodrigo Vivi,
	Peter Zijlstra

On Fri, Feb 21, 2020 at 11:10 PM Francisco Jerez <currojerez@riseup.net> wrote:
>
> "Rafael J. Wysocki" <rafael@kernel.org> writes:
>
> > On Thu, Feb 13, 2020 at 9:09 AM Francisco Jerez <currojerez@riseup.net> wrote:
> >>
> >> "Rafael J. Wysocki" <rafael@kernel.org> writes:
> >>
> >> > On Thu, Feb 13, 2020 at 1:16 AM Rafael J. Wysocki <rafael@kernel.org> wrote:
> >> >>
> >> >> On Thu, Feb 13, 2020 at 12:31 AM Francisco Jerez <currojerez@riseup.net> wrote:
> >> >> >
> >
> > [cut]
> >
> >> >
> >> > And BTW, posting patches as RFC is fine even if they have not been
> >> > tested.  At least you let people know that you work on something this
> >> > way, so if they work on changes in the same area, they may take that
> >> > into consideration.
> >> >
> >>
> >> Sure, that was going to be the first RFC.
> >>
> >> > Also if there are objections to your proposal, you may save quite a
> >> > bit of time by sending it early.
> >> >
> >> > It is unfortunate that this series has clashed with the changes that
> >> > you were about to propose, but in this particular case in my view it
> >> > is better to clean up things and start over.
> >> >
> >>
> >> Luckily it doesn't clash with the second RFC I was meaning to send,
> >> maybe we should just skip the first?
> >
> > Yes, please.
> >
> >> Or maybe it's valuable as a curiosity anyway?
> >
> > No, let's just focus on the latest one.
> >
> > Thanks!
>
> We don't seem to have reached much of an agreement on the general
> direction of RFC2, so I can't really get started with it.  Here is RFC1
> for the record:
>
> https://github.com/curro/linux/commits/intel_pstate-lp-hwp-v10.8-alt

Appreciate the link, but that hasn't been posted to linux-pm yet, so
there's not much to discuss.

And when you post it, please rebase it on top of linux-next.

> Specifically the following patch conflicts with this series:
>
> https://github.com/curro/linux/commit/9a16f35531bbb76d38493da892ece088e31dc2e0
>
> The series improves performance-per-watt of GfxBench gl_4 (AKA Car Chase) by
> over 15% on my system with the branch above; actual FPS "only" improves by
> about 5.9% on the ICL laptop, due to it being very lightly TDP-bound with its
> rather huge TDP.  The performance of almost every graphics benchmark
> I've tried improves significantly with it (a number of SynMark
> test-cases improve by around 40% in perf-per-watt, and Egypt
> perf-per-watt improves by about 25%).
>
> Hopefully we can come up with some alternative plan of action.

It is very easy to replace the patch above with an alternative one on
top of linux-next that will add CPU_RESPONSE_FREQUENCY QoS along the
lines of the CPU latency QoS implementation in there, without the need
to restore global QoS classes.

IOW, you don't really need the code that goes away in linux-next to
implement what you need.
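
[For illustration, the shape such a replacement could take — a
response-frequency QoS list aggregated like the CPU latency QoS one, but
taking the maximum of the requests rather than the minimum — can be modeled
in plain C.  This is a self-contained user-space sketch, not kernel code:
the `resp_freq_*` names and the no-constraint default are hypothetical, and
a real implementation would reuse the pm_qos plist machinery instead of an
open-coded list.]

```c
/* Hypothetical default: no response-frequency floor requested. */
#define RESP_FREQ_NO_CONSTRAINT 0

struct resp_freq_request {
	int value;			/* requested response frequency */
	struct resp_freq_request *next;
};

static struct resp_freq_request *resp_freq_list;

/* Add a request; the effective target is the strictest (largest) value. */
static void resp_freq_add_request(struct resp_freq_request *req, int value)
{
	req->value = value;
	req->next = resp_freq_list;
	resp_freq_list = req;
}

static void resp_freq_remove_request(struct resp_freq_request *req)
{
	struct resp_freq_request **p;

	for (p = &resp_freq_list; *p; p = &(*p)->next) {
		if (*p == req) {
			*p = req->next;
			return;
		}
	}
}

/* Counterpart of cpu_latency_qos_limit(): MAX aggregation, not MIN. */
static int resp_freq_limit(void)
{
	struct resp_freq_request *r;
	int max = RESP_FREQ_NO_CONSTRAINT;

	for (r = resp_freq_list; r; r = r->next)
		if (r->value > max)
			max = r->value;
	return max;
}
```

[The only structural difference from the CPU latency QoS in linux-next is
the aggregation direction: a latency request of N microseconds lowers the
target, whereas a response-frequency request raises it.]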

Thanks!


* Re: [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface
  2020-02-14 20:32               ` Francisco Jerez
@ 2020-02-24 10:39                 ` Rafael J. Wysocki
  2020-02-24 21:16                   ` Francisco Jerez
  0 siblings, 1 reply; 74+ messages in thread
From: Rafael J. Wysocki @ 2020-02-24 10:39 UTC (permalink / raw)
  To: Francisco Jerez
  Cc: Rafael J. Wysocki, Rafael J. Wysocki, Linux PM, LKML,
	Amit Kucheria, Pandruvada, Srinivas, Rodrigo Vivi,
	Peter Zijlstra

Sorry for the late response, I was offline for a major part of the
previous week.

On Fri, Feb 14, 2020 at 9:31 PM Francisco Jerez <currojerez@riseup.net> wrote:
>
> "Rafael J. Wysocki" <rafael@kernel.org> writes:
>
> > On Fri, Feb 14, 2020 at 1:14 AM Francisco Jerez <currojerez@riseup.net> wrote:
> >>
> >> "Rafael J. Wysocki" <rafael@kernel.org> writes:
> >>
> >> > On Thu, Feb 13, 2020 at 12:34 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
> >
> > [cut]
> >
> >> >
> >> > I think that your use case is almost equivalent to the thermal
> >> > pressure one, so you'd want to limit the max and so that would be
> >> > something similar to store_max_perf_pct() with its input side hooked
> >> > up to a QoS list.
> >> >
> >> > But it looks like that QoS list would rather be of a "reservation"
> >> > type, so a request added to it would mean something like "leave this
> >> > fraction of power that appears to be available to the CPU subsystem
> >> > unused, because I need it for a different purpose".  And in principle
> >> > there might be multiple requests in there at the same time and those
> >> > "reservations" would add up.  So that would be a kind of "limited sum"
> >> > QoS type which wasn't even there before my changes.
> >> >
> >> > A user of that QoS list might then do something like
> >> >
> >> > ret = cpu_power_reserve_add(1, 4);
> >> >
> >> > meaning that it wants 25% of the "potential" CPU power to be not
> >> > utilized by CPU performance scaling and that could affect the
> >> > scheduler through load modifications (kind of along the thermal
> >> > pressure patchset discussed some time ago) and HWP (as well as the
> >> > non-HWP intel_pstate by preventing turbo frequencies from being used
> >> > etc).
> >>
> >> The problems with this are the same as with the per-CPU frequency QoS
> >> approach: How does the device driver know what the appropriate fraction
> >> of CPU power is?
> >
> > Of course it doesn't know and it may never know exactly, but it may guess.
> >
> > Also, it may set up a feedback loop: request an aggressive
> > reservation, run for a while, measure something and refine if there's
> > headroom.  Then repeat.
> >
>
> Yeah, of course, but that's obviously more computationally intensive and
> less accurate than computing an approximately optimal constraint in a
> single iteration (based on knowledge from performance counters and a
> notion of the latency requirements of the application), since such a
> feedback loop relies on repeatedly overshooting and undershooting the
> optimal value (the latter causes an artificial CPU bottleneck, possibly
> slowing down other applications too) in order to converge to and remain
> in a neighborhood of the optimal value.

I'm not saying that feedback loops are the way to go in general, but
that in some cases they are applicable and this particular case looks
like it may be one of them.
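
[The feedback loop described above — request an aggressive reservation, run
for a while, measure, refine if there's headroom — can be sketched in a few
lines.  This is a purely illustrative user-space toy, not proposed driver
code; the 5% step size and the dead band are arbitrary choices.]

```c
/*
 * Toy controller: one refinement step for a CPU power reservation
 * (in percent), driven by the device utilization the driver observes.
 */
static int refine_reservation(int reservation, int device_util, int target)
{
	if (device_util < target - 5 && reservation < 90)
		return reservation + 5;	/* device starved: reserve more */
	if (device_util > target + 5 && reservation > 5)
		return reservation - 5;	/* headroom: give power back to CPUs */
	return reservation;		/* in band: keep the current split */
}
```

[In practice the loop would be driven periodically by a signal representing
how much power the device gets, per the suggestion above.]
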

> Incidentally people tested a power balancing solution with a feedback
> loop very similar to the one you're describing side by side to the RFC
> patch series I provided a link to earlier (which targeted Gen9 LP
> parts), and the energy efficiency improvements they observed were
> roughly half of the improvement obtained with my series unsurprisingly.
>
> Not to speak about generalizing such a feedback loop to bottlenecks on
> multiple I/O devices.

The generalizing part I'm totally unconvinced about.

> >> Depending on the instantaneous behavior of the
> >> workload it might take 1% or 95% of the CPU power in order to keep the
> >> IO device busy.  Each user of this would need to monitor the performance
> >> of every CPU in the system and update the constraints on each of them
> >> periodically (whether or not they're talking to that IO device, which
> >> would possibly negatively impact the latency of unrelated applications
> >> running on other CPUs, unless we're willing to race with the task
> >> scheduler).
> >
> > No, it just needs to measure a signal representing how much power *it*
> > gets and decide whether or not it can let the CPU subsystem use more
> > power.
> >
>
> Well yes it's technically possible to set frequency constraints based on
> trial-and-error without sampling utilization information from the CPU
> cores, but don't we agree that this kind of information can be highly
> valuable?

OK, so there are three things, frequency constraints (meaning HWP min
and max limits, for example), frequency requests (this is what cpufreq
does) and power limits.

If the processor has at least some autonomy in driving the frequency,
using frequency requests (i.e. cpufreq governors) for limiting power
is inefficient in general, because the processor is not required to
grant those requests at all.

Using frequency limits may be good enough, but it generally limits the
processor's ability to respond at short-time scales (for example,
setting the max frequency limit will prevent the processor from using
frequencies above that limit even temporarily, but that might be the
most energy-efficient option in some cases).

Using power limits (which is what RAPL does) doesn't bring such shortcomings in.
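
[The distinction between the three can be made concrete with a toy model of
the frequency-selection decision (hypothetical names; real HWP behavior is
of course far more involved): min/max *limits* clamp whatever frequency the
processor picks autonomously, while a *request* is merely an input it is
free to disregard.]

```c
static int clamp_val(int v, int lo, int hi)
{
	return v < lo ? lo : v > hi ? hi : v;
}

/*
 * Toy model of frequency selection by a processor with some autonomy:
 * `requested` (what a cpufreq governor asks for) need not be granted,
 * but the min/max limits (e.g. HWP min/max) are always honored.
 */
static int effective_freq(int requested, int autonomous,
			  int min_lim, int max_lim)
{
	(void)requested;	/* the processor may ignore the request */
	return clamp_val(autonomous, min_lim, max_lim);
}
```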

> >> A solution based on utilization clamps (with some
> >> extensions) sounds more future-proof to me honestly.
> >
> > Except that it would be rather hard to connect it to something like
> > RAPL, which should be quite straightforward with the approach I'm
> > talking about.
> >
>
> I think using RAPL as additional control variable would be useful, but
> fully orthogonal to the cap being set by some global mechanism or being
> derived from the aggregation of a number of per-process power caps based
> on the scheduler behavior.

I'm not sure what you mean by "the cap" here.  A maximum frequency
limit or something else?

> The latter sounds like the more reasonable
> fit for a multi-tasking, possibly virtualized environment honestly.
> Either way RAPL is neither necessary nor sufficient in order to achieve
> the energy efficiency improvement I'm working on.

The "not necessary" I can agree with, but I don't see any arguments
for the "not sufficient" statement.

> > The problem with all scheduler-based ways, again, is that there is no
> > direct connection between the scheduler and HWP,
>
> I was planning to introduce such a connection in RFC part 2.  I have a
> prototype for that based on a not particularly pretty custom interface,
> I wouldn't mind trying to get it to use utilization clamps if you think
> that's the way forward.

Well, I may think so, but that's just thinking at this point.  I have
no real numbers to support that theory.

> > or even with whatever the processor does with the P-states in the
> > turbo range.  If any P-state in the turbo range is requested, the
> > processor has a license to use whatever P-state it wants, so this
> > pretty much means allowing it to use as much power as it can.
> >
> > So in the first place, if you want to limit the use of power in the
> > CPU subsystem through frequency control alone, you need to prevent it
> > from using turbo P-states at all.  However, with RAPL you can just
> > limit power which may still allow some (but not all) turbo P-states to
> > be used.
>
> My goal is not to limit the use of power of the CPU (if it has enough
> load to utilize 100% of the cycles at turbo frequency so be it), but to
> get it to use it more efficiently.  If you are constrained by a given
> power budget (e.g. the TDP or the one you want set via RAPL) you can do
> more with it if you set a stable frequency rather than if you let the
> CPU bounce back and forth between turbo and idle.

Well, this basically means driving the CPU frequency by hand with the
assumption that the processor cannot do the right thing in this
respect, while in theory the HWP algorithm should be able to produce
the desired result.

IOW, your argumentation seems to go in the "HWP is useless"
direction, more or less, and while there are people who will agree with
such a statement, others won't.

> This can only be
> achieved effectively if the frequency governor has a rough idea of the
> latency requirements of the workload, since it involves a
> latency/energy-efficiency trade-off.

Let me state this again (and this will be the last time, because I
don't really like to repeat points): the frequency governor can only
*request* the processor to do something in general and the request may
or may not be granted, for various reasons.  If it is not granted, the
whole "control" mechanism fails.


* Re: [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface
  2020-02-24  0:29             ` Rafael J. Wysocki
@ 2020-02-24 21:06               ` Francisco Jerez
  0 siblings, 0 replies; 74+ messages in thread
From: Francisco Jerez @ 2020-02-24 21:06 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Rafael J. Wysocki, Rafael J. Wysocki, Linux PM, LKML,
	Amit Kucheria, Pandruvada, Srinivas, Rodrigo Vivi,
	Peter Zijlstra

"Rafael J. Wysocki" <rafael@kernel.org> writes:

> On Fri, Feb 21, 2020 at 11:10 PM Francisco Jerez <currojerez@riseup.net> wrote:
>>
>> "Rafael J. Wysocki" <rafael@kernel.org> writes:
>>
>> > On Thu, Feb 13, 2020 at 9:09 AM Francisco Jerez <currojerez@riseup.net> wrote:
>> >>
>> >> "Rafael J. Wysocki" <rafael@kernel.org> writes:
>> >>
>> >> > On Thu, Feb 13, 2020 at 1:16 AM Rafael J. Wysocki <rafael@kernel.org> wrote:
>> >> >>
>> >> >> On Thu, Feb 13, 2020 at 12:31 AM Francisco Jerez <currojerez@riseup.net> wrote:
>> >> >> >
>> >
>> > [cut]
>> >
>> >> >
>> >> > And BTW, posting patches as RFC is fine even if they have not been
>> >> > tested.  At least you let people know that you work on something this
>> >> > way, so if they work on changes in the same area, they may take that
>> >> > into consideration.
>> >> >
>> >>
>> >> Sure, that was going to be the first RFC.
>> >>
>> >> > Also if there are objections to your proposal, you may save quite a
>> >> > bit of time by sending it early.
>> >> >
>> >> > It is unfortunate that this series has clashed with the changes that
>> >> > you were about to propose, but in this particular case in my view it
>> >> > is better to clean up things and start over.
>> >> >
>> >>
>> >> Luckily it doesn't clash with the second RFC I was meaning to send,
>> >> maybe we should just skip the first?
>> >
>> > Yes, please.
>> >
>> >> Or maybe it's valuable as a curiosity anyway?
>> >
>> > No, let's just focus on the latest one.
>> >
>> > Thanks!
>>
>> We don't seem to have reached much of an agreement on the general
>> direction of RFC2, so I can't really get started with it.  Here is RFC1
>> for the record:
>>
>> https://github.com/curro/linux/commits/intel_pstate-lp-hwp-v10.8-alt
>
> Appreciate the link, but that hasn't been posted to linux-pm yet, so
> there's not much to discuss.
>
> And when you post it, please rebase it on top of linux-next.
>
>> Specifically the following patch conflicts with this series:
>>
>> https://github.com/curro/linux/commit/9a16f35531bbb76d38493da892ece088e31dc2e0
>>
>> The series improves performance-per-watt of GfxBench gl_4 (AKA Car Chase) by
>> over 15% on my system with the branch above; actual FPS "only" improves by
>> about 5.9% on the ICL laptop, due to it being very lightly TDP-bound with its
>> rather huge TDP.  The performance of almost every graphics benchmark
>> I've tried improves significantly with it (a number of SynMark
>> test-cases improve by around 40% in perf-per-watt, and Egypt
>> perf-per-watt improves by about 25%).
>>
>> Hopefully we can come up with some alternative plan of action.
>
> It is very easy to replace the patch above with an alternative one on
> top of linux-next that will add CPU_RESPONSE_FREQUENCY QoS along the
> lines of the CPU latency QoS implementation in there, without the need
> to restore global QoS classes.
>
> IOW, you don't really need the code that goes away in linux-next to
> implement what you need.
>
> Thanks!

Sure, I'll do that.



* Re: [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface
  2020-02-24 10:39                 ` Rafael J. Wysocki
@ 2020-02-24 21:16                   ` Francisco Jerez
  0 siblings, 0 replies; 74+ messages in thread
From: Francisco Jerez @ 2020-02-24 21:16 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Rafael J. Wysocki, Rafael J. Wysocki, Linux PM, LKML,
	Amit Kucheria, Pandruvada, Srinivas, Rodrigo Vivi,
	Peter Zijlstra

"Rafael J. Wysocki" <rafael@kernel.org> writes:

> Sorry for the late response, I was offline for a major part of the
> previous week.
>
> On Fri, Feb 14, 2020 at 9:31 PM Francisco Jerez <currojerez@riseup.net> wrote:
>>
>> "Rafael J. Wysocki" <rafael@kernel.org> writes:
>>
>> > On Fri, Feb 14, 2020 at 1:14 AM Francisco Jerez <currojerez@riseup.net> wrote:
>> >>
>> >> "Rafael J. Wysocki" <rafael@kernel.org> writes:
>> >>
>> >> > On Thu, Feb 13, 2020 at 12:34 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
>> >
>> > [cut]
>> >
>> >> >
>> >> > I think that your use case is almost equivalent to the thermal
>> >> > pressure one, so you'd want to limit the max and so that would be
>> >> > something similar to store_max_perf_pct() with its input side hooked
>> >> > up to a QoS list.
>> >> >
>> >> > But it looks like that QoS list would rather be of a "reservation"
>> >> > type, so a request added to it would mean something like "leave this
>> >> > fraction of power that appears to be available to the CPU subsystem
>> >> > unused, because I need it for a different purpose".  And in principle
>> >> > there might be multiple requests in there at the same time and those
>> >> > "reservations" would add up.  So that would be a kind of "limited sum"
>> >> > QoS type which wasn't even there before my changes.
>> >> >
>> >> > A user of that QoS list might then do something like
>> >> >
>> >> > ret = cpu_power_reserve_add(1, 4);
>> >> >
>> >> > meaning that it wants 25% of the "potential" CPU power to be not
>> >> > utilized by CPU performance scaling and that could affect the
>> >> > scheduler through load modifications (kind of along the thermal
>> >> > pressure patchset discussed some time ago) and HWP (as well as the
>> >> > non-HWP intel_pstate by preventing turbo frequencies from being used
>> >> > etc).
>> >>
>> >> The problems with this are the same as with the per-CPU frequency QoS
>> >> approach: How does the device driver know what the appropriate fraction
>> >> of CPU power is?
>> >
>> > Of course it doesn't know and it may never know exactly, but it may guess.
>> >
>> > Also, it may set up a feedback loop: request an aggressive
>> > reservation, run for a while, measure something and refine if there's
>> > headroom.  Then repeat.
>> >
>>
>> Yeah, of course, but that's obviously more computationally intensive and
>> less accurate than computing an approximately optimal constraint in a
>> single iteration (based on knowledge from performance counters and a
>> notion of the latency requirements of the application), since such a
>> feedback loop relies on repeatedly overshooting and undershooting the
>> optimal value (the latter causes an artificial CPU bottleneck, possibly
>> slowing down other applications too) in order to converge to and remain
>> in a neighborhood of the optimal value.
>
> I'm not saying that feedback loops are the way to go in general, but
> that in some cases they are applicable and this particular case looks
> like it may be one of them.
>
>> Incidentally people tested a power balancing solution with a feedback
>> loop very similar to the one you're describing side by side to the RFC
>> patch series I provided a link to earlier (which targeted Gen9 LP
>> parts), and the energy efficiency improvements they observed were
>> roughly half of the improvement obtained with my series unsurprisingly.
>>
>> Not to speak about generalizing such a feedback loop to bottlenecks on
>> multiple I/O devices.
>
> The generalizing part I'm totally unconvinced about.
>

One of the main problems I see with generalizing a driver-controlled
feedback loop to multiple devices is that any one of the drivers has no
visibility over the performance of other workloads running on the same
CPU core but not tied to the same feedback loop.  E.g. consider a
GPU-bound application running concurrently with some latency-bound
application on the same CPU core: It would be easy (if somewhat
inaccurate) for the GPU driver to monitor the utilization of the one
device it controls in order to prevent performance loss as a result of
its frequency constraints, but how could it tell whether it's having a
negative impact on the performance of the other non-GPU-bound
application?  It doesn't seem possible to avoid that without the driver
monitoring the performance counters of each CPU core *and* having some
sort of interface in place for other unrelated applications to
communicate their latency constraints (which brings us back to the PM
QoS discussion).

>> >> Depending on the instantaneous behavior of the
>> >> workload it might take 1% or 95% of the CPU power in order to keep the
>> >> IO device busy.  Each user of this would need to monitor the performance
>> >> of every CPU in the system and update the constraints on each of them
>> >> periodically (whether or not they're talking to that IO device, which
>> >> would possibly negatively impact the latency of unrelated applications
>> >> running on other CPUs, unless we're willing to race with the task
>> >> scheduler).
>> >
>> > No, it just needs to measure a signal representing how much power *it*
>> > gets and decide whether or not it can let the CPU subsystem use more
>> > power.
>> >
>>
>> Well yes it's technically possible to set frequency constraints based on
>> trial-and-error without sampling utilization information from the CPU
>> cores, but don't we agree that this kind of information can be highly
>> valuable?
>
> OK, so there are three things, frequency constraints (meaning HWP min
> and max limits, for example), frequency requests (this is what cpufreq
> does) and power limits.
>
> If the processor has at least some autonomy in driving the frequency,
> using frequency requests (i.e. cpufreq governors) for limiting power
> is inefficient in general, because the processor is not required to
> grant those requests at all.
>

For limiting power, yes, I agree that it would be less effective than a
RAPL constraint, but the purpose of my proposal is not to set an upper
limit on the power usage of the CPU in absolute terms, but in terms
relative to its performance: Given that the energy efficiency of the CPU
is steadily decreasing with frequency past the inflection point of the
power curve, it's more energy-efficient to set a frequency constraint
rather than to set a constraint on its long-term average power
consumption while letting the clock frequency swing arbitrarily around
the most energy-efficient frequency.

Please don't get me wrong: I think that leveraging RAPL constraints as
additional variable is authentically useful especially for thermal
management, but it's largely complementary to frequency constraints
which provide a more direct way to control the energy efficiency of the
CPU.

But even if we decide to use RAPL for this, wouldn't the RAPL governor
need to make a certain latency trade-off?  In order to avoid performance
degradation it would be necessary for the governor to respond to changes
in the load of the CPU, and some awareness of the latency constraints of
the application seems necessary either way in order to do that
effectively.  IOW the kind of latency constraint I wanted to propose
would be useful to achieve the most energy-efficient outcome whether we
use RAPL, frequency constraints, or both.
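
[The energy-efficiency argument can be illustrated numerically.  Assume,
purely for illustration, constant static power and dynamic power growing as
f^3; then the energy to retire a fixed amount of work has a minimum at an
intermediate frequency, and splitting the same work between a turbo
frequency and a low one costs more than running steadily near that minimum.]

```c
/*
 * Energy to retire `work` cycles at a steady frequency f, with static
 * power p_s and dynamic power k * f^3 (an illustrative model only):
 * time = work / f, so energy = (p_s + k * f^3) * work / f.
 */
static double steady_energy(double work, double f, double p_s, double k)
{
	return (p_s + k * f * f * f) * (work / f);
}
```

[With p_s = k = 1 the per-cycle energy 1/f + f^2 is minimized near f = 0.79
in these normalized units; in this model, half the work at f = 2.0 plus half
at f = 0.25 costs more than twice what the whole run costs at the optimum.]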

> Using frequency limits may be good enough, but it generally limits the
> processor's ability to respond at short-time scales (for example,
> setting the max frequency limit will prevent the processor from using
> frequencies above that limit even temporarily, but that might be the
> most energy-efficient option in some cases).
>
> Using power limits (which is what RAPL does) doesn't bring such shortcomings in.

But preventing a short-term oscillation of the CPU frequency is the
desired outcome rather than a shortcoming whenever the time scale of the
oscillation is orders of magnitude smaller than the latency requirements
known to the application, since it lowers the energy efficiency (and
therefore parallelism) of the system without any visible benefit for the
workload.  The mechanism I'm proposing wouldn't prevent such short-term
oscillations when needed except when an application or device driver
explicitly requests PM to damp them.

>
>> >> A solution based on utilization clamps (with some
>> >> extensions) sounds more future-proof to me honestly.
>> >
>> > Except that it would be rather hard to connect it to something like
>> > RAPL, which should be quite straightforward with the approach I'm
>> > talking about.
>> >
>>
>> I think using RAPL as additional control variable would be useful, but
>> fully orthogonal to the cap being set by some global mechanism or being
>> derived from the aggregation of a number of per-process power caps based
>> on the scheduler behavior.
>
> I'm not sure what you mean by "the cap" here.  A maximum frequency
> limit or something else?
>

Either a frequency or a power cap.  Either way it seems valuable (but
not strictly necessary up front) for the cap to be derived from the
scheduler's behavior.

>> The latter sounds like the more reasonable
>> fit for a multi-tasking, possibly virtualized environment honestly.
>> Either way RAPL is neither necessary nor sufficient in order to achieve
>> the energy efficiency improvement I'm working on.
>
> The "not necessary" I can agree with, but I don't see any arguments
> for the "not sufficient" statement.
>

Not sufficient since RAPL doesn't provide as much of a direct limit on
the energy efficiency of the system as a frequency constraint would
[More on that above].

>> > The problem with all scheduler-based ways, again, is that there is no
>> > direct connection between the scheduler and HWP,
>>
>> I was planning to introduce such a connection in RFC part 2.  I have a
>> prototype for that based on a not particularly pretty custom interface,
>> I wouldn't mind trying to get it to use utilization clamps if you think
>> that's the way forward.
>
> Well, I may think so, but that's just thinking at this point.  I have
> no real numbers to support that theory.
>

Right.  And the only way to get numbers is to implement it.  I wouldn't
mind giving that a shot as a follow up.  But a PM QoS-based solution is
likely to give most of the benefit in the most common scenarios.

>> > or even with whatever the processor does with the P-states in the
>> > turbo range.  If any P-state in the turbo range is requested, the
>> > processor has a license to use whatever P-state it wants, so this
>> > pretty much means allowing it to use as much power as it can.
>> >
>> > So in the first place, if you want to limit the use of power in the
>> > CPU subsystem through frequency control alone, you need to prevent it
>> > from using turbo P-states at all.  However, with RAPL you can just
>> > limit power which may still allow some (but not all) turbo P-states to
>> > be used.
>>
>> My goal is not to limit the use of power of the CPU (if it has enough
>> load to utilize 100% of the cycles at turbo frequency so be it), but to
>> get it to use it more efficiently.  If you are constrained by a given
>> power budget (e.g. the TDP or the one you want set via RAPL) you can do
>> more with it if you set a stable frequency rather than if you let the
>> CPU bounce back and forth between turbo and idle.
>
> Well, this basically means driving the CPU frequency by hand with the
> assumption that the processor cannot do the right thing in this
> respect, while in theory the HWP algorithm should be able to produce
> the desired result.
>
> IOW, your argumentation seems to go in the "HWP is useless"
> direction, more or less, and while there are people who will agree with
> such a statement, others won't.
>

I don't want to drive the CPU frequency by hand, and I don't think HWP
is useless by any means.  The purpose of my changes is to get HWP to do
a better job by constraining its response to a reasonable range based on
information which is largely unavailable to HWP -- E.g.: What are the
latency constraints of the application?  Does the application have an IO
bottleneck?  Which CPU core did we schedule the IO-bottlenecking
application to?

>> This can only be
>> achieved effectively if the frequency governor has a rough idea of the
>> latency requirements of the workload, since it involves a
>> latency/energy-efficiency trade-off.
>
> Let me state this again (and this will be the last time, because I
> don't really like to repeat points): the frequency governor can only
> *request* the processor to do something in general and the request may
> or may not be granted, for various reasons.  If it is not granted, the
> whole "control" mechanism fails.

And what's wrong with that?  The purpose of the latency constraint
interface is not to provide a hard limit on the CPU frequency, but to
give applications some influence on the latency trade-off made by the
governor whenever it isn't in conflict with the constraints set by other
applications (possibly as a result of them being part of the same clock
domain, which may indeed cause the effective frequency to deviate from
the range specified by the P-state governor).  IOW the CPU frequency
momentarily exceeding the optimal value for any specific application
wouldn't violate the interface.  The result can still be massively more
energy-efficient than placing a long-term power constraint, or not
placing any constraint at all, even if P-state requests are not
guaranteed to succeed in general.

Regards,
Francisco.



end of thread, back to index

Thread overview: 74+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-02-11 22:51 [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface Rafael J. Wysocki
2020-02-11 22:52 ` [PATCH 01/28] PM: QoS: Drop debugfs interface Rafael J. Wysocki
2020-02-11 22:58 ` [PATCH 02/28] PM: QoS: Drop pm_qos_update_request_timeout() Rafael J. Wysocki
2020-02-11 22:58 ` [PATCH 03/28] PM: QoS: Drop the PM_QOS_SUM QoS type Rafael J. Wysocki
2020-02-11 22:58 ` [PATCH 04/28] PM: QoS: Clean up pm_qos_update_target() and pm_qos_update_flags() Rafael J. Wysocki
2020-02-11 22:58 ` [PATCH 05/28] PM: QoS: Clean up pm_qos_read_value() and pm_qos_get/set_value() Rafael J. Wysocki
2020-02-11 22:59 ` [PATCH 06/28] PM: QoS: Drop iterations over global QoS classes Rafael J. Wysocki
2020-02-11 23:00 ` [PATCH 07/28] PM: QoS: Clean up misc device file operations Rafael J. Wysocki
2020-02-11 23:01 ` [PATCH 08/28] PM: QoS: Redefine struct pm_qos_request and drop struct pm_qos_object Rafael J. Wysocki
2020-02-11 23:02 ` [PATCH 09/28] PM: QoS: Drop PM_QOS_CPU_DMA_LATENCY notifier chain Rafael J. Wysocki
2020-02-11 23:04 ` [PATCH 10/28] PM: QoS: Rename things related to the CPU latency QoS Rafael J. Wysocki
2020-02-12 10:34   ` Rafael J. Wysocki
2020-02-12 19:13   ` Greg Kroah-Hartman
2020-02-11 23:06 ` [PATCH 11/28] PM: QoS: Simplify definitions of CPU latency QoS trace events Rafael J. Wysocki
2020-02-11 23:07 ` [PATCH 12/28] PM: QoS: Adjust pm_qos_request() signature and reorder pm_qos.h Rafael J. Wysocki
2020-02-11 23:07 ` [PATCH 13/28] PM: QoS: Add CPU latency QoS API wrappers Rafael J. Wysocki
2020-02-11 23:08 ` [PATCH 14/28] cpuidle: Call cpu_latency_qos_limit() instead of pm_qos_request() Rafael J. Wysocki
2020-02-11 23:10 ` [PATCH 15/28] x86: platform: iosf_mbi: Call cpu_latency_qos_*() instead of pm_qos_*() Rafael J. Wysocki
2020-02-12 10:14   ` Andy Shevchenko
2020-02-11 23:12 ` [PATCH 16/28] drm: i915: " Rafael J. Wysocki
2020-02-12 10:32   ` Rafael J. Wysocki
2020-02-14  7:42   ` Jani Nikula
2020-02-11 23:13 ` [PATCH 17/28] drivers: hsi: " Rafael J. Wysocki
2020-02-13 21:06   ` Sebastian Reichel
2020-02-11 23:17 ` [PATCH 18/28] drivers: media: " Rafael J. Wysocki
2020-02-12  5:37   ` Mauro Carvalho Chehab
2020-02-11 23:21 ` [PATCH 19/28] drivers: mmc: " Rafael J. Wysocki
2020-02-11 23:24 ` [PATCH 20/28] drivers: net: " Rafael J. Wysocki
2020-02-11 23:48   ` Jeff Kirsher
2020-02-12  5:49   ` Kalle Valo
2020-02-11 23:26 ` [PATCH 21/28] drivers: spi: " Rafael J. Wysocki
2020-02-11 23:27 ` [PATCH 22/28] drivers: tty: " Rafael J. Wysocki
2020-02-12 10:35   ` Rafael J. Wysocki
2020-02-12 19:13   ` Greg Kroah-Hartman
2020-02-11 23:28 ` [PATCH 23/28] drivers: usb: " Rafael J. Wysocki
2020-02-12 18:38   ` Greg KH
2020-02-18  8:03     ` Peter Chen
2020-02-18  8:08       ` Greg KH
2020-02-18  8:11         ` Peter Chen
2020-02-19  1:09   ` Peter Chen
2020-02-11 23:34 ` [PATCH 24/28] sound: " Rafael J. Wysocki
2020-02-12 10:08   ` Mark Brown
2020-02-12 10:16     ` Rafael J. Wysocki
2020-02-12 10:21       ` Takashi Iwai
2020-02-12 10:18   ` Mark Brown
2020-02-11 23:35 ` [PATCH 25/28] PM: QoS: Drop PM_QOS_CPU_DMA_LATENCY and rename related functions Rafael J. Wysocki
2020-02-11 23:35 ` [PATCH 26/28] PM: QoS: Update file information comments Rafael J. Wysocki
2020-02-11 23:36 ` [PATCH 27/28] Documentation: PM: QoS: Update to reflect previous code changes Rafael J. Wysocki
2020-02-11 23:37 ` [PATCH 28/28] PM: QoS: Make CPU latency QoS depend on CONFIG_CPU_IDLE Rafael J. Wysocki
2020-02-12  8:37 ` [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface Ulf Hansson
2020-02-12  9:17   ` Rafael J. Wysocki
2020-02-12  9:39 ` Rafael J. Wysocki
2020-02-12 23:32 ` Francisco Jerez
2020-02-13  0:16   ` Rafael J. Wysocki
2020-02-13  0:37     ` Rafael J. Wysocki
2020-02-13  8:10       ` Francisco Jerez
2020-02-13 11:38         ` Rafael J. Wysocki
2020-02-21 22:10           ` Francisco Jerez
2020-02-24  0:29             ` Rafael J. Wysocki
2020-02-24 21:06               ` Francisco Jerez
2020-02-13  8:07     ` Francisco Jerez
2020-02-13 11:34       ` Rafael J. Wysocki
2020-02-13 16:35         ` Rafael J. Wysocki
2020-02-14  0:15           ` Francisco Jerez
2020-02-14 10:42             ` Rafael J. Wysocki
2020-02-14 20:32               ` Francisco Jerez
2020-02-24 10:39                 ` Rafael J. Wysocki
2020-02-24 21:16                   ` Francisco Jerez
2020-02-14  0:14         ` Francisco Jerez
2020-02-13  7:10 ` Amit Kucheria
2020-02-13 10:17   ` Rafael J. Wysocki
2020-02-13 10:22     ` Rafael J. Wysocki
2020-02-13 10:49     ` Amit Kucheria
2020-02-13 11:36       ` Rafael J. Wysocki

Linux-PM Archive on lore.kernel.org

Archives are clonable:
	git clone --mirror https://lore.kernel.org/linux-pm/0 linux-pm/git/0.git

	# If you have public-inbox 1.1+ installed, you may
	# initialize and index your mirror using the following commands:
	public-inbox-init -V2 linux-pm linux-pm/ https://lore.kernel.org/linux-pm \
		linux-pm@vger.kernel.org
	public-inbox-index linux-pm

Newsgroup available over NNTP:
	nntp://nntp.lore.kernel.org/org.kernel.vger.linux-pm


AGPL code for this site: git clone https://public-inbox.org/public-inbox.git