linux-kernel.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
* [PATCH v2 0/5] Introduce OPP bandwidth bindings
@ 2019-04-23 13:28 Georgi Djakov
  2019-04-23 13:28 ` [PATCH v2 1/5] dt-bindings: opp: Introduce bandwidth-MBps bindings Georgi Djakov
                   ` (5 more replies)
  0 siblings, 6 replies; 22+ messages in thread
From: Georgi Djakov @ 2019-04-23 13:28 UTC (permalink / raw)
  To: vireshk, sboyd, nm, robh+dt, mark.rutland, rjw
  Cc: jcrouse, vincent.guittot, bjorn.andersson, amit.kucheria, seansw,
	daidavid1, evgreen, sibis, linux-pm, devicetree, linux-kernel,
	linux-arm-msm, georgi.djakov

Here is a proposal to extend the OPP bindings with bandwidth based on
a previous discussion [1].

Every functional block on a SoC can contribute to the system power
efficiency by expressing its own bandwidth needs (to memory or other SoC
modules). This will allow the system to save power when high throughput
is not required (and also provide maximum throughput when needed).

There are at least three ways for a device to determine its bandwidth
needs:
	1. The device can dynamically calculate the needed bandwidth
based on some known variable. For example: UART (baud rate), I2C (fast
mode, high-speed mode, etc), USB (specification version, data transfer
type), SDHC (SD standard, clock rate, bus-width), Video Encoder/Decoder
(video format, resolution, frame-rate)

	2. There is a hardware specific value. For example: hardware
specific constant value (e.g. for PRNG) or use-case specific value that
is hard-coded.

	3. Predefined SoC/board specific bandwidth values. For example:
CPU or GPU bandwidth is related to the current core frequency and both
bandwidth and frequency are scaled together.

This patchset is trying to address point 3 above by extending the OPP
bindings to support predefined SoC/board bandwidth values and adds
support in cpufreq-dt to scale the interconnect between the CPU and the
DDR together with frequency and voltage.

[1] https://patchwork.kernel.org/patch/10577315/

Changes in v2:
* Added support for configuring multiple interconnect paths per each device
and changed the way we describe it in DT. (Viresh)
* Rename the DT property opp-bw-MBps to bandwidth-MBps. (Viresh)
* Document MBps in property-units.txt. (Rob)
* New patch to add of_icc_get_by_index() helper function.
* Add _of_find_paths() to populate OPP tables with interconnect path data
from DT. (Viresh)

v1: https://lore.kernel.org/lkml/20190313090010.20534-1-georgi.djakov@linaro.org/


Georgi Djakov (5):
  dt-bindings: opp: Introduce bandwidth-MBps bindings
  interconnect: Add of_icc_get_by_index() helper function
  OPP: Add support for parsing the interconnect bandwidth
  OPP: Update the bandwidth on OPP frequency changes
  cpufreq: dt: Add support for interconnect bandwidth scaling

 Documentation/devicetree/bindings/opp/opp.txt |  38 +++++++
 .../devicetree/bindings/property-units.txt    |   4 +
 drivers/cpufreq/cpufreq-dt.c                  |  27 ++++-
 drivers/interconnect/core.c                   |  45 ++++++--
 drivers/opp/core.c                            |  96 ++++++++++++++++-
 drivers/opp/of.c                              | 102 ++++++++++++++++++
 drivers/opp/opp.h                             |   9 ++
 include/linux/interconnect.h                  |   6 ++
 include/linux/pm_opp.h                        |  14 +++
 9 files changed, 327 insertions(+), 14 deletions(-)


^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH v2 1/5] dt-bindings: opp: Introduce bandwidth-MBps bindings
  2019-04-23 13:28 [PATCH v2 0/5] Introduce OPP bandwidth bindings Georgi Djakov
@ 2019-04-23 13:28 ` Georgi Djakov
  2019-04-24  5:33   ` Viresh Kumar
                     ` (2 more replies)
  2019-04-23 13:28 ` [PATCH v2 2/5] interconnect: Add of_icc_get_by_index() helper function Georgi Djakov
                   ` (4 subsequent siblings)
  5 siblings, 3 replies; 22+ messages in thread
From: Georgi Djakov @ 2019-04-23 13:28 UTC (permalink / raw)
  To: vireshk, sboyd, nm, robh+dt, mark.rutland, rjw
  Cc: jcrouse, vincent.guittot, bjorn.andersson, amit.kucheria, seansw,
	daidavid1, evgreen, sibis, linux-pm, devicetree, linux-kernel,
	linux-arm-msm, georgi.djakov

In addition to frequency and voltage, some devices may have bandwidth
requirements for their interconnect throughput - for example a CPU
or GPU may also need to increase or decrease their bandwidth to DDR
memory based on the current operating performance point.

Extend the OPP tables with additional property to describe the bandwidth
needs of a device. The average and peak bandwidth values depend on the
hardware and its properties.

Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
---
 Documentation/devicetree/bindings/opp/opp.txt | 38 +++++++++++++++++++
 .../devicetree/bindings/property-units.txt    |  4 ++
 2 files changed, 42 insertions(+)

diff --git a/Documentation/devicetree/bindings/opp/opp.txt b/Documentation/devicetree/bindings/opp/opp.txt
index 76b6c79604a5..830f0206aea7 100644
--- a/Documentation/devicetree/bindings/opp/opp.txt
+++ b/Documentation/devicetree/bindings/opp/opp.txt
@@ -132,6 +132,9 @@ Optional properties:
 - opp-level: A value representing the performance level of the device,
   expressed as a 32-bit integer.
 
+- bandwidth-MBps: The interconnect bandwidth is specified with an array containing
+  the two integer values for average and peak bandwidth in megabytes per second.
+
 - clock-latency-ns: Specifies the maximum possible transition latency (in
   nanoseconds) for switching to this OPP from any other OPP.
 
@@ -546,3 +549,38 @@ Example 6: opp-microvolt-<name>, opp-microamp-<name>:
 		};
 	};
 };
+
+Example 7: bandwidth-MBps:
+Average and peak bandwidth values for the interconnects between CPU and DDR
+memory and also between CPU and L3 are defined per each OPP. Bandwidth of both
+interconnects is scaled together with CPU frequency.
+
+/ {
+	cpus {
+		CPU0: cpu@0 {
+			compatible = "arm,cortex-a53", "arm,armv8";
+			...
+			operating-points-v2 = <&cpu_opp_table>;
+			/* path between CPU and DDR memory and CPU and L3 */
+			interconnects = <&noc MASTER_CPU &noc SLAVE_DDR>,
+					<&noc MASTER_CPU &noc SLAVE_L3>;
+		};
+	};
+
+	cpu_opp_table: cpu_opp_table {
+		compatible = "operating-points-v2";
+		opp-shared;
+
+		opp-200000000 {
+			opp-hz = /bits/ 64 <200000000>;
+			/* CPU<->DDR bandwidth: 457 MB/s average, 1525 MB/s peak */
+			 * CPU<->L3 bandwidth: 914 MB/s average, 3050 MB/s peak */
+			bandwidth-MBps = <457 1525>, <914 3050>;
+		};
+		opp-400000000 {
+			opp-hz = /bits/ 64 <400000000>;
+			/* CPU<->DDR bandwidth: 915 MB/s average, 3051 MB/s peak */
+			 * CPU<->L3 bandwidth: 1828 MB/s average, 6102 MB/s peak */
+			bandwidth-MBps = <915 3051>, <1828 6102>;
+		};
+	};
diff --git a/Documentation/devicetree/bindings/property-units.txt b/Documentation/devicetree/bindings/property-units.txt
index bfd33734faca..9c3dbefcdae8 100644
--- a/Documentation/devicetree/bindings/property-units.txt
+++ b/Documentation/devicetree/bindings/property-units.txt
@@ -41,3 +41,7 @@ Temperature
 Pressure
 ----------------------------------------
 -kpascal	: kiloPascal
+
+Throughput
+----------------------------------------
+-MBps		: megabytes per second

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH v2 2/5] interconnect: Add of_icc_get_by_index() helper function
  2019-04-23 13:28 [PATCH v2 0/5] Introduce OPP bandwidth bindings Georgi Djakov
  2019-04-23 13:28 ` [PATCH v2 1/5] dt-bindings: opp: Introduce bandwidth-MBps bindings Georgi Djakov
@ 2019-04-23 13:28 ` Georgi Djakov
  2019-05-07 11:59   ` Sibi Sankar
  2019-04-23 13:28 ` [PATCH v2 3/5] OPP: Add support for parsing the interconnect bandwidth Georgi Djakov
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 22+ messages in thread
From: Georgi Djakov @ 2019-04-23 13:28 UTC (permalink / raw)
  To: vireshk, sboyd, nm, robh+dt, mark.rutland, rjw
  Cc: jcrouse, vincent.guittot, bjorn.andersson, amit.kucheria, seansw,
	daidavid1, evgreen, sibis, linux-pm, devicetree, linux-kernel,
	linux-arm-msm, georgi.djakov

This is the same as the traditional of_icc_get() function, but the
difference is that it takes index as an argument, instead of name.

Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
---
 drivers/interconnect/core.c  | 45 ++++++++++++++++++++++++++++--------
 include/linux/interconnect.h |  6 +++++
 2 files changed, 41 insertions(+), 10 deletions(-)

diff --git a/drivers/interconnect/core.c b/drivers/interconnect/core.c
index 871eb4bc4efc..a7c3c262c974 100644
--- a/drivers/interconnect/core.c
+++ b/drivers/interconnect/core.c
@@ -295,9 +295,9 @@ static struct icc_node *of_icc_get_from_provider(struct of_phandle_args *spec)
 }
 
 /**
- * of_icc_get() - get a path handle from a DT node based on name
+ * of_icc_get_by_index() - get a path handle from a DT node based on index
  * @dev: device pointer for the consumer device
- * @name: interconnect path name
+ * @idx: interconnect path index
  *
  * This function will search for a path between two endpoints and return an
  * icc_path handle on success. Use icc_put() to release constraints when they
@@ -309,13 +309,12 @@ static struct icc_node *of_icc_get_from_provider(struct of_phandle_args *spec)
  * Return: icc_path pointer on success or ERR_PTR() on error. NULL is returned
  * when the API is disabled or the "interconnects" DT property is missing.
  */
-struct icc_path *of_icc_get(struct device *dev, const char *name)
+struct icc_path *of_icc_get_by_index(struct device *dev, int idx)
 {
 	struct icc_path *path = ERR_PTR(-EPROBE_DEFER);
 	struct icc_node *src_node, *dst_node;
 	struct device_node *np = NULL;
 	struct of_phandle_args src_args, dst_args;
-	int idx = 0;
 	int ret;
 
 	if (!dev || !dev->of_node)
@@ -335,12 +334,6 @@ struct icc_path *of_icc_get(struct device *dev, const char *name)
 	 * lets support only global ids and extend this in the future if needed
 	 * without breaking DT compatibility.
 	 */
-	if (name) {
-		idx = of_property_match_string(np, "interconnect-names", name);
-		if (idx < 0)
-			return ERR_PTR(idx);
-	}
-
 	ret = of_parse_phandle_with_args(np, "interconnects",
 					 "#interconnect-cells", idx * 2,
 					 &src_args);
@@ -383,6 +376,38 @@ struct icc_path *of_icc_get(struct device *dev, const char *name)
 
 	return path;
 }
+
+/**
+ * of_icc_get() - get a path handle from a DT node based on name
+ * @dev: device pointer for the consumer device
+ * @name: interconnect path name
+ *
+ * This function will search for a path between two endpoints and return an
+ * icc_path handle on success. Use icc_put() to release constraints when they
+ * are not needed anymore.
+ * If the interconnect API is disabled, NULL is returned and the consumer
+ * drivers will still build. Drivers are free to handle this specifically,
+ * but they don't have to.
+ *
+ * Return: icc_path pointer on success or ERR_PTR() on error. NULL is returned
+ * when the API is disabled or the "interconnects" DT property is missing.
+ */
+struct icc_path *of_icc_get(struct device *dev, const char *name)
+{
+	int idx = 0;
+
+	if (!dev || !dev->of_node)
+		return ERR_PTR(-ENODEV);
+
+	if (name) {
+		idx = of_property_match_string(dev->of_node,
+					       "interconnect-names", name);
+		if (idx < 0)
+			return ERR_PTR(idx);
+	}
+
+	return of_icc_get_by_index(dev, idx);
+}
 EXPORT_SYMBOL_GPL(of_icc_get);
 
 /**
diff --git a/include/linux/interconnect.h b/include/linux/interconnect.h
index dc25864755ba..0e430b3b6519 100644
--- a/include/linux/interconnect.h
+++ b/include/linux/interconnect.h
@@ -28,6 +28,7 @@ struct device;
 struct icc_path *icc_get(struct device *dev, const int src_id,
 			 const int dst_id);
 struct icc_path *of_icc_get(struct device *dev, const char *name);
+struct icc_path *of_icc_get_by_index(struct device *dev, int idx);
 void icc_put(struct icc_path *path);
 int icc_set_bw(struct icc_path *path, u32 avg_bw, u32 peak_bw);
 
@@ -45,6 +46,11 @@ static inline struct icc_path *of_icc_get(struct device *dev,
 	return NULL;
 }
 
+struct icc_path *of_icc_get_by_index(struct device *dev, int idx)
+{
+	return NULL;
+}
+
 static inline void icc_put(struct icc_path *path)
 {
 }

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH v2 3/5] OPP: Add support for parsing the interconnect bandwidth
  2019-04-23 13:28 [PATCH v2 0/5] Introduce OPP bandwidth bindings Georgi Djakov
  2019-04-23 13:28 ` [PATCH v2 1/5] dt-bindings: opp: Introduce bandwidth-MBps bindings Georgi Djakov
  2019-04-23 13:28 ` [PATCH v2 2/5] interconnect: Add of_icc_get_by_index() helper function Georgi Djakov
@ 2019-04-23 13:28 ` Georgi Djakov
  2019-04-24  5:52   ` Viresh Kumar
  2019-04-23 13:28 ` [PATCH v2 4/5] OPP: Update the bandwidth on OPP frequency changes Georgi Djakov
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 22+ messages in thread
From: Georgi Djakov @ 2019-04-23 13:28 UTC (permalink / raw)
  To: vireshk, sboyd, nm, robh+dt, mark.rutland, rjw
  Cc: jcrouse, vincent.guittot, bjorn.andersson, amit.kucheria, seansw,
	daidavid1, evgreen, sibis, linux-pm, devicetree, linux-kernel,
	linux-arm-msm, georgi.djakov

The OPP bindings now support bandwidth values, so add support to parse it
from device tree and store it into the new dev_pm_opp_icc_bw struct, which
is part of the dev_pm_opp.

Also add and export the dev_pm_opp_set_paths() and dev_pm_opp_put_paths()
helpers, to set (and release) an interconnect paths to a device. The
bandwidth of these paths will be updated when the OPPs are switched.

Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
---
 drivers/opp/core.c     |  87 ++++++++++++++++++++++++++++++++++-
 drivers/opp/of.c       | 102 +++++++++++++++++++++++++++++++++++++++++
 drivers/opp/opp.h      |   9 ++++
 include/linux/pm_opp.h |  14 ++++++
 4 files changed, 210 insertions(+), 2 deletions(-)

diff --git a/drivers/opp/core.c b/drivers/opp/core.c
index 0420f7e8ad5b..97ee39ecdebd 100644
--- a/drivers/opp/core.c
+++ b/drivers/opp/core.c
@@ -19,6 +19,7 @@
 #include <linux/slab.h>
 #include <linux/device.h>
 #include <linux/export.h>
+#include <linux/interconnect.h>
 #include <linux/pm_domain.h>
 #include <linux/regulator/consumer.h>
 
@@ -876,6 +877,8 @@ static struct opp_table *_allocate_opp_table(struct device *dev, int index)
 				ret);
 	}
 
+	_of_find_paths(opp_table, dev);
+
 	BLOCKING_INIT_NOTIFIER_HEAD(&opp_table->head);
 	INIT_LIST_HEAD(&opp_table->opp_list);
 	kref_init(&opp_table->kref);
@@ -1129,11 +1132,12 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_remove_all_dynamic);
 struct dev_pm_opp *_opp_allocate(struct opp_table *table)
 {
 	struct dev_pm_opp *opp;
-	int count, supply_size;
+	int count, supply_size, icc_size;
 
 	/* Allocate space for at least one supply */
 	count = table->regulator_count > 0 ? table->regulator_count : 1;
 	supply_size = sizeof(*opp->supplies) * count;
+	icc_size = sizeof(*opp->bandwidth) * table->path_count;
 
 	/* allocate new OPP node and supplies structures */
 	opp = kzalloc(sizeof(*opp) + supply_size, GFP_KERNEL);
@@ -1141,7 +1145,8 @@ struct dev_pm_opp *_opp_allocate(struct opp_table *table)
 		return NULL;
 
 	/* Put the supplies at the end of the OPP structure as an empty array */
-	opp->supplies = (struct dev_pm_opp_supply *)(opp + 1);
+	opp->bandwidth = (struct dev_pm_opp_icc_bw *)(opp + 1);
+	opp->supplies = (struct dev_pm_opp_supply *)(opp + icc_size + 1);
 	INIT_LIST_HEAD(&opp->node);
 
 	return opp;
@@ -1637,6 +1642,84 @@ void dev_pm_opp_put_clkname(struct opp_table *opp_table)
 }
 EXPORT_SYMBOL_GPL(dev_pm_opp_put_clkname);
 
+/**
+ * dev_pm_opp_set_paths() - Set interconnect path for a device
+ * @dev: Device for which interconnect path is being set.
+ *
+ * This must be called before any OPPs are initialized for the device.
+ */
+struct opp_table *dev_pm_opp_set_paths(struct device *dev)
+{
+	struct opp_table *opp_table;
+	int ret, i;
+
+	opp_table = dev_pm_opp_get_opp_table(dev);
+	if (!opp_table)
+		return ERR_PTR(-ENOMEM);
+
+	/* This should be called before OPPs are initialized */
+	if (WARN_ON(!list_empty(&opp_table->opp_list))) {
+		ret = -EBUSY;
+		goto err;
+	}
+
+	/* Another CPU that shares the OPP table has set the path */
+	if (opp_table->paths)
+		return opp_table;
+
+	opp_table->paths = kmalloc_array(opp_table->path_count,
+					 sizeof(*opp_table->paths), GFP_KERNEL);
+
+	/* Find interconnect path(s) for the device */
+	for (i = 0; i < opp_table->path_count; i++) {
+		opp_table->paths[i] = of_icc_get_by_index(dev, i);
+		if (IS_ERR(opp_table->paths[i])) {
+			ret = PTR_ERR(opp_table->paths[i]);
+			if (ret != -EPROBE_DEFER)
+				dev_err(dev, "%s: Couldn't find path%d: %d\n",
+					__func__, i, ret);
+			goto err;
+		}
+	}
+
+	return opp_table;
+
+err:
+	dev_pm_opp_put_opp_table(opp_table);
+
+	return ERR_PTR(ret);
+}
+EXPORT_SYMBOL_GPL(dev_pm_opp_set_paths);
+
+/**
+ * dev_pm_opp_put_paths() - Release interconnect path resources
+ * @opp_table: OPP table returned from dev_pm_opp_set_paths().
+ */
+void dev_pm_opp_put_paths(struct opp_table *opp_table)
+{
+	int i;
+
+	if (!opp_table->paths) {
+		pr_err("%s: Doesn't have paths set\n", __func__);
+		return;
+	}
+
+	/* Make sure there are no concurrent readers while updating opp_table */
+	WARN_ON(!list_empty(&opp_table->opp_list));
+
+	for (i = opp_table->path_count - 1; i >= 0; i--)
+		icc_put(opp_table->paths[i]);
+
+	_free_set_opp_data(opp_table);
+
+	kfree(opp_table->paths);
+	opp_table->paths = NULL;
+	opp_table->path_count = 0;
+
+	dev_pm_opp_put_opp_table(opp_table);
+}
+EXPORT_SYMBOL_GPL(dev_pm_opp_put_paths);
+
 /**
  * dev_pm_opp_register_set_opp_helper() - Register custom set OPP helper
  * @dev: Device for which the helper is getting registered.
diff --git a/drivers/opp/of.c b/drivers/opp/of.c
index c10c782d15aa..00af23280bc6 100644
--- a/drivers/opp/of.c
+++ b/drivers/opp/of.c
@@ -16,6 +16,7 @@
 #include <linux/cpu.h>
 #include <linux/errno.h>
 #include <linux/device.h>
+#include <linux/interconnect.h>
 #include <linux/of_device.h>
 #include <linux/pm_domain.h>
 #include <linux/slab.h>
@@ -363,6 +364,45 @@ static int _of_opp_alloc_required_opps(struct opp_table *opp_table,
 	return ret;
 }
 
+int _of_find_paths(struct opp_table *opp_table, struct device *dev)
+{
+	struct device_node *np;
+	int ret, i, count, num_paths;
+
+	np = of_node_get(dev->of_node);
+	if (np) {
+		count = of_count_phandle_with_args(np, "interconnects",
+						   "#interconnect-cells");
+		if (count % 2) {
+			dev_err(dev, "%s: Invalid interconnects values\n",
+				__func__);
+			ret = -EINVAL;
+			goto put_of_node;
+		}
+
+		num_paths = count / 2;
+		opp_table->paths = kcalloc(num_paths, sizeof(*opp_table->paths),
+					   GFP_KERNEL);
+		if (!opp_table->paths) {
+			ret = -ENOMEM;
+			goto put_of_node;
+		}
+
+		for (i = 0; i < num_paths; i++)
+			opp_table->paths[i] = of_icc_get_by_index(dev, i);
+
+		opp_table->path_count = num_paths;
+		of_node_put(np);
+	}
+
+	return 0;
+
+put_of_node:
+	of_node_put(np);
+
+	return ret;
+}
+
 static bool _opp_is_supported(struct device *dev, struct opp_table *opp_table,
 			      struct device_node *np)
 {
@@ -539,6 +579,64 @@ static int opp_parse_supplies(struct dev_pm_opp *opp, struct device *dev,
 	return ret;
 }
 
+static int opp_parse_icc_bw(struct dev_pm_opp *opp, struct device *dev,
+			    struct opp_table *opp_table)
+{
+	struct property *prop = NULL;
+	u32 *bandwidth;
+	char name[] = "bandwidth-MBps";
+	int count, i, j, ret;
+
+	/* Search for "bandwidth-MBps" */
+	prop = of_find_property(opp->np, name, NULL);
+
+	/* Missing property is not a problem */
+	if (!prop) {
+		dev_dbg(dev, "%s: Missing %s property\n", __func__, name);
+		return 0;
+	}
+
+	if (!prop->value) {
+		dev_dbg(dev, "%s: Missing %s value\n", __func__, name);
+		return -ENODATA;
+	}
+
+	/*
+	 * Bandwidth consists of average and peak values like:
+	 * bandwidth-MBps = <avg-MBps peak-MBps>
+	 */
+	count = prop->length / sizeof(u32);
+	if (count % 2) {
+		dev_err(dev, "%s: Invalid %s values\n", __func__, name);
+		return -EINVAL;
+	}
+
+	if (opp_table->path_count != count / 2) {
+		dev_err(dev, "%s Mismatch between values and paths (%d %d)\n",
+			__func__, opp_table->path_count, count / 2);
+		return -EINVAL;
+	}
+
+	bandwidth = kmalloc_array(count, sizeof(*bandwidth), GFP_KERNEL);
+	if (!bandwidth)
+		return -ENOMEM;
+
+	ret = of_property_read_u32_array(opp->np, name, bandwidth, count);
+	if (ret) {
+		dev_err(dev, "%s: Error parsing %s: %d\n", __func__, name, ret);
+		goto free_bandwidth;
+	}
+	for (i = 0, j = 0; i < count; i++) {
+		opp->bandwidth[i].avg = MBps_to_icc(bandwidth[j++]);
+		opp->bandwidth[i].peak = MBps_to_icc(bandwidth[j++]);
+	}
+
+free_bandwidth:
+	kfree(bandwidth);
+
+	return ret;
+}
+
 /**
  * dev_pm_opp_of_remove_table() - Free OPP table entries created from static DT
  *				  entries
@@ -635,6 +733,10 @@ static struct dev_pm_opp *_opp_add_static_v2(struct opp_table *opp_table,
 	if (opp_table->is_genpd)
 		new_opp->pstate = pm_genpd_opp_to_performance_state(dev, new_opp);
 
+	ret = opp_parse_icc_bw(new_opp, dev, opp_table);
+	if (ret)
+		goto free_opp;
+
 	ret = _opp_add(dev, new_opp, opp_table, rate_not_available);
 	if (ret) {
 		/* Don't return error for duplicate OPPs */
diff --git a/drivers/opp/opp.h b/drivers/opp/opp.h
index 569b3525aa67..70a537f2dbd3 100644
--- a/drivers/opp/opp.h
+++ b/drivers/opp/opp.h
@@ -24,6 +24,7 @@
 
 struct clk;
 struct regulator;
+struct icc_path;
 
 /* Lock to allow exclusive modification to the device and opp lists */
 extern struct mutex opp_table_lock;
@@ -62,6 +63,7 @@ extern struct list_head opp_tables;
  * @rate:	Frequency in hertz
  * @level:	Performance level
  * @supplies:	Power supplies voltage/current values
+ * @bandwidth:	Interconnect bandwidth values
  * @clock_latency_ns: Latency (in nanoseconds) of switching to this OPP's
  *		frequency from any other OPP's frequency.
  * @required_opps: List of OPPs that are required by this OPP.
@@ -84,6 +86,7 @@ struct dev_pm_opp {
 	unsigned int level;
 
 	struct dev_pm_opp_supply *supplies;
+	struct dev_pm_opp_icc_bw *bandwidth;
 
 	unsigned long clock_latency_ns;
 
@@ -150,6 +153,8 @@ enum opp_table_access {
  * @regulator_count: Number of power supply regulators. Its value can be -1
  * (uninitialized), 0 (no opp-microvolt property) or > 0 (has opp-microvolt
  * property).
+ * @paths: Interconnect path handles
+ * @path_count: Number of interconnect paths
  * @genpd_performance_state: Device's power domain support performance state.
  * @is_genpd: Marks if the OPP table belongs to a genpd.
  * @set_opp: Platform specific set_opp callback
@@ -194,6 +199,8 @@ struct opp_table {
 	struct clk *clk;
 	struct regulator **regulators;
 	int regulator_count;
+	struct icc_path **paths;
+	unsigned int path_count;
 	bool genpd_performance_state;
 	bool is_genpd;
 
@@ -228,12 +235,14 @@ void _of_clear_opp_table(struct opp_table *opp_table);
 struct opp_table *_managed_opp(struct device *dev, int index);
 void _of_opp_free_required_opps(struct opp_table *opp_table,
 				struct dev_pm_opp *opp);
+int _of_find_paths(struct opp_table *opp_table, struct device *dev);
 #else
 static inline void _of_init_opp_table(struct opp_table *opp_table, struct device *dev, int index) {}
 static inline void _of_clear_opp_table(struct opp_table *opp_table) {}
 static inline struct opp_table *_managed_opp(struct device *dev, int index) { return NULL; }
 static inline void _of_opp_free_required_opps(struct opp_table *opp_table,
 					      struct dev_pm_opp *opp) {}
+static inline int _of_find_paths(struct opp_table *opp_table, struct device *dev) { return 0; }
 #endif
 
 #ifdef CONFIG_DEBUG_FS
diff --git a/include/linux/pm_opp.h b/include/linux/pm_opp.h
index 24c757a32a7b..dabee09a92b8 100644
--- a/include/linux/pm_opp.h
+++ b/include/linux/pm_opp.h
@@ -43,6 +43,18 @@ struct dev_pm_opp_supply {
 	unsigned long u_amp;
 };
 
+/**
+ * struct dev_pm_opp_icc_bw - Interconnect bandwidth values
+ * @avg:	Average bandwidth corresponding to this OPP (in icc units)
+ * @peak:	Peak bandwidth corresponding to this OPP (in icc units)
+ *
+ * This structure stores the bandwidth values for a single interconnect path.
+ */
+struct dev_pm_opp_icc_bw {
+	u32 avg;
+	u32 peak;
+};
+
 /**
  * struct dev_pm_opp_info - OPP freq/voltage/current values
  * @rate:	Target clk rate in hz
@@ -127,6 +139,8 @@ struct opp_table *dev_pm_opp_set_regulators(struct device *dev, const char * con
 void dev_pm_opp_put_regulators(struct opp_table *opp_table);
 struct opp_table *dev_pm_opp_set_clkname(struct device *dev, const char * name);
 void dev_pm_opp_put_clkname(struct opp_table *opp_table);
+struct opp_table *dev_pm_opp_set_paths(struct device *dev);
+void dev_pm_opp_put_paths(struct opp_table *opp_table);
 struct opp_table *dev_pm_opp_register_set_opp_helper(struct device *dev, int (*set_opp)(struct dev_pm_set_opp_data *data));
 void dev_pm_opp_unregister_set_opp_helper(struct opp_table *opp_table);
 struct opp_table *dev_pm_opp_set_genpd_virt_dev(struct device *dev, struct device *virt_dev, int index);

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH v2 4/5] OPP: Update the bandwidth on OPP frequency changes
  2019-04-23 13:28 [PATCH v2 0/5] Introduce OPP bandwidth bindings Georgi Djakov
                   ` (2 preceding siblings ...)
  2019-04-23 13:28 ` [PATCH v2 3/5] OPP: Add support for parsing the interconnect bandwidth Georgi Djakov
@ 2019-04-23 13:28 ` Georgi Djakov
  2019-04-24  5:55   ` Viresh Kumar
  2019-04-24 10:05   ` Sibi Sankar
  2019-04-23 13:28 ` [PATCH v2 5/5] cpufreq: dt: Add support for interconnect bandwidth scaling Georgi Djakov
  2019-06-01  2:12 ` [PATCH v2 0/5] Introduce OPP bandwidth bindings Saravana Kannan
  5 siblings, 2 replies; 22+ messages in thread
From: Georgi Djakov @ 2019-04-23 13:28 UTC (permalink / raw)
  To: vireshk, sboyd, nm, robh+dt, mark.rutland, rjw
  Cc: jcrouse, vincent.guittot, bjorn.andersson, amit.kucheria, seansw,
	daidavid1, evgreen, sibis, linux-pm, devicetree, linux-kernel,
	linux-arm-msm, georgi.djakov

If the OPP bandwidth values are populated, we want to switch also the
interconnect bandwidth in addition to frequency and voltage.

Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
---
 drivers/opp/core.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/drivers/opp/core.c b/drivers/opp/core.c
index 97ee39ecdebd..91d1c2abfb3e 100644
--- a/drivers/opp/core.c
+++ b/drivers/opp/core.c
@@ -707,7 +707,7 @@ int dev_pm_opp_set_rate(struct device *dev, unsigned long target_freq)
 	unsigned long freq, old_freq;
 	struct dev_pm_opp *old_opp, *opp;
 	struct clk *clk;
-	int ret;
+	int ret, i;
 
 	if (unlikely(!target_freq)) {
 		dev_err(dev, "%s: Invalid target frequency %lu\n", __func__,
@@ -780,6 +780,13 @@ int dev_pm_opp_set_rate(struct device *dev, unsigned long target_freq)
 		ret = _generic_set_opp_clk_only(dev, clk, freq);
 	}
 
+	if (!ret && !IS_ERR_OR_NULL(opp_table->paths)) {
+		for (i = 0; i < opp_table->path_count; i++) {
+			icc_set_bw(opp_table->paths[i], opp->bandwidth[i].avg,
+				   opp->bandwidth[i].peak);
+		}
+	}
+
 	/* Scaling down? Configure required OPPs after frequency */
 	if (!ret && freq < old_freq) {
 		ret = _set_required_opps(dev, opp_table, opp);

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH v2 5/5] cpufreq: dt: Add support for interconnect bandwidth scaling
  2019-04-23 13:28 [PATCH v2 0/5] Introduce OPP bandwidth bindings Georgi Djakov
                   ` (3 preceding siblings ...)
  2019-04-23 13:28 ` [PATCH v2 4/5] OPP: Update the bandwidth on OPP frequency changes Georgi Djakov
@ 2019-04-23 13:28 ` Georgi Djakov
  2019-06-01  2:12 ` [PATCH v2 0/5] Introduce OPP bandwidth bindings Saravana Kannan
  5 siblings, 0 replies; 22+ messages in thread
From: Georgi Djakov @ 2019-04-23 13:28 UTC (permalink / raw)
  To: vireshk, sboyd, nm, robh+dt, mark.rutland, rjw
  Cc: jcrouse, vincent.guittot, bjorn.andersson, amit.kucheria, seansw,
	daidavid1, evgreen, sibis, linux-pm, devicetree, linux-kernel,
	linux-arm-msm, georgi.djakov

In addition to clocks and regulators, some devices can scale the bandwidth
of their on-chip interconnect - for example between CPU and DDR memory. Add
support for that, so that platforms which support it can make use of it.

Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
---
 drivers/cpufreq/cpufreq-dt.c | 27 ++++++++++++++++++++++++++-
 1 file changed, 26 insertions(+), 1 deletion(-)

diff --git a/drivers/cpufreq/cpufreq-dt.c b/drivers/cpufreq/cpufreq-dt.c
index bde28878725b..8a20f4d0243e 100644
--- a/drivers/cpufreq/cpufreq-dt.c
+++ b/drivers/cpufreq/cpufreq-dt.c
@@ -16,6 +16,7 @@
 #include <linux/cpufreq.h>
 #include <linux/cpumask.h>
 #include <linux/err.h>
+#include <linux/interconnect.h>
 #include <linux/module.h>
 #include <linux/of.h>
 #include <linux/pm_opp.h>
@@ -98,6 +99,7 @@ static int resources_available(void)
 	struct device *cpu_dev;
 	struct regulator *cpu_reg;
 	struct clk *cpu_clk;
+	struct icc_path *cpu_path;
 	int ret = 0;
 	const char *name;
 
@@ -124,6 +126,19 @@ static int resources_available(void)
 
 	clk_put(cpu_clk);
 
+	cpu_path = of_icc_get(cpu_dev, NULL);
+	ret = PTR_ERR_OR_ZERO(cpu_path);
+	if (ret) {
+		if (ret == -EPROBE_DEFER)
+			dev_dbg(cpu_dev, "defer icc path: %d\n", ret);
+		else
+			dev_err(cpu_dev, "failed to get icc path: %d\n", ret);
+
+		return ret;
+	}
+
+	icc_put(cpu_path);
+
 	name = find_supply_name(cpu_dev);
 	/* Platform doesn't require regulator */
 	if (!name)
@@ -203,10 +218,18 @@ static int cpufreq_init(struct cpufreq_policy *policy)
 		}
 	}
 
+	opp_table = dev_pm_opp_set_paths(cpu_dev);
+	if (IS_ERR(opp_table)) {
+		ret = PTR_ERR(opp_table);
+		dev_err(cpu_dev, "Failed to set interconnect path for cpu%d: %d\n",
+			policy->cpu, ret);
+		goto out_put_regulator;
+	}
+
 	priv = kzalloc(sizeof(*priv), GFP_KERNEL);
 	if (!priv) {
 		ret = -ENOMEM;
-		goto out_put_regulator;
+		goto out_put_path;
 	}
 
 	priv->reg_name = name;
@@ -288,6 +311,8 @@ static int cpufreq_init(struct cpufreq_policy *policy)
 	if (priv->have_static_opps)
 		dev_pm_opp_of_cpumask_remove_table(policy->cpus);
 	kfree(priv);
+out_put_path:
+	dev_pm_opp_put_paths(opp_table);
 out_put_regulator:
 	if (name)
 		dev_pm_opp_put_regulators(opp_table);

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* Re: [PATCH v2 1/5] dt-bindings: opp: Introduce bandwidth-MBps bindings
  2019-04-23 13:28 ` [PATCH v2 1/5] dt-bindings: opp: Introduce bandwidth-MBps bindings Georgi Djakov
@ 2019-04-24  5:33   ` Viresh Kumar
  2019-04-24  6:46   ` Rajendra Nayak
  2019-04-24  8:44   ` Sibi Sankar
  2 siblings, 0 replies; 22+ messages in thread
From: Viresh Kumar @ 2019-04-24  5:33 UTC (permalink / raw)
  To: Georgi Djakov
  Cc: vireshk, sboyd, nm, robh+dt, mark.rutland, rjw, jcrouse,
	vincent.guittot, bjorn.andersson, amit.kucheria, seansw,
	daidavid1, evgreen, sibis, linux-pm, devicetree, linux-kernel,
	linux-arm-msm

On 23-04-19, 16:28, Georgi Djakov wrote:
> In addition to frequency and voltage, some devices may have bandwidth
> requirements for their interconnect throughput - for example a CPU
> or GPU may also need to increase or decrease their bandwidth to DDR
> memory based on the current operating performance point.
> 
> Extend the OPP tables with additional property to describe the bandwidth
> needs of a device. The average and peak bandwidth values depend on the
> hardware and its properties.
> 
> Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
> ---
>  Documentation/devicetree/bindings/opp/opp.txt | 38 +++++++++++++++++++
>  .../devicetree/bindings/property-units.txt    |  4 ++
>  2 files changed, 42 insertions(+)
> 
> diff --git a/Documentation/devicetree/bindings/opp/opp.txt b/Documentation/devicetree/bindings/opp/opp.txt
> index 76b6c79604a5..830f0206aea7 100644
> --- a/Documentation/devicetree/bindings/opp/opp.txt
> +++ b/Documentation/devicetree/bindings/opp/opp.txt
> @@ -132,6 +132,9 @@ Optional properties:
>  - opp-level: A value representing the performance level of the device,
>    expressed as a 32-bit integer.
>  
> +- bandwidth-MBps: The interconnect bandwidth is specified with an array containing
> +  the two integer values for average and peak bandwidth in megabytes per second.
> +
>  - clock-latency-ns: Specifies the maximum possible transition latency (in
>    nanoseconds) for switching to this OPP from any other OPP.
>  
> @@ -546,3 +549,38 @@ Example 6: opp-microvolt-<name>, opp-microamp-<name>:
>  		};
>  	};
>  };
> +
> +Example 7: bandwidth-MBps:
> +Average and peak bandwidth values for the interconnects between CPU and DDR
> +memory and also between CPU and L3 are defined per each OPP. Bandwidth of both
> +interconnects is scaled together with CPU frequency.
> +
> +/ {
> +	cpus {
> +		CPU0: cpu@0 {
> +			compatible = "arm,cortex-a53", "arm,armv8";
> +			...
> +			operating-points-v2 = <&cpu_opp_table>;
> +			/* path between CPU and DDR memory and CPU and L3 */
> +			interconnects = <&noc MASTER_CPU &noc SLAVE_DDR>,
> +					<&noc MASTER_CPU &noc SLAVE_L3>;
> +		};
> +	};
> +
> +	cpu_opp_table: cpu_opp_table {
> +		compatible = "operating-points-v2";
> +		opp-shared;
> +
> +		opp-200000000 {
> +			opp-hz = /bits/ 64 <200000000>;
> +			/* CPU<->DDR bandwidth: 457 MB/s average, 1525 MB/s peak */
> +			 * CPU<->L3 bandwidth: 914 MB/s average, 3050 MB/s peak */
> +			bandwidth-MBps = <457 1525>, <914 3050>;
> +		};
> +		opp-400000000 {
> +			opp-hz = /bits/ 64 <400000000>;
> +			/* CPU<->DDR bandwidth: 915 MB/s average, 3051 MB/s peak */
> +			 * CPU<->L3 bandwidth: 1828 MB/s average, 6102 MB/s peak */
> +			bandwidth-MBps = <915 3051>, <1828 6102>;
> +		};
> +	};
> diff --git a/Documentation/devicetree/bindings/property-units.txt b/Documentation/devicetree/bindings/property-units.txt
> index bfd33734faca..9c3dbefcdae8 100644
> --- a/Documentation/devicetree/bindings/property-units.txt
> +++ b/Documentation/devicetree/bindings/property-units.txt
> @@ -41,3 +41,7 @@ Temperature
>  Pressure
>  ----------------------------------------
>  -kpascal	: kiloPascal
> +
> +Throughput
> +----------------------------------------
> +-MBps		: megabytes per second

LGTM

-- 
viresh

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v2 3/5] OPP: Add support for parsing the interconnect bandwidth
  2019-04-23 13:28 ` [PATCH v2 3/5] OPP: Add support for parsing the interconnect bandwidth Georgi Djakov
@ 2019-04-24  5:52   ` Viresh Kumar
  2019-06-27  6:27     ` Sibi Sankar
  0 siblings, 1 reply; 22+ messages in thread
From: Viresh Kumar @ 2019-04-24  5:52 UTC (permalink / raw)
  To: Georgi Djakov
  Cc: vireshk, sboyd, nm, robh+dt, mark.rutland, rjw, jcrouse,
	vincent.guittot, bjorn.andersson, amit.kucheria, seansw,
	daidavid1, evgreen, sibis, linux-pm, devicetree, linux-kernel,
	linux-arm-msm

On 23-04-19, 16:28, Georgi Djakov wrote:
> The OPP bindings now support bandwidth values, so add support to parse it
> from device tree and store it into the new dev_pm_opp_icc_bw struct, which
> is part of the dev_pm_opp.
> 
> Also add and export the dev_pm_opp_set_paths() and dev_pm_opp_put_paths()
> helpers, to set (and release) an interconnect paths to a device. The
> bandwidth of these paths will be updated when the OPPs are switched.
> 
> Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
> ---
>  drivers/opp/core.c     |  87 ++++++++++++++++++++++++++++++++++-
>  drivers/opp/of.c       | 102 +++++++++++++++++++++++++++++++++++++++++
>  drivers/opp/opp.h      |   9 ++++
>  include/linux/pm_opp.h |  14 ++++++
>  4 files changed, 210 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/opp/core.c b/drivers/opp/core.c
> index 0420f7e8ad5b..97ee39ecdebd 100644
> --- a/drivers/opp/core.c
> +++ b/drivers/opp/core.c
> @@ -19,6 +19,7 @@
>  #include <linux/slab.h>
>  #include <linux/device.h>
>  #include <linux/export.h>
> +#include <linux/interconnect.h>

Just include this once in opp.h and the other .c files won't need it.

>  #include <linux/pm_domain.h>
>  #include <linux/regulator/consumer.h>
>  
> @@ -876,6 +877,8 @@ static struct opp_table *_allocate_opp_table(struct device *dev, int index)
>  				ret);
>  	}
>  
> +	_of_find_paths(opp_table, dev);
> +
>  	BLOCKING_INIT_NOTIFIER_HEAD(&opp_table->head);
>  	INIT_LIST_HEAD(&opp_table->opp_list);
>  	kref_init(&opp_table->kref);
> @@ -1129,11 +1132,12 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_remove_all_dynamic);
>  struct dev_pm_opp *_opp_allocate(struct opp_table *table)
>  {
>  	struct dev_pm_opp *opp;
> -	int count, supply_size;
> +	int count, supply_size, icc_size;
>  
>  	/* Allocate space for at least one supply */
>  	count = table->regulator_count > 0 ? table->regulator_count : 1;
>  	supply_size = sizeof(*opp->supplies) * count;
> +	icc_size = sizeof(*opp->bandwidth) * table->path_count;
>  
>  	/* allocate new OPP node and supplies structures */
>  	opp = kzalloc(sizeof(*opp) + supply_size, GFP_KERNEL);

You never updated this to include icc_size :(

> @@ -1141,7 +1145,8 @@ struct dev_pm_opp *_opp_allocate(struct opp_table *table)
>  		return NULL;
>  
>  	/* Put the supplies at the end of the OPP structure as an empty array */
> -	opp->supplies = (struct dev_pm_opp_supply *)(opp + 1);
> +	opp->bandwidth = (struct dev_pm_opp_icc_bw *)(opp + 1);

Keep the order as supplies and then bandwidth.

> +	opp->supplies = (struct dev_pm_opp_supply *)(opp + icc_size + 1);

Did you check what address gets assigned here ? I think the pointer
addition will screw things up for you.

>  	INIT_LIST_HEAD(&opp->node);
>  
>  	return opp;
> @@ -1637,6 +1642,84 @@ void dev_pm_opp_put_clkname(struct opp_table *opp_table)
>  }
>  EXPORT_SYMBOL_GPL(dev_pm_opp_put_clkname);
>  
> +/**
> + * dev_pm_opp_set_paths() - Set interconnect path for a device
> + * @dev: Device for which interconnect path is being set.
> + *
> + * This must be called before any OPPs are initialized for the device.
> + */
> +struct opp_table *dev_pm_opp_set_paths(struct device *dev)

I got a bit confused. Why is this routine required exactly as
_of_find_paths() would have already done something similar ?

> +{
> +	struct opp_table *opp_table;
> +	int ret, i;
> +
> +	opp_table = dev_pm_opp_get_opp_table(dev);
> +	if (!opp_table)
> +		return ERR_PTR(-ENOMEM);
> +
> +	/* This should be called before OPPs are initialized */
> +	if (WARN_ON(!list_empty(&opp_table->opp_list))) {
> +		ret = -EBUSY;
> +		goto err;
> +	}
> +
> +	/* Another CPU that shares the OPP table has set the path */
> +	if (opp_table->paths)
> +		return opp_table;
> +
> +	opp_table->paths = kmalloc_array(opp_table->path_count,
> +					 sizeof(*opp_table->paths), GFP_KERNEL);
> +
> +	/* Find interconnect path(s) for the device */
> +	for (i = 0; i < opp_table->path_count; i++) {
> +		opp_table->paths[i] = of_icc_get_by_index(dev, i);
> +		if (IS_ERR(opp_table->paths[i])) {
> +			ret = PTR_ERR(opp_table->paths[i]);
> +			if (ret != -EPROBE_DEFER)
> +				dev_err(dev, "%s: Couldn't find path%d: %d\n",
> +					__func__, i, ret);
> +			goto err;
> +		}
> +	}
> +
> +	return opp_table;
> +
> +err:
> +	dev_pm_opp_put_opp_table(opp_table);
> +
> +	return ERR_PTR(ret);
> +}
> +EXPORT_SYMBOL_GPL(dev_pm_opp_set_paths);
> +
> +/**
> + * dev_pm_opp_put_paths() - Release interconnect path resources
> + * @opp_table: OPP table returned from dev_pm_opp_set_paths().
> + */
> +void dev_pm_opp_put_paths(struct opp_table *opp_table)
> +{
> +	int i;
> +
> +	if (!opp_table->paths) {
> +		pr_err("%s: Doesn't have paths set\n", __func__);
> +		return;
> +	}
> +
> +	/* Make sure there are no concurrent readers while updating opp_table */
> +	WARN_ON(!list_empty(&opp_table->opp_list));
> +
> +	for (i = opp_table->path_count - 1; i >= 0; i--)
> +		icc_put(opp_table->paths[i]);
> +
> +	_free_set_opp_data(opp_table);
> +
> +	kfree(opp_table->paths);
> +	opp_table->paths = NULL;
> +	opp_table->path_count = 0;
> +
> +	dev_pm_opp_put_opp_table(opp_table);
> +}
> +EXPORT_SYMBOL_GPL(dev_pm_opp_put_paths);
> +
>  /**
>   * dev_pm_opp_register_set_opp_helper() - Register custom set OPP helper
>   * @dev: Device for which the helper is getting registered.
> diff --git a/drivers/opp/of.c b/drivers/opp/of.c
> index c10c782d15aa..00af23280bc6 100644
> --- a/drivers/opp/of.c
> +++ b/drivers/opp/of.c
> @@ -16,6 +16,7 @@
>  #include <linux/cpu.h>
>  #include <linux/errno.h>
>  #include <linux/device.h>
> +#include <linux/interconnect.h>
>  #include <linux/of_device.h>
>  #include <linux/pm_domain.h>
>  #include <linux/slab.h>
> @@ -363,6 +364,45 @@ static int _of_opp_alloc_required_opps(struct opp_table *opp_table,
>  	return ret;
>  }
>  
> +int _of_find_paths(struct opp_table *opp_table, struct device *dev)
> +{
> +	struct device_node *np;
> +	int ret, i, count, num_paths;
> +
> +	np = of_node_get(dev->of_node);
> +	if (np) {

I would rather do:

        if (!np)
                return 0;

That will kill unnecessary line breaks and indentation.

> +		count = of_count_phandle_with_args(np, "interconnects",
> +						   "#interconnect-cells");

You can do of_node_put() right here as it isn't used afterwards.

> +		if (count % 2) {

Shouldn't this be 4 instead of 2 ? Each path is represented as:

<&noc MASTER_CPU &noc SLAVE_DDR>

which has 4 fields.

> +			dev_err(dev, "%s: Invalid interconnects values\n",
> +				__func__);
> +			ret = -EINVAL;
> +			goto put_of_node;
> +		}
> +
> +		num_paths = count / 2;
> +		opp_table->paths = kcalloc(num_paths, sizeof(*opp_table->paths),
> +					   GFP_KERNEL);
> +		if (!opp_table->paths) {
> +			ret = -ENOMEM;
> +			goto put_of_node;
> +		}
> +
> +		for (i = 0; i < num_paths; i++)
> +			opp_table->paths[i] = of_icc_get_by_index(dev, i);
> +
> +		opp_table->path_count = num_paths;
> +		of_node_put(np);
> +	}
> +
> +	return 0;
> +
> +put_of_node:
> +	of_node_put(np);
> +
> +	return ret;
> +}
> +
>  static bool _opp_is_supported(struct device *dev, struct opp_table *opp_table,
>  			      struct device_node *np)
>  {
> @@ -539,6 +579,64 @@ static int opp_parse_supplies(struct dev_pm_opp *opp, struct device *dev,
>  	return ret;
>  }
>  
> +static int opp_parse_icc_bw(struct dev_pm_opp *opp, struct device *dev,
> +			    struct opp_table *opp_table)
> +{
> +	struct property *prop = NULL;
> +	u32 *bandwidth;
> +	char name[] = "bandwidth-MBps";
> +	int count, i, j, ret;
> +
> +	/* Search for "bandwidth-MBps" */
> +	prop = of_find_property(opp->np, name, NULL);
> +
> +	/* Missing property is not a problem */
> +	if (!prop) {
> +		dev_dbg(dev, "%s: Missing %s property\n", __func__, name);
> +		return 0;
> +	}
> +
> +	if (!prop->value) {
> +		dev_dbg(dev, "%s: Missing %s value\n", __func__, name);
> +		return -ENODATA;
> +	}
> +
> +	/*
> +	 * Bandwidth consists of average and peak values like:
> +	 * bandwidth-MBps = <avg-MBps peak-MBps>
> +	 */
> +	count = prop->length / sizeof(u32);
> +	if (count % 2) {
> +		dev_err(dev, "%s: Invalid %s values\n", __func__, name);
> +		return -EINVAL;
> +	}
> +
> +	if (opp_table->path_count != count / 2) {
> +		dev_err(dev, "%s Mismatch between values and paths (%d %d)\n",
> +			__func__, opp_table->path_count, count / 2);
> +		return -EINVAL;
> +	}
> +
> +	bandwidth = kmalloc_array(count, sizeof(*bandwidth), GFP_KERNEL);
> +	if (!bandwidth)
> +		return -ENOMEM;
> +
> +	ret = of_property_read_u32_array(opp->np, name, bandwidth, count);
> +	if (ret) {
> +		dev_err(dev, "%s: Error parsing %s: %d\n", __func__, name, ret);
> +		goto free_bandwidth;
> +	}
> +	for (i = 0, j = 0; i < count; i++) {
> +		opp->bandwidth[i].avg = MBps_to_icc(bandwidth[j++]);
> +		opp->bandwidth[i].peak = MBps_to_icc(bandwidth[j++]);
> +	}
> +
> +free_bandwidth:
> +	kfree(bandwidth);
> +
> +	return ret;
> +}
> +
>  /**
>   * dev_pm_opp_of_remove_table() - Free OPP table entries created from static DT
>   *				  entries
> @@ -635,6 +733,10 @@ static struct dev_pm_opp *_opp_add_static_v2(struct opp_table *opp_table,
>  	if (opp_table->is_genpd)
>  		new_opp->pstate = pm_genpd_opp_to_performance_state(dev, new_opp);
>  
> +	ret = opp_parse_icc_bw(new_opp, dev, opp_table);
> +	if (ret)
> +		goto free_opp;
> +
>  	ret = _opp_add(dev, new_opp, opp_table, rate_not_available);
>  	if (ret) {
>  		/* Don't return error for duplicate OPPs */
> diff --git a/drivers/opp/opp.h b/drivers/opp/opp.h
> index 569b3525aa67..70a537f2dbd3 100644
> --- a/drivers/opp/opp.h
> +++ b/drivers/opp/opp.h
> @@ -24,6 +24,7 @@
>  
>  struct clk;
>  struct regulator;
> +struct icc_path;
>  
>  /* Lock to allow exclusive modification to the device and opp lists */
>  extern struct mutex opp_table_lock;
> @@ -62,6 +63,7 @@ extern struct list_head opp_tables;
>   * @rate:	Frequency in hertz
>   * @level:	Performance level
>   * @supplies:	Power supplies voltage/current values
> + * @bandwidth:	Interconnect bandwidth values
>   * @clock_latency_ns: Latency (in nanoseconds) of switching to this OPP's
>   *		frequency from any other OPP's frequency.
>   * @required_opps: List of OPPs that are required by this OPP.
> @@ -84,6 +86,7 @@ struct dev_pm_opp {
>  	unsigned int level;
>  
>  	struct dev_pm_opp_supply *supplies;
> +	struct dev_pm_opp_icc_bw *bandwidth;
>  
>  	unsigned long clock_latency_ns;
>  
> @@ -150,6 +153,8 @@ enum opp_table_access {
>   * @regulator_count: Number of power supply regulators. Its value can be -1
>   * (uninitialized), 0 (no opp-microvolt property) or > 0 (has opp-microvolt
>   * property).
> + * @paths: Interconnect path handles
> + * @path_count: Number of interconnect paths
>   * @genpd_performance_state: Device's power domain support performance state.
>   * @is_genpd: Marks if the OPP table belongs to a genpd.
>   * @set_opp: Platform specific set_opp callback
> @@ -194,6 +199,8 @@ struct opp_table {
>  	struct clk *clk;
>  	struct regulator **regulators;
>  	int regulator_count;
> +	struct icc_path **paths;
> +	unsigned int path_count;
>  	bool genpd_performance_state;
>  	bool is_genpd;
>  
> @@ -228,12 +235,14 @@ void _of_clear_opp_table(struct opp_table *opp_table);
>  struct opp_table *_managed_opp(struct device *dev, int index);
>  void _of_opp_free_required_opps(struct opp_table *opp_table,
>  				struct dev_pm_opp *opp);
> +int _of_find_paths(struct opp_table *opp_table, struct device *dev);
>  #else
>  static inline void _of_init_opp_table(struct opp_table *opp_table, struct device *dev, int index) {}
>  static inline void _of_clear_opp_table(struct opp_table *opp_table) {}
>  static inline struct opp_table *_managed_opp(struct device *dev, int index) { return NULL; }
>  static inline void _of_opp_free_required_opps(struct opp_table *opp_table,
>  					      struct dev_pm_opp *opp) {}
> +static inline int _of_find_paths(struct opp_table *opp_table, struct device *dev) { return 0; }
>  #endif
>  
>  #ifdef CONFIG_DEBUG_FS
> diff --git a/include/linux/pm_opp.h b/include/linux/pm_opp.h
> index 24c757a32a7b..dabee09a92b8 100644
> --- a/include/linux/pm_opp.h
> +++ b/include/linux/pm_opp.h
> @@ -43,6 +43,18 @@ struct dev_pm_opp_supply {
>  	unsigned long u_amp;
>  };
>  
> +/**
> + * struct dev_pm_opp_icc_bw - Interconnect bandwidth values
> + * @avg:	Average bandwidth corresponding to this OPP (in icc units)
> + * @peak:	Peak bandwidth corresponding to this OPP (in icc units)
> + *
> + * This structure stores the bandwidth values for a single interconnect path.
> + */
> +struct dev_pm_opp_icc_bw {
> +	u32 avg;
> +	u32 peak;
> +};
> +
>  /**
>   * struct dev_pm_opp_info - OPP freq/voltage/current values
>   * @rate:	Target clk rate in hz
> @@ -127,6 +139,8 @@ struct opp_table *dev_pm_opp_set_regulators(struct device *dev, const char * con
>  void dev_pm_opp_put_regulators(struct opp_table *opp_table);
>  struct opp_table *dev_pm_opp_set_clkname(struct device *dev, const char * name);
>  void dev_pm_opp_put_clkname(struct opp_table *opp_table);
> +struct opp_table *dev_pm_opp_set_paths(struct device *dev);
> +void dev_pm_opp_put_paths(struct opp_table *opp_table);
>  struct opp_table *dev_pm_opp_register_set_opp_helper(struct device *dev, int (*set_opp)(struct dev_pm_set_opp_data *data));
>  void dev_pm_opp_unregister_set_opp_helper(struct opp_table *opp_table);
>  struct opp_table *dev_pm_opp_set_genpd_virt_dev(struct device *dev, struct device *virt_dev, int index);

-- 
viresh

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v2 4/5] OPP: Update the bandwidth on OPP frequency changes
  2019-04-23 13:28 ` [PATCH v2 4/5] OPP: Update the bandwidth on OPP frequency changes Georgi Djakov
@ 2019-04-24  5:55   ` Viresh Kumar
  2019-04-24 10:05   ` Sibi Sankar
  1 sibling, 0 replies; 22+ messages in thread
From: Viresh Kumar @ 2019-04-24  5:55 UTC (permalink / raw)
  To: Georgi Djakov
  Cc: vireshk, sboyd, nm, robh+dt, mark.rutland, rjw, jcrouse,
	vincent.guittot, bjorn.andersson, amit.kucheria, seansw,
	daidavid1, evgreen, sibis, linux-pm, devicetree, linux-kernel,
	linux-arm-msm

On 23-04-19, 16:28, Georgi Djakov wrote:
> If the OPP bandwidth values are populated, we want to switch also the
> interconnect bandwidth in addition to frequency and voltage.
> 
> Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
> ---
>  drivers/opp/core.c | 9 ++++++++-
>  1 file changed, 8 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/opp/core.c b/drivers/opp/core.c
> index 97ee39ecdebd..91d1c2abfb3e 100644
> --- a/drivers/opp/core.c
> +++ b/drivers/opp/core.c
> @@ -707,7 +707,7 @@ int dev_pm_opp_set_rate(struct device *dev, unsigned long target_freq)
>  	unsigned long freq, old_freq;
>  	struct dev_pm_opp *old_opp, *opp;
>  	struct clk *clk;
> -	int ret;
> +	int ret, i;
>  
>  	if (unlikely(!target_freq)) {
>  		dev_err(dev, "%s: Invalid target frequency %lu\n", __func__,
> @@ -780,6 +780,13 @@ int dev_pm_opp_set_rate(struct device *dev, unsigned long target_freq)
>  		ret = _generic_set_opp_clk_only(dev, clk, freq);
>  	}
>  
> +	if (!ret && !IS_ERR_OR_NULL(opp_table->paths)) {

Can paths ever have a error value ? I believe only checking for NULL
is sufficient ?

> +		for (i = 0; i < opp_table->path_count; i++) {
> +			icc_set_bw(opp_table->paths[i], opp->bandwidth[i].avg,
> +				   opp->bandwidth[i].peak);
> +		}
> +	}
> +

I will set the path after required_opps are set.

>  	/* Scaling down? Configure required OPPs after frequency */
>  	if (!ret && freq < old_freq) {
>  		ret = _set_required_opps(dev, opp_table, opp);

-- 
viresh

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v2 1/5] dt-bindings: opp: Introduce bandwidth-MBps bindings
  2019-04-23 13:28 ` [PATCH v2 1/5] dt-bindings: opp: Introduce bandwidth-MBps bindings Georgi Djakov
  2019-04-24  5:33   ` Viresh Kumar
@ 2019-04-24  6:46   ` Rajendra Nayak
  2019-04-24  6:49     ` Viresh Kumar
  2019-04-24  8:44   ` Sibi Sankar
  2 siblings, 1 reply; 22+ messages in thread
From: Rajendra Nayak @ 2019-04-24  6:46 UTC (permalink / raw)
  To: Georgi Djakov, vireshk, sboyd, nm, robh+dt, mark.rutland, rjw
  Cc: jcrouse, vincent.guittot, bjorn.andersson, amit.kucheria, seansw,
	daidavid1, evgreen, sibis, linux-pm, devicetree, linux-kernel,
	linux-arm-msm



On 4/23/2019 6:58 PM, Georgi Djakov wrote:
> In addition to frequency and voltage, some devices may have bandwidth
> requirements for their interconnect throughput - for example a CPU
> or GPU may also need to increase or decrease their bandwidth to DDR
> memory based on the current operating performance point.
> 
> Extend the OPP tables with additional property to describe the bandwidth
> needs of a device. The average and peak bandwidth values depend on the
> hardware and its properties.
> 
> Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
> ---
>   Documentation/devicetree/bindings/opp/opp.txt | 38 +++++++++++++++++++
>   .../devicetree/bindings/property-units.txt    |  4 ++
>   2 files changed, 42 insertions(+)
> 
> diff --git a/Documentation/devicetree/bindings/opp/opp.txt b/Documentation/devicetree/bindings/opp/opp.txt
> index 76b6c79604a5..830f0206aea7 100644
> --- a/Documentation/devicetree/bindings/opp/opp.txt
> +++ b/Documentation/devicetree/bindings/opp/opp.txt
> @@ -132,6 +132,9 @@ Optional properties:
>   - opp-level: A value representing the performance level of the device,
>     expressed as a 32-bit integer.
>   
> +- bandwidth-MBps: The interconnect bandwidth is specified with an array containing
> +  the two integer values for average and peak bandwidth in megabytes per second.
> +
>   - clock-latency-ns: Specifies the maximum possible transition latency (in
>     nanoseconds) for switching to this OPP from any other OPP.
>   
> @@ -546,3 +549,38 @@ Example 6: opp-microvolt-<name>, opp-microamp-<name>:
>   		};
>   	};
>   };
> +
> +Example 7: bandwidth-MBps:
> +Average and peak bandwidth values for the interconnects between CPU and DDR
> +memory and also between CPU and L3 are defined per each OPP. Bandwidth of both
> +interconnects is scaled together with CPU frequency.
> +
> +/ {
> +	cpus {
> +		CPU0: cpu@0 {
> +			compatible = "arm,cortex-a53", "arm,armv8";
> +			...
> +			operating-points-v2 = <&cpu_opp_table>;
> +			/* path between CPU and DDR memory and CPU and L3 */
> +			interconnects = <&noc MASTER_CPU &noc SLAVE_DDR>,
> +					<&noc MASTER_CPU &noc SLAVE_L3>;
> +		};
> +	};
> +
> +	cpu_opp_table: cpu_opp_table {
> +		compatible = "operating-points-v2";
> +		opp-shared;
> +
> +		opp-200000000 {
> +			opp-hz = /bits/ 64 <200000000>;
> +			/* CPU<->DDR bandwidth: 457 MB/s average, 1525 MB/s peak */
> +			 * CPU<->L3 bandwidth: 914 MB/s average, 3050 MB/s peak */
> +			bandwidth-MBps = <457 1525>, <914 3050>;

Should this also have a bandwidth-MBps-name perhaps? Without that I guess we assume
the order in which we specify the interconnects is the same as the order here?
  
> +		};
> +		opp-400000000 {
> +			opp-hz = /bits/ 64 <400000000>;
> +			/* CPU<->DDR bandwidth: 915 MB/s average, 3051 MB/s peak */
> +			 * CPU<->L3 bandwidth: 1828 MB/s average, 6102 MB/s peak */
> +			bandwidth-MBps = <915 3051>, <1828 6102>;
> +		};
> +	};
> diff --git a/Documentation/devicetree/bindings/property-units.txt b/Documentation/devicetree/bindings/property-units.txt
> index bfd33734faca..9c3dbefcdae8 100644
> --- a/Documentation/devicetree/bindings/property-units.txt
> +++ b/Documentation/devicetree/bindings/property-units.txt
> @@ -41,3 +41,7 @@ Temperature
>   Pressure
>   ----------------------------------------
>   -kpascal	: kiloPascal
> +
> +Throughput
> +----------------------------------------
> +-MBps		: megabytes per second
> 

-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member
of Code Aurora Forum, hosted by The Linux Foundation

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v2 1/5] dt-bindings: opp: Introduce bandwidth-MBps bindings
  2019-04-24  6:46   ` Rajendra Nayak
@ 2019-04-24  6:49     ` Viresh Kumar
  2019-04-24  9:00       ` Sibi Sankar
  0 siblings, 1 reply; 22+ messages in thread
From: Viresh Kumar @ 2019-04-24  6:49 UTC (permalink / raw)
  To: Rajendra Nayak
  Cc: Georgi Djakov, vireshk, sboyd, nm, robh+dt, mark.rutland, rjw,
	jcrouse, vincent.guittot, bjorn.andersson, amit.kucheria, seansw,
	daidavid1, evgreen, sibis, linux-pm, devicetree, linux-kernel,
	linux-arm-msm

On 24-04-19, 12:16, Rajendra Nayak wrote:
> 
> 
> On 4/23/2019 6:58 PM, Georgi Djakov wrote:
> > In addition to frequency and voltage, some devices may have bandwidth
> > requirements for their interconnect throughput - for example a CPU
> > or GPU may also need to increase or decrease their bandwidth to DDR
> > memory based on the current operating performance point.
> > 
> > Extend the OPP tables with additional property to describe the bandwidth
> > needs of a device. The average and peak bandwidth values depend on the
> > hardware and its properties.
> > 
> > Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
> > ---
> >   Documentation/devicetree/bindings/opp/opp.txt | 38 +++++++++++++++++++
> >   .../devicetree/bindings/property-units.txt    |  4 ++
> >   2 files changed, 42 insertions(+)
> > 
> > diff --git a/Documentation/devicetree/bindings/opp/opp.txt b/Documentation/devicetree/bindings/opp/opp.txt
> > index 76b6c79604a5..830f0206aea7 100644
> > --- a/Documentation/devicetree/bindings/opp/opp.txt
> > +++ b/Documentation/devicetree/bindings/opp/opp.txt
> > @@ -132,6 +132,9 @@ Optional properties:
> >   - opp-level: A value representing the performance level of the device,
> >     expressed as a 32-bit integer.
> > +- bandwidth-MBps: The interconnect bandwidth is specified with an array containing
> > +  the two integer values for average and peak bandwidth in megabytes per second.
> > +
> >   - clock-latency-ns: Specifies the maximum possible transition latency (in
> >     nanoseconds) for switching to this OPP from any other OPP.
> > @@ -546,3 +549,38 @@ Example 6: opp-microvolt-<name>, opp-microamp-<name>:
> >   		};
> >   	};
> >   };
> > +
> > +Example 7: bandwidth-MBps:
> > +Average and peak bandwidth values for the interconnects between CPU and DDR
> > +memory and also between CPU and L3 are defined per each OPP. Bandwidth of both
> > +interconnects is scaled together with CPU frequency.
> > +
> > +/ {
> > +	cpus {
> > +		CPU0: cpu@0 {
> > +			compatible = "arm,cortex-a53", "arm,armv8";
> > +			...
> > +			operating-points-v2 = <&cpu_opp_table>;
> > +			/* path between CPU and DDR memory and CPU and L3 */
> > +			interconnects = <&noc MASTER_CPU &noc SLAVE_DDR>,
> > +					<&noc MASTER_CPU &noc SLAVE_L3>;
> > +		};
> > +	};
> > +
> > +	cpu_opp_table: cpu_opp_table {
> > +		compatible = "operating-points-v2";
> > +		opp-shared;
> > +
> > +		opp-200000000 {
> > +			opp-hz = /bits/ 64 <200000000>;
> > +			/* CPU<->DDR bandwidth: 457 MB/s average, 1525 MB/s peak */
> > +			 * CPU<->L3 bandwidth: 914 MB/s average, 3050 MB/s peak */
> > +			bandwidth-MBps = <457 1525>, <914 3050>;
> 
> Should this also have a bandwidth-MBps-name perhaps? Without that I guess we assume
> the order in which we specify the interconnects is the same as the order here?

Right, so I suggested not to add the -name property and to rely on the
order. Though I missed that he hasn't mentioned the order thing here.

@Georgi: Please mention above in the binding that the order is same as
interconnects binding.

-- 
viresh

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v2 1/5] dt-bindings: opp: Introduce bandwidth-MBps bindings
  2019-04-23 13:28 ` [PATCH v2 1/5] dt-bindings: opp: Introduce bandwidth-MBps bindings Georgi Djakov
  2019-04-24  5:33   ` Viresh Kumar
  2019-04-24  6:46   ` Rajendra Nayak
@ 2019-04-24  8:44   ` Sibi Sankar
  2 siblings, 0 replies; 22+ messages in thread
From: Sibi Sankar @ 2019-04-24  8:44 UTC (permalink / raw)
  To: Georgi Djakov, vireshk, sboyd, nm, robh+dt, mark.rutland, rjw
  Cc: jcrouse, vincent.guittot, bjorn.andersson, amit.kucheria, seansw,
	daidavid1, evgreen, linux-pm, devicetree, linux-kernel,
	linux-arm-msm

Hey Georgi,

On 4/23/19 6:58 PM, Georgi Djakov wrote:
> In addition to frequency and voltage, some devices may have bandwidth
> requirements for their interconnect throughput - for example a CPU
> or GPU may also need to increase or decrease their bandwidth to DDR
> memory based on the current operating performance point.
> 
> Extend the OPP tables with additional property to describe the bandwidth
> needs of a device. The average and peak bandwidth values depend on the
> hardware and its properties.
> 
> Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
> ---
>   Documentation/devicetree/bindings/opp/opp.txt | 38 +++++++++++++++++++
>   .../devicetree/bindings/property-units.txt    |  4 ++
>   2 files changed, 42 insertions(+)
> 
> diff --git a/Documentation/devicetree/bindings/opp/opp.txt b/Documentation/devicetree/bindings/opp/opp.txt
> index 76b6c79604a5..830f0206aea7 100644
> --- a/Documentation/devicetree/bindings/opp/opp.txt
> +++ b/Documentation/devicetree/bindings/opp/opp.txt
> @@ -132,6 +132,9 @@ Optional properties:
>   - opp-level: A value representing the performance level of the device,
>     expressed as a 32-bit integer.
>   
> +- bandwidth-MBps: The interconnect bandwidth is specified with an array containing
> +  the two integer values for average and peak bandwidth in megabytes per second.
> +
>   - clock-latency-ns: Specifies the maximum possible transition latency (in
>     nanoseconds) for switching to this OPP from any other OPP.
>   
> @@ -546,3 +549,38 @@ Example 6: opp-microvolt-<name>, opp-microamp-<name>:
>   		};
>   	};
>   };
> +
> +Example 7: bandwidth-MBps:
> +Average and peak bandwidth values for the interconnects between CPU and DDR
> +memory and also between CPU and L3 are defined per each OPP. Bandwidth of both
> +interconnects is scaled together with CPU frequency.
> +
> +/ {
> +	cpus {
> +		CPU0: cpu@0 {
> +			compatible = "arm,cortex-a53", "arm,armv8";
> +			...
> +			operating-points-v2 = <&cpu_opp_table>;
> +			/* path between CPU and DDR memory and CPU and L3 */
> +			interconnects = <&noc MASTER_CPU &noc SLAVE_DDR>,
> +					<&noc MASTER_CPU &noc SLAVE_L3>;
> +		};
> +	};
> +
> +	cpu_opp_table: cpu_opp_table {
> +		compatible = "operating-points-v2";
> +		opp-shared;
> +
> +		opp-200000000 {
> +			opp-hz = /bits/ 64 <200000000>;
> +			/* CPU<->DDR bandwidth: 457 MB/s average, 1525 MB/s peak */
> +			 * CPU<->L3 bandwidth: 914 MB/s average, 3050 MB/s peak */
> +			bandwidth-MBps = <457 1525>, <914 3050>;
> +		};
> +		opp-400000000 {
> +			opp-hz = /bits/ 64 <400000000>;
> +			/* CPU<->DDR bandwidth: 915 MB/s average, 3051 MB/s peak */
> +			 * CPU<->L3 bandwidth: 1828 MB/s average, 6102 MB/s peak */
> +			bandwidth-MBps = <915 3051>, <1828 6102>;
> +		};
> +	};

nit: missed closing braces

> diff --git a/Documentation/devicetree/bindings/property-units.txt b/Documentation/devicetree/bindings/property-units.txt
> index bfd33734faca..9c3dbefcdae8 100644
> --- a/Documentation/devicetree/bindings/property-units.txt
> +++ b/Documentation/devicetree/bindings/property-units.txt
> @@ -41,3 +41,7 @@ Temperature
>   Pressure
>   ----------------------------------------
>   -kpascal	: kiloPascal
> +
> +Throughput
> +----------------------------------------
> +-MBps		: megabytes per second
> 

-- 
Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc, is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v2 1/5] dt-bindings: opp: Introduce bandwidth-MBps bindings
  2019-04-24  6:49     ` Viresh Kumar
@ 2019-04-24  9:00       ` Sibi Sankar
  2019-04-24  9:05         ` Viresh Kumar
  2019-04-25  4:24         ` Bjorn Andersson
  0 siblings, 2 replies; 22+ messages in thread
From: Sibi Sankar @ 2019-04-24  9:00 UTC (permalink / raw)
  To: Viresh Kumar, Rajendra Nayak
  Cc: Georgi Djakov, vireshk, sboyd, nm, robh+dt, mark.rutland, rjw,
	jcrouse, vincent.guittot, bjorn.andersson, amit.kucheria, seansw,
	daidavid1, evgreen, linux-pm, devicetree, linux-kernel,
	linux-arm-msm

Hey Viresh,

On 4/24/19 12:19 PM, Viresh Kumar wrote:
> On 24-04-19, 12:16, Rajendra Nayak wrote:
>>
>>
>> On 4/23/2019 6:58 PM, Georgi Djakov wrote:
>>> In addition to frequency and voltage, some devices may have bandwidth
>>> requirements for their interconnect throughput - for example a CPU
>>> or GPU may also need to increase or decrease their bandwidth to DDR
>>> memory based on the current operating performance point.
>>>
>>> Extend the OPP tables with additional property to describe the bandwidth
>>> needs of a device. The average and peak bandwidth values depend on the
>>> hardware and its properties.
>>>
>>> Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
>>> ---
>>>    Documentation/devicetree/bindings/opp/opp.txt | 38 +++++++++++++++++++
>>>    .../devicetree/bindings/property-units.txt    |  4 ++
>>>    2 files changed, 42 insertions(+)
>>>
>>> diff --git a/Documentation/devicetree/bindings/opp/opp.txt b/Documentation/devicetree/bindings/opp/opp.txt
>>> index 76b6c79604a5..830f0206aea7 100644
>>> --- a/Documentation/devicetree/bindings/opp/opp.txt
>>> +++ b/Documentation/devicetree/bindings/opp/opp.txt
>>> @@ -132,6 +132,9 @@ Optional properties:
>>>    - opp-level: A value representing the performance level of the device,
>>>      expressed as a 32-bit integer.
>>> +- bandwidth-MBps: The interconnect bandwidth is specified with an array containing
>>> +  the two integer values for average and peak bandwidth in megabytes per second.
>>> +
>>>    - clock-latency-ns: Specifies the maximum possible transition latency (in
>>>      nanoseconds) for switching to this OPP from any other OPP.
>>> @@ -546,3 +549,38 @@ Example 6: opp-microvolt-<name>, opp-microamp-<name>:
>>>    		};
>>>    	};
>>>    };
>>> +
>>> +Example 7: bandwidth-MBps:
>>> +Average and peak bandwidth values for the interconnects between CPU and DDR
>>> +memory and also between CPU and L3 are defined per each OPP. Bandwidth of both
>>> +interconnects is scaled together with CPU frequency.
>>> +
>>> +/ {
>>> +	cpus {
>>> +		CPU0: cpu@0 {
>>> +			compatible = "arm,cortex-a53", "arm,armv8";
>>> +			...
>>> +			operating-points-v2 = <&cpu_opp_table>;
>>> +			/* path between CPU and DDR memory and CPU and L3 */
>>> +			interconnects = <&noc MASTER_CPU &noc SLAVE_DDR>,
>>> +					<&noc MASTER_CPU &noc SLAVE_L3>;
>>> +		};
>>> +	};
>>> +
>>> +	cpu_opp_table: cpu_opp_table {
>>> +		compatible = "operating-points-v2";
>>> +		opp-shared;
>>> +
>>> +		opp-200000000 {
>>> +			opp-hz = /bits/ 64 <200000000>;
>>> +			/* CPU<->DDR bandwidth: 457 MB/s average, 1525 MB/s peak */
>>> +			 * CPU<->L3 bandwidth: 914 MB/s average, 3050 MB/s peak */
>>> +			bandwidth-MBps = <457 1525>, <914 3050>;
>>
>> Should this also have a bandwidth-MBps-name perhaps? Without that I guess we assume
>> the order in which we specify the interconnects is the same as the order here?
> 
> Right, so I suggested not to add the -name property and to rely on the
> order. Though I missed that he hasn't mentioned the order thing here.

by skipping names, aren't we forced to specify all the specified paths
bandwidths for each opp even if it is redundant? i.e if the first/second
icc path doesn't have to change across a few opps but if the other path
does need to change this scheme would force it to be included and will
try to set the first/second path again.


e.g: Here the first path does not have to change across these two opps
but have to specified nonetheless since we omit names.

+		opp-1200000000 {
+			opp-hz = /bits/ 64 <1200000000>;
+			bandwidth-MBps = <457 1525>, <914 3050>;
+		};
+		opp-1400000000 {
+			opp-hz = /bits/ 64 <1400000000>;
+			bandwidth-MBps = <457 1525>, <1828 6102>;
+		};


> 
> @Georgi: Please mention above in the binding that the order is same as
> interconnects binding.
> 

-- 
Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc, is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v2 1/5] dt-bindings: opp: Introduce bandwidth-MBps bindings
  2019-04-24  9:00       ` Sibi Sankar
@ 2019-04-24  9:05         ` Viresh Kumar
  2019-04-25  4:24         ` Bjorn Andersson
  1 sibling, 0 replies; 22+ messages in thread
From: Viresh Kumar @ 2019-04-24  9:05 UTC (permalink / raw)
  To: Sibi Sankar
  Cc: Rajendra Nayak, Georgi Djakov, vireshk, sboyd, nm, robh+dt,
	mark.rutland, rjw, jcrouse, vincent.guittot, bjorn.andersson,
	amit.kucheria, seansw, daidavid1, evgreen, linux-pm, devicetree,
	linux-kernel, linux-arm-msm

On 24-04-19, 14:30, Sibi Sankar wrote:
> Hey Viresh,
> 
> On 4/24/19 12:19 PM, Viresh Kumar wrote:
> > On 24-04-19, 12:16, Rajendra Nayak wrote:
> > > 
> > > 
> > > On 4/23/2019 6:58 PM, Georgi Djakov wrote:
> > > > In addition to frequency and voltage, some devices may have bandwidth
> > > > requirements for their interconnect throughput - for example a CPU
> > > > or GPU may also need to increase or decrease their bandwidth to DDR
> > > > memory based on the current operating performance point.
> > > > 
> > > > Extend the OPP tables with additional property to describe the bandwidth
> > > > needs of a device. The average and peak bandwidth values depend on the
> > > > hardware and its properties.
> > > > 
> > > > Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
> > > > ---
> > > >    Documentation/devicetree/bindings/opp/opp.txt | 38 +++++++++++++++++++
> > > >    .../devicetree/bindings/property-units.txt    |  4 ++
> > > >    2 files changed, 42 insertions(+)
> > > > 
> > > > diff --git a/Documentation/devicetree/bindings/opp/opp.txt b/Documentation/devicetree/bindings/opp/opp.txt
> > > > index 76b6c79604a5..830f0206aea7 100644
> > > > --- a/Documentation/devicetree/bindings/opp/opp.txt
> > > > +++ b/Documentation/devicetree/bindings/opp/opp.txt
> > > > @@ -132,6 +132,9 @@ Optional properties:
> > > >    - opp-level: A value representing the performance level of the device,
> > > >      expressed as a 32-bit integer.
> > > > +- bandwidth-MBps: The interconnect bandwidth is specified with an array containing
> > > > +  the two integer values for average and peak bandwidth in megabytes per second.
> > > > +
> > > >    - clock-latency-ns: Specifies the maximum possible transition latency (in
> > > >      nanoseconds) for switching to this OPP from any other OPP.
> > > > @@ -546,3 +549,38 @@ Example 6: opp-microvolt-<name>, opp-microamp-<name>:
> > > >    		};
> > > >    	};
> > > >    };
> > > > +
> > > > +Example 7: bandwidth-MBps:
> > > > +Average and peak bandwidth values for the interconnects between CPU and DDR
> > > > +memory and also between CPU and L3 are defined per each OPP. Bandwidth of both
> > > > +interconnects is scaled together with CPU frequency.
> > > > +
> > > > +/ {
> > > > +	cpus {
> > > > +		CPU0: cpu@0 {
> > > > +			compatible = "arm,cortex-a53", "arm,armv8";
> > > > +			...
> > > > +			operating-points-v2 = <&cpu_opp_table>;
> > > > +			/* path between CPU and DDR memory and CPU and L3 */
> > > > +			interconnects = <&noc MASTER_CPU &noc SLAVE_DDR>,
> > > > +					<&noc MASTER_CPU &noc SLAVE_L3>;
> > > > +		};
> > > > +	};
> > > > +
> > > > +	cpu_opp_table: cpu_opp_table {
> > > > +		compatible = "operating-points-v2";
> > > > +		opp-shared;
> > > > +
> > > > +		opp-200000000 {
> > > > +			opp-hz = /bits/ 64 <200000000>;
> > > > +			/* CPU<->DDR bandwidth: 457 MB/s average, 1525 MB/s peak */
> > > > +			 * CPU<->L3 bandwidth: 914 MB/s average, 3050 MB/s peak */
> > > > +			bandwidth-MBps = <457 1525>, <914 3050>;
> > > 
> > > Should this also have a bandwidth-MBps-name perhaps? Without that I guess we assume
> > > the order in which we specify the interconnects is the same as the order here?
> > 
> > Right, so I suggested not to add the -name property and to rely on the
> > order. Though I missed that he hasn't mentioned the order thing here.
> 
> by skipping names, aren't we forced to specify all the specified paths
> bandwidths for each opp even if it is redundant? i.e if the first/second
> icc path doesn't have to change across a few opps but if the other path
> does need to change this scheme would force it to be included and will
> try to set the first/second path again.
> 
> 
> e.g: Here the first path does not have to change across these two opps
> but have to specified nonetheless since we omit names.

The code should be efficient enough to return early in this case, but
these value must always be present. We can get some sort of
relationship between multiple paths that are part of same OPP, but not
between different OPPs.

> +		opp-1200000000 {
> +			opp-hz = /bits/ 64 <1200000000>;
> +			bandwidth-MBps = <457 1525>, <914 3050>;
> +		};
> +		opp-1400000000 {
> +			opp-hz = /bits/ 64 <1400000000>;
> +			bandwidth-MBps = <457 1525>, <1828 6102>;
> +		};
> 

-- 
viresh

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v2 4/5] OPP: Update the bandwidth on OPP frequency changes
  2019-04-23 13:28 ` [PATCH v2 4/5] OPP: Update the bandwidth on OPP frequency changes Georgi Djakov
  2019-04-24  5:55   ` Viresh Kumar
@ 2019-04-24 10:05   ` Sibi Sankar
  1 sibling, 0 replies; 22+ messages in thread
From: Sibi Sankar @ 2019-04-24 10:05 UTC (permalink / raw)
  To: Georgi Djakov, vireshk, sboyd, nm, robh+dt, mark.rutland, rjw
  Cc: jcrouse, vincent.guittot, bjorn.andersson, amit.kucheria, seansw,
	daidavid1, evgreen, linux-pm, devicetree, linux-kernel,
	linux-arm-msm

Hey Georgi,

On 4/23/19 6:58 PM, Georgi Djakov wrote:
> If the OPP bandwidth values are populated, we want to switch also the
> interconnect bandwidth in addition to frequency and voltage.
> 
> Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
> ---
>   drivers/opp/core.c | 9 ++++++++-
>   1 file changed, 8 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/opp/core.c b/drivers/opp/core.c
> index 97ee39ecdebd..91d1c2abfb3e 100644
> --- a/drivers/opp/core.c
> +++ b/drivers/opp/core.c
> @@ -707,7 +707,7 @@ int dev_pm_opp_set_rate(struct device *dev, unsigned long target_freq)
>   	unsigned long freq, old_freq;
>   	struct dev_pm_opp *old_opp, *opp;
>   	struct clk *clk;
> -	int ret;
> +	int ret, i;
>   
>   	if (unlikely(!target_freq)) {
>   		dev_err(dev, "%s: Invalid target frequency %lu\n", __func__,
> @@ -780,6 +780,13 @@ int dev_pm_opp_set_rate(struct device *dev, unsigned long target_freq)
>   		ret = _generic_set_opp_clk_only(dev, clk, freq);
>   	}
>   
> +	if (!ret && !IS_ERR_OR_NULL(opp_table->paths)) {
> +		for (i = 0; i < opp_table->path_count; i++) {
> +			icc_set_bw(opp_table->paths[i], opp->bandwidth[i].avg,
> +				   opp->bandwidth[i].peak);
> +		}
> +	}

An helper funcion dev_pm_opp_set_bw() would be needed by devices that
use alternative ways of scaling like "qcom-cpufreq-hw". I can probably
do that when I get ddr scaling working on sdm845 with your series.

> +
>   	/* Scaling down? Configure required OPPs after frequency */
>   	if (!ret && freq < old_freq) {
>   		ret = _set_required_opps(dev, opp_table, opp);
> 

-- 
Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc, is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v2 1/5] dt-bindings: opp: Introduce bandwidth-MBps bindings
  2019-04-24  9:00       ` Sibi Sankar
  2019-04-24  9:05         ` Viresh Kumar
@ 2019-04-25  4:24         ` Bjorn Andersson
  1 sibling, 0 replies; 22+ messages in thread
From: Bjorn Andersson @ 2019-04-25  4:24 UTC (permalink / raw)
  To: Sibi Sankar
  Cc: Viresh Kumar, Rajendra Nayak, Georgi Djakov, vireshk, sboyd, nm,
	robh+dt, mark.rutland, rjw, jcrouse, vincent.guittot,
	amit.kucheria, seansw, daidavid1, evgreen, linux-pm, devicetree,
	linux-kernel, linux-arm-msm

On Wed 24 Apr 02:00 PDT 2019, Sibi Sankar wrote:
> On 4/24/19 12:19 PM, Viresh Kumar wrote:
> > On 24-04-19, 12:16, Rajendra Nayak wrote:
> > > On 4/23/2019 6:58 PM, Georgi Djakov wrote:
[..]
> > > > +/ {
> > > > +	cpus {
> > > > +		CPU0: cpu@0 {
> > > > +			compatible = "arm,cortex-a53", "arm,armv8";
> > > > +			...
> > > > +			operating-points-v2 = <&cpu_opp_table>;
> > > > +			/* path between CPU and DDR memory and CPU and L3 */
> > > > +			interconnects = <&noc MASTER_CPU &noc SLAVE_DDR>,
> > > > +					<&noc MASTER_CPU &noc SLAVE_L3>;
> > > > +		};
> > > > +	};
> > > > +
> > > > +	cpu_opp_table: cpu_opp_table {
> > > > +		compatible = "operating-points-v2";
> > > > +		opp-shared;
> > > > +
> > > > +		opp-200000000 {
> > > > +			opp-hz = /bits/ 64 <200000000>;
> > > > +			/* CPU<->DDR bandwidth: 457 MB/s average, 1525 MB/s peak */
> > > > +			 * CPU<->L3 bandwidth: 914 MB/s average, 3050 MB/s peak */
> > > > +			bandwidth-MBps = <457 1525>, <914 3050>;
> > > 
> > > Should this also have a bandwidth-MBps-name perhaps? Without that I guess we assume
> > > the order in which we specify the interconnects is the same as the order here?
> > 
> > Right, so I suggested not to add the -name property and to rely on the
> > order. Though I missed that he hasn't mentioned the order thing here.
> 
> by skipping names, aren't we forced to specify all the specified paths
> bandwidths for each opp even if it is redundant? i.e if the first/second
> icc path doesn't have to change across a few opps but if the other path
> does need to change this scheme would force it to be included and will
> try to set the first/second path again.
> 
> 
> e.g: Here the first path does not have to change across these two opps
> but have to specified nonetheless since we omit names.
> 

If this is a pair in the middle of the list, we would either have to
define how non-specified values are inherited from neighbouring nodes or
you will get different behavior if you're coming from a lower or a
higher opp.

I think it looks clearer to just be explicit and repeat the values.

Regards,
Bjorn

> +		opp-1200000000 {
> +			opp-hz = /bits/ 64 <1200000000>;
> +			bandwidth-MBps = <457 1525>, <914 3050>;
> +		};
> +		opp-1400000000 {
> +			opp-hz = /bits/ 64 <1400000000>;
> +			bandwidth-MBps = <457 1525>, <1828 6102>;
> +		};

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v2 2/5] interconnect: Add of_icc_get_by_index() helper function
  2019-04-23 13:28 ` [PATCH v2 2/5] interconnect: Add of_icc_get_by_index() helper function Georgi Djakov
@ 2019-05-07 11:59   ` Sibi Sankar
  2019-06-27  5:56     ` Sibi Sankar
  0 siblings, 1 reply; 22+ messages in thread
From: Sibi Sankar @ 2019-05-07 11:59 UTC (permalink / raw)
  To: Georgi Djakov, vireshk, sboyd, nm, robh+dt, mark.rutland, rjw
  Cc: jcrouse, vincent.guittot, bjorn.andersson, amit.kucheria, seansw,
	daidavid1, evgreen, linux-pm, devicetree, linux-kernel,
	linux-arm-msm

Hey Georgi,

On 4/23/19 6:58 PM, Georgi Djakov wrote:
> This is the same as the traditional of_icc_get() function, but the
> difference is that it takes index as an argument, instead of name.
> 
> Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
> ---
>   drivers/interconnect/core.c  | 45 ++++++++++++++++++++++++++++--------
>   include/linux/interconnect.h |  6 +++++
>   2 files changed, 41 insertions(+), 10 deletions(-)
> 
> diff --git a/drivers/interconnect/core.c b/drivers/interconnect/core.c
> index 871eb4bc4efc..a7c3c262c974 100644
> --- a/drivers/interconnect/core.c
> +++ b/drivers/interconnect/core.c
> @@ -295,9 +295,9 @@ static struct icc_node *of_icc_get_from_provider(struct of_phandle_args *spec)
>   }
>   
>   /**
> - * of_icc_get() - get a path handle from a DT node based on name
> + * of_icc_get_by_index() - get a path handle from a DT node based on index
>    * @dev: device pointer for the consumer device
> - * @name: interconnect path name
> + * @idx: interconnect path index
>    *
>    * This function will search for a path between two endpoints and return an
>    * icc_path handle on success. Use icc_put() to release constraints when they
> @@ -309,13 +309,12 @@ static struct icc_node *of_icc_get_from_provider(struct of_phandle_args *spec)
>    * Return: icc_path pointer on success or ERR_PTR() on error. NULL is returned
>    * when the API is disabled or the "interconnects" DT property is missing.
>    */
> -struct icc_path *of_icc_get(struct device *dev, const char *name)
> +struct icc_path *of_icc_get_by_index(struct device *dev, int idx)
>   {
>   	struct icc_path *path = ERR_PTR(-EPROBE_DEFER);
>   	struct icc_node *src_node, *dst_node;
>   	struct device_node *np = NULL;
>   	struct of_phandle_args src_args, dst_args;
> -	int idx = 0;
>   	int ret;
>   
>   	if (!dev || !dev->of_node)
> @@ -335,12 +334,6 @@ struct icc_path *of_icc_get(struct device *dev, const char *name)
>   	 * lets support only global ids and extend this in the future if needed
>   	 * without breaking DT compatibility.
>   	 */
> -	if (name) {
> -		idx = of_property_match_string(np, "interconnect-names", name);
> -		if (idx < 0)
> -			return ERR_PTR(idx);
> -	}
> -
>   	ret = of_parse_phandle_with_args(np, "interconnects",
>   					 "#interconnect-cells", idx * 2,
>   					 &src_args);
> @@ -383,6 +376,38 @@ struct icc_path *of_icc_get(struct device *dev, const char *name)
>   
>   	return path;
>   }
> +
> +/**
> + * of_icc_get() - get a path handle from a DT node based on name
> + * @dev: device pointer for the consumer device
> + * @name: interconnect path name
> + *
> + * This function will search for a path between two endpoints and return an
> + * icc_path handle on success. Use icc_put() to release constraints when they
> + * are not needed anymore.
> + * If the interconnect API is disabled, NULL is returned and the consumer
> + * drivers will still build. Drivers are free to handle this specifically,
> + * but they don't have to.
> + *
> + * Return: icc_path pointer on success or ERR_PTR() on error. NULL is returned
> + * when the API is disabled or the "interconnects" DT property is missing.
> + */
> +struct icc_path *of_icc_get(struct device *dev, const char *name)
> +{
> +	int idx = 0;
> +
> +	if (!dev || !dev->of_node)
> +		return ERR_PTR(-ENODEV);
> +
> +	if (name) {
> +		idx = of_property_match_string(dev->of_node,
> +					       "interconnect-names", name);
> +		if (idx < 0)
> +			return ERR_PTR(idx);
> +	}
> +
> +	return of_icc_get_by_index(dev, idx);
> +}
>   EXPORT_SYMBOL_GPL(of_icc_get);
>   
>   /**
> diff --git a/include/linux/interconnect.h b/include/linux/interconnect.h
> index dc25864755ba..0e430b3b6519 100644
> --- a/include/linux/interconnect.h
> +++ b/include/linux/interconnect.h
> @@ -28,6 +28,7 @@ struct device;
>   struct icc_path *icc_get(struct device *dev, const int src_id,
>   			 const int dst_id);
>   struct icc_path *of_icc_get(struct device *dev, const char *name);
> +struct icc_path *of_icc_get_by_index(struct device *dev, int idx);
>   void icc_put(struct icc_path *path);
>   int icc_set_bw(struct icc_path *path, u32 avg_bw, u32 peak_bw);
>   
> @@ -45,6 +46,11 @@ static inline struct icc_path *of_icc_get(struct device *dev,
>   	return NULL;
>   }
>   
> +struct icc_path *of_icc_get_by_index(struct device *dev, int idx)

This should be static inline instead

> +{
> +	return NULL;
> +}
> +
>   static inline void icc_put(struct icc_path *path)
>   {
>   }
> 

-- 
Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc, is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v2 0/5] Introduce OPP bandwidth bindings
  2019-04-23 13:28 [PATCH v2 0/5] Introduce OPP bandwidth bindings Georgi Djakov
                   ` (4 preceding siblings ...)
  2019-04-23 13:28 ` [PATCH v2 5/5] cpufreq: dt: Add support for interconnect bandwidth scaling Georgi Djakov
@ 2019-06-01  2:12 ` Saravana Kannan
  2019-06-03 15:56   ` Jordan Crouse
  5 siblings, 1 reply; 22+ messages in thread
From: Saravana Kannan @ 2019-06-01  2:12 UTC (permalink / raw)
  To: georgi.djakov
  Cc: amit.kucheria, bjorn.andersson, daidavid1, devicetree, evgreen,
	jcrouse, linux-arm-msm, linux-kernel, linux-pm, mark.rutland, nm,
	rjw, robh+dt, sboyd, seansw, sibis, vincent.guittot, vireshk,
	saravanak, kernel-team

I'll have to Nack this series because it's making a couple of wrong assumptions
about bandwidth voting.

Firstly, it's mixing up OPP to bandwidth mapping (Eg: CPU freq to CPU<->DDR
bandwidth mapping) with the bandwidth levels that are actually supported by an
interconnect path (Eg: CPU<->DDR bandwidth levels). For example, CPU0 might
decide to vote for a max of 10 GB/s because it's a little CPU and never needs
anything higher than 10 GB/s even at CPU0's max frequency. But that has no
bearing on bandwidth level available between CPU<->DDR.

There needs to be a separate BW OPP table describing the bandwith levels
available for the CPU<->DDR path and then a separate mapping between CPU OPP to
CPU<->DDR BW OPP. That way, the mapping decision (policy or voltage based
config decision) doesn't affect the description of what the hardware really is
capable of.

Put another way, if someone comes around and decides the CPU0's max freq should
ask for 15 GB/s because some shared voltage rail would already be pushed to a
voltage sufficient to support 15 GB/s, then it shouldn't change the HW
description of what bandwidth levels are available between CPU<->DDR. If the
CPU<->DDR path supports 20 GB/s, it always does independent of the CPU OPP
table mapping.

By splitting out the available bandwidth levels of the CPU<->DDR path into a
separate BW OPP table, we avoid these kinds of issues.

Also, one could easily imagine using a bandwidth counter or some other means of
BW measurement hardware to vote for bandwidth between CPU<->DDR and CPU<->L3.
That entity should be able to know/learn all the available bandwidth levels in
the CPU<->DDR path without forcing bandwidth levels to be listed in CPU OPP
table. And if it's measuring bandwidth at a point common for all CPUs, what CPU
OPP table is it even supposed to look at to learn all the available bandwidth
levels. It just doesn't make sense.

It's also easy to envision having multiple policies or devfreq governors voting
for an interconnect path. The mapping you are talking about in this series is
just an input for one of them (the cpufreq-map governor I sent out a while
ago).

Secondly, when it comes to bandwidth OPP tables, the peak bandwidth should be
the key/first element (similar to how frequency is now). Long explanation
follows.

All the sensible frequency combinations of all the hardware interconnects
between the master and slave port (Eg: GPU -> MMNOC -> BIMC -> DDR) determine
the peak bandwidth levels available in that interconnect path.

If multiple devices (GPU, CPU, etc) vote for different peak bandwidths for an
interconnect (say BIMC), the interconnect provider picks the max peak bandwidth
amongst all the requests and then picks the lowest interconnect frequency that
can support the max peak bandwidth. 

So the devices (GPU, CPU, etc), actually have some control on what interconnect
frequencies are picked by asking for a specific peak bandwidth -- so there's
actually a notion of useful levels.

Average bandwidth is an additive property -- so if CPU and GPU ask for 5 GB/s
and 3 GB/s respectively for an interconnect, the interconnect provider adds
them up and configures the interconnect for 8 GB/s. So if GPU asks for 5 GB/s
average bandwidth, it has no idea what frequency the interconnect will actually
get configured to. So, average bandwidth really doesn't provide a sense of
levels to pick from for a given interconnect path.

So peak bandwidth is a much better pick than average bandwidth for being a key
to the bandwidth OPP table.

So what I think we need is:
* Bandwidth OPP support in the kernel
* Bandwidth OPP DT binding to describe the bandwidth levels available for
  different interconnect paths.
* A new "interconnect-opp" property that can point to different BW OPP
  tables for each of the interconnect paths listed under interconnects
  property.

Then for mapping from device OPP to interconnect path bandwidth OPPs, you
just used the existing required-opps binding to link an entry in GPU OPP to
an entry in GPU<->DDR bandwidth OPP table. That way the hardware is
actually described correctly and the mapping is kept separate.

So, in the end, it'll look something like this in DT.

gpu_cache_opp_table: gpu_cache_opp_table {
	compatible = "operating-points-v2";

	gpu_cache_3000: opp-3000 {
		opp-peak-mbps = <3000>;
		avg-mbps = <1000>;
	};
	gpu_cache_6000: opp-6000 {
		opp-peak-mbps = <6000>;
		avg-mbps = <2000>;
	};
	gpu_cache_9000: opp-9000 {
		opp-peak-mbps = <9000>;
		avg-mbps = <9000>;
	};
};

gpu_ddr_opp_table: gpu_ddr_opp_table {
	compatible = "operating-points-v2";

	gpu_ddr_1525: opp-1525 {
		opp-peak-mbps = <1525>;
		avg-mbps = <452>;
	};
	gpu_ddr_3051: opp-3051 {
		opp-peak-mbps = <3051>;
		avg-mbps = <915>;
	};
	gpu_ddr_7500: opp-7500 {
		opp-peak-mbps = <7500>;
		avg-mbps = <3000>;
	};
};

gpu_opp_table: gpu_opp_table {
	compatible = "operating-points-v2";
	opp-shared;

	opp-200000000 {
		opp-hz = /bits/ 64 <200000000>;
		required-opps = <&gpu_cache_3000>, <&gpu_ddr_1525>;
	};
	opp-400000000 {
		opp-hz = /bits/ 64 <400000000>;
		required-opps = <&gpu_cache_6000>, <&gpu_ddr_3051>;
	};
};

gpu@7864000 {
	...
	operating-points-v2 = <&gpu_opp_table>;
	interconnects = <&mmnoc MASTER_GPU_1 &bimc SLAVE_SYSTEL_CACHE>,
			<&mmnoc MASTER_GPU_1 &bimc SLAVE_DDR>;
	interconnect-names = "gpu-cache", "gpu-mem";
	interconnect-opps = <&gpu_cache_bw_opp>, <&gpu_ddr_bw_opp>
};

It's very clear what the HW supports vs what the mapping chooses to use. It's
also very clearer what the mapping is doing because it actually points to
entries in appropriately names OPP tables. There's no confusion on what mapping
corresponds to what interconnect paths -- which is why this doesn't need a
comment to clarify the intent here whereas in this patch series, the mappings
needed comments on which interconnect they are referring to.

Sorry about the long email and jumping in out of nowhere. The need for
something like this has been in my mind for a long time and my situation has
also changed where I can be more active upstream compared to before.

Thanks and have a nice weekend.

-Saravana

-- 

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v2 0/5] Introduce OPP bandwidth bindings
  2019-06-01  2:12 ` [PATCH v2 0/5] Introduce OPP bandwidth bindings Saravana Kannan
@ 2019-06-03 15:56   ` Jordan Crouse
  2019-06-03 19:12     ` Saravana Kannan
  0 siblings, 1 reply; 22+ messages in thread
From: Jordan Crouse @ 2019-06-03 15:56 UTC (permalink / raw)
  To: Saravana Kannan
  Cc: georgi.djakov, amit.kucheria, bjorn.andersson, daidavid1,
	devicetree, evgreen, linux-arm-msm, linux-kernel, linux-pm,
	mark.rutland, nm, rjw, robh+dt, sboyd, seansw, sibis,
	vincent.guittot, vireshk, kernel-team

On Fri, May 31, 2019 at 07:12:28PM -0700, Saravana Kannan wrote:
> I'll have to Nack this series because it's making a couple of wrong assumptions
> about bandwidth voting.
> 
> Firstly, it's mixing up OPP to bandwidth mapping (Eg: CPU freq to CPU<->DDR
> bandwidth mapping) with the bandwidth levels that are actually supported by an
> interconnect path (Eg: CPU<->DDR bandwidth levels). For example, CPU0 might
> decide to vote for a max of 10 GB/s because it's a little CPU and never needs
> anything higher than 10 GB/s even at CPU0's max frequency. But that has no
> bearing on bandwidth level available between CPU<->DDR.

I'm going to just quote this part of the email to avoid forcing people to
scroll too much.

I agree that there is an enormous universe of new and innovative things that can
be done for bandwidth voting. I would love to have smart governors and expansive
connections between different components that are all aware of each other. I
don't think that anybody is discounting that these things are possible.

But as it stands today, as a leaf driver developer my primary concern is that I
need to vote something for the GPU->DDR path. Right now I'm voting the maximum
because that is the bare minimum we need to get working GPU.

Then the next incremental baby step is to allow us to select a minimum
vote based on a GPU frequency level to allow for some sort of very coarse power
savings. It isn't perfect, but better than cranking everything to 11. This is
why we need the OPP bandwidth bindings to allow us to make the association and
tune down the vote. I fully agree that this isn't the optimal solution but
it is the only knob we have right now.

And after that we should go nuts. I'll gladly put the OPP bindings in the
rear-view mirror and turn over all bandwidth to a governor or two or three.
I'll be happy to have nothing to do with it again. But until then we need
a solution for the leaf drivers that lets us provide some modicum of power
control.

Jordan
-- 
The Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v2 0/5] Introduce OPP bandwidth bindings
  2019-06-03 15:56   ` Jordan Crouse
@ 2019-06-03 19:12     ` Saravana Kannan
  0 siblings, 0 replies; 22+ messages in thread
From: Saravana Kannan @ 2019-06-03 19:12 UTC (permalink / raw)
  To: Saravana Kannan, georgi.djakov, amit.kucheria, Bjorn Andersson,
	daidavid1, devicetree, evgreen, linux-arm-msm, linux-kernel,
	linux-pm, Mark Rutland, nm, rjw, Rob Herring, sboyd, seansw,
	sibis, Vincent Guittot, vireshk, Android Kernel Team

On Mon, Jun 3, 2019 at 8:56 AM Jordan Crouse <jcrouse@codeaurora.org> wrote:
>
> On Fri, May 31, 2019 at 07:12:28PM -0700, Saravana Kannan wrote:
> > I'll have to Nack this series because it's making a couple of wrong assumptions
> > about bandwidth voting.
> >
> > Firstly, it's mixing up OPP to bandwidth mapping (Eg: CPU freq to CPU<->DDR
> > bandwidth mapping) with the bandwidth levels that are actually supported by an
> > interconnect path (Eg: CPU<->DDR bandwidth levels). For example, CPU0 might
> > decide to vote for a max of 10 GB/s because it's a little CPU and never needs
> > anything higher than 10 GB/s even at CPU0's max frequency. But that has no
> > bearing on bandwidth level available between CPU<->DDR.
>
> I'm going to just quote this part of the email to avoid forcing people to
> scroll too much.
>
> I agree that there is an enormous universe of new and innovative things that can
> be done for bandwidth voting. I would love to have smart governors and expansive
> connections between different components that are all aware of each other. I
> don't think that anybody is discounting that these things are possible.
>
> But as it stands today, as a leaf driver developer my primary concern is that I
> need to vote something for the GPU->DDR path. Right now I'm voting the maximum
> because that is the bare minimum we need to get working GPU.
>
> Then the next incremental baby step is to allow us to select a minimum
> vote based on a GPU frequency level to allow for some sort of very coarse power
> savings. It isn't perfect, but better than cranking everything to 11.

I completely agree. I'm not saying you shouldn't do bandwidth voting
based on device frequency. In some cases, it's actually the right
thing to do too.

> This is
> why we need the OPP bandwidth bindings to allow us to make the association and
> tune down the vote.

Again, I'm perfectly fine with this too.

> I fully agree that this isn't the optimal solution but
> it is the only knob we have right now.
> And after that we should go nuts. I'll gladly put the OPP bindings in the
> rear-view mirror and turn over all bandwidth to a governor or two or three.

This is the problem part in the series. Once a property is exposed in
DT, we can't just take it back. A new kernel needs to continue
supporting old compiled DT binaries. So if we know we'll have to
change a DT property in the future to be "more correct", then we
should just do that one instead of "for now" bindings.

And I even proposed what the new bindings should look like and why we
should do it that way.

I'll try to get some patches out for that in the near future. But
doesn't have to be just from me. I'm just pointing out why the current
bindings aren't good/scalable.

> I'll be happy to have nothing to do with it again. But until then we need
> a solution for the leaf drivers that lets us provide some modicum of power
> control.

Agreed.

-Saravana

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v2 2/5] interconnect: Add of_icc_get_by_index() helper function
  2019-05-07 11:59   ` Sibi Sankar
@ 2019-06-27  5:56     ` Sibi Sankar
  0 siblings, 0 replies; 22+ messages in thread
From: Sibi Sankar @ 2019-06-27  5:56 UTC (permalink / raw)
  To: Georgi Djakov, vireshk, sboyd, nm, robh+dt, mark.rutland, rjw
  Cc: jcrouse, vincent.guittot, bjorn.andersson, amit.kucheria, seansw,
	daidavid1, evgreen, linux-pm, devicetree, linux-kernel,
	linux-arm-msm, linux-kernel-owner

Hey Georgi,

I heard there is a follow up discussion
planned to finalize on the which approach
to follow. If we do end up with your series,
I found some fixes that you might want to
use when you re-post.

On 2019-05-07 17:29, Sibi Sankar wrote:
> Hey Georgi,
> 
> On 4/23/19 6:58 PM, Georgi Djakov wrote:
>> This is the same as the traditional of_icc_get() function, but the
>> difference is that it takes index as an argument, instead of name.
>> 
>> Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
>> ---
>>   drivers/interconnect/core.c  | 45 
>> ++++++++++++++++++++++++++++--------
>>   include/linux/interconnect.h |  6 +++++
>>   2 files changed, 41 insertions(+), 10 deletions(-)
>> 
>> diff --git a/drivers/interconnect/core.c b/drivers/interconnect/core.c
>> index 871eb4bc4efc..a7c3c262c974 100644
>> --- a/drivers/interconnect/core.c
>> +++ b/drivers/interconnect/core.c
>> @@ -295,9 +295,9 @@ static struct icc_node 
>> *of_icc_get_from_provider(struct of_phandle_args *spec)
>>   }
>>     /**
>> - * of_icc_get() - get a path handle from a DT node based on name
>> + * of_icc_get_by_index() - get a path handle from a DT node based on 
>> index
>>    * @dev: device pointer for the consumer device
>> - * @name: interconnect path name
>> + * @idx: interconnect path index
>>    *
>>    * This function will search for a path between two endpoints and 
>> return an
>>    * icc_path handle on success. Use icc_put() to release constraints 
>> when they
>> @@ -309,13 +309,12 @@ static struct icc_node 
>> *of_icc_get_from_provider(struct of_phandle_args *spec)
>>    * Return: icc_path pointer on success or ERR_PTR() on error. NULL 
>> is returned
>>    * when the API is disabled or the "interconnects" DT property is 
>> missing.
>>    */
>> -struct icc_path *of_icc_get(struct device *dev, const char *name)
>> +struct icc_path *of_icc_get_by_index(struct device *dev, int idx)
>>   {
>>   	struct icc_path *path = ERR_PTR(-EPROBE_DEFER);
>>   	struct icc_node *src_node, *dst_node;
>>   	struct device_node *np = NULL;
>>   	struct of_phandle_args src_args, dst_args;
>> -	int idx = 0;
>>   	int ret;
>>     	if (!dev || !dev->of_node)
>> @@ -335,12 +334,6 @@ struct icc_path *of_icc_get(struct device *dev, 
>> const char *name)
>>   	 * lets support only global ids and extend this in the future if 
>> needed
>>   	 * without breaking DT compatibility.
>>   	 */
>> -	if (name) {
>> -		idx = of_property_match_string(np, "interconnect-names", name);
>> -		if (idx < 0)
>> -			return ERR_PTR(idx);
>> -	}
>> -
>>   	ret = of_parse_phandle_with_args(np, "interconnects",
>>   					 "#interconnect-cells", idx * 2,
>>   					 &src_args);
>> @@ -383,6 +376,38 @@ struct icc_path *of_icc_get(struct device *dev, 
>> const char *name)
>>     	return path;
>>   }
>> +
>> +/**
>> + * of_icc_get() - get a path handle from a DT node based on name
>> + * @dev: device pointer for the consumer device
>> + * @name: interconnect path name
>> + *
>> + * This function will search for a path between two endpoints and 
>> return an
>> + * icc_path handle on success. Use icc_put() to release constraints 
>> when they
>> + * are not needed anymore.
>> + * If the interconnect API is disabled, NULL is returned and the 
>> consumer
>> + * drivers will still build. Drivers are free to handle this 
>> specifically,
>> + * but they don't have to.
>> + *
>> + * Return: icc_path pointer on success or ERR_PTR() on error. NULL is 
>> returned
>> + * when the API is disabled or the "interconnects" DT property is 
>> missing.
>> + */

please change the description since it does not
return NULL when the property is missing.

>> +struct icc_path *of_icc_get(struct device *dev, const char *name)
>> +{
>> +	int idx = 0;
>> +
>> +	if (!dev || !dev->of_node)
>> +		return ERR_PTR(-ENODEV);
>> +
>> +	if (name) {
>> +		idx = of_property_match_string(dev->of_node,
>> +					       "interconnect-names", name);
>> +		if (idx < 0)
>> +			return ERR_PTR(idx);
>> +	}
>> +
>> +	return of_icc_get_by_index(dev, idx);
>> +}
>>   EXPORT_SYMBOL_GPL(of_icc_get);
>>     /**
>> diff --git a/include/linux/interconnect.h 
>> b/include/linux/interconnect.h
>> index dc25864755ba..0e430b3b6519 100644
>> --- a/include/linux/interconnect.h
>> +++ b/include/linux/interconnect.h
>> @@ -28,6 +28,7 @@ struct device;
>>   struct icc_path *icc_get(struct device *dev, const int src_id,
>>   			 const int dst_id);
>>   struct icc_path *of_icc_get(struct device *dev, const char *name);
>> +struct icc_path *of_icc_get_by_index(struct device *dev, int idx);
>>   void icc_put(struct icc_path *path);
>>   int icc_set_bw(struct icc_path *path, u32 avg_bw, u32 peak_bw);
>>   @@ -45,6 +46,11 @@ static inline struct icc_path *of_icc_get(struct 
>> device *dev,
>>   	return NULL;
>>   }
>>   +struct icc_path *of_icc_get_by_index(struct device *dev, int idx)
> 
> This should be static inline instead
> 
>> +{
>> +	return NULL;
>> +}
>> +
>>   static inline void icc_put(struct icc_path *path)
>>   {
>>   }
>> 

-- 
-- Sibi Sankar --
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project.

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v2 3/5] OPP: Add support for parsing the interconnect bandwidth
  2019-04-24  5:52   ` Viresh Kumar
@ 2019-06-27  6:27     ` Sibi Sankar
  0 siblings, 0 replies; 22+ messages in thread
From: Sibi Sankar @ 2019-06-27  6:27 UTC (permalink / raw)
  To: Viresh Kumar, Georgi Djakov
  Cc: vireshk, sboyd, nm, robh+dt, mark.rutland, rjw, jcrouse,
	vincent.guittot, bjorn.andersson, amit.kucheria, seansw,
	daidavid1, evgreen, linux-pm, devicetree, linux-kernel,
	linux-arm-msm

Hey Georgi,

In addition to Viresh's comments I found a few more while testing the
series on SDM845.

On 4/24/19 11:22 AM, Viresh Kumar wrote:
> On 23-04-19, 16:28, Georgi Djakov wrote:
>> The OPP bindings now support bandwidth values, so add support to parse it
>> from device tree and store it into the new dev_pm_opp_icc_bw struct, which
>> is part of the dev_pm_opp.
>>
>> Also add and export the dev_pm_opp_set_paths() and dev_pm_opp_put_paths()
>> helpers, to set (and release) an interconnect paths to a device. The
>> bandwidth of these paths will be updated when the OPPs are switched.
>>
>> Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
>> ---
>>   drivers/opp/core.c     |  87 ++++++++++++++++++++++++++++++++++-
>>   drivers/opp/of.c       | 102 +++++++++++++++++++++++++++++++++++++++++
>>   drivers/opp/opp.h      |   9 ++++
>>   include/linux/pm_opp.h |  14 ++++++
>>   4 files changed, 210 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/opp/core.c b/drivers/opp/core.c
>> index 0420f7e8ad5b..97ee39ecdebd 100644
>> --- a/drivers/opp/core.c
>> +++ b/drivers/opp/core.c
>> @@ -19,6 +19,7 @@
>>   #include <linux/slab.h>
>>   #include <linux/device.h>
>>   #include <linux/export.h>
>> +#include <linux/interconnect.h>
> 
> Just include this once in opp.h and the other .c files won't need it.
> 
>>   #include <linux/pm_domain.h>
>>   #include <linux/regulator/consumer.h>
>>   
>> @@ -876,6 +877,8 @@ static struct opp_table *_allocate_opp_table(struct device *dev, int index)
>>   				ret);
>>   	}
>>   
>> +	_of_find_paths(opp_table, dev);
>> +
>>   	BLOCKING_INIT_NOTIFIER_HEAD(&opp_table->head);
>>   	INIT_LIST_HEAD(&opp_table->opp_list);
>>   	kref_init(&opp_table->kref);
>> @@ -1129,11 +1132,12 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_remove_all_dynamic);
>>   struct dev_pm_opp *_opp_allocate(struct opp_table *table)
>>   {
>>   	struct dev_pm_opp *opp;
>> -	int count, supply_size;
>> +	int count, supply_size, icc_size;
>>   
>>   	/* Allocate space for at least one supply */
>>   	count = table->regulator_count > 0 ? table->regulator_count : 1;
>>   	supply_size = sizeof(*opp->supplies) * count;
>> +	icc_size = sizeof(*opp->bandwidth) * table->path_count;
>>   
>>   	/* allocate new OPP node and supplies structures */
>>   	opp = kzalloc(sizeof(*opp) + supply_size, GFP_KERNEL);
> 
> You never updated this to include icc_size :(
> 
>> @@ -1141,7 +1145,8 @@ struct dev_pm_opp *_opp_allocate(struct opp_table *table)
>>   		return NULL;
>>   
>>   	/* Put the supplies at the end of the OPP structure as an empty array */
>> -	opp->supplies = (struct dev_pm_opp_supply *)(opp + 1);
>> +	opp->bandwidth = (struct dev_pm_opp_icc_bw *)(opp + 1);
> 
> Keep the order as supplies and then bandwidth.
> 
>> +	opp->supplies = (struct dev_pm_opp_supply *)(opp + icc_size + 1);

opp->supplies = (struct dev_pm_opp_supply *)(opp + 1);
opp->bandwidth = (struct dev_pm_opp_icc_bw *)(opp->supplies + 1);

> 
> Did you check what address gets assigned here ? I think the pointer
> addition will screw things up for you.
> 
>>   	INIT_LIST_HEAD(&opp->node);
>>   
>>   	return opp;
>> @@ -1637,6 +1642,84 @@ void dev_pm_opp_put_clkname(struct opp_table *opp_table)
>>   }
>>   EXPORT_SYMBOL_GPL(dev_pm_opp_put_clkname);
>>   
>> +/**
>> + * dev_pm_opp_set_paths() - Set interconnect path for a device
>> + * @dev: Device for which interconnect path is being set.
>> + *
>> + * This must be called before any OPPs are initialized for the device.
>> + */
>> +struct opp_table *dev_pm_opp_set_paths(struct device *dev)
> 
> I got a bit confused. Why is this routine required exactly as
> _of_find_paths() would have already done something similar ?
> 
>> +{
>> +	struct opp_table *opp_table;
>> +	int ret, i;
>> +
>> +	opp_table = dev_pm_opp_get_opp_table(dev);
>> +	if (!opp_table)
>> +		return ERR_PTR(-ENOMEM);
>> +
>> +	/* This should be called before OPPs are initialized */
>> +	if (WARN_ON(!list_empty(&opp_table->opp_list))) {
>> +		ret = -EBUSY;
>> +		goto err;
>> +	}
>> +
>> +	/* Another CPU that shares the OPP table has set the path */
>> +	if (opp_table->paths)
>> +		return opp_table;
>> + >> +	opp_table->paths = kmalloc_array(opp_table->path_count,

of_find_paths might have failed so you would want to
re-calculate opp_table->path_count.

>> +					 sizeof(*opp_table->paths), GFP_KERNEL);
>> +
>> +	/* Find interconnect path(s) for the device */
>> +	for (i = 0; i < opp_table->path_count; i++) {
>> +		opp_table->paths[i] = of_icc_get_by_index(dev, i);
>> +		if (IS_ERR(opp_table->paths[i])) {
>> +			ret = PTR_ERR(opp_table->paths[i]);
>> +			if (ret != -EPROBE_DEFER)
>> +				dev_err(dev, "%s: Couldn't find path%d: %d\n",
>> +					__func__, i, ret);

we should clean up by call icc_put on the paths that succeeded
and free/set the opp_table->paths to NULL.

>> +			goto err;
>> +		}
>> +	}
>> +
>> +	return opp_table;
>> +
>> +err:
>> +	dev_pm_opp_put_opp_table(opp_table);
>> +
>> +	return ERR_PTR(ret);
>> +}
>> +EXPORT_SYMBOL_GPL(dev_pm_opp_set_paths);
>> +
>> +/**
>> + * dev_pm_opp_put_paths() - Release interconnect path resources
>> + * @opp_table: OPP table returned from dev_pm_opp_set_paths().
>> + */
>> +void dev_pm_opp_put_paths(struct opp_table *opp_table)
>> +{
>> +	int i;
>> +
>> +	if (!opp_table->paths) {
>> +		pr_err("%s: Doesn't have paths set\n", __func__);
>> +		return;
>> +	}
>> +
>> +	/* Make sure there are no concurrent readers while updating opp_table */
>> +	WARN_ON(!list_empty(&opp_table->opp_list));
>> +
>> +	for (i = opp_table->path_count - 1; i >= 0; i--)
>> +		icc_put(opp_table->paths[i]);
>> +
>> +	_free_set_opp_data(opp_table);
>> +
>> +	kfree(opp_table->paths);
>> +	opp_table->paths = NULL;
>> +	opp_table->path_count = 0;
>> +
>> +	dev_pm_opp_put_opp_table(opp_table);
>> +}
>> +EXPORT_SYMBOL_GPL(dev_pm_opp_put_paths);
>> +
>>   /**
>>    * dev_pm_opp_register_set_opp_helper() - Register custom set OPP helper
>>    * @dev: Device for which the helper is getting registered.
>> diff --git a/drivers/opp/of.c b/drivers/opp/of.c
>> index c10c782d15aa..00af23280bc6 100644
>> --- a/drivers/opp/of.c
>> +++ b/drivers/opp/of.c
>> @@ -16,6 +16,7 @@
>>   #include <linux/cpu.h>
>>   #include <linux/errno.h>
>>   #include <linux/device.h>
>> +#include <linux/interconnect.h>
>>   #include <linux/of_device.h>
>>   #include <linux/pm_domain.h>
>>   #include <linux/slab.h>
>> @@ -363,6 +364,45 @@ static int _of_opp_alloc_required_opps(struct opp_table *opp_table,
>>   	return ret;
>>   }
>>   
>> +int _of_find_paths(struct opp_table *opp_table, struct device *dev)
>> +{
>> +	struct device_node *np;
>> +	int ret, i, count, num_paths;
>> +
>> +	np = of_node_get(dev->of_node);
>> +	if (np) {
> 
> I would rather do:
> 
>          if (!np)
>                  return 0;
> 
> That will kill unnecessary line breaks and indentation.
> 
>> +		count = of_count_phandle_with_args(np, "interconnects",
>> +						   "#interconnect-cells");

count will be an error code if interconnect property is missing
so we need to account for that as well.

> 
> You can do of_node_put() right here as it isn't used afterwards.
> 
>> +		if (count % 2) {
> 
> Shouldn't this be 4 instead of 2 ? Each path is represented as:
> 
> <&noc MASTER_CPU &noc SLAVE_DDR>
> 
> which has 4 fields.
> 
>> +			dev_err(dev, "%s: Invalid interconnects values\n",
>> +				__func__);
>> +			ret = -EINVAL;
>> +			goto put_of_node;
>> +		}
>> +
>> +		num_paths = count / 2;
>> +		opp_table->paths = kcalloc(num_paths, sizeof(*opp_table->paths),
>> +					   GFP_KERNEL);
>> +		if (!opp_table->paths) {
>> +			ret = -ENOMEM;
>> +			goto put_of_node;
>> +		}
>> +
>> +		for (i = 0; i < num_paths; i++)
>> +			opp_table->paths[i] = of_icc_get_by_index(dev, i);

we should clean up by call icc_put on the paths that succeeded
and free/set the opp_table->paths to NULL.

>> +
>> +		opp_table->path_count = num_paths;
>> +		of_node_put(np);
>> +	}
>> +
>> +	return 0;
>> +
>> +put_of_node:
>> +	of_node_put(np);
>> +
>> +	return ret;
>> +}
>> +
>>   static bool _opp_is_supported(struct device *dev, struct opp_table *opp_table,
>>   			      struct device_node *np)
>>   {
>> @@ -539,6 +579,64 @@ static int opp_parse_supplies(struct dev_pm_opp *opp, struct device *dev,
>>   	return ret;
>>   }
>>   
>> +static int opp_parse_icc_bw(struct dev_pm_opp *opp, struct device *dev,
>> +			    struct opp_table *opp_table)
>> +{
>> +	struct property *prop = NULL;
>> +	u32 *bandwidth;
>> +	char name[] = "bandwidth-MBps";
>> +	int count, i, j, ret;
>> +
>> +	/* Search for "bandwidth-MBps" */
>> +	prop = of_find_property(opp->np, name, NULL);
>> +
>> +	/* Missing property is not a problem */
>> +	if (!prop) {
>> +		dev_dbg(dev, "%s: Missing %s property\n", __func__, name);
>> +		return 0;
>> +	}
>> +
>> +	if (!prop->value) {
>> +		dev_dbg(dev, "%s: Missing %s value\n", __func__, name);
>> +		return -ENODATA;
>> +	}
>> +
>> +	/*
>> +	 * Bandwidth consists of average and peak values like:
>> +	 * bandwidth-MBps = <avg-MBps peak-MBps>
>> +	 */
>> +	count = prop->length / sizeof(u32);
>> +	if (count % 2) {
>> +		dev_err(dev, "%s: Invalid %s values\n", __func__, name);
>> +		return -EINVAL;
>> +	}
>> +
>> +	if (opp_table->path_count != count / 2) {
>> +		dev_err(dev, "%s Mismatch between values and paths (%d %d)\n",
>> +			__func__, opp_table->path_count, count / 2);
>> +		return -EINVAL;
>> +	}
>> +
>> +	bandwidth = kmalloc_array(count, sizeof(*bandwidth), GFP_KERNEL);
>> +	if (!bandwidth)
>> +		return -ENOMEM;
>> +
>> +	ret = of_property_read_u32_array(opp->np, name, bandwidth, count);
>> +	if (ret) {
>> +		dev_err(dev, "%s: Error parsing %s: %d\n", __func__, name, ret);
>> +		goto free_bandwidth;
>> +	}
>> +	for (i = 0, j = 0; i < count; i++) {
>> +		opp->bandwidth[i].avg = MBps_to_icc(bandwidth[j++]);
>> +		opp->bandwidth[i].peak = MBps_to_icc(bandwidth[j++]);
>> +	}
>> +
>> +free_bandwidth:
>> +	kfree(bandwidth);
>> +
>> +	return ret;
>> +}
>> +
>>   /**
>>    * dev_pm_opp_of_remove_table() - Free OPP table entries created from static DT
>>    *				  entries
>> @@ -635,6 +733,10 @@ static struct dev_pm_opp *_opp_add_static_v2(struct opp_table *opp_table,
>>   	if (opp_table->is_genpd)
>>   		new_opp->pstate = pm_genpd_opp_to_performance_state(dev, new_opp);
>>   
>> +	ret = opp_parse_icc_bw(new_opp, dev, opp_table);
>> +	if (ret)
>> +		goto free_opp;
>> +
>>   	ret = _opp_add(dev, new_opp, opp_table, rate_not_available);
>>   	if (ret) {
>>   		/* Don't return error for duplicate OPPs */
>> diff --git a/drivers/opp/opp.h b/drivers/opp/opp.h
>> index 569b3525aa67..70a537f2dbd3 100644
>> --- a/drivers/opp/opp.h
>> +++ b/drivers/opp/opp.h
>> @@ -24,6 +24,7 @@
>>   
>>   struct clk;
>>   struct regulator;
>> +struct icc_path;
>>   
>>   /* Lock to allow exclusive modification to the device and opp lists */
>>   extern struct mutex opp_table_lock;
>> @@ -62,6 +63,7 @@ extern struct list_head opp_tables;
>>    * @rate:	Frequency in hertz
>>    * @level:	Performance level
>>    * @supplies:	Power supplies voltage/current values
>> + * @bandwidth:	Interconnect bandwidth values
>>    * @clock_latency_ns: Latency (in nanoseconds) of switching to this OPP's
>>    *		frequency from any other OPP's frequency.
>>    * @required_opps: List of OPPs that are required by this OPP.
>> @@ -84,6 +86,7 @@ struct dev_pm_opp {
>>   	unsigned int level;
>>   
>>   	struct dev_pm_opp_supply *supplies;
>> +	struct dev_pm_opp_icc_bw *bandwidth;
>>   
>>   	unsigned long clock_latency_ns;
>>   
>> @@ -150,6 +153,8 @@ enum opp_table_access {
>>    * @regulator_count: Number of power supply regulators. Its value can be -1
>>    * (uninitialized), 0 (no opp-microvolt property) or > 0 (has opp-microvolt
>>    * property).
>> + * @paths: Interconnect path handles
>> + * @path_count: Number of interconnect paths
>>    * @genpd_performance_state: Device's power domain support performance state.
>>    * @is_genpd: Marks if the OPP table belongs to a genpd.
>>    * @set_opp: Platform specific set_opp callback
>> @@ -194,6 +199,8 @@ struct opp_table {
>>   	struct clk *clk;
>>   	struct regulator **regulators;
>>   	int regulator_count;
>> +	struct icc_path **paths;
>> +	unsigned int path_count;
>>   	bool genpd_performance_state;
>>   	bool is_genpd;
>>   
>> @@ -228,12 +235,14 @@ void _of_clear_opp_table(struct opp_table *opp_table);
>>   struct opp_table *_managed_opp(struct device *dev, int index);
>>   void _of_opp_free_required_opps(struct opp_table *opp_table,
>>   				struct dev_pm_opp *opp);
>> +int _of_find_paths(struct opp_table *opp_table, struct device *dev);
>>   #else
>>   static inline void _of_init_opp_table(struct opp_table *opp_table, struct device *dev, int index) {}
>>   static inline void _of_clear_opp_table(struct opp_table *opp_table) {}
>>   static inline struct opp_table *_managed_opp(struct device *dev, int index) { return NULL; }
>>   static inline void _of_opp_free_required_opps(struct opp_table *opp_table,
>>   					      struct dev_pm_opp *opp) {}
>> +static inline int _of_find_paths(struct opp_table *opp_table, struct device *dev) { return 0; }
>>   #endif
>>   
>>   #ifdef CONFIG_DEBUG_FS
>> diff --git a/include/linux/pm_opp.h b/include/linux/pm_opp.h
>> index 24c757a32a7b..dabee09a92b8 100644
>> --- a/include/linux/pm_opp.h
>> +++ b/include/linux/pm_opp.h
>> @@ -43,6 +43,18 @@ struct dev_pm_opp_supply {
>>   	unsigned long u_amp;
>>   };
>>   
>> +/**
>> + * struct dev_pm_opp_icc_bw - Interconnect bandwidth values
>> + * @avg:	Average bandwidth corresponding to this OPP (in icc units)
>> + * @peak:	Peak bandwidth corresponding to this OPP (in icc units)
>> + *
>> + * This structure stores the bandwidth values for a single interconnect path.
>> + */
>> +struct dev_pm_opp_icc_bw {
>> +	u32 avg;
>> +	u32 peak;
>> +};
>> +
>>   /**
>>    * struct dev_pm_opp_info - OPP freq/voltage/current values
>>    * @rate:	Target clk rate in hz
>> @@ -127,6 +139,8 @@ struct opp_table *dev_pm_opp_set_regulators(struct device *dev, const char * con
>>   void dev_pm_opp_put_regulators(struct opp_table *opp_table);
>>   struct opp_table *dev_pm_opp_set_clkname(struct device *dev, const char * name);
>>   void dev_pm_opp_put_clkname(struct opp_table *opp_table);
>> +struct opp_table *dev_pm_opp_set_paths(struct device *dev);
>> +void dev_pm_opp_put_paths(struct opp_table *opp_table);
>>   struct opp_table *dev_pm_opp_register_set_opp_helper(struct device *dev, int (*set_opp)(struct dev_pm_set_opp_data *data));
>>   void dev_pm_opp_unregister_set_opp_helper(struct opp_table *opp_table);
>>   struct opp_table *dev_pm_opp_set_genpd_virt_dev(struct device *dev, struct device *virt_dev, int index);
> 

-- 
Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc, is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 22+ messages in thread

end of thread, other threads:[~2019-06-27  6:27 UTC | newest]

Thread overview: 22+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-04-23 13:28 [PATCH v2 0/5] Introduce OPP bandwidth bindings Georgi Djakov
2019-04-23 13:28 ` [PATCH v2 1/5] dt-bindings: opp: Introduce bandwidth-MBps bindings Georgi Djakov
2019-04-24  5:33   ` Viresh Kumar
2019-04-24  6:46   ` Rajendra Nayak
2019-04-24  6:49     ` Viresh Kumar
2019-04-24  9:00       ` Sibi Sankar
2019-04-24  9:05         ` Viresh Kumar
2019-04-25  4:24         ` Bjorn Andersson
2019-04-24  8:44   ` Sibi Sankar
2019-04-23 13:28 ` [PATCH v2 2/5] interconnect: Add of_icc_get_by_index() helper function Georgi Djakov
2019-05-07 11:59   ` Sibi Sankar
2019-06-27  5:56     ` Sibi Sankar
2019-04-23 13:28 ` [PATCH v2 3/5] OPP: Add support for parsing the interconnect bandwidth Georgi Djakov
2019-04-24  5:52   ` Viresh Kumar
2019-06-27  6:27     ` Sibi Sankar
2019-04-23 13:28 ` [PATCH v2 4/5] OPP: Update the bandwidth on OPP frequency changes Georgi Djakov
2019-04-24  5:55   ` Viresh Kumar
2019-04-24 10:05   ` Sibi Sankar
2019-04-23 13:28 ` [PATCH v2 5/5] cpufreq: dt: Add support for interconnect bandwidth scaling Georgi Djakov
2019-06-01  2:12 ` [PATCH v2 0/5] Introduce OPP bandwidth bindings Saravana Kannan
2019-06-03 15:56   ` Jordan Crouse
2019-06-03 19:12     ` Saravana Kannan

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).