* [PATCH v10 0/8] Introduce on-chip interconnect API
@ 2018-11-27 18:03 Georgi Djakov
  2018-11-27 18:03 ` [PATCH v10 1/7] interconnect: Add generic " Georgi Djakov
                   ` (7 more replies)
  0 siblings, 8 replies; 34+ messages in thread
From: Georgi Djakov @ 2018-11-27 18:03 UTC (permalink / raw)
  To: linux-pm
  Cc: gregkh, rjw, robh+dt, mturquette, khilman, vincent.guittot,
	skannan, bjorn.andersson, amit.kucheria, seansw, daidavid1,
	evgreen, mark.rutland, lorenzo.pieralisi, abailon, maxime.ripard,
	arnd, thierry.reding, ksitaraman, sanjayc, devicetree,
	linux-kernel, linux-arm-kernel, linux-arm-msm, linux-tegra,
	georgi.djakov

Modern SoCs have multiple processors and various dedicated cores (video, gpu,
graphics, modem). These cores communicate with each other and can generate a
lot of data flowing through the on-chip interconnects. These interconnect
buses can form different topologies such as crossbars, point-to-point buses,
hierarchical buses, or use the network-on-chip concept.

These buses are usually sized to handle use cases with high data throughput,
but such throughput is not needed all the time, and the buses then consume a
lot of power. Furthermore, the priority between masters can vary depending on
the running use case, such as video playback or CPU-intensive tasks.

Having an API to express the bandwidth and QoS requirements of the system
lets us adapt the interconnect configuration to match them by scaling the
frequencies, setting link priorities and tuning QoS parameters. This
configuration can be a static, one-time operation done at boot for some
platforms or a dynamic set of operations that happen at run-time.

This patchset introduces a new API to gather the requirements and configure
the interconnect buses across the entire chipset to fit the current demand.
The API is NOT for changing the performance of the endpoint devices, but only
of the interconnect path in between them.

The API uses a consumer/provider-based model, where the providers are the
interconnect buses and the consumers could be various drivers.
The consumers request interconnect resources (a path) between endpoints and
set the desired constraints on this data flow path. The provider(s) receive
requests from consumers and aggregate these requests for all master-slave
pairs on that path. Then the providers configure each node participating in
the topology according to the requested data flow path, physical links and
constraints. The topology could be complicated and multi-tiered and is SoC
specific.
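
To illustrate the consumer side of the API, below is a minimal sketch (dev is
the consumer's struct device; the endpoint ids and bandwidth values are made
up and only show the expected call flow):

	struct icc_path *path;
	int ret;

	/* request a path between two (hypothetical) endpoint port ids */
	path = icc_get(dev, MASTER_FOO, SLAVE_DDR);
	if (IS_ERR(path))
		return PTR_ERR(path);

	/* express this consumer's bandwidth needs on the whole path */
	ret = icc_set(path, MBps_to_icc(100), MBps_to_icc(200));
	if (ret) {
		icc_put(path);
		return ret;
	}

	/* ... */

	/* drop the constraints when they are no longer needed */
	icc_put(path);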

Below is a simplified diagram of a real-world SoC topology. The interconnect
providers are the NoCs.

+----------------+    +----------------+
| HW Accelerator |--->|      M NoC     |<---------------+
+----------------+    +----------------+                |
                        |      |                    +------------+
 +-----+  +-------------+      V       +------+     |            |
 | DDR |  |                +--------+  | PCIe |     |            |
 +-----+  |                | Slaves |  +------+     |            |
   ^ ^    |                +--------+     |         |   C NoC    |
   | |    V                               V         |            |
+------------------+   +------------------------+   |            |   +-----+
|                  |-->|                        |-->|            |-->| CPU |
|                  |-->|                        |<--|            |   +-----+
|     Mem NoC      |   |         S NoC          |   +------------+
|                  |<--|                        |---------+    |
|                  |<--|                        |<------+ |    |   +--------+
+------------------+   +------------------------+       | |    +-->| Slaves |
  ^  ^    ^    ^          ^                             | |        +--------+
  |  |    |    |          |                             | V
+------+  |  +-----+   +-----+  +---------+   +----------------+   +--------+
| CPUs |  |  | GPU |   | DSP |  | Masters |-->|       P NoC    |-->| Slaves |
+------+  |  +-----+   +-----+  +---------+   +----------------+   +--------+
          |
      +-------+
      | Modem |
      +-------+

TODO:
* Create icc_set_extended() to handle parameters such as latency and other
  QoS values. Nvidia and Qualcomm have expressed interest in this.
* Cache the path between the nodes instead of walking the graph on each get().
* Sync interconnect requests with the idle state of the device.

Changes since patchset v9 (https://lkml.org/lkml/2018/8/31/444)
* Converted from using global node identifiers to local per provider ids.
* Dropped msm8916 platform driver until we figure out DT bindings.
* Included sdm845 platform driver instead.
* Added macros for converting mbps, gbps, etc. to icc units.
* Added comments about aggregation, other minor changes.
* Fixed uninitialized variable. (Gustavo A. R. Silva)
* Removed set but not used variable. (YueHaibing)
* Fixed build error without DEBUGFS. (Arnd Bergmann)

Changes since patchset v8 (https://lkml.org/lkml/2018/8/10/387)
* Fixed the names of the files when built as modules.
* Corrected some typos in comments.

Changes since patchset v7 (https://lkml.org/lkml/2018/7/31/647)
* Addressed comments on kernel-doc and grammar. (Randy)
* Picked up Reviewed-by tag from Evan.
* Squashed consumer and provider DT bindings into single patch. (Rob)
* Cleaned-up msm8916 DT bindings docs by removing unused port ids.
* Updated documentation for the cases when NULL is returned. (Saravana)
* New patch to add myself as maintainer.

Changes since patchset v6 (https://lkml.org/lkml/2018/7/9/698)
* [patches 1,6]: Move the aggregation within the provider from the framework to
  the platform driver's set() callback, as the aggregation point could be SoC
  specific.
* [patch 1]: Include missing header, reset state only of the traversed nodes,
  move more code into path_init(), add more asserts, move misplaced mutex,
  simplify icc_link_destroy() (Evan)
* [patch 1]: Fix the order of requests to go from source to destination. (Alex)
* [patch 7]: Use better wording in the documentation. (Evan)
* [patch 6]: Reorder struct members, sort nodes alphabetically, improve naming
  of variables, add missing clk_disable_unprepare() in error paths. (Matthias)
* [patch 6]: Remove redundant NULL pointer check in msm8916 driver. (Alex)
* [patch 6]: Add missing depend on QCOM_SMD_RPM in Kconfig. (Evan)
* [patch 3]: Don't check for errors on debugfs calls, remove debugfs directory
  when module is unloaded (Greg)

Changes since patchset v5 (https://lkml.org/lkml/2018/6/20/453)
* Fix the modular build, make rpm-smd driver a module.
* Optimize locking and move to higher level. (Evan)
* Code cleanups. Fix typos. (Evan, Matthias)
* Add the source node to the path. (Evan)
* Rename path_allocate() to path_init() with minor refactoring. (Evan)
* Rename *_remove() functions to *_destroy().
* Return fixed errors in icc_link_destroy(). (Evan)
* Fix krealloc() usage in icc_link_destroy(). (Evan)
* Add missing kfree() in icc_node_create(). (Matthias)
* Make icc_node_add() return void. (Matthias)
* Change mutex_init to mutex_lock in icc_provider_add(). (Matthias)
* Add new icc_node_del() function to delete nodes from provider.
* Fix the header guard to reflect the path in smd-rpm.h. (Evan)
* Check for errors returned by qcom_icc_rpm_smd_send(). (Evan)
* Propagate the error of icc_provider_del(). (Evan)

Changes since patchset v4 (https://lkml.org/lkml/2018/3/9/856)
* Simplified locking by using a single global mutex. (Evan)
* Changed the aggregation function interface.
* Implemented functions for node, link, provider removal. (Evan)
* Naming changes on variables and functions, removed redundant code. (Evan)
* Fixes and clarifications in the docs. (Matthias, Evan, Amit, Alexandre)
* Removed mandatory reg DT property, made interconnect-names optional. (Bjorn)
* Made interconnect-cells property required to align with other bindings. (Neil)
* Moved msm8916 specific bindings into a separate file and patch. (Bjorn)
* Use the names, instead of the hardcoded ids for topology. (Matthias)
* Init the node before creating the links. (Evan)
* Added icc_units_to_bps macro. (Amit)

Changes since patchset v3 (https://lkml.org/lkml/2017/9/8/544)
* Refactored the constraints aggregation.
* Use the IDR API.
* Split the provider and consumer bindings into separate patches and propose
  new bindings for consumers, which allow specifying the local source port.
* Adopted the icc_ prefix for API functions.
* Introduced separate API functions for creating interconnect nodes and links.
* Added DT lookup support in addition to platform data.
* Dropped the event tracing patch for now.
* Added a patch to provide summary via debugfs.
* Use macro for the list of topology definitions in the platform driver.
* Various minor changes.

Changes since patchset v2 (https://lkml.org/lkml/2017/7/20/825)
* Split the aggregation into per node and per provider. Cache the
  aggregated values.
* Various small refactorings and cleanups in the framework.
* Added a patch introducing basic tracepoint support for monitoring
  the time required to update the interconnect nodes.

Changes since patchset v1 (https://lkml.org/lkml/2017/6/27/890)
* Updates in the documentation.
* Changes in request aggregation, locking.
* Dropped the aggregate() callback and used the default, as it is currently
  sufficient for the single vendor driver. Will add it back later when needed.
* Dropped the dt-bindings draft patch for now.

Changes since RFC v2 (https://lkml.org/lkml/2017/6/12/316)
* Converted documentation to rst format.
* Fixed an incorrect call to mutex_lock. Renamed max_bw to peak_bw.

Changes since RFC v1 (https://lkml.org/lkml/2017/5/15/605)
* Refactored code into shorter functions.
* Added a new aggregate() API function.
* Rearranged some structs to reduce padding bytes.

Changes since RFC v0 (https://lkml.org/lkml/2017/3/1/599)
* Removed DT support and added optional Patch 3 with new bindings proposal.
* Converted the topology into internal driver data.
* Made the framework modular.
* interconnect_get() now takes src and dst ports as arguments.
* Removed public declarations of some structs.
* Now passing prev/next nodes to the vendor driver.
* Properly remove requests on _put().
* Added refcounting.
* Updated documentation.
* Changed struct interconnect_path to use array instead of linked list.

David Dai (2):
  interconnect: qcom: Add sdm845 interconnect provider driver
  arm64: dts: sdm845: Add interconnect provider DT nodes

Georgi Djakov (5):
  interconnect: Add generic on-chip interconnect API
  dt-bindings: Introduce interconnect binding
  interconnect: Allow endpoints translation via DT
  interconnect: Add debugfs support
  MAINTAINERS: add a maintainer for the interconnect API

 .../bindings/interconnect/interconnect.txt    |  60 ++
 .../bindings/interconnect/qcom,sdm845.txt     |  24 +
 Documentation/interconnect/interconnect.rst   |  94 ++
 MAINTAINERS                                   |  10 +
 arch/arm64/boot/dts/qcom/sdm845.dtsi          |   5 +
 drivers/Kconfig                               |   2 +
 drivers/Makefile                              |   1 +
 drivers/interconnect/Kconfig                  |  15 +
 drivers/interconnect/Makefile                 |   6 +
 drivers/interconnect/core.c                   | 796 +++++++++++++++++
 drivers/interconnect/qcom/Kconfig             |  13 +
 drivers/interconnect/qcom/Makefile            |   5 +
 drivers/interconnect/qcom/sdm845.c            | 836 ++++++++++++++++++
 .../dt-bindings/interconnect/qcom,sdm845.h    | 143 +++
 include/linux/interconnect-provider.h         | 142 +++
 include/linux/interconnect.h                  |  58 ++
 16 files changed, 2210 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/interconnect/interconnect.txt
 create mode 100644 Documentation/devicetree/bindings/interconnect/qcom,sdm845.txt
 create mode 100644 Documentation/interconnect/interconnect.rst
 create mode 100644 drivers/interconnect/Kconfig
 create mode 100644 drivers/interconnect/Makefile
 create mode 100644 drivers/interconnect/core.c
 create mode 100644 drivers/interconnect/qcom/Kconfig
 create mode 100644 drivers/interconnect/qcom/Makefile
 create mode 100644 drivers/interconnect/qcom/sdm845.c
 create mode 100644 include/dt-bindings/interconnect/qcom,sdm845.h
 create mode 100644 include/linux/interconnect-provider.h
 create mode 100644 include/linux/interconnect.h



* [PATCH v10 1/7] interconnect: Add generic on-chip interconnect API
  2018-11-27 18:03 [PATCH v10 0/8] Introduce on-chip interconnect API Georgi Djakov
@ 2018-11-27 18:03 ` Georgi Djakov
  2018-11-27 18:35   ` Joe Perches
                     ` (2 more replies)
  2018-11-27 18:03 ` [PATCH v10 2/7] dt-bindings: Introduce interconnect binding Georgi Djakov
                   ` (6 subsequent siblings)
  7 siblings, 3 replies; 34+ messages in thread
From: Georgi Djakov @ 2018-11-27 18:03 UTC (permalink / raw)
  To: linux-pm
  Cc: gregkh, rjw, robh+dt, mturquette, khilman, vincent.guittot,
	skannan, bjorn.andersson, amit.kucheria, seansw, daidavid1,
	evgreen, mark.rutland, lorenzo.pieralisi, abailon, maxime.ripard,
	arnd, thierry.reding, ksitaraman, sanjayc, devicetree,
	linux-kernel, linux-arm-kernel, linux-arm-msm, linux-tegra,
	georgi.djakov

This patch introduces a new API to get requirements and configure the
interconnect buses across the entire chipset to fit the current demand.

The API uses a consumer/provider-based model, where the providers are the
interconnect buses and the consumers could be various drivers.
The consumers request interconnect resources (a path) between endpoints and
set the desired constraints on this data flow path. The providers receive
requests from consumers and aggregate these requests for all master-slave
pairs on that path. Then the providers configure each node participating in
the topology according to the requested data flow path, physical links and
constraints. The topology could be complicated and multi-tiered and is SoC
specific.
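
To illustrate the provider side, a minimal registration sketch using the
functions introduced by this patch could look as follows (the callbacks, node
ids and names are hypothetical):

	static int foo_icc_probe(struct platform_device *pdev)
	{
		struct icc_provider *provider;
		struct icc_node *node;
		int ret;

		provider = devm_kzalloc(&pdev->dev, sizeof(*provider), GFP_KERNEL);
		if (!provider)
			return -ENOMEM;

		provider->dev = &pdev->dev;
		provider->set = foo_set;		/* platform specific set() */
		provider->aggregate = foo_aggregate;	/* platform specific aggregate() */

		ret = icc_provider_add(provider);
		if (ret)
			return ret;

		/* register a node and link it to another (hypothetical) node id */
		node = icc_node_create(FOO_MASTER_ID);
		if (IS_ERR(node)) {
			icc_provider_del(provider);
			return PTR_ERR(node);
		}
		node->name = "foo_master";
		icc_node_add(node, provider);

		return icc_link_create(node, FOO_SLAVE_ID);
	}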

Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
Reviewed-by: Evan Green <evgreen@chromium.org>
---
 Documentation/interconnect/interconnect.rst |  94 ++++
 drivers/Kconfig                             |   2 +
 drivers/Makefile                            |   1 +
 drivers/interconnect/Kconfig                |  10 +
 drivers/interconnect/Makefile               |   5 +
 drivers/interconnect/core.c                 | 569 ++++++++++++++++++++
 include/linux/interconnect-provider.h       | 125 +++++
 include/linux/interconnect.h                |  51 ++
 8 files changed, 857 insertions(+)
 create mode 100644 Documentation/interconnect/interconnect.rst
 create mode 100644 drivers/interconnect/Kconfig
 create mode 100644 drivers/interconnect/Makefile
 create mode 100644 drivers/interconnect/core.c
 create mode 100644 include/linux/interconnect-provider.h
 create mode 100644 include/linux/interconnect.h

diff --git a/Documentation/interconnect/interconnect.rst b/Documentation/interconnect/interconnect.rst
new file mode 100644
index 000000000000..b8107dcc4cd3
--- /dev/null
+++ b/Documentation/interconnect/interconnect.rst
@@ -0,0 +1,94 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=====================================
+GENERIC SYSTEM INTERCONNECT SUBSYSTEM
+=====================================
+
+Introduction
+------------
+
+This framework is designed to provide a standard kernel interface to control
+the settings of the interconnects on an SoC. These settings can be throughput,
+latency and priority between multiple interconnected devices or functional
+blocks. This can be controlled dynamically in order to save power or provide
+maximum performance.
+
+The interconnect bus is hardware with configurable parameters, which can be
+set on a data path according to the requests received from various drivers.
+Examples of interconnect buses are the interconnects between various
+components or functional blocks in chipsets. There can be multiple
+interconnects on an SoC and they can be multi-tiered.
+
+Below is a simplified diagram of a real-world SoC interconnect bus topology.
+
+::
+
+ +----------------+    +----------------+
+ | HW Accelerator |--->|      M NoC     |<---------------+
+ +----------------+    +----------------+                |
+                         |      |                    +------------+
+  +-----+  +-------------+      V       +------+     |            |
+  | DDR |  |                +--------+  | PCIe |     |            |
+  +-----+  |                | Slaves |  +------+     |            |
+    ^ ^    |                +--------+     |         |   C NoC    |
+    | |    V                               V         |            |
+ +------------------+   +------------------------+   |            |   +-----+
+ |                  |-->|                        |-->|            |-->| CPU |
+ |                  |-->|                        |<--|            |   +-----+
+ |     Mem NoC      |   |         S NoC          |   +------------+
+ |                  |<--|                        |---------+    |
+ |                  |<--|                        |<------+ |    |   +--------+
+ +------------------+   +------------------------+       | |    +-->| Slaves |
+   ^  ^    ^    ^          ^                             | |        +--------+
+   |  |    |    |          |                             | V
+ +------+  |  +-----+   +-----+  +---------+   +----------------+   +--------+
+ | CPUs |  |  | GPU |   | DSP |  | Masters |-->|       P NoC    |-->| Slaves |
+ +------+  |  +-----+   +-----+  +---------+   +----------------+   +--------+
+           |
+       +-------+
+       | Modem |
+       +-------+
+
+Terminology
+-----------
+
+Interconnect provider is the software definition of the interconnect hardware.
+The interconnect providers on the above diagram are M NoC, S NoC, C NoC, P NoC
+and Mem NoC.
+
+Interconnect node is the software definition of the interconnect hardware
+port. Each interconnect provider consists of multiple interconnect nodes,
+which are connected to other SoC components including other interconnect
+providers. The point on the diagram where the CPUs connect to the memory is
+called an interconnect node, which belongs to the Mem NoC interconnect provider.
+
+Interconnect endpoints are the first or the last element of the path. Every
+endpoint is a node, but not every node is an endpoint.
+
+Interconnect path is everything between two endpoints, including all the nodes
+that have to be traversed to get from the source to the destination node. It
+may include multiple master-slave pairs across several interconnect providers.
+
+Interconnect consumers are the entities which make use of the data paths exposed
+by the providers. The consumers send requests to providers, asking for various
+throughput, latency and priority. Usually the consumers are device drivers that
+send requests based on their needs. An example of a consumer is a video decoder
+that supports various formats and image sizes.
+
+Interconnect providers
+----------------------
+
+Interconnect provider is an entity that implements methods to initialize and
+configure interconnect bus hardware. The interconnect provider drivers should
+be registered with the interconnect provider core.
+
+.. kernel-doc:: include/linux/interconnect-provider.h
+
+Interconnect consumers
+----------------------
+
+Interconnect consumers are the clients which use the interconnect APIs to
+get paths between endpoints and set their bandwidth/latency/QoS requirements
+for these interconnect paths.
+
+.. kernel-doc:: include/linux/interconnect.h
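
As an illustration of the provider callbacks declared in
interconnect-provider.h, a hypothetical aggregate() implementation (not part
of this patch) could sum the average bandwidth requests and keep the maximum
of the peak requests:

static int foo_aggregate(struct icc_node *node, u32 avg_bw, u32 peak_bw,
			 u32 *agg_avg, u32 *agg_peak)
{
	/* accumulate the average bandwidth requested by all consumers */
	*agg_avg += avg_bw;

	/* the node only needs to satisfy the largest peak request */
	*agg_peak = max(*agg_peak, peak_bw);

	return 0;
}
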
diff --git a/drivers/Kconfig b/drivers/Kconfig
index ab4d43923c4d..ec510981554a 100644
--- a/drivers/Kconfig
+++ b/drivers/Kconfig
@@ -219,4 +219,6 @@ source "drivers/siox/Kconfig"
 
 source "drivers/slimbus/Kconfig"
 
+source "drivers/interconnect/Kconfig"
+
 endmenu
diff --git a/drivers/Makefile b/drivers/Makefile
index 578f469f72fb..06f68339e2a7 100644
--- a/drivers/Makefile
+++ b/drivers/Makefile
@@ -186,3 +186,4 @@ obj-$(CONFIG_MULTIPLEXER)	+= mux/
 obj-$(CONFIG_UNISYS_VISORBUS)	+= visorbus/
 obj-$(CONFIG_SIOX)		+= siox/
 obj-$(CONFIG_GNSS)		+= gnss/
+obj-$(CONFIG_INTERCONNECT)	+= interconnect/
diff --git a/drivers/interconnect/Kconfig b/drivers/interconnect/Kconfig
new file mode 100644
index 000000000000..a261c7d41deb
--- /dev/null
+++ b/drivers/interconnect/Kconfig
@@ -0,0 +1,10 @@
+menuconfig INTERCONNECT
+	tristate "On-Chip Interconnect management support"
+	help
+	  Support for management of the on-chip interconnects.
+
+	  This framework is designed to provide a generic interface for
+	  managing the interconnects in an SoC.
+
+	  If unsure, say no.
+
diff --git a/drivers/interconnect/Makefile b/drivers/interconnect/Makefile
new file mode 100644
index 000000000000..7a01f33b5593
--- /dev/null
+++ b/drivers/interconnect/Makefile
@@ -0,0 +1,5 @@
+# SPDX-License-Identifier: GPL-2.0
+
+icc-core-objs				:= core.o
+
+obj-$(CONFIG_INTERCONNECT)		+= icc-core.o
diff --git a/drivers/interconnect/core.c b/drivers/interconnect/core.c
new file mode 100644
index 000000000000..3ae8e5a58969
--- /dev/null
+++ b/drivers/interconnect/core.c
@@ -0,0 +1,569 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Interconnect framework core driver
+ *
+ * Copyright (c) 2018, Linaro Ltd.
+ * Author: Georgi Djakov <georgi.djakov@linaro.org>
+ */
+
+#include <linux/device.h>
+#include <linux/idr.h>
+#include <linux/init.h>
+#include <linux/interconnect.h>
+#include <linux/interconnect-provider.h>
+#include <linux/list.h>
+#include <linux/module.h>
+#include <linux/mutex.h>
+#include <linux/slab.h>
+#include <linux/overflow.h>
+
+static DEFINE_IDR(icc_idr);
+static LIST_HEAD(icc_providers);
+static DEFINE_MUTEX(icc_lock);
+
+/**
+ * struct icc_req - constraints that are attached to each node
+ * @req_node: entry in list of requests for the particular @node
+ * @node: the interconnect node to which this constraint applies
+ * @dev: reference to the device that sets the constraints
+ * @avg_bw: an integer describing the average bandwidth in kBps
+ * @peak_bw: an integer describing the peak bandwidth in kBps
+ */
+struct icc_req {
+	struct hlist_node req_node;
+	struct icc_node *node;
+	struct device *dev;
+	u32 avg_bw;
+	u32 peak_bw;
+};
+
+/**
+ * struct icc_path - interconnect path structure
+ * @num_nodes: number of hops (nodes)
+ * @reqs: array of the requests applicable to this path of nodes
+ */
+struct icc_path {
+	size_t num_nodes;
+	struct icc_req reqs[];
+};
+
+static struct icc_node *node_find(const int id)
+{
+	return idr_find(&icc_idr, id);
+}
+
+static struct icc_path *path_init(struct device *dev, struct icc_node *dst,
+				  ssize_t num_nodes)
+{
+	struct icc_node *node = dst;
+	struct icc_path *path;
+	int i;
+
+	path = kzalloc(struct_size(path, reqs, num_nodes), GFP_KERNEL);
+	if (!path)
+		return ERR_PTR(-ENOMEM);
+
+	path->num_nodes = num_nodes;
+
+	for (i = num_nodes - 1; i >= 0; i--) {
+		node->provider->users++;
+		hlist_add_head(&path->reqs[i].req_node, &node->req_list);
+		path->reqs[i].node = node;
+		path->reqs[i].dev = dev;
+		/* reference to previous node was saved during path traversal */
+		node = node->reverse;
+	}
+
+	return path;
+}
+
+static struct icc_path *path_find(struct device *dev, struct icc_node *src,
+				  struct icc_node *dst)
+{
+	struct icc_path *path = ERR_PTR(-EPROBE_DEFER);
+	struct icc_node *n, *node = NULL;
+	struct list_head traverse_list;
+	struct list_head edge_list;
+	struct list_head visited_list;
+	size_t i, depth = 1;
+	bool found = false;
+
+	INIT_LIST_HEAD(&traverse_list);
+	INIT_LIST_HEAD(&edge_list);
+	INIT_LIST_HEAD(&visited_list);
+
+	list_add(&src->search_list, &traverse_list);
+	src->reverse = NULL;
+
+	do {
+		list_for_each_entry_safe(node, n, &traverse_list, search_list) {
+			if (node == dst) {
+				found = true;
+				list_splice_init(&edge_list, &visited_list);
+				list_splice_init(&traverse_list, &visited_list);
+				break;
+			}
+			for (i = 0; i < node->num_links; i++) {
+				struct icc_node *tmp = node->links[i];
+
+				if (!tmp) {
+					path = ERR_PTR(-ENOENT);
+					goto out;
+				}
+
+				if (tmp->is_traversed)
+					continue;
+
+				tmp->is_traversed = true;
+				tmp->reverse = node;
+				list_add_tail(&tmp->search_list, &edge_list);
+			}
+		}
+
+		if (found)
+			break;
+
+		list_splice_init(&traverse_list, &visited_list);
+		list_splice_init(&edge_list, &traverse_list);
+
+		/* count the hops including the source */
+		depth++;
+
+	} while (!list_empty(&traverse_list));
+
+out:
+
+	/* reset the traversed state */
+	list_for_each_entry_reverse(n, &visited_list, search_list)
+		n->is_traversed = false;
+
+	if (found)
+		path = path_init(dev, dst, depth);
+
+	return path;
+}
+
+/*
+ * We want the path to honor all bandwidth requests, so the average and peak
+ * bandwidth requirements from each consumer are aggregated at each node.
+ * The aggregation is platform specific, so each platform can customize it by
+ * implementing its own aggregate() function.
+ */
+
+static int aggregate_requests(struct icc_node *node)
+{
+	struct icc_provider *p = node->provider;
+	struct icc_req *r;
+
+	node->avg_bw = 0;
+	node->peak_bw = 0;
+
+	hlist_for_each_entry(r, &node->req_list, req_node)
+		p->aggregate(node, r->avg_bw, r->peak_bw,
+			     &node->avg_bw, &node->peak_bw);
+
+	return 0;
+}
+
+static int apply_constraints(struct icc_path *path)
+{
+	struct icc_node *next, *prev = NULL;
+	int ret = -EINVAL;
+	int i;
+
+	for (i = 0; i < path->num_nodes; i++, prev = next) {
+		struct icc_provider *p;
+
+		next = path->reqs[i].node;
+		/*
+		 * Both endpoints should be valid master-slave pairs of the
+		 * same interconnect provider that will be configured.
+		 */
+		if (!prev || next->provider != prev->provider)
+			continue;
+
+		p = next->provider;
+
+		/* set the constraints */
+		ret = p->set(prev, next);
+		if (ret)
+			goto out;
+	}
+out:
+	return ret;
+}
+
+/**
+ * icc_set() - set constraints on an interconnect path between two endpoints
+ * @path: reference to the path returned by icc_get()
+ * @avg_bw: average bandwidth in kilobytes per second
+ * @peak_bw: peak bandwidth in kilobytes per second
+ *
+ * This function is used by an interconnect consumer to express its own needs
+ * in terms of bandwidth for a previously requested path between two endpoints.
+ * The requests are aggregated and each node is updated accordingly. The entire
+ * path is locked by a mutex to ensure that the set() is completed.
+ * The @path can be NULL when the "interconnects" DT property is missing,
+ * which will mean that no constraints will be set.
+ *
+ * Returns 0 on success, or an appropriate error code otherwise.
+ */
+int icc_set(struct icc_path *path, u32 avg_bw, u32 peak_bw)
+{
+	struct icc_node *node;
+	size_t i;
+	int ret;
+
+	if (!path)
+		return 0;
+
+	mutex_lock(&icc_lock);
+
+	for (i = 0; i < path->num_nodes; i++) {
+		node = path->reqs[i].node;
+
+		/* update the consumer request for this path */
+		path->reqs[i].avg_bw = avg_bw;
+		path->reqs[i].peak_bw = peak_bw;
+
+		/* aggregate requests for this node */
+		aggregate_requests(node);
+	}
+
+	ret = apply_constraints(path);
+	if (ret)
+		pr_debug("interconnect: error applying constraints (%d)\n", ret);
+
+	mutex_unlock(&icc_lock);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(icc_set);
+
+/**
+ * icc_get() - return a handle for path between two endpoints
+ * @dev: the device requesting the path
+ * @src_id: source device port id
+ * @dst_id: destination device port id
+ *
+ * This function will search for a path between two endpoints and return an
+ * icc_path handle on success. Use icc_put() to release
+ * constraints when they are not needed anymore.
+ * If the interconnect API is disabled, NULL is returned and the consumer
+ * drivers will still build. Drivers are free to handle this specifically,
+ * but they don't have to.
+ *
+ * Return: icc_path pointer on success, ERR_PTR() on error or NULL if the
+ * interconnect API is disabled.
+ */
+struct icc_path *icc_get(struct device *dev, const int src_id, const int dst_id)
+{
+	struct icc_node *src, *dst;
+	struct icc_path *path = ERR_PTR(-EPROBE_DEFER);
+
+	mutex_lock(&icc_lock);
+
+	src = node_find(src_id);
+	if (!src)
+		goto out;
+
+	dst = node_find(dst_id);
+	if (!dst)
+		goto out;
+
+	path = path_find(dev, src, dst);
+	if (IS_ERR(path))
+		dev_err(dev, "%s: invalid path=%ld\n", __func__, PTR_ERR(path));
+
+out:
+	mutex_unlock(&icc_lock);
+	return path;
+}
+EXPORT_SYMBOL_GPL(icc_get);
+
+/**
+ * icc_put() - release the reference to the icc_path
+ * @path: interconnect path
+ *
+ * Use this function to release the constraints on a path when the path is
+ * no longer needed. The constraints will be re-aggregated.
+ */
+void icc_put(struct icc_path *path)
+{
+	struct icc_node *node;
+	size_t i;
+	int ret;
+
+	if (!path || WARN_ON(IS_ERR(path)))
+		return;
+
+	ret = icc_set(path, 0, 0);
+	if (ret)
+		pr_err("%s: error (%d)\n", __func__, ret);
+
+	mutex_lock(&icc_lock);
+	for (i = 0; i < path->num_nodes; i++) {
+		node = path->reqs[i].node;
+		hlist_del(&path->reqs[i].req_node);
+		if (!WARN_ON(!node->provider->users))
+			node->provider->users--;
+	}
+	mutex_unlock(&icc_lock);
+
+	kfree(path);
+}
+EXPORT_SYMBOL_GPL(icc_put);
+
+static struct icc_node *icc_node_create_nolock(int id)
+{
+	struct icc_node *node;
+
+	/* check if node already exists */
+	node = node_find(id);
+	if (node)
+		goto out;
+
+	node = kzalloc(sizeof(*node), GFP_KERNEL);
+	if (!node) {
+		node = ERR_PTR(-ENOMEM);
+		goto out;
+	}
+
+	id = idr_alloc(&icc_idr, node, id, id + 1, GFP_KERNEL);
+	if (WARN(id < 0, "couldn't get idr")) {
+		kfree(node);
+		node = ERR_PTR(id);
+		goto out;
+	}
+
+	node->id = id;
+
+out:
+	return node;
+}
+
+/**
+ * icc_node_create() - create a node
+ * @id: node id
+ *
+ * Return: icc_node pointer on success, or ERR_PTR() on error
+ */
+struct icc_node *icc_node_create(int id)
+{
+	struct icc_node *node;
+
+	mutex_lock(&icc_lock);
+
+	node = icc_node_create_nolock(id);
+
+	mutex_unlock(&icc_lock);
+
+	return node;
+}
+EXPORT_SYMBOL_GPL(icc_node_create);
+
+/**
+ * icc_node_destroy() - destroy a node
+ * @id: node id
+ */
+void icc_node_destroy(int id)
+{
+	struct icc_node *node;
+
+	mutex_lock(&icc_lock);
+
+	node = node_find(id);
+	if (node) {
+		idr_remove(&icc_idr, node->id);
+		WARN_ON(!hlist_empty(&node->req_list));
+	}
+
+	mutex_unlock(&icc_lock);
+
+	kfree(node);
+}
+EXPORT_SYMBOL_GPL(icc_node_destroy);
+
+/**
+ * icc_link_create() - create a link between two nodes
+ * @node: source node id
+ * @dst_id: destination node id
+ *
+ * Create a link between two nodes. The nodes might belong to different
+ * interconnect providers and the @dst_id node might not exist (if the
+ * provider driver has not probed yet). So just create the @dst_id node
+ * and when the actual provider driver is probed, the rest of the node
+ * data is filled.
+ *
+ * Return: 0 on success, or an error code otherwise
+ */
+int icc_link_create(struct icc_node *node, const int dst_id)
+{
+	struct icc_node *dst;
+	struct icc_node **new;
+	int ret = 0;
+
+	if (!node->provider)
+		return -EINVAL;
+
+	mutex_lock(&icc_lock);
+
+	dst = node_find(dst_id);
+	if (!dst) {
+		dst = icc_node_create_nolock(dst_id);
+
+		if (IS_ERR(dst)) {
+			ret = PTR_ERR(dst);
+			goto out;
+		}
+	}
+
+	new = krealloc(node->links,
+		       (node->num_links + 1) * sizeof(*node->links),
+		       GFP_KERNEL);
+	if (!new) {
+		ret = -ENOMEM;
+		goto out;
+	}
+
+	node->links = new;
+	node->links[node->num_links++] = dst;
+
+out:
+	mutex_unlock(&icc_lock);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(icc_link_create);
+
+/**
+ * icc_link_destroy() - destroy a link between two nodes
+ * @src: pointer to source node
+ * @dst: pointer to destination node
+ *
+ * Return: 0 on success, or an error code otherwise
+ */
+int icc_link_destroy(struct icc_node *src, struct icc_node *dst)
+{
+	struct icc_node **new;
+	size_t slot;
+	int ret = 0;
+
+	if (IS_ERR_OR_NULL(src))
+		return -EINVAL;
+
+	if (IS_ERR_OR_NULL(dst))
+		return -EINVAL;
+
+	mutex_lock(&icc_lock);
+
+	for (slot = 0; slot < src->num_links; slot++)
+		if (src->links[slot] == dst)
+			break;
+
+	if (WARN_ON(slot == src->num_links)) {
+		ret = -ENXIO;
+		goto out;
+	}
+
+	src->links[slot] = src->links[--src->num_links];
+
+	new = krealloc(src->links,
+		       (src->num_links) * sizeof(*src->links),
+		       GFP_KERNEL);
+	if (new)
+		src->links = new;
+
+out:
+	mutex_unlock(&icc_lock);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(icc_link_destroy);
+
+/**
+ * icc_node_add() - add interconnect node to interconnect provider
+ * @node: pointer to the interconnect node
+ * @provider: pointer to the interconnect provider
+ */
+void icc_node_add(struct icc_node *node, struct icc_provider *provider)
+{
+	mutex_lock(&icc_lock);
+
+	node->provider = provider;
+	list_add_tail(&node->node_list, &provider->nodes);
+
+	mutex_unlock(&icc_lock);
+}
+EXPORT_SYMBOL_GPL(icc_node_add);
+
+/**
+ * icc_node_del() - delete interconnect node from interconnect provider
+ * @node: pointer to the interconnect node
+ */
+void icc_node_del(struct icc_node *node)
+{
+	mutex_lock(&icc_lock);
+
+	list_del(&node->node_list);
+
+	mutex_unlock(&icc_lock);
+}
+EXPORT_SYMBOL_GPL(icc_node_del);
+
+/**
+ * icc_provider_add() - add a new interconnect provider
+ * @provider: the interconnect provider that will be added into topology
+ *
+ * Return: 0 on success, or an error code otherwise
+ */
+int icc_provider_add(struct icc_provider *provider)
+{
+	if (WARN_ON(!provider->set))
+		return -EINVAL;
+
+	mutex_lock(&icc_lock);
+
+	INIT_LIST_HEAD(&provider->nodes);
+	list_add_tail(&provider->provider_list, &icc_providers);
+
+	mutex_unlock(&icc_lock);
+
+	dev_dbg(provider->dev, "interconnect provider added to topology\n");
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(icc_provider_add);
+
+/**
+ * icc_provider_del() - delete previously added interconnect provider
+ * @provider: the interconnect provider that will be removed from topology
+ *
+ * Return: 0 on success, or an error code otherwise
+ */
+int icc_provider_del(struct icc_provider *provider)
+{
+	mutex_lock(&icc_lock);
+	if (provider->users) {
+		pr_warn("interconnect provider still has %d users\n",
+			provider->users);
+		mutex_unlock(&icc_lock);
+		return -EBUSY;
+	}
+
+	if (!list_empty(&provider->nodes)) {
+		pr_warn("interconnect provider still has nodes\n");
+		mutex_unlock(&icc_lock);
+		return -EBUSY;
+	}
+
+	list_del(&provider->provider_list);
+	mutex_unlock(&icc_lock);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(icc_provider_del);
+
+MODULE_AUTHOR("Georgi Djakov <georgi.djakov@linaro.org>");
+MODULE_DESCRIPTION("Interconnect Driver Core");
+MODULE_LICENSE("GPL v2");
diff --git a/include/linux/interconnect-provider.h b/include/linux/interconnect-provider.h
new file mode 100644
index 000000000000..78208a754181
--- /dev/null
+++ b/include/linux/interconnect-provider.h
@@ -0,0 +1,125 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2018, Linaro Ltd.
+ * Author: Georgi Djakov <georgi.djakov@linaro.org>
+ */
+
+#ifndef __LINUX_INTERCONNECT_PROVIDER_H
+#define __LINUX_INTERCONNECT_PROVIDER_H
+
+#include <linux/interconnect.h>
+
+#define icc_units_to_bps(bw)  ((bw) * 1000ULL)
+
+struct icc_node;
+
+/**
+ * struct icc_provider - interconnect provider (controller) entity that might
+ * provide multiple interconnect controls
+ *
+ * @provider_list: list of the registered interconnect providers
+ * @nodes: internal list of the interconnect provider nodes
+ * @set: pointer to device specific set operation function
+ * @aggregate: pointer to device specific aggregate operation function
+ * @dev: the device this interconnect provider belongs to
+ * @users: count of active users
+ * @data: pointer to private data
+ */
+struct icc_provider {
+	struct list_head	provider_list;
+	struct list_head	nodes;
+	int (*set)(struct icc_node *src, struct icc_node *dst);
+	int (*aggregate)(struct icc_node *node, u32 avg_bw, u32 peak_bw,
+			 u32 *agg_avg, u32 *agg_peak);
+	struct device		*dev;
+	int			users;
+	void			*data;
+};
+
+/**
+ * struct icc_node - entity that is part of the interconnect topology
+ *
+ * @id: platform specific node id
+ * @name: node name used in debugfs
+ * @links: a list of targets pointing to where we can go next when traversing
+ * @num_links: number of links to other interconnect nodes
+ * @provider: points to the interconnect provider of this node
+ * @node_list: the list entry in the parent provider's "nodes" list
+ * @search_list: list used when walking the nodes graph
+ * @reverse: pointer to previous node when walking the nodes graph
+ * @is_traversed: flag that is used when walking the nodes graph
+ * @req_list: a list of QoS constraint requests associated with this node
+ * @avg_bw: aggregated value of average bandwidth requests from all consumers
+ * @peak_bw: aggregated value of peak bandwidth requests from all consumers
+ * @data: pointer to private data
+ */
+struct icc_node {
+	int			id;
+	const char              *name;
+	struct icc_node		**links;
+	size_t			num_links;
+
+	struct icc_provider	*provider;
+	struct list_head	node_list;
+	struct list_head	search_list;
+	struct icc_node		*reverse;
+	u8			is_traversed:1;
+	struct hlist_head	req_list;
+	u32			avg_bw;
+	u32			peak_bw;
+	void			*data;
+};
+
+#if IS_ENABLED(CONFIG_INTERCONNECT)
+
+struct icc_node *icc_node_create(int id);
+void icc_node_destroy(int id);
+int icc_link_create(struct icc_node *node, const int dst_id);
+int icc_link_destroy(struct icc_node *src, struct icc_node *dst);
+void icc_node_add(struct icc_node *node, struct icc_provider *provider);
+void icc_node_del(struct icc_node *node);
+int icc_provider_add(struct icc_provider *provider);
+int icc_provider_del(struct icc_provider *provider);
+
+#else
+
+static inline struct icc_node *icc_node_create(int id)
+{
+	return ERR_PTR(-ENOTSUPP);
+}
+
+static inline void icc_node_destroy(int id)
+{
+}
+
+static inline int icc_link_create(struct icc_node *node, const int dst_id)
+{
+	return -ENOTSUPP;
+}
+
+static inline int icc_link_destroy(struct icc_node *src, struct icc_node *dst)
+{
+	return -ENOTSUPP;
+}
+
+static inline void icc_node_add(struct icc_node *node, struct icc_provider *provider)
+{
+}
+
+static inline void icc_node_del(struct icc_node *node)
+{
+}
+
+static inline int icc_provider_add(struct icc_provider *provider)
+{
+	return -ENOTSUPP;
+}
+
+static inline int icc_provider_del(struct icc_provider *provider)
+{
+	return -ENOTSUPP;
+}
+
+#endif /* CONFIG_INTERCONNECT */
+
+#endif /* __LINUX_INTERCONNECT_PROVIDER_H */
diff --git a/include/linux/interconnect.h b/include/linux/interconnect.h
new file mode 100644
index 000000000000..04b2966ded9f
--- /dev/null
+++ b/include/linux/interconnect.h
@@ -0,0 +1,51 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2018, Linaro Ltd.
+ * Author: Georgi Djakov <georgi.djakov@linaro.org>
+ */
+
+#ifndef __LINUX_INTERCONNECT_H
+#define __LINUX_INTERCONNECT_H
+
+#include <linux/mutex.h>
+#include <linux/types.h>
+
+/* macros for converting to icc units */
+#define bps_to_icc(x)	(1)
+#define kBps_to_icc(x)	(x)
+#define MBps_to_icc(x)	((x) * 1000)
+#define GBps_to_icc(x)	((x) * 1000 * 1000)
+#define kbps_to_icc(x)	((x) / 8 + ((x) % 8 ? 1 : 0))
+#define Mbps_to_icc(x)	((x) * 1000 / 8)
+#define Gbps_to_icc(x)	((x) * 1000 * 1000 / 8)
+
+struct icc_path;
+struct device;
+
+#if IS_ENABLED(CONFIG_INTERCONNECT)
+
+struct icc_path *icc_get(struct device *dev, const int src_id,
+			 const int dst_id);
+void icc_put(struct icc_path *path);
+int icc_set(struct icc_path *path, u32 avg_bw, u32 peak_bw);
+
+#else
+
+static inline struct icc_path *icc_get(struct device *dev, const int src_id,
+				       const int dst_id)
+{
+	return NULL;
+}
+
+static inline void icc_put(struct icc_path *path)
+{
+}
+
+static inline int icc_set(struct icc_path *path, u32 avg_bw, u32 peak_bw)
+{
+	return 0;
+}
+
+#endif /* CONFIG_INTERCONNECT */
+
+#endif /* __LINUX_INTERCONNECT_H */


* [PATCH v10 2/7] dt-bindings: Introduce interconnect binding
  2018-11-27 18:03 [PATCH v10 0/8] Introduce on-chip interconnect API Georgi Djakov
  2018-11-27 18:03 ` [PATCH v10 1/7] interconnect: Add generic " Georgi Djakov
@ 2018-11-27 18:03 ` Georgi Djakov
  2018-12-01  0:38   ` Evan Green
  2018-11-27 18:03 ` [PATCH v10 3/7] interconnect: Allow endpoints translation via DT Georgi Djakov
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 34+ messages in thread
From: Georgi Djakov @ 2018-11-27 18:03 UTC (permalink / raw)
  To: linux-pm
  Cc: gregkh, rjw, robh+dt, mturquette, khilman, vincent.guittot,
	skannan, bjorn.andersson, amit.kucheria, seansw, daidavid1,
	evgreen, mark.rutland, lorenzo.pieralisi, abailon, maxime.ripard,
	arnd, thierry.reding, ksitaraman, sanjayc, devicetree,
	linux-kernel, linux-arm-kernel, linux-arm-msm, linux-tegra,
	georgi.djakov

This binding is intended to represent the relations between the interconnect
controllers (providers) and consumer device nodes. It will allow creating links
between consumers and interconnect paths (exposed by interconnect providers).

Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
---
 .../bindings/interconnect/interconnect.txt    | 60 +++++++++++++++++++
 1 file changed, 60 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/interconnect/interconnect.txt

diff --git a/Documentation/devicetree/bindings/interconnect/interconnect.txt b/Documentation/devicetree/bindings/interconnect/interconnect.txt
new file mode 100644
index 000000000000..6775c07e1574
--- /dev/null
+++ b/Documentation/devicetree/bindings/interconnect/interconnect.txt
@@ -0,0 +1,60 @@
+Interconnect Provider Device Tree Bindings
+=========================================
+
+The purpose of this document is to define a common set of generic interconnect
+providers/consumers properties.
+
+
+= interconnect providers =
+
+The interconnect provider binding is intended to represent the interconnect
+controllers in the system. Each provider registers a set of interconnect
+nodes, which expose the interconnect related capabilities of the interconnect
+to consumer drivers. These capabilities can be throughput, latency, priority
+etc. The consumer drivers set constraints on interconnect path (or endpoints)
+depending on the use case. Interconnect providers can also be interconnect
+consumers, such as in the case where two network-on-chip fabrics interface
+directly.
+
+Required properties:
+- compatible : contains the interconnect provider compatible string
+- #interconnect-cells : number of cells in an interconnect specifier needed to
+			encode the interconnect node id
+
+Example:
+
+		snoc: interconnect@580000 {
+			compatible = "qcom,msm8916-snoc";
+			#interconnect-cells = <1>;
+			reg = <0x580000 0x14000>;
+			clock-names = "bus_clk", "bus_a_clk";
+			clocks = <&rpmcc RPM_SMD_SNOC_CLK>,
+				 <&rpmcc RPM_SMD_SNOC_A_CLK>;
+		};
+
+
+= interconnect consumers =
+
+The interconnect consumers are device nodes which dynamically express their
+bandwidth requirements along interconnect paths they are connected to. There
+can be multiple interconnect providers on an SoC and the consumer may consume
+multiple paths from different providers depending on use case and the
+components it has to interact with.
+
+Required properties:
+interconnects : Pairs of phandles and interconnect provider specifiers to denote
+	        the edge source and destination ports of the interconnect path.
+
+Optional properties:
+interconnect-names : List of interconnect path name strings sorted in the same
+		     order as the interconnects property. Consumer drivers will use
+		     interconnect-names to match interconnect paths with interconnect
+		     specifier pairs.
+
+Example:
+
+	sdhci@7864000 {
+		...
+		interconnects = <&pnoc MASTER_SDCC_1 &bimc SLAVE_EBI_CH0>;
+		interconnect-names = "sdhc-ddr";
+	};


* [PATCH v10 3/7] interconnect: Allow endpoints translation via DT
  2018-11-27 18:03 [PATCH v10 0/8] Introduce on-chip interconnect API Georgi Djakov
  2018-11-27 18:03 ` [PATCH v10 1/7] interconnect: Add generic " Georgi Djakov
  2018-11-27 18:03 ` [PATCH v10 2/7] dt-bindings: Introduce interconnect binding Georgi Djakov
@ 2018-11-27 18:03 ` Georgi Djakov
  2018-12-01  0:38   ` Evan Green
  2018-11-27 18:03 ` [PATCH v10 4/7] interconnect: Add debugfs support Georgi Djakov
                   ` (4 subsequent siblings)
  7 siblings, 1 reply; 34+ messages in thread
From: Georgi Djakov @ 2018-11-27 18:03 UTC (permalink / raw)
  To: linux-pm
  Cc: gregkh, rjw, robh+dt, mturquette, khilman, vincent.guittot,
	skannan, bjorn.andersson, amit.kucheria, seansw, daidavid1,
	evgreen, mark.rutland, lorenzo.pieralisi, abailon, maxime.ripard,
	arnd, thierry.reding, ksitaraman, sanjayc, devicetree,
	linux-kernel, linux-arm-kernel, linux-arm-msm, linux-tegra,
	georgi.djakov

Currently we support only platform data for specifying the interconnect
endpoints. As the endpoints are now hard-coded into the consumer drivers,
this may lead to complications when a single driver is used by multiple
SoCs which have different interconnect topologies.
To avoid cluttering the consumer drivers, introduce a translation function
to help us get the board-specific interconnect data from device-tree.
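
With this in place, a consumer can look up its path by name instead of by
hard-coded ids. A minimal sketch (the "sdhc-ddr" name follows the example in
the interconnect binding document; everything else is made up):

	struct icc_path *path;

	/* resolves the "interconnects"/"interconnect-names" DT properties */
	path = of_icc_get(&pdev->dev, "sdhc-ddr");
	if (IS_ERR(path))
		return PTR_ERR(path);
	if (!path)
		return 0;	/* no "interconnects" property, nothing to do */

	return icc_set(path, kBps_to_icc(100000), kBps_to_icc(200000));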

Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
---
 drivers/interconnect/core.c           | 156 ++++++++++++++++++++++++++
 include/linux/interconnect-provider.h |  17 +++
 include/linux/interconnect.h          |   7 ++
 3 files changed, 180 insertions(+)

diff --git a/drivers/interconnect/core.c b/drivers/interconnect/core.c
index 3ae8e5a58969..ebc42ef9fa46 100644
--- a/drivers/interconnect/core.c
+++ b/drivers/interconnect/core.c
@@ -15,6 +15,7 @@
 #include <linux/module.h>
 #include <linux/mutex.h>
 #include <linux/slab.h>
+#include <linux/of.h>
 #include <linux/overflow.h>
 
 static DEFINE_IDR(icc_idr);
@@ -193,6 +194,159 @@ static int apply_constraints(struct icc_path *path)
 	return ret;
 }
 
+/**
+ * of_icc_xlate_onecell() - Xlate function using a single index.
+ * @spec: OF phandle args to map into an interconnect node.
+ * @data: private data (pointer to struct icc_onecell_data)
+ *
+ * This is a generic xlate function that can be used to model simple
+ * interconnect providers that have one device tree node and provide
+ * multiple interconnect nodes. A single cell is used as an index into
+ * an array of icc nodes specified in the icc_onecell_data struct when
+ * registering the provider.
+ */
+struct icc_node *of_icc_xlate_onecell(struct of_phandle_args *spec,
+				      void *data)
+{
+	struct icc_onecell_data *icc_data = data;
+	unsigned int idx = spec->args[0];
+
+	if (idx >= icc_data->num_nodes) {
+		pr_err("%s: invalid index %u\n", __func__, idx);
+		return ERR_PTR(-EINVAL);
+	}
+
+	return icc_data->nodes[idx];
+}
+EXPORT_SYMBOL_GPL(of_icc_xlate_onecell);
+
+/**
+ * of_icc_get_from_provider() - Look-up interconnect node
+ * @spec: OF phandle args to use for look-up
+ *
+ * Looks for interconnect provider under the node specified by @spec and if
+ * found, uses xlate function of the provider to map phandle args to node.
+ *
+ * Returns a valid pointer to struct icc_node on success or ERR_PTR()
+ * on failure.
+ */
+static struct icc_node *of_icc_get_from_provider(struct of_phandle_args *spec)
+{
+	struct icc_node *node = ERR_PTR(-EPROBE_DEFER);
+	struct icc_provider *provider;
+
+	if (!spec)
+		return ERR_PTR(-EINVAL);
+
+	mutex_lock(&icc_lock);
+	list_for_each_entry(provider, &icc_providers, provider_list) {
+		if (provider->dev->of_node == spec->np)
+			node = provider->xlate(spec, provider->data);
+		if (!IS_ERR(node))
+			break;
+	}
+	mutex_unlock(&icc_lock);
+
+	return node;
+}
+
+/**
+ * of_icc_get() - get a path handle from a DT node based on name
+ * @dev: device pointer for the consumer device
+ * @name: interconnect path name
+ *
+ * This function will search for a path between two endpoints and return an
+ * icc_path handle on success. Use icc_put() to release constraints when
+ * they are not needed anymore.
+ * If the interconnect API is disabled, NULL is returned and the consumer
+ * drivers will still build. Drivers are free to handle this specifically,
+ * but they don't have to. NULL is also returned when the "interconnects"
+ * DT property is missing.
+ *
+ * Return: icc_path pointer on success or ERR_PTR() on error. NULL is returned
+ * when the API is disabled or the "interconnects" DT property is missing.
+ */
+struct icc_path *of_icc_get(struct device *dev, const char *name)
+{
+	struct icc_path *path = ERR_PTR(-EPROBE_DEFER);
+	struct icc_node *src_node, *dst_node;
+	struct device_node *np = NULL;
+	struct of_phandle_args src_args, dst_args;
+	int idx = 0;
+	int ret;
+
+	if (!dev || !dev->of_node)
+		return ERR_PTR(-ENODEV);
+
+	np = dev->of_node;
+
+	/*
+	 * When the consumer DT node does not have the "interconnects" property,
+	 * return a NULL path to skip setting constraints.
+	 */
+	if (!of_find_property(np, "interconnects", NULL))
+		return NULL;
+
+	/*
+	 * We use a combination of phandle and specifier for endpoint. For now
+	 * let's support only global ids and extend this in the future if needed
+	 * without breaking DT compatibility.
+	 */
+	if (name) {
+		idx = of_property_match_string(np, "interconnect-names", name);
+		if (idx < 0)
+			return ERR_PTR(idx);
+	}
+
+	ret = of_parse_phandle_with_args(np, "interconnects",
+					 "#interconnect-cells", idx * 2,
+					 &src_args);
+	if (ret)
+		return ERR_PTR(ret);
+
+	of_node_put(src_args.np);
+
+	if (!src_args.args_count || src_args.args_count > 1)
+		return ERR_PTR(-EINVAL);
+
+	ret = of_parse_phandle_with_args(np, "interconnects",
+					 "#interconnect-cells", idx * 2 + 1,
+					 &dst_args);
+	if (ret)
+		return ERR_PTR(ret);
+
+	of_node_put(dst_args.np);
+
+	if (!dst_args.args_count || dst_args.args_count > 1)
+		return ERR_PTR(-EINVAL);
+
+	src_node = of_icc_get_from_provider(&src_args);
+
+	if (IS_ERR(src_node)) {
+		if (PTR_ERR(src_node) != -EPROBE_DEFER)
+			dev_err(dev, "error finding src node: %ld\n",
+				PTR_ERR(src_node));
+		return ERR_CAST(src_node);
+	}
+
+	dst_node = of_icc_get_from_provider(&dst_args);
+
+	if (IS_ERR(dst_node)) {
+		if (PTR_ERR(dst_node) != -EPROBE_DEFER)
+			dev_err(dev, "error finding dst node: %ld\n",
+				PTR_ERR(dst_node));
+		return ERR_CAST(dst_node);
+	}
+
+	mutex_lock(&icc_lock);
+	path = path_find(dev, src_node, dst_node);
+	if (IS_ERR(path))
+		dev_err(dev, "%s: invalid path=%ld\n", __func__, PTR_ERR(path));
+	mutex_unlock(&icc_lock);
+
+	return path;
+}
+EXPORT_SYMBOL_GPL(of_icc_get);
+
 /**
  * icc_set() - set constraints on an interconnect path between two endpoints
  * @path: reference to the path returned by icc_get()
@@ -521,6 +675,8 @@ int icc_provider_add(struct icc_provider *provider)
 {
 	if (WARN_ON(!provider->set))
 		return -EINVAL;
+	if (WARN_ON(!provider->xlate))
+		return -EINVAL;
 
 	mutex_lock(&icc_lock);
 
diff --git a/include/linux/interconnect-provider.h b/include/linux/interconnect-provider.h
index 78208a754181..63caccadc2db 100644
--- a/include/linux/interconnect-provider.h
+++ b/include/linux/interconnect-provider.h
@@ -12,6 +12,21 @@
 #define icc_units_to_bps(bw)  ((bw) * 1000ULL)
 
 struct icc_node;
+struct of_phandle_args;
+
+/**
+ * struct icc_onecell_data - driver data for onecell interconnect providers
+ *
+ * @num_nodes: number of nodes in this device
+ * @nodes: array of pointers to the nodes in this device
+ */
+struct icc_onecell_data {
+	unsigned int num_nodes;
+	struct icc_node *nodes[];
+};
+
+struct icc_node *of_icc_xlate_onecell(struct of_phandle_args *spec,
+				      void *data);
 
 /**
  * struct icc_provider - interconnect provider (controller) entity that might
@@ -21,6 +36,7 @@ struct icc_node;
  * @nodes: internal list of the interconnect provider nodes
  * @set: pointer to device specific set operation function
  * @aggregate: pointer to device specific aggregate operation function
+ * @xlate: provider-specific callback for mapping nodes from phandle arguments
  * @dev: the device this interconnect provider belongs to
  * @users: count of active users
  * @data: pointer to private data
@@ -31,6 +47,7 @@ struct icc_provider {
 	int (*set)(struct icc_node *src, struct icc_node *dst);
 	int (*aggregate)(struct icc_node *node, u32 avg_bw, u32 peak_bw,
 			 u32 *agg_avg, u32 *agg_peak);
+	struct icc_node* (*xlate)(struct of_phandle_args *spec, void *data);
 	struct device		*dev;
 	int			users;
 	void			*data;
diff --git a/include/linux/interconnect.h b/include/linux/interconnect.h
index 04b2966ded9f..41f7ecc2f20f 100644
--- a/include/linux/interconnect.h
+++ b/include/linux/interconnect.h
@@ -26,6 +26,7 @@ struct device;
 
 struct icc_path *icc_get(struct device *dev, const int src_id,
 			 const int dst_id);
+struct icc_path *of_icc_get(struct device *dev, const char *name);
 void icc_put(struct icc_path *path);
 int icc_set(struct icc_path *path, u32 avg_bw, u32 peak_bw);
 
@@ -37,6 +38,12 @@ static inline struct icc_path *icc_get(struct device *dev, const int src_id,
 	return NULL;
 }
 
+static inline struct icc_path *of_icc_get(struct device *dev,
+					  const char *name)
+{
+	return NULL;
+}
+
 static inline void icc_put(struct icc_path *path)
 {
 }


* [PATCH v10 4/7] interconnect: Add debugfs support
  2018-11-27 18:03 [PATCH v10 0/8] Introduce on-chip interconnect API Georgi Djakov
                   ` (2 preceding siblings ...)
  2018-11-27 18:03 ` [PATCH v10 3/7] interconnect: Allow endpoints translation via DT Georgi Djakov
@ 2018-11-27 18:03 ` Georgi Djakov
  2018-11-27 18:03 ` [PATCH v10 5/7] interconnect: qcom: Add sdm845 interconnect provider driver Georgi Djakov
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 34+ messages in thread
From: Georgi Djakov @ 2018-11-27 18:03 UTC (permalink / raw)
  To: linux-pm
  Cc: gregkh, rjw, robh+dt, mturquette, khilman, vincent.guittot,
	skannan, bjorn.andersson, amit.kucheria, seansw, daidavid1,
	evgreen, mark.rutland, lorenzo.pieralisi, abailon, maxime.ripard,
	arnd, thierry.reding, ksitaraman, sanjayc, devicetree,
	linux-kernel, linux-arm-kernel, linux-arm-msm, linux-tegra,
	georgi.djakov

Add functionality to provide information via debugfs about the current
constraints for each node and provider.
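
For illustration, the summary file is expected to produce output roughly of
the following form (node names and numbers are made up; the exact columns
follow the seq_printf() format strings in the patch):

 node                                   avg         peak
--------------------------------------------------------
foo_noc_master                        76800       153600
    1d84000.foodev                    76800       153600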

Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
Reviewed-by: Evan Green <evgreen@chromium.org>
---
 drivers/interconnect/core.c | 71 +++++++++++++++++++++++++++++++++++++
 1 file changed, 71 insertions(+)

diff --git a/drivers/interconnect/core.c b/drivers/interconnect/core.c
index ebc42ef9fa46..efe39e8934cc 100644
--- a/drivers/interconnect/core.c
+++ b/drivers/interconnect/core.c
@@ -6,6 +6,7 @@
  * Author: Georgi Djakov <georgi.djakov@linaro.org>
  */
 
+#include <linux/debugfs.h>
 #include <linux/device.h>
 #include <linux/idr.h>
 #include <linux/init.h>
@@ -17,10 +18,12 @@
 #include <linux/slab.h>
 #include <linux/of.h>
 #include <linux/overflow.h>
+#include <linux/uaccess.h>
 
 static DEFINE_IDR(icc_idr);
 static LIST_HEAD(icc_providers);
 static DEFINE_MUTEX(icc_lock);
+static struct dentry *icc_debugfs_dir;
 
 /**
  * struct icc_req - constraints that are attached to each node
@@ -48,6 +51,59 @@ struct icc_path {
 	struct icc_req reqs[];
 };
 
+static void icc_summary_show_one(struct seq_file *s, struct icc_node *n)
+{
+	if (!n)
+		return;
+
+	seq_printf(s, "%-30s %12u %12u\n",
+		   n->name, n->avg_bw, n->peak_bw);
+}
+
+static int icc_summary_show(struct seq_file *s, void *data)
+{
+	struct icc_provider *provider;
+
+	seq_puts(s, " node                                   avg         peak\n");
+	seq_puts(s, "--------------------------------------------------------\n");
+
+	mutex_lock(&icc_lock);
+
+	list_for_each_entry(provider, &icc_providers, provider_list) {
+		struct icc_node *n;
+
+		list_for_each_entry(n, &provider->nodes, node_list) {
+			struct icc_req *r;
+
+			icc_summary_show_one(s, n);
+			hlist_for_each_entry(r, &n->req_list, req_node) {
+				if (!r->dev)
+					continue;
+
+				seq_printf(s, "    %-26s %12u %12u\n",
+					   dev_name(r->dev), r->avg_bw,
+					   r->peak_bw);
+			}
+		}
+	}
+
+	mutex_unlock(&icc_lock);
+
+	return 0;
+}
+
+static int icc_summary_open(struct inode *inode, struct file *file)
+{
+	return single_open(file, icc_summary_show, inode->i_private);
+}
+
+static const struct file_operations icc_summary_fops = {
+	.open		= icc_summary_open,
+	.read		= seq_read,
+	.llseek		= seq_lseek,
+	.release	= single_release,
+};
+
 static struct icc_node *node_find(const int id)
 {
 	return idr_find(&icc_idr, id);
@@ -720,6 +776,21 @@ int icc_provider_del(struct icc_provider *provider)
 }
 EXPORT_SYMBOL_GPL(icc_provider_del);
 
+static int __init icc_init(void)
+{
+	icc_debugfs_dir = debugfs_create_dir("interconnect", NULL);
+	debugfs_create_file("interconnect_summary", 0444,
+			    icc_debugfs_dir, NULL, &icc_summary_fops);
+	return 0;
+}
+
+static void __exit icc_exit(void)
+{
+	debugfs_remove_recursive(icc_debugfs_dir);
+}
+module_init(icc_init);
+module_exit(icc_exit);
+
 MODULE_AUTHOR("Georgi Djakov <georgi.djakov@linaro.org");
 MODULE_DESCRIPTION("Interconnect Driver Core");
 MODULE_LICENSE("GPL v2");

^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH v10 5/7] interconnect: qcom: Add sdm845 interconnect provider driver
  2018-11-27 18:03 [PATCH v10 0/8] Introduce on-chip interconnect API Georgi Djakov
                   ` (3 preceding siblings ...)
  2018-11-27 18:03 ` [PATCH v10 4/7] interconnect: Add debugfs support Georgi Djakov
@ 2018-11-27 18:03 ` Georgi Djakov
  2018-12-01  0:39   ` Evan Green
  2018-11-27 18:03 ` [PATCH v10 6/7] arm64: dts: sdm845: Add interconnect provider DT nodes Georgi Djakov
                   ` (2 subsequent siblings)
  7 siblings, 1 reply; 34+ messages in thread
From: Georgi Djakov @ 2018-11-27 18:03 UTC (permalink / raw)
  To: linux-pm
  Cc: gregkh, rjw, robh+dt, mturquette, khilman, vincent.guittot,
	skannan, bjorn.andersson, amit.kucheria, seansw, daidavid1,
	evgreen, mark.rutland, lorenzo.pieralisi, abailon, maxime.ripard,
	arnd, thierry.reding, ksitaraman, sanjayc, devicetree,
	linux-kernel, linux-arm-kernel, linux-arm-msm, linux-tegra,
	georgi.djakov

From: David Dai <daidavid1@codeaurora.org>

Introduce a Qualcomm SDM845-specific provider driver that uses the
interconnect framework.
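
For context, a minimal consumer sketch (hypothetical device and bandwidth
values; it assumes the of_icc_get()/icc_set()/icc_put() API and the
bandwidth macros from the earlier patches in this series) could look like:

#include <linux/device.h>
#include <linux/err.h>
#include <linux/interconnect.h>

/* Hypothetical consumer helper, not part of this patch. */
static struct icc_path *foo_request_ddr_bandwidth(struct device *dev)
{
	struct icc_path *path;
	int ret;

	/* Look up a path by name ("dma-mem" is a made-up entry) from the consumer's DT node. */
	path = of_icc_get(dev, "dma-mem");
	if (IS_ERR(path))
		return path;

	/* Vote 100 MB/s average and 200 MB/s peak bandwidth on every hop of the path. */
	ret = icc_set(path, MBps_to_icc(100), MBps_to_icc(200));
	if (ret) {
		icc_put(path);
		return ERR_PTR(ret);
	}

	return path;	/* released with icc_put() when no longer needed */
}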

Signed-off-by: David Dai <daidavid1@codeaurora.org>
Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
---
 .../bindings/interconnect/qcom,sdm845.txt     |  24 +
 drivers/interconnect/Kconfig                  |   5 +
 drivers/interconnect/Makefile                 |   1 +
 drivers/interconnect/qcom/Kconfig             |  13 +
 drivers/interconnect/qcom/Makefile            |   5 +
 drivers/interconnect/qcom/sdm845.c            | 836 ++++++++++++++++++
 .../dt-bindings/interconnect/qcom,sdm845.h    | 143 +++
 7 files changed, 1027 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/interconnect/qcom,sdm845.txt
 create mode 100644 drivers/interconnect/qcom/Kconfig
 create mode 100644 drivers/interconnect/qcom/Makefile
 create mode 100644 drivers/interconnect/qcom/sdm845.c
 create mode 100644 include/dt-bindings/interconnect/qcom,sdm845.h

diff --git a/Documentation/devicetree/bindings/interconnect/qcom,sdm845.txt b/Documentation/devicetree/bindings/interconnect/qcom,sdm845.txt
new file mode 100644
index 000000000000..d45150e99665
--- /dev/null
+++ b/Documentation/devicetree/bindings/interconnect/qcom,sdm845.txt
@@ -0,0 +1,24 @@
+Qualcomm SDM845 Network-On-Chip interconnect driver binding
+-----------------------------------------------------------
+
+SDM845 interconnect providers support system bandwidth requirements through
+RPMh hardware accelerators known as Bus Clock Manager (BCM). The provider is
+able to communicate with the BCM through the Resource State Coordinator (RSC)
+associated with each execution environment. Provider nodes must reside within
+an RPMh device node pertaining to their RSC and each provider maps to
+a single RPMh resource.
+
+Required properties :
+- compatible : shall contain only one of the following:
+			"qcom,sdm845-rsc-hlos"
+- #interconnect-cells : should contain 1
+
+Examples:
+
+apps_rsc: rsc {
+		qnoc: qnoc-rsc-hlos {
+			compatible = "qcom,sdm845-rsc-hlos";
+			#interconnect-cells = <1>;
+		};
+};
+
diff --git a/drivers/interconnect/Kconfig b/drivers/interconnect/Kconfig
index a261c7d41deb..07a8276fa35a 100644
--- a/drivers/interconnect/Kconfig
+++ b/drivers/interconnect/Kconfig
@@ -8,3 +8,8 @@ menuconfig INTERCONNECT
 
 	  If unsure, say no.
 
+if INTERCONNECT
+
+source "drivers/interconnect/qcom/Kconfig"
+
+endif
diff --git a/drivers/interconnect/Makefile b/drivers/interconnect/Makefile
index 7a01f33b5593..28f2ab0824d5 100644
--- a/drivers/interconnect/Makefile
+++ b/drivers/interconnect/Makefile
@@ -3,3 +3,4 @@
 icc-core-objs				:= core.o
 
 obj-$(CONFIG_INTERCONNECT)		+= icc-core.o
+obj-$(CONFIG_INTERCONNECT_QCOM)		+= qcom/
diff --git a/drivers/interconnect/qcom/Kconfig b/drivers/interconnect/qcom/Kconfig
new file mode 100644
index 000000000000..290d330abe5a
--- /dev/null
+++ b/drivers/interconnect/qcom/Kconfig
@@ -0,0 +1,13 @@
+config INTERCONNECT_QCOM
+	bool "Qualcomm Network-on-Chip interconnect drivers"
+	depends on ARCH_QCOM
+	help
+	  Support for Qualcomm's Network-on-Chip interconnect hardware.
+
+config INTERCONNECT_QCOM_SDM845
+	tristate "Qualcomm SDM845 interconnect driver"
+	depends on INTERCONNECT_QCOM
+	depends on (QCOM_RPMH && QCOM_COMMAND_DB && OF) || COMPILE_TEST
+	help
+	  This is a driver for the Qualcomm Network-on-Chip on sdm845-based
+	  platforms.
diff --git a/drivers/interconnect/qcom/Makefile b/drivers/interconnect/qcom/Makefile
new file mode 100644
index 000000000000..1c1cea690f92
--- /dev/null
+++ b/drivers/interconnect/qcom/Makefile
@@ -0,0 +1,5 @@
+# SPDX-License-Identifier: GPL-2.0
+
+qnoc-sdm845-objs			:= sdm845.o
+
+obj-$(CONFIG_INTERCONNECT_QCOM_SDM845) += qnoc-sdm845.o
diff --git a/drivers/interconnect/qcom/sdm845.c b/drivers/interconnect/qcom/sdm845.c
new file mode 100644
index 000000000000..1678de91ca52
--- /dev/null
+++ b/drivers/interconnect/qcom/sdm845.c
@@ -0,0 +1,836 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2018, The Linux Foundation. All rights reserved.
+ *
+ */
+
+#include <asm/div64.h>
+#include <dt-bindings/interconnect/qcom,sdm845.h>
+#include <linux/device.h>
+#include <linux/interconnect.h>
+#include <linux/interconnect-provider.h>
+#include <linux/io.h>
+#include <linux/module.h>
+#include <linux/of_device.h>
+#include <linux/of_platform.h>
+#include <linux/platform_device.h>
+#include <linux/sort.h>
+
+#include <soc/qcom/cmd-db.h>
+#include <soc/qcom/rpmh.h>
+#include <soc/qcom/tcs.h>
+
+#define BCM_TCS_CMD_COMMIT_SHFT		30
+#define BCM_TCS_CMD_COMMIT_MASK		0x40000000
+#define BCM_TCS_CMD_VALID_SHFT		29
+#define BCM_TCS_CMD_VALID_MASK		0x20000000
+#define BCM_TCS_CMD_VOTE_X_SHFT		14
+#define BCM_TCS_CMD_VOTE_MASK		0x3fff
+#define BCM_TCS_CMD_VOTE_Y_SHFT		0
+#define BCM_TCS_CMD_VOTE_Y_MASK		0xfffc000
+
+#define BCM_TCS_CMD(commit, valid, vote_x, vote_y) \
+	(((commit) << BCM_TCS_CMD_COMMIT_SHFT) |\
+	((valid) << BCM_TCS_CMD_VALID_SHFT) |\
+	((cpu_to_le32(vote_x) &\
+	BCM_TCS_CMD_VOTE_MASK) << BCM_TCS_CMD_VOTE_X_SHFT) |\
+	((cpu_to_le32(vote_y) &\
+	BCM_TCS_CMD_VOTE_MASK) << BCM_TCS_CMD_VOTE_Y_SHFT))
+
+#define to_qcom_provider(_provider) \
+	container_of(_provider, struct qcom_icc_provider, provider)
+
+struct qcom_icc_provider {
+	struct icc_provider provider;
+	struct device *dev;
+	struct qcom_icc_bcm **bcms;
+	size_t num_bcms;
+};
+
+/**
+ * struct bcm_db - Auxiliary data pertaining to each Bus Clock Manager(BCM)
+ * @unit: divisor used to convert bytes/sec bw value to an RPMh msg
+ * @width: multiplier used to convert bytes/sec bw value to an RPMh msg
+ * @vcd: virtual clock domain that this bcm belongs to
+ * @reserved: reserved field
+ */
+struct bcm_db {
+	u32 unit;
+	u16 width;
+	u8 vcd;
+	u8 reserved;
+};
+
+#define SDM845_MAX_LINKS	43
+#define SDM845_MAX_BCMS		30
+#define SDM845_MAX_BCM_PER_NODE	2
+#define SDM845_MAX_VCD		10
+
+/**
+ * struct qcom_icc_node - Qualcomm specific interconnect nodes
+ * @name: the node name used in debugfs
+ * @links: an array of nodes where we can go next while traversing
+ * @id: a unique node identifier
+ * @num_links: the total number of @links
+ * @channels: num of channels at this node
+ * @buswidth: width of the interconnect between a node and the bus
+ * @sum_avg: current sum aggregate value of all avg bw requests
+ * @max_peak: current max aggregate value of all peak bw requests
+ * @bcms: list of bcms associated with this logical node
+ * @num_bcms: num of @bcms
+ */
+struct qcom_icc_node {
+	const char *name;
+	u16 links[SDM845_MAX_LINKS];
+	u16 id;
+	u16 num_links;
+	u16 channels;
+	u16 buswidth;
+	u64 sum_avg;
+	u64 max_peak;
+	struct qcom_icc_bcm *bcms[SDM845_MAX_BCM_PER_NODE];
+	size_t num_bcms;
+};
+
+/**
+ * struct qcom_icc_bcm - Qualcomm specific hardware accelerator nodes
+ * known as Bus Clock Manager(BCM)
+ * @name: the bcm node name used to fetch BCM data from command db
+ * @type: latency or bandwidth bcm
+ * @addr: address offsets used when voting to RPMH
+ * @vote_x: aggregated threshold values, represents sum_bw when @type is bw bcm
+ * @vote_y: aggregated threshold values, represents peak_bw when @type is bw bcm
+ * @dirty: flag used to indicate whether the bcm needs to be committed
+ * @keepalive: flag used to indicate whether a keepalive is required
+ * @aux_data: auxiliary data used when calculating threshold values and
+ * communicating with RPMh
+ * @list: used to link to other bcms when compiling lists for commit
+ * @num_nodes: total number of @nodes
+ * @nodes: list of qcom_icc_nodes that this BCM encapsulates
+ */
+struct qcom_icc_bcm {
+	const char *name;
+	u32 type;
+	u32 addr;
+	u64 vote_x;
+	u64 vote_y;
+	bool dirty;
+	bool keepalive;
+	struct bcm_db aux_data;
+	struct list_head list;
+	size_t num_nodes;
+	struct qcom_icc_node *nodes[];
+};
+
+struct qcom_icc_fabric {
+	struct qcom_icc_node **nodes;
+	size_t num_nodes;
+};
+
+struct qcom_icc_desc {
+	struct qcom_icc_node **nodes;
+	size_t num_nodes;
+	struct qcom_icc_bcm **bcms;
+	size_t num_bcms;
+};
+
+#define DEFINE_QNODE(_name, _id, _channels, _buswidth,			\
+			_numlinks, ...)					\
+		static struct qcom_icc_node _name = {			\
+		.id = _id,						\
+		.name = #_name,						\
+		.channels = _channels,					\
+		.buswidth = _buswidth,					\
+		.num_links = _numlinks,					\
+		.links = { __VA_ARGS__ },				\
+	}
+
+DEFINE_QNODE(qhm_a1noc_cfg, MASTER_A1NOC_CFG, 1, 4, 1, SLAVE_SERVICE_A1NOC);
+DEFINE_QNODE(qhm_qup1, MASTER_BLSP_1, 1, 4, 1, SLAVE_A1NOC_SNOC);
+DEFINE_QNODE(qhm_tsif, MASTER_TSIF, 1, 4, 1, SLAVE_A1NOC_SNOC);
+DEFINE_QNODE(xm_sdc2, MASTER_SDCC_2, 1, 8, 1, SLAVE_A1NOC_SNOC);
+DEFINE_QNODE(xm_sdc4, MASTER_SDCC_4, 1, 8, 1, SLAVE_A1NOC_SNOC);
+DEFINE_QNODE(xm_ufs_card, MASTER_UFS_CARD, 1, 8, 1, SLAVE_A1NOC_SNOC);
+DEFINE_QNODE(xm_ufs_mem, MASTER_UFS_MEM, 1, 8, 1, SLAVE_A1NOC_SNOC);
+DEFINE_QNODE(xm_pcie_0, MASTER_PCIE_0, 1, 8, 1, SLAVE_ANOC_PCIE_A1NOC_SNOC);
+DEFINE_QNODE(qhm_a2noc_cfg, MASTER_A2NOC_CFG, 1, 4, 1, SLAVE_SERVICE_A2NOC);
+DEFINE_QNODE(qhm_qdss_bam, MASTER_QDSS_BAM, 1, 4, 1, SLAVE_A2NOC_SNOC);
+DEFINE_QNODE(qhm_qup2, MASTER_BLSP_2, 1, 4, 1, SLAVE_A2NOC_SNOC);
+DEFINE_QNODE(qnm_cnoc, MASTER_CNOC_A2NOC, 1, 8, 1, SLAVE_A2NOC_SNOC);
+DEFINE_QNODE(qxm_crypto, MASTER_CRYPTO, 1, 8, 1, SLAVE_A2NOC_SNOC);
+DEFINE_QNODE(qxm_ipa, MASTER_IPA, 1, 8, 1, SLAVE_A2NOC_SNOC);
+DEFINE_QNODE(xm_pcie3_1, MASTER_PCIE_1, 1, 8, 1, SLAVE_ANOC_PCIE_SNOC);
+DEFINE_QNODE(xm_qdss_etr, MASTER_QDSS_ETR, 1, 8, 1, SLAVE_A2NOC_SNOC);
+DEFINE_QNODE(xm_usb3_0, MASTER_USB3_0, 1, 8, 1, SLAVE_A2NOC_SNOC);
+DEFINE_QNODE(xm_usb3_1, MASTER_USB3_1, 1, 8, 1, SLAVE_A2NOC_SNOC);
+DEFINE_QNODE(qxm_camnoc_hf0_uncomp, MASTER_CAMNOC_HF0_UNCOMP, 1, 32, 1, SLAVE_CAMNOC_UNCOMP);
+DEFINE_QNODE(qxm_camnoc_hf1_uncomp, MASTER_CAMNOC_HF1_UNCOMP, 1, 32, 1, SLAVE_CAMNOC_UNCOMP);
+DEFINE_QNODE(qxm_camnoc_sf_uncomp, MASTER_CAMNOC_SF_UNCOMP, 1, 32, 1, SLAVE_CAMNOC_UNCOMP);
+DEFINE_QNODE(qhm_spdm, MASTER_SPDM, 1, 4, 1, SLAVE_CNOC_A2NOC);
+DEFINE_QNODE(qhm_tic, MASTER_TIC, 1, 4, 43, SLAVE_A1NOC_CFG, SLAVE_A2NOC_CFG, SLAVE_AOP, SLAVE_AOSS, SLAVE_CAMERA_CFG, SLAVE_CLK_CTL, SLAVE_CDSP_CFG, SLAVE_RBCPR_CX_CFG, SLAVE_CRYPTO_0_CFG, SLAVE_DCC_CFG, SLAVE_CNOC_DDRSS, SLAVE_DISPLAY_CFG, SLAVE_GLM, SLAVE_GFX3D_CFG, SLAVE_IMEM_CFG, SLAVE_IPA_CFG, SLAVE_CNOC_MNOC_CFG, SLAVE_PCIE_0_CFG, SLAVE_PCIE_1_CFG, SLAVE_PDM, SLAVE_SOUTH_PHY_CFG, SLAVE_PIMEM_CFG, SLAVE_PRNG, SLAVE_QDSS_CFG, SLAVE_BLSP_2, SLAVE_BLSP_1, SLAVE_SDCC_2, SLAVE_SDCC_4, SLAVE_SNOC_CFG, SLAVE_SPDM_WRAPPER, SLAVE_SPSS_CFG, SLAVE_TCSR, SLAVE_TLMM_NORTH, SLAVE_TLMM_SOUTH, SLAVE_TSIF, SLAVE_UFS_CARD_CFG, SLAVE_UFS_MEM_CFG, SLAVE_USB3_0, SLAVE_USB3_1, SLAVE_VENUS_CFG, SLAVE_VSENSE_CTRL_CFG, SLAVE_CNOC_A2NOC, SLAVE_SERVICE_CNOC);
+DEFINE_QNODE(qnm_snoc, MASTER_SNOC_CNOC, 1, 8, 42, SLAVE_A1NOC_CFG, SLAVE_A2NOC_CFG, SLAVE_AOP, SLAVE_AOSS, SLAVE_CAMERA_CFG, SLAVE_CLK_CTL, SLAVE_CDSP_CFG, SLAVE_RBCPR_CX_CFG, SLAVE_CRYPTO_0_CFG, SLAVE_DCC_CFG, SLAVE_CNOC_DDRSS, SLAVE_DISPLAY_CFG, SLAVE_GLM, SLAVE_GFX3D_CFG, SLAVE_IMEM_CFG, SLAVE_IPA_CFG, SLAVE_CNOC_MNOC_CFG, SLAVE_PCIE_0_CFG, SLAVE_PCIE_1_CFG, SLAVE_PDM, SLAVE_SOUTH_PHY_CFG, SLAVE_PIMEM_CFG, SLAVE_PRNG, SLAVE_QDSS_CFG, SLAVE_BLSP_2, SLAVE_BLSP_1, SLAVE_SDCC_2, SLAVE_SDCC_4, SLAVE_SNOC_CFG, SLAVE_SPDM_WRAPPER, SLAVE_SPSS_CFG, SLAVE_TCSR, SLAVE_TLMM_NORTH, SLAVE_TLMM_SOUTH, SLAVE_TSIF, SLAVE_UFS_CARD_CFG, SLAVE_UFS_MEM_CFG, SLAVE_USB3_0, SLAVE_USB3_1, SLAVE_VENUS_CFG, SLAVE_VSENSE_CTRL_CFG, SLAVE_SERVICE_CNOC);
+DEFINE_QNODE(xm_qdss_dap, MASTER_QDSS_DAP, 1, 8, 43, SLAVE_A1NOC_CFG, SLAVE_A2NOC_CFG, SLAVE_AOP, SLAVE_AOSS, SLAVE_CAMERA_CFG, SLAVE_CLK_CTL, SLAVE_CDSP_CFG, SLAVE_RBCPR_CX_CFG, SLAVE_CRYPTO_0_CFG, SLAVE_DCC_CFG, SLAVE_CNOC_DDRSS, SLAVE_DISPLAY_CFG, SLAVE_GLM, SLAVE_GFX3D_CFG, SLAVE_IMEM_CFG, SLAVE_IPA_CFG, SLAVE_CNOC_MNOC_CFG, SLAVE_PCIE_0_CFG, SLAVE_PCIE_1_CFG, SLAVE_PDM, SLAVE_SOUTH_PHY_CFG, SLAVE_PIMEM_CFG, SLAVE_PRNG, SLAVE_QDSS_CFG, SLAVE_BLSP_2, SLAVE_BLSP_1, SLAVE_SDCC_2, SLAVE_SDCC_4, SLAVE_SNOC_CFG, SLAVE_SPDM_WRAPPER, SLAVE_SPSS_CFG, SLAVE_TCSR, SLAVE_TLMM_NORTH, SLAVE_TLMM_SOUTH, SLAVE_TSIF, SLAVE_UFS_CARD_CFG, SLAVE_UFS_MEM_CFG, SLAVE_USB3_0, SLAVE_USB3_1, SLAVE_VENUS_CFG, SLAVE_VSENSE_CTRL_CFG, SLAVE_CNOC_A2NOC, SLAVE_SERVICE_CNOC);
+DEFINE_QNODE(qhm_cnoc, MASTER_CNOC_DC_NOC, 1, 4, 2, SLAVE_LLCC_CFG, SLAVE_MEM_NOC_CFG);
+DEFINE_QNODE(acm_l3, MASTER_APPSS_PROC, 1, 16, 3, SLAVE_GNOC_SNOC, SLAVE_GNOC_MEM_NOC, SLAVE_SERVICE_GNOC);
+DEFINE_QNODE(pm_gnoc_cfg, MASTER_GNOC_CFG, 1, 4, 1, SLAVE_SERVICE_GNOC);
+DEFINE_QNODE(llcc_mc, MASTER_LLCC, 4, 4, 1, SLAVE_EBI1);
+DEFINE_QNODE(acm_tcu, MASTER_TCU_0, 1, 8, 3, SLAVE_MEM_NOC_GNOC, SLAVE_LLCC, SLAVE_MEM_NOC_SNOC);
+DEFINE_QNODE(qhm_memnoc_cfg, MASTER_MEM_NOC_CFG, 1, 4, 2, SLAVE_MSS_PROC_MS_MPU_CFG, SLAVE_SERVICE_MEM_NOC);
+DEFINE_QNODE(qnm_apps, MASTER_GNOC_MEM_NOC, 2, 32, 1, SLAVE_LLCC);
+DEFINE_QNODE(qnm_mnoc_hf, MASTER_MNOC_HF_MEM_NOC, 2, 32, 2, SLAVE_MEM_NOC_GNOC, SLAVE_LLCC);
+DEFINE_QNODE(qnm_mnoc_sf, MASTER_MNOC_SF_MEM_NOC, 1, 32, 3, SLAVE_MEM_NOC_GNOC, SLAVE_LLCC, SLAVE_MEM_NOC_SNOC);
+DEFINE_QNODE(qnm_snoc_gc, MASTER_SNOC_GC_MEM_NOC, 1, 8, 1, SLAVE_LLCC);
+DEFINE_QNODE(qnm_snoc_sf, MASTER_SNOC_SF_MEM_NOC, 1, 16, 2, SLAVE_MEM_NOC_GNOC, SLAVE_LLCC);
+DEFINE_QNODE(qxm_gpu, MASTER_GFX3D, 2, 32, 3, SLAVE_MEM_NOC_GNOC, SLAVE_LLCC, SLAVE_MEM_NOC_SNOC);
+DEFINE_QNODE(qhm_mnoc_cfg, MASTER_CNOC_MNOC_CFG, 1, 4, 1, SLAVE_SERVICE_MNOC);
+DEFINE_QNODE(qxm_camnoc_hf0, MASTER_CAMNOC_HF0, 1, 32, 1, SLAVE_MNOC_HF_MEM_NOC);
+DEFINE_QNODE(qxm_camnoc_hf1, MASTER_CAMNOC_HF1, 1, 32, 1, SLAVE_MNOC_HF_MEM_NOC);
+DEFINE_QNODE(qxm_camnoc_sf, MASTER_CAMNOC_SF, 1, 32, 1, SLAVE_MNOC_SF_MEM_NOC);
+DEFINE_QNODE(qxm_mdp0, MASTER_MDP0, 1, 32, 1, SLAVE_MNOC_HF_MEM_NOC);
+DEFINE_QNODE(qxm_mdp1, MASTER_MDP1, 1, 32, 1, SLAVE_MNOC_HF_MEM_NOC);
+DEFINE_QNODE(qxm_rot, MASTER_ROTATOR, 1, 32, 1, SLAVE_MNOC_SF_MEM_NOC);
+DEFINE_QNODE(qxm_venus0, MASTER_VIDEO_P0, 1, 32, 1, SLAVE_MNOC_SF_MEM_NOC);
+DEFINE_QNODE(qxm_venus1, MASTER_VIDEO_P1, 1, 32, 1, SLAVE_MNOC_SF_MEM_NOC);
+DEFINE_QNODE(qxm_venus_arm9, MASTER_VIDEO_PROC, 1, 8, 1, SLAVE_MNOC_SF_MEM_NOC);
+DEFINE_QNODE(qhm_snoc_cfg, MASTER_SNOC_CFG, 1, 4, 1, SLAVE_SERVICE_SNOC);
+DEFINE_QNODE(qnm_aggre1_noc, MASTER_A1NOC_SNOC, 1, 16, 6, SLAVE_APPSS, SLAVE_SNOC_CNOC, SLAVE_SNOC_MEM_NOC_SF, SLAVE_IMEM, SLAVE_PIMEM, SLAVE_QDSS_STM);
+DEFINE_QNODE(qnm_aggre2_noc, MASTER_A2NOC_SNOC, 1, 16, 9, SLAVE_APPSS, SLAVE_SNOC_CNOC, SLAVE_SNOC_MEM_NOC_SF, SLAVE_IMEM, SLAVE_PCIE_0, SLAVE_PCIE_1, SLAVE_PIMEM, SLAVE_QDSS_STM, SLAVE_TCU);
+DEFINE_QNODE(qnm_gladiator_sodv, MASTER_GNOC_SNOC, 1, 8, 8, SLAVE_APPSS, SLAVE_SNOC_CNOC, SLAVE_IMEM, SLAVE_PCIE_0, SLAVE_PCIE_1, SLAVE_PIMEM, SLAVE_QDSS_STM, SLAVE_TCU);
+DEFINE_QNODE(qnm_memnoc, MASTER_MEM_NOC_SNOC, 1, 8, 5, SLAVE_APPSS, SLAVE_SNOC_CNOC, SLAVE_IMEM, SLAVE_PIMEM, SLAVE_QDSS_STM);
+DEFINE_QNODE(qnm_pcie_anoc, MASTER_ANOC_PCIE_SNOC, 1, 16, 5, SLAVE_APPSS, SLAVE_SNOC_CNOC, SLAVE_SNOC_MEM_NOC_SF, SLAVE_IMEM, SLAVE_QDSS_STM);
+DEFINE_QNODE(qxm_pimem, MASTER_PIMEM, 1, 8, 2, SLAVE_SNOC_MEM_NOC_GC, SLAVE_IMEM);
+DEFINE_QNODE(xm_gic, MASTER_GIC, 1, 8, 2, SLAVE_SNOC_MEM_NOC_GC, SLAVE_IMEM);
+DEFINE_QNODE(qns_a1noc_snoc, SLAVE_A1NOC_SNOC, 1, 16, 1, MASTER_A1NOC_SNOC);
+DEFINE_QNODE(srvc_aggre1_noc, SLAVE_SERVICE_A1NOC, 1, 4, 0);
+DEFINE_QNODE(qns_pcie_a1noc_snoc, SLAVE_ANOC_PCIE_A1NOC_SNOC, 1, 16, 1, MASTER_ANOC_PCIE_SNOC);
+DEFINE_QNODE(qns_a2noc_snoc, SLAVE_A2NOC_SNOC, 1, 16, 1, MASTER_A2NOC_SNOC);
+DEFINE_QNODE(qns_pcie_snoc, SLAVE_ANOC_PCIE_SNOC, 1, 16, 1, MASTER_ANOC_PCIE_SNOC);
+DEFINE_QNODE(srvc_aggre2_noc, SLAVE_SERVICE_A2NOC, 1, 4, 0);
+DEFINE_QNODE(qns_camnoc_uncomp, SLAVE_CAMNOC_UNCOMP, 1, 32, 0);
+DEFINE_QNODE(qhs_a1_noc_cfg, SLAVE_A1NOC_CFG, 1, 4, 1, MASTER_A1NOC_CFG);
+DEFINE_QNODE(qhs_a2_noc_cfg, SLAVE_A2NOC_CFG, 1, 4, 1, MASTER_A2NOC_CFG);
+DEFINE_QNODE(qhs_aop, SLAVE_AOP, 1, 4, 0);
+DEFINE_QNODE(qhs_aoss, SLAVE_AOSS, 1, 4, 0);
+DEFINE_QNODE(qhs_camera_cfg, SLAVE_CAMERA_CFG, 1, 4, 0);
+DEFINE_QNODE(qhs_clk_ctl, SLAVE_CLK_CTL, 1, 4, 0);
+DEFINE_QNODE(qhs_compute_dsp_cfg, SLAVE_CDSP_CFG, 1, 4, 0);
+DEFINE_QNODE(qhs_cpr_cx, SLAVE_RBCPR_CX_CFG, 1, 4, 0);
+DEFINE_QNODE(qhs_crypto0_cfg, SLAVE_CRYPTO_0_CFG, 1, 4, 0);
+DEFINE_QNODE(qhs_dcc_cfg, SLAVE_DCC_CFG, 1, 4, 1, MASTER_CNOC_DC_NOC);
+DEFINE_QNODE(qhs_ddrss_cfg, SLAVE_CNOC_DDRSS, 1, 4, 0);
+DEFINE_QNODE(qhs_display_cfg, SLAVE_DISPLAY_CFG, 1, 4, 0);
+DEFINE_QNODE(qhs_glm, SLAVE_GLM, 1, 4, 0);
+DEFINE_QNODE(qhs_gpuss_cfg, SLAVE_GFX3D_CFG, 1, 8, 0);
+DEFINE_QNODE(qhs_imem_cfg, SLAVE_IMEM_CFG, 1, 4, 0);
+DEFINE_QNODE(qhs_ipa, SLAVE_IPA_CFG, 1, 4, 0);
+DEFINE_QNODE(qhs_mnoc_cfg, SLAVE_CNOC_MNOC_CFG, 1, 4, 1, MASTER_CNOC_MNOC_CFG);
+DEFINE_QNODE(qhs_pcie0_cfg, SLAVE_PCIE_0_CFG, 1, 4, 0);
+DEFINE_QNODE(qhs_pcie_gen3_cfg, SLAVE_PCIE_1_CFG, 1, 4, 0);
+DEFINE_QNODE(qhs_pdm, SLAVE_PDM, 1, 4, 0);
+DEFINE_QNODE(qhs_phy_refgen_south, SLAVE_SOUTH_PHY_CFG, 1, 4, 0);
+DEFINE_QNODE(qhs_pimem_cfg, SLAVE_PIMEM_CFG, 1, 4, 0);
+DEFINE_QNODE(qhs_prng, SLAVE_PRNG, 1, 4, 0);
+DEFINE_QNODE(qhs_qdss_cfg, SLAVE_QDSS_CFG, 1, 4, 0);
+DEFINE_QNODE(qhs_qupv3_north, SLAVE_BLSP_2, 1, 4, 0);
+DEFINE_QNODE(qhs_qupv3_south, SLAVE_BLSP_1, 1, 4, 0);
+DEFINE_QNODE(qhs_sdc2, SLAVE_SDCC_2, 1, 4, 0);
+DEFINE_QNODE(qhs_sdc4, SLAVE_SDCC_4, 1, 4, 0);
+DEFINE_QNODE(qhs_snoc_cfg, SLAVE_SNOC_CFG, 1, 4, 1, MASTER_SNOC_CFG);
+DEFINE_QNODE(qhs_spdm, SLAVE_SPDM_WRAPPER, 1, 4, 0);
+DEFINE_QNODE(qhs_spss_cfg, SLAVE_SPSS_CFG, 1, 4, 0);
+DEFINE_QNODE(qhs_tcsr, SLAVE_TCSR, 1, 4, 0);
+DEFINE_QNODE(qhs_tlmm_north, SLAVE_TLMM_NORTH, 1, 4, 0);
+DEFINE_QNODE(qhs_tlmm_south, SLAVE_TLMM_SOUTH, 1, 4, 0);
+DEFINE_QNODE(qhs_tsif, SLAVE_TSIF, 1, 4, 0);
+DEFINE_QNODE(qhs_ufs_card_cfg, SLAVE_UFS_CARD_CFG, 1, 4, 0);
+DEFINE_QNODE(qhs_ufs_mem_cfg, SLAVE_UFS_MEM_CFG, 1, 4, 0);
+DEFINE_QNODE(qhs_usb3_0, SLAVE_USB3_0, 1, 4, 0);
+DEFINE_QNODE(qhs_usb3_1, SLAVE_USB3_1, 1, 4, 0);
+DEFINE_QNODE(qhs_venus_cfg, SLAVE_VENUS_CFG, 1, 4, 0);
+DEFINE_QNODE(qhs_vsense_ctrl_cfg, SLAVE_VSENSE_CTRL_CFG, 1, 4, 0);
+DEFINE_QNODE(qns_cnoc_a2noc, SLAVE_CNOC_A2NOC, 1, 8, 1, MASTER_CNOC_A2NOC);
+DEFINE_QNODE(srvc_cnoc, SLAVE_SERVICE_CNOC, 1, 4, 0);
+DEFINE_QNODE(qhs_llcc, SLAVE_LLCC_CFG, 1, 4, 0);
+DEFINE_QNODE(qhs_memnoc, SLAVE_MEM_NOC_CFG, 1, 4, 1, MASTER_MEM_NOC_CFG);
+DEFINE_QNODE(qns_gladiator_sodv, SLAVE_GNOC_SNOC, 1, 8, 1, MASTER_GNOC_SNOC);
+DEFINE_QNODE(qns_gnoc_memnoc, SLAVE_GNOC_MEM_NOC, 2, 32, 1, MASTER_GNOC_MEM_NOC);
+DEFINE_QNODE(srvc_gnoc, SLAVE_SERVICE_GNOC, 1, 4, 0);
+DEFINE_QNODE(ebi, SLAVE_EBI1, 4, 4, 0);
+DEFINE_QNODE(qhs_mdsp_ms_mpu_cfg, SLAVE_MSS_PROC_MS_MPU_CFG, 1, 4, 0);
+DEFINE_QNODE(qns_apps_io, SLAVE_MEM_NOC_GNOC, 1, 32, 0);
+DEFINE_QNODE(qns_llcc, SLAVE_LLCC, 4, 16, 1, MASTER_LLCC);
+DEFINE_QNODE(qns_memnoc_snoc, SLAVE_MEM_NOC_SNOC, 1, 8, 1, MASTER_MEM_NOC_SNOC);
+DEFINE_QNODE(srvc_memnoc, SLAVE_SERVICE_MEM_NOC, 1, 4, 0);
+DEFINE_QNODE(qns2_mem_noc, SLAVE_MNOC_SF_MEM_NOC, 1, 32, 1, MASTER_MNOC_SF_MEM_NOC);
+DEFINE_QNODE(qns_mem_noc_hf, SLAVE_MNOC_HF_MEM_NOC, 2, 32, 1, MASTER_MNOC_HF_MEM_NOC);
+DEFINE_QNODE(srvc_mnoc, SLAVE_SERVICE_MNOC, 1, 4, 0);
+DEFINE_QNODE(qhs_apss, SLAVE_APPSS, 1, 8, 0);
+DEFINE_QNODE(qns_cnoc, SLAVE_SNOC_CNOC, 1, 8, 1, MASTER_SNOC_CNOC);
+DEFINE_QNODE(qns_memnoc_gc, SLAVE_SNOC_MEM_NOC_GC, 1, 8, 1, MASTER_SNOC_GC_MEM_NOC);
+DEFINE_QNODE(qns_memnoc_sf, SLAVE_SNOC_MEM_NOC_SF, 1, 16, 1, MASTER_SNOC_SF_MEM_NOC);
+DEFINE_QNODE(qxs_imem, SLAVE_IMEM, 1, 8, 0);
+DEFINE_QNODE(qxs_pcie, SLAVE_PCIE_0, 1, 8, 0);
+DEFINE_QNODE(qxs_pcie_gen3, SLAVE_PCIE_1, 1, 8, 0);
+DEFINE_QNODE(qxs_pimem, SLAVE_PIMEM, 1, 8, 0);
+DEFINE_QNODE(srvc_snoc, SLAVE_SERVICE_SNOC, 1, 4, 0);
+DEFINE_QNODE(xs_qdss_stm, SLAVE_QDSS_STM, 1, 4, 0);
+DEFINE_QNODE(xs_sys_tcu_cfg, SLAVE_TCU, 1, 8, 0);
+
+#define DEFINE_QBCM(_name, _bcmname, _keepalive, _numnodes, ...)	\
+		static struct qcom_icc_bcm _name = {			\
+		.name = _bcmname,					\
+		.keepalive = _keepalive,				\
+		.num_nodes = _numnodes,					\
+		.nodes = { __VA_ARGS__ },				\
+	}
+
+DEFINE_QBCM(bcm_acv, "ACV", false, 1, &ebi);
+DEFINE_QBCM(bcm_mc0, "MC0", true, 1, &ebi);
+DEFINE_QBCM(bcm_sh0, "SH0", true, 1, &qns_llcc);
+DEFINE_QBCM(bcm_mm0, "MM0", false, 1, &qns_mem_noc_hf);
+DEFINE_QBCM(bcm_sh1, "SH1", false, 1, &qns_apps_io);
+DEFINE_QBCM(bcm_mm1, "MM1", false, 7, &qxm_camnoc_hf0_uncomp, &qxm_camnoc_hf1_uncomp, &qxm_camnoc_sf_uncomp, &qxm_camnoc_hf0, &qxm_camnoc_hf1, &qxm_mdp0, &qxm_mdp1);
+DEFINE_QBCM(bcm_sh2, "SH2", false, 1, &qns_memnoc_snoc);
+DEFINE_QBCM(bcm_mm2, "MM2", false, 1, &qns2_mem_noc);
+DEFINE_QBCM(bcm_sh3, "SH3", false, 1, &acm_tcu);
+DEFINE_QBCM(bcm_mm3, "MM3", false, 5, &qxm_camnoc_sf, &qxm_rot, &qxm_venus0, &qxm_venus1, &qxm_venus_arm9);
+DEFINE_QBCM(bcm_sh5, "SH5", false, 1, &qnm_apps);
+DEFINE_QBCM(bcm_sn0, "SN0", true, 1, &qns_memnoc_sf);
+DEFINE_QBCM(bcm_ce0, "CE0", false, 1, &qxm_crypto);
+DEFINE_QBCM(bcm_cn0, "CN0", false, 47, &qhm_spdm, &qhm_tic, &qnm_snoc, &xm_qdss_dap, &qhs_a1_noc_cfg, &qhs_a2_noc_cfg, &qhs_aop, &qhs_aoss, &qhs_camera_cfg, &qhs_clk_ctl, &qhs_compute_dsp_cfg, &qhs_cpr_cx, &qhs_crypto0_cfg, &qhs_dcc_cfg, &qhs_ddrss_cfg, &qhs_display_cfg, &qhs_glm, &qhs_gpuss_cfg, &qhs_imem_cfg, &qhs_ipa, &qhs_mnoc_cfg, &qhs_pcie0_cfg, &qhs_pcie_gen3_cfg, &qhs_pdm, &qhs_phy_refgen_south, &qhs_pimem_cfg, &qhs_prng, &qhs_qdss_cfg, &qhs_qupv3_north, &qhs_qupv3_south, &qhs_sdc2, &qhs_sdc4, &qhs_snoc_cfg, &qhs_spdm, &qhs_spss_cfg, &qhs_tcsr, &qhs_tlmm_north, &qhs_tlmm_south, &qhs_tsif, &qhs_ufs_card_cfg, &qhs_ufs_mem_cfg, &qhs_usb3_0, &qhs_usb3_1, &qhs_venus_cfg, &qhs_vsense_ctrl_cfg, &qns_cnoc_a2noc, &srvc_cnoc);
+DEFINE_QBCM(bcm_qup0, "QUP0", false, 2, &qhm_qup1, &qhm_qup2);
+DEFINE_QBCM(bcm_sn1, "SN1", false, 1, &qxs_imem);
+DEFINE_QBCM(bcm_sn2, "SN2", false, 1, &qns_memnoc_gc);
+DEFINE_QBCM(bcm_sn3, "SN3", false, 1, &qns_cnoc);
+DEFINE_QBCM(bcm_sn4, "SN4", false, 1, &qxm_pimem);
+DEFINE_QBCM(bcm_sn5, "SN5", false, 1, &xs_qdss_stm);
+DEFINE_QBCM(bcm_sn6, "SN6", false, 3, &qhs_apss, &srvc_snoc, &xs_sys_tcu_cfg);
+DEFINE_QBCM(bcm_sn7, "SN7", false, 1, &qxs_pcie);
+DEFINE_QBCM(bcm_sn8, "SN8", false, 1, &qxs_pcie_gen3);
+DEFINE_QBCM(bcm_sn9, "SN9", false, 2, &srvc_aggre1_noc, &qnm_aggre1_noc);
+DEFINE_QBCM(bcm_sn11, "SN11", false, 2, &srvc_aggre2_noc, &qnm_aggre2_noc);
+DEFINE_QBCM(bcm_sn12, "SN12", false, 2, &qnm_gladiator_sodv, &xm_gic);
+DEFINE_QBCM(bcm_sn14, "SN14", false, 1, &qnm_pcie_anoc);
+DEFINE_QBCM(bcm_sn15, "SN15", false, 1, &qnm_memnoc);
+
+static struct qcom_icc_node *rsc_hlos_nodes[] = {
+	[MASTER_APPSS_PROC] = &acm_l3,
+	[MASTER_TCU_0] = &acm_tcu,
+	[MASTER_LLCC] = &llcc_mc,
+	[MASTER_GNOC_CFG] = &pm_gnoc_cfg,
+	[MASTER_A1NOC_CFG] = &qhm_a1noc_cfg,
+	[MASTER_A2NOC_CFG] = &qhm_a2noc_cfg,
+	[MASTER_CNOC_DC_NOC] = &qhm_cnoc,
+	[MASTER_MEM_NOC_CFG] = &qhm_memnoc_cfg,
+	[MASTER_CNOC_MNOC_CFG] = &qhm_mnoc_cfg,
+	[MASTER_QDSS_BAM] = &qhm_qdss_bam,
+	[MASTER_BLSP_1] = &qhm_qup1,
+	[MASTER_BLSP_2] = &qhm_qup2,
+	[MASTER_SNOC_CFG] = &qhm_snoc_cfg,
+	[MASTER_SPDM] = &qhm_spdm,
+	[MASTER_TIC] = &qhm_tic,
+	[MASTER_TSIF] = &qhm_tsif,
+	[MASTER_A1NOC_SNOC] = &qnm_aggre1_noc,
+	[MASTER_A2NOC_SNOC] = &qnm_aggre2_noc,
+	[MASTER_GNOC_MEM_NOC] = &qnm_apps,
+	[MASTER_CNOC_A2NOC] = &qnm_cnoc,
+	[MASTER_GNOC_SNOC] = &qnm_gladiator_sodv,
+	[MASTER_MEM_NOC_SNOC] = &qnm_memnoc,
+	[MASTER_MNOC_HF_MEM_NOC] = &qnm_mnoc_hf,
+	[MASTER_MNOC_SF_MEM_NOC] = &qnm_mnoc_sf,
+	[MASTER_ANOC_PCIE_SNOC] = &qnm_pcie_anoc,
+	[MASTER_SNOC_CNOC] = &qnm_snoc,
+	[MASTER_SNOC_GC_MEM_NOC] = &qnm_snoc_gc,
+	[MASTER_SNOC_SF_MEM_NOC] = &qnm_snoc_sf,
+	[MASTER_CAMNOC_HF0] = &qxm_camnoc_hf0,
+	[MASTER_CAMNOC_HF0_UNCOMP] = &qxm_camnoc_hf0_uncomp,
+	[MASTER_CAMNOC_HF1] = &qxm_camnoc_hf1,
+	[MASTER_CAMNOC_HF1_UNCOMP] = &qxm_camnoc_hf1_uncomp,
+	[MASTER_CAMNOC_SF] = &qxm_camnoc_sf,
+	[MASTER_CAMNOC_SF_UNCOMP] = &qxm_camnoc_sf_uncomp,
+	[MASTER_CRYPTO] = &qxm_crypto,
+	[MASTER_GFX3D] = &qxm_gpu,
+	[MASTER_IPA] = &qxm_ipa,
+	[MASTER_MDP0] = &qxm_mdp0,
+	[MASTER_MDP1] = &qxm_mdp1,
+	[MASTER_PIMEM] = &qxm_pimem,
+	[MASTER_ROTATOR] = &qxm_rot,
+	[MASTER_VIDEO_P0] = &qxm_venus0,
+	[MASTER_VIDEO_P1] = &qxm_venus1,
+	[MASTER_VIDEO_PROC] = &qxm_venus_arm9,
+	[MASTER_GIC] = &xm_gic,
+	[MASTER_PCIE_1] = &xm_pcie3_1,
+	[MASTER_PCIE_0] = &xm_pcie_0,
+	[MASTER_QDSS_DAP] = &xm_qdss_dap,
+	[MASTER_QDSS_ETR] = &xm_qdss_etr,
+	[MASTER_SDCC_2] = &xm_sdc2,
+	[MASTER_SDCC_4] = &xm_sdc4,
+	[MASTER_UFS_CARD] = &xm_ufs_card,
+	[MASTER_UFS_MEM] = &xm_ufs_mem,
+	[MASTER_USB3_0] = &xm_usb3_0,
+	[MASTER_USB3_1] = &xm_usb3_1,
+	[SLAVE_EBI1] = &ebi,
+	[SLAVE_A1NOC_CFG] = &qhs_a1_noc_cfg,
+	[SLAVE_A2NOC_CFG] = &qhs_a2_noc_cfg,
+	[SLAVE_AOP] = &qhs_aop,
+	[SLAVE_AOSS] = &qhs_aoss,
+	[SLAVE_APPSS] = &qhs_apss,
+	[SLAVE_CAMERA_CFG] = &qhs_camera_cfg,
+	[SLAVE_CLK_CTL] = &qhs_clk_ctl,
+	[SLAVE_CDSP_CFG] = &qhs_compute_dsp_cfg,
+	[SLAVE_RBCPR_CX_CFG] = &qhs_cpr_cx,
+	[SLAVE_CRYPTO_0_CFG] = &qhs_crypto0_cfg,
+	[SLAVE_DCC_CFG] = &qhs_dcc_cfg,
+	[SLAVE_CNOC_DDRSS] = &qhs_ddrss_cfg,
+	[SLAVE_DISPLAY_CFG] = &qhs_display_cfg,
+	[SLAVE_GLM] = &qhs_glm,
+	[SLAVE_GFX3D_CFG] = &qhs_gpuss_cfg,
+	[SLAVE_IMEM_CFG] = &qhs_imem_cfg,
+	[SLAVE_IPA_CFG] = &qhs_ipa,
+	[SLAVE_LLCC_CFG] = &qhs_llcc,
+	[SLAVE_MSS_PROC_MS_MPU_CFG] = &qhs_mdsp_ms_mpu_cfg,
+	[SLAVE_MEM_NOC_CFG] = &qhs_memnoc,
+	[SLAVE_CNOC_MNOC_CFG] = &qhs_mnoc_cfg,
+	[SLAVE_PCIE_0_CFG] = &qhs_pcie0_cfg,
+	[SLAVE_PCIE_1_CFG] = &qhs_pcie_gen3_cfg,
+	[SLAVE_PDM] = &qhs_pdm,
+	[SLAVE_SOUTH_PHY_CFG] = &qhs_phy_refgen_south,
+	[SLAVE_PIMEM_CFG] = &qhs_pimem_cfg,
+	[SLAVE_PRNG] = &qhs_prng,
+	[SLAVE_QDSS_CFG] = &qhs_qdss_cfg,
+	[SLAVE_BLSP_2] = &qhs_qupv3_north,
+	[SLAVE_BLSP_1] = &qhs_qupv3_south,
+	[SLAVE_SDCC_2] = &qhs_sdc2,
+	[SLAVE_SDCC_4] = &qhs_sdc4,
+	[SLAVE_SNOC_CFG] = &qhs_snoc_cfg,
+	[SLAVE_SPDM_WRAPPER] = &qhs_spdm,
+	[SLAVE_SPSS_CFG] = &qhs_spss_cfg,
+	[SLAVE_TCSR] = &qhs_tcsr,
+	[SLAVE_TLMM_NORTH] = &qhs_tlmm_north,
+	[SLAVE_TLMM_SOUTH] = &qhs_tlmm_south,
+	[SLAVE_TSIF] = &qhs_tsif,
+	[SLAVE_UFS_CARD_CFG] = &qhs_ufs_card_cfg,
+	[SLAVE_UFS_MEM_CFG] = &qhs_ufs_mem_cfg,
+	[SLAVE_USB3_0] = &qhs_usb3_0,
+	[SLAVE_USB3_1] = &qhs_usb3_1,
+	[SLAVE_VENUS_CFG] = &qhs_venus_cfg,
+	[SLAVE_VSENSE_CTRL_CFG] = &qhs_vsense_ctrl_cfg,
+	[SLAVE_MNOC_SF_MEM_NOC] = &qns2_mem_noc,
+	[SLAVE_A1NOC_SNOC] = &qns_a1noc_snoc,
+	[SLAVE_A2NOC_SNOC] = &qns_a2noc_snoc,
+	[SLAVE_MEM_NOC_GNOC] = &qns_apps_io,
+	[SLAVE_CAMNOC_UNCOMP] = &qns_camnoc_uncomp,
+	[SLAVE_SNOC_CNOC] = &qns_cnoc,
+	[SLAVE_CNOC_A2NOC] = &qns_cnoc_a2noc,
+	[SLAVE_GNOC_SNOC] = &qns_gladiator_sodv,
+	[SLAVE_GNOC_MEM_NOC] = &qns_gnoc_memnoc,
+	[SLAVE_LLCC] = &qns_llcc,
+	[SLAVE_MNOC_HF_MEM_NOC] = &qns_mem_noc_hf,
+	[SLAVE_SNOC_MEM_NOC_GC] = &qns_memnoc_gc,
+	[SLAVE_SNOC_MEM_NOC_SF] = &qns_memnoc_sf,
+	[SLAVE_MEM_NOC_SNOC] = &qns_memnoc_snoc,
+	[SLAVE_ANOC_PCIE_A1NOC_SNOC] = &qns_pcie_a1noc_snoc,
+	[SLAVE_ANOC_PCIE_SNOC] = &qns_pcie_snoc,
+	[SLAVE_IMEM] = &qxs_imem,
+	[SLAVE_PCIE_0] = &qxs_pcie,
+	[SLAVE_PCIE_1] = &qxs_pcie_gen3,
+	[SLAVE_PIMEM] = &qxs_pimem,
+	[SLAVE_SERVICE_A1NOC] = &srvc_aggre1_noc,
+	[SLAVE_SERVICE_A2NOC] = &srvc_aggre2_noc,
+	[SLAVE_SERVICE_CNOC] = &srvc_cnoc,
+	[SLAVE_SERVICE_GNOC] = &srvc_gnoc,
+	[SLAVE_SERVICE_MEM_NOC] = &srvc_memnoc,
+	[SLAVE_SERVICE_MNOC] = &srvc_mnoc,
+	[SLAVE_SERVICE_SNOC] = &srvc_snoc,
+	[SLAVE_QDSS_STM] = &xs_qdss_stm,
+	[SLAVE_TCU] = &xs_sys_tcu_cfg,
+};
+
+static struct qcom_icc_bcm *rsc_hlos_bcms[] = {
+	&bcm_acv,
+	&bcm_mc0,
+	&bcm_sh0,
+	&bcm_mm0,
+	&bcm_sh1,
+	&bcm_mm1,
+	&bcm_sh2,
+	&bcm_mm2,
+	&bcm_sh3,
+	&bcm_mm3,
+	&bcm_sh5,
+	&bcm_sn0,
+	&bcm_ce0,
+	&bcm_cn0,
+	&bcm_qup0,
+	&bcm_sn1,
+	&bcm_sn2,
+	&bcm_sn3,
+	&bcm_sn4,
+	&bcm_sn5,
+	&bcm_sn6,
+	&bcm_sn7,
+	&bcm_sn8,
+	&bcm_sn9,
+	&bcm_sn11,
+	&bcm_sn12,
+	&bcm_sn14,
+	&bcm_sn15,
+};
+
+static struct qcom_icc_desc sdm845_rsc_hlos = {
+	.nodes = rsc_hlos_nodes,
+	.num_nodes = ARRAY_SIZE(rsc_hlos_nodes),
+	.bcms = rsc_hlos_bcms,
+	.num_bcms = ARRAY_SIZE(rsc_hlos_bcms),
+};
+
+static int qcom_icc_bcm_init(struct qcom_icc_bcm *bcm, struct device *dev)
+{
+	struct qcom_icc_node *qn;
+	int ret, i;
+
+	bcm->addr = cmd_db_read_addr(bcm->name);
+	if (!bcm->addr) {
+		dev_err(dev, "%s could not find RPMh address\n",
+			bcm->name);
+		return -EINVAL;
+	}
+
+	if (cmd_db_read_aux_data_len(bcm->name) < sizeof(struct bcm_db)) {
+		dev_err(dev, "%s command db missing or partial aux data\n",
+			bcm->name);
+		return -EINVAL;
+	}
+
+	ret = cmd_db_read_aux_data(bcm->name, (u8 *)&bcm->aux_data,
+				   sizeof(struct bcm_db));
+	if (ret < 0) {
+		dev_err(dev, "%s command db read error (%d)\n",
+			bcm->name, ret);
+		return ret;
+	}
+
+	bcm->aux_data.unit = le32_to_cpu(bcm->aux_data.unit);
+	bcm->aux_data.width = le16_to_cpu(bcm->aux_data.width);
+
+	/*
+	 * Link Qnodes to their respective BCMs
+	 */
+
+	for (i = 0; i < bcm->num_nodes; i++) {
+		qn = bcm->nodes[i];
+		qn->bcms[qn->num_bcms] = bcm;
+		qn->num_bcms++;
+	}
+
+	return 0;
+}
+
+inline void tcs_cmd_gen(struct tcs_cmd *cmd, u64 vote_x, u64 vote_y,
+			u32 addr, bool commit)
+{
+	bool valid = true;
+
+	if (!cmd)
+		return;
+
+	if (vote_x == 0 && vote_y == 0)
+		valid = false;
+
+	if (vote_x > BCM_TCS_CMD_VOTE_MASK)
+		vote_x = BCM_TCS_CMD_VOTE_MASK;
+
+	if (vote_y > BCM_TCS_CMD_VOTE_MASK)
+		vote_y = BCM_TCS_CMD_VOTE_MASK;
+
+	cmd->addr = addr;
+	cmd->data = BCM_TCS_CMD(commit, valid, vote_x, vote_y);
+
+	/*
+	 * Set the wait for completion flag on commands that need to be completed
+	 * before the next command.
+	 */
+	if (commit)
+		cmd->wait = true;
+}
+
+static void tcs_list_gen(struct list_head *bcm_list,
+			 struct tcs_cmd *tcs_list, int *n)
+{
+	struct qcom_icc_bcm *bcm;
+	bool commit;
+	size_t idx = 0, batch = 0, cur_vcd_size = 0;
+
+	memset(n, 0, sizeof(int) * SDM845_MAX_VCD);
+
+	list_for_each_entry(bcm, bcm_list, list) {
+		commit = false;
+		cur_vcd_size++;
+		if ((list_is_last(&bcm->list, bcm_list)) ||
+		    bcm->aux_data.vcd != list_next_entry(bcm, list)->aux_data.vcd) {
+			commit = true;
+			cur_vcd_size = 0;
+		}
+		tcs_cmd_gen(&tcs_list[idx], bcm->vote_x, bcm->vote_y,
+			    bcm->addr, commit);
+		idx++;
+		n[batch]++;
+		/*
+		 * Batch the BCMs in such a way that we do not split them in
+		 * multiple payloads when they are under the same VCD. This is
+		 * to ensure that every BCM is committed since we only set the
+		 * commit bit on the last BCM request of every VCD.
+		 */
+		if (n[batch] >= MAX_RPMH_PAYLOAD) {
+			if (!commit) {
+				n[batch] -= cur_vcd_size;
+				n[batch + 1] = cur_vcd_size;
+			}
+			batch++;
+		}
+	}
+}
+
+static void bcm_aggregate(struct qcom_icc_bcm *bcm)
+{
+	size_t i;
+	u64 agg_avg = 0;
+	u64 agg_peak = 0;
+	u64 temp;
+
+	for (i = 0; i < bcm->num_nodes; i++) {
+		temp = bcm->nodes[i]->sum_avg * bcm->aux_data.width;
+		do_div(temp, bcm->nodes[i]->buswidth * bcm->nodes[i]->channels);
+		agg_avg = max(agg_avg, temp);
+
+		temp = bcm->nodes[i]->max_peak * bcm->aux_data.width;
+		do_div(temp, bcm->nodes[i]->buswidth);
+		agg_peak = max(agg_peak, temp);
+	}
+
+	temp = agg_avg * 1000ULL;
+	do_div(temp, bcm->aux_data.unit);
+	bcm->vote_x = temp;
+
+	temp = agg_peak * 1000ULL;
+	do_div(temp, bcm->aux_data.unit);
+	bcm->vote_y = temp;
+
+	if (bcm->keepalive && bcm->vote_x == 0 && bcm->vote_y == 0) {
+		bcm->vote_x = 1;
+		bcm->vote_y = 1;
+	}
+
+	bcm->dirty = false;
+}
+
+static int qcom_icc_aggregate(struct icc_node *node, u32 avg_bw,
+			      u32 peak_bw, u32 *agg_avg, u32 *agg_peak)
+{
+	size_t i;
+	struct qcom_icc_node *qn;
+
+	qn = node->data;
+
+	*agg_avg += avg_bw;
+	*agg_peak = max_t(u32, *agg_peak, peak_bw);
+
+	qn->sum_avg = *agg_avg;
+	qn->max_peak = *agg_peak;
+
+	for (i = 0; i < qn->num_bcms; i++)
+		qn->bcms[i]->dirty = true;
+
+	return 0;
+}
+
+static int qcom_icc_set(struct icc_node *src, struct icc_node *dst)
+{
+	struct qcom_icc_provider *qp;
+	struct icc_node *node;
+	struct tcs_cmd cmds[SDM845_MAX_BCMS];
+	struct list_head commit_list;
+	int commit_idx[SDM845_MAX_VCD];
+	int ret = 0, i;
+
+	if (!src)
+		node = dst;
+	else
+		node = src;
+
+	qp = to_qcom_provider(node->provider);
+
+	INIT_LIST_HEAD(&commit_list);
+
+	for (i = 0; i < qp->num_bcms; i++) {
+		if (qp->bcms[i]->dirty) {
+			bcm_aggregate(qp->bcms[i]);
+			list_add_tail(&qp->bcms[i]->list, &commit_list);
+		}
+	}
+
+	/*
+	 * Construct the command list based on a list of BCMs pre-ordered
+	 * by VCD.
+	 */
+	tcs_list_gen(&commit_list, cmds, commit_idx);
+
+	if (!commit_idx[0])
+		return ret;
+
+	ret = rpmh_invalidate(qp->dev);
+	if (ret) {
+		pr_err("Error invalidating RPMH client (%d)\n", ret);
+		return ret;
+	}
+
+	ret = rpmh_write_batch(qp->dev, RPMH_ACTIVE_ONLY_STATE,
+			       cmds, commit_idx);
+	if (ret) {
+		pr_err("Error sending AMC RPMH requests (%d)\n", ret);
+		return ret;
+	}
+
+	return ret;
+}
+
+static int cmp_vcd(const void *_l, const void *_r)
+{
+	const struct qcom_icc_bcm **l = (const struct qcom_icc_bcm **)_l;
+	const struct qcom_icc_bcm **r = (const struct qcom_icc_bcm **)_r;
+
+	if (l[0]->aux_data.vcd < r[0]->aux_data.vcd)
+		return -1;
+	else if (l[0]->aux_data.vcd == r[0]->aux_data.vcd)
+		return 0;
+	else
+		return 1;
+}
+
+static int qnoc_probe(struct platform_device *pdev)
+{
+	const struct qcom_icc_desc *desc;
+	struct icc_onecell_data *data;
+	struct icc_provider *provider;
+	struct qcom_icc_node **qnodes;
+	struct qcom_icc_provider *qp;
+	struct icc_node *node;
+	size_t num_nodes, i;
+	int ret;
+
+	desc = of_device_get_match_data(&pdev->dev);
+	if (!desc)
+		return -EINVAL;
+
+	qnodes = desc->nodes;
+	num_nodes = desc->num_nodes;
+
+	qp = devm_kzalloc(&pdev->dev, sizeof(*qp), GFP_KERNEL);
+	if (!qp)
+		return -ENOMEM;
+
+	data = devm_kcalloc(&pdev->dev, num_nodes, sizeof(*node), GFP_KERNEL);
+	if (!data)
+		return -ENOMEM;
+
+	provider = &qp->provider;
+	provider->dev = &pdev->dev;
+	provider->set = qcom_icc_set;
+	provider->aggregate = qcom_icc_aggregate;
+	provider->xlate = of_icc_xlate_onecell;
+	INIT_LIST_HEAD(&provider->nodes);
+	provider->data = data;
+
+	qp->dev = &pdev->dev;
+	qp->bcms = desc->bcms;
+	qp->num_bcms = desc->num_bcms;
+
+	ret = icc_provider_add(provider);
+	if (ret) {
+		dev_err(&pdev->dev, "error adding interconnect provider\n");
+		return ret;
+	}
+
+	for (i = 0; i < num_nodes; i++) {
+		size_t j;
+
+		node = icc_node_create(qnodes[i]->id);
+		if (IS_ERR(node)) {
+			ret = PTR_ERR(node);
+			goto err;
+		}
+
+		node->name = qnodes[i]->name;
+		node->data = qnodes[i];
+		icc_node_add(node, provider);
+
+		dev_dbg(&pdev->dev, "registered node %p %s %d\n", node,
+			    qnodes[i]->name, node->id);
+
+		/* populate links */
+		for (j = 0; j < qnodes[i]->num_links; j++)
+			icc_link_create(node, qnodes[i]->links[j]);
+
+		data->nodes[i] = node;
+	}
+	data->num_nodes = num_nodes;
+
+	for (i = 0; i < qp->num_bcms; i++)
+		qcom_icc_bcm_init(qp->bcms[i], &pdev->dev);
+
+	/*
+	 * Pre-sort the BCMs based on VCD for ease of generating a command list
+	 * that groups the BCMs with the same VCD together. VCDs are numbered
+	 * with the lowest being the most expensive time-wise, ensuring that
+	 * those commands are sent earliest in the queue.
+	 */
+	sort(qp->bcms, qp->num_bcms, sizeof(*qp->bcms), cmp_vcd, NULL);
+
+	platform_set_drvdata(pdev, qp);
+
+	dev_dbg(&pdev->dev, "Registered SDM845 ICC\n");
+
+	return ret;
+err:
+	list_for_each_entry(node, &provider->nodes, node_list) {
+		icc_node_del(node);
+		icc_node_destroy(node->id);
+	}
+
+	icc_provider_del(provider);
+	return ret;
+}
+
+static int qnoc_remove(struct platform_device *pdev)
+{
+	struct qcom_icc_provider *qp = platform_get_drvdata(pdev);
+	struct icc_provider *provider = &qp->provider;
+	struct icc_node *n;
+
+	list_for_each_entry(n, &provider->nodes, node_list) {
+		icc_node_del(n);
+		icc_node_destroy(n->id);
+	}
+
+	return icc_provider_del(provider);
+}
+
+static const struct of_device_id qnoc_of_match[] = {
+	{ .compatible = "qcom,sdm845-rsc-hlos", .data = &sdm845_rsc_hlos },
+	{ },
+};
+MODULE_DEVICE_TABLE(of, qnoc_of_match);
+
+static struct platform_driver qnoc_driver = {
+	.probe = qnoc_probe,
+	.remove = qnoc_remove,
+	.driver = {
+		.name = "qnoc-sdm845",
+		.of_match_table = qnoc_of_match,
+	},
+};
+module_platform_driver(qnoc_driver);
+
+MODULE_AUTHOR("David Dai <daidavid1@codeaurora.org>");
+MODULE_DESCRIPTION("Qualcomm sdm845 NoC driver");
+MODULE_LICENSE("GPL v2");
diff --git a/include/dt-bindings/interconnect/qcom,sdm845.h b/include/dt-bindings/interconnect/qcom,sdm845.h
new file mode 100644
index 000000000000..90ac9c0d3821
--- /dev/null
+++ b/include/dt-bindings/interconnect/qcom,sdm845.h
@@ -0,0 +1,143 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Qualcomm interconnect IDs
+ *
+ * Copyright (c) 2018, Linaro Ltd.
+ * Author: Georgi Djakov <georgi.djakov@linaro.org>
+ */
+
+#ifndef __DT_BINDINGS_INTERCONNECT_QCOM_SDM845_H
+#define __DT_BINDINGS_INTERCONNECT_QCOM_SDM845_H
+
+#define MASTER_A1NOC_CFG		0
+#define MASTER_BLSP_1			1
+#define MASTER_TSIF			2
+#define MASTER_SDCC_2			3
+#define MASTER_SDCC_4			4
+#define MASTER_UFS_CARD			5
+#define MASTER_UFS_MEM			6
+#define MASTER_PCIE_0			7
+#define MASTER_A2NOC_CFG		8
+#define MASTER_QDSS_BAM			9
+#define MASTER_BLSP_2			10
+#define MASTER_CNOC_A2NOC		11
+#define MASTER_CRYPTO			12
+#define MASTER_IPA			13
+#define MASTER_PCIE_1			14
+#define MASTER_QDSS_ETR			15
+#define MASTER_USB3_0			16
+#define MASTER_USB3_1			17
+#define MASTER_CAMNOC_HF0_UNCOMP	18
+#define MASTER_CAMNOC_HF1_UNCOMP	19
+#define MASTER_CAMNOC_SF_UNCOMP		20
+#define MASTER_SPDM			21
+#define MASTER_TIC			22
+#define MASTER_SNOC_CNOC		23
+#define MASTER_QDSS_DAP			24
+#define MASTER_CNOC_DC_NOC		25
+#define MASTER_APPSS_PROC		26
+#define MASTER_GNOC_CFG			27
+#define MASTER_LLCC			28
+#define MASTER_TCU_0			29
+#define MASTER_MEM_NOC_CFG		30
+#define MASTER_GNOC_MEM_NOC		31
+#define MASTER_MNOC_HF_MEM_NOC		32
+#define MASTER_MNOC_SF_MEM_NOC		33
+#define MASTER_SNOC_GC_MEM_NOC		34
+#define MASTER_SNOC_SF_MEM_NOC		35
+#define MASTER_GFX3D			36
+#define MASTER_CNOC_MNOC_CFG		37
+#define MASTER_CAMNOC_HF0		38
+#define MASTER_CAMNOC_HF1		39
+#define MASTER_CAMNOC_SF		40
+#define MASTER_MDP0			41
+#define MASTER_MDP1			42
+#define MASTER_ROTATOR			43
+#define MASTER_VIDEO_P0			44
+#define MASTER_VIDEO_P1			45
+#define MASTER_VIDEO_PROC		46
+#define MASTER_SNOC_CFG			47
+#define MASTER_A1NOC_SNOC		48
+#define MASTER_A2NOC_SNOC		49
+#define MASTER_GNOC_SNOC		50
+#define MASTER_MEM_NOC_SNOC		51
+#define MASTER_ANOC_PCIE_SNOC		52
+#define MASTER_PIMEM			53
+#define MASTER_GIC			54
+#define SLAVE_A1NOC_SNOC		55
+#define SLAVE_SERVICE_A1NOC		56
+#define SLAVE_ANOC_PCIE_A1NOC_SNOC	57
+#define SLAVE_A2NOC_SNOC		58
+#define SLAVE_ANOC_PCIE_SNOC		59
+#define SLAVE_SERVICE_A2NOC		60
+#define SLAVE_CAMNOC_UNCOMP		61
+#define SLAVE_A1NOC_CFG			62
+#define SLAVE_A2NOC_CFG			63
+#define SLAVE_AOP			64
+#define SLAVE_AOSS			65
+#define SLAVE_CAMERA_CFG		66
+#define SLAVE_CLK_CTL			67
+#define SLAVE_CDSP_CFG			68
+#define SLAVE_RBCPR_CX_CFG		69
+#define SLAVE_CRYPTO_0_CFG		70
+#define SLAVE_DCC_CFG			71
+#define SLAVE_CNOC_DDRSS		72
+#define SLAVE_DISPLAY_CFG		73
+#define SLAVE_GLM			74
+#define SLAVE_GFX3D_CFG			75
+#define SLAVE_IMEM_CFG			76
+#define SLAVE_IPA_CFG			77
+#define SLAVE_CNOC_MNOC_CFG		78
+#define SLAVE_PCIE_0_CFG		79
+#define SLAVE_PCIE_1_CFG		80
+#define SLAVE_PDM			81
+#define SLAVE_SOUTH_PHY_CFG		82
+#define SLAVE_PIMEM_CFG			83
+#define SLAVE_PRNG			84
+#define SLAVE_QDSS_CFG			85
+#define SLAVE_BLSP_2			86
+#define SLAVE_BLSP_1			87
+#define SLAVE_SDCC_2			88
+#define SLAVE_SDCC_4			89
+#define SLAVE_SNOC_CFG			90
+#define SLAVE_SPDM_WRAPPER		91
+#define SLAVE_SPSS_CFG			92
+#define SLAVE_TCSR			93
+#define SLAVE_TLMM_NORTH		94
+#define SLAVE_TLMM_SOUTH		95
+#define SLAVE_TSIF			96
+#define SLAVE_UFS_CARD_CFG		97
+#define SLAVE_UFS_MEM_CFG		98
+#define SLAVE_USB3_0			99
+#define SLAVE_USB3_1			100
+#define SLAVE_VENUS_CFG			101
+#define SLAVE_VSENSE_CTRL_CFG		102
+#define SLAVE_CNOC_A2NOC		103
+#define SLAVE_SERVICE_CNOC		104
+#define SLAVE_LLCC_CFG			105
+#define SLAVE_MEM_NOC_CFG		106
+#define SLAVE_GNOC_SNOC			107
+#define SLAVE_GNOC_MEM_NOC		108
+#define SLAVE_SERVICE_GNOC		109
+#define SLAVE_EBI1			110
+#define SLAVE_MSS_PROC_MS_MPU_CFG	111
+#define SLAVE_MEM_NOC_GNOC		112
+#define SLAVE_LLCC			113
+#define SLAVE_MEM_NOC_SNOC		114
+#define SLAVE_SERVICE_MEM_NOC		115
+#define SLAVE_MNOC_SF_MEM_NOC		116
+#define SLAVE_MNOC_HF_MEM_NOC		117
+#define SLAVE_SERVICE_MNOC		118
+#define SLAVE_APPSS			119
+#define SLAVE_SNOC_CNOC			120
+#define SLAVE_SNOC_MEM_NOC_GC		121
+#define SLAVE_SNOC_MEM_NOC_SF		122
+#define SLAVE_IMEM			123
+#define SLAVE_PCIE_0			124
+#define SLAVE_PCIE_1			125
+#define SLAVE_PIMEM			126
+#define SLAVE_SERVICE_SNOC		127
+#define SLAVE_QDSS_STM			128
+#define SLAVE_TCU			129
+
+#endif

^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH v10 6/7] arm64: dts: sdm845: Add interconnect provider DT nodes
  2018-11-27 18:03 [PATCH v10 0/8] Introduce on-chip interconnect API Georgi Djakov
                   ` (4 preceding siblings ...)
  2018-11-27 18:03 ` [PATCH v10 5/7] interconnect: qcom: Add sdm845 interconnect provider driver Georgi Djakov
@ 2018-11-27 18:03 ` Georgi Djakov
  2018-12-01  0:39   ` Evan Green
  2018-11-27 18:03 ` [PATCH v10 7/7] MAINTAINERS: add a maintainer for the interconnect API Georgi Djakov
  2018-12-05 20:41 ` [PATCH v10 0/8] Introduce on-chip " Evan Green
  7 siblings, 1 reply; 34+ messages in thread
From: Georgi Djakov @ 2018-11-27 18:03 UTC (permalink / raw)
  To: linux-pm
  Cc: gregkh, rjw, robh+dt, mturquette, khilman, vincent.guittot,
	skannan, bjorn.andersson, amit.kucheria, seansw, daidavid1,
	evgreen, mark.rutland, lorenzo.pieralisi, abailon, maxime.ripard,
	arnd, thierry.reding, ksitaraman, sanjayc, devicetree,
	linux-kernel, linux-arm-kernel, linux-arm-msm, linux-tegra,
	georgi.djakov

From: David Dai <daidavid1@codeaurora.org>

Add an RSC (Resource State Coordinator) provider node that dictates
network-on-chip interconnect bus performance on SDM845-based
platforms.

Signed-off-by: David Dai <daidavid1@codeaurora.org>
Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
---
 arch/arm64/boot/dts/qcom/sdm845.dtsi | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/arm64/boot/dts/qcom/sdm845.dtsi b/arch/arm64/boot/dts/qcom/sdm845.dtsi
index b72bdb0a31a5..856d33604e9c 100644
--- a/arch/arm64/boot/dts/qcom/sdm845.dtsi
+++ b/arch/arm64/boot/dts/qcom/sdm845.dtsi
@@ -1324,6 +1324,11 @@
 				compatible = "qcom,sdm845-rpmh-clk";
 				#clock-cells = <1>;
 			};
+
+			qnoc: qnoc {
+				compatible = "qcom,sdm845-rsc-hlos";
+				#interconnect-cells = <1>;
+			};
 		};
 
 		intc: interrupt-controller@17a00000 {

^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH v10 7/7] MAINTAINERS: add a maintainer for the interconnect API
  2018-11-27 18:03 [PATCH v10 0/8] Introduce on-chip interconnect API Georgi Djakov
                   ` (5 preceding siblings ...)
  2018-11-27 18:03 ` [PATCH v10 6/7] arm64: dts: sdm845: Add interconnect provider DT nodes Georgi Djakov
@ 2018-11-27 18:03 ` Georgi Djakov
  2018-12-05 20:41 ` [PATCH v10 0/8] Introduce on-chip " Evan Green
  7 siblings, 0 replies; 34+ messages in thread
From: Georgi Djakov @ 2018-11-27 18:03 UTC (permalink / raw)
  To: linux-pm
  Cc: gregkh, rjw, robh+dt, mturquette, khilman, vincent.guittot,
	skannan, bjorn.andersson, amit.kucheria, seansw, daidavid1,
	evgreen, mark.rutland, lorenzo.pieralisi, abailon, maxime.ripard,
	arnd, thierry.reding, ksitaraman, sanjayc, devicetree,
	linux-kernel, linux-arm-kernel, linux-arm-msm, linux-tegra,
	georgi.djakov

Add myself as the maintainer of the interconnect API.

Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
---
 MAINTAINERS | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 380e43f585d3..4f426863ff3b 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -7694,6 +7694,16 @@ L:	linux-gpio@vger.kernel.org
 S:	Maintained
 F:	drivers/gpio/gpio-intel-mid.c
 
+INTERCONNECT API
+M:	Georgi Djakov <georgi.djakov@linaro.org>
+S:	Maintained
+F:	Documentation/interconnect/
+F:	Documentation/devicetree/bindings/interconnect/
+F:	drivers/interconnect/
+F:	include/dt-bindings/interconnect/
+F:	include/linux/interconnect-provider.h
+F:	include/linux/interconnect.h
+
 INVENSENSE MPU-3050 GYROSCOPE DRIVER
 M:	Linus Walleij <linus.walleij@linaro.org>
 L:	linux-iio@vger.kernel.org

^ permalink raw reply related	[flat|nested] 34+ messages in thread

* Re: [PATCH v10 1/7] interconnect: Add generic on-chip interconnect API
  2018-11-27 18:03 ` [PATCH v10 1/7] interconnect: Add generic " Georgi Djakov
@ 2018-11-27 18:35   ` Joe Perches
  2018-11-28 18:18     ` Georgi Djakov
  2018-12-01  0:38   ` Evan Green
  2018-12-05 16:16   ` Rob Herring
  2 siblings, 1 reply; 34+ messages in thread
From: Joe Perches @ 2018-11-27 18:35 UTC (permalink / raw)
  To: Georgi Djakov, linux-pm
  Cc: gregkh, rjw, robh+dt, mturquette, khilman, vincent.guittot,
	skannan, bjorn.andersson, amit.kucheria, seansw, daidavid1,
	evgreen, mark.rutland, lorenzo.pieralisi, abailon, maxime.ripard,
	arnd, thierry.reding, ksitaraman, sanjayc, devicetree,
	linux-kernel, linux-arm-kernel, linux-arm-msm, linux-tegra

On Tue, 2018-11-27 at 20:03 +0200, Georgi Djakov wrote:
> This patch introduces a new API to get requirements and configure the
> interconnect buses across the entire chipset to fit with the current
> demand.

trivial notes:

> diff --git a/drivers/interconnect/core.c b/drivers/interconnect/core.c
[]
> +static int apply_constraints(struct icc_path *path)
> +{
> +	struct icc_node *next, *prev = NULL;
> +	int ret = -EINVAL;
> +	int i;
> +
> +	for (i = 0; i < path->num_nodes; i++, prev = next) {
> +		struct icc_provider *p;
> +
> +		next = path->reqs[i].node;
> +		/*
> +		 * Both endpoints should be valid master-slave pairs of the
> +		 * same interconnect provider that will be configured.
> +		 */
> +		if (!prev || next->provider != prev->provider)
> +			continue;
> +
> +		p = next->provider;
> +
> +		/* set the constraints */
> +		ret = p->set(prev, next);
> +		if (ret)
> +			goto out;
> +	}
> +out:
> +	return ret;
> +}

The use of ", prev = next" in the loop header appears to be somewhat tricky code.
Perhaps move the assignment of prev to the bottom of the loop.
Perhaps the temporary p assignment isn't useful either.
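
For illustration, one way to apply both suggestions (not part of the posted
patch, behavior unchanged) would be:

static int apply_constraints(struct icc_path *path)
{
	struct icc_node *next, *prev = NULL;
	int ret = -EINVAL;
	int i;

	for (i = 0; i < path->num_nodes; i++) {
		next = path->reqs[i].node;

		/*
		 * Both endpoints should be valid master-slave pairs of the
		 * same interconnect provider that will be configured.
		 */
		if (prev && next->provider == prev->provider) {
			/* set the constraints */
			ret = next->provider->set(prev, next);
			if (ret)
				return ret;
		}

		prev = next;
	}

	return ret;
}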

> +int icc_set(struct icc_path *path, u32 avg_bw, u32 peak_bw)
> +{
[]
> +	ret = apply_constraints(path);
> +	if (ret)
> +		pr_debug("interconnect: error applying constraints (%d)", ret);

Ideally all pr_<foo> formats should end in '\n'

> +static struct icc_node *icc_node_create_nolock(int id)
> +{
> +	struct icc_node *node;
> +
> +	/* check if node already exists */
> +	node = node_find(id);
> +	if (node)
> +		goto out;
> +
> +	node = kzalloc(sizeof(*node), GFP_KERNEL);
> +	if (!node) {
> +		node = ERR_PTR(-ENOMEM);
> +		goto out;

Generally, this code appears to overly rely on goto when
direct returns could be more readable.

> +	}
> +
> +	id = idr_alloc(&icc_idr, node, id, id + 1, GFP_KERNEL);
> +	if (WARN(id < 0, "couldn't get idr")) {

This seems to unnecessarily hide the id < 0 test in a WARN

Why is this a WARN and not a simpler
	if (id < 0) {
		[ pr_err(...); or WARN(1, ...); ]

> +		kfree(node);
> +		node = ERR_PTR(id);
> +		goto out;
> +	}
> +
> +	node->id = id;
> +
> +out:
> +	return node;
> +}
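
For illustration, a variant with direct returns and a plain id < 0 check
(not part of the posted patch) might read:

static struct icc_node *icc_node_create_nolock(int id)
{
	struct icc_node *node;

	/* check if node already exists */
	node = node_find(id);
	if (node)
		return node;

	node = kzalloc(sizeof(*node), GFP_KERNEL);
	if (!node)
		return ERR_PTR(-ENOMEM);

	id = idr_alloc(&icc_idr, node, id, id + 1, GFP_KERNEL);
	if (id < 0) {
		WARN(1, "%s: couldn't get idr\n", __func__);
		kfree(node);
		return ERR_PTR(id);
	}

	node->id = id;

	return node;
}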
[]
> diff --git a/include/linux/interconnect.h b/include/linux/interconnect.h
[]
> +/* macros for converting to icc units */
> +#define bps_to_icc(x)	(1)
> +#define kBps_to_icc(x)	(x)
[]
> +#define MBps_to_icc(x)	(x * 1000)
> +#define GBps_to_icc(x)	(x * 1000 * 1000)
> +#define kbps_to_icc(x)	(x / 8 + ((x) % 8 ? 1 : 0))
> +#define Mbps_to_icc(x)	(x * 1000 / 8 )
> +#define Gbps_to_icc(x)	(x * 1000 * 1000 / 8)

The last 5 macros should parenthesize x
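
For example, fully parenthesized variants (illustrative only) would be:

#define MBps_to_icc(x)	((x) * 1000)
#define GBps_to_icc(x)	((x) * 1000 * 1000)
#define kbps_to_icc(x)	((x) / 8 + ((x) % 8 ? 1 : 0))
#define Mbps_to_icc(x)	((x) * 1000 / 8)
#define Gbps_to_icc(x)	((x) * 1000 * 1000 / 8)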



^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v10 1/7] interconnect: Add generic on-chip interconnect API
  2018-11-27 18:35   ` Joe Perches
@ 2018-11-28 18:18     ` Georgi Djakov
  0 siblings, 0 replies; 34+ messages in thread
From: Georgi Djakov @ 2018-11-28 18:18 UTC (permalink / raw)
  To: Joe Perches
  Cc: linux-pm, gregkh, rjw, robh+dt, mturquette, khilman,
	vincent.guittot, skannan, bjorn.andersson, amit.kucheria, seansw,
	daidavid1, evgreen, mark.rutland, lorenzo.pieralisi, abailon,
	maxime.ripard, arnd, thierry.reding, ksitaraman, sanjayc,
	devicetree, linux-kernel, linux-arm-kernel, linux-arm-msm,
	linux-tegra

Hi Joe,

On 11/27/18 20:35, Joe Perches wrote:
> On Tue, 2018-11-27 at 20:03 +0200, Georgi Djakov wrote:
>> This patch introduces a new API to get requirements and configure the
>> interconnect buses across the entire chipset to fit with the current
>> demand.
> 
> trivial notes:
> 
>> diff --git a/drivers/interconnect/core.c b/drivers/interconnect/core.c
> []
>> +static int apply_constraints(struct icc_path *path)
>> +{
>> +	struct icc_node *next, *prev = NULL;
>> +	int ret = -EINVAL;
>> +	int i;
>> +
>> +	for (i = 0; i < path->num_nodes; i++, prev = next) {
>> +		struct icc_provider *p;
>> +
>> +		next = path->reqs[i].node;
>> +		/*
>> +		 * Both endpoints should be valid master-slave pairs of the
>> +		 * same interconnect provider that will be configured.
>> +		 */
>> +		if (!prev || next->provider != prev->provider)
>> +			continue;
>> +
>> +		p = next->provider;
>> +
>> +		/* set the constraints */
>> +		ret = p->set(prev, next);
>> +		if (ret)
>> +			goto out;
>> +	}
>> +out:
>> +	return ret;
>> +}
> 
> The use of ", prev = next" appears somewhat tricky code.
> Perhaps move the assignment of prev to the bottom of the loop.
> Perhaps the temporary p assignment isn't useful either.
> 
>> +int icc_set(struct icc_path *path, u32 avg_bw, u32 peak_bw)
>> +{
> []
>> +	ret = apply_constraints(path);
>> +	if (ret)
>> +		pr_debug("interconnect: error applying constraints (%d)", ret);
> 
> Ideally all pr_<foo> formats should end in '\n'
> 
>> +static struct icc_node *icc_node_create_nolock(int id)
>> +{
>> +	struct icc_node *node;
>> +
>> +	/* check if node already exists */
>> +	node = node_find(id);
>> +	if (node)
>> +		goto out;
>> +
>> +	node = kzalloc(sizeof(*node), GFP_KERNEL);
>> +	if (!node) {
>> +		node = ERR_PTR(-ENOMEM);
>> +		goto out;
> 
> Generally, this code appears to overly rely on goto when
> direct returns could be more readable.
> 
>> +	}
>> +
>> +	id = idr_alloc(&icc_idr, node, id, id + 1, GFP_KERNEL);
>> +	if (WARN(id < 0, "couldn't get idr")) {
> 
> This seems to unnecessarily hide the id < 0 test in a WARN
> 
> Why is this a WARN and not a simpler
> 	if (id < 0) {
> 		[ pr_err(...); or WARN(1, ...); ]
> 
>> +		kfree(node);
>> +		node = ERR_PTR(id);
>> +		goto out;
>> +	}
>> +
>> +	node->id = id;
>> +
>> +out:
>> +	return node;
>> +}

Thank you for helping to improve the code. The above suggestions make it 
cleaner indeed.

> []
>> diff --git a/include/linux/interconnect.h b/include/linux/interconnect.h
> []
>> +/* macros for converting to icc units */
>> +#define bps_to_icc(x)	(1)
>> +#define kBps_to_icc(x)	(x)
> []
>> +#define MBps_to_icc(x)	(x * 1000)
>> +#define GBps_to_icc(x)	(x * 1000 * 1000)
>> +#define kbps_to_icc(x)	(x / 8 + ((x) % 8 ? 1 : 0))
>> +#define Mbps_to_icc(x)	(x * 1000 / 8 )
>> +#define Gbps_to_icc(x)	(x * 1000 * 1000 / 8)
> 
> The last 5 macros should parenthesize x

Oops.. obviously I forgot to run checkpatch --strict. Will fix!

BR,
Georgi

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v10 1/7] interconnect: Add generic on-chip interconnect API
  2018-11-27 18:03 ` [PATCH v10 1/7] interconnect: Add generic " Georgi Djakov
  2018-11-27 18:35   ` Joe Perches
@ 2018-12-01  0:38   ` Evan Green
  2018-12-05 15:57     ` Georgi Djakov
  2018-12-05 16:16   ` Rob Herring
  2 siblings, 1 reply; 34+ messages in thread
From: Evan Green @ 2018-12-01  0:38 UTC (permalink / raw)
  To: georgi.djakov
  Cc: linux-pm, gregkh, rjw, robh+dt, Michael Turquette, khilman,
	Vincent Guittot, Saravana Kannan, Bjorn Andersson, amit.kucheria,
	seansw, daidavid1, mark.rutland, lorenzo.pieralisi,
	Alexandre Bailon, maxime.ripard, Arnd Bergmann, thierry.reding,
	ksitaraman, sanjayc, devicetree, linux-kernel, linux-arm-kernel,
	linux-arm-msm, linux-tegra

On Tue, Nov 27, 2018 at 10:03 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
>
> This patch introduces a new API to get requirements and configure the
> interconnect buses across the entire chipset to fit with the current
> demand.
>
> The API is using a consumer/provider-based model, where the providers are
> the interconnect buses and the consumers could be various drivers.
> The consumers request interconnect resources (path) between endpoints and
> set the desired constraints on this data flow path. The providers receive
> requests from consumers and aggregate these requests for all master-slave
> pairs on that path. Then the providers configure each participating in the
> topology node according to the requested data flow path, physical links and

Strange word ordering. Consider something like: "Then the providers
configure each node participating in the topology"

...Or a slightly different flavor: "Then the providers configure each
node along the path to support a bandwidth that satisfies all
bandwidth requests that cross through that node".

> constraints. The topology could be complicated and multi-tiered and is SoC
> specific.
>
> Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
> Reviewed-by: Evan Green <evgreen@chromium.org>

This already has my reviewed by, and I still stand by it, but I
couldn't help finding some nits anyway. Sorry :)

> ---
>  Documentation/interconnect/interconnect.rst |  94 ++++
>  drivers/Kconfig                             |   2 +
>  drivers/Makefile                            |   1 +
>  drivers/interconnect/Kconfig                |  10 +
>  drivers/interconnect/Makefile               |   5 +
>  drivers/interconnect/core.c                 | 569 ++++++++++++++++++++
>  include/linux/interconnect-provider.h       | 125 +++++
>  include/linux/interconnect.h                |  51 ++
>  8 files changed, 857 insertions(+)
>  create mode 100644 Documentation/interconnect/interconnect.rst
>  create mode 100644 drivers/interconnect/Kconfig
>  create mode 100644 drivers/interconnect/Makefile
>  create mode 100644 drivers/interconnect/core.c
>  create mode 100644 include/linux/interconnect-provider.h
>  create mode 100644 include/linux/interconnect.h
>
...
> diff --git a/drivers/interconnect/core.c b/drivers/interconnect/core.c
> new file mode 100644
> index 000000000000..3ae8e5a58969
> --- /dev/null
> +++ b/drivers/interconnect/core.c
> @@ -0,0 +1,569 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Interconnect framework core driver
> + *
> + * Copyright (c) 2018, Linaro Ltd.
> + * Author: Georgi Djakov <georgi.djakov@linaro.org>
> + */
> +
> +#include <linux/device.h>
> +#include <linux/idr.h>
> +#include <linux/init.h>
> +#include <linux/interconnect.h>
> +#include <linux/interconnect-provider.h>
> +#include <linux/list.h>
> +#include <linux/module.h>
> +#include <linux/mutex.h>
> +#include <linux/slab.h>
> +#include <linux/overflow.h>
> +
> +static DEFINE_IDR(icc_idr);
> +static LIST_HEAD(icc_providers);
> +static DEFINE_MUTEX(icc_lock);
> +
> +/**
> + * struct icc_req - constraints that are attached to each node
> + * @req_node: entry in list of requests for the particular @node
> + * @node: the interconnect node to which this constraint applies
> + * @dev: reference to the device that sets the constraints
> + * @avg_bw: an integer describing the average bandwidth in kBps
> + * @peak_bw: an integer describing the peak bandwidth in kBps
> + */
> +struct icc_req {
> +       struct hlist_node req_node;
> +       struct icc_node *node;
> +       struct device *dev;
> +       u32 avg_bw;
> +       u32 peak_bw;
> +};
> +
> +/**
> + * struct icc_path - interconnect path structure
> + * @num_nodes: number of hops (nodes)
> + * @reqs: array of the requests applicable to this path of nodes
> + */
> +struct icc_path {
> +       size_t num_nodes;
> +       struct icc_req reqs[];
> +};
> +
> +static struct icc_node *node_find(const int id)
> +{
> +       return idr_find(&icc_idr, id);
> +}
> +
> +static struct icc_path *path_init(struct device *dev, struct icc_node *dst,
> +                                 ssize_t num_nodes)
> +{
> +       struct icc_node *node = dst;
> +       struct icc_path *path;
> +       int i;
> +
> +       path = kzalloc(struct_size(path, reqs, num_nodes), GFP_KERNEL);
> +       if (!path)
> +               return ERR_PTR(-ENOMEM);
> +
> +       path->num_nodes = num_nodes;
> +
> +       for (i = num_nodes - 1; i >= 0; i--) {
> +               node->provider->users++;
> +               hlist_add_head(&path->reqs[i].req_node, &node->req_list);
> +               path->reqs[i].node = node;
> +               path->reqs[i].dev = dev;
> +               /* reference to previous node was saved during path traversal */
> +               node = node->reverse;
> +       }
> +
> +       return path;
> +}
> +
> +static struct icc_path *path_find(struct device *dev, struct icc_node *src,
> +                                 struct icc_node *dst)
> +{
> +       struct icc_path *path = ERR_PTR(-EPROBE_DEFER);
> +       struct icc_node *n, *node = NULL;
> +       struct list_head traverse_list;
> +       struct list_head edge_list;
> +       struct list_head visited_list;
> +       size_t i, depth = 1;
> +       bool found = false;
> +
> +       INIT_LIST_HEAD(&traverse_list);
> +       INIT_LIST_HEAD(&edge_list);
> +       INIT_LIST_HEAD(&visited_list);
> +
> +       list_add(&src->search_list, &traverse_list);
> +       src->reverse = NULL;
> +
> +       do {
> +               list_for_each_entry_safe(node, n, &traverse_list, search_list) {
> +                       if (node == dst) {
> +                               found = true;
> +                               list_splice_init(&edge_list, &visited_list);
> +                               list_splice_init(&traverse_list, &visited_list);
> +                               break;
> +                       }
> +                       for (i = 0; i < node->num_links; i++) {
> +                               struct icc_node *tmp = node->links[i];
> +
> +                               if (!tmp) {
> +                                       path = ERR_PTR(-ENOENT);
> +                                       goto out;
> +                               }
> +
> +                               if (tmp->is_traversed)
> +                                       continue;
> +
> +                               tmp->is_traversed = true;
> +                               tmp->reverse = node;
> +                               list_add_tail(&tmp->search_list, &edge_list);
> +                       }
> +               }
> +
> +               if (found)
> +                       break;
> +
> +               list_splice_init(&traverse_list, &visited_list);
> +               list_splice_init(&edge_list, &traverse_list);
> +
> +               /* count the hops including the source */
> +               depth++;
> +
> +       } while (!list_empty(&traverse_list));
> +
> +out:
> +
> +       /* reset the traversed state */
> +       list_for_each_entry_reverse(n, &visited_list, search_list)
> +               n->is_traversed = false;
> +
> +       if (found)
> +               path = path_init(dev, dst, depth);
> +
> +       return path;
> +}
> +
> +/*
> + * We want the path to honor all bandwidth requests, so the average and peak
> + * bandwidth requirements from each consumer are aggregated at each node.
> + * The aggregation is platform specific, so each platform can customize it by
> + * implementing it's own aggregate() function.

Grammar police: remove the apostrophe in its.

> + */
> +
> +static int aggregate_requests(struct icc_node *node)
> +{
> +       struct icc_provider *p = node->provider;
> +       struct icc_req *r;
> +
> +       node->avg_bw = 0;
> +       node->peak_bw = 0;
> +
> +       hlist_for_each_entry(r, &node->req_list, req_node)
> +               p->aggregate(node, r->avg_bw, r->peak_bw,
> +                            &node->avg_bw, &node->peak_bw);
> +
> +       return 0;
> +}
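
(Editorial aside, not code from the series: a minimal sketch of what a
platform's aggregate() callback could look like, assuming the simplest
plausible policy of summing the average requests and keeping the largest
peak, and reusing the callback signature from the provider struct quoted
further below:

	static int example_aggregate(struct icc_node *node, u32 avg_bw,
				     u32 peak_bw, u32 *agg_avg, u32 *agg_peak)
	{
		/* Sum the averages, keep the highest peak seen on this node. */
		*agg_avg += avg_bw;
		*agg_peak = max(*agg_peak, peak_bw);

		return 0;
	}

A provider would point its .aggregate member at such a function before
registering with icc_provider_add().)
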
> +
> +static int apply_constraints(struct icc_path *path)
> +{
> +       struct icc_node *next, *prev = NULL;
> +       int ret = -EINVAL;
> +       int i;
> +
> +       for (i = 0; i < path->num_nodes; i++, prev = next) {
> +               struct icc_provider *p;
> +
> +               next = path->reqs[i].node;
> +               /*
> +                * Both endpoints should be valid master-slave pairs of the
> +                * same interconnect provider that will be configured.
> +                */
> +               if (!prev || next->provider != prev->provider)
> +                       continue;

I wonder if we should explicitly ban paths where we bounce through an
odd number of nodes within one provider. Otherwise, set() won't be
called on all nodes in the path. Or, if we wanted to support those
kinds of topologies, you could call set with one of the nodes set to
NULL to represent either the ingress or egress bandwidth to this NoC.
This doesn't necessarily need to be addressed in this series, but is
something that other providers might bump into when implementing their
topologies.
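
To make the second option concrete, a rough what-if sketch (not part of
this patch; the ->set() callbacks in this series are not expected to cope
with a NULL node today) might look like:

	if (!prev) {
		continue;
	} else if (next->provider != prev->provider) {
		/* crossing a provider boundary: report egress and ingress */
		ret = prev->provider->set(prev, NULL);
		if (!ret)
			ret = next->provider->set(NULL, next);
		if (ret)
			goto out;
		continue;
	}

Whether that is worth the extra provider-side complexity is exactly the
open question above.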

> +
> +               p = next->provider;
> +
> +               /* set the constraints */
> +               ret = p->set(prev, next);
> +               if (ret)
> +                       goto out;
> +       }
> +out:
> +       return ret;
> +}
> +
...
> +
> +/**
> + * icc_link_destroy() - destroy a link between two nodes
> + * @src: pointer to source node
> + * @dst: pointer to destination node
> + *
> + * Return: 0 on success, or an error code otherwise
> + */
> +int icc_link_destroy(struct icc_node *src, struct icc_node *dst)
> +{
> +       struct icc_node **new;
> +       size_t slot;
> +       int ret = 0;
> +
> +       if (IS_ERR_OR_NULL(src))
> +               return -EINVAL;
> +
> +       if (IS_ERR_OR_NULL(dst))
> +               return -EINVAL;
> +
> +       mutex_lock(&icc_lock);
> +
> +       for (slot = 0; slot < src->num_links; slot++)
> +               if (src->links[slot] == dst)
> +                       break;
> +
> +       if (WARN_ON(slot == src->num_links)) {
> +               ret = -ENXIO;
> +               goto out;
> +       }
> +
> +       src->links[slot] = src->links[--src->num_links];
> +
> +       new = krealloc(src->links,
> +                      (src->num_links) * sizeof(*src->links),

These parentheses aren't needed.

> +                      GFP_KERNEL);
> +       if (new)
> +               src->links = new;
> +
> +out:
> +       mutex_unlock(&icc_lock);
> +
> +       return ret;
> +}
> +EXPORT_SYMBOL_GPL(icc_link_destroy);
> +
> +/**
> + * icc_node_add() - add interconnect node to interconnect provider
> + * @node: pointer to the interconnect node
> + * @provider: pointer to the interconnect provider
> + */
> +void icc_node_add(struct icc_node *node, struct icc_provider *provider)
> +{
> +       mutex_lock(&icc_lock);
> +
> +       node->provider = provider;
> +       list_add_tail(&node->node_list, &provider->nodes);
> +
> +       mutex_unlock(&icc_lock);
> +}
> +EXPORT_SYMBOL_GPL(icc_node_add);
> +
> +/**
> + * icc_node_del() - delete interconnect node from interconnect provider
> + * @node: pointer to the interconnect node
> + */
> +void icc_node_del(struct icc_node *node)
> +{
> +       mutex_lock(&icc_lock);
> +
> +       list_del(&node->node_list);
> +
> +       mutex_unlock(&icc_lock);
> +}
> +EXPORT_SYMBOL_GPL(icc_node_del);
> +
> +/**
> + * icc_provider_add() - add a new interconnect provider
> + * @provider: the interconnect provider that will be added into topology
> + *
> + * Return: 0 on success, or an error code otherwise
> + */
> +int icc_provider_add(struct icc_provider *provider)
> +{
> +       if (WARN_ON(!provider->set))
> +               return -EINVAL;
> +
> +       mutex_lock(&icc_lock);
> +
> +       INIT_LIST_HEAD(&provider->nodes);
> +       list_add_tail(&provider->provider_list, &icc_providers);
> +
> +       mutex_unlock(&icc_lock);
> +
> +       dev_dbg(provider->dev, "interconnect provider added to topology\n");
> +
> +       return 0;
> +}
> +EXPORT_SYMBOL_GPL(icc_provider_add);
> +
> +/**
> + * icc_provider_del() - delete previously added interconnect provider
> + * @provider: the interconnect provider that will be removed from topology
> + *
> + * Return: 0 on success, or an error code otherwise
> + */
> +int icc_provider_del(struct icc_provider *provider)
> +{
> +       mutex_lock(&icc_lock);
> +       if (provider->users) {
> +               pr_warn("interconnect provider still has %d users\n",
> +                       provider->users);
> +               mutex_unlock(&icc_lock);
> +               return -EBUSY;
> +       }
> +
> +       if (!list_empty(&provider->nodes)) {
> +               pr_warn("interconnect provider still has nodes\n");
> +               mutex_unlock(&icc_lock);
> +               return -EBUSY;
> +       }
> +
> +       list_del(&provider->provider_list);
> +       mutex_unlock(&icc_lock);
> +
> +       return 0;
> +}
> +EXPORT_SYMBOL_GPL(icc_provider_del);
> +
> +MODULE_AUTHOR("Georgi Djakov <georgi.djakov@linaro.org");

This is missing the closing >

> +MODULE_DESCRIPTION("Interconnect Driver Core");
> +MODULE_LICENSE("GPL v2");
...
> diff --git a/include/linux/interconnect.h b/include/linux/interconnect.h
> new file mode 100644
> index 000000000000..04b2966ded9f
> --- /dev/null
> +++ b/include/linux/interconnect.h
> @@ -0,0 +1,51 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (c) 2018, Linaro Ltd.
> + * Author: Georgi Djakov <georgi.djakov@linaro.org>
> + */
> +
> +#ifndef __LINUX_INTERCONNECT_H
> +#define __LINUX_INTERCONNECT_H
> +
> +#include <linux/mutex.h>
> +#include <linux/types.h>
> +
> +/* macros for converting to icc units */
> +#define bps_to_icc(x)  (1)
> +#define kBps_to_icc(x) (x)
> +#define MBps_to_icc(x) (x * 1000)
> +#define GBps_to_icc(x) (x * 1000 * 1000)
> +#define kbps_to_icc(x) (x / 8 + ((x) % 8 ? 1 : 0))
> +#define Mbps_to_icc(x) (x * 1000 / 8 )

Remove extra space before )

> +#define Gbps_to_icc(x) (x * 1000 * 1000 / 8)
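
(As a quick sanity check of the conversions: the framework's unit is kBps,
so for example MBps_to_icc(100) expands to 100000 and Mbps_to_icc(80)
to 10000.)
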
> +
> +struct icc_path;
> +struct device;
> +
> +#if IS_ENABLED(CONFIG_INTERCONNECT)
> +
> +struct icc_path *icc_get(struct device *dev, const int src_id,
> +                        const int dst_id);
> +void icc_put(struct icc_path *path);
> +int icc_set(struct icc_path *path, u32 avg_bw, u32 peak_bw);
> +
> +#else
> +
> +static inline struct icc_path *icc_get(struct device *dev, const int src_id,
> +                                      const int dst_id)
> +{
> +       return NULL;
> +}
> +
> +static inline void icc_put(struct icc_path *path)
> +{
> +}
> +
> +static inline int icc_set(struct icc_path *path, u32 avg_bw, u32 peak_bw)
> +{
> +       return 0;

Should this really succeed?

> +}
> +
> +#endif /* CONFIG_INTERCONNECT */
> +
> +#endif /* __LINUX_INTERCONNECT_H */

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v10 3/7] interconnect: Allow endpoints translation via DT
  2018-11-27 18:03 ` [PATCH v10 3/7] interconnect: Allow endpoints translation via DT Georgi Djakov
@ 2018-12-01  0:38   ` Evan Green
  2018-12-05 15:59     ` Georgi Djakov
  0 siblings, 1 reply; 34+ messages in thread
From: Evan Green @ 2018-12-01  0:38 UTC (permalink / raw)
  To: georgi.djakov
  Cc: linux-pm, gregkh, rjw, robh+dt, Michael Turquette, khilman,
	Vincent Guittot, Saravana Kannan, Bjorn Andersson, amit.kucheria,
	seansw, daidavid1, mark.rutland, lorenzo.pieralisi,
	Alexandre Bailon, maxime.ripard, Arnd Bergmann, thierry.reding,
	ksitaraman, sanjayc, devicetree, linux-kernel, linux-arm-kernel,
	linux-arm-msm, linux-tegra

On Tue, Nov 27, 2018 at 10:04 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
>
> Currently we support only platform data for specifying the interconnect
> endpoints. As now the endpoints are hard-coded into the consumer driver
> this may lead to complications when a single driver is used by multiple
> SoCs, which may have different interconnect topology.
> To avoid cluttering the consumer drivers, introduce a translation function
> to help us get the board specific interconnect data from device-tree.
>
> Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
> ---
>  drivers/interconnect/core.c           | 156 ++++++++++++++++++++++++++
>  include/linux/interconnect-provider.h |  17 +++
>  include/linux/interconnect.h          |   7 ++
>  3 files changed, 180 insertions(+)
>
> diff --git a/drivers/interconnect/core.c b/drivers/interconnect/core.c
> index 3ae8e5a58969..ebc42ef9fa46 100644
> --- a/drivers/interconnect/core.c
> +++ b/drivers/interconnect/core.c
> @@ -15,6 +15,7 @@
>  #include <linux/module.h>
>  #include <linux/mutex.h>
>  #include <linux/slab.h>
> +#include <linux/of.h>
>  #include <linux/overflow.h>
>
>  static DEFINE_IDR(icc_idr);
> @@ -193,6 +194,159 @@ static int apply_constraints(struct icc_path *path)
>         return ret;
>  }
>
> +/* of_icc_xlate_onecell() - Xlate function using a single index.

It would be nice if translate were spelled out instead of xlate in the
description portion (throughout).

> + * @spec: OF phandle args to map into an interconnect node.
> + * @data: private data (pointer to struct icc_onecell_data)
> + *
> + * This is a generic xlate function that can be used to model simple
> + * interconnect providers that have one device tree node and provide
> + * multiple interconnect nodes. A single cell is used as an index into
> + * an array of icc nodes specified in the icc_onecell_data struct when
> + * registering the provider.
> + */
> +struct icc_node *of_icc_xlate_onecell(struct of_phandle_args *spec,
> +                                     void *data)
> +{
> +       struct icc_onecell_data *icc_data = data;
> +       unsigned int idx = spec->args[0];
> +
> +       if (idx >= icc_data->num_nodes) {
> +               pr_err("%s: invalid index %u\n", __func__, idx);
> +               return ERR_PTR(-EINVAL);
> +       }
> +
> +       return icc_data->nodes[idx];
> +}
> +EXPORT_SYMBOL_GPL(of_icc_xlate_onecell);
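
(Side note: a hedged provider-side sketch of wiring this up, with the node
setup left as a placeholder, since the onecell data carries a flexible
array and is typically allocated at probe time:

	struct icc_onecell_data *data;

	data = devm_kzalloc(dev, struct_size(data, nodes, num_nodes),
			    GFP_KERNEL);
	if (!data)
		return -ENOMEM;
	data->num_nodes = num_nodes;
	/* ... data->nodes[i] = node; for each node as it is created ... */

	provider->xlate = of_icc_xlate_onecell;
	provider->data = data;

so the single cell in the DT specifier simply indexes into that array.)
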
> +
> +/**
> + * of_icc_get_from_provider() - Look-up interconnect node
> + * @spec: OF phandle args to use for look-up
> + *
> + * Looks for interconnect provider under the node specified by @spec and if
> + * found, uses xlate function of the provider to map phandle args to node.
> + *
> + * Returns a valid pointer to struct icc_node on success or ERR_PTR()
> + * on failure.
> + */
> +static struct icc_node *of_icc_get_from_provider(struct of_phandle_args *spec)
> +{
> +       struct icc_node *node = ERR_PTR(-EPROBE_DEFER);
> +       struct icc_provider *provider;
> +
> +       if (!spec)
> +               return ERR_PTR(-EINVAL);
> +
> +       mutex_lock(&icc_lock);
> +       list_for_each_entry(provider, &icc_providers, provider_list) {
> +               if (provider->dev->of_node == spec->np)
> +                       node = provider->xlate(spec, provider->data);
> +               if (!IS_ERR(node))
> +                       break;
> +       }
> +       mutex_unlock(&icc_lock);
> +
> +       return node;
> +}
> +
> +/**
> + * of_icc_get() - get a path handle from a DT node based on name
> + * @dev: device pointer for the consumer device
> + * @name: interconnect path name
> + *
> + * This function will search for a path two endpoints and return an

path between two endpoints

> + * icc_path handle on success. Use icc_put() to release constraints when
> + * they are not needed anymore.
> + * If the interconnect API is disabled, NULL is returned and the consumer
> + * drivers will still build. Drivers are free to handle this specifically,
> + * but they don't have to. NULL is also returned when the "interconnects"

I'm not sure the sentence starting with "Drivers are free" adds much.
Also, you mention that null is returned when the interconnect API is
disabled both above and below. We probably only need it once.

> + * DT property is missing.
> + *
> + * Return: icc_path pointer on success or ERR_PTR() on error. NULL is returned
> + * when the API is disabled or the "interconnects" DT property is missing.
> + */
> +struct icc_path *of_icc_get(struct device *dev, const char *name)
> +{
> +       struct icc_path *path = ERR_PTR(-EPROBE_DEFER);
> +       struct icc_node *src_node, *dst_node;
> +       struct device_node *np = NULL;
> +       struct of_phandle_args src_args, dst_args;
> +       int idx = 0;
> +       int ret;
> +
> +       if (!dev || !dev->of_node)
> +               return ERR_PTR(-ENODEV);
> +
> +       np = dev->of_node;
> +
> +       /*
> +        * When the consumer DT node do not have "interconnects" property
> +        * return a NULL path to skip setting constraints.
> +        */
> +       if (!of_find_property(np, "interconnects", NULL))
> +               return NULL;
> +
> +       /*
> +        * We use a combination of phandle and specifier for endpoint. For now
> +        * lets support only global ids and extend this is the future if needed

s/is the future/in the future/

> +        * without breaking DT compatibility.
> +        */
> +       if (name) {
> +               idx = of_property_match_string(np, "interconnect-names", name);
> +               if (idx < 0)
> +                       return ERR_PTR(idx);
> +       }
> +
> +       ret = of_parse_phandle_with_args(np, "interconnects",
> +                                        "#interconnect-cells", idx * 2,
> +                                        &src_args);
> +       if (ret)
> +               return ERR_PTR(ret);
> +
> +       of_node_put(src_args.np);
> +
> +       if (!src_args.args_count || src_args.args_count > 1)

This could be src_args.args_count != 1

> +               return ERR_PTR(-EINVAL);
> +
> +       ret = of_parse_phandle_with_args(np, "interconnects",
> +                                        "#interconnect-cells", idx * 2 + 1,
> +                                        &dst_args);
> +       if (ret)
> +               return ERR_PTR(ret);
> +
> +       of_node_put(dst_args.np);
> +
> +       if (!dst_args.args_count || dst_args.args_count > 1)

Similarly, this could be dst_args.args_count != 1

> +               return ERR_PTR(-EINVAL);
> +
> +       src_node = of_icc_get_from_provider(&src_args);
> +
> +       if (IS_ERR(src_node)) {
> +               if (PTR_ERR(src_node) != -EPROBE_DEFER)
> +                       dev_err(dev, "error finding src node: %ld\n",
> +                               PTR_ERR(src_node));
> +               return ERR_CAST(src_node);
> +       }
> +
> +       dst_node = of_icc_get_from_provider(&dst_args);
> +
> +       if (IS_ERR(dst_node)) {
> +               if (PTR_ERR(dst_node) != -EPROBE_DEFER)
> +                       dev_err(dev, "error finding dst node: %ld\n",
> +                               PTR_ERR(dst_node));
> +               return ERR_CAST(dst_node);
> +       }
> +
> +       mutex_lock(&icc_lock);
> +       path = path_find(dev, src_node, dst_node);
> +       if (IS_ERR(path))
> +               dev_err(dev, "%s: invalid path=%ld\n", __func__, PTR_ERR(path));
> +       mutex_unlock(&icc_lock);
> +
> +       return path;
> +}
> +EXPORT_SYMBOL_GPL(of_icc_get);
> +
>  /**
>   * icc_set() - set constraints on an interconnect path between two endpoints
>   * @path: reference to the path returned by icc_get()
> @@ -521,6 +675,8 @@ int icc_provider_add(struct icc_provider *provider)
>  {
>         if (WARN_ON(!provider->set))
>                 return -EINVAL;
> +       if (WARN_ON(!provider->xlate))
> +               return -EINVAL;
>
>         mutex_lock(&icc_lock);
>
> diff --git a/include/linux/interconnect-provider.h b/include/linux/interconnect-provider.h
> index 78208a754181..63caccadc2db 100644
> --- a/include/linux/interconnect-provider.h
> +++ b/include/linux/interconnect-provider.h
> @@ -12,6 +12,21 @@
>  #define icc_units_to_bps(bw)  ((bw) * 1000ULL)
>
>  struct icc_node;
> +struct of_phandle_args;
> +
> +/**
> + * struct icc_onecell_data - driver data for onecell interconnect providers
> + *
> + * @num_nodes: number of nodes in this device
> + * @nodes: array of pointers to the nodes in this device
> + */
> +struct icc_onecell_data {
> +       unsigned int num_nodes;
> +       struct icc_node *nodes[];
> +};
> +
> +struct icc_node *of_icc_xlate_onecell(struct of_phandle_args *spec,
> +                                     void *data);
>
>  /**
>   * struct icc_provider - interconnect provider (controller) entity that might
> @@ -21,6 +36,7 @@ struct icc_node;
>   * @nodes: internal list of the interconnect provider nodes
>   * @set: pointer to device specific set operation function
>   * @aggregate: pointer to device specific aggregate operation function
> + * @xlate: provider-specific callback for mapping nodes from phandle arguments
>   * @dev: the device this interconnect provider belongs to
>   * @users: count of active users
>   * @data: pointer to private data
> @@ -31,6 +47,7 @@ struct icc_provider {
>         int (*set)(struct icc_node *src, struct icc_node *dst);
>         int (*aggregate)(struct icc_node *node, u32 avg_bw, u32 peak_bw,
>                          u32 *agg_avg, u32 *agg_peak);
> +       struct icc_node* (*xlate)(struct of_phandle_args *spec, void *data);
>         struct device           *dev;
>         int                     users;
>         void                    *data;
> diff --git a/include/linux/interconnect.h b/include/linux/interconnect.h
> index 04b2966ded9f..41f7ecc2f20f 100644
> --- a/include/linux/interconnect.h
> +++ b/include/linux/interconnect.h
> @@ -26,6 +26,7 @@ struct device;
>
>  struct icc_path *icc_get(struct device *dev, const int src_id,
>                          const int dst_id);
> +struct icc_path *of_icc_get(struct device *dev, const char *name);
>  void icc_put(struct icc_path *path);
>  int icc_set(struct icc_path *path, u32 avg_bw, u32 peak_bw);
>
> @@ -37,6 +38,12 @@ static inline struct icc_path *icc_get(struct device *dev, const int src_id,
>         return NULL;
>  }
>
> +static inline struct icc_path *of_icc_get(struct device *dev,
> +                                         const char *name)
> +{
> +       return NULL;
> +}
> +
>  static inline void icc_put(struct icc_path *path)
>  {
>  }

With these nits fixed:

Reviewed-by: Evan Green <evgreen@chromium.org>

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v10 2/7] dt-bindings: Introduce interconnect binding
  2018-11-27 18:03 ` [PATCH v10 2/7] dt-bindings: Introduce interconnect binding Georgi Djakov
@ 2018-12-01  0:38   ` Evan Green
  0 siblings, 0 replies; 34+ messages in thread
From: Evan Green @ 2018-12-01  0:38 UTC (permalink / raw)
  To: georgi.djakov
  Cc: linux-pm, gregkh, rjw, robh+dt, Michael Turquette, khilman,
	Vincent Guittot, Saravana Kannan, Bjorn Andersson, amit.kucheria,
	seansw, daidavid1, mark.rutland, lorenzo.pieralisi,
	Alexandre Bailon, maxime.ripard, Arnd Bergmann, thierry.reding,
	ksitaraman, sanjayc, devicetree, linux-kernel, linux-arm-kernel,
	linux-arm-msm, linux-tegra

On Tue, Nov 27, 2018 at 10:03 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
>
> This binding is intended to represent the relations between the interconnect
> controllers (providers) and consumer device nodes. It will allow creating links
> between consumers and interconnect paths (exposed by interconnect providers).
>
> Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
> ---
>  .../bindings/interconnect/interconnect.txt    | 60 +++++++++++++++++++
>  1 file changed, 60 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/interconnect/interconnect.txt
>
> diff --git a/Documentation/devicetree/bindings/interconnect/interconnect.txt b/Documentation/devicetree/bindings/interconnect/interconnect.txt
> new file mode 100644
> index 000000000000..6775c07e1574
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/interconnect/interconnect.txt
> @@ -0,0 +1,60 @@
> +Interconnect Provider Device Tree Bindings
> +=========================================
> +
> +The purpose of this document is to define a common set of generic interconnect
> +providers/consumers properties.
> +
> +
> += interconnect providers =
> +
> +The interconnect provider binding is intended to represent the interconnect
> +controllers in the system. Each provider registers a set of interconnect
> +nodes, which expose the interconnect related capabilities of the interconnect
> +to consumer drivers. These capabilities can be throughput, latency, priority
> +etc. The consumer drivers set constraints on interconnect path (or endpoints)
> +depending on the use case. Interconnect providers can also be interconnect
> +consumers, such as in the case where two network-on-chip fabrics interface
> +directly.
> +
> +Required properties:
> +- compatible : contains the interconnect provider compatible string
> +- #interconnect-cells : number of cells in a interconnect specifier needed to
> +                       encode the interconnect node id
> +
> +Example:
> +
> +               snoc: interconnect@580000 {
> +                       compatible = "qcom,msm8916-snoc";
> +                       #interconnect-cells = <1>;
> +                       reg = <0x580000 0x14000>;
> +                       clock-names = "bus_clk", "bus_a_clk";
> +                       clocks = <&rpmcc RPM_SMD_SNOC_CLK>,
> +                                <&rpmcc RPM_SMD_SNOC_A_CLK>;
> +               };
> +
> +
> += interconnect consumers =
> +
> +The interconnect consumers are device nodes which dynamically express their
> +bandwidth requirements along interconnect paths they are connected to. There
> +can be multiple interconnect providers on a SoC and the consumer may consume
> +multiple paths from different providers depending on use case and the
> +components it has to interact with.
> +
> +Required properties:
> +interconnects : Pairs of phandles and interconnect provider specifier to denote
> +               the edge source and destination ports of the interconnect path.
> +
> +Optional properties:
> +interconnect-names : List of interconnect path name strings sorted in the same
> +                    order as the interconnects property. Consumers drivers will use
> +                    interconnect-names to match interconnect paths with interconnect
> +                    specifier pairs.
> +
> +Example:
> +
> +       sdhci@7864000 {
> +               ...
> +               interconnects = <&pnoc MASTER_SDCC_1 &bimc SLAVE_EBI_CH0>;
> +               interconnect-names = "sdhc-ddr";
> +       };

Reviewed-by: Evan Green <evgreen@chromium.org>

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v10 5/7] interconnect: qcom: Add sdm845 interconnect provider driver
  2018-11-27 18:03 ` [PATCH v10 5/7] interconnect: qcom: Add sdm845 interconnect provider driver Georgi Djakov
@ 2018-12-01  0:39   ` Evan Green
  2018-12-05 16:00     ` Georgi Djakov
  0 siblings, 1 reply; 34+ messages in thread
From: Evan Green @ 2018-12-01  0:39 UTC (permalink / raw)
  To: georgi.djakov
  Cc: linux-pm, gregkh, rjw, robh+dt, Michael Turquette, khilman,
	Vincent Guittot, Saravana Kannan, Bjorn Andersson, amit.kucheria,
	seansw, daidavid1, mark.rutland, lorenzo.pieralisi,
	Alexandre Bailon, maxime.ripard, Arnd Bergmann, thierry.reding,
	ksitaraman, sanjayc, devicetree, linux-kernel, linux-arm-kernel,
	linux-arm-msm, linux-tegra

On Tue, Nov 27, 2018 at 10:04 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
>
> From: David Dai <daidavid1@codeaurora.org>
>
> Introduce Qualcomm SDM845 specific provider driver using the
> interconnect framework.
>
> Signed-off-by: David Dai <daidavid1@codeaurora.org>
> Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
> ---
>  .../bindings/interconnect/qcom,sdm845.txt     |  24 +
>  drivers/interconnect/Kconfig                  |   5 +
>  drivers/interconnect/Makefile                 |   1 +
>  drivers/interconnect/qcom/Kconfig             |  13 +
>  drivers/interconnect/qcom/Makefile            |   5 +
>  drivers/interconnect/qcom/sdm845.c            | 836 ++++++++++++++++++
>  .../dt-bindings/interconnect/qcom,sdm845.h    | 143 +++
>  7 files changed, 1027 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/interconnect/qcom,sdm845.txt
>  create mode 100644 drivers/interconnect/qcom/Kconfig
>  create mode 100644 drivers/interconnect/qcom/Makefile
>  create mode 100644 drivers/interconnect/qcom/sdm845.c
>  create mode 100644 include/dt-bindings/interconnect/qcom,sdm845.h
>
> diff --git a/Documentation/devicetree/bindings/interconnect/qcom,sdm845.txt b/Documentation/devicetree/bindings/interconnect/qcom,sdm845.txt
> new file mode 100644
> index 000000000000..d45150e99665
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/interconnect/qcom,sdm845.txt
> @@ -0,0 +1,24 @@
> +Qualcomm SDM845 Network-On-Chip interconnect driver binding
> +-----------------------------------------------------------
> +
> +SDM845 interconnect providers support system bandwidth requirements through
> +RPMh hardware accelerators known as Bus Clock Manager(BCM). The provider is able
> +to communicate with the BCM through the Resource State Coordinator(RSC)
> +associated with each execution environment. Provider nodes must reside within
> +an RPMh device node pertaining to their RSC and each provider maps to
> +a single RPMh resource.
> +
> +Required properties :
> +- compatible : shall contain only one of the following:
> +                       "qcom,sdm845-rsc-hlos"

I wonder if maybe hlos isn't necessary. Unless you somehow imagine
secure mode would have a device tree entry in here as well? Probably
not.

> +- #interconnect-cells : should contain 1
> +
> +Examples:
> +
> +apps_rsc: rsc {
> +               qnoc: qnoc-rsc-hlos {
> +                       compatible = "qcom,sdm845-rsc-hlos";
> +                       #interconnect-cells = <1>;
> +               };
> +};
> +
...
> diff --git a/drivers/interconnect/qcom/sdm845.c b/drivers/interconnect/qcom/sdm845.c
> new file mode 100644
> index 000000000000..1678de91ca52
> --- /dev/null
> +++ b/drivers/interconnect/qcom/sdm845.c
> @@ -0,0 +1,836 @@
...
> +
> +static void tcs_list_gen(struct list_head *bcm_list,
> +                        struct tcs_cmd *tcs_list, int *n)

We could make the prototype of this function be:

static void tcs_list_gen(struct list_head *bcm_list,
        struct tcs_cmd tcs_list[SDM845_MAX_VCD], int n[SDM845_MAX_VCD])

which would catch errors if somebody later passed in an array that
wasn't the right size, since we blindly memset below.

> +{
> +       struct qcom_icc_bcm *bcm;
> +       bool commit;
> +       size_t idx = 0, batch = 0, cur_vcd_size = 0;
> +
> +       memset(n, 0, sizeof(int) * SDM845_MAX_VCD);
> +
> +       list_for_each_entry(bcm, bcm_list, list) {
> +               commit = false;
> +               cur_vcd_size++;
> +               if ((list_is_last(&bcm->list, bcm_list)) ||
> +                   bcm->aux_data.vcd != list_next_entry(bcm, list)->aux_data.vcd) {
> +                       commit = true;
> +                       cur_vcd_size = 0;
> +               }
> +               tcs_cmd_gen(&tcs_list[idx], bcm->vote_x, bcm->vote_y,
> +                           bcm->addr, commit);
> +               idx++;
> +               n[batch]++;
> +               /*
> +                * Batch the BCMs in such a way that we do not split them in
> +                * multiple payloads when they are under the same VCD. This is
> +                * to ensure that every BCM is committed since we only set the
> +                * commit bit on the last BCM request of every VCD.
> +                */
> +               if (n[batch] >= MAX_RPMH_PAYLOAD) {
> +                       if (!commit) {
> +                               n[batch] -= cur_vcd_size;
> +                               n[batch + 1] = cur_vcd_size;
> +                       }
> +                       batch++;
> +               }
> +       }
> +}
> +

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v10 6/7] arm64: dts: sdm845: Add interconnect provider DT nodes
  2018-11-27 18:03 ` [PATCH v10 6/7] arm64: dts: sdm845: Add interconnect provider DT nodes Georgi Djakov
@ 2018-12-01  0:39   ` Evan Green
  2018-12-05 16:01     ` Georgi Djakov
  0 siblings, 1 reply; 34+ messages in thread
From: Evan Green @ 2018-12-01  0:39 UTC (permalink / raw)
  To: georgi.djakov
  Cc: linux-pm, gregkh, rjw, robh+dt, Michael Turquette, khilman,
	Vincent Guittot, Saravana Kannan, Bjorn Andersson, amit.kucheria,
	seansw, daidavid1, mark.rutland, lorenzo.pieralisi,
	Alexandre Bailon, maxime.ripard, Arnd Bergmann, thierry.reding,
	ksitaraman, sanjayc, devicetree, linux-kernel, linux-arm-kernel,
	linux-arm-msm, linux-tegra

On Tue, Nov 27, 2018 at 10:04 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
>
> From: David Dai <daidavid1@codeaurora.org>
>
> Add RSC (Resource State Coordinator) provider
> dictating network-on-chip interconnect bus performance
> found on SDM845-based platforms.
>
> Signed-off-by: David Dai <daidavid1@codeaurora.org>
> Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
> ---
>  arch/arm64/boot/dts/qcom/sdm845.dtsi | 5 +++++
>  1 file changed, 5 insertions(+)
>
> diff --git a/arch/arm64/boot/dts/qcom/sdm845.dtsi b/arch/arm64/boot/dts/qcom/sdm845.dtsi
> index b72bdb0a31a5..856d33604e9c 100644
> --- a/arch/arm64/boot/dts/qcom/sdm845.dtsi
> +++ b/arch/arm64/boot/dts/qcom/sdm845.dtsi
> @@ -1324,6 +1324,11 @@
>                                 compatible = "qcom,sdm845-rpmh-clk";
>                                 #clock-cells = <1>;
>                         };
> +
> +                       qnoc: qnoc {
> +                               compatible = "qcom,sdm845-rsc-hlos";
> +                               #interconnect-cells = <1>;
> +                       };

Should we alphabetize this node above rpmhcc?

>                 };
>
>                 intc: interrupt-controller@17a00000 {

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v10 1/7] interconnect: Add generic on-chip interconnect API
  2018-12-01  0:38   ` Evan Green
@ 2018-12-05 15:57     ` Georgi Djakov
  0 siblings, 0 replies; 34+ messages in thread
From: Georgi Djakov @ 2018-12-05 15:57 UTC (permalink / raw)
  To: Evan Green
  Cc: linux-pm, gregkh, rjw, robh+dt, Michael Turquette, khilman,
	Vincent Guittot, Saravana Kannan, Bjorn Andersson, amit.kucheria,
	seansw, daidavid1, mark.rutland, lorenzo.pieralisi,
	Alexandre Bailon, maxime.ripard, Arnd Bergmann, thierry.reding,
	ksitaraman, sanjayc, devicetree, linux-kernel, linux-arm-kernel,
	linux-arm-msm, linux-tegra

Hi Evan,

On 12/1/18 02:38, Evan Green wrote:
> On Tue, Nov 27, 2018 at 10:03 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
>>
>> This patch introduces a new API to get requirements and configure the
>> interconnect buses across the entire chipset to fit with the current
>> demand.
>>
>> The API is using a consumer/provider-based model, where the providers are
>> the interconnect buses and the consumers could be various drivers.
>> The consumers request interconnect resources (path) between endpoints and
>> set the desired constraints on this data flow path. The providers receive
>> requests from consumers and aggregate these requests for all master-slave
>> pairs on that path. Then the providers configure each participating in the
>> topology node according to the requested data flow path, physical links and
> 
> Strange word ordering. Consider something like: "Then the providers
> configure each node participating in the topology"
> 
> ...Or a slightly different flavor: "Then the providers configure each
> node along the path to support a bandwidth that satisfies all
> bandwidth requests that cross through that node".
> 

This sounds better!

>> constraints. The topology could be complicated and multi-tiered and is SoC
>> specific.
>>
>> Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
>> Reviewed-by: Evan Green <evgreen@chromium.org>
> 
> This already has my reviewed by, and I still stand by it, but I
> couldn't help finding some nits anyway. Sorry :)

Thanks for finding them!

>> ---
>>  Documentation/interconnect/interconnect.rst |  94 ++++
>>  drivers/Kconfig                             |   2 +
>>  drivers/Makefile                            |   1 +
>>  drivers/interconnect/Kconfig                |  10 +
>>  drivers/interconnect/Makefile               |   5 +
>>  drivers/interconnect/core.c                 | 569 ++++++++++++++++++++
>>  include/linux/interconnect-provider.h       | 125 +++++
>>  include/linux/interconnect.h                |  51 ++
>>  8 files changed, 857 insertions(+)
>>  create mode 100644 Documentation/interconnect/interconnect.rst
>>  create mode 100644 drivers/interconnect/Kconfig
>>  create mode 100644 drivers/interconnect/Makefile
>>  create mode 100644 drivers/interconnect/core.c
>>  create mode 100644 include/linux/interconnect-provider.h
>>  create mode 100644 include/linux/interconnect.h
>>
[..]
>> +/*
>> + * We want the path to honor all bandwidth requests, so the average and peak
>> + * bandwidth requirements from each consumer are aggregated at each node.
>> + * The aggregation is platform specific, so each platform can customize it by
>> + * implementing it's own aggregate() function.
> 
> Grammar police: remove the apostrophe in its.
> 

Oops.

>> + */
>> +
>> +static int aggregate_requests(struct icc_node *node)
>> +{
>> +       struct icc_provider *p = node->provider;
>> +       struct icc_req *r;
>> +
>> +       node->avg_bw = 0;
>> +       node->peak_bw = 0;
>> +
>> +       hlist_for_each_entry(r, &node->req_list, req_node)
>> +               p->aggregate(node, r->avg_bw, r->peak_bw,
>> +                            &node->avg_bw, &node->peak_bw);
>> +
>> +       return 0;
>> +}
>> +
>> +static int apply_constraints(struct icc_path *path)
>> +{
>> +       struct icc_node *next, *prev = NULL;
>> +       int ret = -EINVAL;
>> +       int i;
>> +
>> +       for (i = 0; i < path->num_nodes; i++, prev = next) {
>> +               struct icc_provider *p;
>> +
>> +               next = path->reqs[i].node;
>> +               /*
>> +                * Both endpoints should be valid master-slave pairs of the
>> +                * same interconnect provider that will be configured.
>> +                */
>> +               if (!prev || next->provider != prev->provider)
>> +                       continue;
> 
> I wonder if we should explicitly ban paths where we bounce through an
> odd number of nodes within one provider. Otherwise, set() won't be
> called on all nodes in the path. Or, if we wanted to support those
> kinds of topologies, you could call set with one of the nodes set to
> NULL to represent either the ingress or egress bandwidth to this NoC.
> This doesn't necessarily need to be addressed in this series, but is
> something that other providers might bump into when implementing their
> topologies.
> 

Yes, we can do something like this. But I would prefer that we first add
support for more platforms and then work something out according to the
requirements.

[..]

>> +       new = krealloc(src->links,
>> +                      (src->num_links) * sizeof(*src->links),
> 
> These parentheses aren't needed.

Sure!

>> +                      GFP_KERNEL);
>> +       if (new)
>> +               src->links = new;
>> +

[..]

>> +
>> +MODULE_AUTHOR("Georgi Djakov <georgi.djakov@linaro.org");
> 
> This is missing the closing >

Thanks!

>> +MODULE_DESCRIPTION("Interconnect Driver Core");
>> +MODULE_LICENSE("GPL v2");
> ...
>> diff --git a/include/linux/interconnect.h b/include/linux/interconnect.h
>> new file mode 100644
>> index 000000000000..04b2966ded9f
>> --- /dev/null
>> +++ b/include/linux/interconnect.h
>> @@ -0,0 +1,51 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright (c) 2018, Linaro Ltd.
>> + * Author: Georgi Djakov <georgi.djakov@linaro.org>
>> + */
>> +
>> +#ifndef __LINUX_INTERCONNECT_H
>> +#define __LINUX_INTERCONNECT_H
>> +
>> +#include <linux/mutex.h>
>> +#include <linux/types.h>
>> +
>> +/* macros for converting to icc units */
>> +#define bps_to_icc(x)  (1)
>> +#define kBps_to_icc(x) (x)
>> +#define MBps_to_icc(x) (x * 1000)
>> +#define GBps_to_icc(x) (x * 1000 * 1000)
>> +#define kbps_to_icc(x) (x / 8 + ((x) % 8 ? 1 : 0))
>> +#define Mbps_to_icc(x) (x * 1000 / 8 )
> 
> Remove extra space before )

Done.

>> +#define Gbps_to_icc(x) (x * 1000 * 1000 / 8)
>> +
>> +struct icc_path;
>> +struct device;
>> +
>> +#if IS_ENABLED(CONFIG_INTERCONNECT)
>> +
>> +struct icc_path *icc_get(struct device *dev, const int src_id,
>> +                        const int dst_id);
>> +void icc_put(struct icc_path *path);
>> +int icc_set(struct icc_path *path, u32 avg_bw, u32 peak_bw);
>> +
>> +#else
>> +
>> +static inline struct icc_path *icc_get(struct device *dev, const int src_id,
>> +                                      const int dst_id)
>> +{
>> +       return NULL;
>> +}
>> +
>> +static inline void icc_put(struct icc_path *path)
>> +{
>> +}
>> +
>> +static inline int icc_set(struct icc_path *path, u32 avg_bw, u32 peak_bw)
>> +{
>> +       return 0;
> 
> Should this really succeed?

Yes, it should succeed if the framework is not enabled. Drivers should
handle the case where the framework is disabled, in which case icc_get()
or of_icc_get() returns NULL, and based on that decide whether they can
continue without interconnect support.
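
For illustration, a hypothetical consumer-side sketch of that contract
(the path name is taken from the binding example, the bandwidth values
are made up):

	struct icc_path *path;
	int ret;

	path = of_icc_get(dev, "sdhc-ddr");
	if (IS_ERR(path))
		return PTR_ERR(path);	/* includes -EPROBE_DEFER */

	/*
	 * A NULL path is not an error: the framework is disabled or the
	 * device has no "interconnects" DT property. Per the description
	 * above, bandwidth requests then become no-ops.
	 */
	ret = icc_set(path, kBps_to_icc(1000), kBps_to_icc(2000));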

BR,
Georgi

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v10 3/7] interconnect: Allow endpoints translation via DT
  2018-12-01  0:38   ` Evan Green
@ 2018-12-05 15:59     ` Georgi Djakov
  0 siblings, 0 replies; 34+ messages in thread
From: Georgi Djakov @ 2018-12-05 15:59 UTC (permalink / raw)
  To: Evan Green
  Cc: linux-pm, gregkh, rjw, robh+dt, Michael Turquette, khilman,
	Vincent Guittot, Saravana Kannan, Bjorn Andersson, amit.kucheria,
	seansw, daidavid1, mark.rutland, lorenzo.pieralisi,
	Alexandre Bailon, maxime.ripard, Arnd Bergmann, thierry.reding,
	ksitaraman, sanjayc, devicetree, linux-kernel, linux-arm-kernel,
	linux-arm-msm, linux-tegra

Hi Evan,

On 12/1/18 02:38, Evan Green wrote:
> On Tue, Nov 27, 2018 at 10:04 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
>>
>> Currently we support only platform data for specifying the interconnect
>> endpoints. As now the endpoints are hard-coded into the consumer driver
>> this may lead to complications when a single driver is used by multiple
>> SoCs, which may have different interconnect topology.
>> To avoid cluttering the consumer drivers, introduce a translation function
>> to help us get the board specific interconnect data from device-tree.
>>
>> Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
>> ---
>>  drivers/interconnect/core.c           | 156 ++++++++++++++++++++++++++
>>  include/linux/interconnect-provider.h |  17 +++
>>  include/linux/interconnect.h          |   7 ++
>>  3 files changed, 180 insertions(+)
>>
>> diff --git a/drivers/interconnect/core.c b/drivers/interconnect/core.c
>> index 3ae8e5a58969..ebc42ef9fa46 100644
>> --- a/drivers/interconnect/core.c
>> +++ b/drivers/interconnect/core.c
>> @@ -15,6 +15,7 @@
>>  #include <linux/module.h>
>>  #include <linux/mutex.h>
>>  #include <linux/slab.h>
>> +#include <linux/of.h>
>>  #include <linux/overflow.h>
>>
>>  static DEFINE_IDR(icc_idr);
>> @@ -193,6 +194,159 @@ static int apply_constraints(struct icc_path *path)
>>         return ret;
>>  }
>>
>> +/* of_icc_xlate_onecell() - Xlate function using a single index.
> 
> It would be nice if translate were spelled out instead of xlate in the
> description portion (throughout).

Ok, done.

>> + * @spec: OF phandle args to map into an interconnect node.
>> + * @data: private data (pointer to struct icc_onecell_data)
>> + *
>> + * This is a generic xlate function that can be used to model simple
>> + * interconnect providers that have one device tree node and provide
>> + * multiple interconnect nodes. A single cell is used as an index into
>> + * an array of icc nodes specified in the icc_onecell_data struct when
>> + * registering the provider.
>> + */
[..]
>> +/**
>> + * of_icc_get() - get a path handle from a DT node based on name
>> + * @dev: device pointer for the consumer device
>> + * @name: interconnect path name
>> + *
>> + * This function will search for a path two endpoints and return an
> 
> path between two endpoints
> 

Ok.

>> + * icc_path handle on success. Use icc_put() to release constraints when
>> + * they are not needed anymore.
>> + * If the interconnect API is disabled, NULL is returned and the consumer
>> + * drivers will still build. Drivers are free to handle this specifically,
>> + * but they don't have to. NULL is also returned when the "interconnects"
> 
> I'm not sure the sentence starting with "Drivers are free" adds much.
> Also, you mention that null is returned when the interconnect API is
> disabled both above and below. We probably only need it once.
> 

So it is up to the driver how to handle it. If an enabled interconnect
is a hard requirement for the driver to work, it can choose to fail. If
it is optional, the driver can continue successfully and all bandwidth
requests will be silently ignored.

>> + * DT property is missing.
>> + *
>> + * Return: icc_path pointer on success or ERR_PTR() on error. NULL is returned
>> + * when the API is disabled or the "interconnects" DT property is missing.
>> + */
>> +struct icc_path *of_icc_get(struct device *dev, const char *name)
>> +{
>> +       struct icc_path *path = ERR_PTR(-EPROBE_DEFER);
>> +       struct icc_node *src_node, *dst_node;
>> +       struct device_node *np = NULL;
>> +       struct of_phandle_args src_args, dst_args;
>> +       int idx = 0;
>> +       int ret;
>> +
>> +       if (!dev || !dev->of_node)
>> +               return ERR_PTR(-ENODEV);
>> +
>> +       np = dev->of_node;
>> +
>> +       /*
>> +        * When the consumer DT node do not have "interconnects" property
>> +        * return a NULL path to skip setting constraints.
>> +        */
>> +       if (!of_find_property(np, "interconnects", NULL))
>> +               return NULL;
>> +
>> +       /*
>> +        * We use a combination of phandle and specifier for endpoint. For now
>> +        * lets support only global ids and extend this is the future if needed
> 
> s/is the future/in the future/

Ok.

>> +        * without breaking DT compatibility.
>> +        */
>> +       if (name) {
>> +               idx = of_property_match_string(np, "interconnect-names", name);
>> +               if (idx < 0)
>> +                       return ERR_PTR(idx);
>> +       }
>> +
>> +       ret = of_parse_phandle_with_args(np, "interconnects",
>> +                                        "#interconnect-cells", idx * 2,
>> +                                        &src_args);
>> +       if (ret)
>> +               return ERR_PTR(ret);
>> +
>> +       of_node_put(src_args.np);
>> +
>> +       if (!src_args.args_count || src_args.args_count > 1)
> 
> This could be src_args.argc_count != 1
> 
>> +               return ERR_PTR(-EINVAL);
>> +
>> +       ret = of_parse_phandle_with_args(np, "interconnects",
>> +                                        "#interconnect-cells", idx * 2 + 1,
>> +                                        &dst_args);
>> +       if (ret)
>> +               return ERR_PTR(ret);
>> +
>> +       of_node_put(dst_args.np);
>> +
>> +       if (!dst_args.args_count || dst_args.args_count > 1)
> 
> Similarly, this could be dst_args.args_count != 1

Yes, and actually it might be even better if I move these checks to
of_icc_get_from_provider().

>> +               return ERR_PTR(-EINVAL);
>> +
>> +       src_node = of_icc_get_from_provider(&src_args);
>> +
>> +       if (IS_ERR(src_node)) {
>> +               if (PTR_ERR(src_node) != -EPROBE_DEFER)
>> +                       dev_err(dev, "error finding src node: %ld\n",
>> +                               PTR_ERR(src_node));
>> +               return ERR_CAST(src_node);
>> +       }
>> +
>> +       dst_node = of_icc_get_from_provider(&dst_args);
>> +
[..]>
> With these nits fixed:
> 
> Reviewed-by: Evan Green <evgreen@chromium.org>
> 

Thanks,
Georgi

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v10 5/7] interconnect: qcom: Add sdm845 interconnect provider driver
  2018-12-01  0:39   ` Evan Green
@ 2018-12-05 16:00     ` Georgi Djakov
  2018-12-06 21:53       ` David Dai
  0 siblings, 1 reply; 34+ messages in thread
From: Georgi Djakov @ 2018-12-05 16:00 UTC (permalink / raw)
  To: Evan Green
  Cc: linux-pm, gregkh, rjw, robh+dt, Michael Turquette, khilman,
	Vincent Guittot, Saravana Kannan, Bjorn Andersson, amit.kucheria,
	seansw, daidavid1, mark.rutland, lorenzo.pieralisi,
	Alexandre Bailon, maxime.ripard, Arnd Bergmann, thierry.reding,
	ksitaraman, sanjayc, devicetree, linux-kernel, linux-arm-kernel,
	linux-arm-msm, linux-tegra

Hi Evan,

On 12/1/18 02:39, Evan Green wrote:
> On Tue, Nov 27, 2018 at 10:04 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
>>
>> From: David Dai <daidavid1@codeaurora.org>
>>
>> Introduce Qualcomm SDM845 specific provider driver using the
>> interconnect framework.
>>
>> Signed-off-by: David Dai <daidavid1@codeaurora.org>
>> Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
>> ---
>>  .../bindings/interconnect/qcom,sdm845.txt     |  24 +
>>  drivers/interconnect/Kconfig                  |   5 +
>>  drivers/interconnect/Makefile                 |   1 +
>>  drivers/interconnect/qcom/Kconfig             |  13 +
>>  drivers/interconnect/qcom/Makefile            |   5 +
>>  drivers/interconnect/qcom/sdm845.c            | 836 ++++++++++++++++++
>>  .../dt-bindings/interconnect/qcom,sdm845.h    | 143 +++
>>  7 files changed, 1027 insertions(+)
>>  create mode 100644 Documentation/devicetree/bindings/interconnect/qcom,sdm845.txt
>>  create mode 100644 drivers/interconnect/qcom/Kconfig
>>  create mode 100644 drivers/interconnect/qcom/Makefile
>>  create mode 100644 drivers/interconnect/qcom/sdm845.c
>>  create mode 100644 include/dt-bindings/interconnect/qcom,sdm845.h
>>
>> diff --git a/Documentation/devicetree/bindings/interconnect/qcom,sdm845.txt b/Documentation/devicetree/bindings/interconnect/qcom,sdm845.txt
>> new file mode 100644
>> index 000000000000..d45150e99665
>> --- /dev/null
>> +++ b/Documentation/devicetree/bindings/interconnect/qcom,sdm845.txt
>> @@ -0,0 +1,24 @@
>> +Qualcomm SDM845 Network-On-Chip interconnect driver binding
>> +-----------------------------------------------------------
>> +
>> +SDM845 interconnect providers support system bandwidth requirements through
>> +RPMh hardware accelerators known as Bus Clock Manager(BCM). The provider is able
>> +to communicate with the BCM through the Resource State Coordinator(RSC)
>> +associated with each execution environment. Provider nodes must reside within
>> +an RPMh device node pertaining to their RSC and each provider maps to
>> +a single RPMh resource.
>> +
>> +Required properties :
>> +- compatible : shall contain only one of the following:
>> +                       "qcom,sdm845-rsc-hlos"
> 
> I wonder if maybe hlos isn't necessary. Unless you somehow imagine
> secure mode would have a device tree entry in here as well? Probably
> not.

Ok, will remove it. David, please chime in if you have any concerns with
this.

>> +- #interconnect-cells : should contain 1
>> +
>> +Examples:
>> +
>> +apps_rsc: rsc {
>> +               qnoc: qnoc-rsc-hlos {
>> +                       compatible = "qcom,sdm845-rsc-hlos";
>> +                       #interconnect-cells = <1>;
>> +               };
>> +};
>> +
> ...
>> diff --git a/drivers/interconnect/qcom/sdm845.c b/drivers/interconnect/qcom/sdm845.c
>> new file mode 100644
>> index 000000000000..1678de91ca52
>> --- /dev/null
>> +++ b/drivers/interconnect/qcom/sdm845.c
>> @@ -0,0 +1,836 @@
> ...
>> +
>> +static void tcs_list_gen(struct list_head *bcm_list,
>> +                        struct tcs_cmd *tcs_list, int *n)
> 
> We could make the prototype of this function be:
> 
> static void tcs_list_gen(struct list_head *bcm_list,
>         struct tcs_cmd tcs_list[SDM845_MAX_VCD], int n[SDM845_MAX_VCD])
> 
> which would catch errors if somebody later passed in an array that
> wasn't the right size, since we blindly memset below.

Yes, sounds good. I will try to optimize it.

Thanks,
Georgi

>> +{
>> +       struct qcom_icc_bcm *bcm;
>> +       bool commit;
>> +       size_t idx = 0, batch = 0, cur_vcd_size = 0;
>> +
>> +       memset(n, 0, sizeof(int) * SDM845_MAX_VCD);
>> +
>> +       list_for_each_entry(bcm, bcm_list, list) {
>> +               commit = false;
>> +               cur_vcd_size++;
>> +               if ((list_is_last(&bcm->list, bcm_list)) ||
>> +                   bcm->aux_data.vcd != list_next_entry(bcm, list)->aux_data.vcd) {
>> +                       commit = true;
>> +                       cur_vcd_size = 0;
>> +               }
>> +               tcs_cmd_gen(&tcs_list[idx], bcm->vote_x, bcm->vote_y,
>> +                           bcm->addr, commit);
>> +               idx++;
>> +               n[batch]++;
>> +               /*
>> +                * Batch the BCMs in such a way that we do not split them in
>> +                * multiple payloads when they are under the same VCD. This is
>> +                * to ensure that every BCM is committed since we only set the
>> +                * commit bit on the last BCM request of every VCD.
>> +                */
>> +               if (n[batch] >= MAX_RPMH_PAYLOAD) {
>> +                       if (!commit) {
>> +                               n[batch] -= cur_vcd_size;
>> +                               n[batch + 1] = cur_vcd_size;
>> +                       }
>> +                       batch++;
>> +               }
>> +       }
>> +}
>> +

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v10 6/7] arm64: dts: sdm845: Add interconnect provider DT nodes
  2018-12-01  0:39   ` Evan Green
@ 2018-12-05 16:01     ` Georgi Djakov
  0 siblings, 0 replies; 34+ messages in thread
From: Georgi Djakov @ 2018-12-05 16:01 UTC (permalink / raw)
  To: Evan Green
  Cc: linux-pm, gregkh, rjw, robh+dt, Michael Turquette, khilman,
	Vincent Guittot, Saravana Kannan, Bjorn Andersson, amit.kucheria,
	seansw, daidavid1, mark.rutland, lorenzo.pieralisi,
	Alexandre Bailon, maxime.ripard, Arnd Bergmann, thierry.reding,
	ksitaraman, sanjayc, devicetree, linux-kernel, linux-arm-kernel,
	linux-arm-msm, linux-tegra

Hi Evan,

On 12/1/18 02:39, Evan Green wrote:
> On Tue, Nov 27, 2018 at 10:04 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
>>
>> From: David Dai <daidavid1@codeaurora.org>
>>
>> Add RSC (Resource State Coordinator) provider
>> dictating network-on-chip interconnect bus performance
>> found on SDM845-based platforms.
>>
>> Signed-off-by: David Dai <daidavid1@codeaurora.org>
>> Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
>> ---
>>  arch/arm64/boot/dts/qcom/sdm845.dtsi | 5 +++++
>>  1 file changed, 5 insertions(+)
>>
>> diff --git a/arch/arm64/boot/dts/qcom/sdm845.dtsi b/arch/arm64/boot/dts/qcom/sdm845.dtsi
>> index b72bdb0a31a5..856d33604e9c 100644
>> --- a/arch/arm64/boot/dts/qcom/sdm845.dtsi
>> +++ b/arch/arm64/boot/dts/qcom/sdm845.dtsi
>> @@ -1324,6 +1324,11 @@
>>                                 compatible = "qcom,sdm845-rpmh-clk";
>>                                 #clock-cells = <1>;
>>                         };
>> +
>> +                       qnoc: qnoc {
>> +                               compatible = "qcom,sdm845-rsc-hlos";
>> +                               #interconnect-cells = <1>;
>> +                       };
> 
> Should we alphabetize this node above rpmhcc?

Sure!

Thanks,
Georgi

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v10 1/7] interconnect: Add generic on-chip interconnect API
  2018-11-27 18:03 ` [PATCH v10 1/7] interconnect: Add generic " Georgi Djakov
  2018-11-27 18:35   ` Joe Perches
  2018-12-01  0:38   ` Evan Green
@ 2018-12-05 16:16   ` Rob Herring
  2018-12-07 15:24     ` Georgi Djakov
  2 siblings, 1 reply; 34+ messages in thread
From: Rob Herring @ 2018-12-05 16:16 UTC (permalink / raw)
  To: Georgi Djakov
  Cc: open list:THERMAL, Greg Kroah-Hartman, Rafael J. Wysocki,
	Michael Turquette, Kevin Hilman, Vincent Guittot,
	Saravana Kannan, Bjorn Andersson, Amit Kucheria, seansw,
	daidavid1, Evan Green, Mark Rutland, Lorenzo Pieralisi,
	Alexandre Bailon, Maxime Ripard, Arnd Bergmann, Thierry Reding,
	ksitaraman, sanjayc, devicetree, linux-kernel,
	moderated list:ARM/FREESCALE IMX / MXC ARM ARCHITECTURE,
	linux-arm-msm, linux-tegra

On Tue, Nov 27, 2018 at 12:03 PM Georgi Djakov <georgi.djakov@linaro.org> wrote:
>
> This patch introduces a new API to get requirements and configure the
> interconnect buses across the entire chipset to fit with the current
> demand.
>
> The API is using a consumer/provider-based model, where the providers are
> the interconnect buses and the consumers could be various drivers.
> The consumers request interconnect resources (path) between endpoints and
> set the desired constraints on this data flow path. The providers receive
> requests from consumers and aggregate these requests for all master-slave
> pairs on that path. Then the providers configure each participating in the
> topology node according to the requested data flow path, physical links and
> constraints. The topology could be complicated and multi-tiered and is SoC
> specific.
>
> Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
> Reviewed-by: Evan Green <evgreen@chromium.org>

[...]

> +struct icc_path *icc_get(struct device *dev, const int src_id,
> +                        const int dst_id);
> +void icc_put(struct icc_path *path);
> +int icc_set(struct icc_path *path, u32 avg_bw, u32 peak_bw);

icc_set() is very generic, but this function isn't easily extended to
parameters other than bandwidth. Perhaps icc_set_bandwidth() instead.
Then when you add some other setting, you just add a new function. Of
course, if you wind up with a bunch of different parameters, then
you'll probably need an atomic type interface where you test all the
settings together and then commit them separately in one call. But
from a DT perspective, I certainly hope there are not lots of new
settings folks want to add. :)
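
For illustration only, a minimal sketch of the two directions described
above: per-parameter setters versus an atomic stage/commit interface. None
of these names exist in the posted series, which defines only icc_set();
they are hypothetical.

    #include <linux/types.h>

    struct icc_path;

    /* Bandwidth-only setter, mirroring the arguments of the posted icc_set(). */
    int icc_set_bandwidth(struct icc_path *path, u32 avg_bw, u32 peak_bw);

    /* Each new setting would get its own function... */
    int icc_set_latency(struct icc_path *path, u32 max_latency);

    /*
     * ...or, with many parameters, an atomic-style interface: stage the
     * individual settings, then commit them together in a separate call.
     */
    int icc_stage_bandwidth(struct icc_path *path, u32 avg_bw, u32 peak_bw);
    int icc_stage_latency(struct icc_path *path, u32 max_latency);
    int icc_commit(struct icc_path *path);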

Rob

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v10 0/8] Introduce on-chip interconnect API
  2018-11-27 18:03 [PATCH v10 0/8] Introduce on-chip interconnect API Georgi Djakov
                   ` (6 preceding siblings ...)
  2018-11-27 18:03 ` [PATCH v10 7/7] MAINTAINERS: add a maintainer for the interconnect API Georgi Djakov
@ 2018-12-05 20:41 ` Evan Green
  2018-12-06 14:55   ` Greg KH
  7 siblings, 1 reply; 34+ messages in thread
From: Evan Green @ 2018-12-05 20:41 UTC (permalink / raw)
  To: georgi.djakov
  Cc: linux-pm, gregkh, rjw, robh+dt, Michael Turquette, khilman,
	Vincent Guittot, Saravana Kannan, Bjorn Andersson, amit.kucheria,
	seansw, daidavid1, mark.rutland, lorenzo.pieralisi,
	Alexandre Bailon, maxime.ripard, Arnd Bergmann, Thierry Reding,
	ksitaraman, sanjayc, devicetree, linux-kernel, linux-arm-kernel,
	linux-arm-msm, linux-tegra, Doug Anderson

On Tue, Nov 27, 2018 at 10:03 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
>
> Modern SoCs have multiple processors and various dedicated cores (video, gpu,
> graphics, modem). These cores are talking to each other and can generate a
> lot of data flowing through the on-chip interconnects. These interconnect
> buses could form different topologies such as crossbar, point to point buses,
> hierarchical buses or use the network-on-chip concept.
>
> These buses have been sized usually to handle use cases with high data
> throughput but it is not necessary all the time and consume a lot of power.
> Furthermore, the priority between masters can vary depending on the running
> use case like video playback or CPU intensive tasks.
>
> Having an API to control the requirement of the system in terms of bandwidth
> and QoS, so we can adapt the interconnect configuration to match those by
> scaling the frequencies, setting link priority and tuning QoS parameters.
> This configuration can be a static, one-time operation done at boot for some
> platforms or a dynamic set of operations that happen at run-time.
>
> This patchset introduce a new API to get the requirement and configure the
> interconnect buses across the entire chipset to fit with the current demand.
> The API is NOT for changing the performance of the endpoint devices, but only
> the interconnect path in between them.

For what it's worth, we are ready to land this in Chrome OS. I think
this series has been very well discussed and reviewed, hasn't changed
much in the last few spins, and is in good enough shape to use as a
base for future patches. Georgi's also done a great job reaching out
to other SoC vendors, and there appears to be enough consensus that
this framework will be usable by more than just Qualcomm. There are
also several drivers out on the list trying to add patches to use this
framework, with more to come, so it made sense (to us) to get this
base framework nailed down. In my experiments this is an important
piece of the overall power management story, especially on systems
that are mostly idle.

I'll continue to track changes to this series and we will ultimately
reconcile with whatever happens upstream, but I thought it was worth
sending this note to express our "thumbs up" towards this framework.

-Evan

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v10 0/8] Introduce on-chip interconnect API
  2018-12-05 20:41 ` [PATCH v10 0/8] Introduce on-chip " Evan Green
@ 2018-12-06 14:55   ` Greg KH
  2018-12-07 10:06     ` Georgi Djakov
  2018-12-10  9:04     ` Rafael J. Wysocki
  0 siblings, 2 replies; 34+ messages in thread
From: Greg KH @ 2018-12-06 14:55 UTC (permalink / raw)
  To: Evan Green
  Cc: georgi.djakov, linux-pm, rjw, robh+dt, Michael Turquette,
	khilman, Vincent Guittot, Saravana Kannan, Bjorn Andersson,
	amit.kucheria, seansw, daidavid1, mark.rutland,
	lorenzo.pieralisi, Alexandre Bailon, maxime.ripard,
	Arnd Bergmann, Thierry Reding, ksitaraman, sanjayc, devicetree,
	linux-kernel, linux-arm-kernel, linux-arm-msm, linux-tegra,
	Doug Anderson

On Wed, Dec 05, 2018 at 12:41:35PM -0800, Evan Green wrote:
> On Tue, Nov 27, 2018 at 10:03 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
> >
> > Modern SoCs have multiple processors and various dedicated cores (video, gpu,
> > graphics, modem). These cores are talking to each other and can generate a
> > lot of data flowing through the on-chip interconnects. These interconnect
> > buses could form different topologies such as crossbar, point to point buses,
> > hierarchical buses or use the network-on-chip concept.
> >
> > These buses have been sized usually to handle use cases with high data
> > throughput but it is not necessary all the time and consume a lot of power.
> > Furthermore, the priority between masters can vary depending on the running
> > use case like video playback or CPU intensive tasks.
> >
> > Having an API to control the requirement of the system in terms of bandwidth
> > and QoS, so we can adapt the interconnect configuration to match those by
> > scaling the frequencies, setting link priority and tuning QoS parameters.
> > This configuration can be a static, one-time operation done at boot for some
> > platforms or a dynamic set of operations that happen at run-time.
> >
> > This patchset introduce a new API to get the requirement and configure the
> > interconnect buses across the entire chipset to fit with the current demand.
> > The API is NOT for changing the performance of the endpoint devices, but only
> > the interconnect path in between them.
> 
> For what it's worth, we are ready to land this in Chrome OS. I think
> this series has been very well discussed and reviewed, hasn't changed
> much in the last few spins, and is in good enough shape to use as a
> base for future patches. Georgi's also done a great job reaching out
> to other SoC vendors, and there appears to be enough consensus that
> this framework will be usable by more than just Qualcomm. There are
> also several drivers out on the list trying to add patches to use this
> framework, with more to come, so it made sense (to us) to get this
> base framework nailed down. In my experiments this is an important
> piece of the overall power management story, especially on systems
> that are mostly idle.
> 
> I'll continue to track changes to this series and we will ultimately
> reconcile with whatever happens upstream, but I thought it was worth
> sending this note to express our "thumbs up" towards this framework.

Looks like a v11 will be forthcoming, so I'll wait for that one to apply
it to the tree if all looks good.

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v10 5/7] interconnect: qcom: Add sdm845 interconnect provider driver
  2018-12-05 16:00     ` Georgi Djakov
@ 2018-12-06 21:53       ` David Dai
  0 siblings, 0 replies; 34+ messages in thread
From: David Dai @ 2018-12-06 21:53 UTC (permalink / raw)
  To: Georgi Djakov, Evan Green
  Cc: linux-pm, gregkh, rjw, robh+dt, Michael Turquette, khilman,
	Vincent Guittot, Saravana Kannan, Bjorn Andersson, amit.kucheria,
	seansw, mark.rutland, lorenzo.pieralisi, Alexandre Bailon,
	maxime.ripard, Arnd Bergmann, thierry.reding, ksitaraman,
	sanjayc, devicetree, linux-kernel, linux-arm-kernel,
	linux-arm-msm, linux-tegra



On 12/5/2018 8:00 AM, Georgi Djakov wrote:
> Hi Evan,
>
> On 12/1/18 02:39, Evan Green wrote:
>> On Tue, Nov 27, 2018 at 10:04 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
>>> From: David Dai <daidavid1@codeaurora.org>
>>>
>>> Introduce Qualcomm SDM845 specific provider driver using the
>>> interconnect framework.
>>>
>>> Signed-off-by: David Dai <daidavid1@codeaurora.org>
>>> Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
>>> ---
>>>   .../bindings/interconnect/qcom,sdm845.txt     |  24 +
>>>   drivers/interconnect/Kconfig                  |   5 +
>>>   drivers/interconnect/Makefile                 |   1 +
>>>   drivers/interconnect/qcom/Kconfig             |  13 +
>>>   drivers/interconnect/qcom/Makefile            |   5 +
>>>   drivers/interconnect/qcom/sdm845.c            | 836 ++++++++++++++++++
>>>   .../dt-bindings/interconnect/qcom,sdm845.h    | 143 +++
>>>   7 files changed, 1027 insertions(+)
>>>   create mode 100644 Documentation/devicetree/bindings/interconnect/qcom,sdm845.txt
>>>   create mode 100644 drivers/interconnect/qcom/Kconfig
>>>   create mode 100644 drivers/interconnect/qcom/Makefile
>>>   create mode 100644 drivers/interconnect/qcom/sdm845.c
>>>   create mode 100644 include/dt-bindings/interconnect/qcom,sdm845.h
>>>
>>> diff --git a/Documentation/devicetree/bindings/interconnect/qcom,sdm845.txt b/Documentation/devicetree/bindings/interconnect/qcom,sdm845.txt
>>> new file mode 100644
>>> index 000000000000..d45150e99665
>>> --- /dev/null
>>> +++ b/Documentation/devicetree/bindings/interconnect/qcom,sdm845.txt
>>> @@ -0,0 +1,24 @@
>>> +Qualcomm SDM845 Network-On-Chip interconnect driver binding
>>> +-----------------------------------------------------------
>>> +
>>> +SDM845 interconnect providers support system bandwidth requirements through
>>> +RPMh hardware accelerators known as Bus Clock Manager(BCM). The provider is able
>>> +to communicate with the BCM through the Resource State Coordinator(RSC)
>>> +associated with each execution environment. Provider nodes must reside within
>>> +an RPMh device node pertaining to their RSC and each provider maps to
>>> +a single RPMh resource.
>>> +
>>> +Required properties :
>>> +- compatible : shall contain only one of the following:
>>> +                       "qcom,sdm845-rsc-hlos"
>> I wonder if maybe hlos isn't necessary. Unless you somehow imagine
>> secure mode would have a device tree entry in here as well? Probably
>> not.
> Ok, will remove it. David, please chime in if you have any concerns with
> this.

No strong preference on the naming, but we need to make a distinction
between this and potential other RSC types if we are to add additional
provider drivers.

>>> +- #interconnect-cells : should contain 1
>>> +
>>> +Examples:
>>> +
>>> +apps_rsc: rsc {
>>> +               qnoc: qnoc-rsc-hlos {
>>> +                       compatible = "qcom,sdm845-rsc-hlos";
>>> +                       #interconnect-cells = <1>;
>>> +               };
>>> +};
>>> +
>> ...
>>> diff --git a/drivers/interconnect/qcom/sdm845.c b/drivers/interconnect/qcom/sdm845.c
>>> new file mode 100644
>>> index 000000000000..1678de91ca52
>>> --- /dev/null
>>> +++ b/drivers/interconnect/qcom/sdm845.c
>>> @@ -0,0 +1,836 @@
>> ...
>>> +
>>> +static void tcs_list_gen(struct list_head *bcm_list,
>>> +                        struct tcs_cmd *tcs_list, int *n)
>> We could make the prototype of this function be:
>>
>> static void tcs_list_gen(struct list_head *bcm_list,
>>          struct tcs_cmd tcs_list[SDM845_MAX_VCD], int n[SDM845_MAX_VCD])
>>
>> which would catch errors if somebody later passed in an array that
>> wasn't the right size, since we blindly memset below.
> Yes, sounds good. I will try to optimize it.
>
> Thanks,
> Georgi
>
>>> +{
>>> +       struct qcom_icc_bcm *bcm;
>>> +       bool commit;
>>> +       size_t idx = 0, batch = 0, cur_vcd_size = 0;
>>> +
>>> +       memset(n, 0, sizeof(int) * SDM845_MAX_VCD);
>>> +
>>> +       list_for_each_entry(bcm, bcm_list, list) {
>>> +               commit = false;
>>> +               cur_vcd_size++;
>>> +               if ((list_is_last(&bcm->list, bcm_list)) ||
>>> +                   bcm->aux_data.vcd != list_next_entry(bcm, list)->aux_data.vcd) {
>>> +                       commit = true;
>>> +                       cur_vcd_size = 0;
>>> +               }
>>> +               tcs_cmd_gen(&tcs_list[idx], bcm->vote_x, bcm->vote_y,
>>> +                           bcm->addr, commit);
>>> +               idx++;
>>> +               n[batch]++;
>>> +               /*
>>> +                * Batch the BCMs in such a way that we do not split them in
>>> +                * multiple payloads when they are under the same VCD. This is
>>> +                * to ensure that every BCM is committed since we only set the
>>> +                * commit bit on the last BCM request of every VCD.
>>> +                */
>>> +               if (n[batch] >= MAX_RPMH_PAYLOAD) {
>>> +                       if (!commit) {
>>> +                               n[batch] -= cur_vcd_size;
>>> +                               n[batch + 1] = cur_vcd_size;
>>> +                       }
>>> +                       batch++;
>>> +               }
>>> +       }
>>> +}
>>> +

-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v10 0/8] Introduce on-chip interconnect API
  2018-12-06 14:55   ` Greg KH
@ 2018-12-07 10:06     ` Georgi Djakov
  2018-12-10  9:04     ` Rafael J. Wysocki
  1 sibling, 0 replies; 34+ messages in thread
From: Georgi Djakov @ 2018-12-07 10:06 UTC (permalink / raw)
  To: Greg KH, Evan Green
  Cc: linux-pm, rjw, robh+dt, Michael Turquette, khilman,
	Vincent Guittot, Saravana Kannan, Bjorn Andersson, amit.kucheria,
	seansw, daidavid1, mark.rutland, lorenzo.pieralisi,
	Alexandre Bailon, maxime.ripard, Arnd Bergmann, Thierry Reding,
	ksitaraman, sanjayc, devicetree, linux-kernel, linux-arm-kernel,
	linux-arm-msm, linux-tegra, Doug Anderson

Hi Greg and Evan,

On 12/6/18 16:55, Greg KH wrote:
> On Wed, Dec 05, 2018 at 12:41:35PM -0800, Evan Green wrote:
>> On Tue, Nov 27, 2018 at 10:03 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
>>>
>>> Modern SoCs have multiple processors and various dedicated cores (video, gpu,
>>> graphics, modem). These cores are talking to each other and can generate a
>>> lot of data flowing through the on-chip interconnects. These interconnect
>>> buses could form different topologies such as crossbar, point to point buses,
>>> hierarchical buses or use the network-on-chip concept.
>>>
>>> These buses have been sized usually to handle use cases with high data
>>> throughput but it is not necessary all the time and consume a lot of power.
>>> Furthermore, the priority between masters can vary depending on the running
>>> use case like video playback or CPU intensive tasks.
>>>
>>> Having an API to control the requirement of the system in terms of bandwidth
>>> and QoS, so we can adapt the interconnect configuration to match those by
>>> scaling the frequencies, setting link priority and tuning QoS parameters.
>>> This configuration can be a static, one-time operation done at boot for some
>>> platforms or a dynamic set of operations that happen at run-time.
>>>
>>> This patchset introduce a new API to get the requirement and configure the
>>> interconnect buses across the entire chipset to fit with the current demand.
>>> The API is NOT for changing the performance of the endpoint devices, but only
>>> the interconnect path in between them.
>>
>> For what it's worth, we are ready to land this in Chrome OS. I think
>> this series has been very well discussed and reviewed, hasn't changed
>> much in the last few spins, and is in good enough shape to use as a
>> base for future patches. Georgi's also done a great job reaching out
>> to other SoC vendors, and there appears to be enough consensus that
>> this framework will be usable by more than just Qualcomm. There are
>> also several drivers out on the list trying to add patches to use this
>> framework, with more to come, so it made sense (to us) to get this
>> base framework nailed down. In my experiments this is an important
>> piece of the overall power management story, especially on systems
>> that are mostly idle.
>>
>> I'll continue to track changes to this series and we will ultimately
>> reconcile with whatever happens upstream, but I thought it was worth
>> sending this note to express our "thumbs up" towards this framework.
> 
> Looks like a v11 will be forthcoming, so I'll wait for that one to apply
> it to the tree if all looks good.
> 

Yes, it's coming. I will also include an additional fixup patch, as the
sdm845 provider driver will fail to build in linux-next, due to a recent
change in the cmd_db API.

Thanks,
Georgi

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v10 1/7] interconnect: Add generic on-chip interconnect API
  2018-12-05 16:16   ` Rob Herring
@ 2018-12-07 15:24     ` Georgi Djakov
  0 siblings, 0 replies; 34+ messages in thread
From: Georgi Djakov @ 2018-12-07 15:24 UTC (permalink / raw)
  To: Rob Herring
  Cc: open list:THERMAL, Greg Kroah-Hartman, Rafael J. Wysocki,
	Michael Turquette, Kevin Hilman, Vincent Guittot,
	Saravana Kannan, Bjorn Andersson, Amit Kucheria, seansw,
	daidavid1, Evan Green, Mark Rutland, Lorenzo Pieralisi,
	Alexandre Bailon, Maxime Ripard, Arnd Bergmann, Thierry Reding,
	ksitaraman, sanjayc, devicetree, linux-kernel,
	moderated list:ARM/FREESCALE IMX / MXC ARM ARCHITECTURE,
	linux-arm-msm, linux-tegra

Hi Rob,

On 12/5/18 18:16, Rob Herring wrote:
> On Tue, Nov 27, 2018 at 12:03 PM Georgi Djakov <georgi.djakov@linaro.org> wrote:
>>
>> This patch introduces a new API to get requirements and configure the
>> interconnect buses across the entire chipset to fit with the current
>> demand.
>>
>> The API is using a consumer/provider-based model, where the providers are
>> the interconnect buses and the consumers could be various drivers.
>> The consumers request interconnect resources (path) between endpoints and
>> set the desired constraints on this data flow path. The providers receive
>> requests from consumers and aggregate these requests for all master-slave
>> pairs on that path. Then the providers configure each participating in the
>> topology node according to the requested data flow path, physical links and
>> constraints. The topology could be complicated and multi-tiered and is SoC
>> specific.
>>
>> Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
>> Reviewed-by: Evan Green <evgreen@chromium.org>
> 
> [...]
> 
>> +struct icc_path *icc_get(struct device *dev, const int src_id,
>> +                        const int dst_id);
>> +void icc_put(struct icc_path *path);
>> +int icc_set(struct icc_path *path, u32 avg_bw, u32 peak_bw);
> 
> icc_set() is very generic, but this function isn't easily extended to
> parameters other than bandwidth. Perhaps icc_set_bandwidth() instead.
> Then when you add some other setting, you just add a new function. Of
> course, if you wind up with a bunch of different parameters, then
> you'll probably need an atomic type interface where you test all the
> settings together and then commit them separately in one call. But

Thanks for the comment. We have already started some discussion with the
Nvidia guys about also supporting latency and priority. We could take a
structure-based approach and pass a struct with all the bandwidth /
latency / priority parameters to a function like icc_set_extended(). It's
very early for any conclusions yet, and for now we support only
bandwidth. The function name can still easily be changed with a
follow-up patch while no one is using it yet. Let me think again.
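
As a purely illustrative sketch of the structure-based idea mentioned
above (neither the struct nor icc_set_extended() is part of this patchset,
and the field list is hypothetical):

    #include <linux/types.h>

    struct icc_path;

    struct icc_constraints {
    	u32 avg_bw;	/* average bandwidth, as in icc_set() */
    	u32 peak_bw;	/* peak bandwidth, as in icc_set() */
    	u32 latency;	/* hypothetical latency tolerance */
    	u32 priority;	/* hypothetical master priority hint */
    };

    int icc_set_extended(struct icc_path *path,
    			 const struct icc_constraints *constraints);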

> from a DT perspective, I certainly hope there are not lots of new
> settings folks want to add. :)

Regarding DT, all settings should go into the drivers for now, and
later we can decide if something really makes sense to be in device-tree.

Thanks,
Georgi

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v10 0/8] Introduce on-chip interconnect API
  2018-12-06 14:55   ` Greg KH
  2018-12-07 10:06     ` Georgi Djakov
@ 2018-12-10  9:04     ` Rafael J. Wysocki
  2018-12-10 10:18       ` Georgi Djakov
  1 sibling, 1 reply; 34+ messages in thread
From: Rafael J. Wysocki @ 2018-12-10  9:04 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: evgreen, Georgi Djakov, Linux PM, Rafael J. Wysocki, Rob Herring,
	Michael Turquette, Kevin Hilman, Vincent Guittot,
	Saravana Kannan, bjorn.andersson, Amit Kucheria, seansw,
	daidavid1, Mark Rutland, Lorenzo Pieralisi, abailon,
	maxime.ripard, Arnd Bergmann, Thierry Reding, ksitaraman,
	sanjayc, devicetree, Linux Kernel Mailing List, Linux ARM,
	linux-arm-msm, linux-tegra, Doug Anderson

On Thu, Dec 6, 2018 at 3:55 PM Greg KH <gregkh@linuxfoundation.org> wrote:
>
> On Wed, Dec 05, 2018 at 12:41:35PM -0800, Evan Green wrote:
> > On Tue, Nov 27, 2018 at 10:03 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
> > >
> > > Modern SoCs have multiple processors and various dedicated cores (video, gpu,
> > > graphics, modem). These cores are talking to each other and can generate a
> > > lot of data flowing through the on-chip interconnects. These interconnect
> > > buses could form different topologies such as crossbar, point to point buses,
> > > hierarchical buses or use the network-on-chip concept.
> > >
> > > These buses have been sized usually to handle use cases with high data
> > > throughput but it is not necessary all the time and consume a lot of power.
> > > Furthermore, the priority between masters can vary depending on the running
> > > use case like video playback or CPU intensive tasks.
> > >
> > > Having an API to control the requirement of the system in terms of bandwidth
> > > and QoS, so we can adapt the interconnect configuration to match those by
> > > scaling the frequencies, setting link priority and tuning QoS parameters.
> > > This configuration can be a static, one-time operation done at boot for some
> > > platforms or a dynamic set of operations that happen at run-time.
> > >
> > > This patchset introduce a new API to get the requirement and configure the
> > > interconnect buses across the entire chipset to fit with the current demand.
> > > The API is NOT for changing the performance of the endpoint devices, but only
> > > the interconnect path in between them.
> >
> > For what it's worth, we are ready to land this in Chrome OS. I think
> > this series has been very well discussed and reviewed, hasn't changed
> > much in the last few spins, and is in good enough shape to use as a
> > base for future patches. Georgi's also done a great job reaching out
> > to other SoC vendors, and there appears to be enough consensus that
> > this framework will be usable by more than just Qualcomm. There are
> > also several drivers out on the list trying to add patches to use this
> > framework, with more to come, so it made sense (to us) to get this
> > base framework nailed down. In my experiments this is an important
> > piece of the overall power management story, especially on systems
> > that are mostly idle.
> >
> > I'll continue to track changes to this series and we will ultimately
> > reconcile with whatever happens upstream, but I thought it was worth
> > sending this note to express our "thumbs up" towards this framework.
>
> Looks like a v11 will be forthcoming, so I'll wait for that one to apply
> it to the tree if all looks good.

I'm honestly not sure if it is ready yet.

New versions are coming on and on, which may make such an impression,
but we had some discussion on it at the LPC and some serious questions
were asked during it, for instance regarding the DT binding introduced
here.  I'm not sure how this particular issue has been addressed here,
for example.

Thanks,
Rafael

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v10 0/8] Introduce on-chip interconnect API
  2018-12-10  9:04     ` Rafael J. Wysocki
@ 2018-12-10 10:18       ` Georgi Djakov
  2018-12-10 11:00         ` Rafael J. Wysocki
  0 siblings, 1 reply; 34+ messages in thread
From: Georgi Djakov @ 2018-12-10 10:18 UTC (permalink / raw)
  To: Rafael J. Wysocki, Greg Kroah-Hartman
  Cc: evgreen, Linux PM, Rafael J. Wysocki, Rob Herring,
	Michael Turquette, Kevin Hilman, Vincent Guittot,
	Saravana Kannan, bjorn.andersson, Amit Kucheria, seansw,
	daidavid1, Mark Rutland, Lorenzo Pieralisi, abailon,
	maxime.ripard, Arnd Bergmann, Thierry Reding, ksitaraman,
	sanjayc, devicetree, Linux Kernel Mailing List, Linux ARM,
	linux-arm-msm, linux-tegra, Doug Anderson

Hi Rafael,

On 12/10/18 11:04, Rafael J. Wysocki wrote:
> On Thu, Dec 6, 2018 at 3:55 PM Greg KH <gregkh@linuxfoundation.org> wrote:
>>
>> On Wed, Dec 05, 2018 at 12:41:35PM -0800, Evan Green wrote:
>>> On Tue, Nov 27, 2018 at 10:03 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
>>>>
>>>> Modern SoCs have multiple processors and various dedicated cores (video, gpu,
>>>> graphics, modem). These cores are talking to each other and can generate a
>>>> lot of data flowing through the on-chip interconnects. These interconnect
>>>> buses could form different topologies such as crossbar, point to point buses,
>>>> hierarchical buses or use the network-on-chip concept.
>>>>
>>>> These buses have been sized usually to handle use cases with high data
>>>> throughput but it is not necessary all the time and consume a lot of power.
>>>> Furthermore, the priority between masters can vary depending on the running
>>>> use case like video playback or CPU intensive tasks.
>>>>
>>>> Having an API to control the requirement of the system in terms of bandwidth
>>>> and QoS, so we can adapt the interconnect configuration to match those by
>>>> scaling the frequencies, setting link priority and tuning QoS parameters.
>>>> This configuration can be a static, one-time operation done at boot for some
>>>> platforms or a dynamic set of operations that happen at run-time.
>>>>
>>>> This patchset introduce a new API to get the requirement and configure the
>>>> interconnect buses across the entire chipset to fit with the current demand.
>>>> The API is NOT for changing the performance of the endpoint devices, but only
>>>> the interconnect path in between them.
>>>
>>> For what it's worth, we are ready to land this in Chrome OS. I think
>>> this series has been very well discussed and reviewed, hasn't changed
>>> much in the last few spins, and is in good enough shape to use as a
>>> base for future patches. Georgi's also done a great job reaching out
>>> to other SoC vendors, and there appears to be enough consensus that
>>> this framework will be usable by more than just Qualcomm. There are
>>> also several drivers out on the list trying to add patches to use this
>>> framework, with more to come, so it made sense (to us) to get this
>>> base framework nailed down. In my experiments this is an important
>>> piece of the overall power management story, especially on systems
>>> that are mostly idle.
>>>
>>> I'll continue to track changes to this series and we will ultimately
>>> reconcile with whatever happens upstream, but I thought it was worth
>>> sending this note to express our "thumbs up" towards this framework.
>>
>> Looks like a v11 will be forthcoming, so I'll wait for that one to apply
>> it to the tree if all looks good.
> 
> I'm honestly not sure if it is ready yet.
> 
> New versions are coming on and on, which may make such an impression,
> but we had some discussion on it at the LPC and some serious questions
> were asked during it, for instance regarding the DT binding introduced
> here.  I'm not sure how this particular issue has been addressed here,
> for example.

There have been no changes in bindings since v4 (other than squashing
consumer and provider bindings into a single patch and fixing typos).

The last DT comment was on v9 [1] where Rob wanted confirmation from
other SoC vendors that this works for them too. And now we have that
confirmation and there are patches posted on the list [2].

The second thing (also discussed at LPC) was about possible cases where
some consumer drivers can't calculate how much bandwidth they actually
need and how to address that. The proposal was to extend the OPP
bindings with one more property, but this is not part of this patchset.
It is a future step that needs more discussion on the mailing list. If a
driver really needs some bandwidth data now, it should be put into the
driver and not in DT. After we have enough consumers, we can discuss
again if it makes sense to extract something into DT or not.
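
As an illustration of that point, a hedged sketch of a consumer driver
that keeps its bandwidth numbers internally and only uses the path API
posted in this series; the endpoint IDs, bandwidth values and helper name
are made up, the prototypes are re-declared from the ones quoted earlier
in the thread, and ERR_PTR-style error returns are assumed:

    #include <linux/device.h>
    #include <linux/err.h>
    #include <linux/types.h>

    struct icc_path;

    /* Prototypes as quoted earlier in this thread (patch 1/7). */
    struct icc_path *icc_get(struct device *dev, const int src_id,
                             const int dst_id);
    void icc_put(struct icc_path *path);
    int icc_set(struct icc_path *path, u32 avg_bw, u32 peak_bw);

    /* Made-up, driver-internal bandwidth numbers, kept out of DT. */
    #define FOO_AVG_BW      100000
    #define FOO_PEAK_BW     200000

    static int foo_vote_bandwidth(struct device *dev, int src_id, int dst_id)
    {
            struct icc_path *path;
            int ret;

            path = icc_get(dev, src_id, dst_id);
            if (IS_ERR(path))
                    return PTR_ERR(path);

            ret = icc_set(path, FOO_AVG_BW, FOO_PEAK_BW);
            if (ret)
                    icc_put(path);

            return ret;
    }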

Thanks,
Georgi

[1] https://lkml.org/lkml/2018/9/25/939
[2] https://lkml.org/lkml/2018/11/28/12

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v10 0/8] Introduce on-chip interconnect API
  2018-12-10 10:18       ` Georgi Djakov
@ 2018-12-10 11:00         ` Rafael J. Wysocki
  2018-12-10 14:50           ` Georgi Djakov
  0 siblings, 1 reply; 34+ messages in thread
From: Rafael J. Wysocki @ 2018-12-10 11:00 UTC (permalink / raw)
  To: Georgi Djakov
  Cc: Rafael J. Wysocki, Greg Kroah-Hartman, evgreen, Linux PM,
	Rafael J. Wysocki, Rob Herring, Michael Turquette, Kevin Hilman,
	Vincent Guittot, Saravana Kannan, bjorn.andersson, Amit Kucheria,
	seansw, daidavid1, Mark Rutland, Lorenzo Pieralisi, abailon,
	maxime.ripard, Arnd Bergmann, Thierry Reding, ksitaraman,
	sanjayc, devicetree, Linux Kernel Mailing List, Linux ARM,
	linux-arm-msm, linux-tegra, Doug Anderson

On Mon, Dec 10, 2018 at 11:18 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
>
> Hi Rafael,
>
> On 12/10/18 11:04, Rafael J. Wysocki wrote:
> > On Thu, Dec 6, 2018 at 3:55 PM Greg KH <gregkh@linuxfoundation.org> wrote:
> >>
> >> On Wed, Dec 05, 2018 at 12:41:35PM -0800, Evan Green wrote:
> >>> On Tue, Nov 27, 2018 at 10:03 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
> >>>>
> >>>> Modern SoCs have multiple processors and various dedicated cores (video, gpu,
> >>>> graphics, modem). These cores are talking to each other and can generate a
> >>>> lot of data flowing through the on-chip interconnects. These interconnect
> >>>> buses could form different topologies such as crossbar, point to point buses,
> >>>> hierarchical buses or use the network-on-chip concept.
> >>>>
> >>>> These buses have been sized usually to handle use cases with high data
> >>>> throughput but it is not necessary all the time and consume a lot of power.
> >>>> Furthermore, the priority between masters can vary depending on the running
> >>>> use case like video playback or CPU intensive tasks.
> >>>>
> >>>> Having an API to control the requirement of the system in terms of bandwidth
> >>>> and QoS, so we can adapt the interconnect configuration to match those by
> >>>> scaling the frequencies, setting link priority and tuning QoS parameters.
> >>>> This configuration can be a static, one-time operation done at boot for some
> >>>> platforms or a dynamic set of operations that happen at run-time.
> >>>>
> >>>> This patchset introduce a new API to get the requirement and configure the
> >>>> interconnect buses across the entire chipset to fit with the current demand.
> >>>> The API is NOT for changing the performance of the endpoint devices, but only
> >>>> the interconnect path in between them.
> >>>
> >>> For what it's worth, we are ready to land this in Chrome OS. I think
> >>> this series has been very well discussed and reviewed, hasn't changed
> >>> much in the last few spins, and is in good enough shape to use as a
> >>> base for future patches. Georgi's also done a great job reaching out
> >>> to other SoC vendors, and there appears to be enough consensus that
> >>> this framework will be usable by more than just Qualcomm. There are
> >>> also several drivers out on the list trying to add patches to use this
> >>> framework, with more to come, so it made sense (to us) to get this
> >>> base framework nailed down. In my experiments this is an important
> >>> piece of the overall power management story, especially on systems
> >>> that are mostly idle.
> >>>
> >>> I'll continue to track changes to this series and we will ultimately
> >>> reconcile with whatever happens upstream, but I thought it was worth
> >>> sending this note to express our "thumbs up" towards this framework.
> >>
> >> Looks like a v11 will be forthcoming, so I'll wait for that one to apply
> >> it to the tree if all looks good.
> >
> > I'm honestly not sure if it is ready yet.
> >
> > New versions are coming on and on, which may make such an impression,
> > but we had some discussion on it at the LPC and some serious questions
> > were asked during it, for instance regarding the DT binding introduced
> > here.  I'm not sure how this particular issue has been addressed here,
> > for example.
>
> There have been no changes in bindings since v4 (other than squashing
> consumer and provider bindings into a single patch and fixing typos).
>
> The last DT comment was on v9 [1] where Rob wanted confirmation from
> other SoC vendors that this works for them too. And now we have that
> confirmation and there are patches posted on the list [2].

OK

> The second thing (also discussed at LPC) was about possible cases where
> some consumer drivers can't calculate how much bandwidth they actually
> need and how to address that. The proposal was to extend the OPP
> bindings with one more property, but this is not part of this patchset.
> It is a future step that needs more discussion on the mailing list. If a
> driver really needs some bandwidth data now, it should be put into the
> driver and not in DT. After we have enough consumers, we can discuss
> again if it makes sense to extract something into DT or not.

That's fine by me.

Admittedly, I have some reservations regarding the extent to which
this approach will turn out to be useful in practice, but I guess as
long as there is enough traction, the best way to find out is to try
and see. :-)

From now on I will assume that this series is going to be applied by Greg.

Thanks,
Rafael

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v10 0/8] Introduce on-chip interconnect API
  2018-12-10 11:00         ` Rafael J. Wysocki
@ 2018-12-10 14:50           ` Georgi Djakov
  2018-12-11  6:58             ` Greg Kroah-Hartman
  0 siblings, 1 reply; 34+ messages in thread
From: Georgi Djakov @ 2018-12-10 14:50 UTC (permalink / raw)
  To: Rafael J. Wysocki, Greg Kroah-Hartman, Arnd Bergmann, Olof Johansson
  Cc: evgreen, Linux PM, Rafael J. Wysocki, Rob Herring,
	Michael Turquette, Kevin Hilman, Vincent Guittot,
	Saravana Kannan, bjorn.andersson, Amit Kucheria, seansw,
	daidavid1, Mark Rutland, Lorenzo Pieralisi, abailon,
	maxime.ripard, Thierry Reding, ksitaraman, sanjayc, devicetree,
	Linux Kernel Mailing List, Linux ARM, linux-arm-msm, linux-tegra,
	Doug Anderson, Andy Gross

On 12/10/18 13:00, Rafael J. Wysocki wrote:
> On Mon, Dec 10, 2018 at 11:18 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
>>
>> Hi Rafael,
>>
>> On 12/10/18 11:04, Rafael J. Wysocki wrote:
>>> On Thu, Dec 6, 2018 at 3:55 PM Greg KH <gregkh@linuxfoundation.org> wrote:
>>>>
>>>> On Wed, Dec 05, 2018 at 12:41:35PM -0800, Evan Green wrote:
>>>>> On Tue, Nov 27, 2018 at 10:03 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
>>>>>>
>>>>>> Modern SoCs have multiple processors and various dedicated cores (video, gpu,
>>>>>> graphics, modem). These cores are talking to each other and can generate a
>>>>>> lot of data flowing through the on-chip interconnects. These interconnect
>>>>>> buses could form different topologies such as crossbar, point to point buses,
>>>>>> hierarchical buses or use the network-on-chip concept.
>>>>>>
>>>>>> These buses have been sized usually to handle use cases with high data
>>>>>> throughput but it is not necessary all the time and consume a lot of power.
>>>>>> Furthermore, the priority between masters can vary depending on the running
>>>>>> use case like video playback or CPU intensive tasks.
>>>>>>
>>>>>> Having an API to control the requirement of the system in terms of bandwidth
>>>>>> and QoS, so we can adapt the interconnect configuration to match those by
>>>>>> scaling the frequencies, setting link priority and tuning QoS parameters.
>>>>>> This configuration can be a static, one-time operation done at boot for some
>>>>>> platforms or a dynamic set of operations that happen at run-time.
>>>>>>
>>>>>> This patchset introduce a new API to get the requirement and configure the
>>>>>> interconnect buses across the entire chipset to fit with the current demand.
>>>>>> The API is NOT for changing the performance of the endpoint devices, but only
>>>>>> the interconnect path in between them.
>>>>>
>>>>> For what it's worth, we are ready to land this in Chrome OS. I think
>>>>> this series has been very well discussed and reviewed, hasn't changed
>>>>> much in the last few spins, and is in good enough shape to use as a
>>>>> base for future patches. Georgi's also done a great job reaching out
>>>>> to other SoC vendors, and there appears to be enough consensus that
>>>>> this framework will be usable by more than just Qualcomm. There are
>>>>> also several drivers out on the list trying to add patches to use this
>>>>> framework, with more to come, so it made sense (to us) to get this
>>>>> base framework nailed down. In my experiments this is an important
>>>>> piece of the overall power management story, especially on systems
>>>>> that are mostly idle.
>>>>>
>>>>> I'll continue to track changes to this series and we will ultimately
>>>>> reconcile with whatever happens upstream, but I thought it was worth
>>>>> sending this note to express our "thumbs up" towards this framework.
>>>>
>>>> Looks like a v11 will be forthcoming, so I'll wait for that one to apply
>>>> it to the tree if all looks good.
>>>
>>> I'm honestly not sure if it is ready yet.
>>>
>>> New versions are coming on and on, which may make such an impression,
>>> but we had some discussion on it at the LPC and some serious questions
>>> were asked during it, for instance regarding the DT binding introduced
>>> here.  I'm not sure how this particular issue has been addressed here,
>>> for example.
>>
>> There have been no changes in bindings since v4 (other than squashing
>> consumer and provider bindings into a single patch and fixing typos).
>>
>> The last DT comment was on v9 [1] where Rob wanted confirmation from
>> other SoC vendors that this works for them too. And now we have that
>> confirmation and there are patches posted on the list [2].
> 
> OK
> 
>> The second thing (also discussed at LPC) was about possible cases where
>> some consumer drivers can't calculate how much bandwidth they actually
>> need and how to address that. The proposal was to extend the OPP
>> bindings with one more property, but this is not part of this patchset.
>> It is a future step that needs more discussion on the mailing list. If a
>> driver really needs some bandwidth data now, it should be put into the
>> driver and not in DT. After we have enough consumers, we can discuss
>> again if it makes sense to extract something into DT or not.
> 
> That's fine by me.
> 
> Admittedly, I have some reservations regarding the extent to which
> this approach will turn out to be useful in practice, but I guess as
> long as there is enough traction, the best way to find out it to try
> and see. :-)
> 
> From now on I will assume that this series is going to be applied by Greg.

That was the initial idea, but the problem is that there is a recent
change in the cmd_db API (needed by the sdm845 provider driver), which
is going through arm-soc/qcom/drivers. So either Greg pulls also the
qcom-drivers-for-4.21 tag from Andy or the whole series goes via Olof
and Arnd. Maybe there are other options. I don't have any preference and
don't want to put extra burden on any maintainers, so I am ok with what
they prefer.

Thanks,
Georgi

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v10 0/8] Introduce on-chip interconnect API
  2018-12-10 14:50           ` Georgi Djakov
@ 2018-12-11  6:58             ` Greg Kroah-Hartman
  2018-12-17 11:17               ` Georgi Djakov
  0 siblings, 1 reply; 34+ messages in thread
From: Greg Kroah-Hartman @ 2018-12-11  6:58 UTC (permalink / raw)
  To: Georgi Djakov
  Cc: Rafael J. Wysocki, Arnd Bergmann, Olof Johansson, evgreen,
	Linux PM, Rafael J. Wysocki, Rob Herring, Michael Turquette,
	Kevin Hilman, Vincent Guittot, Saravana Kannan, bjorn.andersson,
	Amit Kucheria, seansw, daidavid1, Mark Rutland,
	Lorenzo Pieralisi, abailon, maxime.ripard, Thierry Reding,
	ksitaraman, sanjayc, devicetree, Linux Kernel Mailing List,
	Linux ARM, linux-arm-msm, linux-tegra, Doug Anderson, Andy Gross

On Mon, Dec 10, 2018 at 04:50:00PM +0200, Georgi Djakov wrote:
> On 12/10/18 13:00, Rafael J. Wysocki wrote:
> > On Mon, Dec 10, 2018 at 11:18 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
> >>
> >> Hi Rafael,
> >>
> >> On 12/10/18 11:04, Rafael J. Wysocki wrote:
> >>> On Thu, Dec 6, 2018 at 3:55 PM Greg KH <gregkh@linuxfoundation.org> wrote:
> >>>>
> >>>> On Wed, Dec 05, 2018 at 12:41:35PM -0800, Evan Green wrote:
> >>>>> On Tue, Nov 27, 2018 at 10:03 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
> >>>>>>
> >>>>>> Modern SoCs have multiple processors and various dedicated cores (video, gpu,
> >>>>>> graphics, modem). These cores are talking to each other and can generate a
> >>>>>> lot of data flowing through the on-chip interconnects. These interconnect
> >>>>>> buses could form different topologies such as crossbar, point to point buses,
> >>>>>> hierarchical buses or use the network-on-chip concept.
> >>>>>>
> >>>>>> These buses have been sized usually to handle use cases with high data
> >>>>>> throughput but it is not necessary all the time and consume a lot of power.
> >>>>>> Furthermore, the priority between masters can vary depending on the running
> >>>>>> use case like video playback or CPU intensive tasks.
> >>>>>>
> >>>>>> Having an API to control the requirement of the system in terms of bandwidth
> >>>>>> and QoS, so we can adapt the interconnect configuration to match those by
> >>>>>> scaling the frequencies, setting link priority and tuning QoS parameters.
> >>>>>> This configuration can be a static, one-time operation done at boot for some
> >>>>>> platforms or a dynamic set of operations that happen at run-time.
> >>>>>>
> >>>>>> This patchset introduce a new API to get the requirement and configure the
> >>>>>> interconnect buses across the entire chipset to fit with the current demand.
> >>>>>> The API is NOT for changing the performance of the endpoint devices, but only
> >>>>>> the interconnect path in between them.
> >>>>>
> >>>>> For what it's worth, we are ready to land this in Chrome OS. I think
> >>>>> this series has been very well discussed and reviewed, hasn't changed
> >>>>> much in the last few spins, and is in good enough shape to use as a
> >>>>> base for future patches. Georgi's also done a great job reaching out
> >>>>> to other SoC vendors, and there appears to be enough consensus that
> >>>>> this framework will be usable by more than just Qualcomm. There are
> >>>>> also several drivers out on the list trying to add patches to use this
> >>>>> framework, with more to come, so it made sense (to us) to get this
> >>>>> base framework nailed down. In my experiments this is an important
> >>>>> piece of the overall power management story, especially on systems
> >>>>> that are mostly idle.
> >>>>>
> >>>>> I'll continue to track changes to this series and we will ultimately
> >>>>> reconcile with whatever happens upstream, but I thought it was worth
> >>>>> sending this note to express our "thumbs up" towards this framework.
> >>>>
> >>>> Looks like a v11 will be forthcoming, so I'll wait for that one to apply
> >>>> it to the tree if all looks good.
> >>>
> >>> I'm honestly not sure if it is ready yet.
> >>>
> >>> New versions are coming on and on, which may make such an impression,
> >>> but we had some discussion on it at the LPC and some serious questions
> >>> were asked during it, for instance regarding the DT binding introduced
> >>> here.  I'm not sure how this particular issue has been addressed here,
> >>> for example.
> >>
> >> There have been no changes in bindings since v4 (other than squashing
> >> consumer and provider bindings into a single patch and fixing typos).
> >>
> >> The last DT comment was on v9 [1] where Rob wanted confirmation from
> >> other SoC vendors that this works for them too. And now we have that
> >> confirmation and there are patches posted on the list [2].
> > 
> > OK
> > 
> >> The second thing (also discussed at LPC) was about possible cases where
> >> some consumer drivers can't calculate how much bandwidth they actually
> >> need and how to address that. The proposal was to extend the OPP
> >> bindings with one more property, but this is not part of this patchset.
> >> It is a future step that needs more discussion on the mailing list. If a
> >> driver really needs some bandwidth data now, it should be put into the
> >> driver and not in DT. After we have enough consumers, we can discuss
> >> again if it makes sense to extract something into DT or not.
> > 
> > That's fine by me.
> > 
> > Admittedly, I have some reservations regarding the extent to which
> > this approach will turn out to be useful in practice, but I guess as
> > long as there is enough traction, the best way to find out is to try
> > and see. :-)
> > 
> > From now on I will assume that this series is going to be applied by Greg.
> 
> That was the initial idea, but the problem is that there is a recent
> change in the cmd_db API (needed by the sdm845 provider driver), which
> is going through arm-soc/qcom/drivers. So either Greg pulls also the
> qcom-drivers-for-4.21 tag from Andy or the whole series goes via Olof
> and Arnd. Maybe there are other options. I don't have any preference and
> don't want to put extra burden on any maintainers, so I am ok with what
> they prefer.

Let me take the time later this week to review the code, which I haven't
done in a while...

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v10 0/8] Introduce on-chip interconnect API
  2018-12-11  6:58             ` Greg Kroah-Hartman
@ 2018-12-17 11:17               ` Georgi Djakov
  2019-01-10 14:19                 ` Georgi Djakov
  0 siblings, 1 reply; 34+ messages in thread
From: Georgi Djakov @ 2018-12-17 11:17 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: Rafael J. Wysocki, Arnd Bergmann, Olof Johansson, evgreen,
	Linux PM, Rafael J. Wysocki, Rob Herring, Michael Turquette,
	Kevin Hilman, Vincent Guittot, Saravana Kannan, bjorn.andersson,
	Amit Kucheria, seansw, daidavid1, Mark Rutland,
	Lorenzo Pieralisi, abailon, maxime.ripard, Thierry Reding,
	ksitaraman, sanjayc, devicetree, Linux Kernel Mailing List,
	Linux ARM, linux-arm-msm, linux-tegra, Doug Anderson, Andy Gross

Hi Greg,

On 12/11/18 08:58, Greg Kroah-Hartman wrote:
> On Mon, Dec 10, 2018 at 04:50:00PM +0200, Georgi Djakov wrote:
>> On 12/10/18 13:00, Rafael J. Wysocki wrote:
>>> On Mon, Dec 10, 2018 at 11:18 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
>>>>
>>>> Hi Rafael,
>>>>
>>>> On 12/10/18 11:04, Rafael J. Wysocki wrote:
>>>>> On Thu, Dec 6, 2018 at 3:55 PM Greg KH <gregkh@linuxfoundation.org> wrote:
>>>>>>
>>>>>> On Wed, Dec 05, 2018 at 12:41:35PM -0800, Evan Green wrote:
>>>>>>> On Tue, Nov 27, 2018 at 10:03 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
>>>>>>>>
>>>>>>>> Modern SoCs have multiple processors and various dedicated cores (video, gpu,
>>>>>>>> graphics, modem). These cores are talking to each other and can generate a
>>>>>>>> lot of data flowing through the on-chip interconnects. These interconnect
>>>>>>>> buses could form different topologies such as crossbar, point to point buses,
>>>>>>>> hierarchical buses or use the network-on-chip concept.
>>>>>>>>
>>>>>>>> These buses have been sized usually to handle use cases with high data
>>>>>>>> throughput but it is not necessary all the time and consume a lot of power.
>>>>>>>> Furthermore, the priority between masters can vary depending on the running
>>>>>>>> use case like video playback or CPU intensive tasks.
>>>>>>>>
>>>>>>>> Having an API to control the requirement of the system in terms of bandwidth
>>>>>>>> and QoS, so we can adapt the interconnect configuration to match those by
>>>>>>>> scaling the frequencies, setting link priority and tuning QoS parameters.
>>>>>>>> This configuration can be a static, one-time operation done at boot for some
>>>>>>>> platforms or a dynamic set of operations that happen at run-time.
>>>>>>>>
>>>>>>>> This patchset introduce a new API to get the requirement and configure the
>>>>>>>> interconnect buses across the entire chipset to fit with the current demand.
>>>>>>>> The API is NOT for changing the performance of the endpoint devices, but only
>>>>>>>> the interconnect path in between them.
>>>>>>>
>>>>>>> For what it's worth, we are ready to land this in Chrome OS. I think
>>>>>>> this series has been very well discussed and reviewed, hasn't changed
>>>>>>> much in the last few spins, and is in good enough shape to use as a
>>>>>>> base for future patches. Georgi's also done a great job reaching out
>>>>>>> to other SoC vendors, and there appears to be enough consensus that
>>>>>>> this framework will be usable by more than just Qualcomm. There are
>>>>>>> also several drivers out on the list trying to add patches to use this
>>>>>>> framework, with more to come, so it made sense (to us) to get this
>>>>>>> base framework nailed down. In my experiments this is an important
>>>>>>> piece of the overall power management story, especially on systems
>>>>>>> that are mostly idle.
>>>>>>>
>>>>>>> I'll continue to track changes to this series and we will ultimately
>>>>>>> reconcile with whatever happens upstream, but I thought it was worth
>>>>>>> sending this note to express our "thumbs up" towards this framework.
>>>>>>
>>>>>> Looks like a v11 will be forthcoming, so I'll wait for that one to apply
>>>>>> it to the tree if all looks good.
>>>>>
>>>>> I'm honestly not sure if it is ready yet.
>>>>>
>>>>> New versions are coming on and on, which may make such an impression,
>>>>> but we had some discussion on it at the LPC and some serious questions
>>>>> were asked during it, for instance regarding the DT binding introduced
>>>>> here.  I'm not sure how this particular issue has been addressed here,
>>>>> for example.
>>>>
>>>> There have been no changes in bindings since v4 (other than squashing
>>>> consumer and provider bindings into a single patch and fixing typos).
>>>>
>>>> The last DT comment was on v9 [1] where Rob wanted confirmation from
>>>> other SoC vendors that this works for them too. And now we have that
>>>> confirmation and there are patches posted on the list [2].
>>>
>>> OK
>>>
>>>> The second thing (also discussed at LPC) was about possible cases where
>>>> some consumer drivers can't calculate how much bandwidth they actually
>>>> need and how to address that. The proposal was to extend the OPP
>>>> bindings with one more property, but this is not part of this patchset.
>>>> It is a future step that needs more discussion on the mailing list. If a
>>>> driver really needs some bandwidth data now, it should be put into the
>>>> driver and not in DT. After we have enough consumers, we can discuss
>>>> again if it makes sense to extract something into DT or not.
>>>
>>> That's fine by me.
>>>
>>> Admittedly, I have some reservations regarding the extent to which
>>> this approach will turn out to be useful in practice, but I guess as
>>> long as there is enough traction, the best way to find out is to try
>>> and see. :-)
>>>
>>> From now on I will assume that this series is going to be applied by Greg.
>>
>> That was the initial idea, but the problem is that there is a recent
>> change in the cmd_db API (needed by the sdm845 provider driver), which
>> is going through arm-soc/qcom/drivers. So either Greg pulls also the
>> qcom-drivers-for-4.21 tag from Andy or the whole series goes via Olof
>> and Arnd. Maybe there are other options. I don't have any preference and
>> don't want to put extra burden on any maintainers, so I am ok with what
>> they prefer.
> 
> Let me take the time later this week to review the code, which I haven't
> done in a while...
> 

When you get a chance to review, please keep in mind that the latest
version is v12 (from 08.Dec). The same is also available in linux-next
with no reported issues.

Thanks,
Georgi

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v10 0/8] Introduce on-chip interconnect API
  2018-12-17 11:17               ` Georgi Djakov
@ 2019-01-10 14:19                 ` Georgi Djakov
  2019-01-10 16:29                   ` Greg Kroah-Hartman
  0 siblings, 1 reply; 34+ messages in thread
From: Georgi Djakov @ 2019-01-10 14:19 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: Rafael J. Wysocki, Arnd Bergmann, Olof Johansson, evgreen,
	Linux PM, Rafael J. Wysocki, Rob Herring, Michael Turquette,
	Kevin Hilman, Vincent Guittot, Saravana Kannan, bjorn.andersson,
	Amit Kucheria, seansw, daidavid1, Mark Rutland,
	Lorenzo Pieralisi, abailon, maxime.ripard, Thierry Reding,
	ksitaraman, sanjayc, devicetree, Linux Kernel Mailing List,
	Linux ARM, linux-arm-msm, linux-tegra, Doug Anderson, Andy Gross

Hi Greg,

On 12/17/18 13:17, Georgi Djakov wrote:
> Hi Greg,
> 
> On 12/11/18 08:58, Greg Kroah-Hartman wrote:
>> On Mon, Dec 10, 2018 at 04:50:00PM +0200, Georgi Djakov wrote:
>>> On 12/10/18 13:00, Rafael J. Wysocki wrote:
>>>> On Mon, Dec 10, 2018 at 11:18 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
>>>>>
>>>>> Hi Rafael,
>>>>>
>>>>> On 12/10/18 11:04, Rafael J. Wysocki wrote:
>>>>>> On Thu, Dec 6, 2018 at 3:55 PM Greg KH <gregkh@linuxfoundation.org> wrote:
>>>>>>>
>>>>>>> On Wed, Dec 05, 2018 at 12:41:35PM -0800, Evan Green wrote:
>>>>>>>> On Tue, Nov 27, 2018 at 10:03 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
>>>>>>>>>
>>>>>>>>> Modern SoCs have multiple processors and various dedicated cores (video, gpu,
>>>>>>>>> graphics, modem). These cores are talking to each other and can generate a
>>>>>>>>> lot of data flowing through the on-chip interconnects. These interconnect
>>>>>>>>> buses could form different topologies such as crossbar, point to point buses,
>>>>>>>>> hierarchical buses or use the network-on-chip concept.
>>>>>>>>>
>>>>>>>>> These buses have been sized usually to handle use cases with high data
>>>>>>>>> throughput but it is not necessary all the time and consume a lot of power.
>>>>>>>>> Furthermore, the priority between masters can vary depending on the running
>>>>>>>>> use case like video playback or CPU intensive tasks.
>>>>>>>>>
>>>>>>>>> Having an API to control the requirement of the system in terms of bandwidth
>>>>>>>>> and QoS, so we can adapt the interconnect configuration to match those by
>>>>>>>>> scaling the frequencies, setting link priority and tuning QoS parameters.
>>>>>>>>> This configuration can be a static, one-time operation done at boot for some
>>>>>>>>> platforms or a dynamic set of operations that happen at run-time.
>>>>>>>>>
>>>>>>>>> This patchset introduce a new API to get the requirement and configure the
>>>>>>>>> interconnect buses across the entire chipset to fit with the current demand.
>>>>>>>>> The API is NOT for changing the performance of the endpoint devices, but only
>>>>>>>>> the interconnect path in between them.
>>>>>>>>
>>>>>>>> For what it's worth, we are ready to land this in Chrome OS. I think
>>>>>>>> this series has been very well discussed and reviewed, hasn't changed
>>>>>>>> much in the last few spins, and is in good enough shape to use as a
>>>>>>>> base for future patches. Georgi's also done a great job reaching out
>>>>>>>> to other SoC vendors, and there appears to be enough consensus that
>>>>>>>> this framework will be usable by more than just Qualcomm. There are
>>>>>>>> also several drivers out on the list trying to add patches to use this
>>>>>>>> framework, with more to come, so it made sense (to us) to get this
>>>>>>>> base framework nailed down. In my experiments this is an important
>>>>>>>> piece of the overall power management story, especially on systems
>>>>>>>> that are mostly idle.
>>>>>>>>
>>>>>>>> I'll continue to track changes to this series and we will ultimately
>>>>>>>> reconcile with whatever happens upstream, but I thought it was worth
>>>>>>>> sending this note to express our "thumbs up" towards this framework.
>>>>>>>
>>>>>>> Looks like a v11 will be forthcoming, so I'll wait for that one to apply
>>>>>>> it to the tree if all looks good.
>>>>>>
>>>>>> I'm honestly not sure if it is ready yet.
>>>>>>
>>>>>> New versions are coming on and on, which may make such an impression,
>>>>>> but we had some discussion on it at the LPC and some serious questions
>>>>>> were asked during it, for instance regarding the DT binding introduced
>>>>>> here.  I'm not sure how this particular issue has been addressed here,
>>>>>> for example.
>>>>>
>>>>> There have been no changes in bindings since v4 (other than squashing
>>>>> consumer and provider bindings into a single patch and fixing typos).
>>>>>
>>>>> The last DT comment was on v9 [1] where Rob wanted confirmation from
>>>>> other SoC vendors that this works for them too. And now we have that
>>>>> confirmation and there are patches posted on the list [2].
>>>>
>>>> OK
>>>>
>>>>> The second thing (also discussed at LPC) was about possible cases where
>>>>> some consumer drivers can't calculate how much bandwidth they actually
>>>>> need and how to address that. The proposal was to extend the OPP
>>>>> bindings with one more property, but this is not part of this patchset.
>>>>> It is a future step that needs more discussion on the mailing list. If a
>>>>> driver really needs some bandwidth data now, it should be put into the
>>>>> driver and not in DT. After we have enough consumers, we can discuss
>>>>> again if it makes sense to extract something into DT or not.
>>>>
>>>> That's fine by me.
>>>>
>>>> Admittedly, I have some reservations regarding the extent to which
>>>> this approach will turn out to be useful in practice, but I guess as
>>>> long as there is enough traction, the best way to find out it to try
>>>> and see. :-)
>>>>
>>>> From now on I will assume that this series is going to be applied by Greg.
>>>
>>> That was the initial idea, but the problem is that there is a recent
>>> change in the cmd_db API (needed by the sdm845 provider driver), which
>>> is going through arm-soc/qcom/drivers. So either Greg pulls also the
>>> qcom-drivers-for-4.21 tag from Andy or the whole series goes via Olof
>>> and Arnd. Maybe there are other options. I don't have any preference and
>>> don't want to put extra burden on any maintainers, so i am ok with what
>>> they prefer.
>>
>> Let me take the time later this week to review the code, which I haven't
>> done in a while...
>>
> 
> When you get a chance to review, please keep in mind that the latest
> version is v12 (from 08.Dec). The same is also available in linux-next
> with no reported issues.

The dependencies for this patchset have already been merged in v5.0-rc1,
so I was wondering whether this could still go into -rc2? Various patches
that use this API have already been posted, and having it in sooner would
make dealing with dependencies and merge paths a bit easier during the
next merge window. Otherwise I can just rebase and resend everything
targeting v5.1.
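
For reference, consumer usage is roughly the following -- a minimal
sketch assuming the consumer-facing calls are of_icc_get(), icc_set_bw()
and icc_put() (names may differ slightly between revisions); the path
name and bandwidth numbers are purely illustrative:

#include <linux/device.h>
#include <linux/err.h>
#include <linux/interconnect.h>

static int foo_request_bandwidth(struct device *dev)
{
	struct icc_path *path;
	int ret;

	/* Look up the path to memory described in the consumer's DT node */
	path = of_icc_get(dev, "dma-mem");
	if (IS_ERR(path))
		return PTR_ERR(path);

	/* Illustrative values: 1 GB/s average, 2 GB/s peak, in kB/s */
	ret = icc_set_bw(path, 1000000, 2000000);
	if (ret)
		dev_err(dev, "failed to set interconnect bandwidth\n");

	icc_put(path);
	return ret;
}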

Thanks,
Georgi

* Re: [PATCH v10 0/8] Introduce on-chip interconnect API
  2019-01-10 14:19                 ` Georgi Djakov
@ 2019-01-10 16:29                   ` Greg Kroah-Hartman
  2019-01-10 16:34                     ` Georgi Djakov
  0 siblings, 1 reply; 34+ messages in thread
From: Greg Kroah-Hartman @ 2019-01-10 16:29 UTC (permalink / raw)
  To: Georgi Djakov
  Cc: Rafael J. Wysocki, Arnd Bergmann, Olof Johansson, evgreen,
	Linux PM, Rafael J. Wysocki, Rob Herring, Michael Turquette,
	Kevin Hilman, Vincent Guittot, Saravana Kannan, bjorn.andersson,
	Amit Kucheria, seansw, daidavid1, Mark Rutland,
	Lorenzo Pieralisi, abailon, maxime.ripard, Thierry Reding,
	ksitaraman, sanjayc, devicetree, Linux Kernel Mailing List,
	Linux ARM, linux-arm-msm, linux-tegra, Doug Anderson, Andy Gross

On Thu, Jan 10, 2019 at 04:19:14PM +0200, Georgi Djakov wrote:
> Hi Greg,
> 
> On 12/17/18 13:17, Georgi Djakov wrote:
> > Hi Greg,
> > 
> > On 12/11/18 08:58, Greg Kroah-Hartman wrote:
> >> On Mon, Dec 10, 2018 at 04:50:00PM +0200, Georgi Djakov wrote:
> >>> On 12/10/18 13:00, Rafael J. Wysocki wrote:
> >>>> On Mon, Dec 10, 2018 at 11:18 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
> >>>>>
> >>>>> Hi Rafael,
> >>>>>
> >>>>> On 12/10/18 11:04, Rafael J. Wysocki wrote:
> >>>>>> On Thu, Dec 6, 2018 at 3:55 PM Greg KH <gregkh@linuxfoundation.org> wrote:
> >>>>>>>
> >>>>>>> On Wed, Dec 05, 2018 at 12:41:35PM -0800, Evan Green wrote:
> >>>>>>>> On Tue, Nov 27, 2018 at 10:03 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
> >>>>>>>>>
> >>>>>>>>> Modern SoCs have multiple processors and various dedicated cores (video, gpu,
> >>>>>>>>> graphics, modem). These cores are talking to each other and can generate a
> >>>>>>>>> lot of data flowing through the on-chip interconnects. These interconnect
> >>>>>>>>> buses could form different topologies such as crossbar, point to point buses,
> >>>>>>>>> hierarchical buses or use the network-on-chip concept.
> >>>>>>>>>
> >>>>>>>>> These buses have been sized usually to handle use cases with high data
> >>>>>>>>> throughput but it is not necessary all the time and consume a lot of power.
> >>>>>>>>> Furthermore, the priority between masters can vary depending on the running
> >>>>>>>>> use case like video playback or CPU intensive tasks.
> >>>>>>>>>
> >>>>>>>>> Having an API to control the requirement of the system in terms of bandwidth
> >>>>>>>>> and QoS, so we can adapt the interconnect configuration to match those by
> >>>>>>>>> scaling the frequencies, setting link priority and tuning QoS parameters.
> >>>>>>>>> This configuration can be a static, one-time operation done at boot for some
> >>>>>>>>> platforms or a dynamic set of operations that happen at run-time.
> >>>>>>>>>
> >>>>>>>>> This patchset introduce a new API to get the requirement and configure the
> >>>>>>>>> interconnect buses across the entire chipset to fit with the current demand.
> >>>>>>>>> The API is NOT for changing the performance of the endpoint devices, but only
> >>>>>>>>> the interconnect path in between them.
> >>>>>>>>
> >>>>>>>> For what it's worth, we are ready to land this in Chrome OS. I think
> >>>>>>>> this series has been very well discussed and reviewed, hasn't changed
> >>>>>>>> much in the last few spins, and is in good enough shape to use as a
> >>>>>>>> base for future patches. Georgi's also done a great job reaching out
> >>>>>>>> to other SoC vendors, and there appears to be enough consensus that
> >>>>>>>> this framework will be usable by more than just Qualcomm. There are
> >>>>>>>> also several drivers out on the list trying to add patches to use this
> >>>>>>>> framework, with more to come, so it made sense (to us) to get this
> >>>>>>>> base framework nailed down. In my experiments this is an important
> >>>>>>>> piece of the overall power management story, especially on systems
> >>>>>>>> that are mostly idle.
> >>>>>>>>
> >>>>>>>> I'll continue to track changes to this series and we will ultimately
> >>>>>>>> reconcile with whatever happens upstream, but I thought it was worth
> >>>>>>>> sending this note to express our "thumbs up" towards this framework.
> >>>>>>>
> >>>>>>> Looks like a v11 will be forthcoming, so I'll wait for that one to apply
> >>>>>>> it to the tree if all looks good.
> >>>>>>
> >>>>>> I'm honestly not sure if it is ready yet.
> >>>>>>
> >>>>>> New versions are coming on and on, which may make such an impression,
> >>>>>> but we had some discussion on it at the LPC and some serious questions
> >>>>>> were asked during it, for instance regarding the DT binding introduced
> >>>>>> here.  I'm not sure how this particular issue has been addressed here,
> >>>>>> for example.
> >>>>>
> >>>>> There have been no changes in bindings since v4 (other than squashing
> >>>>> consumer and provider bindings into a single patch and fixing typos).
> >>>>>
> >>>>> The last DT comment was on v9 [1] where Rob wanted confirmation from
> >>>>> other SoC vendors that this works for them too. And now we have that
> >>>>> confirmation and there are patches posted on the list [2].
> >>>>
> >>>> OK
> >>>>
> >>>>> The second thing (also discussed at LPC) was about possible cases where
> >>>>> some consumer drivers can't calculate how much bandwidth they actually
> >>>>> need and how to address that. The proposal was to extend the OPP
> >>>>> bindings with one more property, but this is not part of this patchset.
> >>>>> It is a future step that needs more discussion on the mailing list. If a
> >>>>> driver really needs some bandwidth data now, it should be put into the
> >>>>> driver and not in DT. After we have enough consumers, we can discuss
> >>>>> again if it makes sense to extract something into DT or not.
> >>>>
> >>>> That's fine by me.
> >>>>
> >>>> Admittedly, I have some reservations regarding the extent to which
> >>>> this approach will turn out to be useful in practice, but I guess as
> >>>> long as there is enough traction, the best way to find out it to try
> >>>> and see. :-)
> >>>>
> >>>> From now on I will assume that this series is going to be applied by Greg.
> >>>
> >>> That was the initial idea, but the problem is that there is a recent
> >>> change in the cmd_db API (needed by the sdm845 provider driver), which
> >>> is going through arm-soc/qcom/drivers. So either Greg pulls also the
> >>> qcom-drivers-for-4.21 tag from Andy or the whole series goes via Olof
> >>> and Arnd. Maybe there are other options. I don't have any preference and
> >>> don't want to put extra burden on any maintainers, so i am ok with what
> >>> they prefer.
> >>
> >> Let me take the time later this week to review the code, which I haven't
> >> done in a while...
> >>
> > 
> > When you get a chance to review, please keep in mind that the latest
> > version is v12 (from 08.Dec). The same is also available in linux-next
> > with no reported issues.
> 
> The dependencies for this patchset have been already merged in v5.0-rc1,
> so i was wondering if this can still go into -rc2? Various patches that
> use this API are already posted and having it sooner will make dealing
> with dependencies and merge paths a bit easier during the next merge
> window. Or i can just rebase and resend everything targeting v5.1.

We can't add new features after -rc1, sorry.

Please rebase and resend to target v5.1.

thanks,

greg k-h

* Re: [PATCH v10 0/8] Introduce on-chip interconnect API
  2019-01-10 16:29                   ` Greg Kroah-Hartman
@ 2019-01-10 16:34                     ` Georgi Djakov
  0 siblings, 0 replies; 34+ messages in thread
From: Georgi Djakov @ 2019-01-10 16:34 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: Rafael J. Wysocki, Arnd Bergmann, Olof Johansson, evgreen,
	Linux PM, Rafael J. Wysocki, Rob Herring, Michael Turquette,
	Kevin Hilman, Vincent Guittot, Saravana Kannan, bjorn.andersson,
	Amit Kucheria, seansw, daidavid1, Mark Rutland,
	Lorenzo Pieralisi, abailon, maxime.ripard, Thierry Reding,
	ksitaraman, sanjayc, devicetree, Linux Kernel Mailing List,
	Linux ARM, linux-arm-msm, linux-tegra, Doug Anderson, Andy Gross

On 1/10/19 18:29, Greg Kroah-Hartman wrote:
> On Thu, Jan 10, 2019 at 04:19:14PM +0200, Georgi Djakov wrote:
>> Hi Greg,
>>
>> On 12/17/18 13:17, Georgi Djakov wrote:
>>> Hi Greg,
>>>
>>> On 12/11/18 08:58, Greg Kroah-Hartman wrote:
>>>> On Mon, Dec 10, 2018 at 04:50:00PM +0200, Georgi Djakov wrote:
>>>>> On 12/10/18 13:00, Rafael J. Wysocki wrote:
>>>>>> On Mon, Dec 10, 2018 at 11:18 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
>>>>>>>
>>>>>>> Hi Rafael,
>>>>>>>
>>>>>>> On 12/10/18 11:04, Rafael J. Wysocki wrote:
>>>>>>>> On Thu, Dec 6, 2018 at 3:55 PM Greg KH <gregkh@linuxfoundation.org> wrote:
>>>>>>>>>
>>>>>>>>> On Wed, Dec 05, 2018 at 12:41:35PM -0800, Evan Green wrote:
>>>>>>>>>> On Tue, Nov 27, 2018 at 10:03 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
>>>>>>>>>>>
>>>>>>>>>>> Modern SoCs have multiple processors and various dedicated cores (video, gpu,
>>>>>>>>>>> graphics, modem). These cores are talking to each other and can generate a
>>>>>>>>>>> lot of data flowing through the on-chip interconnects. These interconnect
>>>>>>>>>>> buses could form different topologies such as crossbar, point to point buses,
>>>>>>>>>>> hierarchical buses or use the network-on-chip concept.
>>>>>>>>>>>
>>>>>>>>>>> These buses have been sized usually to handle use cases with high data
>>>>>>>>>>> throughput but it is not necessary all the time and consume a lot of power.
>>>>>>>>>>> Furthermore, the priority between masters can vary depending on the running
>>>>>>>>>>> use case like video playback or CPU intensive tasks.
>>>>>>>>>>>
>>>>>>>>>>> Having an API to control the requirement of the system in terms of bandwidth
>>>>>>>>>>> and QoS, so we can adapt the interconnect configuration to match those by
>>>>>>>>>>> scaling the frequencies, setting link priority and tuning QoS parameters.
>>>>>>>>>>> This configuration can be a static, one-time operation done at boot for some
>>>>>>>>>>> platforms or a dynamic set of operations that happen at run-time.
>>>>>>>>>>>
>>>>>>>>>>> This patchset introduce a new API to get the requirement and configure the
>>>>>>>>>>> interconnect buses across the entire chipset to fit with the current demand.
>>>>>>>>>>> The API is NOT for changing the performance of the endpoint devices, but only
>>>>>>>>>>> the interconnect path in between them.
>>>>>>>>>>
>>>>>>>>>> For what it's worth, we are ready to land this in Chrome OS. I think
>>>>>>>>>> this series has been very well discussed and reviewed, hasn't changed
>>>>>>>>>> much in the last few spins, and is in good enough shape to use as a
>>>>>>>>>> base for future patches. Georgi's also done a great job reaching out
>>>>>>>>>> to other SoC vendors, and there appears to be enough consensus that
>>>>>>>>>> this framework will be usable by more than just Qualcomm. There are
>>>>>>>>>> also several drivers out on the list trying to add patches to use this
>>>>>>>>>> framework, with more to come, so it made sense (to us) to get this
>>>>>>>>>> base framework nailed down. In my experiments this is an important
>>>>>>>>>> piece of the overall power management story, especially on systems
>>>>>>>>>> that are mostly idle.
>>>>>>>>>>
>>>>>>>>>> I'll continue to track changes to this series and we will ultimately
>>>>>>>>>> reconcile with whatever happens upstream, but I thought it was worth
>>>>>>>>>> sending this note to express our "thumbs up" towards this framework.
>>>>>>>>>
>>>>>>>>> Looks like a v11 will be forthcoming, so I'll wait for that one to apply
>>>>>>>>> it to the tree if all looks good.
>>>>>>>>
>>>>>>>> I'm honestly not sure if it is ready yet.
>>>>>>>>
>>>>>>>> New versions are coming on and on, which may make such an impression,
>>>>>>>> but we had some discussion on it at the LPC and some serious questions
>>>>>>>> were asked during it, for instance regarding the DT binding introduced
>>>>>>>> here.  I'm not sure how this particular issue has been addressed here,
>>>>>>>> for example.
>>>>>>>
>>>>>>> There have been no changes in bindings since v4 (other than squashing
>>>>>>> consumer and provider bindings into a single patch and fixing typos).
>>>>>>>
>>>>>>> The last DT comment was on v9 [1] where Rob wanted confirmation from
>>>>>>> other SoC vendors that this works for them too. And now we have that
>>>>>>> confirmation and there are patches posted on the list [2].
>>>>>>
>>>>>> OK
>>>>>>
>>>>>>> The second thing (also discussed at LPC) was about possible cases where
>>>>>>> some consumer drivers can't calculate how much bandwidth they actually
>>>>>>> need and how to address that. The proposal was to extend the OPP
>>>>>>> bindings with one more property, but this is not part of this patchset.
>>>>>>> It is a future step that needs more discussion on the mailing list. If a
>>>>>>> driver really needs some bandwidth data now, it should be put into the
>>>>>>> driver and not in DT. After we have enough consumers, we can discuss
>>>>>>> again if it makes sense to extract something into DT or not.
>>>>>>
>>>>>> That's fine by me.
>>>>>>
>>>>>> Admittedly, I have some reservations regarding the extent to which
>>>>>> this approach will turn out to be useful in practice, but I guess as
>>>>>> long as there is enough traction, the best way to find out it to try
>>>>>> and see. :-)
>>>>>>
>>>>>> From now on I will assume that this series is going to be applied by Greg.
>>>>>
>>>>> That was the initial idea, but the problem is that there is a recent
>>>>> change in the cmd_db API (needed by the sdm845 provider driver), which
>>>>> is going through arm-soc/qcom/drivers. So either Greg pulls also the
>>>>> qcom-drivers-for-4.21 tag from Andy or the whole series goes via Olof
>>>>> and Arnd. Maybe there are other options. I don't have any preference and
>>>>> don't want to put extra burden on any maintainers, so i am ok with what
>>>>> they prefer.
>>>>
>>>> Let me take the time later this week to review the code, which I haven't
>>>> done in a while...
>>>>
>>>
>>> When you get a chance to review, please keep in mind that the latest
>>> version is v12 (from 08.Dec). The same is also available in linux-next
>>> with no reported issues.
>>
>> The dependencies for this patchset have been already merged in v5.0-rc1,
>> so i was wondering if this can still go into -rc2? Various patches that
>> use this API are already posted and having it sooner will make dealing
>> with dependencies and merge paths a bit easier during the next merge
>> window. Or i can just rebase and resend everything targeting v5.1.
> 
> We can't add new features after -rc1, sorry.
> 
> Please rebase and resend to target 5.1

OK, I was expecting that. Thanks for confirming!

BR,
Georgi

end of thread, other threads:[~2019-01-10 16:34 UTC | newest]

Thread overview: 34+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-11-27 18:03 [PATCH v10 0/8] Introduce on-chip interconnect API Georgi Djakov
2018-11-27 18:03 ` [PATCH v10 1/7] interconnect: Add generic " Georgi Djakov
2018-11-27 18:35   ` Joe Perches
2018-11-28 18:18     ` Georgi Djakov
2018-12-01  0:38   ` Evan Green
2018-12-05 15:57     ` Georgi Djakov
2018-12-05 16:16   ` Rob Herring
2018-12-07 15:24     ` Georgi Djakov
2018-11-27 18:03 ` [PATCH v10 2/7] dt-bindings: Introduce interconnect binding Georgi Djakov
2018-12-01  0:38   ` Evan Green
2018-11-27 18:03 ` [PATCH v10 3/7] interconnect: Allow endpoints translation via DT Georgi Djakov
2018-12-01  0:38   ` Evan Green
2018-12-05 15:59     ` Georgi Djakov
2018-11-27 18:03 ` [PATCH v10 4/7] interconnect: Add debugfs support Georgi Djakov
2018-11-27 18:03 ` [PATCH v10 5/7] interconnect: qcom: Add sdm845 interconnect provider driver Georgi Djakov
2018-12-01  0:39   ` Evan Green
2018-12-05 16:00     ` Georgi Djakov
2018-12-06 21:53       ` David Dai
2018-11-27 18:03 ` [PATCH v10 6/7] arm64: dts: sdm845: Add interconnect provider DT nodes Georgi Djakov
2018-12-01  0:39   ` Evan Green
2018-12-05 16:01     ` Georgi Djakov
2018-11-27 18:03 ` [PATCH v10 7/7] MAINTAINERS: add a maintainer for the interconnect API Georgi Djakov
2018-12-05 20:41 ` [PATCH v10 0/8] Introduce on-chip " Evan Green
2018-12-06 14:55   ` Greg KH
2018-12-07 10:06     ` Georgi Djakov
2018-12-10  9:04     ` Rafael J. Wysocki
2018-12-10 10:18       ` Georgi Djakov
2018-12-10 11:00         ` Rafael J. Wysocki
2018-12-10 14:50           ` Georgi Djakov
2018-12-11  6:58             ` Greg Kroah-Hartman
2018-12-17 11:17               ` Georgi Djakov
2019-01-10 14:19                 ` Georgi Djakov
2019-01-10 16:29                   ` Greg Kroah-Hartman
2019-01-10 16:34                     ` Georgi Djakov

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).