* [PATCH v7 0/8] Introduce on-chip interconnect API
@ 2018-07-31 16:13 Georgi Djakov
  2018-07-31 16:13 ` [PATCH v7 1/8] interconnect: Add generic " Georgi Djakov
                   ` (7 more replies)
  0 siblings, 8 replies; 30+ messages in thread
From: Georgi Djakov @ 2018-07-31 16:13 UTC (permalink / raw)
  To: linux-pm, gregkh
  Cc: rjw, robh+dt, mturquette, khilman, vincent.guittot, skannan,
	bjorn.andersson, amit.kucheria, seansw, daidavid1, evgreen,
	mark.rutland, lorenzo.pieralisi, abailon, arnd, linux-kernel,
	linux-arm-kernel, linux-arm-msm, georgi.djakov

Modern SoCs have multiple processors and various dedicated cores (video, gpu,
graphics, modem). These cores talk to each other and can generate a lot of
data flowing through the on-chip interconnects. These interconnect buses can
form different topologies such as crossbars, point-to-point links,
hierarchical buses, or networks-on-chip.

These buses are usually sized to handle use cases with high data throughput,
but such throughput is not needed all the time, and running at full capacity
wastes power. Furthermore, the priority between masters can vary depending on
the running use case, such as video playback or CPU-intensive tasks.

Having an API to express the bandwidth and QoS requirements of the system
lets us adapt the interconnect configuration to match them by scaling
frequencies, setting link priorities and tuning QoS parameters. This
configuration can be a static, one-time operation done at boot for some
platforms, or a dynamic set of operations that happen at run-time.

This patchset introduces a new API to gather the requirements and configure
the interconnect buses across the entire chipset to fit the current demand.
The API is NOT for changing the performance of the endpoint devices, but only
of the interconnect path in between them.

The API uses a consumer/provider-based model, where the providers are the
interconnect buses and the consumers could be various drivers. The consumers
request interconnect resources (a path) to an endpoint and set the desired
constraints on this data flow path. The providers receive requests from
consumers and aggregate these requests for all master-slave pairs on that
path. Then the providers configure each node participating in the topology
according to the requested data flow path, physical links and constraints.
The topology can be complicated and multi-tiered and is SoC specific.
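
A minimal consumer-side sketch of the API (illustration only; the endpoint
node ids are taken from the dt-bindings header added later in this series,
and the bandwidth values are made up):

#include <linux/err.h>
#include <linux/interconnect.h>
#include <dt-bindings/interconnect/qcom.h>

static int example_request_bw(struct device *dev)
{
	struct icc_path *path;
	int ret;

	/* request a path between two endpoint node ids */
	path = icc_get(dev, MASTER_AMPSS_M0, SLAVE_EBI_CH0);
	if (IS_ERR(path))
		return PTR_ERR(path);

	/* express average/peak bandwidth needs (in kbps) */
	ret = icc_set(path, 1000, 2000);

	/* release the path and drop the constraints when done */
	icc_put(path);

	return ret;
}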

Below is a simplified diagram of a real-world SoC topology. The interconnect
providers are the NoCs.

+----------------+    +----------------+
| HW Accelerator |--->|      M NoC     |<---------------+
+----------------+    +----------------+                |
                        |      |                    +------------+
 +-----+  +-------------+      V       +------+     |            |
 | DDR |  |                +--------+  | PCIe |     |            |
 +-----+  |                | Slaves |  +------+     |            |
   ^ ^    |                +--------+     |         |   C NoC    |
   | |    V                               V         |            |
+------------------+   +------------------------+   |            |   +-----+
|                  |-->|                        |-->|            |-->| CPU |
|                  |-->|                        |<--|            |   +-----+
|     Mem NoC      |   |         S NoC          |   +------------+
|                  |<--|                        |---------+    |
|                  |<--|                        |<------+ |    |   +--------+
+------------------+   +------------------------+       | |    +-->| Slaves |
  ^  ^    ^    ^          ^                             | |        +--------+
  |  |    |    |          |                             | V
+------+  |  +-----+   +-----+  +---------+   +----------------+   +--------+
| CPUs |  |  | GPU |   | DSP |  | Masters |-->|       P NoC    |-->| Slaves |
+------+  |  +-----+   +-----+  +---------+   +----------------+   +--------+
          |
      +-------+
      | Modem |
      +-------+

TODO:
* Create icc_set_extended() to handle parameters such as latency and other
  QoS values.
* Convert from using global node identifiers to local per provider ids.
* Cache the path between the nodes instead of walking the graph on each get().
* Sync interconnect requests with the idle state of the device.

Changes since patchset v6 (https://lkml.org/lkml/2018/7/9/698)
* [patches 1,6]: Move the aggregation within the provider from the framework to
  the platform driver's set() callback, as the aggregation point could be SoC
  specific.
* [patch 1]: Include missing header, reset state only of the traversed nodes,
  move more code into path_init(), add more asserts, move misplaced mutex,
  simplify icc_link_destroy() (Evan)
* [patch 1]: Fix the order of requests to go from source to destination. (Alex)
* [patch 7]: Use better wording in the documentation. (Evan)
* [patch 6]: Reorder struct members, sort nodes alphabetically, improve naming
  of variables, add missing clk_disable_unprepare() in error paths. (Matthias)
* [patch 6]: Remove redundant NULL pointer check in msm8916 driver. (Alex)
* [patch 6]: Add missing depend on QCOM_SMD_RPM in Kconfig. (Evan)
* [patch 3]: Don't check for errors on debugfs calls, remove debugfs directory
  when the module is unloaded (Greg)

Changes since patchset v5 (https://lkml.org/lkml/2018/6/20/453)
* Fix the modular build, make rpm-smd driver a module.
* Optimize locking and move to higher level. (Evan)
* Code cleanups. Fix typos. (Evan, Matthias)
* Add the source node to the path. (Evan)
* Rename path_allocate() to path_init() with minor refactoring. (Evan)
* Rename *_remove() functions to *_destroy().
* Return fixed errors in icc_link_destroy(). (Evan)
* Fix krealloc() usage in icc_link_destroy(). (Evan)
* Add missing kfree() in icc_node_create(). (Matthias)
* Make icc_node_add() return void. (Matthias)
* Change mutex_init to mutex_lock in icc_provider_add(). (Matthias)
* Add new icc_node_del() function to delete nodes from provider.
* Fix the header guard to reflect the path in smd-rpm.h. (Evan)
* Check for errors returned by qcom_icc_rpm_smd_send(). (Evan)
* Propagate the error of icc_provider_del(). (Evan)

Changes since patchset v4 (https://lkml.org/lkml/2018/3/9/856)
* Simplified locking by using a single global mutex. (Evan)
* Changed the aggregation function interface.
* Implemented functions for node, link, provider removal. (Evan)
* Naming changes on variables and functions, removed redundant code. (Evan)
* Fixes and clarifications in the docs. (Matthias, Evan, Amit, Alexandre)
* Removed mandatory reg DT property, made interconnect-names optional. (Bjorn)
* Made interconnect-cells property required to align with other bindings. (Neil)
* Moved msm8916 specific bindings into a separate file and patch. (Bjorn)
* Use the names, instead of the hardcoded ids for topology. (Matthias)
* Init the node before creating the links. (Evan)
* Added icc_units_to_bps macro. (Amit)

Changes since patchset v3 (https://lkml.org/lkml/2017/9/8/544)
* Refactored the constraints aggregation.
* Use the IDR API.
* Split the provider and consumer bindings into separate patches and propose
  new bindings for consumers, which allow specifying the local source port.
* Adopted the icc_ prefix for API functions.
* Introduced separate API functions for creating interconnect nodes and links.
* Added DT lookup support in addition to platform data.
* Dropped the event tracing patch for now.
* Added a patch to provide summary via debugfs.
* Use macro for the list of topology definitions in the platform driver.
* Various minor changes.

Changes since patchset v2 (https://lkml.org/lkml/2017/7/20/825)
* Split the aggregation into per node and per provider. Cache the
  aggregated values.
* Various small refactorings and cleanups in the framework.
* Added a patch introducing basic tracepoint support for monitoring
  the time required to update the interconnect nodes.

Changes since patchset v1 (https://lkml.org/lkml/2017/6/27/890)
* Updates in the documentation.
* Changes in request aggregation, locking.
* Dropped the aggregate() callback and use the default, as it is currently
  sufficient for the single vendor driver. Will add it later when needed.
* Dropped the dt-bindings draft patch for now.

Changes since RFC v2 (https://lkml.org/lkml/2017/6/12/316)
* Converted documentation to rst format.
* Fixed an incorrect call to mutex_lock. Renamed max_bw to peak_bw.

Changes since RFC v1 (https://lkml.org/lkml/2017/5/15/605)
* Refactored code into shorter functions.
* Added a new aggregate() API function.
* Rearranged some structs to reduce padding bytes.

Changes since RFC v0 (https://lkml.org/lkml/2017/3/1/599)
* Removed DT support and added optional Patch 3 with new bindings proposal.
* Converted the topology into internal driver data.
* Made the framework modular.
* interconnect_get() now takes src and dst ports as arguments.
* Removed public declarations of some structs.
* Now passing prev/next nodes to the vendor driver.
* Properly remove requests on _put().
* Added refcounting.
* Updated documentation.
* Changed struct interconnect_path to use array instead of linked list.

Georgi Djakov (8):
  interconnect: Add generic on-chip interconnect API
  dt-bindings: Introduce interconnect provider bindings
  interconnect: Add debugfs support
  interconnect: qcom: Add RPM communication
  dt-bindings: interconnect: Document qcom,msm8916 NoC bindings
  interconnect: qcom: Add msm8916 interconnect provider driver
  dt-bindings: Introduce interconnect consumers bindings
  interconnect: Allow endpoints translation via DT

 .../bindings/interconnect/interconnect.txt    |  60 ++
 .../bindings/interconnect/qcom-msm8916.txt    |  39 +
 .../bindings/interconnect/qcom-smd.txt        |  32 +
 Documentation/interconnect/interconnect.rst   |  96 +++
 drivers/Kconfig                               |   2 +
 drivers/Makefile                              |   1 +
 drivers/interconnect/Kconfig                  |  15 +
 drivers/interconnect/Makefile                 |   3 +
 drivers/interconnect/core.c                   | 709 ++++++++++++++++++
 drivers/interconnect/qcom/Kconfig             |  23 +
 drivers/interconnect/qcom/Makefile            |   4 +
 drivers/interconnect/qcom/msm8916.c           | 510 +++++++++++++
 drivers/interconnect/qcom/smd-rpm.c           |  91 +++
 drivers/interconnect/qcom/smd-rpm.h           |  16 +
 include/dt-bindings/interconnect/qcom.h       | 187 +++++
 include/linux/interconnect-provider.h         | 125 +++
 include/linux/interconnect.h                  |  49 ++
 17 files changed, 1962 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/interconnect/interconnect.txt
 create mode 100644 Documentation/devicetree/bindings/interconnect/qcom-msm8916.txt
 create mode 100644 Documentation/devicetree/bindings/interconnect/qcom-smd.txt
 create mode 100644 Documentation/interconnect/interconnect.rst
 create mode 100644 drivers/interconnect/Kconfig
 create mode 100644 drivers/interconnect/Makefile
 create mode 100644 drivers/interconnect/core.c
 create mode 100644 drivers/interconnect/qcom/Kconfig
 create mode 100644 drivers/interconnect/qcom/Makefile
 create mode 100644 drivers/interconnect/qcom/msm8916.c
 create mode 100644 drivers/interconnect/qcom/smd-rpm.c
 create mode 100644 drivers/interconnect/qcom/smd-rpm.h
 create mode 100644 include/dt-bindings/interconnect/qcom.h
 create mode 100644 include/linux/interconnect-provider.h
 create mode 100644 include/linux/interconnect.h



* [PATCH v7 1/8] interconnect: Add generic on-chip interconnect API
  2018-07-31 16:13 [PATCH v7 0/8] Introduce on-chip interconnect API Georgi Djakov
@ 2018-07-31 16:13 ` Georgi Djakov
  2018-08-02  0:05   ` Randy Dunlap
  2018-08-03 22:59   ` Evan Green
  2018-07-31 16:13 ` [PATCH v7 2/8] dt-bindings: Introduce interconnect provider bindings Georgi Djakov
                   ` (6 subsequent siblings)
  7 siblings, 2 replies; 30+ messages in thread
From: Georgi Djakov @ 2018-07-31 16:13 UTC (permalink / raw)
  To: linux-pm, gregkh
  Cc: rjw, robh+dt, mturquette, khilman, vincent.guittot, skannan,
	bjorn.andersson, amit.kucheria, seansw, daidavid1, evgreen,
	mark.rutland, lorenzo.pieralisi, abailon, arnd, linux-kernel,
	linux-arm-kernel, linux-arm-msm, georgi.djakov

This patch introduces a new API to gather requirements and configure the
interconnect buses across the entire chipset to fit the current demand.

The API uses a consumer/provider-based model, where the providers are the
interconnect buses and the consumers could be various drivers. The consumers
request interconnect resources (a path) between endpoints and set the desired
constraints on this data flow path. The providers receive requests from
consumers and aggregate these requests for all master-slave pairs on that
path. Then the providers configure each node participating in the topology
according to the requested data flow path, physical links and constraints.
The topology can be complicated and multi-tiered and is SoC specific.
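
For reference, a rough provider-side sketch of registering with this API
(hypothetical callbacks and node ids, error handling trimmed; not taken from
any real platform driver):

#include <linux/device.h>
#include <linux/interconnect-provider.h>
#include <linux/kernel.h>
#include <linux/platform_device.h>
#include <linux/slab.h>

static int example_set(struct icc_node *src, struct icc_node *dst)
{
	/* program the hardware using dst->avg_bw and dst->peak_bw */
	return 0;
}

static int example_aggregate(struct icc_node *node, u32 avg_bw, u32 peak_bw,
			     u32 *agg_avg, u32 *agg_peak)
{
	/* sum the average bandwidth, keep the highest peak bandwidth */
	*agg_avg += avg_bw;
	*agg_peak = max(*agg_peak, peak_bw);
	return 0;
}

static int example_probe(struct platform_device *pdev)
{
	struct icc_provider *provider;
	struct icc_node *a, *b;
	int ret;

	provider = devm_kzalloc(&pdev->dev, sizeof(*provider), GFP_KERNEL);
	if (!provider)
		return -ENOMEM;

	provider->dev = &pdev->dev;
	provider->set = example_set;
	provider->aggregate = example_aggregate;

	ret = icc_provider_add(provider);
	if (ret)
		return ret;

	a = icc_node_create(1);		/* hypothetical node ids */
	b = icc_node_create(2);
	icc_node_add(a, provider);
	icc_node_add(b, provider);

	return icc_link_create(a, 2);	/* link node 1 -> node 2 */
}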

Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
---
 Documentation/interconnect/interconnect.rst |  96 ++++
 drivers/Kconfig                             |   2 +
 drivers/Makefile                            |   1 +
 drivers/interconnect/Kconfig                |  10 +
 drivers/interconnect/Makefile               |   2 +
 drivers/interconnect/core.c                 | 569 ++++++++++++++++++++
 include/linux/interconnect-provider.h       | 125 +++++
 include/linux/interconnect.h                |  42 ++
 8 files changed, 847 insertions(+)
 create mode 100644 Documentation/interconnect/interconnect.rst
 create mode 100644 drivers/interconnect/Kconfig
 create mode 100644 drivers/interconnect/Makefile
 create mode 100644 drivers/interconnect/core.c
 create mode 100644 include/linux/interconnect-provider.h
 create mode 100644 include/linux/interconnect.h

diff --git a/Documentation/interconnect/interconnect.rst b/Documentation/interconnect/interconnect.rst
new file mode 100644
index 000000000000..e628881ee218
--- /dev/null
+++ b/Documentation/interconnect/interconnect.rst
@@ -0,0 +1,96 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=====================================
+GENERIC SYSTEM INTERCONNECT SUBSYSTEM
+=====================================
+
+Introduction
+------------
+
+This framework is designed to provide a standard kernel interface to control
+the settings of the interconnects on a SoC. These settings can be throughput,
+latency and priority between multiple interconnected devices or functional
+blocks. This can be controlled dynamically in order to save power or provide
+maximum performance.
+
+The interconnect bus is hardware with configurable parameters, which can be
+set on a data path according to the requests received from various drivers.
+An example of interconnect buses are the interconnects between various
+components or functional blocks in chipsets. There can be multiple
+interconnects on a SoC and they can be multi-tiered.
+
+Below is a simplified diagram of a real-world SoC interconnect bus topology.
+
+::
+
+ +----------------+    +----------------+
+ | HW Accelerator |--->|      M NoC     |<---------------+
+ +----------------+    +----------------+                |
+                         |      |                    +------------+
+  +-----+  +-------------+      V       +------+     |            |
+  | DDR |  |                +--------+  | PCIe |     |            |
+  +-----+  |                | Slaves |  +------+     |            |
+    ^ ^    |                +--------+     |         |   C NoC    |
+    | |    V                               V         |            |
+ +------------------+   +------------------------+   |            |   +-----+
+ |                  |-->|                        |-->|            |-->| CPU |
+ |                  |-->|                        |<--|            |   +-----+
+ |     Mem NoC      |   |         S NoC          |   +------------+
+ |                  |<--|                        |---------+    |
+ |                  |<--|                        |<------+ |    |   +--------+
+ +------------------+   +------------------------+       | |    +-->| Slaves |
+   ^  ^    ^    ^          ^                             | |        +--------+
+   |  |    |    |          |                             | V
+ +------+  |  +-----+   +-----+  +---------+   +----------------+   +--------+
+ | CPUs |  |  | GPU |   | DSP |  | Masters |-->|       P NoC    |-->| Slaves |
+ +------+  |  +-----+   +-----+  +---------+   +----------------+   +--------+
+           |
+       +-------+
+       | Modem |
+       +-------+
+
+Terminology
+-----------
+
+Interconnect provider is the software definition of the interconnect hardware.
+The interconnect providers on the above diagram are M NoC, S NoC, C NoC, P NoC
+and Mem NoC.
+
+Interconnect node is the software definition of the interconnect hardware
+port. Each interconnect provider consists of multiple interconnect nodes,
+which are connected to other SoC components including other interconnect
+providers. The point on the diagram where the CPUs connect to the memory is
+called an interconnect node, which belongs to the Mem NoC interconnect provider.
+
+Interconnect endpoints are the first or the last element of the path. Every
+endpoint is a node, but not every node is an endpoint.
+
+Interconnect path is everything between two endpoints including all the nodes
+that have to be traversed to reach from a source to a destination node. It may
+include multiple master-slave pairs across several interconnect providers.
+
+Interconnect consumers are the entities which make use of the data paths
+exposed by the providers. The consumers send requests to providers to express
+their needs in terms of throughput, latency and priority. Usually the consumers
+are device drivers that send requests based on their needs. An example of a
+consumer is a video decoder that supports various formats and image sizes.
+
+Interconnect providers
+----------------------
+
+Interconnect provider is an entity that implements methods to initialize and
+configure the interconnect bus hardware. The interconnect provider drivers
+should be registered with the interconnect provider core.
+
+The interconnect framework provider API functions are documented in
+.. kernel-doc:: include/linux/interconnect-provider.h
+
+Interconnect consumers
+----------------------
+
+Interconnect consumers are the clients which use the interconnect APIs to
+get paths between endpoints and set their bandwidth/latency/QoS requirements
+for these interconnect paths.
+
+The interconnect framework consumer API functions are documented in
+.. kernel-doc:: include/linux/interconnect.h
diff --git a/drivers/Kconfig b/drivers/Kconfig
index 95b9ccc08165..3ed6ede9d021 100644
--- a/drivers/Kconfig
+++ b/drivers/Kconfig
@@ -217,4 +217,6 @@ source "drivers/siox/Kconfig"
 
 source "drivers/slimbus/Kconfig"
 
+source "drivers/interconnect/Kconfig"
+
 endmenu
diff --git a/drivers/Makefile b/drivers/Makefile
index 24cd47014657..0cca95740d9b 100644
--- a/drivers/Makefile
+++ b/drivers/Makefile
@@ -185,3 +185,4 @@ obj-$(CONFIG_TEE)		+= tee/
 obj-$(CONFIG_MULTIPLEXER)	+= mux/
 obj-$(CONFIG_UNISYS_VISORBUS)	+= visorbus/
 obj-$(CONFIG_SIOX)		+= siox/
+obj-$(CONFIG_INTERCONNECT)	+= interconnect/
diff --git a/drivers/interconnect/Kconfig b/drivers/interconnect/Kconfig
new file mode 100644
index 000000000000..a261c7d41deb
--- /dev/null
+++ b/drivers/interconnect/Kconfig
@@ -0,0 +1,10 @@
+menuconfig INTERCONNECT
+	tristate "On-Chip Interconnect management support"
+	help
+	  Support for management of the on-chip interconnects.
+
+	  This framework is designed to provide a generic interface for
+	  managing the interconnects in a SoC.
+
+	  If unsure, say no.
+
diff --git a/drivers/interconnect/Makefile b/drivers/interconnect/Makefile
new file mode 100644
index 000000000000..97fca2e09d24
--- /dev/null
+++ b/drivers/interconnect/Makefile
@@ -0,0 +1,2 @@
+# SPDX-License-Identifier: GPL-2.0
+obj-$(CONFIG_INTERCONNECT)		+= core.o
diff --git a/drivers/interconnect/core.c b/drivers/interconnect/core.c
new file mode 100644
index 000000000000..1de4997bc75a
--- /dev/null
+++ b/drivers/interconnect/core.c
@@ -0,0 +1,569 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Interconnect framework core driver
+ *
+ * Copyright (c) 2018, Linaro Ltd.
+ * Author: Georgi Djakov <georgi.djakov@linaro.org>
+ */
+
+#include <linux/device.h>
+#include <linux/idr.h>
+#include <linux/init.h>
+#include <linux/interconnect.h>
+#include <linux/interconnect-provider.h>
+#include <linux/list.h>
+#include <linux/module.h>
+#include <linux/mutex.h>
+#include <linux/slab.h>
+#include <linux/overflow.h>
+
+static DEFINE_IDR(icc_idr);
+static LIST_HEAD(icc_provider_list);
+static DEFINE_MUTEX(icc_lock);
+
+/**
+ * struct icc_req - constraints that are attached to each node
+ *
+ * @req_node: entry in list of requests for the particular @node
+ * @node: the interconnect node to which this constraint applies
+ * @dev: reference to the device that sets the constraints
+ * @avg_bw: an integer describing the average bandwidth in kbps
+ * @peak_bw: an integer describing the peak bandwidth in kbps
+ */
+struct icc_req {
+	struct hlist_node req_node;
+	struct icc_node *node;
+	struct device *dev;
+	u32 avg_bw;
+	u32 peak_bw;
+};
+
+/**
+ * struct icc_path - interconnect path structure
+ * @num_nodes: number of hops (nodes)
+ * @reqs: array of the requests applicable to this path of nodes
+ */
+struct icc_path {
+	size_t num_nodes;
+	struct icc_req reqs[];
+};
+
+static struct icc_node *node_find(const int id)
+{
+	return idr_find(&icc_idr, id);
+}
+
+static struct icc_path *path_init(struct device *dev, struct icc_node *dst,
+				  ssize_t num_nodes)
+{
+	struct icc_node *node = dst;
+	struct icc_path *path;
+	int i;
+
+	path = kzalloc(struct_size(path, reqs, num_nodes), GFP_KERNEL);
+	if (!path)
+		return ERR_PTR(-ENOMEM);
+
+	path->num_nodes = num_nodes;
+
+	for (i = num_nodes - 1; i >= 0; i--) {
+		node->provider->users++;
+		hlist_add_head(&path->reqs[i].req_node, &node->req_list);
+		path->reqs[i].node = node;
+		path->reqs[i].dev = dev;
+		/* reference to previous node was saved during path traversal */
+		node = node->reverse;
+	}
+
+	return path;
+}
+
+static struct icc_path *path_find(struct device *dev, struct icc_node *src,
+				  struct icc_node *dst)
+{
+	struct icc_path *path = ERR_PTR(-EPROBE_DEFER);
+	struct icc_node *n, *node = NULL;
+	struct list_head traverse_list;
+	struct list_head edge_list;
+	struct list_head visited_list;
+	size_t i, depth = 1;
+	bool found = false;
+
+	INIT_LIST_HEAD(&traverse_list);
+	INIT_LIST_HEAD(&edge_list);
+	INIT_LIST_HEAD(&visited_list);
+
+	list_add(&src->search_list, &traverse_list);
+	src->reverse = NULL;
+
+	do {
+		list_for_each_entry_safe(node, n, &traverse_list, search_list) {
+			if (node == dst) {
+				found = true;
+				list_splice_init(&edge_list, &visited_list);
+				list_splice_init(&traverse_list, &visited_list);
+				break;
+			}
+			for (i = 0; i < node->num_links; i++) {
+				struct icc_node *tmp = node->links[i];
+
+				if (!tmp) {
+					path = ERR_PTR(-ENOENT);
+					goto out;
+				}
+
+				if (tmp->is_traversed)
+					continue;
+
+				tmp->is_traversed = true;
+				tmp->reverse = node;
+				list_add_tail(&tmp->search_list, &edge_list);
+			}
+		}
+
+		if (found)
+			break;
+
+		list_splice_init(&traverse_list, &visited_list);
+		list_splice_init(&edge_list, &traverse_list);
+
+		/* count the hops including the source */
+		depth++;
+
+	} while (!list_empty(&traverse_list));
+
+out:
+
+	/* reset the traversed state */
+	list_for_each_entry_reverse(n, &visited_list, search_list)
+		n->is_traversed = false;
+
+	if (found)
+		path = path_init(dev, dst, depth);
+
+	return path;
+}
+
+/*
+ * We want the path to honor all bandwidth requests, so the average and peak
+ * bandwidth requirements from each consumer are aggregated at each node.
+ */
+
+static int aggregate_requests(struct icc_node *node)
+{
+	struct icc_provider *p = node->provider;
+	struct icc_req *r;
+
+	node->avg_bw = 0;
+	node->peak_bw = 0;
+
+	hlist_for_each_entry(r, &node->req_list, req_node)
+		p->aggregate(node, r->avg_bw, r->peak_bw,
+			     &node->avg_bw, &node->peak_bw);
+
+	return 0;
+}
+
+static int apply_constraints(struct icc_path *path)
+{
+	struct icc_node *next, *prev = NULL;
+	int ret = 0;
+	int i;
+
+	for (i = 0; i < path->num_nodes; i++, prev = next) {
+		struct icc_provider *p;
+
+		next = path->reqs[i].node;
+		/*
+		 * Both endpoints should be valid master-slave pairs of the
+		 * same interconnect provider that will be configured.
+		 */
+		if (!prev || next->provider != prev->provider)
+			continue;
+
+		p = next->provider;
+
+		/* set the constraints */
+		ret = p->set(prev, next);
+		if (ret)
+			goto out;
+	}
+out:
+	return ret;
+}
+
+/**
+ * icc_set() - set constraints on an interconnect path between two endpoints
+ * @path: reference to the path returned by icc_get()
+ * @avg_bw: average bandwidth in kbps
+ * @peak_bw: peak bandwidth in kbps
+ *
+ * This function is used by an interconnect consumer to express its own needs
+ * in terms of bandwidth for a previously requested path between two endpoints.
+ * The requests are aggregated and each node is updated accordingly. The entire
+ * path is locked by a mutex to ensure that the set() is completed.
+ * The @path can be NULL when the "interconnects" DT property is missing,
+ * which means that no constraints will be set.
+ *
+ * Returns 0 on success, or an appropriate error code otherwise.
+ */
+int icc_set(struct icc_path *path, u32 avg_bw, u32 peak_bw)
+{
+	struct icc_node *node;
+	struct icc_provider *p;
+	size_t i;
+	int ret;
+
+	if (!path)
+		return 0;
+
+	mutex_lock(&icc_lock);
+
+	for (i = 0; i < path->num_nodes; i++) {
+		node = path->reqs[i].node;
+		p = node->provider;
+
+		/* update the consumer request for this path */
+		path->reqs[i].avg_bw = avg_bw;
+		path->reqs[i].peak_bw = peak_bw;
+
+		/* aggregate requests for this node */
+		aggregate_requests(node);
+	}
+
+	ret = apply_constraints(path);
+	if (ret)
+		pr_err("interconnect: error applying constraints (%d)", ret);
+
+	mutex_unlock(&icc_lock);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(icc_set);
+
+/**
+ * icc_get() - return a handle for path between two endpoints
+ * @dev: the device requesting the path
+ * @src_id: source device port id
+ * @dst_id: destination device port id
+ *
+ * This function will search for a path between two endpoints and return an
+ * icc_path handle on success. Use icc_put() to release
+ * the constraints when they are no longer needed.
+ *
+ * Return: icc_path pointer on success, or ERR_PTR() on error
+ */
+struct icc_path *icc_get(struct device *dev, const int src_id, const int dst_id)
+{
+	struct icc_node *src, *dst;
+	struct icc_path *path = ERR_PTR(-EPROBE_DEFER);
+
+	mutex_lock(&icc_lock);
+
+	src = node_find(src_id);
+	if (!src)
+		goto out;
+
+	dst = node_find(dst_id);
+	if (!dst)
+		goto out;
+
+	path = path_find(dev, src, dst);
+	if (IS_ERR(path))
+		dev_err(dev, "%s: invalid path=%ld\n", __func__, PTR_ERR(path));
+
+out:
+	mutex_unlock(&icc_lock);
+	return path;
+}
+EXPORT_SYMBOL_GPL(icc_get);
+
+/**
+ * icc_put() - release the reference to the icc_path
+ * @path: interconnect path
+ *
+ * Use this function to release the constraints on a path when the path is
+ * no longer needed. The constraints will be re-aggregated.
+ */
+void icc_put(struct icc_path *path)
+{
+	struct icc_node *node;
+	size_t i;
+	int ret;
+
+	if (!path || WARN_ON(IS_ERR(path)))
+		return;
+
+	ret = icc_set(path, 0, 0);
+	if (ret)
+		pr_err("%s: error (%d)\n", __func__, ret);
+
+	mutex_lock(&icc_lock);
+	for (i = 0; i < path->num_nodes; i++) {
+		node = path->reqs[i].node;
+		hlist_del(&path->reqs[i].req_node);
+		if (!WARN_ON(!node->provider->users))
+			node->provider->users--;
+	}
+	mutex_unlock(&icc_lock);
+
+	kfree(path);
+}
+EXPORT_SYMBOL_GPL(icc_put);
+
+static struct icc_node *icc_node_create_nolock(int id)
+{
+	struct icc_node *node;
+
+	/* check if node already exists */
+	node = node_find(id);
+	if (node)
+		goto out;
+
+	node = kzalloc(sizeof(*node), GFP_KERNEL);
+	if (!node) {
+		node = ERR_PTR(-ENOMEM);
+		goto out;
+	}
+
+	id = idr_alloc(&icc_idr, node, id, id + 1, GFP_KERNEL);
+	if (WARN(id < 0, "couldn't get idr")) {
+		kfree(node);
+		node = ERR_PTR(id);
+		goto out;
+	}
+
+	node->id = id;
+
+out:
+	return node;
+}
+
+/**
+ * icc_node_create() - create a node
+ * @id: node id
+ *
+ * Return: icc_node pointer on success, or ERR_PTR() on error
+ */
+struct icc_node *icc_node_create(int id)
+{
+	struct icc_node *node;
+
+	mutex_lock(&icc_lock);
+
+	node = icc_node_create_nolock(id);
+
+	mutex_unlock(&icc_lock);
+
+	return node;
+}
+EXPORT_SYMBOL_GPL(icc_node_create);
+
+/**
+ * icc_node_destroy() - destroy a node
+ * @id: node id
+ *
+ */
+void icc_node_destroy(int id)
+{
+	struct icc_node *node;
+
+	mutex_lock(&icc_lock);
+
+	node = node_find(id);
+	if (node) {
+		idr_remove(&icc_idr, node->id);
+		WARN_ON(!hlist_empty(&node->req_list));
+	}
+
+	mutex_unlock(&icc_lock);
+
+	kfree(node);
+}
+EXPORT_SYMBOL_GPL(icc_node_destroy);
+
+/**
+ * icc_link_create() - create a link between two nodes
+ * @node: pointer to the source node
+ * @dst_id: destination node id
+ *
+ * Create a link between two nodes. The nodes might belong to different
+ * interconnect providers and the @dst_id node might not exist (if the
+ * provider driver has not probed yet). So just create the @dst_id node
+ * and when the actual provider driver is probed, the rest of the node
+ * data is filled.
+ *
+ * Return: 0 on success, or an error code otherwise
+ */
+int icc_link_create(struct icc_node *node, const int dst_id)
+{
+	struct icc_node *dst;
+	struct icc_node **new;
+	int ret = 0;
+
+	if (!node->provider)
+		return -EINVAL;
+
+	mutex_lock(&icc_lock);
+
+	dst = node_find(dst_id);
+	if (!dst) {
+		dst = icc_node_create_nolock(dst_id);
+
+		if (IS_ERR(dst)) {
+			ret = PTR_ERR(dst);
+			goto out;
+		}
+	}
+
+	new = krealloc(node->links,
+		       (node->num_links + 1) * sizeof(*node->links),
+		       GFP_KERNEL);
+	if (!new) {
+		ret = -ENOMEM;
+		goto out;
+	}
+
+	node->links = new;
+	node->links[node->num_links++] = dst;
+
+out:
+	mutex_unlock(&icc_lock);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(icc_link_create);
+
+/**
+ * icc_link_destroy() - destroy a link between two nodes
+ * @src: pointer to source node
+ * @dst: pointer to destination node
+ *
+ * Return: 0 on success, or an error code otherwise
+ */
+int icc_link_destroy(struct icc_node *src, struct icc_node *dst)
+{
+	struct icc_node **new;
+	size_t slot;
+	int ret = 0;
+
+	if (IS_ERR_OR_NULL(src))
+		return -EINVAL;
+
+	if (IS_ERR_OR_NULL(dst))
+		return -EINVAL;
+
+	mutex_lock(&icc_lock);
+
+	for (slot = 0; slot < src->num_links; slot++)
+		if (src->links[slot] == dst)
+			break;
+
+	if (WARN_ON(slot == src->num_links)) {
+		ret = -ENXIO;
+		goto out;
+	}
+
+	src->links[slot] = src->links[--src->num_links];
+
+	new = krealloc(src->links,
+		       (src->num_links) * sizeof(*src->links),
+		       GFP_KERNEL);
+	if (new)
+		src->links = new;
+
+out:
+	mutex_unlock(&icc_lock);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(icc_link_destroy);
+
+/**
+ * icc_node_add() - add interconnect node to interconnect provider
+ * @node: pointer to the interconnect node
+ * @provider: pointer to the interconnect provider
+ *
+ */
+void icc_node_add(struct icc_node *node, struct icc_provider *provider)
+{
+	mutex_lock(&icc_lock);
+
+	node->provider = provider;
+	list_add(&node->node_list, &provider->nodes);
+
+	mutex_unlock(&icc_lock);
+}
+EXPORT_SYMBOL_GPL(icc_node_add);
+
+/**
+ * icc_node_del() - delete interconnect node from interconnect provider
+ * @node: pointer to the interconnect node
+ *
+ */
+void icc_node_del(struct icc_node *node)
+{
+	mutex_lock(&icc_lock);
+
+	list_del(&node->node_list);
+
+	mutex_unlock(&icc_lock);
+}
+EXPORT_SYMBOL_GPL(icc_node_del);
+
+/**
+ * icc_provider_add() - add a new interconnect provider
+ * @provider: the interconnect provider that will be added into topology
+ *
+ * Return: 0 on success, or an error code otherwise
+ */
+int icc_provider_add(struct icc_provider *provider)
+{
+	if (WARN_ON(!provider->set))
+		return -EINVAL;
+
+	mutex_lock(&icc_lock);
+
+	INIT_LIST_HEAD(&provider->nodes);
+	list_add(&provider->provider_list, &icc_provider_list);
+
+	mutex_unlock(&icc_lock);
+
+	dev_dbg(provider->dev, "interconnect provider added to topology\n");
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(icc_provider_add);
+
+/**
+ * icc_provider_del() - delete previously added interconnect provider
+ * @provider: the interconnect provider that will be removed from topology
+ *
+ * Return: 0 on success, or an error code otherwise
+ */
+int icc_provider_del(struct icc_provider *provider)
+{
+	mutex_lock(&icc_lock);
+	if (provider->users) {
+		pr_warn("interconnect provider still has %d users\n",
+			provider->users);
+		mutex_unlock(&icc_lock);
+		return -EBUSY;
+	}
+
+	if (!list_empty(&provider->nodes)) {
+		pr_warn("interconnect provider still has nodes\n");
+		mutex_unlock(&icc_lock);
+		return -EBUSY;
+	}
+
+	list_del(&provider->provider_list);
+	mutex_unlock(&icc_lock);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(icc_provider_del);
+
+MODULE_AUTHOR("Georgi Djakov <georgi.djakov@linaro.org>");
+MODULE_DESCRIPTION("Interconnect Driver Core");
+MODULE_LICENSE("GPL v2");
diff --git a/include/linux/interconnect-provider.h b/include/linux/interconnect-provider.h
new file mode 100644
index 000000000000..29b16e33fbaa
--- /dev/null
+++ b/include/linux/interconnect-provider.h
@@ -0,0 +1,125 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2018, Linaro Ltd.
+ * Author: Georgi Djakov <georgi.djakov@linaro.org>
+ */
+
+#ifndef _LINUX_INTERCONNECT_PROVIDER_H
+#define _LINUX_INTERCONNECT_PROVIDER_H
+
+#include <linux/interconnect.h>
+
+#define icc_units_to_bps(bw)  ((bw) * 1000ULL)
+
+struct icc_node;
+
+/**
+ * struct icc_provider - interconnect provider (controller) entity that might
+ * provide multiple interconnect controls
+ *
+ * @provider_list: list of the registered interconnect providers
+ * @nodes: internal list of the interconnect provider nodes
+ * @set: pointer to device specific set operation function
+ * @aggregate: pointer to device specific aggregate operation function
+ * @dev: the device this interconnect provider belongs to
+ * @users: count of active users
+ * @data: pointer to private data
+ */
+struct icc_provider {
+	struct list_head	provider_list;
+	struct list_head	nodes;
+	int (*set)(struct icc_node *src, struct icc_node *dst);
+	int (*aggregate)(struct icc_node *node, u32 avg_bw, u32 peak_bw,
+			 u32 *agg_avg, u32 *agg_peak);
+	struct device		*dev;
+	int			users;
+	void			*data;
+};
+
+/**
+ * struct icc_node - entity that is part of the interconnect topology
+ *
+ * @id: platform specific node id
+ * @name: node name used in debugfs
+ * @links: a list of targets pointing to where we can go next when traversing
+ * @num_links: number of links to other interconnect nodes
+ * @provider: points to the interconnect provider of this node
+ * @node_list: the list entry in the parent provider's "nodes" list
+ * @search_list: list used when walking the nodes graph
+ * @reverse: pointer to previous node when walking the nodes graph
+ * @is_traversed: flag that is used when walking the nodes graph
+ * @req_list: a list of QoS constraint requests associated with this node
+ * @avg_bw: aggregated value of average bandwidth requests from all consumers
+ * @peak_bw: aggregated value of peak bandwidth requests from all consumers
+ * @data: pointer to private data
+ */
+struct icc_node {
+	int			id;
+	const char              *name;
+	struct icc_node		**links;
+	size_t			num_links;
+
+	struct icc_provider	*provider;
+	struct list_head	node_list;
+	struct list_head	search_list;
+	struct icc_node		*reverse;
+	bool			is_traversed;
+	struct hlist_head	req_list;
+	u32			avg_bw;
+	u32			peak_bw;
+	void			*data;
+};
+
+#if IS_ENABLED(CONFIG_INTERCONNECT)
+
+struct icc_node *icc_node_create(int id);
+void icc_node_destroy(int id);
+int icc_link_create(struct icc_node *node, const int dst_id);
+int icc_link_destroy(struct icc_node *src, struct icc_node *dst);
+void icc_node_add(struct icc_node *node, struct icc_provider *provider);
+void icc_node_del(struct icc_node *node);
+int icc_provider_add(struct icc_provider *provider);
+int icc_provider_del(struct icc_provider *provider);
+
+#else
+
+static inline struct icc_node *icc_node_create(int id)
+{
+	return ERR_PTR(-ENOTSUPP);
+}
+
+static inline void icc_node_destroy(int id)
+{
+}
+
+static inline int icc_link_create(struct icc_node *node, const int dst_id)
+{
+	return -ENOTSUPP;
+}
+
+static inline int icc_link_destroy(struct icc_node *src, struct icc_node *dst)
+{
+	return -ENOTSUPP;
+}
+
+static inline void icc_node_add(struct icc_node *node, struct icc_provider *provider)
+{
+}
+
+static inline void icc_node_del(struct icc_node *node)
+{
+}
+
+static inline int icc_provider_add(struct icc_provider *provider)
+{
+	return -ENOTSUPP;
+}
+
+static inline int icc_provider_del(struct icc_provider *provider)
+{
+	return -ENOTSUPP;
+}
+
+#endif /* CONFIG_INTERCONNECT */
+
+#endif /* _LINUX_INTERCONNECT_PROVIDER_H */
diff --git a/include/linux/interconnect.h b/include/linux/interconnect.h
new file mode 100644
index 000000000000..593215371fd6
--- /dev/null
+++ b/include/linux/interconnect.h
@@ -0,0 +1,42 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2018, Linaro Ltd.
+ * Author: Georgi Djakov <georgi.djakov@linaro.org>
+ */
+
+#ifndef _LINUX_INTERCONNECT_H
+#define _LINUX_INTERCONNECT_H
+
+#include <linux/mutex.h>
+#include <linux/types.h>
+
+struct icc_path;
+struct device;
+
+#if IS_ENABLED(CONFIG_INTERCONNECT)
+
+struct icc_path *icc_get(struct device *dev, const int src_id,
+			 const int dst_id);
+void icc_put(struct icc_path *path);
+int icc_set(struct icc_path *path, u32 avg_bw, u32 peak_bw);
+
+#else
+
+static inline struct icc_path *icc_get(struct device *dev, const int src_id,
+				       const int dst_id)
+{
+	return NULL;
+}
+
+static inline void icc_put(struct icc_path *path)
+{
+}
+
+static inline int icc_set(struct icc_path *path, u32 avg_bw, u32 peak_bw)
+{
+	return 0;
+}
+
+#endif /* CONFIG_INTERCONNECT */
+
+#endif /* _LINUX_INTERCONNECT_H */


* [PATCH v7 2/8] dt-bindings: Introduce interconnect provider bindings
  2018-07-31 16:13 [PATCH v7 0/8] Introduce on-chip interconnect API Georgi Djakov
  2018-07-31 16:13 ` [PATCH v7 1/8] interconnect: Add generic " Georgi Djakov
@ 2018-07-31 16:13 ` Georgi Djakov
  2018-08-02 21:02   ` Rob Herring
  2018-07-31 16:13 ` [PATCH v7 3/8] interconnect: Add debugfs support Georgi Djakov
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 30+ messages in thread
From: Georgi Djakov @ 2018-07-31 16:13 UTC (permalink / raw)
  To: linux-pm, gregkh
  Cc: rjw, robh+dt, mturquette, khilman, vincent.guittot, skannan,
	bjorn.andersson, amit.kucheria, seansw, daidavid1, evgreen,
	mark.rutland, lorenzo.pieralisi, abailon, arnd, linux-kernel,
	linux-arm-kernel, linux-arm-msm, georgi.djakov

This binding is intended to represent the interconnect hardware present in
some modern SoCs. Currently it consists only of a binding for the
interconnect hardware devices (providers).

Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
---
 .../bindings/interconnect/interconnect.txt    | 33 +++++++++++++++++++
 1 file changed, 33 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/interconnect/interconnect.txt

diff --git a/Documentation/devicetree/bindings/interconnect/interconnect.txt b/Documentation/devicetree/bindings/interconnect/interconnect.txt
new file mode 100644
index 000000000000..6e2b2971b094
--- /dev/null
+++ b/Documentation/devicetree/bindings/interconnect/interconnect.txt
@@ -0,0 +1,33 @@
+Interconnect Provider Device Tree Bindings
+==========================================
+
+The purpose of this document is to define a common set of generic interconnect
+providers/consumers properties.
+
+
+= interconnect providers =
+
+The interconnect provider binding is intended to represent the interconnect
+controllers in the system. Each provider registers a set of interconnect
+nodes, which expose the interconnect related capabilities of the interconnect
+to consumer drivers. These capabilities can be throughput, latency, priority
+etc. The consumer drivers set constraints on an interconnect path (or
+endpoints) depending on the use case. Interconnect providers can also be
+interconnect consumers, such as in the case where two network-on-chip fabrics
+interface directly.
+
+Required properties:
+- compatible : contains the interconnect provider compatible string
+- #interconnect-cells : number of cells in an interconnect specifier needed to
+			encode the interconnect node id
+
+Example:
+
+		snoc: snoc@580000 {
+			compatible = "qcom,msm8916-snoc";
+			#interconnect-cells = <1>;
+			reg = <0x580000 0x14000>;
+			clock-names = "bus_clk", "bus_a_clk";
+			clocks = <&rpmcc RPM_SMD_SNOC_CLK>,
+				 <&rpmcc RPM_SMD_SNOC_A_CLK>;
+		};


* [PATCH v7 3/8] interconnect: Add debugfs support
  2018-07-31 16:13 [PATCH v7 0/8] Introduce on-chip interconnect API Georgi Djakov
  2018-07-31 16:13 ` [PATCH v7 1/8] interconnect: Add generic " Georgi Djakov
  2018-07-31 16:13 ` [PATCH v7 2/8] dt-bindings: Introduce interconnect provider bindings Georgi Djakov
@ 2018-07-31 16:13 ` Georgi Djakov
  2018-08-03 22:59   ` Evan Green
  2018-07-31 16:13 ` [PATCH v7 4/8] interconnect: qcom: Add RPM communication Georgi Djakov
                   ` (4 subsequent siblings)
  7 siblings, 1 reply; 30+ messages in thread
From: Georgi Djakov @ 2018-07-31 16:13 UTC (permalink / raw)
  To: linux-pm, gregkh
  Cc: rjw, robh+dt, mturquette, khilman, vincent.guittot, skannan,
	bjorn.andersson, amit.kucheria, seansw, daidavid1, evgreen,
	mark.rutland, lorenzo.pieralisi, abailon, arnd, linux-kernel,
	linux-arm-kernel, linux-arm-msm, georgi.djakov

Add functionality to provide information about the current constraints
for each node and provider.
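
With this applied, the debugfs summary file lists every node with its
aggregated bandwidth values, followed by the per-consumer requests on it.
The node and device names below are made up; the output would look roughly
like:

 node                                   avg         peak
--------------------------------------------------------
example_node                           1000         2000
    example-consumer-dev               1000         2000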

Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
---
 drivers/interconnect/core.c | 78 +++++++++++++++++++++++++++++++++++++
 1 file changed, 78 insertions(+)

diff --git a/drivers/interconnect/core.c b/drivers/interconnect/core.c
index 1de4997bc75a..9fef180cf77e 100644
--- a/drivers/interconnect/core.c
+++ b/drivers/interconnect/core.c
@@ -6,6 +6,7 @@
  * Author: Georgi Djakov <georgi.djakov@linaro.org>
  */
 
+#include <linux/debugfs.h>
 #include <linux/device.h>
 #include <linux/idr.h>
 #include <linux/init.h>
@@ -16,10 +17,12 @@
 #include <linux/mutex.h>
 #include <linux/slab.h>
 #include <linux/overflow.h>
+#include <linux/uaccess.h>
 
 static DEFINE_IDR(icc_idr);
 static LIST_HEAD(icc_provider_list);
 static DEFINE_MUTEX(icc_lock);
+static struct dentry *icc_debugfs_dir;
 
 /**
  * struct icc_req - constraints that are attached to each node
@@ -48,6 +51,62 @@ struct icc_path {
 	struct icc_req reqs[];
 };
 
+#ifdef CONFIG_DEBUG_FS
+
+static void icc_summary_show_one(struct seq_file *s, struct icc_node *n)
+{
+	if (!n)
+		return;
+
+	seq_printf(s, "%-30s %12d %12d\n",
+		   n->name, n->avg_bw, n->peak_bw);
+}
+
+static int icc_summary_show(struct seq_file *s, void *data)
+{
+	struct icc_provider *provider;
+
+	seq_puts(s, " node                                   avg         peak\n");
+	seq_puts(s, "--------------------------------------------------------\n");
+
+	mutex_lock(&icc_lock);
+
+	list_for_each_entry(provider, &icc_provider_list, provider_list) {
+		struct icc_node *n;
+
+		list_for_each_entry(n, &provider->nodes, node_list) {
+			struct icc_req *r;
+
+			icc_summary_show_one(s, n);
+			hlist_for_each_entry(r, &n->req_list, req_node) {
+				if (!r->dev)
+					continue;
+
+				seq_printf(s, "    %-26s %12d %12d\n",
+					   dev_name(r->dev), r->avg_bw,
+					   r->peak_bw);
+			}
+		}
+	}
+
+	mutex_unlock(&icc_lock);
+
+	return 0;
+}
+
+static int icc_summary_open(struct inode *inode, struct file *file)
+{
+	return single_open(file, icc_summary_show, inode->i_private);
+}
+
+static const struct file_operations icc_summary_fops = {
+	.open		= icc_summary_open,
+	.read		= seq_read,
+	.llseek		= seq_lseek,
+	.release	= single_release,
+};
+#endif
+
 static struct icc_node *node_find(const int id)
 {
 	return idr_find(&icc_idr, id);
@@ -564,6 +623,25 @@ int icc_provider_del(struct icc_provider *provider)
 }
 EXPORT_SYMBOL_GPL(icc_provider_del);
 
+static int __init icc_init(void)
+{
+#ifdef CONFIG_DEBUG_FS
+	icc_debugfs_dir = debugfs_create_dir("interconnect", NULL);
+	debugfs_create_file("interconnect_summary", 0444,
+			    icc_debugfs_dir, NULL, &icc_summary_fops);
+#endif
+	return 0;
+}
+
+static void __exit icc_exit(void)
+{
+#ifdef CONFIG_DEBUG_FS
+	debugfs_remove_recursive(icc_debugfs_dir);
+#endif
+}
+module_init(icc_init);
+module_exit(icc_exit);
+
 MODULE_AUTHOR("Georgi Djakov <georgi.djakov@linaro.org>");
 MODULE_DESCRIPTION("Interconnect Driver Core");
 MODULE_LICENSE("GPL v2");


* [PATCH v7 4/8] interconnect: qcom: Add RPM communication
  2018-07-31 16:13 [PATCH v7 0/8] Introduce on-chip interconnect API Georgi Djakov
                   ` (2 preceding siblings ...)
  2018-07-31 16:13 ` [PATCH v7 3/8] interconnect: Add debugfs support Georgi Djakov
@ 2018-07-31 16:13 ` Georgi Djakov
  2018-08-03 22:59   ` Evan Green
  2018-07-31 16:13 ` [PATCH v7 5/8] dt-bindings: interconnect: Document qcom,msm8916 NoC bindings Georgi Djakov
                   ` (3 subsequent siblings)
  7 siblings, 1 reply; 30+ messages in thread
From: Georgi Djakov @ 2018-07-31 16:13 UTC (permalink / raw)
  To: linux-pm, gregkh
  Cc: rjw, robh+dt, mturquette, khilman, vincent.guittot, skannan,
	bjorn.andersson, amit.kucheria, seansw, daidavid1, evgreen,
	mark.rutland, lorenzo.pieralisi, abailon, arnd, linux-kernel,
	linux-arm-kernel, linux-arm-msm, georgi.djakov

On some Qualcomm SoCs, there is a remote processor, which controls some of
the Network-On-Chip interconnect resources. Other CPUs express their needs
by communicating with this processor. Add a driver to handle communication
with this remote processor.
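
A rough illustration of how a provider driver could use this wrapper to
forward an aggregated bandwidth value to the RPM (the context, resource type
and id values are platform-specific assumptions, not taken from this patch):

#include "smd-rpm.h"

static int example_send_bw(int ctx, int rsc_type, int rpm_id, u32 bw)
{
	if (!qcom_icc_rpm_smd_available())
		return -ENODEV;

	/* bw is the aggregated bandwidth value that the RPM expects */
	return qcom_icc_rpm_smd_send(ctx, rsc_type, rpm_id, bw);
}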

Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
---
 .../bindings/interconnect/qcom-smd.txt        | 32 +++++++
 drivers/interconnect/qcom/Kconfig             | 14 +++
 drivers/interconnect/qcom/Makefile            |  2 +
 drivers/interconnect/qcom/smd-rpm.c           | 91 +++++++++++++++++++
 drivers/interconnect/qcom/smd-rpm.h           | 16 ++++
 5 files changed, 155 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/interconnect/qcom-smd.txt
 create mode 100644 drivers/interconnect/qcom/Kconfig
 create mode 100644 drivers/interconnect/qcom/Makefile
 create mode 100644 drivers/interconnect/qcom/smd-rpm.c
 create mode 100644 drivers/interconnect/qcom/smd-rpm.h

diff --git a/Documentation/devicetree/bindings/interconnect/qcom-smd.txt b/Documentation/devicetree/bindings/interconnect/qcom-smd.txt
new file mode 100644
index 000000000000..88a5aeb50935
--- /dev/null
+++ b/Documentation/devicetree/bindings/interconnect/qcom-smd.txt
@@ -0,0 +1,32 @@
+Qualcomm SMD-RPM interconnect driver binding
+------------------------------------------------
+The RPM (Resource Power Manager) is a dedicated hardware engine
+for managing the shared SoC resources in order to keep the lowest
+power profile. It communicates with other hardware subsystems via
+the shared memory driver (SMD) back-end and accepts requests for
+various resources.
+
+Required properties :
+- compatible : shall contain only one of the following:
+			"qcom,interconnect-smd-rpm"
+
+Example:
+	smd {
+		compatible = "qcom,smd";
+
+		rpm {
+			interrupts = <0 168 1>;
+			qcom,ipc = <&apcs 8 0>;
+			qcom,smd-edge = <15>;
+
+			rpm_requests {
+				compatible = "qcom,rpm-msm8916";
+				qcom,smd-channels = "rpm_requests";
+
+				interconnect-smd-rpm {
+					compatible = "qcom,interconnect-smd-rpm";
+				};
+
+			};
+		};
+	};
diff --git a/drivers/interconnect/qcom/Kconfig b/drivers/interconnect/qcom/Kconfig
new file mode 100644
index 000000000000..812e9464765a
--- /dev/null
+++ b/drivers/interconnect/qcom/Kconfig
@@ -0,0 +1,14 @@
+config INTERCONNECT_QCOM
+	bool "Qualcomm Network-on-Chip interconnect drivers"
+	depends on INTERCONNECT
+	depends on ARCH_QCOM || COMPILE_TEST
+	help
+	  Support for Qualcomm's Network-on-Chip interconnect hardware.
+
+config INTERCONNECT_QCOM_SMD_RPM
+	tristate "Qualcomm SMD RPM interconnect driver"
+	depends on INTERCONNECT_QCOM
+	depends on QCOM_SMD_RPM
+	help
+	  This is a driver for communicating interconnect related configuration
+	  details with a remote processor (RPM) on Qualcomm platforms.
diff --git a/drivers/interconnect/qcom/Makefile b/drivers/interconnect/qcom/Makefile
new file mode 100644
index 000000000000..5b65e4011b59
--- /dev/null
+++ b/drivers/interconnect/qcom/Makefile
@@ -0,0 +1,2 @@
+# SPDX-License-Identifier: GPL-2.0
+obj-$(CONFIG_INTERCONNECT_QCOM_SMD_RPM) += smd-rpm.o
diff --git a/drivers/interconnect/qcom/smd-rpm.c b/drivers/interconnect/qcom/smd-rpm.c
new file mode 100644
index 000000000000..48b7a2a6eb84
--- /dev/null
+++ b/drivers/interconnect/qcom/smd-rpm.c
@@ -0,0 +1,91 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * RPM over SMD communication wrapper for interconnects
+ *
+ * Copyright (C) 2018 Linaro Ltd
+ * Author: Georgi Djakov <georgi.djakov@linaro.org>
+ */
+
+#include <linux/interconnect-provider.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/of_platform.h>
+#include <linux/platform_device.h>
+#include <linux/soc/qcom/smd-rpm.h>
+#include "smd-rpm.h"
+
+#define	RPM_KEY_BW	0x00007762
+
+static struct qcom_icc_rpm {
+	struct qcom_smd_rpm *rpm;
+} icc_rpm_smd;
+
+struct icc_rpm_smd_req {
+	__le32 key;
+	__le32 nbytes;
+	__le32 value;
+};
+
+bool qcom_icc_rpm_smd_available(void)
+{
+	if (!icc_rpm_smd.rpm)
+		return false;
+
+	return true;
+}
+EXPORT_SYMBOL_GPL(qcom_icc_rpm_smd_available);
+
+int qcom_icc_rpm_smd_send(int ctx, int rsc_type, int id, u32 val)
+{
+	struct icc_rpm_smd_req req = {
+		.key = cpu_to_le32(RPM_KEY_BW),
+		.nbytes = cpu_to_le32(sizeof(u32)),
+		.value = cpu_to_le32(val),
+	};
+
+	return qcom_rpm_smd_write(icc_rpm_smd.rpm, ctx, rsc_type, id, &req,
+				  sizeof(req));
+}
+EXPORT_SYMBOL_GPL(qcom_icc_rpm_smd_send);
+
+static int qcom_icc_rpm_smd_probe(struct platform_device *pdev)
+{
+	icc_rpm_smd.rpm = dev_get_drvdata(pdev->dev.parent);
+	if (!icc_rpm_smd.rpm) {
+		dev_err(&pdev->dev, "unable to retrieve handle to RPM\n");
+		return -ENODEV;
+	}
+
+	return 0;
+}
+
+static const struct of_device_id qcom_icc_rpm_smd_dt_match[] = {
+	{ .compatible = "qcom,interconnect-smd-rpm", },
+	{ },
+};
+
+MODULE_DEVICE_TABLE(of, qcom_icc_rpm_smd_dt_match);
+
+static struct platform_driver qcom_interconnect_rpm_smd_driver = {
+	.driver = {
+		.name		= "qcom-interconnect-smd-rpm",
+		.of_match_table	= qcom_icc_rpm_smd_dt_match,
+	},
+	.probe = qcom_icc_rpm_smd_probe,
+};
+
+static int __init rpm_smd_interconnect_init(void)
+{
+	return platform_driver_register(&qcom_interconnect_rpm_smd_driver);
+}
+subsys_initcall(rpm_smd_interconnect_init);
+
+static void __exit rpm_smd_interconnect_exit(void)
+{
+	platform_driver_unregister(&qcom_interconnect_rpm_smd_driver);
+}
+module_exit(rpm_smd_interconnect_exit);
+
+MODULE_AUTHOR("Georgi Djakov <georgi.djakov@linaro.org>");
+MODULE_DESCRIPTION("Qualcomm SMD RPM interconnect driver");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/interconnect/qcom/smd-rpm.h b/drivers/interconnect/qcom/smd-rpm.h
new file mode 100644
index 000000000000..3b1c347475ff
--- /dev/null
+++ b/drivers/interconnect/qcom/smd-rpm.h
@@ -0,0 +1,16 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2018, Linaro Ltd.
+ * Author: Georgi Djakov <georgi.djakov@linaro.org>
+ */
+
+#ifndef __DRIVERS_INTERCONNECT_QCOM_SMD_RPM_H
+#define __DRIVERS_INTERCONNECT_QCOM_SMD_RPM_H
+
+
+#include <linux/soc/qcom/smd-rpm.h>
+
+bool qcom_icc_rpm_smd_available(void);
+int qcom_icc_rpm_smd_send(int ctx, int rsc_type, int id, u32 val);
+
+#endif


* [PATCH v7 5/8] dt-bindings: interconnect: Document qcom,msm8916 NoC bindings
  2018-07-31 16:13 [PATCH v7 0/8] Introduce on-chip interconnect API Georgi Djakov
                   ` (3 preceding siblings ...)
  2018-07-31 16:13 ` [PATCH v7 4/8] interconnect: qcom: Add RPM communication Georgi Djakov
@ 2018-07-31 16:13 ` Georgi Djakov
  2018-07-31 16:13 ` [PATCH v7 6/8] interconnect: qcom: Add msm8916 interconnect provider driver Georgi Djakov
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 30+ messages in thread
From: Georgi Djakov @ 2018-07-31 16:13 UTC (permalink / raw)
  To: linux-pm, gregkh
  Cc: rjw, robh+dt, mturquette, khilman, vincent.guittot, skannan,
	bjorn.andersson, amit.kucheria, seansw, daidavid1, evgreen,
	mark.rutland, lorenzo.pieralisi, abailon, arnd, linux-kernel,
	linux-arm-kernel, linux-arm-msm, georgi.djakov

Document the device-tree bindings of the Network-On-Chip interconnect
driver for Qualcomm msm8916 platforms.

Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
---
 .../bindings/interconnect/qcom-msm8916.txt    |  39 ++++
 include/dt-bindings/interconnect/qcom.h       | 187 ++++++++++++++++++
 2 files changed, 226 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/interconnect/qcom-msm8916.txt
 create mode 100644 include/dt-bindings/interconnect/qcom.h

diff --git a/Documentation/devicetree/bindings/interconnect/qcom-msm8916.txt b/Documentation/devicetree/bindings/interconnect/qcom-msm8916.txt
new file mode 100644
index 000000000000..f309eaed3d19
--- /dev/null
+++ b/Documentation/devicetree/bindings/interconnect/qcom-msm8916.txt
@@ -0,0 +1,39 @@
+Qualcomm MSM8916 Network-On-Chip interconnect driver binding
+----------------------------------------------------
+
+Required properties :
+- compatible : shall contain only one of the following:
+			"qcom,msm8916-bimc"
+			"qcom,msm8916-pnoc"
+			"qcom,msm8916-snoc"
+- #interconnect-cells : should contain 1
+- reg : shall contain base register location and length
+
+Optional properties :
+- clocks : list of phandles and specifiers to all interconnect bus clocks
+- clock-names : clock names should include both "bus_clk" and "bus_a_clk"
+
+Examples:
+
+		snoc: snoc@580000 {
+			compatible = "qcom,msm8916-snoc";
+			#interconnect-cells = <1>;
+			reg = <0x580000 0x14000>;
+			clock-names = "bus_clk", "bus_a_clk";
+			clocks = <&rpmcc RPM_SMD_SNOC_CLK>, <&rpmcc RPM_SMD_SNOC_A_CLK>;
+		};
+		bimc: bimc@400000 {
+			compatible = "qcom,msm8916-bimc";
+			#interconnect-cells = <1>;
+			reg = <0x400000 0x62000>;
+			clock-names = "bus_clk", "bus_a_clk";
+			clocks = <&rpmcc RPM_SMD_BIMC_CLK>, <&rpmcc RPM_SMD_BIMC_A_CLK>;
+		};
+		pnoc: pnoc@500000 {
+			compatible = "qcom,msm8916-pnoc";
+			#interconnect-cells = <1>;
+			reg = <0x500000 0x11000>;
+			clock-names = "bus_clk", "bus_a_clk";
+			clocks = <&rpmcc RPM_SMD_PCNOC_CLK>, <&rpmcc RPM_SMD_PCNOC_A_CLK>;
+		};
+
diff --git a/include/dt-bindings/interconnect/qcom.h b/include/dt-bindings/interconnect/qcom.h
new file mode 100644
index 000000000000..2177a216e17e
--- /dev/null
+++ b/include/dt-bindings/interconnect/qcom.h
@@ -0,0 +1,187 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Qualcomm interconnect IDs
+ *
+ * Copyright (c) 2018, Linaro Ltd.
+ * Author: Georgi Djakov <georgi.djakov@linaro.org>
+ */
+
+#ifndef __QCOM_INTERCONNECT_IDS_H
+#define __QCOM_INTERCONNECT_IDS_H
+
+#define MASTER_AMPSS_M0 1
+#define MASTER_AMPSS_M1 2
+#define MASTER_LPASS_PROC 11
+#define MASTER_MSS_PROCI 12
+#define MASTER_MSS_PROCD 13
+#define MASTER_MSS_MDM_PORT0 14
+#define MASTER_LPASS 15
+#define MASTER_MDP_PORT0 22
+#define MASTER_MDP_PORT1 23
+#define MASTER_ROTATOR 25
+#define MASTER_GRAPHICS_3D 26
+#define MASTER_VFE 29
+#define MASTER_SPDM 36
+#define MASTER_PCIE 45
+#define MASTER_CRYPTO 47
+#define MASTER_QDSS_BAM 53
+#define MASTER_SNOC_CFG 54
+#define MASTER_CRYPTO_CORE0 55
+#define MASTER_OCMEM_DMA 58
+#define MASTER_QDSS_ETR 60
+#define MASTER_USB3 61
+#define MASTER_JPEG 62
+#define MASTER_VIDEO_P0 63
+#define MASTER_VIDEO_P1 64
+#define MASTER_JPEG_OCMEM 66
+#define MASTER_VIDEO_P0_OCMEM 68
+#define MASTER_DEHR 75
+#define MASTER_QDSS_DAP 76
+#define MASTER_TIC 77
+#define MASTER_SDCC_1 78
+#define MASTER_SDCC_2 81
+#define MASTER_TSIF 82
+#define MASTER_BLSP_2 84
+#define MASTER_BLSP_1 86
+#define MASTER_USB_HS 87
+#define MASTER_IPA 90
+#define MASTER_UFS 95
+#define MASTER_PCIE_1 100
+#define MASTER_USB3_1 101
+#define MASTER_CNOC_MNOC_MMSS_CFG 102
+#define MASTER_CNOC_MNOC_CFG 103
+#define MASTER_TCU_0 104
+#define MASTER_TCU_1 105
+#define MASTER_CPP 106
+
+#define SNOC_MM_INT_0 10000
+#define SNOC_MM_INT_1 10001
+#define SNOC_MM_INT_2 10002
+#define SNOC_MM_INT_BIMC 10003
+#define SNOC_INT_0 10004
+#define SNOC_INT_1 10005
+#define SNOC_INT_BIMC 10006
+#define SNOC_BIMC_0_MAS 10007
+#define SNOC_BIMC_1_MAS 10008
+#define SNOC_QDSS_INT 10009
+#define PNOC_SNOC_MAS 10010
+#define PNOC_SNOC_SLV 10011
+#define PNOC_INT_0 10012
+#define PNOC_INT_1 10013
+#define PNOC_M_0 10014
+#define PNOC_M_1 10015
+#define BIMC_SNOC_MAS 10016
+#define BIMC_SNOC_SLV 10017
+#define PNOC_SLV_0 10018
+#define PNOC_SLV_1 10019
+#define PNOC_SLV_2 10020
+#define PNOC_SLV_3 10021
+#define PNOC_SLV_4 10022
+#define PNOC_SLV_8 10023
+#define PNOC_SLV_9 10024
+#define SNOC_BIMC_0_SLV 10025
+#define SNOC_BIMC_1_SLV 10026
+#define BIMC_MNOC_MAS 10029
+#define BIMC_MNOC_SLV 10030
+#define SNOC_BIMC_MAS 10031
+#define SNOC_BIMC_SLV 10032
+#define CNOC_SNOC_MAS 10033
+#define CNOC_SNOC_SLV 10034
+#define SNOC_CNOC_MAS 10035
+#define SNOC_CNOC_SLV 10036
+#define SNOC_PNOC_MAS 10041
+#define SNOC_PNOC_SLV 10042
+#define BIMC_INT_APPS_EBI 10043
+#define BIMC_INT_APPS_SNOC 10044
+#define SNOC_BIMC_2_MAS 10045
+#define SNOC_BIMC_2_SLV 10046
+#define PNOC_SLV_5 10047
+#define PNOC_SLV_7 10048
+#define PNOC_INT_2 10049
+#define PNOC_INT_3 10050
+#define PNOC_INT_4 10051
+#define PNOC_INT_5 10052
+#define PNOC_INT_6 10053
+#define PNOC_INT_7 10054
+
+#define SLAVE_EBI_CH0 512
+#define SLAVE_AMPSS_L2 514
+#define SYSTEM_SLAVE_FAB_APPS 517
+#define SLAVE_SYSTEM_IMEM 519
+#define SLAVE_MSS 521
+#define SLAVE_LPASS 522
+#define SLAVE_SPDM 533
+#define SLAVE_RPM_MSG_RAM 535
+#define SLAVE_MPM 536
+#define SLAVE_TSIF 575
+#define SLAVE_MSM_PDM 577
+#define SLAVE_MSM_TCSR 579
+#define SLAVE_USB3 583
+#define SLAVE_OCIMEM 585
+#define SLAVE_SERVICE_SNOC 587
+#define SLAVE_QDSS_STM 588
+#define SLAVE_CAMERA_CFG 589
+#define SLAVE_DISPLAY_CFG 590
+#define SLAVE_CPR_CFG 592
+#define SLAVE_MISC_CFG 594
+#define SLAVE_VENUS_CFG 596
+#define SLAVE_GRAPHICS_3D_CFG 598
+#define SLAVE_MMSS_CLK_CFG 599
+#define SLAVE_MNOC_MPU_CFG 601
+#define SLAVE_SERVICE_MNOC 603
+#define SLAVE_OCMEM 604
+#define SLAVE_SDCC_1 606
+#define SLAVE_SDCC_2 608
+#define SLAVE_SDCC_4 609
+#define SLAVE_BLSP_2 611
+#define SLAVE_BLSP_1 613
+#define SLAVE_USB_HS 614
+#define SLAVE_PDM 615
+#define SLAVE_PRNG 618
+#define SLAVE_CLK_CTL 620
+#define SLAVE_SECURITY 622
+#define SLAVE_TCSR 623
+#define SLAVE_TLMM 624
+#define SLAVE_CRYPTO_0_CFG 625
+#define SLAVE_CRYPTO_1_CFG 626
+#define SLAVE_IMEM_CFG 627
+#define SLAVE_MESSAGE_RAM 628
+#define SLAVE_BIMC_CFG 629
+#define SLAVE_BOOT_ROM 630
+#define SLAVE_CNOC_MNOC_MMSS_CFG 631
+#define SLAVE_PMIC_ARB 632
+#define SLAVE_SPDM_WRAPPER 633
+#define SLAVE_DEHR_CFG 634
+#define SLAVE_QDSS_CFG 635
+#define SLAVE_RBCPR_CFG 636
+#define SLAVE_RBCPR_QDSS_APU_CFG 637
+#define SLAVE_SNOC_MPU_CFG 638
+#define SLAVE_CNOC_ONOC_CFG 639
+#define SLAVE_CNOC_MNOC_CFG 640
+#define SLAVE_PNOC_CFG 641
+#define SLAVE_SNOC_CFG 642
+#define SLAVE_EBI1_DLL_CFG 643
+#define SLAVE_PHY_APU_CFG 644
+#define SLAVE_EBI1_PHY_CFG 645
+#define SLAVE_SERVICE_CNOC 646
+#define SLAVE_IPS_CFG 647
+#define SLAVE_QPIC 648
+#define SLAVE_DSI_CFG 649
+#define SLAVE_UFS_CFG 650
+#define SLAVE_RBCPR_CX_CFG 651
+#define SLAVE_RBCPR_MX_CFG 652
+#define SLAVE_PCIE_CFG 653
+#define SLAVE_USB_PHYS_CFG 654
+#define SLAVE_VIDEO_CAP_CFG 655
+#define SLAVE_VPU_CFG 658
+#define SLAVE_OCMEM_GFX 662
+#define SLAVE_CATS_128 663
+#define SLAVE_OCMEM_64 664
+#define SLAVE_PCIE_0 665
+#define SLAVE_PCIE_1 666
+#define SLAVE_PCIE_0_CFG 667
+#define SLAVE_PCIE_1_CFG 668
+#define SLAVE_TCU 672
+#define SLAVE_APPSS 673
+
+#endif

^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH v7 6/8] interconnect: qcom: Add msm8916 interconnect provider driver
  2018-07-31 16:13 [PATCH v7 0/8] Introduce on-chip interconnect API Georgi Djakov
                   ` (4 preceding siblings ...)
  2018-07-31 16:13 ` [PATCH v7 5/8] dt-bindings: interconnect: Document qcom,msm8916 NoC bindings Georgi Djakov
@ 2018-07-31 16:13 ` Georgi Djakov
  2018-08-03 22:59   ` Evan Green
  2018-07-31 16:13 ` [PATCH v7 7/8] dt-bindings: Introduce interconnect consumers bindings Georgi Djakov
  2018-07-31 16:13 ` [PATCH v7 8/8] interconnect: Allow endpoints translation via DT Georgi Djakov
  7 siblings, 1 reply; 30+ messages in thread
From: Georgi Djakov @ 2018-07-31 16:13 UTC (permalink / raw)
  To: linux-pm, gregkh
  Cc: rjw, robh+dt, mturquette, khilman, vincent.guittot, skannan,
	bjorn.andersson, amit.kucheria, seansw, daidavid1, evgreen,
	mark.rutland, lorenzo.pieralisi, abailon, arnd, linux-kernel,
	linux-arm-kernel, linux-arm-msm, georgi.djakov

Add a driver for the Qualcomm interconnect buses found on msm8916-based
platforms.

Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
---
 drivers/interconnect/Kconfig        |   5 +
 drivers/interconnect/Makefile       |   1 +
 drivers/interconnect/qcom/Kconfig   |   9 +
 drivers/interconnect/qcom/Makefile  |   2 +
 drivers/interconnect/qcom/msm8916.c | 510 ++++++++++++++++++++++++++++
 5 files changed, 527 insertions(+)
 create mode 100644 drivers/interconnect/qcom/msm8916.c

diff --git a/drivers/interconnect/Kconfig b/drivers/interconnect/Kconfig
index a261c7d41deb..07a8276fa35a 100644
--- a/drivers/interconnect/Kconfig
+++ b/drivers/interconnect/Kconfig
@@ -8,3 +8,8 @@ menuconfig INTERCONNECT
 
 	  If unsure, say no.
 
+if INTERCONNECT
+
+source "drivers/interconnect/qcom/Kconfig"
+
+endif
diff --git a/drivers/interconnect/Makefile b/drivers/interconnect/Makefile
index 97fca2e09d24..7944cbca0527 100644
--- a/drivers/interconnect/Makefile
+++ b/drivers/interconnect/Makefile
@@ -1,2 +1,3 @@
 # SPDX-License-Identifier: GPL-2.0
 obj-$(CONFIG_INTERCONNECT)		+= core.o
+obj-$(CONFIG_INTERCONNECT_QCOM)		+= qcom/
diff --git a/drivers/interconnect/qcom/Kconfig b/drivers/interconnect/qcom/Kconfig
index 812e9464765a..add6d7e1e24e 100644
--- a/drivers/interconnect/qcom/Kconfig
+++ b/drivers/interconnect/qcom/Kconfig
@@ -12,3 +12,12 @@ config INTERCONNECT_QCOM_SMD_RPM
 	help
 	  This is a driver for communicating interconnect related configuration
 	  details with a remote processor (RPM) on Qualcomm platforms.
+
+config INTERCONNECT_QCOM_MSM8916
+	tristate "Qualcomm MSM8916 interconnect driver"
+	depends on INTERCONNECT_QCOM
+	depends on QCOM_SMD_RPM
+	select INTERCONNECT_QCOM_SMD_RPM
+	help
+	  This is a driver for the Qualcomm Network-on-Chip on msm8916-based
+	  platforms.
diff --git a/drivers/interconnect/qcom/Makefile b/drivers/interconnect/qcom/Makefile
index 5b65e4011b59..53f3380277f6 100644
--- a/drivers/interconnect/qcom/Makefile
+++ b/drivers/interconnect/qcom/Makefile
@@ -1,2 +1,4 @@
 # SPDX-License-Identifier: GPL-2.0
 obj-$(CONFIG_INTERCONNECT_QCOM_SMD_RPM) += smd-rpm.o
+
+obj-$(CONFIG_INTERCONNECT_QCOM_MSM8916) += msm8916.o
diff --git a/drivers/interconnect/qcom/msm8916.c b/drivers/interconnect/qcom/msm8916.c
new file mode 100644
index 000000000000..74145f4f74f1
--- /dev/null
+++ b/drivers/interconnect/qcom/msm8916.c
@@ -0,0 +1,510 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2018 Linaro Ltd
+ * Author: Georgi Djakov <georgi.djakov@linaro.org>
+ */
+
+#include <dt-bindings/interconnect/qcom.h>
+#include <linux/clk.h>
+#include <linux/device.h>
+#include <linux/interconnect-provider.h>
+#include <linux/io.h>
+#include <linux/module.h>
+#include <linux/of_device.h>
+#include <linux/of_platform.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+
+#include "smd-rpm.h"
+
+#define RPM_BUS_MASTER_REQ      0x73616d62
+#define RPM_BUS_SLAVE_REQ       0x766c7362
+
+#define to_qcom_provider(_provider) \
+	container_of(_provider, struct qcom_icc_provider, provider)
+
+enum qcom_qos_mode {
+	QCOM_QOS_MODE_BYPASS = 0,
+	QCOM_QOS_MODE_FIXED,
+	QCOM_QOS_MODE_MAX,
+};
+
+struct qcom_icc_provider {
+	struct icc_provider	provider;
+	void __iomem		*base;
+	struct clk		*bus_clk;
+	struct clk		*bus_a_clk;
+};
+
+#define MSM8916_MAX_LINKS	8
+
+/**
+ * struct qcom_icc_node - Qualcomm specific interconnect nodes
+ * @name: the node name used in debugfs
+ * @id: a unique node identifier
+ * @links: an array of nodes where we can go next while traversing
+ * @num_links: the total number of @links
+ * @port: the offset index into the masters QoS register space
+ * @buswidth: width of the interconnect between a node and the bus (bytes)
+ * @ap_owned: the AP CPU does the writing to QoS registers
+ * @qos_mode: QoS mode for ap_owned resources
+ * @mas_rpm_id:	RPM id for devices that are bus masters
+ * @slv_rpm_id:	RPM id for devices that are bus slaves
+ * @rate: current bus clock rate in Hz
+ */
+struct qcom_icc_node {
+	unsigned char *name;
+	u16 id;
+	u16 links[MSM8916_MAX_LINKS];
+	u16 num_links;
+	u16 port;
+	u16 buswidth;
+	bool ap_owned;
+	enum qcom_qos_mode qos_mode;
+	int mas_rpm_id;
+	int slv_rpm_id;
+	u64 rate;
+};
+
+struct qcom_icc_desc {
+	struct qcom_icc_node **nodes;
+	size_t num_nodes;
+};
+
+#define DEFINE_QNODE(_name, _id, _port, _buswidth, _ap_owned,		\
+			_mas_rpm_id, _slv_rpm_id, _qos_mode,		\
+			_numlinks, ...)					\
+		static struct qcom_icc_node _name = {			\
+		.name = #_name,						\
+		.id = _id,						\
+		.port = _port,						\
+		.buswidth = _buswidth,					\
+		.ap_owned = _ap_owned,					\
+		.mas_rpm_id = _mas_rpm_id,				\
+		.slv_rpm_id = _slv_rpm_id,				\
+		.qos_mode = _qos_mode,					\
+		.num_links = _numlinks,					\
+		.links = { __VA_ARGS__ },				\
+	}
+
+DEFINE_QNODE(bimc_snoc_mas, BIMC_SNOC_MAS, 0, 8, 1, -1, -1, QCOM_QOS_MODE_FIXED, 1, BIMC_SNOC_SLV);
+DEFINE_QNODE(bimc_snoc_slv, BIMC_SNOC_SLV, 0, 8, 1, -1, -1, QCOM_QOS_MODE_FIXED, 2, SNOC_INT_0, SNOC_INT_1);
+DEFINE_QNODE(mas_apss, MASTER_AMPSS_M0, 0, 8, 1, -1, -1, QCOM_QOS_MODE_FIXED, 3, SLAVE_EBI_CH0, BIMC_SNOC_MAS, SLAVE_AMPSS_L2);
+DEFINE_QNODE(mas_audio, MASTER_LPASS, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 1, PNOC_M_0);
+DEFINE_QNODE(mas_blsp_1, MASTER_BLSP_1, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 1, PNOC_M_1);
+DEFINE_QNODE(mas_dehr, MASTER_DEHR, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 1, PNOC_M_0);
+DEFINE_QNODE(mas_gfx, MASTER_GRAPHICS_3D, 2, 8, 1, -1, -1, QCOM_QOS_MODE_FIXED, 3, SLAVE_EBI_CH0, BIMC_SNOC_MAS, SLAVE_AMPSS_L2);
+DEFINE_QNODE(mas_jpeg, MASTER_JPEG, 6, 16, 1, -1, -1, QCOM_QOS_MODE_BYPASS, 2, SNOC_MM_INT_0, SNOC_MM_INT_2);
+DEFINE_QNODE(mas_mdp, MASTER_MDP_PORT0, 7, 16, 1, -1, -1, QCOM_QOS_MODE_BYPASS, 2, SNOC_MM_INT_0, SNOC_MM_INT_2);
+DEFINE_QNODE(mas_pnoc_crypto_0, MASTER_CRYPTO_CORE0, 0, 8, 0, -1, -1, QCOM_QOS_MODE_FIXED, 1, PNOC_INT_1);
+DEFINE_QNODE(mas_pnoc_sdcc_1, MASTER_SDCC_1, 7, 8, 0, -1, -1, QCOM_QOS_MODE_FIXED, 1, PNOC_INT_1);
+DEFINE_QNODE(mas_pnoc_sdcc_2, MASTER_SDCC_2, 8, 8, 0, -1, -1, QCOM_QOS_MODE_FIXED, 1, PNOC_INT_1);
+DEFINE_QNODE(mas_qdss_bam, MASTER_QDSS_BAM, 11, 16, 1, -1, -1, QCOM_QOS_MODE_FIXED, 1, SNOC_QDSS_INT);
+DEFINE_QNODE(mas_qdss_etr, MASTER_QDSS_ETR, 10, 16, 1, -1, -1, QCOM_QOS_MODE_FIXED, 1, SNOC_QDSS_INT);
+DEFINE_QNODE(mas_snoc_cfg, MASTER_SNOC_CFG, 0, 16, 0, 20, -1, QCOM_QOS_MODE_BYPASS, 1, SNOC_QDSS_INT);
+DEFINE_QNODE(mas_spdm, MASTER_SPDM, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 1, PNOC_M_0);
+DEFINE_QNODE(mas_tcu0, MASTER_TCU_0, 5, 8, 1, -1, -1, QCOM_QOS_MODE_FIXED, 3, SLAVE_EBI_CH0, BIMC_SNOC_MAS, SLAVE_AMPSS_L2);
+DEFINE_QNODE(mas_tcu1, MASTER_TCU_1, 6, 8, 1, -1, -1, QCOM_QOS_MODE_FIXED, 3, SLAVE_EBI_CH0, BIMC_SNOC_MAS, SLAVE_AMPSS_L2);
+DEFINE_QNODE(mas_usb_hs, MASTER_USB_HS, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 1, PNOC_M_1);
+DEFINE_QNODE(mas_vfe, MASTER_VFE, 9, 16, 1, -1, -1, QCOM_QOS_MODE_BYPASS, 2, SNOC_MM_INT_1, SNOC_MM_INT_2);
+DEFINE_QNODE(mas_video, MASTER_VIDEO_P0, 8, 16, 1, -1, -1, QCOM_QOS_MODE_BYPASS, 2, SNOC_MM_INT_0, SNOC_MM_INT_2);
+DEFINE_QNODE(mm_int_0, SNOC_MM_INT_0, 0, 16, 1, -1, -1, QCOM_QOS_MODE_FIXED, 1, SNOC_MM_INT_BIMC);
+DEFINE_QNODE(mm_int_1, SNOC_MM_INT_1, 0, 16, 1, -1, -1, QCOM_QOS_MODE_FIXED, 1, SNOC_MM_INT_BIMC);
+DEFINE_QNODE(mm_int_2, SNOC_MM_INT_2, 0, 16, 1, -1, -1, QCOM_QOS_MODE_FIXED, 1, SNOC_INT_0);
+DEFINE_QNODE(mm_int_bimc, SNOC_MM_INT_BIMC, 0, 16, 1, -1, -1, QCOM_QOS_MODE_FIXED, 1, SNOC_BIMC_1_MAS);
+DEFINE_QNODE(pnoc_int_0, PNOC_INT_0, 0, 8, 0, -1, -1, QCOM_QOS_MODE_FIXED, 8, PNOC_SNOC_MAS, PNOC_SLV_0, PNOC_SLV_1, PNOC_SLV_2, PNOC_SLV_3, PNOC_SLV_4, PNOC_SLV_8, PNOC_SLV_9);
+DEFINE_QNODE(pnoc_int_1, PNOC_INT_1, 0, 8, 0, -1, -1, QCOM_QOS_MODE_FIXED, 1, PNOC_SNOC_MAS);
+DEFINE_QNODE(pnoc_m_0, PNOC_M_0, 0, 8, 0, -1, -1, QCOM_QOS_MODE_FIXED, 1, PNOC_INT_0);
+DEFINE_QNODE(pnoc_m_1, PNOC_M_1, 0, 8, 0, -1, -1, QCOM_QOS_MODE_FIXED, 1, PNOC_SNOC_MAS);
+DEFINE_QNODE(pnoc_s_0, PNOC_SLV_0, 0, 8, 0, -1, -1, QCOM_QOS_MODE_FIXED, 5, SLAVE_CLK_CTL, SLAVE_TLMM, SLAVE_MSM_TCSR, SLAVE_SECURITY, SLAVE_MSS);
+DEFINE_QNODE(pnoc_s_1, PNOC_SLV_1, 0, 8, 0, -1, -1, QCOM_QOS_MODE_FIXED, 5, SLAVE_IMEM_CFG, SLAVE_CRYPTO_0_CFG, SLAVE_RPM_MSG_RAM, SLAVE_MSM_PDM, SLAVE_PRNG);
+DEFINE_QNODE(pnoc_s_2, PNOC_SLV_2, 0, 8, 0, -1, -1, QCOM_QOS_MODE_FIXED, 5, SLAVE_SPDM, SLAVE_BOOT_ROM, SLAVE_BIMC_CFG, SLAVE_PNOC_CFG, SLAVE_PMIC_ARB);
+DEFINE_QNODE(pnoc_s_3, PNOC_SLV_3, 0, 8, 0, -1, -1, QCOM_QOS_MODE_FIXED, 5, SLAVE_MPM, SLAVE_SNOC_CFG, SLAVE_RBCPR_CFG, SLAVE_QDSS_CFG, SLAVE_DEHR_CFG);
+DEFINE_QNODE(pnoc_s_4, PNOC_SLV_4, 0, 8, 0, -1, -1, QCOM_QOS_MODE_FIXED, 3, SLAVE_VENUS_CFG, SLAVE_CAMERA_CFG, SLAVE_DISPLAY_CFG);
+DEFINE_QNODE(pnoc_s_8, PNOC_SLV_8, 0, 8, 0, -1, -1, QCOM_QOS_MODE_FIXED, 3, SLAVE_USB_HS, SLAVE_SDCC_1, SLAVE_BLSP_1);
+DEFINE_QNODE(pnoc_s_9, PNOC_SLV_9, 0, 8, 0, -1, -1, QCOM_QOS_MODE_FIXED, 3, SLAVE_SDCC_4, SLAVE_LPASS, SLAVE_GRAPHICS_3D_CFG);
+DEFINE_QNODE(pnoc_snoc_mas, PNOC_SNOC_MAS, 0, 8, 0, 29, -1, QCOM_QOS_MODE_FIXED, 1, PNOC_SNOC_SLV);
+DEFINE_QNODE(pnoc_snoc_slv, PNOC_SNOC_SLV, 0, 8, 0, -1, 45, QCOM_QOS_MODE_FIXED, 3, SNOC_INT_0, SNOC_INT_BIMC, SNOC_INT_1);
+DEFINE_QNODE(qdss_int, SNOC_QDSS_INT, 0, 8, 1, -1, -1, QCOM_QOS_MODE_FIXED, 2, SNOC_INT_0, SNOC_INT_BIMC);
+DEFINE_QNODE(slv_apps_l2, SLAVE_AMPSS_L2, 0, 8, 0, -1, -1, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_apss, SYSTEM_SLAVE_FAB_APPS, 0, 4, 0, -1, 20, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_audio, SLAVE_LPASS, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_bimc_cfg, SLAVE_BIMC_CFG, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_blsp_1, SLAVE_BLSP_1, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_boot_rom, SLAVE_BOOT_ROM, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_camera_cfg, SLAVE_CAMERA_CFG, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_cats_0, SLAVE_CATS_128, 0, 16, 0, -1, 106, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_cats_1, SLAVE_OCMEM_64, 0, 8, 0, -1, 107, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_clk_ctl, SLAVE_CLK_CTL, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_crypto_0_cfg, SLAVE_CRYPTO_0_CFG, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_dehr_cfg, SLAVE_DEHR_CFG, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_display_cfg, SLAVE_DISPLAY_CFG, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_ebi_ch0, SLAVE_EBI_CH0, 0, 8, 0, -1, 0, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_gfx_cfg, SLAVE_GRAPHICS_3D_CFG, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_imem_cfg, SLAVE_IMEM_CFG, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_imem, SLAVE_SYSTEM_IMEM, 0, 8, 0, -1, 26, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_mpm, SLAVE_MPM, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_msg_ram, SLAVE_RPM_MSG_RAM, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_mss, SLAVE_MSS, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_pdm, SLAVE_MSM_PDM, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_pmic_arb, SLAVE_PMIC_ARB, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_pnoc_cfg, SLAVE_PNOC_CFG, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_prng, SLAVE_PRNG, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_qdss_cfg, SLAVE_QDSS_CFG, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_qdss_stm, SLAVE_QDSS_STM, 0, 4, 0, -1, 30, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_rbcpr_cfg, SLAVE_RBCPR_CFG, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_sdcc_1, SLAVE_SDCC_1, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_sdcc_2, SLAVE_SDCC_4, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_security, SLAVE_SECURITY, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_snoc_cfg, SLAVE_SNOC_CFG, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_spdm, SLAVE_SPDM, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_srvc_snoc, SLAVE_SERVICE_SNOC, 0, 8, 0, -1, 29, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_tcsr, SLAVE_MSM_TCSR, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_tlmm, SLAVE_TLMM, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_usb_hs, SLAVE_USB_HS, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_venus_cfg, SLAVE_VENUS_CFG, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(snoc_bimc_0_mas, SNOC_BIMC_0_MAS, 0, 8, 0, 3, -1, QCOM_QOS_MODE_FIXED, 1, SNOC_BIMC_0_SLV);
+DEFINE_QNODE(snoc_bimc_0_slv, SNOC_BIMC_0_SLV, 0, 8, 0, -1, 24, QCOM_QOS_MODE_FIXED, 1, SLAVE_EBI_CH0);
+DEFINE_QNODE(snoc_bimc_1_mas, SNOC_BIMC_1_MAS, 0, 16, 1, -1, -1, QCOM_QOS_MODE_FIXED, 1, SNOC_BIMC_1_SLV);
+DEFINE_QNODE(snoc_bimc_1_slv, SNOC_BIMC_1_SLV, 0, 8, 1, -1, -1, QCOM_QOS_MODE_FIXED, 1, SLAVE_EBI_CH0);
+DEFINE_QNODE(snoc_int_0, SNOC_INT_0, 0, 8, 0, 99, 130, QCOM_QOS_MODE_FIXED, 3, SLAVE_QDSS_STM, SLAVE_SYSTEM_IMEM, SNOC_PNOC_MAS);
+DEFINE_QNODE(snoc_int_1, SNOC_INT_1, 0, 8, 0, 100, 131, QCOM_QOS_MODE_FIXED, 3, SYSTEM_SLAVE_FAB_APPS, SLAVE_CATS_128, SLAVE_OCMEM_64);
+DEFINE_QNODE(snoc_int_bimc, SNOC_INT_BIMC, 0, 8, 0, 101, 132, QCOM_QOS_MODE_FIXED, 1, SNOC_BIMC_0_MAS);
+DEFINE_QNODE(snoc_pnoc_mas, SNOC_PNOC_MAS, 0, 8, 0, -1, -1, QCOM_QOS_MODE_FIXED, 1, SNOC_PNOC_SLV);
+DEFINE_QNODE(snoc_pnoc_slv, SNOC_PNOC_SLV, 0, 8, 0, -1, -1, QCOM_QOS_MODE_FIXED, 1, PNOC_INT_0);
+
+static struct qcom_icc_node *msm8916_snoc_nodes[] = {
+	&bimc_snoc_slv,
+	&mas_jpeg,
+	&mas_mdp,
+	&mas_qdss_bam,
+	&mas_qdss_etr,
+	&mas_snoc_cfg,
+	&mas_vfe,
+	&mas_video,
+	&mm_int_0,
+	&mm_int_1,
+	&mm_int_2,
+	&mm_int_bimc,
+	&pnoc_snoc_slv,
+	&qdss_int,
+	&slv_apss,
+	&slv_cats_0,
+	&slv_cats_1,
+	&slv_imem,
+	&slv_qdss_stm,
+	&slv_srvc_snoc,
+	&snoc_bimc_0_mas,
+	&snoc_bimc_1_mas,
+	&snoc_int_0,
+	&snoc_int_1,
+	&snoc_int_bimc,
+	&snoc_pnoc_mas,
+};
+
+static struct qcom_icc_desc msm8916_snoc = {
+	.nodes = msm8916_snoc_nodes,
+	.num_nodes = ARRAY_SIZE(msm8916_snoc_nodes),
+};
+
+static struct qcom_icc_node *msm8916_bimc_nodes[] = {
+	&bimc_snoc_mas,
+	&mas_apss,
+	&mas_gfx,
+	&mas_tcu0,
+	&mas_tcu1,
+	&slv_apps_l2,
+	&slv_ebi_ch0,
+	&snoc_bimc_0_slv,
+	&snoc_bimc_1_slv,
+};
+
+static struct qcom_icc_desc msm8916_bimc = {
+	.nodes = msm8916_bimc_nodes,
+	.num_nodes = ARRAY_SIZE(msm8916_bimc_nodes),
+};
+
+static struct qcom_icc_node *msm8916_pnoc_nodes[] = {
+	&mas_audio,
+	&mas_blsp_1,
+	&mas_dehr,
+	&mas_pnoc_crypto_0,
+	&mas_pnoc_sdcc_1,
+	&mas_pnoc_sdcc_2,
+	&mas_spdm,
+	&mas_usb_hs,
+	&pnoc_int_0,
+	&pnoc_int_1,
+	&pnoc_m_0,
+	&pnoc_m_1,
+	&pnoc_s_0,
+	&pnoc_s_1,
+	&pnoc_s_2,
+	&pnoc_s_3,
+	&pnoc_s_4,
+	&pnoc_s_8,
+	&pnoc_s_9,
+	&pnoc_snoc_mas,
+	&slv_audio,
+	&slv_bimc_cfg,
+	&slv_blsp_1,
+	&slv_boot_rom,
+	&slv_camera_cfg,
+	&slv_clk_ctl,
+	&slv_crypto_0_cfg,
+	&slv_dehr_cfg,
+	&slv_display_cfg,
+	&slv_gfx_cfg,
+	&slv_imem_cfg,
+	&slv_mpm,
+	&slv_msg_ram,
+	&slv_mss,
+	&slv_pdm,
+	&slv_pmic_arb,
+	&slv_pnoc_cfg,
+	&slv_prng,
+	&slv_qdss_cfg,
+	&slv_rbcpr_cfg,
+	&slv_sdcc_1,
+	&slv_sdcc_2,
+	&slv_security,
+	&slv_snoc_cfg,
+	&slv_spdm,
+	&slv_tcsr,
+	&slv_tlmm,
+	&slv_usb_hs,
+	&slv_venus_cfg,
+	&snoc_pnoc_slv,
+};
+
+static struct qcom_icc_desc msm8916_pnoc = {
+	.nodes = msm8916_pnoc_nodes,
+	.num_nodes = ARRAY_SIZE(msm8916_pnoc_nodes),
+};
+
+static int qcom_icc_aggregate(struct icc_node *node, u32 avg_bw, u32 peak_bw,
+			      u32 *agg_avg, u32 *agg_peak)
+{
+	*agg_avg += avg_bw;
+	*agg_peak = max(*agg_peak, peak_bw);
+
+	return 0;
+}
+
+static int qcom_icc_set(struct icc_node *src, struct icc_node *dst)
+{
+	struct qcom_icc_provider *qp;
+	struct qcom_icc_node *qn;
+	struct icc_provider *provider;
+	struct icc_node *n;
+	u64 sum_bw;
+	u64 max_peak_bw;
+	u64 rate;
+	u32 agg_avg = 0;
+	u32 agg_peak = 0;
+	int ret = 0;
+
+	qn = src->data;
+	provider = src->provider;
+	qp = to_qcom_provider(provider);
+
+	list_for_each_entry(n, &provider->nodes, node_list)
+		qcom_icc_aggregate(n, n->avg_bw, n->peak_bw,
+				   &agg_avg, &agg_peak);
+
+	sum_bw = icc_units_to_bps(agg_avg);
+	max_peak_bw = icc_units_to_bps(agg_peak);
+
+	/* set bandwidth */
+	if (qn->ap_owned) {
+		/* TODO: set QoS */
+	} else {
+		/* send message to the RPM processor */
+		if (qn->mas_rpm_id != -1) {
+			ret = qcom_icc_rpm_smd_send(QCOM_SMD_RPM_ACTIVE_STATE,
+						    RPM_BUS_MASTER_REQ,
+						    qn->mas_rpm_id,
+						    sum_bw);
+			if (ret) {
+				pr_err("qcom_icc_rpm_smd_send mas error %d\n",
+				       ret);
+				return ret;
+			}
+		}
+
+		if (qn->slv_rpm_id != -1) {
+			ret = qcom_icc_rpm_smd_send(QCOM_SMD_RPM_ACTIVE_STATE,
+						    RPM_BUS_SLAVE_REQ,
+						    qn->slv_rpm_id,
+						    sum_bw);
+			if (ret) {
+				pr_err("qcom_icc_rpm_smd_send slv error %d\n",
+				       ret);
+				return ret;
+			}
+		}
+	}
+
+	rate = max(sum_bw, max_peak_bw);
+
+	do_div(rate, qn->buswidth);
+
+	if (qn->rate != rate) {
+		ret = clk_set_rate(qp->bus_clk, rate);
+		if (ret) {
+			pr_err("set clk rate %lld error %d\n", rate, ret);
+			return ret;
+		}
+
+		ret = clk_set_rate(qp->bus_a_clk, rate);
+		if (ret) {
+			pr_err("set clk rate %lld error %d\n", rate, ret);
+			return ret;
+		}
+
+		qn->rate = rate;
+	}
+
+	return ret;
+}
+
+static int qnoc_probe(struct platform_device *pdev)
+{
+	const struct qcom_icc_desc *desc;
+	struct qcom_icc_node **qnodes;
+	struct icc_node *node;
+	struct qcom_icc_provider *qp;
+	struct resource *res;
+	struct icc_provider *provider;
+	size_t num_nodes, i;
+	int ret;
+
+	/* wait for RPM */
+	if (!qcom_icc_rpm_smd_available())
+		return -EPROBE_DEFER;
+
+	desc = of_device_get_match_data(&pdev->dev);
+	if (!desc)
+		return -EINVAL;
+
+	qnodes = desc->nodes;
+	num_nodes = desc->num_nodes;
+
+	qp = devm_kzalloc(&pdev->dev, sizeof(*qp), GFP_KERNEL);
+	if (!qp)
+		return -ENOMEM;
+
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	qp->base = devm_ioremap_resource(&pdev->dev, res);
+	if (IS_ERR(qp->base))
+		return PTR_ERR(qp->base);
+
+	qp->bus_clk = devm_clk_get(&pdev->dev, "bus_clk");
+	if (IS_ERR(qp->bus_clk))
+		return PTR_ERR(qp->bus_clk);
+
+	ret = clk_prepare_enable(qp->bus_clk);
+	if (ret) {
+		dev_err(&pdev->dev, "error enabling bus_clk: %d\n", ret);
+		return ret;
+	}
+
+	qp->bus_a_clk = devm_clk_get(&pdev->dev, "bus_a_clk");
+	if (IS_ERR(qp->bus_a_clk))
+		return PTR_ERR(qp->bus_a_clk);
+
+	ret = clk_prepare_enable(qp->bus_a_clk);
+	if (ret) {
+		dev_err(&pdev->dev, "error enabling bus_a_clk: %d\n", ret);
+		clk_disable_unprepare(qp->bus_clk);
+		return ret;
+	}
+
+	provider = &qp->provider;
+	provider->dev = &pdev->dev;
+	provider->set = &qcom_icc_set;
+	provider->aggregate = &qcom_icc_aggregate;
+	INIT_LIST_HEAD(&provider->nodes);
+	provider->data = qp;
+
+	ret = icc_provider_add(provider);
+	if (ret) {
+		dev_err(&pdev->dev, "error adding interconnect provider\n");
+		clk_disable_unprepare(qp->bus_clk);
+		clk_disable_unprepare(qp->bus_a_clk);
+		return ret;
+	}
+
+	for (i = 0; i < num_nodes; i++) {
+		size_t j;
+
+		/* create the node and register it with the provider */
+		node = icc_node_create(qnodes[i]->id);
+		if (IS_ERR(node)) {
+			ret = PTR_ERR(node);
+			goto err;
+		}
+
+		node->name = qnodes[i]->name;
+		node->data = qnodes[i];
+		icc_node_add(node, provider);
+
+		dev_dbg(&pdev->dev, "registered node %s\n", node->name);
+
+		/* populate links */
+		for (j = 0; j < qnodes[i]->num_links; j++)
+			if (qnodes[i]->links[j])
+				icc_link_create(node, qnodes[i]->links[j]);
+	}
+
+	platform_set_drvdata(pdev, provider);
+
+	return ret;
+err:
+	list_for_each_entry(node, &provider->nodes, node_list) {
+		icc_node_del(node);
+		icc_node_destroy(node->id);
+	}
+	clk_disable_unprepare(qp->bus_clk);
+	clk_disable_unprepare(qp->bus_a_clk);
+	icc_provider_del(provider);
+
+	return ret;
+}
+
+static int qnoc_remove(struct platform_device *pdev)
+{
+	struct icc_provider *provider = platform_get_drvdata(pdev);
+	struct qcom_icc_provider *qp = to_qcom_provider(provider);
+	struct icc_node *n;
+
+	list_for_each_entry(n, &provider->nodes, node_list) {
+		icc_node_del(n);
+		icc_node_destroy(n->id);
+	}
+	clk_disable_unprepare(qp->bus_clk);
+	clk_disable_unprepare(qp->bus_a_clk);
+
+	return icc_provider_del(provider);
+}
+
+static const struct of_device_id qnoc_of_match[] = {
+	{ .compatible = "qcom,msm8916-pnoc", .data = &msm8916_pnoc },
+	{ .compatible = "qcom,msm8916-snoc", .data = &msm8916_snoc },
+	{ .compatible = "qcom,msm8916-bimc", .data = &msm8916_bimc },
+	{ },
+};
+MODULE_DEVICE_TABLE(of, qnoc_of_match);
+
+static struct platform_driver qnoc_driver = {
+	.probe = qnoc_probe,
+	.remove = qnoc_remove,
+	.driver = {
+		.name = "qnoc-msm8916",
+		.of_match_table = qnoc_of_match,
+	},
+};
+module_platform_driver(qnoc_driver);
+MODULE_AUTHOR("Georgi Djakov <georgi.djakov@linaro.org>");
+MODULE_DESCRIPTION("Qualcomm msm8916 NoC driver");
+MODULE_LICENSE("GPL v2");

^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH v7 7/8] dt-bindings: Introduce interconnect consumers bindings
  2018-07-31 16:13 [PATCH v7 0/8] Introduce on-chip interconnect API Georgi Djakov
                   ` (5 preceding siblings ...)
  2018-07-31 16:13 ` [PATCH v7 6/8] interconnect: qcom: Add msm8916 interconnect provider driver Georgi Djakov
@ 2018-07-31 16:13 ` Georgi Djakov
  2018-07-31 16:13 ` [PATCH v7 8/8] interconnect: Allow endpoints translation via DT Georgi Djakov
  7 siblings, 0 replies; 30+ messages in thread
From: Georgi Djakov @ 2018-07-31 16:13 UTC (permalink / raw)
  To: linux-pm, gregkh
  Cc: rjw, robh+dt, mturquette, khilman, vincent.guittot, skannan,
	bjorn.andersson, amit.kucheria, seansw, daidavid1, evgreen,
	mark.rutland, lorenzo.pieralisi, abailon, arnd, linux-kernel,
	linux-arm-kernel, linux-arm-msm, georgi.djakov

Add documentation for the interconnect consumer bindings, which allow
linking a device node (consumer) to its interconnect controller hardware.

The aim is to enable drivers to use a framework API to configure an
interconnect path by providing their struct device pointer and a name.

Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
---
 .../bindings/interconnect/interconnect.txt    | 27 +++++++++++++++++++
 1 file changed, 27 insertions(+)

diff --git a/Documentation/devicetree/bindings/interconnect/interconnect.txt b/Documentation/devicetree/bindings/interconnect/interconnect.txt
index 6e2b2971b094..5cb7d3c8d44d 100644
--- a/Documentation/devicetree/bindings/interconnect/interconnect.txt
+++ b/Documentation/devicetree/bindings/interconnect/interconnect.txt
@@ -31,3 +31,30 @@ Example:
 			clocks = <&rpmcc RPM_SMD_SNOC_CLK>,
 				 <&rpmcc RPM_SMD_SNOC_A_CLK>;
 		};
+
+
+= interconnect consumers =
+
+The interconnect consumers are device nodes which dynamically express their
+bandwidth requirements along interconnect paths they are connected to. There
+can be multiple interconnect providers on a SoC and the consumer may consume
+multiple paths from different providers depending on use case and the
+components it has to interact with.
+
+Required properties:
+interconnects : Pairs of phandles and interconnect provider specifier to denote
+	        the edge source and destination ports of the interconnect path.
+
+Optional properties:
+interconnect-names : List of interconnect path name strings sorted in the same
+		     order as the interconnects property. Consumers drivers will use
+		     interconnect-names to match interconnect paths with interconnect
+		     specifiers.
+
+Example:
+
+	sdhci@7864000 {
+		...
+		interconnects = <&pnoc MASTER_SDCC_1 &bimc SLAVE_EBI_CH0>;
+		interconnect-names = "ddr";
+	};

^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH v7 8/8] interconnect: Allow endpoints translation via DT
  2018-07-31 16:13 [PATCH v7 0/8] Introduce on-chip interconnect API Georgi Djakov
                   ` (6 preceding siblings ...)
  2018-07-31 16:13 ` [PATCH v7 7/8] dt-bindings: Introduce interconnect consumers bindings Georgi Djakov
@ 2018-07-31 16:13 ` Georgi Djakov
  2018-08-01 22:57   ` skannan
  2018-08-03 22:59   ` Evan Green
  7 siblings, 2 replies; 30+ messages in thread
From: Georgi Djakov @ 2018-07-31 16:13 UTC (permalink / raw)
  To: linux-pm, gregkh
  Cc: rjw, robh+dt, mturquette, khilman, vincent.guittot, skannan,
	bjorn.andersson, amit.kucheria, seansw, daidavid1, evgreen,
	mark.rutland, lorenzo.pieralisi, abailon, arnd, linux-kernel,
	linux-arm-kernel, linux-arm-msm, georgi.djakov

Currently we support only platform data for specifying the interconnect
endpoints. As the endpoints are now hard-coded into the consumer drivers,
this may lead to complications when a single driver is used by multiple
SoCs that have different interconnect topologies.
To avoid cluttering the consumer drivers, introduce a translation function
to help us get the board-specific interconnect data from device-tree.
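
For context, a consumer driver would typically combine this helper with the
existing icc_set()/icc_put() calls roughly as in the sketch below. This is
only an illustration based on the declarations in this series; the "ddr"
path name, the probe/remove wrappers and the bandwidth values are made up,
and NULL returns (framework disabled) are not handled here for brevity:

#include <linux/err.h>
#include <linux/interconnect.h>
#include <linux/platform_device.h>

static struct icc_path *ddr_path;

static int example_consumer_probe(struct platform_device *pdev)
{
	int ret;

	/* resolve the path described by the "interconnects" DT property */
	ddr_path = of_icc_get(&pdev->dev, "ddr");
	if (IS_ERR(ddr_path))
		return PTR_ERR(ddr_path);

	/* request example average and peak bandwidth values on that path */
	ret = icc_set(ddr_path, 1000, 2000);
	if (ret)
		icc_put(ddr_path);

	return ret;
}

static int example_consumer_remove(struct platform_device *pdev)
{
	/* drop the bandwidth request and release the path */
	icc_put(ddr_path);
	return 0;
}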

Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
---
 drivers/interconnect/core.c  | 62 ++++++++++++++++++++++++++++++++++++
 include/linux/interconnect.h |  7 ++++
 2 files changed, 69 insertions(+)

diff --git a/drivers/interconnect/core.c b/drivers/interconnect/core.c
index 9fef180cf77e..d1b6adff0a3d 100644
--- a/drivers/interconnect/core.c
+++ b/drivers/interconnect/core.c
@@ -16,6 +16,7 @@
 #include <linux/module.h>
 #include <linux/mutex.h>
 #include <linux/slab.h>
+#include <linux/of.h>
 #include <linux/overflow.h>
 #include <linux/uaccess.h>
 
@@ -251,6 +252,67 @@ static int apply_constraints(struct icc_path *path)
 	return ret;
 }
 
+struct icc_path *of_icc_get(struct device *dev, const char *name)
+{
+	struct device_node *np = NULL;
+	struct of_phandle_args src_args, dst_args;
+	u32 src_id, dst_id;
+	int idx = 0;
+	int ret;
+
+	if (!dev || !dev->of_node)
+		return ERR_PTR(-ENODEV);
+
+	np = dev->of_node;
+
+	/*
+	 * When the consumer DT node does not have an "interconnects" property
+	 * return a NULL path to skip setting constraints.
+	 */
+	if (!of_find_property(np, "interconnects", NULL))
+		return NULL;
+
+	/*
+	 * We use a combination of phandle and specifier for the endpoint. For
+	 * now let's support only global ids and extend this in the future if
+	 * needed without breaking DT compatibility.
+	 */
+	if (name) {
+		idx = of_property_match_string(np, "interconnect-names", name);
+		if (idx < 0)
+			return ERR_PTR(idx);
+	}
+
+	ret = of_parse_phandle_with_args(np, "interconnects",
+					 "#interconnect-cells", idx * 2,
+					 &src_args);
+	if (ret)
+		return ERR_PTR(ret);
+
+	of_node_put(src_args.np);
+
+	if (!src_args.args_count || src_args.args_count > 1)
+		return ERR_PTR(-EINVAL);
+
+	src_id = src_args.args[0];
+
+	ret = of_parse_phandle_with_args(np, "interconnects",
+					 "#interconnect-cells", idx * 2 + 1,
+					 &dst_args);
+	if (ret)
+		return ERR_PTR(ret);
+
+	of_node_put(dst_args.np);
+
+	if (!dst_args.args_count || dst_args.args_count > 1)
+		return ERR_PTR(-EINVAL);
+
+	dst_id = dst_args.args[0];
+
+	return icc_get(dev, src_id, dst_id);
+}
+EXPORT_SYMBOL_GPL(of_icc_get);
+
 /**
  * icc_set() - set constraints on an interconnect path between two endpoints
  * @path: reference to the path returned by icc_get()
diff --git a/include/linux/interconnect.h b/include/linux/interconnect.h
index 593215371fd6..ae6744da9bc2 100644
--- a/include/linux/interconnect.h
+++ b/include/linux/interconnect.h
@@ -17,6 +17,7 @@ struct device;
 
 struct icc_path *icc_get(struct device *dev, const int src_id,
 			 const int dst_id);
+struct icc_path *of_icc_get(struct device *dev, const char *name);
 void icc_put(struct icc_path *path);
 int icc_set(struct icc_path *path, u32 avg_bw, u32 peak_bw);
 
@@ -28,6 +29,12 @@ static inline struct icc_path *icc_get(struct device *dev, const int src_id,
 	return NULL;
 }
 
+static inline struct icc_path *of_icc_get(struct device *dev,
+					  const char *name)
+{
+	return NULL;
+}
+
 static inline void icc_put(struct icc_path *path)
 {
 }

^ permalink raw reply related	[flat|nested] 30+ messages in thread

* Re: [PATCH v7 8/8] interconnect: Allow endpoints translation via DT
  2018-07-31 16:13 ` [PATCH v7 8/8] interconnect: Allow endpoints translation via DT Georgi Djakov
@ 2018-08-01 22:57   ` skannan
  2018-08-02 12:07     ` Georgi Djakov
  2018-08-03 22:59   ` Evan Green
  1 sibling, 1 reply; 30+ messages in thread
From: skannan @ 2018-08-01 22:57 UTC (permalink / raw)
  To: Georgi Djakov
  Cc: linux-pm, gregkh, rjw, robh+dt, mturquette, khilman,
	vincent.guittot, bjorn.andersson, amit.kucheria, seansw,
	daidavid1, evgreen, mark.rutland, lorenzo.pieralisi, abailon,
	arnd, linux-kernel, linux-arm-kernel, linux-arm-msm

On 2018-07-31 09:13, Georgi Djakov wrote:
> Currently we support only platform data for specifying the interconnect
> endpoints. As now the endpoints are hard-coded into the consumer driver
> this may lead to complications when a single driver is used by multiple
> SoCs, which may have different interconnect topology.
> To avoid cluttering the consumer drivers, introduce a translation 
> function
> to help us get the board specific interconnect data from device-tree.
> 
> Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
> ---
>  drivers/interconnect/core.c  | 62 ++++++++++++++++++++++++++++++++++++
>  include/linux/interconnect.h |  7 ++++
>  2 files changed, 69 insertions(+)
> 
> diff --git a/drivers/interconnect/core.c b/drivers/interconnect/core.c
> index 9fef180cf77e..d1b6adff0a3d 100644
> --- a/drivers/interconnect/core.c
> +++ b/drivers/interconnect/core.c
> @@ -16,6 +16,7 @@
>  #include <linux/module.h>
>  #include <linux/mutex.h>
>  #include <linux/slab.h>
> +#include <linux/of.h>
>  #include <linux/overflow.h>
>  #include <linux/uaccess.h>
> 
> @@ -251,6 +252,67 @@ static int apply_constraints(struct icc_path 
> *path)
>  	return ret;
>  }
> 
> +struct icc_path *of_icc_get(struct device *dev, const char *name)
> +{
> +	struct device_node *np = NULL;
> +	struct of_phandle_args src_args, dst_args;
> +	u32 src_id, dst_id;
> +	int idx = 0;
> +	int ret;
> +
> +	if (!dev || !dev->of_node)
> +		return ERR_PTR(-ENODEV);
> +
> +	np = dev->of_node;
> +
> +	/*
> +	 * When the consumer DT node do not have "interconnects" property
> +	 * return a NULL path to skip setting constraints.
> +	 */
> +	if (!of_find_property(np, "interconnects", NULL))
> +		return NULL;
> +
> +	/*
> +	 * We use a combination of phandle and specifier for endpoint. For 
> now
> +	 * lets support only global ids and extend this is the future if 
> needed
> +	 * without breaking DT compatibility.
> +	 */
> +	if (name) {
> +		idx = of_property_match_string(np, "interconnect-names", name);
> +		if (idx < 0)
> +			return ERR_PTR(idx);
> +	}
> +
> +	ret = of_parse_phandle_with_args(np, "interconnects",
> +					 "#interconnect-cells", idx * 2,
> +					 &src_args);
> +	if (ret)
> +		return ERR_PTR(ret);
> +
> +	of_node_put(src_args.np);
> +
> +	if (!src_args.args_count || src_args.args_count > 1)
> +		return ERR_PTR(-EINVAL);
> +
> +	src_id = src_args.args[0];
> +
> +	ret = of_parse_phandle_with_args(np, "interconnects",
> +					 "#interconnect-cells", idx * 2 + 1,
> +					 &dst_args);
> +	if (ret)
> +		return ERR_PTR(ret);
> +
> +	of_node_put(dst_args.np);
> +
> +	if (!dst_args.args_count || dst_args.args_count > 1)
> +		return ERR_PTR(-EINVAL);
> +
> +	dst_id = dst_args.args[0];
> +
> +	return icc_get(dev, src_id, dst_id);
> +}
> +EXPORT_SYMBOL_GPL(of_icc_get);
> +
>  /**
>   * icc_set() - set constraints on an interconnect path between two 
> endpoints
>   * @path: reference to the path returned by icc_get()
> diff --git a/include/linux/interconnect.h 
> b/include/linux/interconnect.h
> index 593215371fd6..ae6744da9bc2 100644
> --- a/include/linux/interconnect.h
> +++ b/include/linux/interconnect.h
> @@ -17,6 +17,7 @@ struct device;
> 
>  struct icc_path *icc_get(struct device *dev, const int src_id,
>  			 const int dst_id);
> +struct icc_path *of_icc_get(struct device *dev, const char *name);
>  void icc_put(struct icc_path *path);
>  int icc_set(struct icc_path *path, u32 avg_bw, u32 peak_bw);
> 
> @@ -28,6 +29,12 @@ static inline struct icc_path *icc_get(struct
> device *dev, const int src_id,
>  	return NULL;
>  }
> 
> +static inline struct icc_path *of_icc_get(struct device *dev,
> +					  const char *name)
> +{
> +	return NULL;
> +}
> +

Might want to return ERR_PTR(-ENODEV) or some error code so that the client
doesn't have to do a NULL check AND an error check?

-Saravana

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v7 1/8] interconnect: Add generic on-chip interconnect API
  2018-07-31 16:13 ` [PATCH v7 1/8] interconnect: Add generic " Georgi Djakov
@ 2018-08-02  0:05   ` Randy Dunlap
  2018-08-02 11:58     ` Georgi Djakov
  2018-08-03 22:59   ` Evan Green
  1 sibling, 1 reply; 30+ messages in thread
From: Randy Dunlap @ 2018-08-02  0:05 UTC (permalink / raw)
  To: Georgi Djakov, linux-pm, gregkh
  Cc: rjw, robh+dt, mturquette, khilman, vincent.guittot, skannan,
	bjorn.andersson, amit.kucheria, seansw, daidavid1, evgreen,
	mark.rutland, lorenzo.pieralisi, abailon, arnd, linux-kernel,
	linux-arm-kernel, linux-arm-msm

On 07/31/2018 09:13 AM, Georgi Djakov wrote:
> This patch introduces a new API to get requirements and configure the
> interconnect buses across the entire chipset to fit with the current
> demand.
> 
> The API is using a consumer/provider-based model, where the providers are
> the interconnect buses and the consumers could be various drivers.
> The consumers request interconnect resources (path) between endpoints and
> set the desired constraints on this data flow path. The providers receive
> requests from consumers and aggregate these requests for all master-slave
> pairs on that path. Then the providers configure each participating in the
> topology node according to the requested data flow path, physical links and
> constraints. The topology could be complicated and multi-tiered and is SoC
> specific.
> 
> Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
> ---
>  Documentation/interconnect/interconnect.rst |  96 ++++
>  drivers/Kconfig                             |   2 +
>  drivers/Makefile                            |   1 +
>  drivers/interconnect/Kconfig                |  10 +
>  drivers/interconnect/Makefile               |   2 +
>  drivers/interconnect/core.c                 | 569 ++++++++++++++++++++
>  include/linux/interconnect-provider.h       | 125 +++++
>  include/linux/interconnect.h                |  42 ++
>  8 files changed, 847 insertions(+)
>  create mode 100644 Documentation/interconnect/interconnect.rst
>  create mode 100644 drivers/interconnect/Kconfig
>  create mode 100644 drivers/interconnect/Makefile
>  create mode 100644 drivers/interconnect/core.c
>  create mode 100644 include/linux/interconnect-provider.h
>  create mode 100644 include/linux/interconnect.h
> 
> diff --git a/Documentation/interconnect/interconnect.rst b/Documentation/interconnect/interconnect.rst
> new file mode 100644
> index 000000000000..e628881ee218
> --- /dev/null
> +++ b/Documentation/interconnect/interconnect.rst
> @@ -0,0 +1,96 @@
> +.. SPDX-License-Identifier: GPL-2.0
> +
> +=====================================
> +GENERIC SYSTEM INTERCONNECT SUBSYSTEM
> +=====================================
> +
> +Introduction
> +------------
> +
> +This framework is designed to provide a standard kernel interface to control
> +the settings of the interconnects on a SoC. These settings can be throughput,

I would say:                         on an SoC.

Do you pronounce that as "sock" or the letters S.O.C.?


> +latency and priority between multiple interconnected devices or functional
> +blocks. This can be controlled dynamically in order to save power or provide
> +maximum performance.
> +
> +The interconnect bus is a hardware with configurable parameters, which can be

                    bus is hardware

> +set on a data path according to the requests received from various drivers.
> +An example of interconnect buses are the interconnects between various
> +components or functional blocks in chipsets. There can be multiple interconnects
> +on a SoC that can be multi-tiered.

      an SoC

> +
> +Below is a simplified diagram of a real-world SoC interconnect bus topology.
> +
> +::
> +
> + +----------------+    +----------------+
> + | HW Accelerator |--->|      M NoC     |<---------------+
> + +----------------+    +----------------+                |
> +                         |      |                    +------------+
> +  +-----+  +-------------+      V       +------+     |            |
> +  | DDR |  |                +--------+  | PCIe |     |            |
> +  +-----+  |                | Slaves |  +------+     |            |
> +    ^ ^    |                +--------+     |         |   C NoC    |
> +    | |    V                               V         |            |
> + +------------------+   +------------------------+   |            |   +-----+
> + |                  |-->|                        |-->|            |-->| CPU |
> + |                  |-->|                        |<--|            |   +-----+
> + |     Mem NoC      |   |         S NoC          |   +------------+
> + |                  |<--|                        |---------+    |
> + |                  |<--|                        |<------+ |    |   +--------+
> + +------------------+   +------------------------+       | |    +-->| Slaves |
> +   ^  ^    ^    ^          ^                             | |        +--------+
> +   |  |    |    |          |                             | V
> + +------+  |  +-----+   +-----+  +---------+   +----------------+   +--------+
> + | CPUs |  |  | GPU |   | DSP |  | Masters |-->|       P NoC    |-->| Slaves |
> + +------+  |  +-----+   +-----+  +---------+   +----------------+   +--------+
> +           |
> +       +-------+
> +       | Modem |
> +       +-------+
> +
> +Terminology
> +-----------
> +
> +Interconnect provider is the software definition of the interconnect hardware.
> +The interconnect providers on the above diagram are M NoC, S NoC, C NoC, P NoC
> +and Mem NoC.
> +
> +Interconnect node is the software definition of the interconnect hardware
> +port. Each interconnect provider consists of multiple interconnect nodes,
> +which are connected to other SoC components including other interconnect
> +providers. The point on the diagram where the CPUs connect to the memory is
> +called an interconnect node, which belongs to the Mem NoC interconnect provider.
> +
> +Interconnect endpoints are the first or the last element of the path. Every
> +endpoint is a node, but not every node is an endpoint.
> +
> +Interconnect path is everything between two endpoints including all the nodes
> +that have to be traversed to reach from a source to destination node. It may
> +include multiple master-slave pairs across several interconnect providers.
> +
> +Interconnect consumers are the entities which make use of the data paths exposed
> +by the providers. The consumers send requests to providers requesting various
> +throughput, latency and priority. Usually the consumers are device drivers, that
> +send request based on their needs. An example for a consumer is a video decoder
> +that supports various formats and image sizes.
> +
> +Interconnect providers
> +----------------------
> +
> +Interconnect provider is an entity that implements methods to initialize and
> +configure a interconnect bus hardware. The interconnect provider drivers should

   configure interconnect bus hardware.
(i.e., drop the "a")

> +be registered with the interconnect provider core.
> +
> +The interconnect framework provider API functions are documented in
> +.. kernel-doc:: include/linux/interconnect-provider.h

What do you want that to do?  and does that happen?

The .. kernel-doc:: line won't be printed.  It will just be expanded to the
contents of that header file, so the preceding sentence fragment will look/sound
odd.

> +
> +Interconnect consumers
> +----------------------
> +
> +Interconnect consumers are the clients which use the interconnect APIs to
> +get paths between endpoints and set their bandwidth/latency/QoS requirements
> +for these interconnect paths.
> +
> +The interconnect framework consumer API functions are documented in
> +.. kernel-doc:: include/linux/interconnect.h

same as above.


-- 
~Randy

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v7 1/8] interconnect: Add generic on-chip interconnect API
  2018-08-02  0:05   ` Randy Dunlap
@ 2018-08-02 11:58     ` Georgi Djakov
  0 siblings, 0 replies; 30+ messages in thread
From: Georgi Djakov @ 2018-08-02 11:58 UTC (permalink / raw)
  To: Randy Dunlap
  Cc: linux-pm, gregkh, rjw, robh+dt, mturquette, khilman,
	vincent.guittot, skannan, bjorn.andersson, amit.kucheria, seansw,
	daidavid1, evgreen, mark.rutland, lorenzo.pieralisi, abailon,
	arnd, linux-kernel, linux-arm-kernel, linux-arm-msm

Hi Randy,

On 08/02/2018 03:05 AM, Randy Dunlap wrote:
> On 07/31/2018 09:13 AM, Georgi Djakov wrote:
>> This patch introduces a new API to get requirements and configure the
>> interconnect buses across the entire chipset to fit with the current
>> demand.
>>
>> The API is using a consumer/provider-based model, where the providers are
>> the interconnect buses and the consumers could be various drivers.
>> The consumers request interconnect resources (path) between endpoints and
>> set the desired constraints on this data flow path. The providers receive
>> requests from consumers and aggregate these requests for all master-slave
>> pairs on that path. Then the providers configure each participating in the
>> topology node according to the requested data flow path, physical links and
>> constraints. The topology could be complicated and multi-tiered and is SoC
>> specific.
>>
>> Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
>> ---
>>  Documentation/interconnect/interconnect.rst |  96 ++++
>>  drivers/Kconfig                             |   2 +
>>  drivers/Makefile                            |   1 +
>>  drivers/interconnect/Kconfig                |  10 +
>>  drivers/interconnect/Makefile               |   2 +
>>  drivers/interconnect/core.c                 | 569 ++++++++++++++++++++
>>  include/linux/interconnect-provider.h       | 125 +++++
>>  include/linux/interconnect.h                |  42 ++
>>  8 files changed, 847 insertions(+)
>>  create mode 100644 Documentation/interconnect/interconnect.rst
>>  create mode 100644 drivers/interconnect/Kconfig
>>  create mode 100644 drivers/interconnect/Makefile
>>  create mode 100644 drivers/interconnect/core.c
>>  create mode 100644 include/linux/interconnect-provider.h
>>  create mode 100644 include/linux/interconnect.h
>>
>> diff --git a/Documentation/interconnect/interconnect.rst b/Documentation/interconnect/interconnect.rst
>> new file mode 100644
>> index 000000000000..e628881ee218
>> --- /dev/null
>> +++ b/Documentation/interconnect/interconnect.rst
>> @@ -0,0 +1,96 @@
>> +.. SPDX-License-Identifier: GPL-2.0
>> +
>> +=====================================
>> +GENERIC SYSTEM INTERCONNECT SUBSYSTEM
>> +=====================================
>> +
>> +Introduction
>> +------------
>> +
>> +This framework is designed to provide a standard kernel interface to control
>> +the settings of the interconnects on a SoC. These settings can be throughput,
> 
> I would say:                         on an SoC.
> 
> Do you pronounce that as "sock" or the letters S.O.C.?

Ah right indeed, this should be based on the phonetics and not the writing.

>> +latency and priority between multiple interconnected devices or functional
>> +blocks. This can be controlled dynamically in order to save power or provide
>> +maximum performance.
>> +
>> +The interconnect bus is a hardware with configurable parameters, which can be
> 
>                     bus is hardware

Ok.

>> +set on a data path according to the requests received from various drivers.
>> +An example of interconnect buses are the interconnects between various
>> +components or functional blocks in chipsets. There can be multiple interconnects
>> +on a SoC that can be multi-tiered.
> 
>       an SoC
> 
>> +
>> +Below is a simplified diagram of a real-world SoC interconnect bus topology.
>> +
>> +::
>> +
>> + +----------------+    +----------------+
>> + | HW Accelerator |--->|      M NoC     |<---------------+
>> + +----------------+    +----------------+                |
>> +                         |      |                    +------------+
>> +  +-----+  +-------------+      V       +------+     |            |
>> +  | DDR |  |                +--------+  | PCIe |     |            |
>> +  +-----+  |                | Slaves |  +------+     |            |
>> +    ^ ^    |                +--------+     |         |   C NoC    |
>> +    | |    V                               V         |            |
>> + +------------------+   +------------------------+   |            |   +-----+
>> + |                  |-->|                        |-->|            |-->| CPU |
>> + |                  |-->|                        |<--|            |   +-----+
>> + |     Mem NoC      |   |         S NoC          |   +------------+
>> + |                  |<--|                        |---------+    |
>> + |                  |<--|                        |<------+ |    |   +--------+
>> + +------------------+   +------------------------+       | |    +-->| Slaves |
>> +   ^  ^    ^    ^          ^                             | |        +--------+
>> +   |  |    |    |          |                             | V
>> + +------+  |  +-----+   +-----+  +---------+   +----------------+   +--------+
>> + | CPUs |  |  | GPU |   | DSP |  | Masters |-->|       P NoC    |-->| Slaves |
>> + +------+  |  +-----+   +-----+  +---------+   +----------------+   +--------+
>> +           |
>> +       +-------+
>> +       | Modem |
>> +       +-------+
>> +
>> +Terminology
>> +-----------
>> +
>> +Interconnect provider is the software definition of the interconnect hardware.
>> +The interconnect providers on the above diagram are M NoC, S NoC, C NoC, P NoC
>> +and Mem NoC.
>> +
>> +Interconnect node is the software definition of the interconnect hardware
>> +port. Each interconnect provider consists of multiple interconnect nodes,
>> +which are connected to other SoC components including other interconnect
>> +providers. The point on the diagram where the CPUs connect to the memory is
>> +called an interconnect node, which belongs to the Mem NoC interconnect provider.
>> +
>> +Interconnect endpoints are the first or the last element of the path. Every
>> +endpoint is a node, but not every node is an endpoint.
>> +
>> +Interconnect path is everything between two endpoints including all the nodes
>> +that have to be traversed to reach from a source to destination node. It may
>> +include multiple master-slave pairs across several interconnect providers.
>> +
>> +Interconnect consumers are the entities which make use of the data paths exposed
>> +by the providers. The consumers send requests to providers requesting various
>> +throughput, latency and priority. Usually the consumers are device drivers, that
>> +send request based on their needs. An example for a consumer is a video decoder
>> +that supports various formats and image sizes.
>> +
>> +Interconnect providers
>> +----------------------
>> +
>> +Interconnect provider is an entity that implements methods to initialize and
>> +configure a interconnect bus hardware. The interconnect provider drivers should
> 
>    configure interconnect bus hardware.
> (i.e., drop the "a")

Ok.

>> +be registered with the interconnect provider core.
>> +
>> +The interconnect framework provider API functions are documented in
>> +.. kernel-doc:: include/linux/interconnect-provider.h
> 
> What do you want that to do?  and does that happen?
> 
> The .. kernel-doc:: line won't be printed.  It will just be expanded to the
> contents of that header file, so the preceding sentence fragment will look/sound
> odd.

Thanks for catching this, I understood it incorrectly. Will improve it.

BR,
Georgi

>> +
>> +Interconnect consumers
>> +----------------------
>> +
>> +Interconnect consumers are the clients which use the interconnect APIs to
>> +get paths between endpoints and set their bandwidth/latency/QoS requirements
>> +for these interconnect paths.
>> +
>> +The interconnect framework consumer API functions are documented in
>> +.. kernel-doc:: include/linux/interconnect.h
> 
> same as above.
> 
> 

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v7 8/8] interconnect: Allow endpoints translation via DT
  2018-08-01 22:57   ` skannan
@ 2018-08-02 12:07     ` Georgi Djakov
  2018-08-02 19:12       ` skannan
  0 siblings, 1 reply; 30+ messages in thread
From: Georgi Djakov @ 2018-08-02 12:07 UTC (permalink / raw)
  To: skannan
  Cc: linux-pm, gregkh, rjw, robh+dt, mturquette, khilman,
	vincent.guittot, bjorn.andersson, amit.kucheria, seansw,
	daidavid1, evgreen, mark.rutland, lorenzo.pieralisi, abailon,
	arnd, linux-kernel, linux-arm-kernel, linux-arm-msm

Hi Saravana,

On 08/02/2018 01:57 AM, skannan@codeaurora.org wrote:
> On 2018-07-31 09:13, Georgi Djakov wrote:
>> Currently we support only platform data for specifying the interconnect
>> endpoints. As now the endpoints are hard-coded into the consumer driver
>> this may lead to complications when a single driver is used by multiple
>> SoCs, which may have different interconnect topology.
>> To avoid cluttering the consumer drivers, introduce a translation
>> function
>> to help us get the board specific interconnect data from device-tree.
>>
>> Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
>> ---
>>  drivers/interconnect/core.c  | 62 ++++++++++++++++++++++++++++++++++++
>>  include/linux/interconnect.h |  7 ++++
>>  2 files changed, 69 insertions(+)
>>
>> diff --git a/drivers/interconnect/core.c b/drivers/interconnect/core.c
>> index 9fef180cf77e..d1b6adff0a3d 100644
>> --- a/drivers/interconnect/core.c
>> +++ b/drivers/interconnect/core.c
[..]
>> --- a/include/linux/interconnect.h
>> +++ b/include/linux/interconnect.h
>> @@ -17,6 +17,7 @@ struct device;
>>
>>  struct icc_path *icc_get(struct device *dev, const int src_id,
>>               const int dst_id);
>> +struct icc_path *of_icc_get(struct device *dev, const char *name);
>>  void icc_put(struct icc_path *path);
>>  int icc_set(struct icc_path *path, u32 avg_bw, u32 peak_bw);
>>
>> @@ -28,6 +29,12 @@ static inline struct icc_path *icc_get(struct
>> device *dev, const int src_id,
>>      return NULL;
>>  }
>>
>> +static inline struct icc_path *of_icc_get(struct device *dev,
>> +                      const char *name)
>> +{
>> +    return NULL;
>> +}
>> +
> 
> Might want to return PTR(-ENODEV) or some error code so that client
> doesn't have to do NULL check AND an error check?
> 
> -Saravana

NULL is returned when CONFIG_INTERCONNECT=n. Configuration of
interconnects by consumer drivers could be optional and that's why NULL
is returned instead of an error. The consumer drivers decide how to
proceed in this case, and if there is a hard requirement for interconnect
support, then I would suggest expressing it as a dependency in Kconfig.
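
To make the distinction concrete, a consumer that treats the path as
optional could check for both cases along the lines of the sketch below
(just an illustration; the "ddr" name and the wrapper function are made up):

#include <linux/device.h>
#include <linux/err.h>
#include <linux/interconnect.h>

static int example_get_optional_path(struct device *dev,
				     struct icc_path **out)
{
	struct icc_path *path;

	path = of_icc_get(dev, "ddr");
	if (IS_ERR(path))
		return PTR_ERR(path);	/* a real lookup failure */

	/*
	 * NULL means CONFIG_INTERCONNECT=n or no "interconnects" property,
	 * so the consumer simply continues without setting constraints.
	 */
	*out = path;
	return 0;
}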

Thanks,
Georgi

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v7 8/8] interconnect: Allow endpoints translation via DT
  2018-08-02 12:07     ` Georgi Djakov
@ 2018-08-02 19:12       ` skannan
  2018-08-09 14:17         ` Georgi Djakov
  0 siblings, 1 reply; 30+ messages in thread
From: skannan @ 2018-08-02 19:12 UTC (permalink / raw)
  To: Georgi Djakov
  Cc: linux-pm, gregkh, rjw, robh+dt, mturquette, khilman,
	vincent.guittot, bjorn.andersson, amit.kucheria, seansw,
	daidavid1, evgreen, mark.rutland, lorenzo.pieralisi, abailon,
	arnd, linux-kernel, linux-arm-kernel, linux-arm-msm

On 2018-08-02 05:07, Georgi Djakov wrote:
> Hi Saravana,
> 
> On 08/02/2018 01:57 AM, skannan@codeaurora.org wrote:
>> On 2018-07-31 09:13, Georgi Djakov wrote:
>>> Currently we support only platform data for specifying the 
>>> interconnect
>>> endpoints. As now the endpoints are hard-coded into the consumer 
>>> driver
>>> this may lead to complications when a single driver is used by 
>>> multiple
>>> SoCs, which may have different interconnect topology.
>>> To avoid cluttering the consumer drivers, introduce a translation
>>> function
>>> to help us get the board specific interconnect data from device-tree.
>>> 
>>> Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
>>> ---
>>>  drivers/interconnect/core.c  | 62 
>>> ++++++++++++++++++++++++++++++++++++
>>>  include/linux/interconnect.h |  7 ++++
>>>  2 files changed, 69 insertions(+)
>>> 
>>> diff --git a/drivers/interconnect/core.c 
>>> b/drivers/interconnect/core.c
>>> index 9fef180cf77e..d1b6adff0a3d 100644
>>> --- a/drivers/interconnect/core.c
>>> +++ b/drivers/interconnect/core.c
> [..]
>>> --- a/include/linux/interconnect.h
>>> +++ b/include/linux/interconnect.h
>>> @@ -17,6 +17,7 @@ struct device;
>>> 
>>>  struct icc_path *icc_get(struct device *dev, const int src_id,
>>>               const int dst_id);
>>> +struct icc_path *of_icc_get(struct device *dev, const char *name);
>>>  void icc_put(struct icc_path *path);
>>>  int icc_set(struct icc_path *path, u32 avg_bw, u32 peak_bw);
>>> 
>>> @@ -28,6 +29,12 @@ static inline struct icc_path *icc_get(struct
>>> device *dev, const int src_id,
>>>      return NULL;
>>>  }
>>> 
>>> +static inline struct icc_path *of_icc_get(struct device *dev,
>>> +                      const char *name)
>>> +{
>>> +    return NULL;
>>> +}
>>> +
>> 
>> Might want to return PTR(-ENODEV) or some error code so that client
>> doesn't have to do NULL check AND an error check?
>> 
>> -Saravana
> 
> NULL is returned when CONFIG_INTERCONNECT=n. Configuration of
> interconnects by consumer drivers could be optional and that's why null
> is returned instead of an error. The consumer drivers decide how to
> proceed in this case and if there is a hard requirement for 
> interconnect
> support, then i would suggest to express it as a dependency in Kconfig.

Ehh... you could make the same argument with an error. If it's not
mandatory for functioning, they can ignore a specific error and
continue? At a minimum, these stub functions returning NULL don't
match the documentation, which says these APIs will only ever return
ERR_PTR().

-Saravana

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v7 2/8] dt-bindings: Introduce interconnect provider bindings
  2018-07-31 16:13 ` [PATCH v7 2/8] dt-bindings: Introduce interconnect provider bindings Georgi Djakov
@ 2018-08-02 21:02   ` Rob Herring
  2018-08-07 14:54     ` Georgi Djakov
  0 siblings, 1 reply; 30+ messages in thread
From: Rob Herring @ 2018-08-02 21:02 UTC (permalink / raw)
  To: georgi.djakov, maxime.ripard
  Cc: linux-pm, Greg Kroah-Hartman, Rafael J. Wysocki, Rob Herring,
	Mike Turquette, khilman, Vincent Guittot, skannan,
	Bjorn Andersson, Amit Kucheria, seansw, daidavid1, evgreen,
	Mark Rutland, Lorenzo Pieralisi, Alexandre Bailon, Arnd Bergmann,
	Linux Kernel Mailing List, linux-arm-kernel, linux-arm-msm

On Tue, Jul 31, 2018 at 10:13 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
>
> This binding is intended to represent the interconnect hardware present
> in some of the modern SoCs. Currently it consists only of a binding for
> the interconnect hardware devices (provider).

If you want the bindings reviewed, then you need to send them to the
DT list. CC'ing me is pointless; I get CC'ed on too many things to read.

The consumer and producer binding should be a single patch. One is not
useful without the other.

There is also a patch series from Maxime Ripard that's addressing the
same general area. See "dt-bindings: Add a dma-parent property". We
don't need multiple ways to address describing the device to memory
paths, so you all had better work out a common solution.

Rob

>
> Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
> ---
>  .../bindings/interconnect/interconnect.txt    | 33 +++++++++++++++++++
>  1 file changed, 33 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/interconnect/interconnect.txt
>
> diff --git a/Documentation/devicetree/bindings/interconnect/interconnect.txt b/Documentation/devicetree/bindings/interconnect/interconnect.txt
> new file mode 100644
> index 000000000000..6e2b2971b094
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/interconnect/interconnect.txt
> @@ -0,0 +1,33 @@
> +Interconnect Provider Device Tree Bindings
> +=========================================
> +
> +The purpose of this document is to define a common set of generic interconnect
> +providers/consumers properties.
> +
> +
> += interconnect providers =
> +
> +The interconnect provider binding is intended to represent the interconnect
> +controllers in the system. Each provider registers a set of interconnect
> +nodes, which expose the interconnect related capabilities of the interconnect
> +to consumer drivers. These capabilities can be throughput, latency, priority
> +etc. The consumer drivers set constraints on interconnect path (or endpoints)
> +depending on the use case. Interconnect providers can also be interconnect
> +consumers, such as in the case where two network-on-chip fabrics interface
> +directly.
> +
> +Required properties:
> +- compatible : contains the interconnect provider compatible string
> +- #interconnect-cells : number of cells in an interconnect specifier needed to
> +                       encode the interconnect node id
> +
> +Example:
> +
> +               snoc: snoc@580000 {
> +                       compatible = "qcom,msm8916-snoc";
> +                       #interconnect-cells = <1>;
> +                       reg = <0x580000 0x14000>;
> +                       clock-names = "bus_clk", "bus_a_clk";
> +                       clocks = <&rpmcc RPM_SMD_SNOC_CLK>,
> +                                <&rpmcc RPM_SMD_SNOC_A_CLK>;
> +               };

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v7 1/8] interconnect: Add generic on-chip interconnect API
  2018-07-31 16:13 ` [PATCH v7 1/8] interconnect: Add generic " Georgi Djakov
  2018-08-02  0:05   ` Randy Dunlap
@ 2018-08-03 22:59   ` Evan Green
  1 sibling, 0 replies; 30+ messages in thread
From: Evan Green @ 2018-08-03 22:59 UTC (permalink / raw)
  To: georgi.djakov
  Cc: linux-pm, gregkh, rjw, robh+dt, Michael Turquette, khilman,
	Vincent Guittot, Saravana Kannan, Bjorn Andersson, amit.kucheria,
	seansw, daidavid1, mark.rutland, lorenzo.pieralisi,
	Alexandre Bailon, arnd, linux-kernel, linux-arm-kernel,
	linux-arm-msm

On Tue, Jul 31, 2018 at 9:13 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
>
> This patch introduces a new API to get requirements and configure the
> interconnect buses across the entire chipset to fit with the current
> demand.
>
> The API is using a consumer/provider-based model, where the providers are
> the interconnect buses and the consumers could be various drivers.
> The consumers request interconnect resources (path) between endpoints and
> set the desired constraints on this data flow path. The providers receive
> requests from consumers and aggregate these requests for all master-slave
> pairs on that path. Then the providers configure each participating in the
> topology node according to the requested data flow path, physical links and
> constraints. The topology could be complicated and multi-tiered and is SoC
> specific.
>
> Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
> ---
>  Documentation/interconnect/interconnect.rst |  96 ++++
>  drivers/Kconfig                             |   2 +
>  drivers/Makefile                            |   1 +
>  drivers/interconnect/Kconfig                |  10 +
>  drivers/interconnect/Makefile               |   2 +
>  drivers/interconnect/core.c                 | 569 ++++++++++++++++++++
>  include/linux/interconnect-provider.h       | 125 +++++
>  include/linux/interconnect.h                |  42 ++
>  8 files changed, 847 insertions(+)
>  create mode 100644 Documentation/interconnect/interconnect.rst
>  create mode 100644 drivers/interconnect/Kconfig
>  create mode 100644 drivers/interconnect/Makefile
>  create mode 100644 drivers/interconnect/core.c
>  create mode 100644 include/linux/interconnect-provider.h
>  create mode 100644 include/linux/interconnect.h
>

Thanks Georgi. This looks great!

Reviewed-by: Evan Green <evgreen@chromium.org>

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v7 3/8] interconnect: Add debugfs support
  2018-07-31 16:13 ` [PATCH v7 3/8] interconnect: Add debugfs support Georgi Djakov
@ 2018-08-03 22:59   ` Evan Green
  0 siblings, 0 replies; 30+ messages in thread
From: Evan Green @ 2018-08-03 22:59 UTC (permalink / raw)
  To: georgi.djakov
  Cc: linux-pm, gregkh, rjw, robh+dt, Michael Turquette, khilman,
	Vincent Guittot, Saravana Kannan, Bjorn Andersson, amit.kucheria,
	seansw, daidavid1, mark.rutland, lorenzo.pieralisi,
	Alexandre Bailon, arnd, linux-kernel, linux-arm-kernel,
	linux-arm-msm

On Tue, Jul 31, 2018 at 9:13 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
>
> Add a functionality to provide information about the current constraints
> per each node and provider.
>
> Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
> ---
>  drivers/interconnect/core.c | 78 +++++++++++++++++++++++++++++++++++++
>  1 file changed, 78 insertions(+)
>

Reviewed-by: Evan Green <evgreen@chromium.org>

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v7 4/8] interconnect: qcom: Add RPM communication
  2018-07-31 16:13 ` [PATCH v7 4/8] interconnect: qcom: Add RPM communication Georgi Djakov
@ 2018-08-03 22:59   ` Evan Green
  0 siblings, 0 replies; 30+ messages in thread
From: Evan Green @ 2018-08-03 22:59 UTC (permalink / raw)
  To: georgi.djakov
  Cc: linux-pm, gregkh, rjw, robh+dt, Michael Turquette, khilman,
	Vincent Guittot, Saravana Kannan, Bjorn Andersson, amit.kucheria,
	seansw, daidavid1, mark.rutland, lorenzo.pieralisi,
	Alexandre Bailon, arnd, linux-kernel, linux-arm-kernel,
	linux-arm-msm

On Tue, Jul 31, 2018 at 9:13 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
>
> On some Qualcomm SoCs, there is a remote processor, which controls some of
> the Network-On-Chip interconnect resources. Other CPUs express their needs
> by communicating with this processor. Add a driver to handle communication
> with this remote processor.
>
> Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
> ---
>  .../bindings/interconnect/qcom-smd.txt        | 32 +++++++
>  drivers/interconnect/qcom/Kconfig             | 14 +++
>  drivers/interconnect/qcom/Makefile            |  2 +
>  drivers/interconnect/qcom/smd-rpm.c           | 91 +++++++++++++++++++
>  drivers/interconnect/qcom/smd-rpm.h           | 16 ++++
>  5 files changed, 155 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/interconnect/qcom-smd.txt
>  create mode 100644 drivers/interconnect/qcom/Kconfig
>  create mode 100644 drivers/interconnect/qcom/Makefile
>  create mode 100644 drivers/interconnect/qcom/smd-rpm.c
>  create mode 100644 drivers/interconnect/qcom/smd-rpm.h
>
> diff --git a/Documentation/devicetree/bindings/interconnect/qcom-smd.txt b/Documentation/devicetree/bindings/interconnect/qcom-smd.txt
> new file mode 100644
> index 000000000000..88a5aeb50935
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/interconnect/qcom-smd.txt
> @@ -0,0 +1,32 @@
> +Qualcomm SMD-RPM interconnect driver binding
> +------------------------------------------------
> +The RPM (Resource Power Manager) is a dedicated hardware engine
> +for managing the shared SoC resources in order to keep the lowest
> +power profile. It communicates with other hardware subsystems via
> +the shared memory driver (SMD) back-end and accepts requests for
> +various resources.
> +
> +Required properties :
> +- compatible : shall contain only one of the following:
> +                       "qcom,interconnect-smd-rpm"
> +
> +Example:
> +       smd {
> +               compatible = "qcom,smd";
> +
> +               rpm {
> +                       interrupts = <0 168 1>;

It would be nice if this interrupt example were more proper, something
like <GIC_SPI 168 IRQ_TYPE_EDGE_RISING> (see the sketch below). Other than that:

Reviewed-by: Evan Green <evgreen@chromium.org>
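
For reference, the example node could then look roughly like this
(illustrative only - same interrupt number, symbolic flags coming from the
dt-bindings headers):

	#include <dt-bindings/interrupt-controller/arm-gic.h>

	smd {
		compatible = "qcom,smd";

		rpm {
			interrupts = <GIC_SPI 168 IRQ_TYPE_EDGE_RISING>;
			...
		};
	};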

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v7 6/8] interconnect: qcom: Add msm8916 interconnect provider driver
  2018-07-31 16:13 ` [PATCH v7 6/8] interconnect: qcom: Add msm8916 interconnect provider driver Georgi Djakov
@ 2018-08-03 22:59   ` Evan Green
  0 siblings, 0 replies; 30+ messages in thread
From: Evan Green @ 2018-08-03 22:59 UTC (permalink / raw)
  To: georgi.djakov
  Cc: linux-pm, gregkh, rjw, robh+dt, Michael Turquette, khilman,
	Vincent Guittot, Saravana Kannan, Bjorn Andersson, amit.kucheria,
	seansw, daidavid1, mark.rutland, lorenzo.pieralisi,
	Alexandre Bailon, arnd, linux-kernel, linux-arm-kernel,
	linux-arm-msm

On Tue, Jul 31, 2018 at 9:13 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
>
> Add driver for the Qualcomm interconnect buses found in msm8916 based
> platforms.
>
> Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
> ---
>  drivers/interconnect/Kconfig        |   5 +
>  drivers/interconnect/Makefile       |   1 +
>  drivers/interconnect/qcom/Kconfig   |   9 +
>  drivers/interconnect/qcom/Makefile  |   2 +
>  drivers/interconnect/qcom/msm8916.c | 510 ++++++++++++++++++++++++++++
>  5 files changed, 527 insertions(+)
>  create mode 100644 drivers/interconnect/qcom/msm8916.c
>

Reviewed-by: Evan Green <evgreen@chromium.org>

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v7 8/8] interconnect: Allow endpoints translation via DT
  2018-07-31 16:13 ` [PATCH v7 8/8] interconnect: Allow endpoints translation via DT Georgi Djakov
  2018-08-01 22:57   ` skannan
@ 2018-08-03 22:59   ` Evan Green
  1 sibling, 0 replies; 30+ messages in thread
From: Evan Green @ 2018-08-03 22:59 UTC (permalink / raw)
  To: georgi.djakov
  Cc: linux-pm, gregkh, rjw, robh+dt, Michael Turquette, khilman,
	Vincent Guittot, Saravana Kannan, Bjorn Andersson, amit.kucheria,
	seansw, daidavid1, mark.rutland, lorenzo.pieralisi,
	Alexandre Bailon, arnd, linux-kernel, linux-arm-kernel,
	linux-arm-msm

On Tue, Jul 31, 2018 at 9:13 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
>
> Currently we support only platform data for specifying the interconnect
> endpoints. As now the endpoints are hard-coded into the consumer driver
> this may lead to complications when a single driver is used by multiple
> SoCs, which may have different interconnect topology.
> To avoid cluttering the consumer drivers, introduce a translation function
> to help us get the board specific interconnect data from device-tree.
>
> Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
> ---
>  drivers/interconnect/core.c  | 62 ++++++++++++++++++++++++++++++++++++
>  include/linux/interconnect.h |  7 ++++
>  2 files changed, 69 insertions(+)
>

FWIW I don't have any strong opinions on whether of_icc_get returns an
error or NULL, other than to suggest following suit with what other
frameworks have chosen to do. Though perhaps most other frameworks are
not quite as optional for a particular driver.

Reviewed-by: Evan Green <evgreen@chromium.org>

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v7 2/8] dt-bindings: Introduce interconnect provider bindings
  2018-08-02 21:02   ` Rob Herring
@ 2018-08-07 14:54     ` Georgi Djakov
  2018-08-20 15:32       ` [PATCH " Maxime Ripard
  0 siblings, 1 reply; 30+ messages in thread
From: Georgi Djakov @ 2018-08-07 14:54 UTC (permalink / raw)
  To: Rob Herring, maxime.ripard
  Cc: linux-pm, Greg Kroah-Hartman, Rafael J. Wysocki, Rob Herring,
	Mike Turquette, khilman, Vincent Guittot, skannan,
	Bjorn Andersson, Amit Kucheria, seansw, daidavid1, evgreen,
	Mark Rutland, Lorenzo Pieralisi, Alexandre Bailon, Arnd Bergmann,
	Linux Kernel Mailing List, linux-arm-kernel, linux-arm-msm,
	devicetree

Hi Rob,

On 08/03/2018 12:02 AM, Rob Herring wrote:
> On Tue, Jul 31, 2018 at 10:13 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
>>
>> This binding is intended to represent the interconnect hardware present
>> in some of the modern SoCs. Currently it consists only of a binding for
>> the interconnect hardware devices (provider).
> 
> If you want the bindings reviewed, then you need to send them to the
> DT list. CC'ing me is pointless, I get CC'ed too many things to read.

Oops, ok!

> The consumer and producer binding should be a single patch. One is not
> useful without the other.

The reason for splitting them is that they can be reviewed separately.
Also, we can rely on platform data instead of using DT and the consumer
binding. However, I will do as you suggest.

> There is also a patch series from Maxime Ripard that's addressing the
> same general area. See "dt-bindings: Add a dma-parent property". We
> don't need multiple ways to address describing the device to memory
> paths, so you all had better work out a common solution.

Looks like this fits exactly into the interconnect API concept. I see
MBUS as an interconnect provider and display/camera as consumers that
report their bandwidth needs. I am also planning to add support for
priority.

Thanks,
Georgi

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v7 8/8] interconnect: Allow endpoints translation via DT
  2018-08-02 19:12       ` skannan
@ 2018-08-09 14:17         ` Georgi Djakov
  0 siblings, 0 replies; 30+ messages in thread
From: Georgi Djakov @ 2018-08-09 14:17 UTC (permalink / raw)
  To: skannan
  Cc: linux-pm, gregkh, rjw, robh+dt, mturquette, khilman,
	vincent.guittot, bjorn.andersson, amit.kucheria, seansw,
	daidavid1, evgreen, mark.rutland, lorenzo.pieralisi, abailon,
	arnd, linux-kernel, linux-arm-kernel, linux-arm-msm

Hi Saravana,

On 08/02/2018 10:12 PM, skannan@codeaurora.org wrote:
> On 2018-08-02 05:07, Georgi Djakov wrote:
>> Hi Saravana,
>>
>> On 08/02/2018 01:57 AM, skannan@codeaurora.org wrote:
>>> On 2018-07-31 09:13, Georgi Djakov wrote:
>>>> Currently we support only platform data for specifying the interconnect
>>>> endpoints. As now the endpoints are hard-coded into the consumer driver
>>>> this may lead to complications when a single driver is used by multiple
>>>> SoCs, which may have different interconnect topology.
>>>> To avoid cluttering the consumer drivers, introduce a translation
>>>> function
>>>> to help us get the board specific interconnect data from device-tree.
>>>>
>>>> Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
>>>> ---
>>>>  drivers/interconnect/core.c  | 62 ++++++++++++++++++++++++++++++++++++
>>>>  include/linux/interconnect.h |  7 ++++
>>>>  2 files changed, 69 insertions(+)
>>>>
>>>> diff --git a/drivers/interconnect/core.c b/drivers/interconnect/core.c
>>>> index 9fef180cf77e..d1b6adff0a3d 100644
>>>> --- a/drivers/interconnect/core.c
>>>> +++ b/drivers/interconnect/core.c
>> [..]
>>>> --- a/include/linux/interconnect.h
>>>> +++ b/include/linux/interconnect.h
>>>> @@ -17,6 +17,7 @@ struct device;
>>>>
>>>>  struct icc_path *icc_get(struct device *dev, const int src_id,
>>>>               const int dst_id);
>>>> +struct icc_path *of_icc_get(struct device *dev, const char *name);
>>>>  void icc_put(struct icc_path *path);
>>>>  int icc_set(struct icc_path *path, u32 avg_bw, u32 peak_bw);
>>>>
>>>> @@ -28,6 +29,12 @@ static inline struct icc_path *icc_get(struct
>>>> device *dev, const int src_id,
>>>>      return NULL;
>>>>  }
>>>>
>>>> +static inline struct icc_path *of_icc_get(struct device *dev,
>>>> +                      const char *name)
>>>> +{
>>>> +    return NULL;
>>>> +}
>>>> +
>>>
>>> Might want to return PTR(-ENODEV) or some error code so that client
>>> doesn't have to do NULL check AND an error check?
>>>
>>> -Saravana
>>
>> NULL is returned when CONFIG_INTERCONNECT=n. Configuration of
>> interconnects by consumer drivers could be optional and that's why null
>> is returned instead of an error. The consumer drivers decide how to
>> proceed in this case and if there is a hard requirement for interconnect
>> support, then i would suggest to express it as a dependency in Kconfig.
> 
> Ehh... you could make the same argument with an error. If it's not
> mandatory for functioning, they can ignore a specific error and
> continue? At a minimum, these stub functions returning NULL doesn't
> match with the documentation that says these APIs will only ever return
> ERR_PTR().
> 
> -Saravana

Here I am trying to align with other frameworks, which provide some
resources to consumers. The regulator/clock/reset frameworks return
NULL, so I prefer to do the same here. Drivers are free to handle NULL
specifically, but they don't have to. This NULL may also mean that the
interconnects have already been configured by the bootloader or firmware.
Of course, I accept the comment and will update the documentation to
mention this.

Thanks,
Georgi

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v7 2/8] dt-bindings: Introduce interconnect provider bindings
  2018-08-07 14:54     ` Georgi Djakov
@ 2018-08-20 15:32       ` Maxime Ripard
  2018-08-24 14:51         ` Georgi Djakov
  0 siblings, 1 reply; 30+ messages in thread
From: Maxime Ripard @ 2018-08-20 15:32 UTC (permalink / raw)
  To: Georgi Djakov
  Cc: Rob Herring, linux-pm, Greg Kroah-Hartman, Rafael J. Wysocki,
	Rob Herring, Mike Turquette, khilman, Vincent Guittot, skannan,
	Bjorn Andersson, Amit Kucheria, seansw, daidavid1, evgreen,
	Mark Rutland, Lorenzo Pieralisi, Alexandre Bailon, Arnd Bergmann,
	Linux Kernel Mailing List, linux-arm-kernel, linux-arm-msm,
	devicetree

Hi Georgi,

On Tue, Aug 07, 2018 at 05:54:38PM +0300, Georgi Djakov wrote:
> > There is also a patch series from Maxime Ripard that's addressing the
> > same general area. See "dt-bindings: Add a dma-parent property". We
> > don't need multiple ways to address describing the device to memory
> > paths, so you all had better work out a common solution.
> 
> Looks like this fits exactly into the interconnect API concept. I see
> MBUS as interconnect provider and display/camera as consumers, that
> report their bandwidth needs. I am also planning to add support for
> priority.

Thanks for working on this. After looking at your series, the one thing
I'm a bit uncertain about (and the most important one to us) is how we
would be able to tell through which interconnect the DMAs are done.

This is important to us since our topology is actually quite simple as
you've seen, but the RAM is not mapped at the same address on that bus and
on the CPU's, so we need to apply an offset to each buffer being DMA'd.

Maxime

-- 
Maxime Ripard, Bootlin (formerly Free Electrons)
Embedded Linux and Kernel engineering
https://bootlin.com

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v7 2/8] dt-bindings: Introduce interconnect provider bindings
  2018-08-20 15:32       ` [PATCH " Maxime Ripard
@ 2018-08-24 14:51         ` Georgi Djakov
  2018-08-24 15:35           ` Rob Herring
  2018-08-27 15:08           ` Maxime Ripard
  0 siblings, 2 replies; 30+ messages in thread
From: Georgi Djakov @ 2018-08-24 14:51 UTC (permalink / raw)
  To: Maxime Ripard
  Cc: Rob Herring, linux-pm, Greg Kroah-Hartman, Rafael J. Wysocki,
	Rob Herring, Mike Turquette, khilman, Vincent Guittot, skannan,
	Bjorn Andersson, Amit Kucheria, seansw, daidavid1, evgreen,
	Mark Rutland, Lorenzo Pieralisi, Alexandre Bailon, Arnd Bergmann,
	Linux Kernel Mailing List, linux-arm-kernel, linux-arm-msm,
	devicetree

Hi Maxime,

On 08/20/2018 06:32 PM, Maxime Ripard wrote:
> Hi Georgi,
> 
> On Tue, Aug 07, 2018 at 05:54:38PM +0300, Georgi Djakov wrote:
>>> There is also a patch series from Maxime Ripard that's addressing the
>>> same general area. See "dt-bindings: Add a dma-parent property". We
>>> don't need multiple ways to address describing the device to memory
>>> paths, so you all had better work out a common solution.
>>
>> Looks like this fits exactly into the interconnect API concept. I see
>> MBUS as interconnect provider and display/camera as consumers, that
>> report their bandwidth needs. I am also planning to add support for
>> priority.
> 
> Thanks for working on this. After looking at your serie, the one thing
> I'm a bit uncertain about (and the most important one to us) is how we
> would be able to tell through which interconnect the DMA are done.
> 
> This is important to us since our topology is actually quite simple as
> you've seen, but the RAM is not mapped on that bus and on the CPU's,
> so we need to apply an offset to each buffer being DMA'd.

Ok, I see - your problem is not about bandwidth scaling but about the
driver using different memory ranges to access the same location. So
this is not really the same, and your problem is different. Also, the
interconnect bindings are describing a path and endpoints. However, I am
open to any ideas.

Thanks,
Georgi

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v7 2/8] dt-bindings: Introduce interconnect provider bindings
  2018-08-24 14:51         ` Georgi Djakov
@ 2018-08-24 15:35           ` Rob Herring
  2018-08-27 15:11             ` Maxime Ripard
  2018-08-27 15:08           ` Maxime Ripard
  1 sibling, 1 reply; 30+ messages in thread
From: Rob Herring @ 2018-08-24 15:35 UTC (permalink / raw)
  To: Georgi Djakov
  Cc: Maxime Ripard, open list:THERMAL, Greg Kroah-Hartman,
	Rafael J. Wysocki, Michael Turquette, Kevin Hilman,
	Vincent Guittot, Saravana Kannan, Bjorn Andersson, Amit Kucheria,
	seansw, daidavid1, Evan Green, Mark Rutland, Lorenzo Pieralisi,
	Alexandre Bailon, Arnd Bergmann, linux-kernel,
	moderated list:ARM/FREESCALE IMX / MXC ARM ARCHITECTURE,
	linux-arm-msm, devicetree

On Fri, Aug 24, 2018 at 9:51 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
>
> Hi Maxime,
>
> On 08/20/2018 06:32 PM, Maxime Ripard wrote:
> > Hi Georgi,
> >
> > On Tue, Aug 07, 2018 at 05:54:38PM +0300, Georgi Djakov wrote:
> >>> There is also a patch series from Maxime Ripard that's addressing the
> >>> same general area. See "dt-bindings: Add a dma-parent property". We
> >>> don't need multiple ways to address describing the device to memory
> >>> paths, so you all had better work out a common solution.
> >>
> >> Looks like this fits exactly into the interconnect API concept. I see
> >> MBUS as interconnect provider and display/camera as consumers, that
> >> report their bandwidth needs. I am also planning to add support for
> >> priority.
> >
> > Thanks for working on this. After looking at your serie, the one thing
> > I'm a bit uncertain about (and the most important one to us) is how we
> > would be able to tell through which interconnect the DMA are done.
> >
> > This is important to us since our topology is actually quite simple as
> > you've seen, but the RAM is not mapped on that bus and on the CPU's,
> > so we need to apply an offset to each buffer being DMA'd.
>
> Ok, i see - your problem is not about bandwidth scaling but about using
> different memory ranges by the driver to access the same location. So
> this is not really the same and your problem is different. Also the
> interconnect bindings are describing a path and endpoints. However i am
> open to any ideas.

It may be different things you need, but both are related to the path
between a bus master and memory. We can't have each 'problem'
described in a different way. Well, we could as long as each platform
has different problems, but that's unlikely.

It could turn out that the only commonality is property naming
convention, but that's still better than 2 independent solutions.

I know you each want to just fix your issues, but the fact that DT
doesn't model the DMA side of the bus structure has been an issue at
least since the start of DT on ARM. Either we should address this in a
flexible way or we can just continue to manage without. So I'm not
inclined to take something that only addresses one SoC family.

Rob

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v7 2/8] dt-bindings: Introduce interconnect provider bindings
  2018-08-24 14:51         ` Georgi Djakov
  2018-08-24 15:35           ` Rob Herring
@ 2018-08-27 15:08           ` Maxime Ripard
  2018-08-29 12:31             ` Georgi Djakov
  1 sibling, 1 reply; 30+ messages in thread
From: Maxime Ripard @ 2018-08-27 15:08 UTC (permalink / raw)
  To: Georgi Djakov
  Cc: Rob Herring, linux-pm, Greg Kroah-Hartman, Rafael J. Wysocki,
	Rob Herring, Mike Turquette, khilman, Vincent Guittot, skannan,
	Bjorn Andersson, Amit Kucheria, seansw, daidavid1, evgreen,
	Mark Rutland, Lorenzo Pieralisi, Alexandre Bailon, Arnd Bergmann,
	Linux Kernel Mailing List, linux-arm-kernel, linux-arm-msm,
	devicetree

Hi!

On Fri, Aug 24, 2018 at 05:51:37PM +0300, Georgi Djakov wrote:
> Hi Maxime,
> 
> On 08/20/2018 06:32 PM, Maxime Ripard wrote:
> > Hi Georgi,
> > 
> > On Tue, Aug 07, 2018 at 05:54:38PM +0300, Georgi Djakov wrote:
> >>> There is also a patch series from Maxime Ripard that's addressing the
> >>> same general area. See "dt-bindings: Add a dma-parent property". We
> >>> don't need multiple ways to address describing the device to memory
> >>> paths, so you all had better work out a common solution.
> >>
> >> Looks like this fits exactly into the interconnect API concept. I see
> >> MBUS as interconnect provider and display/camera as consumers, that
> >> report their bandwidth needs. I am also planning to add support for
> >> priority.
> > 
> > Thanks for working on this. After looking at your serie, the one thing
> > I'm a bit uncertain about (and the most important one to us) is how we
> > would be able to tell through which interconnect the DMA are done.
> > 
> > This is important to us since our topology is actually quite simple as
> > you've seen, but the RAM is not mapped on that bus and on the CPU's,
> > so we need to apply an offset to each buffer being DMA'd.
> 
> Ok, i see - your problem is not about bandwidth scaling but about using
> different memory ranges by the driver to access the same location.

Well, it turns out that the problem we are bitten by at the moment is
the memory range one, but the controller it goes through also provides
bandwidth scaling, priorities and so on, so it's not too far off.

> So this is not really the same and your problem is different. Also
> the interconnect bindings are describing a path and
> endpoints. However i am open to any ideas.

It's describing a path and endpoints, but it can describe multiple of
them for the same device, right? If so, we'd need to provide
additional information to distinguish which path is used for DMA.

Maxime

-- 
Maxime Ripard, Bootlin
Embedded Linux and Kernel engineering
https://bootlin.com

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v7 2/8] dt-bindings: Introduce interconnect provider bindings
  2018-08-24 15:35           ` Rob Herring
@ 2018-08-27 15:11             ` Maxime Ripard
  2018-08-29 12:33               ` Georgi Djakov
  0 siblings, 1 reply; 30+ messages in thread
From: Maxime Ripard @ 2018-08-27 15:11 UTC (permalink / raw)
  To: Rob Herring
  Cc: Georgi Djakov, open list:THERMAL, Greg Kroah-Hartman,
	Rafael J. Wysocki, Michael Turquette, Kevin Hilman,
	Vincent Guittot, Saravana Kannan, Bjorn Andersson, Amit Kucheria,
	seansw, daidavid1, Evan Green, Mark Rutland, Lorenzo Pieralisi,
	Alexandre Bailon, Arnd Bergmann, linux-kernel,
	moderated list:ARM/FREESCALE IMX / MXC ARM ARCHITECTURE,
	linux-arm-msm, devicetree

On Fri, Aug 24, 2018 at 10:35:23AM -0500, Rob Herring wrote:
> On Fri, Aug 24, 2018 at 9:51 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
> >
> > Hi Maxime,
> >
> > On 08/20/2018 06:32 PM, Maxime Ripard wrote:
> > > Hi Georgi,
> > >
> > > On Tue, Aug 07, 2018 at 05:54:38PM +0300, Georgi Djakov wrote:
> > >>> There is also a patch series from Maxime Ripard that's addressing the
> > >>> same general area. See "dt-bindings: Add a dma-parent property". We
> > >>> don't need multiple ways to address describing the device to memory
> > >>> paths, so you all had better work out a common solution.
> > >>
> > >> Looks like this fits exactly into the interconnect API concept. I see
> > >> MBUS as interconnect provider and display/camera as consumers, that
> > >> report their bandwidth needs. I am also planning to add support for
> > >> priority.
> > >
> > > Thanks for working on this. After looking at your serie, the one thing
> > > I'm a bit uncertain about (and the most important one to us) is how we
> > > would be able to tell through which interconnect the DMA are done.
> > >
> > > This is important to us since our topology is actually quite simple as
> > > you've seen, but the RAM is not mapped on that bus and on the CPU's,
> > > so we need to apply an offset to each buffer being DMA'd.
> >
> > Ok, i see - your problem is not about bandwidth scaling but about using
> > different memory ranges by the driver to access the same location. So
> > this is not really the same and your problem is different. Also the
> > interconnect bindings are describing a path and endpoints. However i am
> > open to any ideas.
> 
> It may be different things you need, but both are related to the path
> between a bus master and memory. We can't have each 'problem'
> described in a different way. Well, we could as long as each platform
> has different problems, but that's unlikely.
> 
> It could turn out that the only commonality is property naming
> convention, but that's still better than 2 independent solutions.

Yeah, I really don't think the two issues are unrelated. Can we maybe
have a particular interconnect-names value to mark the interconnect
being used to perform DMA?

> I know you each want to just fix your issues, but the fact that DT
> doesn't model the DMA side of the bus structure has been an issue at
> least since the start of DT on ARM. Either we should address this in a
> flexible way or we can just continue to manage without. So I'm not
> inclined to take something that only addresses one SoC family.

I'd really like to have it addressed. We're getting bit by this, and
the hacks we have don't work well anymore.

Maxime

-- 
Maxime Ripard, Bootlin
Embedded Linux and Kernel engineering
https://bootlin.com

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v7 2/8] dt-bindings: Introduce interconnect provider bindings
  2018-08-27 15:08           ` Maxime Ripard
@ 2018-08-29 12:31             ` Georgi Djakov
  0 siblings, 0 replies; 30+ messages in thread
From: Georgi Djakov @ 2018-08-29 12:31 UTC (permalink / raw)
  To: Maxime Ripard
  Cc: Rob Herring, linux-pm, Greg Kroah-Hartman, Rafael J. Wysocki,
	Rob Herring, Mike Turquette, khilman, Vincent Guittot, skannan,
	Bjorn Andersson, Amit Kucheria, seansw, daidavid1, evgreen,
	Mark Rutland, Lorenzo Pieralisi, Alexandre Bailon, Arnd Bergmann,
	Linux Kernel Mailing List, linux-arm-kernel, linux-arm-msm,
	devicetree

Hi Maxime,

On 08/27/2018 06:08 PM, Maxime Ripard wrote:
> Hi!
> 
> On Fri, Aug 24, 2018 at 05:51:37PM +0300, Georgi Djakov wrote:
>> Hi Maxime,
>>
>> On 08/20/2018 06:32 PM, Maxime Ripard wrote:
>>> Hi Georgi,
>>>
>>> On Tue, Aug 07, 2018 at 05:54:38PM +0300, Georgi Djakov wrote:
>>>>> There is also a patch series from Maxime Ripard that's addressing the
>>>>> same general area. See "dt-bindings: Add a dma-parent property". We
>>>>> don't need multiple ways to address describing the device to memory
>>>>> paths, so you all had better work out a common solution.
>>>>
>>>> Looks like this fits exactly into the interconnect API concept. I see
>>>> MBUS as interconnect provider and display/camera as consumers, that
>>>> report their bandwidth needs. I am also planning to add support for
>>>> priority.
>>>
>>> Thanks for working on this. After looking at your serie, the one thing
>>> I'm a bit uncertain about (and the most important one to us) is how we
>>> would be able to tell through which interconnect the DMA are done.
>>>
>>> This is important to us since our topology is actually quite simple as
>>> you've seen, but the RAM is not mapped on that bus and on the CPU's,
>>> so we need to apply an offset to each buffer being DMA'd.
>>
>> Ok, i see - your problem is not about bandwidth scaling but about using
>> different memory ranges by the driver to access the same location.
> 
> Well, it turns out that the problem we are bitten by at the moment is
> the memory range one, but the controller it goes through also provides
> bandwidth scaling, priorities and so on, so it's not too far off.

Thanks for the clarification. Alright, so this will fit nicely into the
model as a provider. I agree that we should try to use the same binding
to describe a path from a master to memory in DT.

>> So this is not really the same and your problem is different. Also
>> the interconnect bindings are describing a path and
>> endpoints. However i am open to any ideas.
> 
> It's describing a path and endpoints, but it can describe multiple of
> them for the same device, right? If so, we'd need to provide
> additional information to distinguish which path is used for DMA.

Sure, multiple paths are supported.
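
A consumer would just request each path it needs by name, roughly like
this (illustrative only - the names simply have to match what is listed
in DT, and error handling is omitted):

	/* Illustrative only, not from the patch set. */
	struct icc_path *dma_path, *cfg_path;

	dma_path = of_icc_get(dev, "dma");
	cfg_path = of_icc_get(dev, "cfg");

	icc_set(dma_path, 1000, 2000);	/* constraints on the DMA path */
	icc_set(cfg_path, 0, 100);	/* constraints on the config path */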

BR,
Georgi

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v7 2/8] dt-bindings: Introduce interconnect provider bindings
  2018-08-27 15:11             ` Maxime Ripard
@ 2018-08-29 12:33               ` Georgi Djakov
  2018-08-30  7:47                 ` Maxime Ripard
  0 siblings, 1 reply; 30+ messages in thread
From: Georgi Djakov @ 2018-08-29 12:33 UTC (permalink / raw)
  To: Maxime Ripard, Rob Herring
  Cc: open list:THERMAL, Greg Kroah-Hartman, Rafael J. Wysocki,
	Michael Turquette, Kevin Hilman, Vincent Guittot,
	Saravana Kannan, Bjorn Andersson, Amit Kucheria, seansw,
	daidavid1, Evan Green, Mark Rutland, Lorenzo Pieralisi,
	Alexandre Bailon, Arnd Bergmann, linux-kernel,
	moderated list:ARM/FREESCALE IMX / MXC ARM ARCHITECTURE,
	linux-arm-msm, devicetree

Hi Rob and Maxime,

On 08/27/2018 06:11 PM, Maxime Ripard wrote:
> On Fri, Aug 24, 2018 at 10:35:23AM -0500, Rob Herring wrote:
>> On Fri, Aug 24, 2018 at 9:51 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
>>>
>>> Hi Maxime,
>>>
>>> On 08/20/2018 06:32 PM, Maxime Ripard wrote:
>>>> Hi Georgi,
>>>>
>>>> On Tue, Aug 07, 2018 at 05:54:38PM +0300, Georgi Djakov wrote:
>>>>>> There is also a patch series from Maxime Ripard that's addressing the
>>>>>> same general area. See "dt-bindings: Add a dma-parent property". We
>>>>>> don't need multiple ways to address describing the device to memory
>>>>>> paths, so you all had better work out a common solution.
>>>>>
>>>>> Looks like this fits exactly into the interconnect API concept. I see
>>>>> MBUS as interconnect provider and display/camera as consumers, that
>>>>> report their bandwidth needs. I am also planning to add support for
>>>>> priority.
>>>>
>>>> Thanks for working on this. After looking at your serie, the one thing
>>>> I'm a bit uncertain about (and the most important one to us) is how we
>>>> would be able to tell through which interconnect the DMA are done.
>>>>
>>>> This is important to us since our topology is actually quite simple as
>>>> you've seen, but the RAM is not mapped on that bus and on the CPU's,
>>>> so we need to apply an offset to each buffer being DMA'd.
>>>
>>> Ok, i see - your problem is not about bandwidth scaling but about using
>>> different memory ranges by the driver to access the same location. So
>>> this is not really the same and your problem is different. Also the
>>> interconnect bindings are describing a path and endpoints. However i am
>>> open to any ideas.
>>
>> It may be different things you need, but both are related to the path
>> between a bus master and memory. We can't have each 'problem'
>> described in a different way. Well, we could as long as each platform
>> has different problems, but that's unlikely.
>>
>> It could turn out that the only commonality is property naming
>> convention, but that's still better than 2 independent solutions.
> 
> Yeah, I really don't think the two issues are unrelated. Can we maybe
> have a particular interconnect-names value to mark the interconnect
> being used to perform DMA?

We can call one of the paths "dma" and use it to perform DMA for the
current device. I don't see a problem with this. The name of the path is
descriptive and makes sense. And by doing so, we avoid adding more DT
properties, which would be another option.

This also makes me think that it might be a good idea to have a standard
name for the path to memory, as I expect some people will call it "mem",
others "ddr", etc.
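
For example, the consumer node could then look roughly like this (a sketch
only - the provider labels and port ids are made up, and the exact format
of the properties is defined by the consumer binding patch):

	video-codec {
		...
		interconnects = <&mnoc MASTER_VIDEO_P0 &bimc SLAVE_EBI_CH0>,
				<&pnoc MASTER_VIDEO_CFG &pnoc SLAVE_VENUS_CFG>;
		interconnect-names = "dma", "cfg";
	};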

Thanks,
Georgi

>> I know you each want to just fix your issues, but the fact that DT
>> doesn't model the DMA side of the bus structure has been an issue at
>> least since the start of DT on ARM. Either we should address this in a
>> flexible way or we can just continue to manage without. So I'm not
>> inclined to take something that only addresses one SoC family.
> 
> I'd really like to have it addressed. We're getting bit by this, and
> the hacks we have don't work well anymore.

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v7 2/8] dt-bindings: Introduce interconnect provider bindings
  2018-08-29 12:33               ` Georgi Djakov
@ 2018-08-30  7:47                 ` Maxime Ripard
  0 siblings, 0 replies; 30+ messages in thread
From: Maxime Ripard @ 2018-08-30  7:47 UTC (permalink / raw)
  To: Georgi Djakov
  Cc: Rob Herring, open list:THERMAL, Greg Kroah-Hartman,
	Rafael J. Wysocki, Michael Turquette, Kevin Hilman,
	Vincent Guittot, Saravana Kannan, Bjorn Andersson, Amit Kucheria,
	seansw, daidavid1, Evan Green, Mark Rutland, Lorenzo Pieralisi,
	Alexandre Bailon, Arnd Bergmann, linux-kernel,
	moderated list:ARM/FREESCALE IMX / MXC ARM ARCHITECTURE,
	linux-arm-msm, devicetree

Hi,

On Wed, Aug 29, 2018 at 03:33:29PM +0300, Georgi Djakov wrote:
> On 08/27/2018 06:11 PM, Maxime Ripard wrote:
> > On Fri, Aug 24, 2018 at 10:35:23AM -0500, Rob Herring wrote:
> >> On Fri, Aug 24, 2018 at 9:51 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
> >>>
> >>> Hi Maxime,
> >>>
> >>> On 08/20/2018 06:32 PM, Maxime Ripard wrote:
> >>>> Hi Georgi,
> >>>>
> >>>> On Tue, Aug 07, 2018 at 05:54:38PM +0300, Georgi Djakov wrote:
> >>>>>> There is also a patch series from Maxime Ripard that's addressing the
> >>>>>> same general area. See "dt-bindings: Add a dma-parent property". We
> >>>>>> don't need multiple ways to address describing the device to memory
> >>>>>> paths, so you all had better work out a common solution.
> >>>>>
> >>>>> Looks like this fits exactly into the interconnect API concept. I see
> >>>>> MBUS as interconnect provider and display/camera as consumers, that
> >>>>> report their bandwidth needs. I am also planning to add support for
> >>>>> priority.
> >>>>
> >>>> Thanks for working on this. After looking at your serie, the one thing
> >>>> I'm a bit uncertain about (and the most important one to us) is how we
> >>>> would be able to tell through which interconnect the DMA are done.
> >>>>
> >>>> This is important to us since our topology is actually quite simple as
> >>>> you've seen, but the RAM is not mapped on that bus and on the CPU's,
> >>>> so we need to apply an offset to each buffer being DMA'd.
> >>>
> >>> Ok, i see - your problem is not about bandwidth scaling but about using
> >>> different memory ranges by the driver to access the same location. So
> >>> this is not really the same and your problem is different. Also the
> >>> interconnect bindings are describing a path and endpoints. However i am
> >>> open to any ideas.
> >>
> >> It may be different things you need, but both are related to the path
> >> between a bus master and memory. We can't have each 'problem'
> >> described in a different way. Well, we could as long as each platform
> >> has different problems, but that's unlikely.
> >>
> >> It could turn out that the only commonality is property naming
> >> convention, but that's still better than 2 independent solutions.
> > 
> > Yeah, I really don't think the two issues are unrelated. Can we maybe
> > have a particular interconnect-names value to mark the interconnect
> > being used to perform DMA?
> 
> We can call one of the paths "dma" and use it to perform DMA for the
> current device. I don't see a problem with this. The name of the path is
> descriptive and makes sense. And by doing we avoid adding more DT
> properties, which would be an other option.

That works for me. If Rob is fine with it too, I'll send an updated
version of my series based on yours.

Thanks!
Maxime



-- 
Maxime Ripard, Bootlin
Embedded Linux and Kernel engineering
https://bootlin.com

^ permalink raw reply	[flat|nested] 30+ messages in thread

end of thread, other threads:[~2018-08-30  7:47 UTC | newest]

Thread overview: 30+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-07-31 16:13 [PATCH v7 0/8] Introduce on-chip interconnect API Georgi Djakov
2018-07-31 16:13 ` [PATCH v7 1/8] interconnect: Add generic " Georgi Djakov
2018-08-02  0:05   ` Randy Dunlap
2018-08-02 11:58     ` Georgi Djakov
2018-08-03 22:59   ` Evan Green
2018-07-31 16:13 ` [PATCH v7 2/8] dt-bindings: Introduce interconnect provider bindings Georgi Djakov
2018-08-02 21:02   ` Rob Herring
2018-08-07 14:54     ` Georgi Djakov
2018-08-20 15:32       ` [PATCH " Maxime Ripard
2018-08-24 14:51         ` Georgi Djakov
2018-08-24 15:35           ` Rob Herring
2018-08-27 15:11             ` Maxime Ripard
2018-08-29 12:33               ` Georgi Djakov
2018-08-30  7:47                 ` Maxime Ripard
2018-08-27 15:08           ` Maxime Ripard
2018-08-29 12:31             ` Georgi Djakov
2018-07-31 16:13 ` [PATCH v7 3/8] interconnect: Add debugfs support Georgi Djakov
2018-08-03 22:59   ` Evan Green
2018-07-31 16:13 ` [PATCH v7 4/8] interconnect: qcom: Add RPM communication Georgi Djakov
2018-08-03 22:59   ` Evan Green
2018-07-31 16:13 ` [PATCH v7 5/8] dt-bindings: interconnect: Document qcom,msm8916 NoC bindings Georgi Djakov
2018-07-31 16:13 ` [PATCH v7 6/8] interconnect: qcom: Add msm8916 interconnect provider driver Georgi Djakov
2018-08-03 22:59   ` Evan Green
2018-07-31 16:13 ` [PATCH v7 7/8] dt-bindings: Introduce interconnect consumers bindings Georgi Djakov
2018-07-31 16:13 ` [PATCH v7 8/8] interconnect: Allow endpoints translation via DT Georgi Djakov
2018-08-01 22:57   ` skannan
2018-08-02 12:07     ` Georgi Djakov
2018-08-02 19:12       ` skannan
2018-08-09 14:17         ` Georgi Djakov
2018-08-03 22:59   ` Evan Green

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).