linux-kernel.vger.kernel.org archive mirror
* [Patch v1 00/10] Tegra234 Memory interconnect support
@ 2022-12-20 16:02 Sumit Gupta
  2022-12-20 16:02 ` [Patch v1 01/10] memory: tegra: add interconnect support for DRAM scaling in Tegra234 Sumit Gupta
                   ` (9 more replies)
  0 siblings, 10 replies; 53+ messages in thread
From: Sumit Gupta @ 2022-12-20 16:02 UTC (permalink / raw)
  To: treding, krzysztof.kozlowski, dmitry.osipenko, viresh.kumar,
	rafael, jonathanh, robh+dt, linux-kernel, linux-tegra, linux-pm,
	devicetree
  Cc: sanjayc, ksitaraman, ishah, bbasu, sumitg

This patch series adds memory interconnect support for the Tegra234 SoC.
It is used to dynamically scale the DRAM frequency as per the bandwidth
requests from different Memory Controller (MC) clients.
MC clients use the ICC framework's icc_set_bw() API to dynamically
request DRAM bandwidth (BW). Along the path, the request is routed from
the MC to the EMC driver. The EMC driver then sends the client ID, type,
and bandwidth request info to the BPMP-FW, which sets the final DRAM
frequency considering all existing requests.

MC and EMC are the ICC providers. The nodes in the path for a request are:
     Client[1-n] -> MC -> EMC -> EMEM/DRAM
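
As an illustration, below is a minimal sketch of the consumer side. It
assumes a hypothetical MC client driver whose device tree node carries
an "interconnects" property of the form <&mc CLIENT_ID &emc>; the
function and parameter names are illustrative and not part of this
series:

  #include <linux/device.h>
  #include <linux/err.h>
  #include <linux/interconnect.h>

  /* Request DRAM bandwidth (in kBps) on behalf of an MC client device. */
  static int example_request_dram_bw(struct device *dev, u32 avg_kbps,
                                     u32 peak_kbps)
  {
          struct icc_path *path;

          /* Path described by the client's "interconnects" DT property */
          path = devm_of_icc_get(dev, NULL);
          if (IS_ERR(path))
                  return PTR_ERR(path);

          /* Routed along Client -> MC -> EMC; EMC forwards it to BPMP-FW */
          return icc_set_bw(path, avg_kbps, peak_kbps);
  }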

The patch series also adds interconnect support in the CPUFREQ driver
for scaling bandwidth with CPU frequency. For that, a per-cluster OPP
table is added in the CPUFREQ driver and used to scale the DRAM
frequency by requesting the minimum bandwidth corresponding to the
given CPU frequency in that cluster's OPP table.
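
In the CPUFREQ driver, the bandwidth request is tied to the CPU
frequency through the OPP table. A minimal sketch of that mechanism
(simplified from patch 08 in this series; 'cpu_dev' is assumed to be
the policy's CPU device whose OPP table and interconnect path are
already set up):

  #include <linux/device.h>
  #include <linux/err.h>
  #include <linux/pm_opp.h>

  /*
   * Apply the OPP matching 'freq_khz'; the OPP core translates the
   * entry's opp-peak-kBps value into an icc_set_bw() request on the
   * CPU's interconnect path.
   */
  static int example_set_cpu_bw(struct device *cpu_dev, unsigned long freq_khz)
  {
          struct dev_pm_opp *opp;
          int ret;

          opp = dev_pm_opp_find_freq_exact(cpu_dev, freq_khz * 1000, true);
          if (IS_ERR(opp))
                  return PTR_ERR(opp);

          ret = dev_pm_opp_set_opp(cpu_dev, opp);
          dev_pm_opp_put(opp);

          return ret;
  }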

Sumit Gupta (10):
  memory: tegra: add interconnect support for DRAM scaling in Tegra234
  memory: tegra: adding iso mc clients for Tegra234
  memory: tegra: add pcie mc clients for Tegra234
  memory: tegra: add support for software mc clients in Tegra234
  dt-bindings: tegra: add icc ids for dummy MC clients
  arm64: tegra: Add cpu OPP tables and interconnects property
  cpufreq: Add Tegra234 to cpufreq-dt-platdev blocklist
  cpufreq: tegra194: add OPP support and set bandwidth
  memory: tegra: get number of enabled mc channels
  memory: tegra: make cluster bw request a multiple of mc_channels

 arch/arm64/boot/dts/nvidia/tegra234.dtsi | 276 +++++++++++
 drivers/cpufreq/cpufreq-dt-platdev.c     |   1 +
 drivers/cpufreq/tegra194-cpufreq.c       | 152 +++++-
 drivers/memory/tegra/mc.c                |  80 +++-
 drivers/memory/tegra/mc.h                |   1 +
 drivers/memory/tegra/tegra186-emc.c      | 166 +++++++
 drivers/memory/tegra/tegra234.c          | 565 ++++++++++++++++++++++-
 include/dt-bindings/memory/tegra234-mc.h |   5 +
 include/soc/tegra/mc.h                   |  11 +
 include/soc/tegra/tegra-icc.h            |  79 ++++
 10 files changed, 1312 insertions(+), 24 deletions(-)
 create mode 100644 include/soc/tegra/tegra-icc.h

-- 
2.17.1



* [Patch v1 01/10] memory: tegra: add interconnect support for DRAM scaling in Tegra234
  2022-12-20 16:02 [Patch v1 00/10] Tegra234 Memory interconnect support Sumit Gupta
@ 2022-12-20 16:02 ` Sumit Gupta
  2022-12-20 18:05   ` Dmitry Osipenko
                     ` (9 more replies)
  2022-12-20 16:02 ` [Patch v1 02/10] memory: tegra: adding iso mc clients for Tegra234 Sumit Gupta
                   ` (8 subsequent siblings)
  9 siblings, 10 replies; 53+ messages in thread
From: Sumit Gupta @ 2022-12-20 16:02 UTC (permalink / raw)
  To: treding, krzysztof.kozlowski, dmitry.osipenko, viresh.kumar,
	rafael, jonathanh, robh+dt, linux-kernel, linux-tegra, linux-pm,
	devicetree
  Cc: sanjayc, ksitaraman, ishah, bbasu, sumitg

Add Interconnect framework support to dynamically set the DRAM
bandwidth from different clients. Both the MC and EMC drivers are
added as ICC providers. The path for any request will be:
 MC-Client[1-n] -> MC -> EMC -> EMEM/DRAM

MC clients request bandwidth from the MC driver, which passes the
Tegra ICC node holding the current request info to the EMC driver.
The EMC driver then sends the BPMP client ID, client type and bandwidth
request info to the BPMP-FW, where the final DRAM frequency needed to
achieve the requested bandwidth is set based on the passed parameters.

Signed-off-by: Sumit Gupta <sumitg@nvidia.com>
---
 drivers/memory/tegra/mc.c           |  18 ++-
 drivers/memory/tegra/tegra186-emc.c | 166 ++++++++++++++++++++++++++++
 drivers/memory/tegra/tegra234.c     | 101 ++++++++++++++++-
 include/soc/tegra/mc.h              |   7 ++
 include/soc/tegra/tegra-icc.h       |  72 ++++++++++++
 5 files changed, 362 insertions(+), 2 deletions(-)
 create mode 100644 include/soc/tegra/tegra-icc.h

diff --git a/drivers/memory/tegra/mc.c b/drivers/memory/tegra/mc.c
index 592907546ee6..ff887fb03bce 100644
--- a/drivers/memory/tegra/mc.c
+++ b/drivers/memory/tegra/mc.c
@@ -17,6 +17,7 @@
 #include <linux/sort.h>
 
 #include <soc/tegra/fuse.h>
+#include <soc/tegra/tegra-icc.h>
 
 #include "mc.h"
 
@@ -779,6 +780,7 @@ const char *const tegra_mc_error_names[8] = {
  */
 static int tegra_mc_interconnect_setup(struct tegra_mc *mc)
 {
+	struct tegra_icc_node *tnode;
 	struct icc_node *node;
 	unsigned int i;
 	int err;
@@ -792,7 +794,11 @@ static int tegra_mc_interconnect_setup(struct tegra_mc *mc)
 	mc->provider.data = &mc->provider;
 	mc->provider.set = mc->soc->icc_ops->set;
 	mc->provider.aggregate = mc->soc->icc_ops->aggregate;
-	mc->provider.xlate_extended = mc->soc->icc_ops->xlate_extended;
+	mc->provider.get_bw = mc->soc->icc_ops->get_bw;
+	if (mc->soc->icc_ops->xlate)
+		mc->provider.xlate = mc->soc->icc_ops->xlate;
+	if (mc->soc->icc_ops->xlate_extended)
+		mc->provider.xlate_extended = mc->soc->icc_ops->xlate_extended;
 
 	err = icc_provider_add(&mc->provider);
 	if (err)
@@ -814,6 +820,10 @@ static int tegra_mc_interconnect_setup(struct tegra_mc *mc)
 		goto remove_nodes;
 
 	for (i = 0; i < mc->soc->num_clients; i++) {
+		tnode = kzalloc(sizeof(*tnode), GFP_KERNEL);
+		if (!tnode)
+			return -ENOMEM;
+
 		/* create MC client node */
 		node = icc_node_create(mc->soc->clients[i].id);
 		if (IS_ERR(node)) {
@@ -828,6 +838,12 @@ static int tegra_mc_interconnect_setup(struct tegra_mc *mc)
 		err = icc_link_create(node, TEGRA_ICC_MC);
 		if (err)
 			goto remove_nodes;
+
+		node->data = tnode;
+		tnode->node = node;
+		tnode->type = mc->soc->clients[i].type;
+		tnode->bpmp_id = mc->soc->clients[i].bpmp_id;
+		tnode->mc = mc;
 	}
 
 	return 0;
diff --git a/drivers/memory/tegra/tegra186-emc.c b/drivers/memory/tegra/tegra186-emc.c
index 26e763bde92a..3500ed6ccd8f 100644
--- a/drivers/memory/tegra/tegra186-emc.c
+++ b/drivers/memory/tegra/tegra186-emc.c
@@ -8,8 +8,11 @@
 #include <linux/module.h>
 #include <linux/mod_devicetable.h>
 #include <linux/platform_device.h>
+#include <linux/of_platform.h>
 
 #include <soc/tegra/bpmp.h>
+#include <soc/tegra/tegra-icc.h>
+#include "mc.h"
 
 struct tegra186_emc_dvfs {
 	unsigned long latency;
@@ -29,8 +32,15 @@ struct tegra186_emc {
 		unsigned long min_rate;
 		unsigned long max_rate;
 	} debugfs;
+
+	struct icc_provider provider;
 };
 
+static inline struct tegra186_emc *to_tegra186_emc(struct icc_provider *provider)
+{
+	return container_of(provider, struct tegra186_emc, provider);
+}
+
 /*
  * debugfs interface
  *
@@ -146,11 +156,150 @@ DEFINE_DEBUGFS_ATTRIBUTE(tegra186_emc_debug_max_rate_fops,
 			  tegra186_emc_debug_max_rate_get,
 			  tegra186_emc_debug_max_rate_set, "%llu\n");
 
+/*
+ * tegra_emc_icc_set_bw() - Pass MC client info and BW request to BPMP
+ * @src: ICC node for External Memory Controller (EMC)
+ * @dst: ICC node for External Memory (DRAM)
+ *
+ * Passing the MC client info and bandwidth request to BPMP-FW where
+ * LA and PTSA registers are accessed and the final EMC freq is set based
+ * on client, type, latency and bandwidth.
+ */
+static int tegra_emc_icc_set_bw(struct icc_node *src, struct icc_node *dst)
+{
+	struct tegra186_emc *emc = to_tegra186_emc(dst->provider);
+	struct tegra_mc *mc = dev_get_drvdata(emc->dev->parent);
+	struct mrq_bwmgr_int_request bwmgr_req = { 0 };
+	struct mrq_bwmgr_int_response bwmgr_resp = { 0 };
+	struct tegra_icc_node *tnode = mc->curr_tnode;
+	struct tegra_bpmp_message msg;
+	int ret = 0;
+
+	/*
+	 * The same src and dst node can be passed during boot from icc_node_add().
+	 * This can be used to pre-initialize and set bandwidth for all clients
+	 * before their drivers are loaded. We skip this case because, for us,
+	 * the pre-initialization already happened in the bootloader (MB2) and BPMP-FW.
+	 */
+	if (src->id == dst->id)
+		return 0;
+
+	if (mc->curr_tnode->type == TEGRA_ICC_NISO)
+		bwmgr_req.bwmgr_calc_set_req.niso_bw = tnode->node->avg_bw;
+	else
+		bwmgr_req.bwmgr_calc_set_req.iso_bw = tnode->node->avg_bw;
+
+	bwmgr_req.bwmgr_calc_set_req.client_id = tnode->bpmp_id;
+
+	bwmgr_req.cmd = CMD_BWMGR_INT_CALC_AND_SET;
+	bwmgr_req.bwmgr_calc_set_req.mc_floor = tnode->node->peak_bw;
+	bwmgr_req.bwmgr_calc_set_req.floor_unit = BWMGR_INT_UNIT_KBPS;
+
+	memset(&msg, 0, sizeof(msg));
+	msg.mrq = MRQ_BWMGR_INT;
+	msg.tx.data = &bwmgr_req;
+	msg.tx.size = sizeof(bwmgr_req);
+	msg.rx.data = &bwmgr_resp;
+	msg.rx.size = sizeof(bwmgr_resp);
+
+	ret = tegra_bpmp_transfer(emc->bpmp, &msg);
+	if (ret < 0) {
+		dev_err(emc->dev, "BPMP transfer failed: %d\n", ret);
+		goto error;
+	}
+	if (msg.rx.ret < 0) {
+		pr_err("failed to set bandwidth for %u: %d\n",
+		       bwmgr_req.bwmgr_calc_set_req.client_id, msg.rx.ret);
+		ret = -EINVAL;
+	}
+
+error:
+	return ret;
+}
+
+static struct icc_node *
+tegra_emc_of_icc_xlate(struct of_phandle_args *spec, void *data)
+{
+	struct icc_provider *provider = data;
+	struct icc_node *node;
+
+	/* External Memory is the only possible ICC route */
+	list_for_each_entry(node, &provider->nodes, node_list) {
+		if (node->id != TEGRA_ICC_EMEM)
+			continue;
+
+		return node;
+	}
+
+	return ERR_PTR(-EPROBE_DEFER);
+}
+
+static int tegra_emc_icc_get_init_bw(struct icc_node *node, u32 *avg, u32 *peak)
+{
+	*avg = 0;
+	*peak = 0;
+
+	return 0;
+}
+
+static int tegra_emc_interconnect_init(struct tegra186_emc *emc)
+{
+	struct tegra_mc *mc = dev_get_drvdata(emc->dev->parent);
+	const struct tegra_mc_soc *soc = mc->soc;
+	struct icc_node *node;
+	int err = 0;
+
+	emc->provider.dev = emc->dev;
+	emc->provider.set = tegra_emc_icc_set_bw;
+	emc->provider.data = &emc->provider;
+	emc->provider.aggregate = soc->icc_ops->aggregate;
+	emc->provider.xlate = tegra_emc_of_icc_xlate;
+	emc->provider.get_bw = tegra_emc_icc_get_init_bw;
+
+	err = icc_provider_add(&emc->provider);
+	if (err)
+		return err;
+
+	/* create External Memory Controller node */
+	node = icc_node_create(TEGRA_ICC_EMC);
+	if (IS_ERR(node)) {
+		err = PTR_ERR(node);
+		goto del_provider;
+	}
+
+	node->name = "External Memory Controller";
+	icc_node_add(node, &emc->provider);
+
+	/* link External Memory Controller to External Memory (DRAM) */
+	err = icc_link_create(node, TEGRA_ICC_EMEM);
+	if (err)
+		goto remove_nodes;
+
+	/* create External Memory node */
+	node = icc_node_create(TEGRA_ICC_EMEM);
+	if (IS_ERR(node)) {
+		err = PTR_ERR(node);
+		goto remove_nodes;
+	}
+
+	node->name = "External Memory (DRAM)";
+	icc_node_add(node, &emc->provider);
+
+	return 0;
+remove_nodes:
+	icc_nodes_remove(&emc->provider);
+del_provider:
+	icc_provider_del(&emc->provider);
+
+	return err;
+}
+
 static int tegra186_emc_probe(struct platform_device *pdev)
 {
 	struct mrq_emc_dvfs_latency_response response;
 	struct tegra_bpmp_message msg;
 	struct tegra186_emc *emc;
+	struct tegra_mc *mc;
 	unsigned int i;
 	int err;
 
@@ -158,6 +307,9 @@ static int tegra186_emc_probe(struct platform_device *pdev)
 	if (!emc)
 		return -ENOMEM;
 
+	platform_set_drvdata(pdev, emc);
+	emc->dev = &pdev->dev;
+
 	emc->bpmp = tegra_bpmp_get(&pdev->dev);
 	if (IS_ERR(emc->bpmp))
 		return dev_err_probe(&pdev->dev, PTR_ERR(emc->bpmp), "failed to get BPMP\n");
@@ -236,6 +388,19 @@ static int tegra186_emc_probe(struct platform_device *pdev)
 	debugfs_create_file("max_rate", S_IRUGO | S_IWUSR, emc->debugfs.root,
 			    emc, &tegra186_emc_debug_max_rate_fops);
 
+	mc = dev_get_drvdata(emc->dev->parent);
+	if (mc && mc->soc->icc_ops) {
+		if (tegra_bpmp_mrq_is_supported(emc->bpmp, MRQ_BWMGR_INT)) {
+			err = tegra_emc_interconnect_init(emc);
+			if (!err)
+				return err;
+			dev_err(&pdev->dev, "tegra_emc_interconnect_init failed:%d\n", err);
+			goto put_bpmp;
+		} else {
+			dev_info(&pdev->dev, "MRQ_BWMGR_INT not present\n");
+		}
+	}
+
 	return 0;
 
 put_bpmp:
@@ -272,6 +437,7 @@ static struct platform_driver tegra186_emc_driver = {
 		.name = "tegra186-emc",
 		.of_match_table = tegra186_emc_of_match,
 		.suppress_bind_attrs = true,
+		.sync_state = icc_sync_state,
 	},
 	.probe = tegra186_emc_probe,
 	.remove = tegra186_emc_remove,
diff --git a/drivers/memory/tegra/tegra234.c b/drivers/memory/tegra/tegra234.c
index 02dcc5748bba..65d4e32ee118 100644
--- a/drivers/memory/tegra/tegra234.c
+++ b/drivers/memory/tegra/tegra234.c
@@ -1,6 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0-only
 /*
- * Copyright (C) 2021-2022, NVIDIA CORPORATION.  All rights reserved.
+ * Copyright (C) 2021-2023, NVIDIA CORPORATION.  All rights reserved.
  */
 
 #include <soc/tegra/mc.h>
@@ -8,11 +8,16 @@
 #include <dt-bindings/memory/tegra234-mc.h>
 
 #include "mc.h"
+#include <soc/tegra/tegra-icc.h>
+#include <linux/interconnect.h>
+#include <linux/of_device.h>
 
 static const struct tegra_mc_client tegra234_mc_clients[] = {
 	{
 		.id = TEGRA234_MEMORY_CLIENT_MGBEARD,
 		.name = "mgbeard",
+		.bpmp_id = TEGRA_ICC_BPMP_EQOS,
+		.type = TEGRA_ICC_NISO,
 		.sid = TEGRA234_SID_MGBE,
 		.regs = {
 			.sid = {
@@ -23,6 +28,8 @@ static const struct tegra_mc_client tegra234_mc_clients[] = {
 	}, {
 		.id = TEGRA234_MEMORY_CLIENT_MGBEBRD,
 		.name = "mgbebrd",
+		.bpmp_id = TEGRA_ICC_BPMP_EQOS,
+		.type = TEGRA_ICC_NISO,
 		.sid = TEGRA234_SID_MGBE_VF1,
 		.regs = {
 			.sid = {
@@ -33,6 +40,8 @@ static const struct tegra_mc_client tegra234_mc_clients[] = {
 	}, {
 		.id = TEGRA234_MEMORY_CLIENT_MGBECRD,
 		.name = "mgbecrd",
+		.bpmp_id = TEGRA_ICC_BPMP_EQOS,
+		.type = TEGRA_ICC_NISO,
 		.sid = TEGRA234_SID_MGBE_VF2,
 		.regs = {
 			.sid = {
@@ -43,6 +52,8 @@ static const struct tegra_mc_client tegra234_mc_clients[] = {
 	}, {
 		.id = TEGRA234_MEMORY_CLIENT_MGBEDRD,
 		.name = "mgbedrd",
+		.bpmp_id = TEGRA_ICC_BPMP_EQOS,
+		.type = TEGRA_ICC_NISO,
 		.sid = TEGRA234_SID_MGBE_VF3,
 		.regs = {
 			.sid = {
@@ -52,6 +63,8 @@ static const struct tegra_mc_client tegra234_mc_clients[] = {
 		},
 	}, {
 		.id = TEGRA234_MEMORY_CLIENT_MGBEAWR,
+		.bpmp_id = TEGRA_ICC_BPMP_EQOS,
+		.type = TEGRA_ICC_NISO,
 		.name = "mgbeawr",
 		.sid = TEGRA234_SID_MGBE,
 		.regs = {
@@ -63,6 +76,8 @@ static const struct tegra_mc_client tegra234_mc_clients[] = {
 	}, {
 		.id = TEGRA234_MEMORY_CLIENT_MGBEBWR,
 		.name = "mgbebwr",
+		.bpmp_id = TEGRA_ICC_BPMP_EQOS,
+		.type = TEGRA_ICC_NISO,
 		.sid = TEGRA234_SID_MGBE_VF1,
 		.regs = {
 			.sid = {
@@ -73,6 +88,8 @@ static const struct tegra_mc_client tegra234_mc_clients[] = {
 	}, {
 		.id = TEGRA234_MEMORY_CLIENT_MGBECWR,
 		.name = "mgbecwr",
+		.bpmp_id = TEGRA_ICC_BPMP_EQOS,
+		.type = TEGRA_ICC_NISO,
 		.sid = TEGRA234_SID_MGBE_VF2,
 		.regs = {
 			.sid = {
@@ -83,6 +100,8 @@ static const struct tegra_mc_client tegra234_mc_clients[] = {
 	}, {
 		.id = TEGRA234_MEMORY_CLIENT_SDMMCRAB,
 		.name = "sdmmcrab",
+		.bpmp_id = TEGRA_ICC_BPMP_SDMMC_4,
+		.type = TEGRA_ICC_NISO,
 		.sid = TEGRA234_SID_SDMMC4,
 		.regs = {
 			.sid = {
@@ -93,6 +112,8 @@ static const struct tegra_mc_client tegra234_mc_clients[] = {
 	}, {
 		.id = TEGRA234_MEMORY_CLIENT_MGBEDWR,
 		.name = "mgbedwr",
+		.bpmp_id = TEGRA_ICC_BPMP_EQOS,
+		.type = TEGRA_ICC_NISO,
 		.sid = TEGRA234_SID_MGBE_VF3,
 		.regs = {
 			.sid = {
@@ -103,6 +124,8 @@ static const struct tegra_mc_client tegra234_mc_clients[] = {
 	}, {
 		.id = TEGRA234_MEMORY_CLIENT_SDMMCWAB,
 		.name = "sdmmcwab",
+		.bpmp_id = TEGRA_ICC_BPMP_SDMMC_4,
+		.type = TEGRA_ICC_NISO,
 		.sid = TEGRA234_SID_SDMMC4,
 		.regs = {
 			.sid = {
@@ -153,6 +176,8 @@ static const struct tegra_mc_client tegra234_mc_clients[] = {
 	}, {
 		.id = TEGRA234_MEMORY_CLIENT_APEDMAR,
 		.name = "apedmar",
+		.bpmp_id = TEGRA_ICC_BPMP_APEDMA,
+		.type = TEGRA_ICC_ISO_AUDIO,
 		.sid = TEGRA234_SID_APE,
 		.regs = {
 			.sid = {
@@ -163,6 +188,8 @@ static const struct tegra_mc_client tegra234_mc_clients[] = {
 	}, {
 		.id = TEGRA234_MEMORY_CLIENT_APEDMAW,
 		.name = "apedmaw",
+		.bpmp_id = TEGRA_ICC_BPMP_APEDMA,
+		.type = TEGRA_ICC_ISO_AUDIO,
 		.sid = TEGRA234_SID_APE,
 		.regs = {
 			.sid = {
@@ -333,6 +360,77 @@ static const struct tegra_mc_client tegra234_mc_clients[] = {
 	},
 };
 
+/*
+ * tegra234_mc_icc_set() - Pass MC client info to External Memory Controller (EMC)
+ * @src: ICC node for Memory Controller's (MC) Client
+ * @dst: ICC node for Memory Controller (MC)
+ *
+ * Passing the current request info from the MC to the EMC driver using
+ * 'struct tegra_mc'. EMC driver will further send the MC client's request info
+ * to the BPMP-FW where LA and PTSA registers are accessed and the final EMC freq
+ * is set based on client, type, latency and bandwidth.
+ * icc_set_bw() makes set_bw calls for both MC and EMC providers in sequence.
+ * Both the calls are protected by 'mutex_lock(&icc_lock)'. So, the data passed
+ * from MC and EMC won't be updated by concurrent set calls from other clients.
+ */
+static int tegra234_mc_icc_set(struct icc_node *src, struct icc_node *dst)
+{
+	struct tegra_mc *mc = icc_provider_to_tegra_mc(dst->provider);
+	struct tegra_icc_node *tnode = src->data;
+
+	/*
+	 * The same src and dst node can be passed during boot from icc_node_add().
+	 * This can be used to pre-initialize and set bandwidth for all clients
+	 * before their drivers are loaded. We skip this case because, for us,
+	 * the pre-initialization already happened in the bootloader (MB2) and BPMP-FW.
+	 */
+	if (src->id == dst->id)
+		return 0;
+
+	if (tnode->node)
+		mc->curr_tnode = tnode;
+	else
+		pr_err("%s, tegra_icc_node is null\n", __func__);
+
+	return 0;
+}
+
+static struct icc_node*
+tegra234_mc_of_icc_xlate(struct of_phandle_args *spec, void *data)
+{
+	struct tegra_mc *mc = icc_provider_to_tegra_mc(data);
+	unsigned int cl_id = spec->args[0];
+	struct icc_node *node;
+
+	list_for_each_entry(node, &mc->provider.nodes, node_list) {
+		if (node->id != cl_id)
+			continue;
+
+		return node;
+	}
+
+	/*
+	 * If a client driver calls devm_of_icc_get() before the MC driver
+	 * is probed, then return EPROBE_DEFER to the client driver.
+	 */
+	return ERR_PTR(-EPROBE_DEFER);
+}
+
+static int tegra234_mc_icc_get_init_bw(struct icc_node *node, u32 *avg, u32 *peak)
+{
+	*avg = 0;
+	*peak = 0;
+
+	return 0;
+}
+
+static const struct tegra_mc_icc_ops tegra234_mc_icc_ops = {
+	.xlate = tegra234_mc_of_icc_xlate,
+	.get_bw = tegra234_mc_icc_get_init_bw,
+	.aggregate = icc_std_aggregate,
+	.set = tegra234_mc_icc_set,
+};
+
 const struct tegra_mc_soc tegra234_mc_soc = {
 	.num_clients = ARRAY_SIZE(tegra234_mc_clients),
 	.clients = tegra234_mc_clients,
@@ -345,6 +443,7 @@ const struct tegra_mc_soc tegra234_mc_soc = {
 		   MC_INT_SECURITY_VIOLATION | MC_INT_DECERR_EMEM,
 	.has_addr_hi_reg = true,
 	.ops = &tegra186_mc_ops,
+	.icc_ops = &tegra234_mc_icc_ops,
 	.ch_intmask = 0x0000ff00,
 	.global_intstatus_channel_shift = 8,
 	/*
diff --git a/include/soc/tegra/mc.h b/include/soc/tegra/mc.h
index 51a2263e1bc5..0a32a9eb12a4 100644
--- a/include/soc/tegra/mc.h
+++ b/include/soc/tegra/mc.h
@@ -13,6 +13,7 @@
 #include <linux/irq.h>
 #include <linux/reset-controller.h>
 #include <linux/types.h>
+#include <soc/tegra/tegra-icc.h>
 
 struct clk;
 struct device;
@@ -26,6 +27,8 @@ struct tegra_mc_timing {
 
 struct tegra_mc_client {
 	unsigned int id;
+	unsigned int bpmp_id;
+	enum tegra_icc_client_type type;
 	const char *name;
 	/*
 	 * For Tegra210 and earlier, this is the SWGROUP ID used for IOVA translations in the
@@ -166,8 +169,10 @@ struct tegra_mc_icc_ops {
 	int (*set)(struct icc_node *src, struct icc_node *dst);
 	int (*aggregate)(struct icc_node *node, u32 tag, u32 avg_bw,
 			 u32 peak_bw, u32 *agg_avg, u32 *agg_peak);
+	struct icc_node* (*xlate)(struct of_phandle_args *spec, void *data);
 	struct icc_node_data *(*xlate_extended)(struct of_phandle_args *spec,
 						void *data);
+	int (*get_bw)(struct icc_node *node, u32 *avg, u32 *peak);
 };
 
 struct tegra_mc_ops {
@@ -238,6 +243,8 @@ struct tegra_mc {
 	struct {
 		struct dentry *root;
 	} debugfs;
+
+	struct tegra_icc_node *curr_tnode;
 };
 
 int tegra_mc_write_emem_configuration(struct tegra_mc *mc, unsigned long rate);
diff --git a/include/soc/tegra/tegra-icc.h b/include/soc/tegra/tegra-icc.h
new file mode 100644
index 000000000000..3855d8571281
--- /dev/null
+++ b/include/soc/tegra/tegra-icc.h
@@ -0,0 +1,72 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2022-2023 NVIDIA CORPORATION.  All rights reserved.
+ */
+
+#ifndef MEMORY_TEGRA_ICC_H
+#define MEMORY_TEGRA_ICC_H
+
+enum tegra_icc_client_type {
+	TEGRA_ICC_NONE,
+	TEGRA_ICC_NISO,
+	TEGRA_ICC_ISO_DISPLAY,
+	TEGRA_ICC_ISO_VI,
+	TEGRA_ICC_ISO_AUDIO,
+	TEGRA_ICC_ISO_VIFAL,
+};
+
+struct tegra_icc_node {
+	struct icc_node *node;
+	struct tegra_mc *mc;
+	u32 bpmp_id;
+	u32 type;
+};
+
+/* ICC IDs for MC clients used in BPMP */
+#define TEGRA_ICC_BPMP_DEBUG		1
+#define TEGRA_ICC_BPMP_CPU_CLUSTER0	2
+#define TEGRA_ICC_BPMP_CPU_CLUSTER1	3
+#define TEGRA_ICC_BPMP_CPU_CLUSTER2	4
+#define TEGRA_ICC_BPMP_GPU		5
+#define TEGRA_ICC_BPMP_CACTMON		6
+#define TEGRA_ICC_BPMP_DISPLAY		7
+#define TEGRA_ICC_BPMP_VI		8
+#define TEGRA_ICC_BPMP_EQOS		9
+#define TEGRA_ICC_BPMP_PCIE_0		10
+#define TEGRA_ICC_BPMP_PCIE_1		11
+#define TEGRA_ICC_BPMP_PCIE_2		12
+#define TEGRA_ICC_BPMP_PCIE_3		13
+#define TEGRA_ICC_BPMP_PCIE_4		14
+#define TEGRA_ICC_BPMP_PCIE_5		15
+#define TEGRA_ICC_BPMP_PCIE_6		16
+#define TEGRA_ICC_BPMP_PCIE_7		17
+#define TEGRA_ICC_BPMP_PCIE_8		18
+#define TEGRA_ICC_BPMP_PCIE_9		19
+#define TEGRA_ICC_BPMP_PCIE_10		20
+#define TEGRA_ICC_BPMP_DLA_0		21
+#define TEGRA_ICC_BPMP_DLA_1		22
+#define TEGRA_ICC_BPMP_SDMMC_1		23
+#define TEGRA_ICC_BPMP_SDMMC_2		24
+#define TEGRA_ICC_BPMP_SDMMC_3		25
+#define TEGRA_ICC_BPMP_SDMMC_4		26
+#define TEGRA_ICC_BPMP_NVDEC		27
+#define TEGRA_ICC_BPMP_NVENC		28
+#define TEGRA_ICC_BPMP_NVJPG_0		29
+#define TEGRA_ICC_BPMP_NVJPG_1		30
+#define TEGRA_ICC_BPMP_OFAA		31
+#define TEGRA_ICC_BPMP_XUSB_HOST	32
+#define TEGRA_ICC_BPMP_XUSB_DEV		33
+#define TEGRA_ICC_BPMP_TSEC		34
+#define TEGRA_ICC_BPMP_VIC		35
+#define TEGRA_ICC_BPMP_APE		36
+#define TEGRA_ICC_BPMP_APEDMA		37
+#define TEGRA_ICC_BPMP_SE		38
+#define TEGRA_ICC_BPMP_ISP		39
+#define TEGRA_ICC_BPMP_HDA		40
+#define TEGRA_ICC_BPMP_VIFAL		41
+#define TEGRA_ICC_BPMP_VI2FAL		42
+#define TEGRA_ICC_BPMP_VI2		43
+#define TEGRA_ICC_BPMP_RCE		44
+#define TEGRA_ICC_BPMP_PVA		45
+
+#endif /* MEMORY_TEGRA_ICC_H */
-- 
2.17.1



* [Patch v1 02/10] memory: tegra: adding iso mc clients for Tegra234
  2022-12-20 16:02 [Patch v1 00/10] Tegra234 Memory interconnect support Sumit Gupta
  2022-12-20 16:02 ` [Patch v1 01/10] memory: tegra: add interconnect support for DRAM scaling in Tegra234 Sumit Gupta
@ 2022-12-20 16:02 ` Sumit Gupta
  2022-12-20 16:02 ` [Patch v1 03/10] memory: tegra: add pcie " Sumit Gupta
                   ` (7 subsequent siblings)
  9 siblings, 0 replies; 53+ messages in thread
From: Sumit Gupta @ 2022-12-20 16:02 UTC (permalink / raw)
  To: treding, krzysztof.kozlowski, dmitry.osipenko, viresh.kumar,
	rafael, jonathanh, robh+dt, linux-kernel, linux-tegra, linux-pm,
	devicetree
  Cc: sanjayc, ksitaraman, ishah, bbasu, sumitg

Add Isochronous (ISO) MC clients to the mc_clients table for
Tegra234. ISO clients have a guaranteed bandwidth requirement.

Signed-off-by: Sumit Gupta <sumitg@nvidia.com>
---
 drivers/memory/tegra/tegra234.c | 108 ++++++++++++++++++++++++++++++++
 1 file changed, 108 insertions(+)

diff --git a/drivers/memory/tegra/tegra234.c b/drivers/memory/tegra/tegra234.c
index 65d4e32ee118..2e37b37da9be 100644
--- a/drivers/memory/tegra/tegra234.c
+++ b/drivers/memory/tegra/tegra234.c
@@ -14,6 +14,30 @@
 
 static const struct tegra_mc_client tegra234_mc_clients[] = {
 	{
+		.id = TEGRA234_MEMORY_CLIENT_HDAR,
+		.name = "hdar",
+		.bpmp_id = TEGRA_ICC_BPMP_HDA,
+		.type = TEGRA_ICC_ISO_AUDIO,
+		.sid = TEGRA234_SID_HDA,
+		.regs = {
+			.sid = {
+				.override = 0xa8,
+				.security = 0xac,
+			},
+		},
+	}, {
+		.id = TEGRA234_MEMORY_CLIENT_HDAW,
+		.name = "hdaw",
+		.bpmp_id = TEGRA_ICC_BPMP_HDA,
+		.type = TEGRA_ICC_ISO_AUDIO,
+		.sid = TEGRA234_SID_HDA,
+		.regs = {
+			.sid = {
+				.override = 0x1a8,
+				.security = 0x1ac,
+			},
+		},
+	}, {
 		.id = TEGRA234_MEMORY_CLIENT_MGBEARD,
 		.name = "mgbeard",
 		.bpmp_id = TEGRA_ICC_BPMP_EQOS,
@@ -133,6 +157,90 @@ static const struct tegra_mc_client tegra234_mc_clients[] = {
 				.security = 0x33c,
 			},
 		},
+	}, {
+		.id = TEGRA234_MEMORY_CLIENT_VI2W,
+		.name = "vi2w",
+		.bpmp_id = TEGRA_ICC_BPMP_VI2,
+		.type = TEGRA_ICC_ISO_VI,
+		.sid = TEGRA234_SID_ISO_VI2,
+		.regs = {
+			.sid = {
+				.override = 0x380,
+				.security = 0x384,
+			},
+		},
+	}, {
+		.id = TEGRA234_MEMORY_CLIENT_VI2FALR,
+		.name = "vi2falr",
+		.bpmp_id = TEGRA_ICC_BPMP_VI2FAL,
+		.type = TEGRA_ICC_ISO_VIFAL,
+		.sid = TEGRA234_SID_ISO_VI2FALC,
+		.regs = {
+			.sid = {
+				.override = 0x388,
+				.security = 0x38c,
+			},
+		},
+	}, {
+		.id = TEGRA234_MEMORY_CLIENT_VI2FALW,
+		.name = "vi2falw",
+		.bpmp_id = TEGRA_ICC_BPMP_VI2FAL,
+		.type = TEGRA_ICC_ISO_VIFAL,
+		.sid = TEGRA234_SID_ISO_VI2FALC,
+		.regs = {
+			.sid = {
+				.override = 0x3e0,
+				.security = 0x3e4,
+			},
+		},
+	}, {
+		.id = TEGRA234_MEMORY_CLIENT_APER,
+		.name = "aper",
+		.bpmp_id = TEGRA_ICC_BPMP_APE,
+		.type = TEGRA_ICC_ISO_AUDIO,
+		.sid = TEGRA234_SID_APE,
+		.regs = {
+			.sid = {
+				.override = 0x3d0,
+				.security = 0x3d4,
+			},
+		},
+	}, {
+		.id = TEGRA234_MEMORY_CLIENT_APEW,
+		.name = "apew",
+		.bpmp_id = TEGRA_ICC_BPMP_APE,
+		.type = TEGRA_ICC_ISO_AUDIO,
+		.sid = TEGRA234_SID_APE,
+		.regs = {
+			.sid = {
+				.override = 0x3d8,
+				.security = 0x3dc,
+			},
+		},
+	}, {
+		.id = TEGRA234_MEMORY_CLIENT_NVDISPLAYR,
+		.name = "nvdisplayr",
+		.bpmp_id = TEGRA_ICC_BPMP_DISPLAY,
+		.type = TEGRA_ICC_ISO_DISPLAY,
+		.sid = TEGRA234_SID_ISO_NVDISPLAY,
+		.regs = {
+			.sid = {
+				.override = 0x490,
+				.security = 0x494,
+			},
+		},
+	}, {
+		.id = TEGRA234_MEMORY_CLIENT_NVDISPLAYR1,
+		.name = "nvdisplayr1",
+		.bpmp_id = TEGRA_ICC_BPMP_DISPLAY,
+		.type = TEGRA_ICC_ISO_DISPLAY,
+		.sid = TEGRA234_SID_ISO_NVDISPLAY,
+		.regs = {
+			.sid = {
+				.override = 0x508,
+				.security = 0x50c,
+			},
+		},
 	}, {
 		.id = TEGRA234_MEMORY_CLIENT_BPMPR,
 		.name = "bpmpr",
-- 
2.17.1



* [Patch v1 03/10] memory: tegra: add pcie mc clients for Tegra234
  2022-12-20 16:02 [Patch v1 00/10] Tegra234 Memory interconnect support Sumit Gupta
  2022-12-20 16:02 ` [Patch v1 01/10] memory: tegra: add interconnect support for DRAM scaling in Tegra234 Sumit Gupta
  2022-12-20 16:02 ` [Patch v1 02/10] memory: tegra: adding iso mc clients for Tegra234 Sumit Gupta
@ 2022-12-20 16:02 ` Sumit Gupta
  2022-12-22 11:33   ` Krzysztof Kozlowski
  2022-12-20 16:02 ` [Patch v1 04/10] memory: tegra: add support for software mc clients in Tegra234 Sumit Gupta
                   ` (6 subsequent siblings)
  9 siblings, 1 reply; 53+ messages in thread
From: Sumit Gupta @ 2022-12-20 16:02 UTC (permalink / raw)
  To: treding, krzysztof.kozlowski, dmitry.osipenko, viresh.kumar,
	rafael, jonathanh, robh+dt, linux-kernel, linux-tegra, linux-pm,
	devicetree
  Cc: sanjayc, ksitaraman, ishah, bbasu, sumitg

Add PCIe clients, one for each controller, to the mc_clients
table for Tegra234.

Signed-off-by: Sumit Gupta <sumitg@nvidia.com>
---
 drivers/memory/tegra/tegra234.c | 314 +++++++++++++++++++++++++++++++-
 1 file changed, 313 insertions(+), 1 deletion(-)

diff --git a/drivers/memory/tegra/tegra234.c b/drivers/memory/tegra/tegra234.c
index 2e37b37da9be..420546270c8b 100644
--- a/drivers/memory/tegra/tegra234.c
+++ b/drivers/memory/tegra/tegra234.c
@@ -465,7 +465,319 @@ static const struct tegra_mc_client tegra234_mc_clients[] = {
 				.security = 0x37c,
 			},
 		},
-	},
+	}, {
+		.id = TEGRA234_MEMORY_CLIENT_PCIE0R,
+		.name = "pcie0r",
+		.bpmp_id = TEGRA_ICC_BPMP_PCIE_0,
+		.type = TEGRA_ICC_NISO,
+		.sid = TEGRA234_SID_PCIE0,
+		.regs = {
+			.sid = {
+				.override = 0x6c0,
+				.security = 0x6c4,
+			},
+		},
+	}, {
+		.id = TEGRA234_MEMORY_CLIENT_PCIE0W,
+		.name = "pcie0w",
+		.bpmp_id = TEGRA_ICC_BPMP_PCIE_0,
+		.type = TEGRA_ICC_NISO,
+		.sid = TEGRA234_SID_PCIE0,
+		.regs = {
+			.sid = {
+				.override = 0x6c8,
+				.security = 0x6cc,
+			},
+		},
+	}, {
+		.id = TEGRA234_MEMORY_CLIENT_PCIE1R,
+		.name = "pcie1r",
+		.bpmp_id = TEGRA_ICC_BPMP_PCIE_1,
+		.type = TEGRA_ICC_NISO,
+		.sid = TEGRA234_SID_PCIE1,
+		.regs = {
+			.sid = {
+				.override = 0x6d0,
+				.security = 0x6d4,
+			},
+		},
+	}, {
+		.id = TEGRA234_MEMORY_CLIENT_PCIE1W,
+		.name = "pcie1w",
+		.bpmp_id = TEGRA_ICC_BPMP_PCIE_1,
+		.type = TEGRA_ICC_NISO,
+		.sid = TEGRA234_SID_PCIE1,
+		.regs = {
+			.sid = {
+				.override = 0x6d8,
+				.security = 0x6dc,
+			},
+		},
+	}, {
+		.id = TEGRA234_MEMORY_CLIENT_PCIE2AR,
+		.name = "pcie2ar",
+		.bpmp_id = TEGRA_ICC_BPMP_PCIE_2,
+		.type = TEGRA_ICC_NISO,
+		.sid = TEGRA234_SID_PCIE2,
+		.regs = {
+			.sid = {
+				.override = 0x6e0,
+				.security = 0x6e4,
+			},
+		},
+	}, {
+		.id = TEGRA234_MEMORY_CLIENT_PCIE2AW,
+		.name = "pcie2aw",
+		.bpmp_id = TEGRA_ICC_BPMP_PCIE_2,
+		.type = TEGRA_ICC_NISO,
+		.sid = TEGRA234_SID_PCIE2,
+		.regs = {
+			.sid = {
+				.override = 0x6e8,
+				.security = 0x6ec,
+			},
+		},
+	}, {
+		.id = TEGRA234_MEMORY_CLIENT_PCIE3R,
+		.name = "pcie3r",
+		.bpmp_id = TEGRA_ICC_BPMP_PCIE_3,
+		.type = TEGRA_ICC_NISO,
+		.sid = TEGRA234_SID_PCIE3,
+		.regs = {
+			.sid = {
+				.override = 0x6f0,
+				.security = 0x6f4,
+			},
+		},
+	}, {
+		.id = TEGRA234_MEMORY_CLIENT_PCIE3W,
+		.name = "pcie3w",
+		.bpmp_id = TEGRA_ICC_BPMP_PCIE_3,
+		.type = TEGRA_ICC_NISO,
+		.sid = TEGRA234_SID_PCIE3,
+		.regs = {
+			.sid = {
+				.override = 0x6f8,
+				.security = 0x6fc,
+			},
+		},
+	}, {
+		.id = TEGRA234_MEMORY_CLIENT_PCIE4R,
+		.name = "pcie4r",
+		.bpmp_id = TEGRA_ICC_BPMP_PCIE_4,
+		.type = TEGRA_ICC_NISO,
+		.sid = TEGRA234_SID_PCIE4,
+		.regs = {
+			.sid = {
+				.override = 0x700,
+				.security = 0x704,
+			},
+		},
+	}, {
+		.id = TEGRA234_MEMORY_CLIENT_PCIE4W,
+		.name = "pcie4w",
+		.bpmp_id = TEGRA_ICC_BPMP_PCIE_4,
+		.type = TEGRA_ICC_NISO,
+		.sid = TEGRA234_SID_PCIE4,
+		.regs = {
+			.sid = {
+				.override = 0x708,
+				.security = 0x70c,
+			},
+		},
+	}, {
+		.id = TEGRA234_MEMORY_CLIENT_PCIE5R,
+		.name = "pcie5r",
+		.bpmp_id = TEGRA_ICC_BPMP_PCIE_5,
+		.type = TEGRA_ICC_NISO,
+		.sid = TEGRA234_SID_PCIE5,
+		.regs = {
+			.sid = {
+				.override = 0x710,
+				.security = 0x714,
+			},
+		},
+	}, {
+		.id = TEGRA234_MEMORY_CLIENT_PCIE5W,
+		.name = "pcie5w",
+		.bpmp_id = TEGRA_ICC_BPMP_PCIE_5,
+		.type = TEGRA_ICC_NISO,
+		.sid = TEGRA234_SID_PCIE5,
+		.regs = {
+			.sid = {
+				.override = 0x718,
+				.security = 0x71c,
+			},
+		}
+	}, {
+		.id = TEGRA234_MEMORY_CLIENT_PCIE6AR,
+		.name = "pcie6ar",
+		.bpmp_id = TEGRA_ICC_BPMP_PCIE_6,
+		.type = TEGRA_ICC_NISO,
+		.sid = TEGRA234_SID_PCIE6,
+		.regs = {
+			.sid = {
+				.override = 0x140,
+				.security = 0x144,
+			},
+		},
+	}, {
+		.id = TEGRA234_MEMORY_CLIENT_PCIE5R1,
+		.name = "pcie5r1",
+		.bpmp_id = TEGRA_ICC_BPMP_PCIE_5,
+		.type = TEGRA_ICC_NISO,
+		.sid = TEGRA234_SID_PCIE5,
+		.regs = {
+			.sid = {
+				.override = 0x778,
+				.security = 0x77c,
+			},
+		},
+	}, {
+		.id = TEGRA234_MEMORY_CLIENT_PCIE6AW,
+		.name = "pcie6aw",
+		.bpmp_id = TEGRA_ICC_BPMP_PCIE_6,
+		.type = TEGRA_ICC_NISO,
+		.sid = TEGRA234_SID_PCIE6,
+		.regs = {
+			.sid = {
+				.override = 0x148,
+				.security = 0x14c,
+			},
+		},
+	}, {
+		.id = TEGRA234_MEMORY_CLIENT_PCIE6AR1,
+		.name = "pcie6ar1",
+		.bpmp_id = TEGRA_ICC_BPMP_PCIE_6,
+		.type = TEGRA_ICC_NISO,
+		.sid = TEGRA234_SID_PCIE6,
+		.regs = {
+			.sid = {
+				.override = 0x1e8,
+				.security = 0x1ec,
+			},
+		},
+	}, {
+		.id = TEGRA234_MEMORY_CLIENT_PCIE7AR,
+		.name = "pcie7ar",
+		.bpmp_id = TEGRA_ICC_BPMP_PCIE_7,
+		.type = TEGRA_ICC_NISO,
+		.sid = TEGRA234_SID_PCIE7,
+		.regs = {
+			.sid = {
+				.override = 0x150,
+				.security = 0x154,
+			},
+		},
+	}, {
+		.id = TEGRA234_MEMORY_CLIENT_PCIE7AW,
+		.name = "pcie7aw",
+		.bpmp_id = TEGRA_ICC_BPMP_PCIE_7,
+		.type = TEGRA_ICC_NISO,
+		.sid = TEGRA234_SID_PCIE7,
+		.regs = {
+			.sid = {
+				.override = 0x180,
+				.security = 0x184,
+			},
+		}
+	}, {
+		.id = TEGRA234_MEMORY_CLIENT_PCIE7AR1,
+		.name = "pcie7ar1",
+		.bpmp_id = TEGRA_ICC_BPMP_PCIE_7,
+		.type = TEGRA_ICC_NISO,
+		.sid = TEGRA234_SID_PCIE7,
+		.regs = {
+			.sid = {
+				.override = 0x248,
+				.security = 0x24c,
+			},
+		},
+	}, {
+		.id = TEGRA234_MEMORY_CLIENT_PCIE8AR,
+		.name = "pcie8ar",
+		.bpmp_id = TEGRA_ICC_BPMP_PCIE_8,
+		.type = TEGRA_ICC_NISO,
+		.sid = TEGRA234_SID_PCIE8,
+		.regs = {
+			.sid = {
+				.override = 0x190,
+				.security = 0x194,
+			},
+		},
+	}, {
+		.id = TEGRA234_MEMORY_CLIENT_PCIE8AW,
+		.name = "pcie8aw",
+		.bpmp_id = TEGRA_ICC_BPMP_PCIE_8,
+		.type = TEGRA_ICC_NISO,
+		.sid = TEGRA234_SID_PCIE8,
+		.regs = {
+			.sid = {
+				.override = 0x1d8,
+				.security = 0x1dc,
+			},
+		}
+	}, {
+		.id = TEGRA234_MEMORY_CLIENT_PCIE9AR,
+		.name = "pcie9ar",
+		.bpmp_id = TEGRA_ICC_BPMP_PCIE_9,
+		.type = TEGRA_ICC_NISO,
+		.sid = TEGRA234_SID_PCIE9,
+		.regs = {
+			.sid = {
+				.override = 0x1e0,
+				.security = 0x1e4,
+			},
+		},
+	}, {
+		.id = TEGRA234_MEMORY_CLIENT_PCIE9AW,
+		.name = "pcie9aw",
+		.bpmp_id = TEGRA_ICC_BPMP_PCIE_9,
+		.type = TEGRA_ICC_NISO,
+		.sid = TEGRA234_SID_PCIE9,
+		.regs = {
+			.sid = {
+				.override = 0x1f0,
+				.security = 0x1f4,
+			},
+		}
+	}, {
+		.id = TEGRA234_MEMORY_CLIENT_PCIE10AR,
+		.name = "pcie10ar",
+		.bpmp_id = TEGRA_ICC_BPMP_PCIE_10,
+		.type = TEGRA_ICC_NISO,
+		.sid = TEGRA234_SID_PCIE10,
+		.regs = {
+			.sid = {
+				.override = 0x1f8,
+				.security = 0x1fc,
+			},
+		},
+	}, {
+		.id = TEGRA234_MEMORY_CLIENT_PCIE10AW,
+		.name = "pcie10aw",
+		.bpmp_id = TEGRA_ICC_BPMP_PCIE_10,
+		.type = TEGRA_ICC_NISO,
+		.sid = TEGRA234_SID_PCIE10,
+		.regs = {
+			.sid = {
+				.override = 0x200,
+				.security = 0x204,
+			},
+		}
+	}, {
+		.id = TEGRA234_MEMORY_CLIENT_PCIE10AR1,
+		.name = "pcie10ar1",
+		.bpmp_id = TEGRA_ICC_BPMP_PCIE_10,
+		.type = TEGRA_ICC_NISO,
+		.sid = TEGRA234_SID_PCIE10,
+		.regs = {
+			.sid = {
+				.override = 0x240,
+				.security = 0x244,
+			},
+		},
+	}
 };
 
 /*
-- 
2.17.1



* [Patch v1 04/10] memory: tegra: add support for software mc clients in Tegra234
  2022-12-20 16:02 [Patch v1 00/10] Tegra234 Memory interconnect support Sumit Gupta
                   ` (2 preceding siblings ...)
  2022-12-20 16:02 ` [Patch v1 03/10] memory: tegra: add pcie " Sumit Gupta
@ 2022-12-20 16:02 ` Sumit Gupta
  2022-12-22 11:36   ` Krzysztof Kozlowski
  2022-12-20 16:02 ` [Patch v1 05/10] dt-bindings: tegra: add icc ids for dummy MC clients Sumit Gupta
                   ` (5 subsequent siblings)
  9 siblings, 1 reply; 53+ messages in thread
From: Sumit Gupta @ 2022-12-20 16:02 UTC (permalink / raw)
  To: treding, krzysztof.kozlowski, dmitry.osipenko, viresh.kumar,
	rafael, jonathanh, robh+dt, linux-kernel, linux-tegra, linux-pm,
	devicetree
  Cc: sanjayc, ksitaraman, ishah, bbasu, sumitg

Add support for dummy memory controller clients for use by
software.
---
 drivers/memory/tegra/mc.c       | 65 +++++++++++++++++++++++----------
 drivers/memory/tegra/tegra234.c | 21 +++++++++++
 include/soc/tegra/mc.h          |  3 ++
 include/soc/tegra/tegra-icc.h   |  7 ++++
 4 files changed, 76 insertions(+), 20 deletions(-)

diff --git a/drivers/memory/tegra/mc.c b/drivers/memory/tegra/mc.c
index ff887fb03bce..4ddf9808fe6b 100644
--- a/drivers/memory/tegra/mc.c
+++ b/drivers/memory/tegra/mc.c
@@ -755,6 +755,39 @@ const char *const tegra_mc_error_names[8] = {
 	[6] = "SMMU translation error",
 };
 
+static int tegra_mc_add_icc_node(struct tegra_mc *mc, unsigned int id, const char *name,
+				 unsigned int bpmp_id, unsigned int type)
+{
+	struct tegra_icc_node *tnode;
+	struct icc_node *node;
+	int err;
+
+	tnode = kzalloc(sizeof(*tnode), GFP_KERNEL);
+	if (!tnode)
+		return -ENOMEM;
+
+	/* create MC client node */
+	node = icc_node_create(id);
+	if (IS_ERR(node))
+		return -EINVAL;
+
+	node->name = name;
+	icc_node_add(node, &mc->provider);
+
+	/* link Memory Client to Memory Controller */
+	err = icc_link_create(node, TEGRA_ICC_MC);
+	if (err)
+		return err;
+
+	node->data = tnode;
+	tnode->node = node;
+	tnode->bpmp_id = bpmp_id;
+	tnode->type = type;
+	tnode->mc = mc;
+
+	return 0;
+}
+
 /*
  * Memory Controller (MC) has few Memory Clients that are issuing memory
  * bandwidth allocation requests to the MC interconnect provider. The MC
@@ -780,7 +813,6 @@ const char *const tegra_mc_error_names[8] = {
  */
 static int tegra_mc_interconnect_setup(struct tegra_mc *mc)
 {
-	struct tegra_icc_node *tnode;
 	struct icc_node *node;
 	unsigned int i;
 	int err;
@@ -820,30 +852,23 @@ static int tegra_mc_interconnect_setup(struct tegra_mc *mc)
 		goto remove_nodes;
 
 	for (i = 0; i < mc->soc->num_clients; i++) {
-		tnode = kzalloc(sizeof(*tnode), GFP_KERNEL);
-		if (!tnode)
-			return -ENOMEM;
-
-		/* create MC client node */
-		node = icc_node_create(mc->soc->clients[i].id);
-		if (IS_ERR(node)) {
-			err = PTR_ERR(node);
+		err = tegra_mc_add_icc_node(mc, mc->soc->clients[i].id,
+					    mc->soc->clients[i].name,
+					    mc->soc->clients[i].bpmp_id,
+					    mc->soc->clients[i].type);
+		if (err)
 			goto remove_nodes;
-		}
 
-		node->name = mc->soc->clients[i].name;
-		icc_node_add(node, &mc->provider);
+	}
+
+	for (i = 0; i < mc->soc->num_sw_clients; i++) {
+		err = tegra_mc_add_icc_node(mc, mc->soc->sw_clients[i].id,
+					     mc->soc->sw_clients[i].name,
+					     mc->soc->sw_clients[i].bpmp_id,
+					     mc->soc->sw_clients[i].type);
 
-		/* link Memory Client to Memory Controller */
-		err = icc_link_create(node, TEGRA_ICC_MC);
 		if (err)
 			goto remove_nodes;
-
-		node->data = tnode;
-		tnode->node = node;
-		tnode->type = mc->soc->clients[i].type;
-		tnode->bpmp_id = mc->soc->clients[i].bpmp_id;
-		tnode->mc = mc;
 	}
 
 	return 0;
diff --git a/drivers/memory/tegra/tegra234.c b/drivers/memory/tegra/tegra234.c
index 420546270c8b..82ce6c3c3eb0 100644
--- a/drivers/memory/tegra/tegra234.c
+++ b/drivers/memory/tegra/tegra234.c
@@ -780,6 +780,25 @@ static const struct tegra_mc_client tegra234_mc_clients[] = {
 	}
 };
 
+static const struct tegra_mc_sw_client tegra234_mc_sw_clients[] = {
+	{
+		.id = TEGRA_ICC_MC_CPU_CLUSTER0,
+		.name = "sw_cluster0",
+		.bpmp_id = TEGRA_ICC_BPMP_CPU_CLUSTER0,
+		.type = TEGRA_ICC_NISO,
+	}, {
+		.id = TEGRA_ICC_MC_CPU_CLUSTER1,
+		.name = "sw_cluster1",
+		.bpmp_id = TEGRA_ICC_BPMP_CPU_CLUSTER1,
+		.type = TEGRA_ICC_NISO,
+	}, {
+		.id = TEGRA_ICC_MC_CPU_CLUSTER2,
+		.name = "sw_cluster2",
+		.bpmp_id = TEGRA_ICC_BPMP_CPU_CLUSTER2,
+		.type = TEGRA_ICC_NISO,
+	},
+};
+
 /*
  * tegra234_mc_icc_set() - Pass MC client info to External Memory Controller (EMC)
  * @src: ICC node for Memory Controller's (MC) Client
@@ -854,6 +873,8 @@ static const struct tegra_mc_icc_ops tegra234_mc_icc_ops = {
 const struct tegra_mc_soc tegra234_mc_soc = {
 	.num_clients = ARRAY_SIZE(tegra234_mc_clients),
 	.clients = tegra234_mc_clients,
+	.num_sw_clients = ARRAY_SIZE(tegra234_mc_sw_clients),
+	.sw_clients = tegra234_mc_sw_clients,
 	.num_address_bits = 40,
 	.num_channels = 16,
 	.client_id_mask = 0x1ff,
diff --git a/include/soc/tegra/mc.h b/include/soc/tegra/mc.h
index 0a32a9eb12a4..6a94e88b6100 100644
--- a/include/soc/tegra/mc.h
+++ b/include/soc/tegra/mc.h
@@ -192,6 +192,9 @@ struct tegra_mc_soc {
 	const struct tegra_mc_client *clients;
 	unsigned int num_clients;
 
+	const struct tegra_mc_sw_client *sw_clients;
+	unsigned int num_sw_clients;
+
 	const unsigned long *emem_regs;
 	unsigned int num_emem_regs;
 
diff --git a/include/soc/tegra/tegra-icc.h b/include/soc/tegra/tegra-icc.h
index 3855d8571281..f9bcaae8ffee 100644
--- a/include/soc/tegra/tegra-icc.h
+++ b/include/soc/tegra/tegra-icc.h
@@ -22,6 +22,13 @@ struct tegra_icc_node {
 	u32 type;
 };
 
+struct tegra_mc_sw_client {
+	unsigned int id;
+	unsigned int bpmp_id;
+	unsigned int type;
+	const char *name;
+};
+
 /* ICC ID's for MC client's used in BPMP */
 #define TEGRA_ICC_BPMP_DEBUG		1
 #define TEGRA_ICC_BPMP_CPU_CLUSTER0	2
-- 
2.17.1



* [Patch v1 05/10] dt-bindings: tegra: add icc ids for dummy MC clients
  2022-12-20 16:02 [Patch v1 00/10] Tegra234 Memory interconnect support Sumit Gupta
                   ` (3 preceding siblings ...)
  2022-12-20 16:02 ` [Patch v1 04/10] memory: tegra: add support for software mc clients in Tegra234 Sumit Gupta
@ 2022-12-20 16:02 ` Sumit Gupta
  2022-12-22 11:29   ` Krzysztof Kozlowski
  2023-01-13 17:11   ` Krzysztof Kozlowski
  2022-12-20 16:02 ` [Patch v1 06/10] arm64: tegra: Add cpu OPP tables and interconnects property Sumit Gupta
                   ` (4 subsequent siblings)
  9 siblings, 2 replies; 53+ messages in thread
From: Sumit Gupta @ 2022-12-20 16:02 UTC (permalink / raw)
  To: treding, krzysztof.kozlowski, dmitry.osipenko, viresh.kumar,
	rafael, jonathanh, robh+dt, linux-kernel, linux-tegra, linux-pm,
	devicetree
  Cc: sanjayc, ksitaraman, ishah, bbasu, sumitg

Add ICC IDs for dummy software clients representing the CCPLEX clusters.

Signed-off-by: Sumit Gupta <sumitg@nvidia.com>
---
 include/dt-bindings/memory/tegra234-mc.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/include/dt-bindings/memory/tegra234-mc.h b/include/dt-bindings/memory/tegra234-mc.h
index 347e55e89a2a..6e60d55491b3 100644
--- a/include/dt-bindings/memory/tegra234-mc.h
+++ b/include/dt-bindings/memory/tegra234-mc.h
@@ -536,4 +536,9 @@
 #define TEGRA234_MEMORY_CLIENT_NVJPG1SRD 0x123
 #define TEGRA234_MEMORY_CLIENT_NVJPG1SWR 0x124
 
+/* ICC IDs for dummy MC clients used to represent CPU clusters */
+#define TEGRA_ICC_MC_CPU_CLUSTER0       1003
+#define TEGRA_ICC_MC_CPU_CLUSTER1       1004
+#define TEGRA_ICC_MC_CPU_CLUSTER2       1005
+
 #endif
-- 
2.17.1



* [Patch v1 06/10] arm64: tegra: Add cpu OPP tables and interconnects property
  2022-12-20 16:02 [Patch v1 00/10] Tegra234 Memory interconnect support Sumit Gupta
                   ` (4 preceding siblings ...)
  2022-12-20 16:02 ` [Patch v1 05/10] dt-bindings: tegra: add icc ids for dummy MC clients Sumit Gupta
@ 2022-12-20 16:02 ` Sumit Gupta
  2023-01-16 16:29   ` Thierry Reding
  2022-12-20 16:02 ` [Patch v1 07/10] cpufreq: Add Tegra234 to cpufreq-dt-platdev blocklist Sumit Gupta
                   ` (3 subsequent siblings)
  9 siblings, 1 reply; 53+ messages in thread
From: Sumit Gupta @ 2022-12-20 16:02 UTC (permalink / raw)
  To: treding, krzysztof.kozlowski, dmitry.osipenko, viresh.kumar,
	rafael, jonathanh, robh+dt, linux-kernel, linux-tegra, linux-pm,
	devicetree
  Cc: sanjayc, ksitaraman, ishah, bbasu, sumitg

Add the OPP tables and interconnects properties required to scale the
DDR frequency for better performance. Each operating point entry of an
OPP table maps a CPU frequency to a per-MC-channel bandwidth.
One table is added for each cluster, even though the table data is the
same, because the bandwidth request is made per cluster. The OPP
framework creates a single ICC path if the table is marked 'opp-shared'
and shared among all clusters. In our case the OPP data is the same,
but the MC client ID argument in the interconnects property differs for
each cluster, which results in a separate ICC path per cluster.

Signed-off-by: Sumit Gupta <sumitg@nvidia.com>
---
 arch/arm64/boot/dts/nvidia/tegra234.dtsi | 276 +++++++++++++++++++++++
 1 file changed, 276 insertions(+)

diff --git a/arch/arm64/boot/dts/nvidia/tegra234.dtsi b/arch/arm64/boot/dts/nvidia/tegra234.dtsi
index eaf05ee9acd1..ed7d0f7da431 100644
--- a/arch/arm64/boot/dts/nvidia/tegra234.dtsi
+++ b/arch/arm64/boot/dts/nvidia/tegra234.dtsi
@@ -2840,6 +2840,9 @@
 
 			enable-method = "psci";
 
+			operating-points-v2 = <&cl0_opp_tbl>;
+			interconnects = <&mc TEGRA_ICC_MC_CPU_CLUSTER0 &emc>;
+
 			i-cache-size = <65536>;
 			i-cache-line-size = <64>;
 			i-cache-sets = <256>;
@@ -2856,6 +2859,9 @@
 
 			enable-method = "psci";
 
+			operating-points-v2 = <&cl0_opp_tbl>;
+			interconnects = <&mc TEGRA_ICC_MC_CPU_CLUSTER0 &emc>;
+
 			i-cache-size = <65536>;
 			i-cache-line-size = <64>;
 			i-cache-sets = <256>;
@@ -2872,6 +2878,9 @@
 
 			enable-method = "psci";
 
+			operating-points-v2 = <&cl0_opp_tbl>;
+			interconnects = <&mc TEGRA_ICC_MC_CPU_CLUSTER0 &emc>;
+
 			i-cache-size = <65536>;
 			i-cache-line-size = <64>;
 			i-cache-sets = <256>;
@@ -2888,6 +2897,9 @@
 
 			enable-method = "psci";
 
+			operating-points-v2 = <&cl0_opp_tbl>;
+			interconnects = <&mc TEGRA_ICC_MC_CPU_CLUSTER0 &emc>;
+
 			i-cache-size = <65536>;
 			i-cache-line-size = <64>;
 			i-cache-sets = <256>;
@@ -2904,6 +2916,9 @@
 
 			enable-method = "psci";
 
+			operating-points-v2 = <&cl1_opp_tbl>;
+			interconnects = <&mc TEGRA_ICC_MC_CPU_CLUSTER1 &emc>;
+
 			i-cache-size = <65536>;
 			i-cache-line-size = <64>;
 			i-cache-sets = <256>;
@@ -2920,6 +2935,9 @@
 
 			enable-method = "psci";
 
+			operating-points-v2 = <&cl1_opp_tbl>;
+			interconnects = <&mc TEGRA_ICC_MC_CPU_CLUSTER1 &emc>;
+
 			i-cache-size = <65536>;
 			i-cache-line-size = <64>;
 			i-cache-sets = <256>;
@@ -2936,6 +2954,9 @@
 
 			enable-method = "psci";
 
+			operating-points-v2 = <&cl1_opp_tbl>;
+			interconnects = <&mc TEGRA_ICC_MC_CPU_CLUSTER1 &emc>;
+
 			i-cache-size = <65536>;
 			i-cache-line-size = <64>;
 			i-cache-sets = <256>;
@@ -2952,6 +2973,9 @@
 
 			enable-method = "psci";
 
+			operating-points-v2 = <&cl1_opp_tbl>;
+			interconnects = <&mc TEGRA_ICC_MC_CPU_CLUSTER1 &emc>;
+
 			i-cache-size = <65536>;
 			i-cache-line-size = <64>;
 			i-cache-sets = <256>;
@@ -2968,6 +2992,9 @@
 
 			enable-method = "psci";
 
+			operating-points-v2 = <&cl2_opp_tbl>;
+			interconnects = <&mc TEGRA_ICC_MC_CPU_CLUSTER2 &emc>;
+
 			i-cache-size = <65536>;
 			i-cache-line-size = <64>;
 			i-cache-sets = <256>;
@@ -2984,6 +3011,9 @@
 
 			enable-method = "psci";
 
+			operating-points-v2 = <&cl2_opp_tbl>;
+			interconnects = <&mc TEGRA_ICC_MC_CPU_CLUSTER2 &emc>;
+
 			i-cache-size = <65536>;
 			i-cache-line-size = <64>;
 			i-cache-sets = <256>;
@@ -3000,6 +3030,9 @@
 
 			enable-method = "psci";
 
+			operating-points-v2 = <&cl2_opp_tbl>;
+			interconnects = <&mc TEGRA_ICC_MC_CPU_CLUSTER2 &emc>;
+
 			i-cache-size = <65536>;
 			i-cache-line-size = <64>;
 			i-cache-sets = <256>;
@@ -3016,6 +3049,9 @@
 
 			enable-method = "psci";
 
+			operating-points-v2 = <&cl2_opp_tbl>;
+			interconnects = <&mc TEGRA_ICC_MC_CPU_CLUSTER2 &emc>;
+
 			i-cache-size = <65536>;
 			i-cache-line-size = <64>;
 			i-cache-sets = <256>;
@@ -3272,4 +3308,244 @@
 		interrupt-parent = <&gic>;
 		always-on;
 	};
+
+	cl0_opp_tbl: opp-table-cluster0 {
+		compatible = "operating-points-v2";
+		opp-shared;
+
+		cl0_ch1_opp1: opp-115200000 {
+			opp-hz = /bits/ 64 <115200000>;
+			opp-peak-kBps = <816000>;
+		};
+
+		cl0_ch1_opp2: opp-268800000 {
+			opp-hz = /bits/ 64 <268800000>;
+			opp-peak-kBps = <816000>;
+		};
+
+		cl0_ch1_opp3: opp-422400000 {
+			opp-hz = /bits/ 64 <422400000>;
+			opp-peak-kBps = <816000>;
+		};
+
+		cl0_ch1_opp4: opp-576000000 {
+			opp-hz = /bits/ 64 <576000000>;
+			opp-peak-kBps = <816000>;
+		};
+
+		cl0_ch1_opp5: opp-729600000 {
+			opp-hz = /bits/ 64 <729600000>;
+			opp-peak-kBps = <816000>;
+		};
+
+		cl0_ch1_opp6: opp-883200000 {
+			opp-hz = /bits/ 64 <883200000>;
+			opp-peak-kBps = <816000>;
+		};
+
+		cl0_ch1_opp7: opp-1036800000 {
+			opp-hz = /bits/ 64 <1036800000>;
+			opp-peak-kBps = <816000>;
+		};
+
+		cl0_ch1_opp8: opp-1190400000 {
+			opp-hz = /bits/ 64 <1190400000>;
+			opp-peak-kBps = <816000>;
+		};
+
+		cl0_ch1_opp9: opp-1344000000 {
+			opp-hz = /bits/ 64 <1344000000>;
+			opp-peak-kBps = <1632000>;
+		};
+
+		cl0_ch1_opp10: opp-1497600000 {
+			opp-hz = /bits/ 64 <1497600000>;
+			opp-peak-kBps = <1632000>;
+		};
+
+		cl0_ch1_opp11: opp-1651200000 {
+			opp-hz = /bits/ 64 <1651200000>;
+			opp-peak-kBps = <2660000>;
+		};
+
+		cl0_ch1_opp12: opp-1804800000 {
+			opp-hz = /bits/ 64 <1804800000>;
+			opp-peak-kBps = <2660000>;
+		};
+
+		cl0_ch1_opp13: opp-1958400000 {
+			opp-hz = /bits/ 64 <1958400000>;
+			opp-peak-kBps = <3200000>;
+		};
+
+		cl0_ch1_opp14: opp-2112000000 {
+			opp-hz = /bits/ 64 <2112000000>;
+			opp-peak-kBps = <6400000>;
+		};
+
+		cl0_ch1_opp15: opp-2201600000 {
+			opp-hz = /bits/ 64 <2201600000>;
+			opp-peak-kBps = <6400000>;
+		};
+	};
+
+	cl1_opp_tbl: opp-table-cluster1 {
+		compatible = "operating-points-v2";
+		opp-shared;
+
+		cl1_ch1_opp1: opp-115200000 {
+			opp-hz = /bits/ 64 <115200000>;
+			opp-peak-kBps = <816000>;
+		};
+
+		cl1_ch1_opp2: opp-268800000 {
+			opp-hz = /bits/ 64 <268800000>;
+			opp-peak-kBps = <816000>;
+		};
+
+		cl1_ch1_opp3: opp-422400000 {
+			opp-hz = /bits/ 64 <422400000>;
+			opp-peak-kBps = <816000>;
+		};
+
+		cl1_ch1_opp4: opp-576000000 {
+			opp-hz = /bits/ 64 <576000000>;
+			opp-peak-kBps = <816000>;
+		};
+
+		cl1_ch1_opp5: opp-729600000 {
+			opp-hz = /bits/ 64 <729600000>;
+			opp-peak-kBps = <816000>;
+		};
+
+		cl1_ch1_opp6: opp-883200000 {
+			opp-hz = /bits/ 64 <883200000>;
+			opp-peak-kBps = <816000>;
+		};
+
+		cl1_ch1_opp7: opp-1036800000 {
+			opp-hz = /bits/ 64 <1036800000>;
+			opp-peak-kBps = <816000>;
+		};
+
+		cl1_ch1_opp8: opp-1190400000 {
+			opp-hz = /bits/ 64 <1190400000>;
+			opp-peak-kBps = <816000>;
+		};
+
+		cl1_ch1_opp9: opp-1344000000 {
+			opp-hz = /bits/ 64 <1344000000>;
+			opp-peak-kBps = <1632000>;
+		};
+
+		cl1_ch1_opp10: opp-1497600000 {
+			opp-hz = /bits/ 64 <1497600000>;
+			opp-peak-kBps = <1632000>;
+		};
+
+		cl1_ch1_opp11: opp-1651200000 {
+			opp-hz = /bits/ 64 <1651200000>;
+			opp-peak-kBps = <2660000>;
+		};
+
+		cl1_ch1_opp12: opp-1804800000 {
+			opp-hz = /bits/ 64 <1804800000>;
+			opp-peak-kBps = <2660000>;
+		};
+
+		cl1_ch1_opp13: opp-1958400000 {
+			opp-hz = /bits/ 64 <1958400000>;
+			opp-peak-kBps = <3200000>;
+		};
+
+		cl1_ch1_opp14: opp-2112000000 {
+			opp-hz = /bits/ 64 <2112000000>;
+			opp-peak-kBps = <6400000>;
+		};
+
+		cl1_ch1_opp15: opp-2201600000 {
+			opp-hz = /bits/ 64 <2201600000>;
+			opp-peak-kBps = <6400000>;
+		};
+	};
+
+	cl2_opp_tbl: opp-table-cluster2 {
+		compatible = "operating-points-v2";
+		opp-shared;
+
+		cl2_ch1_opp1: opp-115200000 {
+			opp-hz = /bits/ 64 <115200000>;
+			opp-peak-kBps = <816000>;
+		};
+
+		cl2_ch1_opp2: opp-268800000 {
+			opp-hz = /bits/ 64 <268800000>;
+			opp-peak-kBps = <816000>;
+		};
+
+		cl2_ch1_opp3: opp-422400000 {
+			opp-hz = /bits/ 64 <422400000>;
+			opp-peak-kBps = <816000>;
+		};
+
+		cl2_ch1_opp4: opp-576000000 {
+			opp-hz = /bits/ 64 <576000000>;
+			opp-peak-kBps = <816000>;
+		};
+
+		cl2_ch1_opp5: opp-729600000 {
+			opp-hz = /bits/ 64 <729600000>;
+			opp-peak-kBps = <816000>;
+		};
+
+		cl2_ch1_opp6: opp-883200000 {
+			opp-hz = /bits/ 64 <883200000>;
+			opp-peak-kBps = <816000>;
+		};
+
+		cl2_ch1_opp7: opp-1036800000 {
+			opp-hz = /bits/ 64 <1036800000>;
+			opp-peak-kBps = <816000>;
+		};
+
+		cl2_ch1_opp8: opp-1190400000 {
+			opp-hz = /bits/ 64 <1190400000>;
+			opp-peak-kBps = <816000>;
+		};
+
+		cl2_ch1_opp9: opp-1344000000 {
+			opp-hz = /bits/ 64 <1344000000>;
+			opp-peak-kBps = <1632000>;
+		};
+
+		cl2_ch1_opp10: opp-1497600000 {
+			opp-hz = /bits/ 64 <1497600000>;
+			opp-peak-kBps = <1632000>;
+		};
+
+		cl2_ch1_opp11: opp-1651200000 {
+			opp-hz = /bits/ 64 <1651200000>;
+			opp-peak-kBps = <2660000>;
+		};
+
+		cl2_ch1_opp12: opp-1804800000 {
+			opp-hz = /bits/ 64 <1804800000>;
+			opp-peak-kBps = <2660000>;
+		};
+
+		cl2_ch1_opp13: opp-1958400000 {
+			opp-hz = /bits/ 64 <1958400000>;
+			opp-peak-kBps = <3200000>;
+		};
+
+		cl2_ch1_opp14: opp-2112000000 {
+			opp-hz = /bits/ 64 <2112000000>;
+			opp-peak-kBps = <6400000>;
+		};
+
+		cl2_ch1_opp15: opp-2201600000 {
+			opp-hz = /bits/ 64 <2201600000>;
+			opp-peak-kBps = <6400000>;
+		};
+	};
 };
-- 
2.17.1



* [Patch v1 07/10] cpufreq: Add Tegra234 to cpufreq-dt-platdev blocklist
  2022-12-20 16:02 [Patch v1 00/10] Tegra234 Memory interconnect support Sumit Gupta
                   ` (5 preceding siblings ...)
  2022-12-20 16:02 ` [Patch v1 06/10] arm64: tegra: Add cpu OPP tables and interconnects property Sumit Gupta
@ 2022-12-20 16:02 ` Sumit Gupta
  2022-12-21  5:01   ` Viresh Kumar
  2022-12-20 16:02 ` [Patch v1 08/10] cpufreq: tegra194: add OPP support and set bandwidth Sumit Gupta
                   ` (2 subsequent siblings)
  9 siblings, 1 reply; 53+ messages in thread
From: Sumit Gupta @ 2022-12-20 16:02 UTC (permalink / raw)
  To: treding, krzysztof.kozlowski, dmitry.osipenko, viresh.kumar,
	rafael, jonathanh, robh+dt, linux-kernel, linux-tegra, linux-pm,
	devicetree
  Cc: sanjayc, ksitaraman, ishah, bbasu, sumitg

The Tegra234 platform uses the tegra194-cpufreq driver, so add it
to the blocklist in the cpufreq-dt-platdev driver to avoid
registering the cpufreq-dt driver for it.

Signed-off-by: Sumit Gupta <sumitg@nvidia.com>
---
 drivers/cpufreq/cpufreq-dt-platdev.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/cpufreq/cpufreq-dt-platdev.c b/drivers/cpufreq/cpufreq-dt-platdev.c
index 8ab672883043..e329d29d1f9d 100644
--- a/drivers/cpufreq/cpufreq-dt-platdev.c
+++ b/drivers/cpufreq/cpufreq-dt-platdev.c
@@ -137,6 +137,7 @@ static const struct of_device_id blocklist[] __initconst = {
 	{ .compatible = "nvidia,tegra30", },
 	{ .compatible = "nvidia,tegra124", },
 	{ .compatible = "nvidia,tegra210", },
+	{ .compatible = "nvidia,tegra234", },
 
 	{ .compatible = "qcom,apq8096", },
 	{ .compatible = "qcom,msm8996", },
-- 
2.17.1



* [Patch v1 08/10] cpufreq: tegra194: add OPP support and set bandwidth
  2022-12-20 16:02 [Patch v1 00/10] Tegra234 Memory interconnect support Sumit Gupta
                   ` (6 preceding siblings ...)
  2022-12-20 16:02 ` [Patch v1 07/10] cpufreq: Add Tegra234 to cpufreq-dt-platdev blocklist Sumit Gupta
@ 2022-12-20 16:02 ` Sumit Gupta
  2022-12-22 15:46   ` Dmitry Osipenko
  2022-12-20 16:02 ` [Patch v1 09/10] memory: tegra: get number of enabled mc channels Sumit Gupta
  2022-12-20 16:02 ` [Patch v1 10/10] memory: tegra: make cluster bw request a multiple of mc_channels Sumit Gupta
  9 siblings, 1 reply; 53+ messages in thread
From: Sumit Gupta @ 2022-12-20 16:02 UTC (permalink / raw)
  To: treding, krzysztof.kozlowski, dmitry.osipenko, viresh.kumar,
	rafael, jonathanh, robh+dt, linux-kernel, linux-tegra, linux-pm,
	devicetree
  Cc: sanjayc, ksitaraman, ishah, bbasu, sumitg

Add support to use the OPP table from DT in the Tegra194 cpufreq driver.
Tegra SoCs receive the frequency lookup table (LUT) from the BPMP-FW.
Cross-check the OPPs present in DT against the LUT from the BPMP-FW and
enable only those DT OPPs which are also present in the LUT.

The OPP table in DT maps each CPU frequency to a bandwidth value, where
the bandwidth value is per MC channel. DRAM bandwidth depends on the
number of MC channels, which can vary as per the boot configuration.
This per-channel bandwidth from the OPP table is later converted by the
MC driver to the final bandwidth value by multiplying it with the
number of channels before sending the request to the BPMP-FW.
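
For example (illustrative numbers only, combining an opp-peak-kBps
value from the OPP tables in patch 06 with the 16-channel Tegra234
configuration): if an OPP entry requests 816000 kBps per channel and
all 16 MC channels are enabled, the MC driver scales the request to
816000 * 16 = 13056000 kBps (~13 GB/s) before passing it to the
BPMP-FW.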

If the OPP table is not present in DT, then use the LUT from the BPMP-FW
directly as the frequency table and do not do DRAM frequency scaling,
which is the same as the current behavior.

Now that the CPU frequency table is controlled through the OPP table in
DT, keeping fewer entries in the table will create fewer frequency
steps and scale faster to high frequencies when required.

Signed-off-by: Sumit Gupta <sumitg@nvidia.com>
---
 drivers/cpufreq/tegra194-cpufreq.c | 152 ++++++++++++++++++++++++++---
 1 file changed, 139 insertions(+), 13 deletions(-)

diff --git a/drivers/cpufreq/tegra194-cpufreq.c b/drivers/cpufreq/tegra194-cpufreq.c
index 4596c3e323aa..8155a9ea0815 100644
--- a/drivers/cpufreq/tegra194-cpufreq.c
+++ b/drivers/cpufreq/tegra194-cpufreq.c
@@ -12,6 +12,7 @@
 #include <linux/of_platform.h>
 #include <linux/platform_device.h>
 #include <linux/slab.h>
+#include <linux/units.h>
 
 #include <asm/smp_plat.h>
 
@@ -65,12 +66,32 @@ struct tegra_cpufreq_soc {
 
 struct tegra194_cpufreq_data {
 	void __iomem *regs;
-	struct cpufreq_frequency_table **tables;
+	struct cpufreq_frequency_table **bpmp_luts;
 	const struct tegra_cpufreq_soc *soc;
+	bool icc_dram_bw_scaling;
 };
 
 static struct workqueue_struct *read_counters_wq;
 
+static int tegra_cpufreq_set_bw(struct cpufreq_policy *policy, unsigned long freq_khz)
+{
+	struct dev_pm_opp *opp;
+	struct device *dev;
+	int ret;
+
+	dev = get_cpu_device(policy->cpu);
+	if (!dev)
+		return -ENODEV;
+
+	opp = dev_pm_opp_find_freq_exact(dev, freq_khz * KHZ, true);
+	if (IS_ERR(opp))
+		return PTR_ERR(opp);
+
+	ret = dev_pm_opp_set_opp(dev, opp);
+	dev_pm_opp_put(opp);
+	return ret;
+}
+
 static void tegra_get_cpu_mpidr(void *mpidr)
 {
 	*((u64 *)mpidr) = read_cpuid_mpidr() & MPIDR_HWID_BITMASK;
@@ -354,7 +375,7 @@ static unsigned int tegra194_get_speed(u32 cpu)
 	 * to the last written ndiv value from freq_table. This is
 	 * done to return consistent value.
 	 */
-	cpufreq_for_each_valid_entry(pos, data->tables[clusterid]) {
+	cpufreq_for_each_valid_entry(pos, data->bpmp_luts[clusterid]) {
 		if (pos->driver_data != ndiv)
 			continue;
 
@@ -369,16 +390,93 @@ static unsigned int tegra194_get_speed(u32 cpu)
 	return rate;
 }
 
+int tegra_cpufreq_init_cpufreq_table(struct cpufreq_policy *policy,
+				     struct cpufreq_frequency_table *bpmp_lut,
+				     struct cpufreq_frequency_table **opp_table)
+{
+	struct tegra194_cpufreq_data *data = cpufreq_get_driver_data();
+	struct cpufreq_frequency_table *freq_table = NULL;
+	struct cpufreq_frequency_table *pos;
+	int ret = 0, max_opps = 0;
+	struct device *cpu_dev;
+	struct dev_pm_opp *opp;
+	unsigned long rate;
+	int j = 0;
+
+	cpu_dev = get_cpu_device(policy->cpu);
+	if (!cpu_dev) {
+		pr_err("%s: failed to get cpu%d device\n", __func__, policy->cpu);
+		return -ENODEV;
+	}
+
+	/* Initialize OPP table mentioned in operating-points-v2 property in DT */
+	ret = dev_pm_opp_of_add_table_indexed(cpu_dev, 0);
+	if (!ret) {
+		max_opps = dev_pm_opp_get_opp_count(cpu_dev);
+		if (max_opps <= 0) {
+			dev_err(cpu_dev, "Failed to add OPPs\n");
+			return max_opps;
+		}
+
+		/* Disable all opps and cross-validate against LUT later */
+		for (rate = 0; ; rate++) {
+			opp = dev_pm_opp_find_freq_ceil(cpu_dev, &rate);
+			if (IS_ERR(opp))
+				break;
+
+			dev_pm_opp_put(opp);
+			dev_pm_opp_disable(cpu_dev, rate);
+		}
+	} else {
+		dev_err(cpu_dev, "Invalid or empty opp table in device tree\n");
+		data->icc_dram_bw_scaling = false;
+		return ret;
+	}
+
+	freq_table = kcalloc((max_opps + 1), sizeof(*freq_table), GFP_KERNEL);
+	if (!freq_table)
+		return -ENOMEM;
+
+	/*
+	 * Cross check the frequencies from BPMP-FW LUT against the OPP's present in DT.
+	 * Enable only those DT OPP's which are present in LUT also.
+	 */
+	cpufreq_for_each_valid_entry(pos, bpmp_lut) {
+		opp = dev_pm_opp_find_freq_exact(cpu_dev, pos->frequency * KHZ, false);
+		if (IS_ERR(opp))
+			continue;
+
+		ret = dev_pm_opp_enable(cpu_dev, pos->frequency * KHZ);
+		if (ret < 0)
+			return ret;
+
+		freq_table[j].driver_data = pos->driver_data;
+		freq_table[j].frequency = pos->frequency;
+		j++;
+	}
+
+	freq_table[j].driver_data = pos->driver_data;
+	freq_table[j].frequency = CPUFREQ_TABLE_END;
+
+	*opp_table = &freq_table[0];
+
+	dev_pm_opp_set_sharing_cpus(cpu_dev, policy->cpus);
+
+	return ret;
+}
+
 static int tegra194_cpufreq_init(struct cpufreq_policy *policy)
 {
 	struct tegra194_cpufreq_data *data = cpufreq_get_driver_data();
 	int maxcpus_per_cluster = data->soc->maxcpus_per_cluster;
+	struct cpufreq_frequency_table *freq_table;
+	struct cpufreq_frequency_table *bpmp_lut;
 	u32 start_cpu, cpu;
 	u32 clusterid;
+	int ret;
 
 	data->soc->ops->get_cpu_cluster_id(policy->cpu, NULL, &clusterid);
-
-	if (clusterid >= data->soc->num_clusters || !data->tables[clusterid])
+	if (clusterid >= data->soc->num_clusters || !data->bpmp_luts[clusterid])
 		return -EINVAL;
 
 	start_cpu = rounddown(policy->cpu, maxcpus_per_cluster);
@@ -387,9 +485,22 @@ static int tegra194_cpufreq_init(struct cpufreq_policy *policy)
 		if (cpu_possible(cpu))
 			cpumask_set_cpu(cpu, policy->cpus);
 	}
-	policy->freq_table = data->tables[clusterid];
 	policy->cpuinfo.transition_latency = TEGRA_CPUFREQ_TRANSITION_LATENCY;
 
+	bpmp_lut = data->bpmp_luts[clusterid];
+
+	if (data->icc_dram_bw_scaling) {
+		ret = tegra_cpufreq_init_cpufreq_table(policy, bpmp_lut, &freq_table);
+		if (!ret) {
+			policy->freq_table = freq_table;
+			return 0;
+		}
+	}
+
+	data->icc_dram_bw_scaling = false;
+	policy->freq_table = bpmp_lut;
+	pr_info("OPP tables missing from DT, EMC frequency scaling disabled\n");
+
 	return 0;
 }
 
@@ -406,6 +517,9 @@ static int tegra194_cpufreq_set_target(struct cpufreq_policy *policy,
 	 */
 	data->soc->ops->set_cpu_ndiv(policy, (u64)tbl->driver_data);
 
+	if (data->icc_dram_bw_scaling)
+		tegra_cpufreq_set_bw(policy, tbl->frequency);
+
 	return 0;
 }
 
@@ -438,8 +552,8 @@ static void tegra194_cpufreq_free_resources(void)
 }
 
 static struct cpufreq_frequency_table *
-init_freq_table(struct platform_device *pdev, struct tegra_bpmp *bpmp,
-		unsigned int cluster_id)
+tegra_cpufreq_bpmp_read_lut(struct platform_device *pdev, struct tegra_bpmp *bpmp,
+			    unsigned int cluster_id)
 {
 	struct cpufreq_frequency_table *freq_table;
 	struct mrq_cpu_ndiv_limits_response resp;
@@ -514,6 +628,7 @@ static int tegra194_cpufreq_probe(struct platform_device *pdev)
 	const struct tegra_cpufreq_soc *soc;
 	struct tegra194_cpufreq_data *data;
 	struct tegra_bpmp *bpmp;
+	struct device *cpu_dev;
 	int err, i;
 
 	data = devm_kzalloc(&pdev->dev, sizeof(*data), GFP_KERNEL);
@@ -529,9 +644,9 @@ static int tegra194_cpufreq_probe(struct platform_device *pdev)
 		return -EINVAL;
 	}
 
-	data->tables = devm_kcalloc(&pdev->dev, data->soc->num_clusters,
-				    sizeof(*data->tables), GFP_KERNEL);
-	if (!data->tables)
+	data->bpmp_luts = devm_kcalloc(&pdev->dev, data->soc->num_clusters,
+				       sizeof(*data->bpmp_luts), GFP_KERNEL);
+	if (!data->bpmp_luts)
 		return -ENOMEM;
 
 	if (soc->actmon_cntr_base) {
@@ -555,15 +670,26 @@ static int tegra194_cpufreq_probe(struct platform_device *pdev)
 	}
 
 	for (i = 0; i < data->soc->num_clusters; i++) {
-		data->tables[i] = init_freq_table(pdev, bpmp, i);
-		if (IS_ERR(data->tables[i])) {
-			err = PTR_ERR(data->tables[i]);
+		data->bpmp_luts[i] = tegra_cpufreq_bpmp_read_lut(pdev, bpmp, i);
+		if (IS_ERR(data->bpmp_luts[i])) {
+			err = PTR_ERR(data->bpmp_luts[i]);
 			goto err_free_res;
 		}
 	}
 
 	tegra194_cpufreq_driver.driver_data = data;
 
+	/* Check for optional OPPv2 and interconnect paths on CPU0 to enable ICC scaling */
+	cpu_dev = get_cpu_device(0);
+	if (!cpu_dev)
+		return -EPROBE_DEFER;
+
+	if (dev_pm_opp_of_get_opp_desc_node(cpu_dev)) {
+		err = dev_pm_opp_of_find_icc_paths(cpu_dev, NULL);
+		if (!err)
+			data->icc_dram_bw_scaling = true;
+	}
+
 	err = cpufreq_register_driver(&tegra194_cpufreq_driver);
 	if (!err)
 		goto put_bpmp;
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [Patch v1 09/10] memory: tegra: get number of enabled mc channels
  2022-12-20 16:02 [Patch v1 00/10] Tegra234 Memory interconnect support Sumit Gupta
                   ` (7 preceding siblings ...)
  2022-12-20 16:02 ` [Patch v1 08/10] cpufreq: tegra194: add OPP support and set bandwidth Sumit Gupta
@ 2022-12-20 16:02 ` Sumit Gupta
  2022-12-22 11:37   ` Krzysztof Kozlowski
  2022-12-20 16:02 ` [Patch v1 10/10] memory: tegra: make cluster bw request a multiple of mc_channels Sumit Gupta
  9 siblings, 1 reply; 53+ messages in thread
From: Sumit Gupta @ 2022-12-20 16:02 UTC (permalink / raw)
  To: treding, krzysztof.kozlowski, dmitry.osipenko, viresh.kumar,
	rafael, jonathanh, robh+dt, linux-kernel, linux-tegra, linux-pm,
	devicetree
  Cc: sanjayc, ksitaraman, ishah, bbasu, sumitg

Get the number of MC channels which are actually enabled
in the current boot configuration.
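
For reference, a minimal sketch of the idea, equivalent to the helper
added below: read the channel-enable register and count the set bits,
falling back to the SoC default when the register reads zero.

          value = mc_ch_readl(mc, 0, MC_EMEM_ADR_CFG_CHANNEL_ENABLE);
          /* each set bit is one enabled channel; 0 means use the SoC default */
          mc->num_channels = value ? hweight32(value) : mc->soc->num_channels;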

Signed-off-by: Sumit Gupta <sumitg@nvidia.com>
---
 drivers/memory/tegra/mc.c | 19 +++++++++++++++++++
 drivers/memory/tegra/mc.h |  1 +
 include/soc/tegra/mc.h    |  1 +
 3 files changed, 21 insertions(+)

diff --git a/drivers/memory/tegra/mc.c b/drivers/memory/tegra/mc.c
index 4ddf9808fe6b..3d52eb3e24e4 100644
--- a/drivers/memory/tegra/mc.c
+++ b/drivers/memory/tegra/mc.c
@@ -881,6 +881,23 @@ static int tegra_mc_interconnect_setup(struct tegra_mc *mc)
 	return err;
 }
 
+static void tegra_mc_num_channel_enabled(struct tegra_mc *mc)
+{
+	unsigned int i;
+	u32 value = 0;
+
+	value = mc_ch_readl(mc, 0, MC_EMEM_ADR_CFG_CHANNEL_ENABLE);
+	if (value <= 0) {
+		mc->num_channels = mc->soc->num_channels;
+		return;
+	}
+
+	for (i = 0; i < 32; i++) {
+		if (value & BIT(i))
+			mc->num_channels++;
+	}
+}
+
 static int tegra_mc_probe(struct platform_device *pdev)
 {
 	struct tegra_mc *mc;
@@ -919,6 +936,8 @@ static int tegra_mc_probe(struct platform_device *pdev)
 			return err;
 	}
 
+	tegra_mc_num_channel_enabled(mc);
+
 	if (mc->soc->ops && mc->soc->ops->handle_irq) {
 		mc->irq = platform_get_irq(pdev, 0);
 		if (mc->irq < 0)
diff --git a/drivers/memory/tegra/mc.h b/drivers/memory/tegra/mc.h
index bc01586b6560..c3f6655bec60 100644
--- a/drivers/memory/tegra/mc.h
+++ b/drivers/memory/tegra/mc.h
@@ -53,6 +53,7 @@
 #define MC_ERR_ROUTE_SANITY_ADR				0x9c4
 #define MC_ERR_GENERALIZED_CARVEOUT_STATUS		0xc00
 #define MC_ERR_GENERALIZED_CARVEOUT_ADR			0xc04
+#define MC_EMEM_ADR_CFG_CHANNEL_ENABLE			0xdf8
 #define MC_GLOBAL_INTSTATUS				0xf24
 #define MC_ERR_ADR_HI					0x11fc
 
diff --git a/include/soc/tegra/mc.h b/include/soc/tegra/mc.h
index 6a94e88b6100..815c7d5f6292 100644
--- a/include/soc/tegra/mc.h
+++ b/include/soc/tegra/mc.h
@@ -236,6 +236,7 @@ struct tegra_mc {
 
 	struct tegra_mc_timing *timings;
 	unsigned int num_timings;
+	unsigned int num_channels;
 
 	struct reset_controller_dev reset;
 
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [Patch v1 10/10] memory: tegra: make cluster bw request a multiple of mc_channels
  2022-12-20 16:02 [Patch v1 00/10] Tegra234 Memory interconnect support Sumit Gupta
                   ` (8 preceding siblings ...)
  2022-12-20 16:02 ` [Patch v1 09/10] memory: tegra: get number of enabled mc channels Sumit Gupta
@ 2022-12-20 16:02 ` Sumit Gupta
  9 siblings, 0 replies; 53+ messages in thread
From: Sumit Gupta @ 2022-12-20 16:02 UTC (permalink / raw)
  To: treding, krzysztof.kozlowski, dmitry.osipenko, viresh.kumar,
	rafael, jonathanh, robh+dt, linux-kernel, linux-tegra, linux-pm,
	devicetree
  Cc: sanjayc, ksitaraman, ishah, bbasu, sumitg

The CPU OPP tables have bandwidth data per MC channel. The actual
bandwidth depends on the number of MC channels, which can change as per
the boot configuration of a board. So, multiply the bandwidth request
from the CPU clusters by the number of enabled MC channels. This is not
required for other MC clients.
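
A small worked example of the scaling done in the aggregate callback
below (the numbers are illustrative only):

          /* 3,200,000 kBps per channel * 4 enabled channels = 12,800,000 kBps */
          peak_bw = peak_bw * mc->num_channels;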

Signed-off-by: Sumit Gupta <sumitg@nvidia.com>
---
 drivers/memory/tegra/tegra234.c | 25 ++++++++++++++++++++++++-
 1 file changed, 24 insertions(+), 1 deletion(-)

diff --git a/drivers/memory/tegra/tegra234.c b/drivers/memory/tegra/tegra234.c
index 82ce6c3c3eb0..a45fc99b9c82 100644
--- a/drivers/memory/tegra/tegra234.c
+++ b/drivers/memory/tegra/tegra234.c
@@ -834,6 +834,29 @@ static int tegra234_mc_icc_set(struct icc_node *src, struct icc_node *dst)
 	return 0;
 }
 
+static int tegra234_mc_icc_aggregate(struct icc_node *node, u32 tag, u32 avg_bw,
+				     u32 peak_bw, u32 *agg_avg, u32 *agg_peak)
+{
+	struct tegra_icc_node *tnode = NULL;
+	struct tegra_mc *mc = NULL;
+
+	if (node->id == TEGRA_ICC_MC_CPU_CLUSTER0 ||
+	    node->id == TEGRA_ICC_MC_CPU_CLUSTER1 ||
+	    node->id == TEGRA_ICC_MC_CPU_CLUSTER2) {
+		if (node->data) {
+			tnode = node->data;
+			mc = tnode->mc;
+			if (mc)
+				peak_bw = peak_bw * mc->num_channels;
+		}
+	}
+
+	*agg_avg += avg_bw;
+	*agg_peak = max(*agg_peak, peak_bw);
+
+	return 0;
+}
+
 static struct icc_node*
 tegra234_mc_of_icc_xlate(struct of_phandle_args *spec, void *data)
 {
@@ -866,7 +889,7 @@ static int tegra234_mc_icc_get_init_bw(struct icc_node *node, u32 *avg, u32 *pea
 static const struct tegra_mc_icc_ops tegra234_mc_icc_ops = {
 	.xlate = tegra234_mc_of_icc_xlate,
 	.get_bw = tegra234_mc_icc_get_init_bw,
-	.aggregate = icc_std_aggregate,
+	.aggregate = tegra234_mc_icc_aggregate,
 	.set = tegra234_mc_icc_set,
 };
 
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 53+ messages in thread

* Re: [Patch v1 01/10] memory: tegra: add interconnect support for DRAM scaling in Tegra234
  2022-12-20 16:02 ` [Patch v1 01/10] memory: tegra: add interconnect support for DRAM scaling in Tegra234 Sumit Gupta
@ 2022-12-20 18:05   ` Dmitry Osipenko
  2022-12-21  7:53     ` Sumit Gupta
  2022-12-20 18:06   ` Dmitry Osipenko
                     ` (8 subsequent siblings)
  9 siblings, 1 reply; 53+ messages in thread
From: Dmitry Osipenko @ 2022-12-20 18:05 UTC (permalink / raw)
  To: Sumit Gupta, treding, krzysztof.kozlowski, dmitry.osipenko,
	viresh.kumar, rafael, jonathanh, robh+dt, linux-kernel,
	linux-tegra, linux-pm, devicetree

20.12.2022 19:02, Sumit Gupta пишет:
> @@ -779,6 +780,7 @@ const char *const tegra_mc_error_names[8] = {
>   */
>  static int tegra_mc_interconnect_setup(struct tegra_mc *mc)
>  {
> +	struct tegra_icc_node *tnode;
>  	struct icc_node *node;
>  	unsigned int i;
>  	int err;
> @@ -792,7 +794,11 @@ static int tegra_mc_interconnect_setup(struct tegra_mc *mc)
>  	mc->provider.data = &mc->provider;
>  	mc->provider.set = mc->soc->icc_ops->set;
>  	mc->provider.aggregate = mc->soc->icc_ops->aggregate;
> -	mc->provider.xlate_extended = mc->soc->icc_ops->xlate_extended;
> +	mc->provider.get_bw = mc->soc->icc_ops->get_bw;
> +	if (mc->soc->icc_ops->xlate)
> +		mc->provider.xlate = mc->soc->icc_ops->xlate;
> +	if (mc->soc->icc_ops->xlate_extended)
> +		mc->provider.xlate_extended = mc->soc->icc_ops->xlate_extended;

These IFs look pointless

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [Patch v1 01/10] memory: tegra: add interconnect support for DRAM scaling in Tegra234
  2022-12-20 16:02 ` [Patch v1 01/10] memory: tegra: add interconnect support for DRAM scaling in Tegra234 Sumit Gupta
  2022-12-20 18:05   ` Dmitry Osipenko
@ 2022-12-20 18:06   ` Dmitry Osipenko
  2022-12-21  7:54     ` Sumit Gupta
  2022-12-20 18:07   ` Dmitry Osipenko
                     ` (7 subsequent siblings)
  9 siblings, 1 reply; 53+ messages in thread
From: Dmitry Osipenko @ 2022-12-20 18:06 UTC (permalink / raw)
  To: Sumit Gupta, treding, krzysztof.kozlowski, dmitry.osipenko,
	viresh.kumar, rafael, jonathanh, robh+dt, linux-kernel,
	linux-tegra, linux-pm, devicetree
  Cc: sanjayc, ksitaraman, ishah, bbasu

20.12.2022 19:02, Sumit Gupta пишет:
> +		if (tegra_bpmp_mrq_is_supported(emc->bpmp, MRQ_BWMGR_INT)) {
> +			err = tegra_emc_interconnect_init(emc);
> +			if (!err)
> +				return err;

"return err" doesn't sound good

> +			dev_err(&pdev->dev, "tegra_emc_interconnect_init failed:%d\n", err);
> +			goto put_bpmp;

Move error message to tegra_emc_interconnect_init()
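
For illustration, one way the probe path could look with both comments
addressed, i.e. with the dev_err() moved into tegra_emc_interconnect_init()
itself:

          if (tegra_bpmp_mrq_is_supported(emc->bpmp, MRQ_BWMGR_INT)) {
                  err = tegra_emc_interconnect_init(emc);
                  if (!err)
                          return 0;

                  goto put_bpmp;
          }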

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [Patch v1 01/10] memory: tegra: add interconnect support for DRAM scaling in Tegra234
  2022-12-20 16:02 ` [Patch v1 01/10] memory: tegra: add interconnect support for DRAM scaling in Tegra234 Sumit Gupta
  2022-12-20 18:05   ` Dmitry Osipenko
  2022-12-20 18:06   ` Dmitry Osipenko
@ 2022-12-20 18:07   ` Dmitry Osipenko
  2022-12-21  8:05     ` Sumit Gupta
  2022-12-20 18:10   ` Dmitry Osipenko
                     ` (6 subsequent siblings)
  9 siblings, 1 reply; 53+ messages in thread
From: Dmitry Osipenko @ 2022-12-20 18:07 UTC (permalink / raw)
  To: Sumit Gupta, treding, krzysztof.kozlowski, dmitry.osipenko,
	viresh.kumar, rafael, jonathanh, robh+dt, linux-kernel,
	linux-tegra, linux-pm, devicetree
  Cc: sanjayc, ksitaraman, ishah, bbasu

20.12.2022 19:02, Sumit Gupta пишет:
> +#ifndef MEMORY_TEGRA_ICC_H
> +#define MEMORY_TEGRA_ICC_H
> +
> +enum tegra_icc_client_type {
> +	TEGRA_ICC_NONE,
> +	TEGRA_ICC_NISO,
> +	TEGRA_ICC_ISO_DISPLAY,
> +	TEGRA_ICC_ISO_VI,
> +	TEGRA_ICC_ISO_AUDIO,
> +	TEGRA_ICC_ISO_VIFAL,
> +};

You using only TEGRA_ICC_NISO and !TEGRA_ICC_NISO in the code.

include/soc/tegra/mc.h defines TAG_DEFAULT/ISO, please drop all these
duplicated and unused "types" unless there is a good reason to keep them


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [Patch v1 01/10] memory: tegra: add interconnect support for DRAM scaling in Tegra234
  2022-12-20 16:02 ` [Patch v1 01/10] memory: tegra: add interconnect support for DRAM scaling in Tegra234 Sumit Gupta
                     ` (2 preceding siblings ...)
  2022-12-20 18:07   ` Dmitry Osipenko
@ 2022-12-20 18:10   ` Dmitry Osipenko
  2022-12-21  9:35     ` Sumit Gupta
  2022-12-21  0:55   ` Dmitry Osipenko
                     ` (5 subsequent siblings)
  9 siblings, 1 reply; 53+ messages in thread
From: Dmitry Osipenko @ 2022-12-20 18:10 UTC (permalink / raw)
  To: Sumit Gupta, treding, krzysztof.kozlowski, dmitry.osipenko,
	viresh.kumar, rafael, jonathanh, robh+dt, linux-kernel,
	linux-tegra, linux-pm, devicetree

20.12.2022 19:02, Sumit Gupta пишет:
> +static int tegra_emc_icc_get_init_bw(struct icc_node *node, u32 *avg, u32 *peak)
> +{
> +	*avg = 0;
> +	*peak = 0;
> +
> +	return 0;
> +}

Looks wrong, you should add ICC support to all the drivers first and
only then enable ICC. I think you added this init_bw() to work around
the fact that ICC isn't supported by T234 drivers.

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [Patch v1 01/10] memory: tegra: add interconnect support for DRAM scaling in Tegra234
  2022-12-20 16:02 ` [Patch v1 01/10] memory: tegra: add interconnect support for DRAM scaling in Tegra234 Sumit Gupta
                     ` (3 preceding siblings ...)
  2022-12-20 18:10   ` Dmitry Osipenko
@ 2022-12-21  0:55   ` Dmitry Osipenko
  2022-12-21  8:07     ` Sumit Gupta
  2022-12-21 16:54   ` Dmitry Osipenko
                     ` (4 subsequent siblings)
  9 siblings, 1 reply; 53+ messages in thread
From: Dmitry Osipenko @ 2022-12-21  0:55 UTC (permalink / raw)
  To: Sumit Gupta, treding, krzysztof.kozlowski, dmitry.osipenko,
	viresh.kumar, rafael, jonathanh, robh+dt, linux-kernel,
	linux-tegra, linux-pm, devicetree
  Cc: sanjayc, ksitaraman, ishah, bbasu

20.12.2022 19:02, Sumit Gupta пишет:
> +static int tegra234_mc_icc_set(struct icc_node *src, struct icc_node *dst)
> +{
> +	struct tegra_mc *mc = icc_provider_to_tegra_mc(dst->provider);
> +	struct tegra_icc_node *tnode = src->data;
> +
> +	/*
> +	 * Same Src and Dst node will happen during boot from icc_node_add().
> +	 * This can be used to pre-initialize and set bandwidth for all clients
> +	 * before their drivers are loaded. We are skipping this case as for us,
> +	 * the pre-initialization already happened in Bootloader(MB2) and BPMP-FW.
> +	 */
> +	if (src->id == dst->id)
> +		return 0;
> +
> +	if (tnode->node)
> +		mc->curr_tnode = tnode;
> +	else
> +		pr_err("%s, tegra_icc_node is null\n", __func__);

The tnode->node can't be NULL.

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [Patch v1 07/10] cpufreq: Add Tegra234 to cpufreq-dt-platdev blocklist
  2022-12-20 16:02 ` [Patch v1 07/10] cpufreq: Add Tegra234 to cpufreq-dt-platdev blocklist Sumit Gupta
@ 2022-12-21  5:01   ` Viresh Kumar
  0 siblings, 0 replies; 53+ messages in thread
From: Viresh Kumar @ 2022-12-21  5:01 UTC (permalink / raw)
  To: Sumit Gupta
  Cc: treding, krzysztof.kozlowski, dmitry.osipenko, rafael, jonathanh,
	robh+dt, linux-kernel, linux-tegra, linux-pm, devicetree,
	sanjayc, ksitaraman, ishah, bbasu

On 20-12-22, 21:32, Sumit Gupta wrote:
> The Tegra234 platform uses the tegra194-cpufreq driver, so add it
> to the blocklist in the cpufreq-dt-platdev driver so that the
> cpufreq-dt driver does not get registered for it.
> 
> Signed-off-by: Sumit Gupta <sumitg@nvidia.com>
> ---
>  drivers/cpufreq/cpufreq-dt-platdev.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/drivers/cpufreq/cpufreq-dt-platdev.c b/drivers/cpufreq/cpufreq-dt-platdev.c
> index 8ab672883043..e329d29d1f9d 100644
> --- a/drivers/cpufreq/cpufreq-dt-platdev.c
> +++ b/drivers/cpufreq/cpufreq-dt-platdev.c
> @@ -137,6 +137,7 @@ static const struct of_device_id blocklist[] __initconst = {
>  	{ .compatible = "nvidia,tegra30", },
>  	{ .compatible = "nvidia,tegra124", },
>  	{ .compatible = "nvidia,tegra210", },
> +	{ .compatible = "nvidia,tegra234", },
>  
>  	{ .compatible = "qcom,apq8096", },
>  	{ .compatible = "qcom,msm8996", },

Applied. Thanks.

-- 
viresh

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [Patch v1 01/10] memory: tegra: add interconnect support for DRAM scaling in Tegra234
  2022-12-20 18:05   ` Dmitry Osipenko
@ 2022-12-21  7:53     ` Sumit Gupta
  0 siblings, 0 replies; 53+ messages in thread
From: Sumit Gupta @ 2022-12-21  7:53 UTC (permalink / raw)
  To: Dmitry Osipenko, treding, krzysztof.kozlowski, dmitry.osipenko,
	viresh.kumar, rafael, jonathanh, robh+dt, linux-kernel,
	linux-tegra, linux-pm, devicetree, Sumit Gupta



On 20/12/22 23:35, Dmitry Osipenko wrote:
> 20.12.2022 19:02, Sumit Gupta пишет:
>> @@ -779,6 +780,7 @@ const char *const tegra_mc_error_names[8] = {
>>    */
>>   static int tegra_mc_interconnect_setup(struct tegra_mc *mc)
>>   {
>> +     struct tegra_icc_node *tnode;
>>        struct icc_node *node;
>>        unsigned int i;
>>        int err;
>> @@ -792,7 +794,11 @@ static int tegra_mc_interconnect_setup(struct tegra_mc *mc)
>>        mc->provider.data = &mc->provider;
>>        mc->provider.set = mc->soc->icc_ops->set;
>>        mc->provider.aggregate = mc->soc->icc_ops->aggregate;
>> -     mc->provider.xlate_extended = mc->soc->icc_ops->xlate_extended;
>> +     mc->provider.get_bw = mc->soc->icc_ops->get_bw;
>> +     if (mc->soc->icc_ops->xlate)
>> +             mc->provider.xlate = mc->soc->icc_ops->xlate;
>> +     if (mc->soc->icc_ops->xlate_extended)
>> +             mc->provider.xlate_extended = mc->soc->icc_ops->xlate_extended;
> 
> These IFs look pointless

Ok. Will remove.

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [Patch v1 01/10] memory: tegra: add interconnect support for DRAM scaling in Tegra234
  2022-12-20 18:06   ` Dmitry Osipenko
@ 2022-12-21  7:54     ` Sumit Gupta
  0 siblings, 0 replies; 53+ messages in thread
From: Sumit Gupta @ 2022-12-21  7:54 UTC (permalink / raw)
  To: Dmitry Osipenko, treding, krzysztof.kozlowski, dmitry.osipenko,
	viresh.kumar, rafael, jonathanh, robh+dt, linux-kernel,
	linux-tegra, linux-pm, devicetree, Sumit Gupta
  Cc: sanjayc, ksitaraman, ishah, bbasu



On 20/12/22 23:36, Dmitry Osipenko wrote:
> 20.12.2022 19:02, Sumit Gupta пишет:
>> +             if (tegra_bpmp_mrq_is_supported(emc->bpmp, MRQ_BWMGR_INT)) {
>> +                     err = tegra_emc_interconnect_init(emc);
>> +                     if (!err)
>> +                             return err;
> 
> "return err" doesn't sound good
> 
>> +                     dev_err(&pdev->dev, "tegra_emc_interconnect_init failed:%d\n", err);
>> +                     goto put_bpmp;
> 
> Move error message to tegra_emc_interconnect_init()
Ok, will change.

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [Patch v1 01/10] memory: tegra: add interconnect support for DRAM scaling in Tegra234
  2022-12-20 18:07   ` Dmitry Osipenko
@ 2022-12-21  8:05     ` Sumit Gupta
  2022-12-21 16:44       ` Dmitry Osipenko
  0 siblings, 1 reply; 53+ messages in thread
From: Sumit Gupta @ 2022-12-21  8:05 UTC (permalink / raw)
  To: Dmitry Osipenko, treding, krzysztof.kozlowski, dmitry.osipenko,
	viresh.kumar, rafael, jonathanh, robh+dt, linux-kernel,
	linux-tegra, linux-pm, devicetree, Sumit Gupta
  Cc: sanjayc, ksitaraman, ishah, bbasu



On 20/12/22 23:37, Dmitry Osipenko wrote:
> 20.12.2022 19:02, Sumit Gupta пишет:
>> +#ifndef MEMORY_TEGRA_ICC_H
>> +#define MEMORY_TEGRA_ICC_H
>> +
>> +enum tegra_icc_client_type {
>> +     TEGRA_ICC_NONE,
>> +     TEGRA_ICC_NISO,
>> +     TEGRA_ICC_ISO_DISPLAY,
>> +     TEGRA_ICC_ISO_VI,
>> +     TEGRA_ICC_ISO_AUDIO,
>> +     TEGRA_ICC_ISO_VIFAL,
>> +};
> 
> You using only TEGRA_ICC_NISO and !TEGRA_ICC_NISO in the code.
> 
> include/soc/tegra/mc.h defines TAG_DEFAULT/ISO, please drop all these
> duplicated and unused "types" unless there is a good reason to keep them
> 

These types are used while defining clients in "tegra234_mc_clients[]"
and are passed to BPMP-FW, which has handling for each client type.

  kernel$ grep -B2 ".type = TEGRA_ICC_ISO" drivers/memory/tegra/tegra234.c
      .name = "hdar",
      .bpmp_id = TEGRA_ICC_BPMP_HDA,
      .type = TEGRA_ICC_ISO_AUDIO,
  --
      .name = "hdaw",
      .bpmp_id = TEGRA_ICC_BPMP_HDA,
      .type = TEGRA_ICC_ISO_AUDIO,
  --
      .name = "vi2w",
      .bpmp_id = TEGRA_ICC_BPMP_VI2,
      .type = TEGRA_ICC_ISO_VI,
  --
      .name = "vi2falr",
      .bpmp_id = TEGRA_ICC_BPMP_VI2FAL,
      .type = TEGRA_ICC_ISO_VIFAL,
  --
      .name = "vi2falw",
      .bpmp_id = TEGRA_ICC_BPMP_VI2FAL,
      .type = TEGRA_ICC_ISO_VIFAL,
  --
      .name = "aper",
      .bpmp_id = TEGRA_ICC_BPMP_APE,
      .type = TEGRA_ICC_ISO_AUDIO,
  --
      .name = "apew",
      .bpmp_id = TEGRA_ICC_BPMP_APE,
      .type = TEGRA_ICC_ISO_AUDIO,
  --
      .name = "nvdisplayr",
      .bpmp_id = TEGRA_ICC_BPMP_DISPLAY,
      .type = TEGRA_ICC_ISO_DISPLAY,
  --
      .name = "nvdisplayr1",
      .bpmp_id = TEGRA_ICC_BPMP_DISPLAY,
      .type = TEGRA_ICC_ISO_DISPLAY,
  --
      .name = "apedmar",
      .bpmp_id = TEGRA_ICC_BPMP_APEDMA,
      .type = TEGRA_ICC_ISO_AUDIO,
  --
      .name = "apedmaw",
      .bpmp_id = TEGRA_ICC_BPMP_APEDMA,
      .type = TEGRA_ICC_ISO_AUDIO,



^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [Patch v1 01/10] memory: tegra: add interconnect support for DRAM scaling in Tegra234
  2022-12-21  0:55   ` Dmitry Osipenko
@ 2022-12-21  8:07     ` Sumit Gupta
  0 siblings, 0 replies; 53+ messages in thread
From: Sumit Gupta @ 2022-12-21  8:07 UTC (permalink / raw)
  To: Dmitry Osipenko, treding, krzysztof.kozlowski, dmitry.osipenko,
	viresh.kumar, rafael, jonathanh, robh+dt, linux-kernel,
	linux-tegra, linux-pm, devicetree, Sumit Gupta
  Cc: sanjayc, ksitaraman, ishah, bbasu



On 21/12/22 06:25, Dmitry Osipenko wrote:
> 20.12.2022 19:02, Sumit Gupta пишет:
>> +static int tegra234_mc_icc_set(struct icc_node *src, struct icc_node *dst)
>> +{
>> +     struct tegra_mc *mc = icc_provider_to_tegra_mc(dst->provider);
>> +     struct tegra_icc_node *tnode = src->data;
>> +
>> +     /*
>> +      * Same Src and Dst node will happen during boot from icc_node_add().
>> +      * This can be used to pre-initialize and set bandwidth for all clients
>> +      * before their drivers are loaded. We are skipping this case as for us,
>> +      * the pre-initialization already happened in Bootloader(MB2) and BPMP-FW.
>> +      */
>> +     if (src->id == dst->id)
>> +             return 0;
>> +
>> +     if (tnode->node)
>> +             mc->curr_tnode = tnode;
>> +     else
>> +             pr_err("%s, tegra_icc_node is null\n", __func__);
> 
> The tnode->node can't be NULL.

Ok, will remove the check.

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [Patch v1 01/10] memory: tegra: add interconnect support for DRAM scaling in Tegra234
  2022-12-20 18:10   ` Dmitry Osipenko
@ 2022-12-21  9:35     ` Sumit Gupta
  2022-12-21 16:43       ` Dmitry Osipenko
  0 siblings, 1 reply; 53+ messages in thread
From: Sumit Gupta @ 2022-12-21  9:35 UTC (permalink / raw)
  To: Dmitry Osipenko, treding, krzysztof.kozlowski, dmitry.osipenko,
	viresh.kumar, rafael, jonathanh, robh+dt, linux-kernel,
	linux-tegra, linux-pm, devicetree, Sumit Gupta



On 20/12/22 23:40, Dmitry Osipenko wrote:
> 20.12.2022 19:02, Sumit Gupta пишет:
>> +static int tegra_emc_icc_get_init_bw(struct icc_node *node, u32 *avg, u32 *peak)
>> +{
>> +     *avg = 0;
>> +     *peak = 0;
>> +
>> +     return 0;
>> +}
> 
> Looks wrong, you should add ICC support to all the drivers first and
> only then enable ICC. I think you added this init_bw() to work around
> the fact that ICC isn't supported by T234 drivers.

If get_bw hook is not added then max freq is set due to 'INT_MAX' below.

  void icc_node_add(struct icc_node *node, struct icc_provider *provider)
  {
    ....
    /* get the initial bandwidth values and sync them with hardware */
    if (provider->get_bw) {
  		provider->get_bw(node, &node->init_avg, &node->init_peak);
    } else {
  		node->init_avg = INT_MAX;
  		node->init_peak = INT_MAX;
  }

So, will have to add the empty functions at least.

  static int tegra_emc_icc_get_init_bw(struct icc_node *node, u32 *avg, 
u32 *peak)
  {
-       *avg = 0;
-       *peak = 0;
-
         return 0;
  }

Support for all the client drivers can't be added at once as there are
many drivers, each with different requirements and handling. This patch
series is the beginning of adding basic interconnect support in the new
Tegra SoCs. Support for more clients will be added later, one by one or
in batches.

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [Patch v1 01/10] memory: tegra: add interconnect support for DRAM scaling in Tegra234
  2022-12-21  9:35     ` Sumit Gupta
@ 2022-12-21 16:43       ` Dmitry Osipenko
  2023-01-13 12:15         ` Sumit Gupta
  0 siblings, 1 reply; 53+ messages in thread
From: Dmitry Osipenko @ 2022-12-21 16:43 UTC (permalink / raw)
  To: Sumit Gupta, treding, krzysztof.kozlowski, dmitry.osipenko,
	viresh.kumar, rafael, jonathanh, robh+dt, linux-kernel,
	linux-tegra, linux-pm, devicetree

21.12.2022 12:35, Sumit Gupta пишет:
> 
> 
> On 20/12/22 23:40, Dmitry Osipenko wrote:
>> 20.12.2022 19:02, Sumit Gupta пишет:
>>> +static int tegra_emc_icc_get_init_bw(struct icc_node *node, u32
>>> *avg, u32 *peak)
>>> +{
>>> +     *avg = 0;
>>> +     *peak = 0;
>>> +
>>> +     return 0;
>>> +}
>>
>> Looks wrong, you should add ICC support to all the drivers first and
>> only then enable ICC. I think you added this init_bw() to work around
>> the fact that ICC isn't supported by T234 drivers.
> 
> If get_bw hook is not added then max freq is set due to 'INT_MAX' below.
> 
>  void icc_node_add(struct icc_node *node, struct icc_provider *provider)
>  {
>    ....
>    /* get the initial bandwidth values and sync them with hardware */
>    if (provider->get_bw) {
>          provider->get_bw(node, &node->init_avg, &node->init_peak);
>    } else {
>          node->init_avg = INT_MAX;
>          node->init_peak = INT_MAX;
>  }
> 
> So, will have to add the empty functions at least.
> 
>  static int tegra_emc_icc_get_init_bw(struct icc_node *node, u32 *avg,
> u32 *peak)
>  {
> -       *avg = 0;
> -       *peak = 0;
> -
>         return 0;
>  }
> 
> Support for all the client drivers can't be added at once as there are
> many drivers, each with different requirements and handling. This patch
> series is the beginning of adding basic interconnect support in the new
> Tegra SoCs. Support for more clients will be added later, one by one or
> in batches.

This means that bandwidth management isn't working properly. You should
leave the freq at INT_MAX and fix any integer overflows in the code, or
read out the BW from the FW.

Once you enable ICC for all drivers, it will start working.


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [Patch v1 01/10] memory: tegra: add interconnect support for DRAM scaling in Tegra234
  2022-12-21  8:05     ` Sumit Gupta
@ 2022-12-21 16:44       ` Dmitry Osipenko
  2023-01-17 13:03         ` Sumit Gupta
  0 siblings, 1 reply; 53+ messages in thread
From: Dmitry Osipenko @ 2022-12-21 16:44 UTC (permalink / raw)
  To: Sumit Gupta, treding, krzysztof.kozlowski, dmitry.osipenko,
	viresh.kumar, rafael, jonathanh, robh+dt, linux-kernel,
	linux-tegra, linux-pm, devicetree
  Cc: sanjayc, ksitaraman, ishah, bbasu

21.12.2022 11:05, Sumit Gupta пишет:
> On 20/12/22 23:37, Dmitry Osipenko wrote:
>> External email: Use caution opening links or attachments
>>
>>
>> 20.12.2022 19:02, Sumit Gupta пишет:
>>> +#ifndef MEMORY_TEGRA_ICC_H
>>> +#define MEMORY_TEGRA_ICC_H
>>> +
>>> +enum tegra_icc_client_type {
>>> +     TEGRA_ICC_NONE,
>>> +     TEGRA_ICC_NISO,
>>> +     TEGRA_ICC_ISO_DISPLAY,
>>> +     TEGRA_ICC_ISO_VI,
>>> +     TEGRA_ICC_ISO_AUDIO,
>>> +     TEGRA_ICC_ISO_VIFAL,
>>> +};
>>
>> You using only TEGRA_ICC_NISO and !TEGRA_ICC_NISO in the code.
>>
>> include/soc/tegra/mc.h defines TAG_DEFAULT/ISO, please drop all these
>> duplicated and unused "types" unless there is a good reason to keep them
>>
> 
> These types are used while defining clients in "tegra234_mc_clients[]"
> and are passed to BPMP-FW, which has handling for each client type.

The type should be based on the ICC tag, IMO. AFAICS, type isn't fixed
in FW and you can set both ISO and NISO BW, hence it's up to a device
driver to select the appropriate tag.
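
For illustration, roughly what tag-based selection looks like from a
client driver. This is a hypothetical display-client snippet, not part
of this series; the path name is assumed and TEGRA_ICC_TAG_* are the
mc.h definitions mentioned above:

  static int request_iso_dram_bw(struct device *dev, u32 peak_kbps)
  {
          struct icc_path *path;

          path = of_icc_get(dev, "write-0");      /* assumed path name */
          if (IS_ERR(path))
                  return PTR_ERR(path);

          /* mark requests on this path as ISO; untagged requests stay NISO */
          icc_set_tag(path, TEGRA_ICC_TAG_ISO);

          /* a real driver keeps @path around and icc_put()s it on remove */
          return icc_set_bw(path, 0, peak_kbps);
  }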


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [Patch v1 01/10] memory: tegra: add interconnect support for DRAM scaling in Tegra234
  2022-12-20 16:02 ` [Patch v1 01/10] memory: tegra: add interconnect support for DRAM scaling in Tegra234 Sumit Gupta
                     ` (4 preceding siblings ...)
  2022-12-21  0:55   ` Dmitry Osipenko
@ 2022-12-21 16:54   ` Dmitry Osipenko
  2023-01-13 12:25     ` Sumit Gupta
  2022-12-21 19:17   ` Dmitry Osipenko
                     ` (3 subsequent siblings)
  9 siblings, 1 reply; 53+ messages in thread
From: Dmitry Osipenko @ 2022-12-21 16:54 UTC (permalink / raw)
  To: Sumit Gupta, treding, krzysztof.kozlowski, dmitry.osipenko,
	viresh.kumar, rafael, jonathanh, robh+dt, linux-kernel,
	linux-tegra, linux-pm, devicetree
  Cc: sanjayc, ksitaraman, ishah, bbasu

20.12.2022 19:02, Sumit Gupta пишет:
>  static int tegra186_emc_probe(struct platform_device *pdev)
>  {
>  	struct mrq_emc_dvfs_latency_response response;
>  	struct tegra_bpmp_message msg;
>  	struct tegra186_emc *emc;
> +	struct tegra_mc *mc;
>  	unsigned int i;
>  	int err;
>  
> @@ -158,6 +307,9 @@ static int tegra186_emc_probe(struct platform_device *pdev)
>  	if (!emc)
>  		return -ENOMEM;
>  
> +	platform_set_drvdata(pdev, emc);
> +	emc->dev = &pdev->dev;
> +
>  	emc->bpmp = tegra_bpmp_get(&pdev->dev);
>  	if (IS_ERR(emc->bpmp))
>  		return dev_err_probe(&pdev->dev, PTR_ERR(emc->bpmp), "failed to get BPMP\n");
> @@ -236,6 +388,19 @@ static int tegra186_emc_probe(struct platform_device *pdev)
>  	debugfs_create_file("max_rate", S_IRUGO | S_IWUSR, emc->debugfs.root,
>  			    emc, &tegra186_emc_debug_max_rate_fops);
>  
> +	mc = dev_get_drvdata(emc->dev->parent);
> +	if (mc && mc->soc->icc_ops) {
> +		if (tegra_bpmp_mrq_is_supported(emc->bpmp, MRQ_BWMGR_INT)) {
> +			err = tegra_emc_interconnect_init(emc);
> +			if (!err)
> +				return err;
> +			dev_err(&pdev->dev, "tegra_emc_interconnect_init failed:%d\n", err);
> +			goto put_bpmp;
> +		} else {
> +			dev_info(&pdev->dev, "MRQ_BWMGR_INT not present\n");
> +		}

If there is no MRQ_BWMGR_INT, then device drivers using ICC won't probe.
This is either an error condition, or ICC should be initialized and the
ICC changes should then be skipped.
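
For illustration, the second option could be shaped roughly like this
(the bwmgr_mrq_supported flag is made up for the sketch):

          /* in probe: always register the ICC provider */
          emc->bwmgr_mrq_supported = tegra_bpmp_mrq_is_supported(emc->bpmp,
                                                                 MRQ_BWMGR_INT);
          if (!emc->bwmgr_mrq_supported)
                  dev_info(&pdev->dev, "MRQ_BWMGR_INT not present, scaling disabled\n");

          err = tegra_emc_interconnect_init(emc);
          if (err)
                  goto put_bpmp;

          /* and in the ->set() hook: keep client probes working, skip the request */
          if (!emc->bwmgr_mrq_supported)
                  return 0;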


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [Patch v1 01/10] memory: tegra: add interconnect support for DRAM scaling in Tegra234
  2022-12-20 16:02 ` [Patch v1 01/10] memory: tegra: add interconnect support for DRAM scaling in Tegra234 Sumit Gupta
                     ` (5 preceding siblings ...)
  2022-12-21 16:54   ` Dmitry Osipenko
@ 2022-12-21 19:17   ` Dmitry Osipenko
  2022-12-21 19:20   ` Dmitry Osipenko
                     ` (2 subsequent siblings)
  9 siblings, 0 replies; 53+ messages in thread
From: Dmitry Osipenko @ 2022-12-21 19:17 UTC (permalink / raw)
  To: Sumit Gupta, treding, krzysztof.kozlowski, dmitry.osipenko,
	viresh.kumar, rafael, jonathanh, robh+dt, linux-kernel,
	linux-tegra, linux-pm, devicetree
  Cc: sanjayc, ksitaraman, ishah, bbasu

20.12.2022 19:02, Sumit Gupta пишет:
> +static int tegra_emc_icc_set_bw(struct icc_node *src, struct icc_node *dst)
> +{
> +	struct tegra186_emc *emc = to_tegra186_emc(dst->provider);
> +	struct tegra_mc *mc = dev_get_drvdata(emc->dev->parent);
> +	struct mrq_bwmgr_int_request bwmgr_req = { 0 };
> +	struct mrq_bwmgr_int_response bwmgr_resp = { 0 };
> +	struct tegra_icc_node *tnode = mc->curr_tnode;
> +	struct tegra_bpmp_message msg;
> +	int ret = 0;

Nit: unnecessarily initialized var, the same for the rest of the code


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [Patch v1 01/10] memory: tegra: add interconnect support for DRAM scaling in Tegra234
  2022-12-20 16:02 ` [Patch v1 01/10] memory: tegra: add interconnect support for DRAM scaling in Tegra234 Sumit Gupta
                     ` (6 preceding siblings ...)
  2022-12-21 19:17   ` Dmitry Osipenko
@ 2022-12-21 19:20   ` Dmitry Osipenko
  2022-12-22 15:56     ` Dmitry Osipenko
  2023-01-13 12:40     ` Sumit Gupta
  2022-12-21 19:43   ` Dmitry Osipenko
  2022-12-22 11:32   ` Krzysztof Kozlowski
  9 siblings, 2 replies; 53+ messages in thread
From: Dmitry Osipenko @ 2022-12-21 19:20 UTC (permalink / raw)
  To: Sumit Gupta, treding, krzysztof.kozlowski, dmitry.osipenko,
	viresh.kumar, rafael, jonathanh, robh+dt, linux-kernel,
	linux-tegra, linux-pm, devicetree
  Cc: sanjayc, ksitaraman, ishah, bbasu

20.12.2022 19:02, Sumit Gupta пишет:
> +static int tegra_emc_icc_set_bw(struct icc_node *src, struct icc_node *dst)
> +{
> +	struct tegra186_emc *emc = to_tegra186_emc(dst->provider);
> +	struct tegra_mc *mc = dev_get_drvdata(emc->dev->parent);
> +	struct mrq_bwmgr_int_request bwmgr_req = { 0 };
> +	struct mrq_bwmgr_int_response bwmgr_resp = { 0 };
> +	struct tegra_icc_node *tnode = mc->curr_tnode;
> +	struct tegra_bpmp_message msg;
> +	int ret = 0;
> +
> +	/*
> +	 * Same Src and Dst node will happen during boot from icc_node_add().
> +	 * This can be used to pre-initialize and set bandwidth for all clients
> +	 * before their drivers are loaded. We are skipping this case as for us,
> +	 * the pre-initialization already happened in Bootloader(MB2) and BPMP-FW.
> +	 */
> +	if (src->id == dst->id)
> +		return 0;
> +
> +	if (mc->curr_tnode->type == TEGRA_ICC_NISO)

The mc->curr_tnode usage looks suspicious; why can't you use the src node?


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [Patch v1 01/10] memory: tegra: add interconnect support for DRAM scaling in Tegra234
  2022-12-20 16:02 ` [Patch v1 01/10] memory: tegra: add interconnect support for DRAM scaling in Tegra234 Sumit Gupta
                     ` (7 preceding siblings ...)
  2022-12-21 19:20   ` Dmitry Osipenko
@ 2022-12-21 19:43   ` Dmitry Osipenko
  2022-12-22 11:32   ` Krzysztof Kozlowski
  9 siblings, 0 replies; 53+ messages in thread
From: Dmitry Osipenko @ 2022-12-21 19:43 UTC (permalink / raw)
  To: Sumit Gupta, treding, krzysztof.kozlowski, dmitry.osipenko,
	viresh.kumar, rafael, jonathanh, robh+dt, linux-kernel,
	linux-tegra, linux-pm, devicetree
  Cc: sanjayc, ksitaraman, ishah, bbasu

20.12.2022 19:02, Sumit Gupta пишет:
>  static int tegra_mc_interconnect_setup(struct tegra_mc *mc)
>  {
> +	struct tegra_icc_node *tnode;
>  	struct icc_node *node;
>  	unsigned int i;
>  	int err;
> @@ -792,7 +794,11 @@ static int tegra_mc_interconnect_setup(struct tegra_mc *mc)
>  	mc->provider.data = &mc->provider;
>  	mc->provider.set = mc->soc->icc_ops->set;
>  	mc->provider.aggregate = mc->soc->icc_ops->aggregate;
> -	mc->provider.xlate_extended = mc->soc->icc_ops->xlate_extended;
> +	mc->provider.get_bw = mc->soc->icc_ops->get_bw;
> +	if (mc->soc->icc_ops->xlate)
> +		mc->provider.xlate = mc->soc->icc_ops->xlate;
> +	if (mc->soc->icc_ops->xlate_extended)
> +		mc->provider.xlate_extended = mc->soc->icc_ops->xlate_extended;
>  
>  	err = icc_provider_add(&mc->provider);
>  	if (err)
> @@ -814,6 +820,10 @@ static int tegra_mc_interconnect_setup(struct tegra_mc *mc)
>  		goto remove_nodes;
>  
>  	for (i = 0; i < mc->soc->num_clients; i++) {
> +		tnode = kzalloc(sizeof(*tnode), GFP_KERNEL);

devm_kzalloc

On the other hand, the tnode isn't necessary at all. Use struct
tegra_mc_client for the sw clients.
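
For illustration, the dummy CPU-cluster clients could then just be plain
entries in tegra234_mc_clients[] (the name below is made up, the macros
are the ones added by this series):

          {
                  .id = TEGRA_ICC_MC_CPU_CLUSTER0,
                  .name = "sw_cluster0",
                  .bpmp_id = TEGRA_ICC_BPMP_CPU_CLUSTER0,
                  .type = TEGRA_ICC_NISO,
          },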


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [Patch v1 05/10] dt-bindings: tegra: add icc ids for dummy MC clients
  2022-12-20 16:02 ` [Patch v1 05/10] dt-bindings: tegra: add icc ids for dummy MC clients Sumit Gupta
@ 2022-12-22 11:29   ` Krzysztof Kozlowski
  2023-01-13 14:44     ` Sumit Gupta
  2023-01-13 17:11   ` Krzysztof Kozlowski
  1 sibling, 1 reply; 53+ messages in thread
From: Krzysztof Kozlowski @ 2022-12-22 11:29 UTC (permalink / raw)
  To: Sumit Gupta, treding, dmitry.osipenko, viresh.kumar, rafael,
	jonathanh, robh+dt, linux-kernel, linux-tegra, linux-pm,
	devicetree
  Cc: sanjayc, ksitaraman, ishah, bbasu

On 20/12/2022 17:02, Sumit Gupta wrote:
> Adding ICC id's for dummy software clients representing CCPLEX clusters.
> 
> Signed-off-by: Sumit Gupta <sumitg@nvidia.com>
> ---
>  include/dt-bindings/memory/tegra234-mc.h | 5 +++++
>  1 file changed, 5 insertions(+)
> 
> diff --git a/include/dt-bindings/memory/tegra234-mc.h b/include/dt-bindings/memory/tegra234-mc.h
> index 347e55e89a2a..6e60d55491b3 100644
> --- a/include/dt-bindings/memory/tegra234-mc.h
> +++ b/include/dt-bindings/memory/tegra234-mc.h
> @@ -536,4 +536,9 @@
>  #define TEGRA234_MEMORY_CLIENT_NVJPG1SRD 0x123
>  #define TEGRA234_MEMORY_CLIENT_NVJPG1SWR 0x124
>  
> +/* ICC ID's for dummy MC clients used to represent CPU Clusters */
> +#define TEGRA_ICC_MC_CPU_CLUSTER0       1003

Why do the IDs not start from 0?

Best regards,
Krzysztof


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [Patch v1 01/10] memory: tegra: add interconnect support for DRAM scaling in Tegra234
  2022-12-20 16:02 ` [Patch v1 01/10] memory: tegra: add interconnect support for DRAM scaling in Tegra234 Sumit Gupta
                     ` (8 preceding siblings ...)
  2022-12-21 19:43   ` Dmitry Osipenko
@ 2022-12-22 11:32   ` Krzysztof Kozlowski
  2023-03-06 19:28     ` Sumit Gupta
  9 siblings, 1 reply; 53+ messages in thread
From: Krzysztof Kozlowski @ 2022-12-22 11:32 UTC (permalink / raw)
  To: Sumit Gupta, treding, dmitry.osipenko, viresh.kumar, rafael,
	jonathanh, robh+dt, linux-kernel, linux-tegra, linux-pm,
	devicetree
  Cc: sanjayc, ksitaraman, ishah, bbasu

On 20/12/2022 17:02, Sumit Gupta wrote:
> Adding Interconnect framework support to dynamically set the DRAM
> bandwidth from different clients. Both the MC and EMC drivers are
> added as ICC providers. The path for any request will be:
>  MC-Client[1-n] -> MC -> EMC -> EMEM/DRAM
> 
> MC clients will request for bandwidth to the MC driver which will
> pass the tegra icc node having current request info to the EMC driver.
> The EMC driver will send the BPMP Client ID, Client type and bandwidth
> request info to the BPMP-FW where the final DRAM freq for achieving the
> requested bandwidth is set based on the passed parameters.
> 
> Signed-off-by: Sumit Gupta <sumitg@nvidia.com>
> ---
>  drivers/memory/tegra/mc.c           |  18 ++-
>  drivers/memory/tegra/tegra186-emc.c | 166 ++++++++++++++++++++++++++++
>  drivers/memory/tegra/tegra234.c     | 101 ++++++++++++++++-
>  include/soc/tegra/mc.h              |   7 ++
>  include/soc/tegra/tegra-icc.h       |  72 ++++++++++++
>  5 files changed, 362 insertions(+), 2 deletions(-)
>  create mode 100644 include/soc/tegra/tegra-icc.h
> 
> diff --git a/drivers/memory/tegra/mc.c b/drivers/memory/tegra/mc.c
> index 592907546ee6..ff887fb03bce 100644
> --- a/drivers/memory/tegra/mc.c
> +++ b/drivers/memory/tegra/mc.c
> @@ -17,6 +17,7 @@
>  #include <linux/sort.h>
>  
>  #include <soc/tegra/fuse.h>
> +#include <soc/tegra/tegra-icc.h>
>  
>  #include "mc.h"
>  
> @@ -779,6 +780,7 @@ const char *const tegra_mc_error_names[8] = {
>   */
>  static int tegra_mc_interconnect_setup(struct tegra_mc *mc)
>  {
> +	struct tegra_icc_node *tnode;
>  	struct icc_node *node;
>  	unsigned int i;
>  	int err;
> @@ -792,7 +794,11 @@ static int tegra_mc_interconnect_setup(struct tegra_mc *mc)
>  	mc->provider.data = &mc->provider;
>  	mc->provider.set = mc->soc->icc_ops->set;
>  	mc->provider.aggregate = mc->soc->icc_ops->aggregate;
> -	mc->provider.xlate_extended = mc->soc->icc_ops->xlate_extended;
> +	mc->provider.get_bw = mc->soc->icc_ops->get_bw;
> +	if (mc->soc->icc_ops->xlate)
> +		mc->provider.xlate = mc->soc->icc_ops->xlate;
> +	if (mc->soc->icc_ops->xlate_extended)
> +		mc->provider.xlate_extended = mc->soc->icc_ops->xlate_extended;
>  
>  	err = icc_provider_add(&mc->provider);
>  	if (err)
> @@ -814,6 +820,10 @@ static int tegra_mc_interconnect_setup(struct tegra_mc *mc)
>  		goto remove_nodes;
>  
>  	for (i = 0; i < mc->soc->num_clients; i++) {
> +		tnode = kzalloc(sizeof(*tnode), GFP_KERNEL);
> +		if (!tnode)
> +			return -ENOMEM;
> +
>  		/* create MC client node */
>  		node = icc_node_create(mc->soc->clients[i].id);
>  		if (IS_ERR(node)) {
> @@ -828,6 +838,12 @@ static int tegra_mc_interconnect_setup(struct tegra_mc *mc)
>  		err = icc_link_create(node, TEGRA_ICC_MC);
>  		if (err)
>  			goto remove_nodes;
> +
> +		node->data = tnode;

Where is it freed?


(...)

>  
>  struct tegra_mc_ops {
> @@ -238,6 +243,8 @@ struct tegra_mc {
>  	struct {
>  		struct dentry *root;
>  	} debugfs;
> +
> +	struct tegra_icc_node *curr_tnode;
>  };
>  
>  int tegra_mc_write_emem_configuration(struct tegra_mc *mc, unsigned long rate);
> diff --git a/include/soc/tegra/tegra-icc.h b/include/soc/tegra/tegra-icc.h
> new file mode 100644
> index 000000000000..3855d8571281
> --- /dev/null
> +++ b/include/soc/tegra/tegra-icc.h

Why not in include/linux/?

> @@ -0,0 +1,72 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +/*
> + * Copyright (C) 2022-2023 NVIDIA CORPORATION.  All rights reserved.
> + */
> +
> +#ifndef MEMORY_TEGRA_ICC_H

This does not match the path/name.

> +#define MEMORY_TEGRA_ICC_H
> +
> +enum tegra_icc_client_type {
> +	TEGRA_ICC_NONE,
> +	TEGRA_ICC_NISO,
> +	TEGRA_ICC_ISO_DISPLAY,
> +	TEGRA_ICC_ISO_VI,
> +	TEGRA_ICC_ISO_AUDIO,
> +	TEGRA_ICC_ISO_VIFAL,
> +};
> +
> +struct tegra_icc_node {
> +	struct icc_node *node;
> +	struct tegra_mc *mc;
> +	u32 bpmp_id;
> +	u32 type;
> +};
> +
> +/* ICC ID's for MC client's used in BPMP */
> +#define TEGRA_ICC_BPMP_DEBUG		1
> +#define TEGRA_ICC_BPMP_CPU_CLUSTER0	2
> +#define TEGRA_ICC_BPMP_CPU_CLUSTER1	3
> +#define TEGRA_ICC_BPMP_CPU_CLUSTER2	4
> +#define TEGRA_ICC_BPMP_GPU		5
> +#define TEGRA_ICC_BPMP_CACTMON		6
> +#define TEGRA_ICC_BPMP_DISPLAY		7
> +#define TEGRA_ICC_BPMP_VI		8
> +#define TEGRA_ICC_BPMP_EQOS		9
> +#define TEGRA_ICC_BPMP_PCIE_0		10
> +#define TEGRA_ICC_BPMP_PCIE_1		11
> +#define TEGRA_ICC_BPMP_PCIE_2		12
> +#define TEGRA_ICC_BPMP_PCIE_3		13
> +#define TEGRA_ICC_BPMP_PCIE_4		14
> +#define TEGRA_ICC_BPMP_PCIE_5		15
> +#define TEGRA_ICC_BPMP_PCIE_6		16
> +#define TEGRA_ICC_BPMP_PCIE_7		17
> +#define TEGRA_ICC_BPMP_PCIE_8		18
> +#define TEGRA_ICC_BPMP_PCIE_9		19
> +#define TEGRA_ICC_BPMP_PCIE_10		20
> +#define TEGRA_ICC_BPMP_DLA_0		21
> +#define TEGRA_ICC_BPMP_DLA_1		22
> +#define TEGRA_ICC_BPMP_SDMMC_1		23
> +#define TEGRA_ICC_BPMP_SDMMC_2		24
> +#define TEGRA_ICC_BPMP_SDMMC_3		25
> +#define TEGRA_ICC_BPMP_SDMMC_4		26
> +#define TEGRA_ICC_BPMP_NVDEC		27
> +#define TEGRA_ICC_BPMP_NVENC		28
> +#define TEGRA_ICC_BPMP_NVJPG_0		29
> +#define TEGRA_ICC_BPMP_NVJPG_1		30
> +#define TEGRA_ICC_BPMP_OFAA		31
> +#define TEGRA_ICC_BPMP_XUSB_HOST	32
> +#define TEGRA_ICC_BPMP_XUSB_DEV		33
> +#define TEGRA_ICC_BPMP_TSEC		34
> +#define TEGRA_ICC_BPMP_VIC		35
> +#define TEGRA_ICC_BPMP_APE		36
> +#define TEGRA_ICC_BPMP_APEDMA		37
> +#define TEGRA_ICC_BPMP_SE		38
> +#define TEGRA_ICC_BPMP_ISP		39
> +#define TEGRA_ICC_BPMP_HDA		40
> +#define TEGRA_ICC_BPMP_VIFAL		41
> +#define TEGRA_ICC_BPMP_VI2FAL		42
> +#define TEGRA_ICC_BPMP_VI2		43
> +#define TEGRA_ICC_BPMP_RCE		44
> +#define TEGRA_ICC_BPMP_PVA		45
> +
> +#endif /* MEMORY_TEGRA_ICC_H */

Best regards,
Krzysztof


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [Patch v1 03/10] memory: tegra: add pcie mc clients for Tegra234
  2022-12-20 16:02 ` [Patch v1 03/10] memory: tegra: add pcie " Sumit Gupta
@ 2022-12-22 11:33   ` Krzysztof Kozlowski
  2023-01-13 14:51     ` Sumit Gupta
  0 siblings, 1 reply; 53+ messages in thread
From: Krzysztof Kozlowski @ 2022-12-22 11:33 UTC (permalink / raw)
  To: Sumit Gupta, treding, dmitry.osipenko, viresh.kumar, rafael,
	jonathanh, robh+dt, linux-kernel, linux-tegra, linux-pm,
	devicetree
  Cc: sanjayc, ksitaraman, ishah, bbasu

On 20/12/2022 17:02, Sumit Gupta wrote:
> Adding PCIE clients representing each controller to the
> mc_clients table for Tegra234.
> 
> Signed-off-by: Sumit Gupta <sumitg@nvidia.com>
> ---
>  drivers/memory/tegra/tegra234.c | 314 +++++++++++++++++++++++++++++++-
>  1 file changed, 313 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/memory/tegra/tegra234.c b/drivers/memory/tegra/tegra234.c
> index 2e37b37da9be..420546270c8b 100644
> --- a/drivers/memory/tegra/tegra234.c
> +++ b/drivers/memory/tegra/tegra234.c
> @@ -465,7 +465,319 @@ static const struct tegra_mc_client tegra234_mc_clients[] = {
>  				.security = 0x37c,
>  			},
>  		},
> -	},
> +	}, {

Didn't you change the same structure in previous patch?

Best regards,
Krzysztof


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [Patch v1 04/10] memory: tegra: add support for software mc clients in Tegra234
  2022-12-20 16:02 ` [Patch v1 04/10] memory: tegra: add support for software mc clients in Tegra234 Sumit Gupta
@ 2022-12-22 11:36   ` Krzysztof Kozlowski
  2023-03-06 19:41     ` Sumit Gupta
  0 siblings, 1 reply; 53+ messages in thread
From: Krzysztof Kozlowski @ 2022-12-22 11:36 UTC (permalink / raw)
  To: Sumit Gupta, treding, dmitry.osipenko, viresh.kumar, rafael,
	jonathanh, robh+dt, linux-kernel, linux-tegra, linux-pm,
	devicetree
  Cc: sanjayc, ksitaraman, ishah, bbasu

On 20/12/2022 17:02, Sumit Gupta wrote:
> Adding support for dummy memory controller clients for use by
> software.

Use imperative mode (applies to other commits as well)
https://elixir.bootlin.com/linux/v5.17.1/source/Documentation/process/submitting-patches.rst#L95

> ---
>  drivers/memory/tegra/mc.c       | 65 +++++++++++++++++++++++----------
>  drivers/memory/tegra/tegra234.c | 21 +++++++++++
>  include/soc/tegra/mc.h          |  3 ++
>  include/soc/tegra/tegra-icc.h   |  7 ++++
>  4 files changed, 76 insertions(+), 20 deletions(-)
> 
> diff --git a/drivers/memory/tegra/mc.c b/drivers/memory/tegra/mc.c
> index ff887fb03bce..4ddf9808fe6b 100644
> --- a/drivers/memory/tegra/mc.c
> +++ b/drivers/memory/tegra/mc.c
> @@ -755,6 +755,39 @@ const char *const tegra_mc_error_names[8] = {
>  	[6] = "SMMU translation error",
>  };
>  
> +static int tegra_mc_add_icc_node(struct tegra_mc *mc, unsigned int id, const char *name,
> +				 unsigned int bpmp_id, unsigned int type)
> +{
> +	struct tegra_icc_node *tnode;
> +	struct icc_node *node;
> +	int err;
> +
> +	tnode = kzalloc(sizeof(*tnode), GFP_KERNEL);
> +	if (!tnode)
> +		return -ENOMEM;
> +
> +	/* create MC client node */
> +	node = icc_node_create(id);
> +	if (IS_ERR(node))
> +		return -EINVAL;

Why do you return a different error? It does not look like you moved the
code as-is, but with changes. I also do not see how this is related to
the commit msg...

> +
> +	node->name = name;
> +	icc_node_add(node, &mc->provider);
> +
> +	/* link Memory Client to Memory Controller */
> +	err = icc_link_create(node, TEGRA_ICC_MC);
> +	if (err)
> +		return err;
> +
> +	node->data = tnode;
> +	tnode->node = node;
> +	tnode->bpmp_id = bpmp_id;
> +	tnode->type = type;
> +	tnode->mc = mc;
> +
> +	return 0;
> +}
> +
>  /*
>   * Memory Controller (MC) has few Memory Clients that are issuing memory
>   * bandwidth allocation requests to the MC interconnect provider. The MC
> @@ -780,7 +813,6 @@ const char *const tegra_mc_error_names[8] = {
>   */
>  static int tegra_mc_interconnect_setup(struct tegra_mc *mc)
>  {
> -	struct tegra_icc_node *tnode;
>  	struct icc_node *node;
>  	unsigned int i;
>  	int err;
> @@ -820,30 +852,23 @@ static int tegra_mc_interconnect_setup(struct tegra_mc *mc)
>  		goto remove_nodes;
>  
>  	for (i = 0; i < mc->soc->num_clients; i++) {
> -		tnode = kzalloc(sizeof(*tnode), GFP_KERNEL);
> -		if (!tnode)
> -			return -ENOMEM;
> -
> -		/* create MC client node */
> -		node = icc_node_create(mc->soc->clients[i].id);
> -		if (IS_ERR(node)) {
> -			err = PTR_ERR(node);
> +		err = tegra_mc_add_icc_node(mc, mc->soc->clients[i].id,
> +					    mc->soc->clients[i].name,
> +					    mc->soc->clients[i].bpmp_id,
> +					    mc->soc->clients[i].type);
> +		if (err)
>  			goto remove_nodes;
> -		}
>  
> -		node->name = mc->soc->clients[i].name;
> -		icc_node_add(node, &mc->provider);
> +	}
> +

Best regards,
Krzysztof


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [Patch v1 09/10] memory: tegra: get number of enabled mc channels
  2022-12-20 16:02 ` [Patch v1 09/10] memory: tegra: get number of enabled mc channels Sumit Gupta
@ 2022-12-22 11:37   ` Krzysztof Kozlowski
  2023-01-13 15:04     ` Sumit Gupta
  0 siblings, 1 reply; 53+ messages in thread
From: Krzysztof Kozlowski @ 2022-12-22 11:37 UTC (permalink / raw)
  To: Sumit Gupta, treding, dmitry.osipenko, viresh.kumar, rafael,
	jonathanh, robh+dt, linux-kernel, linux-tegra, linux-pm,
	devicetree
  Cc: sanjayc, ksitaraman, ishah, bbasu

On 20/12/2022 17:02, Sumit Gupta wrote:
> Get the number of MC channels which are actually enabled
> in the current boot configuration.

Why? You don't do anything with it. The commit msg should give the reason
for the changes.


Best regards,
Krzysztof


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [Patch v1 08/10] cpufreq: tegra194: add OPP support and set bandwidth
  2022-12-20 16:02 ` [Patch v1 08/10] cpufreq: tegra194: add OPP support and set bandwidth Sumit Gupta
@ 2022-12-22 15:46   ` Dmitry Osipenko
  2023-01-13 13:50     ` Sumit Gupta
  0 siblings, 1 reply; 53+ messages in thread
From: Dmitry Osipenko @ 2022-12-22 15:46 UTC (permalink / raw)
  To: Sumit Gupta, treding, krzysztof.kozlowski, dmitry.osipenko,
	viresh.kumar, rafael, jonathanh, robh+dt, linux-kernel,
	linux-tegra, linux-pm, devicetree
  Cc: sanjayc, ksitaraman, ishah, bbasu

20.12.2022 19:02, Sumit Gupta пишет:
> Add support to use the OPP table from DT in the Tegra194 cpufreq driver.
> Tegra SoCs receive the frequency lookup table (LUT) from BPMP-FW.
> Cross-check the OPPs present in DT against the LUT from BPMP-FW and
> enable only those DT OPPs which are also present in the LUT.
> 
> The OPP table in DT maps CPU frequency to bandwidth, where the
> bandwidth value is per MC channel. DRAM bandwidth depends on the
> number of MC channels, which can vary as per the boot configuration.
> This per-channel bandwidth from the OPP table is later converted by
> the MC driver to the final bandwidth value by multiplying it with the
> number of channels before sending the request to BPMP-FW.
> 
> If the OPP table is not present in DT, then use the LUT from BPMP-FW
> directly as the frequency table and skip the DRAM frequency scaling,
> which is the same as the current behavior.
> 
> Now that the CPU frequency table is controlled through the OPP table
> in DT, keeping fewer entries in the table creates fewer frequency
> steps and allows scaling quickly to high frequencies when required.

It's not exactly clear what you're doing here. Are you going to scale
memory BW based on CPU freq? If yes, then this is wrong because CPU freq
is independent of the memory subsystem.

All Tegra30+ SoCs have an ACTMON hardware unit that monitors CPU memory
activity, and CPU memory BW should be scaled based on the CPU memory
event counters. We have the ACTMON devfreq driver for older SoCs. I have
no clue how ACTMON can be accessed on T186+; perhaps there should be a
BPMP-FW API for that.


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [Patch v1 01/10] memory: tegra: add interconnect support for DRAM scaling in Tegra234
  2022-12-21 19:20   ` Dmitry Osipenko
@ 2022-12-22 15:56     ` Dmitry Osipenko
  2023-01-13 12:35       ` Sumit Gupta
  2023-01-13 12:40     ` Sumit Gupta
  1 sibling, 1 reply; 53+ messages in thread
From: Dmitry Osipenko @ 2022-12-22 15:56 UTC (permalink / raw)
  To: Sumit Gupta, treding, krzysztof.kozlowski, dmitry.osipenko,
	viresh.kumar, rafael, jonathanh, robh+dt, linux-kernel,
	linux-tegra, linux-pm, devicetree
  Cc: sanjayc, ksitaraman, ishah, bbasu

21.12.2022 22:20, Dmitry Osipenko пишет:
> 20.12.2022 19:02, Sumit Gupta пишет:
>> +static int tegra_emc_icc_set_bw(struct icc_node *src, struct icc_node *dst)
>> +{
>> +	struct tegra186_emc *emc = to_tegra186_emc(dst->provider);
>> +	struct tegra_mc *mc = dev_get_drvdata(emc->dev->parent);
>> +	struct mrq_bwmgr_int_request bwmgr_req = { 0 };
>> +	struct mrq_bwmgr_int_response bwmgr_resp = { 0 };
>> +	struct tegra_icc_node *tnode = mc->curr_tnode;
>> +	struct tegra_bpmp_message msg;
>> +	int ret = 0;
>> +
>> +	/*
>> +	 * Same Src and Dst node will happen during boot from icc_node_add().
>> +	 * This can be used to pre-initialize and set bandwidth for all clients
>> +	 * before their drivers are loaded. We are skipping this case as for us,
>> +	 * the pre-initialization already happened in Bootloader(MB2) and BPMP-FW.
>> +	 */
>> +	if (src->id == dst->id)
>> +		return 0;
>> +
>> +	if (mc->curr_tnode->type == TEGRA_ICC_NISO)
> 
> The mc->curr_tnode usage looks suspicious, why you can't use src node?
> 

This function sets memory BW for a memory client and not for EMC.
Apparently, you should move the BW setting to tegra234_mc_icc_set() and
then tegra_emc_icc_set_bw() will be a noop.


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [Patch v1 01/10] memory: tegra: add interconnect support for DRAM scaling in Tegra234
  2022-12-21 16:43       ` Dmitry Osipenko
@ 2023-01-13 12:15         ` Sumit Gupta
  0 siblings, 0 replies; 53+ messages in thread
From: Sumit Gupta @ 2023-01-13 12:15 UTC (permalink / raw)
  To: Dmitry Osipenko, treding, krzysztof.kozlowski, dmitry.osipenko,
	viresh.kumar, rafael, jonathanh, robh+dt, linux-kernel,
	linux-tegra, linux-pm, devicetree
  Cc: Sanjay Chandrashekara, Krishna Sitaraman, Ishan Shah, Bibek Basu,
	Sumit Gupta



On 21/12/22 22:13, Dmitry Osipenko wrote:
> External email: Use caution opening links or attachments
> 
> 
> 21.12.2022 12:35, Sumit Gupta пишет:
>>
>>
>> On 20/12/22 23:40, Dmitry Osipenko wrote:
>>> External email: Use caution opening links or attachments
>>>
>>>
>>> 20.12.2022 19:02, Sumit Gupta пишет:
>>>> +static int tegra_emc_icc_get_init_bw(struct icc_node *node, u32
>>>> *avg, u32 *peak)
>>>> +{
>>>> +     *avg = 0;
>>>> +     *peak = 0;
>>>> +
>>>> +     return 0;
>>>> +}
>>>
>>> Looks wrong, you should add ICC support to all the drivers first and
>>> only then enable ICC. I think you added this init_bw() to work around
>>> the fact that ICC isn't supported by T234 drivers.
>>
>> If get_bw hook is not added then max freq is set due to 'INT_MAX' below.
>>
>>   void icc_node_add(struct icc_node *node, struct icc_provider *provider)
>>   {
>>     ....
>>     /* get the initial bandwidth values and sync them with hardware */
>>     if (provider->get_bw) {
>>           provider->get_bw(node, &node->init_avg, &node->init_peak);
>>     } else {
>>           node->init_avg = INT_MAX;
>>           node->init_peak = INT_MAX;
>>   }
>>
>> So, will have to add the empty functions at least.
>>
>>   static int tegra_emc_icc_get_init_bw(struct icc_node *node, u32 *avg,
>> u32 *peak)
>>   {
>> -       *avg = 0;
>> -       *peak = 0;
>> -
>>          return 0;
>>   }
>>
>> Support to all the client drivers can't be added at once as there are
>> many drivers all with different requirements and handling. This patch
>> series is the beginning to add the basic interconnect support in new
>> Tegra SoC's. Support for more clients will be added later one by one or
>> in batch.
> 
> This means that bandwidth management isn't working properly. You should
> leave the freq to INT_MAX and fix the missing integer overflows in the
> code if any, or read out the BW from FW.
> 
> Once you'll enable ICC for all drivers, it will start working.
> 

I referred to the patches below and now understand what you mean.
 
https://patchwork.kernel.org/project/linux-arm-msm/cover/20200825170152.6434-1-georgi.djakov@linaro.org/
  https://lore.kernel.org/all/20210912183009.6400-1-digetx@gmail.com/

There is no MRQ currently in the BPMP-FW to get the bandwidth for a client,
so we can't add that in get_bw(). As explained earlier, we are in the
process of adding support for all clients, but that may take some time.
We can remove the get_bw() callbacks that initialize "*bw = 0" once
support has been added in all clients. I noticed that the drivers below
also do the same.

  $ grep -rB3 "*peak = 0" drivers/interconnect/*
  drivers/interconnect/imx/imx.c-static int imx_icc_get_bw(struct icc_node *node, u32 *avg, u32 *peak)
  drivers/interconnect/imx/imx.c-{
  drivers/interconnect/imx/imx.c- *avg = 0;
  drivers/interconnect/imx/imx.c: *peak = 0;
  --
  drivers/interconnect/qcom/msm8974.c-static int msm8974_get_bw(struct icc_node *node, u32 *avg, u32 *peak)
  drivers/interconnect/qcom/msm8974.c-{
  drivers/interconnect/qcom/msm8974.c-    *avg = 0;
  drivers/interconnect/qcom/msm8974.c:    *peak = 0;

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [Patch v1 01/10] memory: tegra: add interconnect support for DRAM scaling in Tegra234
  2022-12-21 16:54   ` Dmitry Osipenko
@ 2023-01-13 12:25     ` Sumit Gupta
  0 siblings, 0 replies; 53+ messages in thread
From: Sumit Gupta @ 2023-01-13 12:25 UTC (permalink / raw)
  To: Dmitry Osipenko, treding, krzysztof.kozlowski, dmitry.osipenko,
	viresh.kumar, rafael, jonathanh, robh+dt, linux-kernel,
	linux-tegra, linux-pm, devicetree
  Cc: sanjayc, ksitaraman, ishah, bbasu, Sumit Gupta



On 21/12/22 22:24, Dmitry Osipenko wrote:
> External email: Use caution opening links or attachments
> 
> 
> 20.12.2022 19:02, Sumit Gupta пишет:
>>   static int tegra186_emc_probe(struct platform_device *pdev)
>>   {
>>        struct mrq_emc_dvfs_latency_response response;
>>        struct tegra_bpmp_message msg;
>>        struct tegra186_emc *emc;
>> +     struct tegra_mc *mc;
>>        unsigned int i;
>>        int err;
>>
>> @@ -158,6 +307,9 @@ static int tegra186_emc_probe(struct platform_device *pdev)
>>        if (!emc)
>>                return -ENOMEM;
>>
>> +     platform_set_drvdata(pdev, emc);
>> +     emc->dev = &pdev->dev;
>> +
>>        emc->bpmp = tegra_bpmp_get(&pdev->dev);
>>        if (IS_ERR(emc->bpmp))
>>                return dev_err_probe(&pdev->dev, PTR_ERR(emc->bpmp), "failed to get BPMP\n");
>> @@ -236,6 +388,19 @@ static int tegra186_emc_probe(struct platform_device *pdev)
>>        debugfs_create_file("max_rate", S_IRUGO | S_IWUSR, emc->debugfs.root,
>>                            emc, &tegra186_emc_debug_max_rate_fops);
>>
>> +     mc = dev_get_drvdata(emc->dev->parent);
>> +     if (mc && mc->soc->icc_ops) {
>> +             if (tegra_bpmp_mrq_is_supported(emc->bpmp, MRQ_BWMGR_INT)) {
>> +                     err = tegra_emc_interconnect_init(emc);
>> +                     if (!err)
>> +                             return err;
>> +                     dev_err(&pdev->dev, "tegra_emc_interconnect_init failed:%d\n", err);
>> +                     goto put_bpmp;
>> +             } else {
>> +                     dev_info(&pdev->dev, "MRQ_BWMGR_INT not present\n");
>> +             }
> 
> If there is no MRQ_BWMGR_INT, then device drivers using ICC won't probe.
> This is either a error condition, or ICC should inited and then ICC
> changes should be skipped.
> 

If MRQ_BWMGR_INT is not supported by the BPMP-FW binary, then the MC and
EMC drivers will still probe successfully and scaling will be disabled.

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [Patch v1 01/10] memory: tegra: add interconnect support for DRAM scaling in Tegra234
  2022-12-22 15:56     ` Dmitry Osipenko
@ 2023-01-13 12:35       ` Sumit Gupta
  0 siblings, 0 replies; 53+ messages in thread
From: Sumit Gupta @ 2023-01-13 12:35 UTC (permalink / raw)
  To: Dmitry Osipenko, treding, krzysztof.kozlowski, dmitry.osipenko,
	viresh.kumar, rafael, jonathanh, robh+dt, linux-kernel,
	linux-tegra, linux-pm, devicetree
  Cc: sanjayc, ksitaraman, ishah, bbasu, Sumit Gupta



On 22/12/22 21:26, Dmitry Osipenko wrote:
> External email: Use caution opening links or attachments
> 
> 
> 21.12.2022 22:20, Dmitry Osipenko пишет:
>> 20.12.2022 19:02, Sumit Gupta пишет:
>>> +static int tegra_emc_icc_set_bw(struct icc_node *src, struct icc_node *dst)
>>> +{
>>> +    struct tegra186_emc *emc = to_tegra186_emc(dst->provider);
>>> +    struct tegra_mc *mc = dev_get_drvdata(emc->dev->parent);
>>> +    struct mrq_bwmgr_int_request bwmgr_req = { 0 };
>>> +    struct mrq_bwmgr_int_response bwmgr_resp = { 0 };
>>> +    struct tegra_icc_node *tnode = mc->curr_tnode;
>>> +    struct tegra_bpmp_message msg;
>>> +    int ret = 0;
>>> +
>>> +    /*
>>> +     * Same Src and Dst node will happen during boot from icc_node_add().
>>> +     * This can be used to pre-initialize and set bandwidth for all clients
>>> +     * before their drivers are loaded. We are skipping this case as for us,
>>> +     * the pre-initialization already happened in Bootloader(MB2) and BPMP-FW.
>>> +     */
>>> +    if (src->id == dst->id)
>>> +            return 0;
>>> +
>>> +    if (mc->curr_tnode->type == TEGRA_ICC_NISO)
>>
>> The mc->curr_tnode usage looks suspicious, why you can't use src node?
>>
> 
> This function sets memory BW for a memory client and not for EMC.
> Apparently, you should move the BW setting to tegra234_mc_icc_set() and
> then tegra_emc_icc_set_bw() will be a noop.
> 
Yes, I will move this code to the set_bw() callback in the MC driver. The
set_bw() call in the EMC driver will become a dummy function after the change.
This change will also help to remove "struct tegra_icc_node
*curr_tnode;" from "struct tegra_mc", which is currently used to pass data
between the MC and EMC drivers for transfer to the BPMP.

static int tegra_emc_icc_set_bw(struct icc_node *src, struct icc_node *dst)
{
         return 0;
}
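
As a rough sketch of that direction (not the actual v2 code), the MC
provider's set() callback could consume the per-client info hung off the
source node directly, so the curr_tnode hand-off goes away. The helper
tegra234_mc_bwmgr_set() below is hypothetical and only stands in for the
MRQ_BWMGR_INT message construction from the v1 patch:

static int tegra234_mc_icc_set(struct icc_node *src, struct icc_node *dst)
{
	struct tegra_icc_node *tnode = src->data;
	struct tegra_mc *mc = tnode->mc;

	/* Skip the self-link added by icc_node_add() during registration. */
	if (src->id == dst->id)
		return 0;

	/* Hypothetical helper: builds and sends the MRQ_BWMGR_INT request. */
	return tegra234_mc_bwmgr_set(mc, tnode->bpmp_id, tnode->type,
				     src->avg_bw, src->peak_bw);
}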

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [Patch v1 01/10] memory: tegra: add interconnect support for DRAM scaling in Tegra234
  2022-12-21 19:20   ` Dmitry Osipenko
  2022-12-22 15:56     ` Dmitry Osipenko
@ 2023-01-13 12:40     ` Sumit Gupta
  1 sibling, 0 replies; 53+ messages in thread
From: Sumit Gupta @ 2023-01-13 12:40 UTC (permalink / raw)
  To: Dmitry Osipenko, treding, krzysztof.kozlowski, dmitry.osipenko,
	viresh.kumar, rafael, jonathanh, robh+dt, linux-kernel,
	linux-tegra, linux-pm, devicetree
  Cc: sanjayc, ksitaraman, ishah, bbasu, Sumit Gupta



On 22/12/22 00:50, Dmitry Osipenko wrote:
> External email: Use caution opening links or attachments
> 
> 
> 20.12.2022 19:02, Sumit Gupta пишет:
>> +static int tegra_emc_icc_set_bw(struct icc_node *src, struct icc_node *dst)
>> +{
>> +     struct tegra186_emc *emc = to_tegra186_emc(dst->provider);
>> +     struct tegra_mc *mc = dev_get_drvdata(emc->dev->parent);
>> +     struct mrq_bwmgr_int_request bwmgr_req = { 0 };
>> +     struct mrq_bwmgr_int_response bwmgr_resp = { 0 };
>> +     struct tegra_icc_node *tnode = mc->curr_tnode;
>> +     struct tegra_bpmp_message msg;
>> +     int ret = 0;
>> +
>> +     /*
>> +      * Same Src and Dst node will happen during boot from icc_node_add().
>> +      * This can be used to pre-initialize and set bandwidth for all clients
>> +      * before their drivers are loaded. We are skipping this case as for us,
>> +      * the pre-initialization already happened in Bootloader(MB2) and BPMP-FW.
>> +      */
>> +     if (src->id == dst->id)
>> +             return 0;
>> +
>> +     if (mc->curr_tnode->type == TEGRA_ICC_NISO)
> 
> The mc->curr_tnode usage looks suspicious, why you can't use src node?
> 

Now we can get rid of "curr_tnode" after moving the code that transfers
the MC client's request info to the BPMP from the EMC driver to the MC driver.

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [Patch v1 08/10] cpufreq: tegra194: add OPP support and set bandwidth
  2022-12-22 15:46   ` Dmitry Osipenko
@ 2023-01-13 13:50     ` Sumit Gupta
  2023-01-16 12:16       ` Dmitry Osipenko
  0 siblings, 1 reply; 53+ messages in thread
From: Sumit Gupta @ 2023-01-13 13:50 UTC (permalink / raw)
  To: Dmitry Osipenko, treding, krzysztof.kozlowski, dmitry.osipenko,
	viresh.kumar, rafael, jonathanh, robh+dt, linux-kernel,
	linux-tegra, linux-pm, devicetree
  Cc: sanjayc, ksitaraman, ishah, bbasu, Sumit Gupta, Rajkumar Kasirajan



On 22/12/22 21:16, Dmitry Osipenko wrote:
> External email: Use caution opening links or attachments
> 
> 
> 20.12.2022 19:02, Sumit Gupta пишет:
>> Add support to use OPP table from DT in Tegra194 cpufreq driver.
>> Tegra SoC's receive the frequency lookup table (LUT) from BPMP-FW.
>> Cross check the OPP's present in DT against the LUT from BPMP-FW
>> and enable only those DT OPP's which are present in LUT also.
>>
>> The OPP table in DT has CPU Frequency to bandwidth mapping where
>> the bandwidth value is per MC channel. DRAM bandwidth depends on the
>> number of MC channels which can vary as per the boot configuration.
>> This per channel bandwidth from OPP table will be later converted by
>> MC driver to final bandwidth value by multiplying with number of
>> channels before sending the request to BPMP-FW.
>>
>> If OPP table is not present in DT, then use the LUT from BPMP-FW directy
>> as the frequency table and not do the DRAM frequency scaling which is
>> same as the current behavior.
>>
>> Now, as the CPU Frequency table is being controlling through OPP table
>> in DT. Keeping fewer entries in the table will create less frequency
>> steps and scale fast to high frequencies if required.
> 
> It's not exactly clear what you're doing here. Are you going to scale
> memory BW based on CPU freq? If yes, then this is wrong because CPU freq
> is independent from the memory subsystem.
> 
> All Tegra30+ SoCs have ACTMON hardware unit that monitors CPU memory
> activity and CPU memory BW should be scaled based on CPU memory events
> counter. We have ACTMON devfreq driver for older SoCs. I have no clue
> how ACTMON can be accessed on T186+, perhaps there should be a BPMP FW
> API for that.
> 

Yes, we are scaling the memory BW based on CPU freq.
I referred to the patch set below for the previous generation of Tegra
SoCs which you mentioned and tried to trace the history.

https://patchwork.ozlabs.org/project/linux-tegra/patch/1418719298-25314-3-git-send-email-tomeu.vizoso@collabora.com/

In the new Tegra SoCs, ACTMON counter control and usage have moved to the
BPMP-FW, where only the 'MCALL' counter is used and 'MCCPU' is not.
Using the ACTMON counter was a reactive way to scale the frequency, which
is less effective due to averaging over a time period.
We are now using a proactive approach where clients state their bandwidth
needs to help achieve better performance.
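
For context, a hedged example of what the proactive model looks like from
a client driver's point of view, using the standard ICC consumer API (the
path name and bandwidth values below are purely illustrative):

#include <linux/interconnect.h>

static int example_request_dram_bw(struct device *dev)
{
	struct icc_path *path;

	/* "write" is an assumed interconnect path name for illustration */
	path = devm_of_icc_get(dev, "write");
	if (IS_ERR(path))
		return PTR_ERR(path);

	/* placeholder avg/peak bandwidth values, in kBps */
	return icc_set_bw(path, kBps_to_icc(204800), kBps_to_icc(409600));
}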

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [Patch v1 05/10] dt-bindings: tegra: add icc ids for dummy MC clients
  2022-12-22 11:29   ` Krzysztof Kozlowski
@ 2023-01-13 14:44     ` Sumit Gupta
  0 siblings, 0 replies; 53+ messages in thread
From: Sumit Gupta @ 2023-01-13 14:44 UTC (permalink / raw)
  To: Krzysztof Kozlowski, treding, dmitry.osipenko, viresh.kumar,
	rafael, jonathanh, robh+dt, linux-kernel, linux-tegra, linux-pm,
	devicetree
  Cc: sanjayc, ksitaraman, ishah, bbasu, Sumit Gupta



On 22/12/22 16:59, Krzysztof Kozlowski wrote:
> External email: Use caution opening links or attachments
> 
> 
> On 20/12/2022 17:02, Sumit Gupta wrote:
>> Adding ICC id's for dummy software clients representing CCPLEX clusters.
>>
>> Signed-off-by: Sumit Gupta <sumitg@nvidia.com>
>> ---
>>   include/dt-bindings/memory/tegra234-mc.h | 5 +++++
>>   1 file changed, 5 insertions(+)
>>
>> diff --git a/include/dt-bindings/memory/tegra234-mc.h b/include/dt-bindings/memory/tegra234-mc.h
>> index 347e55e89a2a..6e60d55491b3 100644
>> --- a/include/dt-bindings/memory/tegra234-mc.h
>> +++ b/include/dt-bindings/memory/tegra234-mc.h
>> @@ -536,4 +536,9 @@
>>   #define TEGRA234_MEMORY_CLIENT_NVJPG1SRD 0x123
>>   #define TEGRA234_MEMORY_CLIENT_NVJPG1SWR 0x124
>>
>> +/* ICC ID's for dummy MC clients used to represent CPU Clusters */
>> +#define TEGRA_ICC_MC_CPU_CLUSTER0       1003
> 
> Why the IDs do not start from 0?
> 
> Best regards,
> Krzysztof
> 

MC client IDs start from zero. These IDs are used as "icc_node->id"
while creating the icc_node, so we can't use duplicate numbers.

  $ grep TEGRA234_MEMORY_ "include/dt-bindings/memory/tegra234-mc.h"
  #define TEGRA234_MEMORY_CLIENT_PTCR 0x00
  #define TEGRA234_MEMORY_CLIENT_MIU7R 0x01
  #define TEGRA234_MEMORY_CLIENT_MIU7W 0x02
  #define TEGRA234_MEMORY_CLIENT_MIU8R 0x03
  ....

The dummy IDs already in use start from '1000'. So I continued from that
number, as the IDs used for creating the "icc_node" for the CPU clusters
are also dummy and are not part of the HW MC client list.

  $ grep "100[0-9]" ./drivers/memory/tegra/mc.h
  #define TEGRA_ICC_MC            1000
  #define TEGRA_ICC_EMC           1001
  #define TEGRA_ICC_EMEM          1002
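
For illustration only, the per-cluster dummy IDs then continue that
software-only range (CLUSTER0 is 1003 in this series; the CLUSTER1/2
values below are assumptions, not taken from the patch):

  #define TEGRA_ICC_MC              1000
  #define TEGRA_ICC_EMC             1001
  #define TEGRA_ICC_EMEM            1002
  /* dummy CPU-cluster clients continue the software-only range */
  #define TEGRA_ICC_MC_CPU_CLUSTER0 1003
  #define TEGRA_ICC_MC_CPU_CLUSTER1 1004   /* assumed */
  #define TEGRA_ICC_MC_CPU_CLUSTER2 1005   /* assumed */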




^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [Patch v1 03/10] memory: tegra: add pcie mc clients for Tegra234
  2022-12-22 11:33   ` Krzysztof Kozlowski
@ 2023-01-13 14:51     ` Sumit Gupta
  0 siblings, 0 replies; 53+ messages in thread
From: Sumit Gupta @ 2023-01-13 14:51 UTC (permalink / raw)
  To: Krzysztof Kozlowski, treding, dmitry.osipenko, viresh.kumar,
	rafael, jonathanh, robh+dt, linux-kernel, linux-tegra, linux-pm,
	devicetree
  Cc: sanjayc, ksitaraman, ishah, bbasu, Sumit Gupta



On 22/12/22 17:03, Krzysztof Kozlowski wrote:
> External email: Use caution opening links or attachments
> 
> 
> On 20/12/2022 17:02, Sumit Gupta wrote:
>> Adding PCIE clients representing each controller to the
>> mc_clients table for Tegra234.
>>
>> Signed-off-by: Sumit Gupta <sumitg@nvidia.com>
>> ---
>>   drivers/memory/tegra/tegra234.c | 314 +++++++++++++++++++++++++++++++-
>>   1 file changed, 313 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/memory/tegra/tegra234.c b/drivers/memory/tegra/tegra234.c
>> index 2e37b37da9be..420546270c8b 100644
>> --- a/drivers/memory/tegra/tegra234.c
>> +++ b/drivers/memory/tegra/tegra234.c
>> @@ -465,7 +465,319 @@ static const struct tegra_mc_client tegra234_mc_clients[] = {
>>                                .security = 0x37c,
>>                        },
>>                },
>> -     },
>> +     }, {
> 
> Didn't you change the same structure in previous patch?
> 
> Best regards,
> Krzysztof
> 

Yes, I kept the isochronous (ISO) MC client entries and the PCIE (NISO)
client entries in separate patches.
I will merge them together into a single patch.

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [Patch v1 09/10] memory: tegra: get number of enabled mc channels
  2022-12-22 11:37   ` Krzysztof Kozlowski
@ 2023-01-13 15:04     ` Sumit Gupta
  2023-01-16 16:30       ` Thierry Reding
  0 siblings, 1 reply; 53+ messages in thread
From: Sumit Gupta @ 2023-01-13 15:04 UTC (permalink / raw)
  To: Krzysztof Kozlowski, treding, dmitry.osipenko, viresh.kumar,
	rafael, jonathanh, robh+dt, linux-kernel, linux-tegra, linux-pm,
	devicetree
  Cc: sanjayc, ksitaraman, ishah, bbasu, Sumit Gupta



On 22/12/22 17:07, Krzysztof Kozlowski wrote:
> External email: Use caution opening links or attachments
> 
> 
> On 20/12/2022 17:02, Sumit Gupta wrote:
>> Get number of MC channels which are actually enabled
>> in current boot configuration.
> 
> Why? You don't do anything with it. Commit msg should give the reason of
> changes.
> 
> 
> Best regards,
> Krzysztof
> 

The CPU OPP tables have per-channel bandwidth info. The "mc->num_channels"
value is used in [1] (Patch v1 10/10) to make the per-MC-channel bandwidth
requested by the CPU cluster a multiple of the number of enabled MC
channels.

I will update the commit description with this info.

[1] 
https://lore.kernel.org/lkml/20221220160240.27494-1-sumitg@nvidia.com/T/#m3ac150a86977e89b97c5d19c60384f29d7a01d21
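
A minimal sketch of that multiplication (the function name is assumed for
illustration; the real handling lives in patch 10/10):

static u32 tegra_mc_scale_cluster_bw(const struct tegra_mc *mc, u32 bw_per_channel)
{
	/* per-channel bandwidth from the CPU OPP table times enabled MC channels */
	return bw_per_channel * mc->num_channels;
}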

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [Patch v1 05/10] dt-bindings: tegra: add icc ids for dummy MC clients
  2022-12-20 16:02 ` [Patch v1 05/10] dt-bindings: tegra: add icc ids for dummy MC clients Sumit Gupta
  2022-12-22 11:29   ` Krzysztof Kozlowski
@ 2023-01-13 17:11   ` Krzysztof Kozlowski
  1 sibling, 0 replies; 53+ messages in thread
From: Krzysztof Kozlowski @ 2023-01-13 17:11 UTC (permalink / raw)
  To: Sumit Gupta, treding, dmitry.osipenko, viresh.kumar, rafael,
	jonathanh, robh+dt, linux-kernel, linux-tegra, linux-pm,
	devicetree
  Cc: sanjayc, ksitaraman, ishah, bbasu

On 20/12/2022 17:02, Sumit Gupta wrote:
> Adding ICC id's for dummy software clients representing CCPLEX clusters.
> 
> Signed-off-by: Sumit Gupta <sumitg@nvidia.com>
> ---

Acked-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>

Best regards,
Krzysztof


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [Patch v1 08/10] cpufreq: tegra194: add OPP support and set bandwidth
  2023-01-13 13:50     ` Sumit Gupta
@ 2023-01-16 12:16       ` Dmitry Osipenko
  2023-01-19 10:26         ` Thierry Reding
  0 siblings, 1 reply; 53+ messages in thread
From: Dmitry Osipenko @ 2023-01-16 12:16 UTC (permalink / raw)
  To: Sumit Gupta, Dmitry Osipenko, treding, krzysztof.kozlowski,
	viresh.kumar, rafael, jonathanh, robh+dt, linux-kernel,
	linux-tegra, linux-pm, devicetree
  Cc: sanjayc, ksitaraman, ishah, bbasu, Rajkumar Kasirajan

On 1/13/23 16:50, Sumit Gupta wrote:
> 
> 
> On 22/12/22 21:16, Dmitry Osipenko wrote:
>> External email: Use caution opening links or attachments
>>
>>
>> 20.12.2022 19:02, Sumit Gupta пишет:
>>> Add support to use OPP table from DT in Tegra194 cpufreq driver.
>>> Tegra SoC's receive the frequency lookup table (LUT) from BPMP-FW.
>>> Cross check the OPP's present in DT against the LUT from BPMP-FW
>>> and enable only those DT OPP's which are present in LUT also.
>>>
>>> The OPP table in DT has CPU Frequency to bandwidth mapping where
>>> the bandwidth value is per MC channel. DRAM bandwidth depends on the
>>> number of MC channels which can vary as per the boot configuration.
>>> This per channel bandwidth from OPP table will be later converted by
>>> MC driver to final bandwidth value by multiplying with number of
>>> channels before sending the request to BPMP-FW.
>>>
>>> If OPP table is not present in DT, then use the LUT from BPMP-FW directy
>>> as the frequency table and not do the DRAM frequency scaling which is
>>> same as the current behavior.
>>>
>>> Now, as the CPU Frequency table is being controlling through OPP table
>>> in DT. Keeping fewer entries in the table will create less frequency
>>> steps and scale fast to high frequencies if required.
>>
>> It's not exactly clear what you're doing here. Are you going to scale
>> memory BW based on CPU freq? If yes, then this is wrong because CPU freq
>> is independent from the memory subsystem.
>>
>> All Tegra30+ SoCs have ACTMON hardware unit that monitors CPU memory
>> activity and CPU memory BW should be scaled based on CPU memory events
>> counter. We have ACTMON devfreq driver for older SoCs. I have no clue
>> how ACTMON can be accessed on T186+, perhaps there should be a BPMP FW
>> API for that.
>>
> 
> Yes, scaling the memory BW based on CPU freq.
> Referred below patch set for previous generation of Tegra Soc's which
> you mentioned and tried to trace the history.
> 
> https://patchwork.ozlabs.org/project/linux-tegra/patch/1418719298-25314-3-git-send-email-tomeu.vizoso@collabora.com/
> 
> In new Tegra Soc's, actmon counter control and usage has been moved to
> BPMP-FW where only 'MCALL' counter is used and 'MCCPU is not being used.
> Using the actmon counter was a reactive way to scale the frequency which
> is less effective due to averaging over a time period.
> We are now using the proactive way where clients tell their bandwidth
> needs to help achieve better performance.

You don't know what bandwidth the CPU needs; you are trying to guess it.

Using cpufreq for memory bandwidth scaling would be a bad decision.
You'll be wasting memory power 90% of the time because cpufreq has no
direct relation to the DRAM; your heuristics will be wrong and won't do
anything good compared to using ACTMON. The L2 CPU cache + memory
prefetching hide memory from the CPU. And cpufreq should be less reactive
than ACTMON in general.

Scaling memory freq based on cpufreq is what the downstream NV kernel did
10+ years ago for the oldest Tegra generations. Today upstream has all
the necessary infrastructure for doing memory bandwidth scaling properly,
and we are even using h/w memory counters on T20. It's strange that you
want to bring this downstream archaism to the modern upstream for the
latest Tegra generations.

If you can skip the BPMP-FW and use ACTMON directly from the kernel, then
that's what I suggest doing.

-- 
Best regards,
Dmitry


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [Patch v1 06/10] arm64: tegra: Add cpu OPP tables and interconnects property
  2022-12-20 16:02 ` [Patch v1 06/10] arm64: tegra: Add cpu OPP tables and interconnects property Sumit Gupta
@ 2023-01-16 16:29   ` Thierry Reding
  0 siblings, 0 replies; 53+ messages in thread
From: Thierry Reding @ 2023-01-16 16:29 UTC (permalink / raw)
  To: Sumit Gupta
  Cc: treding, krzysztof.kozlowski, dmitry.osipenko, viresh.kumar,
	rafael, jonathanh, robh+dt, linux-kernel, linux-tegra, linux-pm,
	devicetree, sanjayc, ksitaraman, ishah, bbasu

[-- Attachment #1: Type: text/plain, Size: 1923 bytes --]

On Tue, Dec 20, 2022 at 09:32:36PM +0530, Sumit Gupta wrote:
> Add OPP table and interconnects property required to scale DDR
> frequency for better performance. The OPP table has CPU frequency
> to per MC channel bandwidth mapping in each operating point entry.
> One table is added for each cluster even though the table data is
> same because the bandwidth request is per cluster. OPP framework
> is creating a single icc path if the table is marked 'opp-shared'
> and shared among all clusters. For us the OPP table is same but
> the MC client ID argument to interconnects property is different
> for each cluster which makes different icc path for all.
> 
> Signed-off-by: Sumit Gupta <sumitg@nvidia.com>
> ---
>  arch/arm64/boot/dts/nvidia/tegra234.dtsi | 276 +++++++++++++++++++++++
>  1 file changed, 276 insertions(+)
> 
> diff --git a/arch/arm64/boot/dts/nvidia/tegra234.dtsi b/arch/arm64/boot/dts/nvidia/tegra234.dtsi
> index eaf05ee9acd1..ed7d0f7da431 100644
> --- a/arch/arm64/boot/dts/nvidia/tegra234.dtsi
> +++ b/arch/arm64/boot/dts/nvidia/tegra234.dtsi
> @@ -2840,6 +2840,9 @@
>  
>  			enable-method = "psci";
>  
> +			operating-points-v2 = <&cl0_opp_tbl>;
> +			interconnects = <&mc TEGRA_ICC_MC_CPU_CLUSTER0 &emc>;

I dislike how this muddies the water between hardware and software
description. We don't have a hardware client ID for the CPU clusters, so
there's no good way to describe this in a hardware-centric way. We used
to have MPCORE read and write clients for this, but as far as I know
they used to be for the entire CCPLEX rather than per-cluster. It'd be
interesting to know what the BPMP does underneath, perhaps that could
give some indication as to what would be a better hardware value to use
for this.

Failing that, I wonder if a combination of icc_node_create() and
icc_get() can be used for this type of "virtual node" special case.
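
A loose sketch of that alternative, purely to illustrate the shape (the
function itself is hypothetical; the IDs reuse the ones introduced in
this series):

static struct icc_path *tegra_mc_get_cluster_path(struct device *dev, struct tegra_mc *mc)
{
	struct icc_node *node;
	int err;

	/* software-only ("virtual") node for the CPU cluster */
	node = icc_node_create(TEGRA_ICC_MC_CPU_CLUSTER0);
	if (IS_ERR(node))
		return ERR_CAST(node);

	node->name = "cpu-cluster0";
	icc_node_add(node, &mc->provider);

	err = icc_link_create(node, TEGRA_ICC_MC);
	if (err)
		return ERR_PTR(err);

	/* consumer side: resolve the path without an interconnects DT property */
	return icc_get(dev, TEGRA_ICC_MC_CPU_CLUSTER0, TEGRA_ICC_EMC);
}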

Thierry

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [Patch v1 09/10] memory: tegra: get number of enabled mc channels
  2023-01-13 15:04     ` Sumit Gupta
@ 2023-01-16 16:30       ` Thierry Reding
  0 siblings, 0 replies; 53+ messages in thread
From: Thierry Reding @ 2023-01-16 16:30 UTC (permalink / raw)
  To: Sumit Gupta
  Cc: Krzysztof Kozlowski, treding, dmitry.osipenko, viresh.kumar,
	rafael, jonathanh, robh+dt, linux-kernel, linux-tegra, linux-pm,
	devicetree, sanjayc, ksitaraman, ishah, bbasu

[-- Attachment #1: Type: text/plain, Size: 1051 bytes --]

On Fri, Jan 13, 2023 at 08:34:18PM +0530, Sumit Gupta wrote:
> 
> 
> On 22/12/22 17:07, Krzysztof Kozlowski wrote:
> > External email: Use caution opening links or attachments
> > 
> > 
> > On 20/12/2022 17:02, Sumit Gupta wrote:
> > > Get number of MC channels which are actually enabled
> > > in current boot configuration.
> > 
> > Why? You don't do anything with it. Commit msg should give the reason of
> > changes.
> > 
> > 
> > Best regards,
> > Krzysztof
> > 
> 
> CPU OPP tables have per channel bandwidth info. The "mc->num_channels" is
> used in [1] (Patch v1 10/10) to make the per MC channel bandwidth requested
> by the CPU cluster as a multiple of number of the enabled mc channels.
> 
> Will update the commit description with this info.
> 
> [1] https://lore.kernel.org/lkml/20221220160240.27494-1-sumitg@nvidia.com/T/#m3ac150a86977e89b97c5d19c60384f29d7a01d21

Both patch 9 and 10 are reasonably small, so it would be okay to merge
the two patches and avoid any need for an extra explanation.

Thierry

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [Patch v1 01/10] memory: tegra: add interconnect support for DRAM scaling in Tegra234
  2022-12-21 16:44       ` Dmitry Osipenko
@ 2023-01-17 13:03         ` Sumit Gupta
  0 siblings, 0 replies; 53+ messages in thread
From: Sumit Gupta @ 2023-01-17 13:03 UTC (permalink / raw)
  To: Dmitry Osipenko, treding, krzysztof.kozlowski, dmitry.osipenko,
	viresh.kumar, rafael, jonathanh, robh+dt, linux-kernel,
	linux-tegra, linux-pm, devicetree
  Cc: sanjayc, ksitaraman, ishah, bbasu, Sumit Gupta



On 21/12/22 22:14, Dmitry Osipenko wrote:
> External email: Use caution opening links or attachments
> 
> 
> 21.12.2022 11:05, Sumit Gupta пишет:
>> On 20/12/22 23:37, Dmitry Osipenko wrote:
>>> External email: Use caution opening links or attachments
>>>
>>>
>>> 20.12.2022 19:02, Sumit Gupta пишет:
>>>> +#ifndef MEMORY_TEGRA_ICC_H
>>>> +#define MEMORY_TEGRA_ICC_H
>>>> +
>>>> +enum tegra_icc_client_type {
>>>> +     TEGRA_ICC_NONE,
>>>> +     TEGRA_ICC_NISO,
>>>> +     TEGRA_ICC_ISO_DISPLAY,
>>>> +     TEGRA_ICC_ISO_VI,
>>>> +     TEGRA_ICC_ISO_AUDIO,
>>>> +     TEGRA_ICC_ISO_VIFAL,
>>>> +};
>>>
>>> You using only TEGRA_ICC_NISO and !TEGRA_ICC_NISO in the code.
>>>
>>> include/soc/tegra/mc.h defines TAG_DEFAULT/ISO, please drop all these
>>> duplicated and unused "types" unless there is a good reason to keep them
>>>
>>
>> These type are used while defining clients in "tegra234_mc_clients[]"
>> and its passed to BPMP-FW which has handling for each client type.
> 
> The type should be based on the ICC tag, IMO. AFAICS, type isn't fixed
> in FW and you can set both ISO and NISO BW, hence it's up to a device
> driver to select the appropriate tag.
> 

The type for an MC client is fixed, so adding the tag and giving the
option to the client driver won't have an impact.
Also, we need to pass the type to the BPMP from the bandwidth set API,
but the tag info is only available to the aggregate API and not to set.

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [Patch v1 08/10] cpufreq: tegra194: add OPP support and set bandwidth
  2023-01-16 12:16       ` Dmitry Osipenko
@ 2023-01-19 10:26         ` Thierry Reding
  2023-01-19 13:01           ` Dmitry Osipenko
  0 siblings, 1 reply; 53+ messages in thread
From: Thierry Reding @ 2023-01-19 10:26 UTC (permalink / raw)
  To: Dmitry Osipenko
  Cc: Sumit Gupta, Dmitry Osipenko, treding, krzysztof.kozlowski,
	viresh.kumar, rafael, jonathanh, robh+dt, linux-kernel,
	linux-tegra, linux-pm, devicetree, sanjayc, ksitaraman, ishah,
	bbasu, Rajkumar Kasirajan

[-- Attachment #1: Type: text/plain, Size: 4112 bytes --]

On Mon, Jan 16, 2023 at 03:16:48PM +0300, Dmitry Osipenko wrote:
> On 1/13/23 16:50, Sumit Gupta wrote:
> > 
> > 
> > On 22/12/22 21:16, Dmitry Osipenko wrote:
> >> External email: Use caution opening links or attachments
> >>
> >>
> >> 20.12.2022 19:02, Sumit Gupta пишет:
> >>> Add support to use OPP table from DT in Tegra194 cpufreq driver.
> >>> Tegra SoC's receive the frequency lookup table (LUT) from BPMP-FW.
> >>> Cross check the OPP's present in DT against the LUT from BPMP-FW
> >>> and enable only those DT OPP's which are present in LUT also.
> >>>
> >>> The OPP table in DT has CPU Frequency to bandwidth mapping where
> >>> the bandwidth value is per MC channel. DRAM bandwidth depends on the
> >>> number of MC channels which can vary as per the boot configuration.
> >>> This per channel bandwidth from OPP table will be later converted by
> >>> MC driver to final bandwidth value by multiplying with number of
> >>> channels before sending the request to BPMP-FW.
> >>>
> >>> If OPP table is not present in DT, then use the LUT from BPMP-FW directy
> >>> as the frequency table and not do the DRAM frequency scaling which is
> >>> same as the current behavior.
> >>>
> >>> Now, as the CPU Frequency table is being controlling through OPP table
> >>> in DT. Keeping fewer entries in the table will create less frequency
> >>> steps and scale fast to high frequencies if required.
> >>
> >> It's not exactly clear what you're doing here. Are you going to scale
> >> memory BW based on CPU freq? If yes, then this is wrong because CPU freq
> >> is independent from the memory subsystem.
> >>
> >> All Tegra30+ SoCs have ACTMON hardware unit that monitors CPU memory
> >> activity and CPU memory BW should be scaled based on CPU memory events
> >> counter. We have ACTMON devfreq driver for older SoCs. I have no clue
> >> how ACTMON can be accessed on T186+, perhaps there should be a BPMP FW
> >> API for that.
> >>
> > 
> > Yes, scaling the memory BW based on CPU freq.
> > Referred below patch set for previous generation of Tegra Soc's which
> > you mentioned and tried to trace the history.
> > 
> > https://patchwork.ozlabs.org/project/linux-tegra/patch/1418719298-25314-3-git-send-email-tomeu.vizoso@collabora.com/
> > 
> > In new Tegra Soc's, actmon counter control and usage has been moved to
> > BPMP-FW where only 'MCALL' counter is used and 'MCCPU is not being used.
> > Using the actmon counter was a reactive way to scale the frequency which
> > is less effective due to averaging over a time period.
> > We are now using the proactive way where clients tell their bandwidth
> > needs to help achieve better performance.
> 
> You don't know what bandwidth CPU needs, you trying to guess it.
> 
> It should be a bad decision to use cpufreq for memory bandwidth scaling.
> You'll be wasting memory power 90% of time because cpufreq doesn't have
> relation to the DRAM, your heuristics will be wrong and won't do
> anything good compared to using ACTMON. The L2 CPU cache + memory
> prefetching hides memory from CPU. And cpufreq should be less reactive
> than ACTMON in general.
> 
> Scaling memory freq based on cpufreq is what downstream NV kernel did
> 10+ years ago for the oldest Tegra generations. Today upstream has all
> the necessary infrastructure for doing memory bandwidth scaling properly
> and we even using h/w memory counters on T20. It's strange that you want
> to bring the downstream archaity to the modern upstream for the latest
> Tegra generations.
> 
> If you can skip the BPMP-FW and use ACTMON directly from kernel, then
> that's what I suggest to do.

After talking to a few people, it turns out that BPMP is already using
ACTMON internally to do the actual scaling of the EMC frequency (or the
CPU's contribution to that). So BPMP will use ACTMON counters to monitor
the effective memory load of the CPU and adjust the EMC frequency. The
bandwidth request that we generate from the cpufreq driver is more of a
guideline for the maximum bandwidth we might consume.

Thierry

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [Patch v1 08/10] cpufreq: tegra194: add OPP support and set bandwidth
  2023-01-19 10:26         ` Thierry Reding
@ 2023-01-19 13:01           ` Dmitry Osipenko
  2023-02-06 13:31             ` Sumit Gupta
  0 siblings, 1 reply; 53+ messages in thread
From: Dmitry Osipenko @ 2023-01-19 13:01 UTC (permalink / raw)
  To: Thierry Reding
  Cc: Sumit Gupta, Dmitry Osipenko, treding, krzysztof.kozlowski,
	viresh.kumar, rafael, jonathanh, robh+dt, linux-kernel,
	linux-tegra, linux-pm, devicetree, sanjayc, ksitaraman, ishah,
	bbasu, Rajkumar Kasirajan

On 1/19/23 13:26, Thierry Reding wrote:
> On Mon, Jan 16, 2023 at 03:16:48PM +0300, Dmitry Osipenko wrote:
>> On 1/13/23 16:50, Sumit Gupta wrote:
>>>
>>>
>>> On 22/12/22 21:16, Dmitry Osipenko wrote:
>>>> External email: Use caution opening links or attachments
>>>>
>>>>
>>>> 20.12.2022 19:02, Sumit Gupta пишет:
>>>>> Add support to use OPP table from DT in Tegra194 cpufreq driver.
>>>>> Tegra SoC's receive the frequency lookup table (LUT) from BPMP-FW.
>>>>> Cross check the OPP's present in DT against the LUT from BPMP-FW
>>>>> and enable only those DT OPP's which are present in LUT also.
>>>>>
>>>>> The OPP table in DT has CPU Frequency to bandwidth mapping where
>>>>> the bandwidth value is per MC channel. DRAM bandwidth depends on the
>>>>> number of MC channels which can vary as per the boot configuration.
>>>>> This per channel bandwidth from OPP table will be later converted by
>>>>> MC driver to final bandwidth value by multiplying with number of
>>>>> channels before sending the request to BPMP-FW.
>>>>>
>>>>> If OPP table is not present in DT, then use the LUT from BPMP-FW directy
>>>>> as the frequency table and not do the DRAM frequency scaling which is
>>>>> same as the current behavior.
>>>>>
>>>>> Now, as the CPU Frequency table is being controlling through OPP table
>>>>> in DT. Keeping fewer entries in the table will create less frequency
>>>>> steps and scale fast to high frequencies if required.
>>>>
>>>> It's not exactly clear what you're doing here. Are you going to scale
>>>> memory BW based on CPU freq? If yes, then this is wrong because CPU freq
>>>> is independent from the memory subsystem.
>>>>
>>>> All Tegra30+ SoCs have ACTMON hardware unit that monitors CPU memory
>>>> activity and CPU memory BW should be scaled based on CPU memory events
>>>> counter. We have ACTMON devfreq driver for older SoCs. I have no clue
>>>> how ACTMON can be accessed on T186+, perhaps there should be a BPMP FW
>>>> API for that.
>>>>
>>>
>>> Yes, scaling the memory BW based on CPU freq.
>>> Referred below patch set for previous generation of Tegra Soc's which
>>> you mentioned and tried to trace the history.
>>>
>>> https://patchwork.ozlabs.org/project/linux-tegra/patch/1418719298-25314-3-git-send-email-tomeu.vizoso@collabora.com/
>>>
>>> In new Tegra Soc's, actmon counter control and usage has been moved to
>>> BPMP-FW where only 'MCALL' counter is used and 'MCCPU is not being used.
>>> Using the actmon counter was a reactive way to scale the frequency which
>>> is less effective due to averaging over a time period.
>>> We are now using the proactive way where clients tell their bandwidth
>>> needs to help achieve better performance.
>>
>> You don't know what bandwidth CPU needs, you trying to guess it.
>>
>> It should be a bad decision to use cpufreq for memory bandwidth scaling.
>> You'll be wasting memory power 90% of time because cpufreq doesn't have
>> relation to the DRAM, your heuristics will be wrong and won't do
>> anything good compared to using ACTMON. The L2 CPU cache + memory
>> prefetching hides memory from CPU. And cpufreq should be less reactive
>> than ACTMON in general.
>>
>> Scaling memory freq based on cpufreq is what downstream NV kernel did
>> 10+ years ago for the oldest Tegra generations. Today upstream has all
>> the necessary infrastructure for doing memory bandwidth scaling properly
>> and we even using h/w memory counters on T20. It's strange that you want
>> to bring the downstream archaity to the modern upstream for the latest
>> Tegra generations.
>>
>> If you can skip the BPMP-FW and use ACTMON directly from kernel, then
>> that's what I suggest to do.
> 
> After talking to a few people, it turns out that BPMP is already using
> ACTMON internally to do the actual scaling of the EMC frequency (or the
> CPUs contribution to that). So BPMP will use ACTMON counters to monitor
> the effective memory load of the CPU and adjust the EMC frequency. The
> bandwidth request that we generate from the cpufreq driver is more of a
> guideline for the maximum bandwidth we might consume.

Our kernel ACTMON driver uses cpufreq for guiding the EMC freq. Driving
the EMC rate solely based on cpufreq would not be a good decision. So does
this mean you're now going to extend the ACTMON driver with BPMP support?

-- 
Best regards,
Dmitry


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [Patch v1 08/10] cpufreq: tegra194: add OPP support and set bandwidth
  2023-01-19 13:01           ` Dmitry Osipenko
@ 2023-02-06 13:31             ` Sumit Gupta
  0 siblings, 0 replies; 53+ messages in thread
From: Sumit Gupta @ 2023-02-06 13:31 UTC (permalink / raw)
  To: Dmitry Osipenko, Thierry Reding
  Cc: Dmitry Osipenko, treding, krzysztof.kozlowski, viresh.kumar,
	rafael, jonathanh, robh+dt, linux-kernel, linux-tegra, linux-pm,
	devicetree, sanjayc, ksitaraman, ishah, bbasu,
	Rajkumar Kasirajan, Sumit Gupta



On 19/01/23 18:31, Dmitry Osipenko wrote:
> External email: Use caution opening links or attachments
> 
> 
> On 1/19/23 13:26, Thierry Reding wrote:
>> On Mon, Jan 16, 2023 at 03:16:48PM +0300, Dmitry Osipenko wrote:
>>> On 1/13/23 16:50, Sumit Gupta wrote:
>>>>
>>>>
>>>> On 22/12/22 21:16, Dmitry Osipenko wrote:
>>>>> External email: Use caution opening links or attachments
>>>>>
>>>>>
>>>>> 20.12.2022 19:02, Sumit Gupta пишет:
>>>>>> Add support to use OPP table from DT in Tegra194 cpufreq driver.
>>>>>> Tegra SoC's receive the frequency lookup table (LUT) from BPMP-FW.
>>>>>> Cross check the OPP's present in DT against the LUT from BPMP-FW
>>>>>> and enable only those DT OPP's which are present in LUT also.
>>>>>>
>>>>>> The OPP table in DT has CPU Frequency to bandwidth mapping where
>>>>>> the bandwidth value is per MC channel. DRAM bandwidth depends on the
>>>>>> number of MC channels which can vary as per the boot configuration.
>>>>>> This per channel bandwidth from OPP table will be later converted by
>>>>>> MC driver to final bandwidth value by multiplying with number of
>>>>>> channels before sending the request to BPMP-FW.
>>>>>>
>>>>>> If OPP table is not present in DT, then use the LUT from BPMP-FW directy
>>>>>> as the frequency table and not do the DRAM frequency scaling which is
>>>>>> same as the current behavior.
>>>>>>
>>>>>> Now, as the CPU Frequency table is being controlling through OPP table
>>>>>> in DT. Keeping fewer entries in the table will create less frequency
>>>>>> steps and scale fast to high frequencies if required.
>>>>>
>>>>> It's not exactly clear what you're doing here. Are you going to scale
>>>>> memory BW based on CPU freq? If yes, then this is wrong because CPU freq
>>>>> is independent from the memory subsystem.
>>>>>
>>>>> All Tegra30+ SoCs have ACTMON hardware unit that monitors CPU memory
>>>>> activity and CPU memory BW should be scaled based on CPU memory events
>>>>> counter. We have ACTMON devfreq driver for older SoCs. I have no clue
>>>>> how ACTMON can be accessed on T186+, perhaps there should be a BPMP FW
>>>>> API for that.
>>>>>
>>>>
>>>> Yes, scaling the memory BW based on CPU freq.
>>>> Referred below patch set for previous generation of Tegra Soc's which
>>>> you mentioned and tried to trace the history.
>>>>
>>>> https://patchwork.ozlabs.org/project/linux-tegra/patch/1418719298-25314-3-git-send-email-tomeu.vizoso@collabora.com/
>>>>
>>>> In new Tegra Soc's, actmon counter control and usage has been moved to
>>>> BPMP-FW where only 'MCALL' counter is used and 'MCCPU is not being used.
>>>> Using the actmon counter was a reactive way to scale the frequency which
>>>> is less effective due to averaging over a time period.
>>>> We are now using the proactive way where clients tell their bandwidth
>>>> needs to help achieve better performance.
>>>
>>> You don't know what bandwidth CPU needs, you trying to guess it.
>>>
>>> It should be a bad decision to use cpufreq for memory bandwidth scaling.
>>> You'll be wasting memory power 90% of time because cpufreq doesn't have
>>> relation to the DRAM, your heuristics will be wrong and won't do
>>> anything good compared to using ACTMON. The L2 CPU cache + memory
>>> prefetching hides memory from CPU. And cpufreq should be less reactive
>>> than ACTMON in general.
>>>
>>> Scaling memory freq based on cpufreq is what downstream NV kernel did
>>> 10+ years ago for the oldest Tegra generations. Today upstream has all
>>> the necessary infrastructure for doing memory bandwidth scaling properly
>>> and we even using h/w memory counters on T20. It's strange that you want
>>> to bring the downstream archaity to the modern upstream for the latest
>>> Tegra generations.
>>>
>>> If you can skip the BPMP-FW and use ACTMON directly from kernel, then
>>> that's what I suggest to do.
>>
>> After talking to a few people, it turns out that BPMP is already using
>> ACTMON internally to do the actual scaling of the EMC frequency (or the
>> CPUs contribution to that). So BPMP will use ACTMON counters to monitor
>> the effective memory load of the CPU and adjust the EMC frequency. The
>> bandwidth request that we generate from the cpufreq driver is more of a
>> guideline for the maximum bandwidth we might consume.
> 
> Our kernel ACTMON driver uses cpufreq for guiding the EMC freq. Driving
> EMC rate solely based on cpufreq would be a not good decision. So does
> it mean you're now going to extend the ACTMON driver with the BPMP support?
> 
> --
> Best regards,
> Dmitry
> 

We are using ACTMON in the BPMP-FW now, and there is no plan to move it
back to the kernel for future SoCs, so the ACTMON kernel driver won't be
used on T234 and later SoCs.
The current patches scale the DRAM frequency with the CPU frequency on T234.
For future SoCs, we are planning a new solution for this in the BPMP-FW
which will provide a better metric, in addition to ACTMON's, for scaling
the DRAM frequency for the CPU clusters.

Thanks,
Sumit

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [Patch v1 01/10] memory: tegra: add interconnect support for DRAM scaling in Tegra234
  2022-12-22 11:32   ` Krzysztof Kozlowski
@ 2023-03-06 19:28     ` Sumit Gupta
  0 siblings, 0 replies; 53+ messages in thread
From: Sumit Gupta @ 2023-03-06 19:28 UTC (permalink / raw)
  To: Krzysztof Kozlowski, treding, dmitry.osipenko, viresh.kumar,
	rafael, jonathanh, robh+dt, linux-kernel, linux-tegra, linux-pm,
	devicetree
  Cc: sanjayc, ksitaraman, ishah, bbasu, Sumit Gupta



On 22/12/22 17:02, Krzysztof Kozlowski wrote:
> External email: Use caution opening links or attachments
> 
> 
> On 20/12/2022 17:02, Sumit Gupta wrote:
>> Adding Interconnect framework support to dynamically set the DRAM
>> bandwidth from different clients. Both the MC and EMC drivers are
>> added as ICC providers. The path for any request will be:
>>   MC-Client[1-n] -> MC -> EMC -> EMEM/DRAM
>>
>> MC clients will request for bandwidth to the MC driver which will
>> pass the tegra icc node having current request info to the EMC driver.
>> The EMC driver will send the BPMP Client ID, Client type and bandwidth
>> request info to the BPMP-FW where the final DRAM freq for achieving the
>> requested bandwidth is set based on the passed parameters.
>>
>> Signed-off-by: Sumit Gupta <sumitg@nvidia.com>
>> ---
>>   drivers/memory/tegra/mc.c           |  18 ++-
>>   drivers/memory/tegra/tegra186-emc.c | 166 ++++++++++++++++++++++++++++
>>   drivers/memory/tegra/tegra234.c     | 101 ++++++++++++++++-
>>   include/soc/tegra/mc.h              |   7 ++
>>   include/soc/tegra/tegra-icc.h       |  72 ++++++++++++
>>   5 files changed, 362 insertions(+), 2 deletions(-)
>>   create mode 100644 include/soc/tegra/tegra-icc.h
>>
>> diff --git a/drivers/memory/tegra/mc.c b/drivers/memory/tegra/mc.c
>> index 592907546ee6..ff887fb03bce 100644
>> --- a/drivers/memory/tegra/mc.c
>> +++ b/drivers/memory/tegra/mc.c
>> @@ -17,6 +17,7 @@
>>   #include <linux/sort.h>
>>
>>   #include <soc/tegra/fuse.h>
>> +#include <soc/tegra/tegra-icc.h>
>>
>>   #include "mc.h"
>>
>> @@ -779,6 +780,7 @@ const char *const tegra_mc_error_names[8] = {
>>    */
>>   static int tegra_mc_interconnect_setup(struct tegra_mc *mc)
>>   {
>> +     struct tegra_icc_node *tnode;
>>        struct icc_node *node;
>>        unsigned int i;
>>        int err;
>> @@ -792,7 +794,11 @@ static int tegra_mc_interconnect_setup(struct tegra_mc *mc)
>>        mc->provider.data = &mc->provider;
>>        mc->provider.set = mc->soc->icc_ops->set;
>>        mc->provider.aggregate = mc->soc->icc_ops->aggregate;
>> -     mc->provider.xlate_extended = mc->soc->icc_ops->xlate_extended;
>> +     mc->provider.get_bw = mc->soc->icc_ops->get_bw;
>> +     if (mc->soc->icc_ops->xlate)
>> +             mc->provider.xlate = mc->soc->icc_ops->xlate;
>> +     if (mc->soc->icc_ops->xlate_extended)
>> +             mc->provider.xlate_extended = mc->soc->icc_ops->xlate_extended;
>>
>>        err = icc_provider_add(&mc->provider);
>>        if (err)
>> @@ -814,6 +820,10 @@ static int tegra_mc_interconnect_setup(struct tegra_mc *mc)
>>                goto remove_nodes;
>>
>>        for (i = 0; i < mc->soc->num_clients; i++) {
>> +             tnode = kzalloc(sizeof(*tnode), GFP_KERNEL);
>> +             if (!tnode)
>> +                     return -ENOMEM;
>> +
>>                /* create MC client node */
>>                node = icc_node_create(mc->soc->clients[i].id);
>>                if (IS_ERR(node)) {
>> @@ -828,6 +838,12 @@ static int tegra_mc_interconnect_setup(struct tegra_mc *mc)
>>                err = icc_link_create(node, TEGRA_ICC_MC);
>>                if (err)
>>                        goto remove_nodes;
>> +
>> +             node->data = tnode;
> 
> Where is it freed?
> 
> 
> (...)
> 
I have removed 'struct tegra_icc_node' in v2. Instead, 'node->data'
now points to the MC client's entry in the static table
'tegra234_mc_clients', as below. So the old allocation of 'struct
tegra_icc_node' is no longer required.

  + node->data = (char *)&(mc->soc->clients[i]);

>>
>>   struct tegra_mc_ops {
>> @@ -238,6 +243,8 @@ struct tegra_mc {
>>        struct {
>>                struct dentry *root;
>>        } debugfs;
>> +
>> +     struct tegra_icc_node *curr_tnode;
>>   };
>>
>>   int tegra_mc_write_emem_configuration(struct tegra_mc *mc, unsigned long rate);
>> diff --git a/include/soc/tegra/tegra-icc.h b/include/soc/tegra/tegra-icc.h
>> new file mode 100644
>> index 000000000000..3855d8571281
>> --- /dev/null
>> +++ b/include/soc/tegra/tegra-icc.h
> 
> Why not in linux?
> 
Moved the file to 'include/linux/tegra-icc.h' in v2.

>> @@ -0,0 +1,72 @@
>> +/* SPDX-License-Identifier: GPL-2.0-only */
>> +/*
>> + * Copyright (C) 2022-2023 NVIDIA CORPORATION.  All rights reserved.
>> + */
>> +
>> +#ifndef MEMORY_TEGRA_ICC_H
> 
> This does not match the path/name.
> 
I have changed the path name in v2, as below.

   +#ifndef LINUX_TEGRA_ICC_H
   +#define LINUX_TEGRA_ICC_H

>> +#define MEMORY_TEGRA_ICC_H
>> +
>> +enum tegra_icc_client_type {
>> +     TEGRA_ICC_NONE,
>> +     TEGRA_ICC_NISO,
>> +     TEGRA_ICC_ISO_DISPLAY,
>> +     TEGRA_ICC_ISO_VI,
>> +     TEGRA_ICC_ISO_AUDIO,
>> +     TEGRA_ICC_ISO_VIFAL,
>> +};
>> +
>> +struct tegra_icc_node {
>> +     struct icc_node *node;
>> +     struct tegra_mc *mc;
>> +     u32 bpmp_id;
>> +     u32 type;
>> +};
>> +
>> +/* ICC ID's for MC client's used in BPMP */
>> +#define TEGRA_ICC_BPMP_DEBUG         1
>> +#define TEGRA_ICC_BPMP_CPU_CLUSTER0  2
>> +#define TEGRA_ICC_BPMP_CPU_CLUSTER1  3
>> +#define TEGRA_ICC_BPMP_CPU_CLUSTER2  4
>> +#define TEGRA_ICC_BPMP_GPU           5
>> +#define TEGRA_ICC_BPMP_CACTMON               6
>> +#define TEGRA_ICC_BPMP_DISPLAY               7
>> +#define TEGRA_ICC_BPMP_VI            8
>> +#define TEGRA_ICC_BPMP_EQOS          9
>> +#define TEGRA_ICC_BPMP_PCIE_0                10
>> +#define TEGRA_ICC_BPMP_PCIE_1                11
>> +#define TEGRA_ICC_BPMP_PCIE_2                12
>> +#define TEGRA_ICC_BPMP_PCIE_3                13
>> +#define TEGRA_ICC_BPMP_PCIE_4                14
>> +#define TEGRA_ICC_BPMP_PCIE_5                15
>> +#define TEGRA_ICC_BPMP_PCIE_6                16
>> +#define TEGRA_ICC_BPMP_PCIE_7                17
>> +#define TEGRA_ICC_BPMP_PCIE_8                18
>> +#define TEGRA_ICC_BPMP_PCIE_9                19
>> +#define TEGRA_ICC_BPMP_PCIE_10               20
>> +#define TEGRA_ICC_BPMP_DLA_0         21
>> +#define TEGRA_ICC_BPMP_DLA_1         22
>> +#define TEGRA_ICC_BPMP_SDMMC_1               23
>> +#define TEGRA_ICC_BPMP_SDMMC_2               24
>> +#define TEGRA_ICC_BPMP_SDMMC_3               25
>> +#define TEGRA_ICC_BPMP_SDMMC_4               26
>> +#define TEGRA_ICC_BPMP_NVDEC         27
>> +#define TEGRA_ICC_BPMP_NVENC         28
>> +#define TEGRA_ICC_BPMP_NVJPG_0               29
>> +#define TEGRA_ICC_BPMP_NVJPG_1               30
>> +#define TEGRA_ICC_BPMP_OFAA          31
>> +#define TEGRA_ICC_BPMP_XUSB_HOST     32
>> +#define TEGRA_ICC_BPMP_XUSB_DEV              33
>> +#define TEGRA_ICC_BPMP_TSEC          34
>> +#define TEGRA_ICC_BPMP_VIC           35
>> +#define TEGRA_ICC_BPMP_APE           36
>> +#define TEGRA_ICC_BPMP_APEDMA                37
>> +#define TEGRA_ICC_BPMP_SE            38
>> +#define TEGRA_ICC_BPMP_ISP           39
>> +#define TEGRA_ICC_BPMP_HDA           40
>> +#define TEGRA_ICC_BPMP_VIFAL         41
>> +#define TEGRA_ICC_BPMP_VI2FAL                42
>> +#define TEGRA_ICC_BPMP_VI2           43
>> +#define TEGRA_ICC_BPMP_RCE           44
>> +#define TEGRA_ICC_BPMP_PVA           45
>> +
>> +#endif /* MEMORY_TEGRA_ICC_H */
> 
> Best regards,
> Krzysztof
> 

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [Patch v1 04/10] memory: tegra: add support for software mc clients in Tegra234
  2022-12-22 11:36   ` Krzysztof Kozlowski
@ 2023-03-06 19:41     ` Sumit Gupta
  0 siblings, 0 replies; 53+ messages in thread
From: Sumit Gupta @ 2023-03-06 19:41 UTC (permalink / raw)
  To: Krzysztof Kozlowski, treding, dmitry.osipenko, viresh.kumar,
	rafael, jonathanh, robh+dt, linux-kernel, linux-tegra, linux-pm,
	devicetree
  Cc: sanjayc, ksitaraman, ishah, bbasu, Sumit Gupta



On 22/12/22 17:06, Krzysztof Kozlowski wrote:
> External email: Use caution opening links or attachments
> 
> 
> On 20/12/2022 17:02, Sumit Gupta wrote:
>> Adding support for dummy memory controller clients for use by
>> software.
> 
> Use imperative mode (applies to other commits as well)
> https://elixir.bootlin.com/linux/v5.17.1/source/Documentation/process/submitting-patches.rst#L95
> 
Thank you for the suggestion.
I referred to it and made the change in v2.

>> ---
>>   drivers/memory/tegra/mc.c       | 65 +++++++++++++++++++++++----------
>>   drivers/memory/tegra/tegra234.c | 21 +++++++++++
>>   include/soc/tegra/mc.h          |  3 ++
>>   include/soc/tegra/tegra-icc.h   |  7 ++++
>>   4 files changed, 76 insertions(+), 20 deletions(-)
>>
>> diff --git a/drivers/memory/tegra/mc.c b/drivers/memory/tegra/mc.c
>> index ff887fb03bce..4ddf9808fe6b 100644
>> --- a/drivers/memory/tegra/mc.c
>> +++ b/drivers/memory/tegra/mc.c
>> @@ -755,6 +755,39 @@ const char *const tegra_mc_error_names[8] = {
>>        [6] = "SMMU translation error",
>>   };
>>
>> +static int tegra_mc_add_icc_node(struct tegra_mc *mc, unsigned int id, const char *name,
>> +                              unsigned int bpmp_id, unsigned int type)
>> +{
>> +     struct tegra_icc_node *tnode;
>> +     struct icc_node *node;
>> +     int err;
>> +
>> +     tnode = kzalloc(sizeof(*tnode), GFP_KERNEL);
>> +     if (!tnode)
>> +             return -ENOMEM;
>> +
>> +     /* create MC client node */
>> +     node = icc_node_create(id);
>> +     if (IS_ERR(node))
>> +             return -EINVAL;
> 
> Why do you return other error? It does not look like you moved the code
> correctly, but with changes. I also do not see how this is related to
> commit msg...
> 
Corrected in v2.

Thanks,
Sumit

>> +
>> +     node->name = name;
>> +     icc_node_add(node, &mc->provider);
>> +
>> +     /* link Memory Client to Memory Controller */
>> +     err = icc_link_create(node, TEGRA_ICC_MC);
>> +     if (err)
>> +             return err;
>> +
>> +     node->data = tnode;
>> +     tnode->node = node;
>> +     tnode->bpmp_id = bpmp_id;
>> +     tnode->type = type;
>> +     tnode->mc = mc;
>> +
>> +     return 0;
>> +}
>> +
>>   /*
>>    * Memory Controller (MC) has few Memory Clients that are issuing memory
>>    * bandwidth allocation requests to the MC interconnect provider. The MC
>> @@ -780,7 +813,6 @@ const char *const tegra_mc_error_names[8] = {
>>    */
>>   static int tegra_mc_interconnect_setup(struct tegra_mc *mc)
>>   {
>> -     struct tegra_icc_node *tnode;
>>        struct icc_node *node;
>>        unsigned int i;
>>        int err;
>> @@ -820,30 +852,23 @@ static int tegra_mc_interconnect_setup(struct tegra_mc *mc)
>>                goto remove_nodes;
>>
>>        for (i = 0; i < mc->soc->num_clients; i++) {
>> -             tnode = kzalloc(sizeof(*tnode), GFP_KERNEL);
>> -             if (!tnode)
>> -                     return -ENOMEM;
>> -
>> -             /* create MC client node */
>> -             node = icc_node_create(mc->soc->clients[i].id);
>> -             if (IS_ERR(node)) {
>> -                     err = PTR_ERR(node);
>> +             err = tegra_mc_add_icc_node(mc, mc->soc->clients[i].id,
>> +                                         mc->soc->clients[i].name,
>> +                                         mc->soc->clients[i].bpmp_id,
>> +                                         mc->soc->clients[i].type);
>> +             if (err)
>>                        goto remove_nodes;
>> -             }
>>
>> -             node->name = mc->soc->clients[i].name;
>> -             icc_node_add(node, &mc->provider);
>> +     }
>> +
> 
> Best regards,
> Krzysztof
> 
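
As additional context for the comment block quoted above (memory clients
issuing bandwidth allocation requests to the MC interconnect provider), here
is a hedged, consumer-side sketch of how a client driver typically requests
bandwidth through the generic interconnect framework. The "dma-mem" path
name and the bandwidth numbers are illustrative only and are not taken from
this series:

#include <linux/interconnect.h>

static int example_request_dram_bw(struct device *dev)
{
	struct icc_path *path;
	int err;

	/* Look up the client's path from the DT "interconnects" property */
	path = devm_of_icc_get(dev, "dma-mem");
	if (IS_ERR(path))
		return PTR_ERR(path);

	/* Request 1 GBps average and 2 GBps peak bandwidth on the path */
	err = icc_set_bw(path, MBps_to_icc(1000), MBps_to_icc(2000));
	if (err)
		return err;

	return 0;
}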

^ permalink raw reply	[flat|nested] 53+ messages in thread

end of thread, other threads:[~2023-03-06 19:41 UTC | newest]

Thread overview: 53+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-12-20 16:02 [Patch v1 00/10] Tegra234 Memory interconnect support Sumit Gupta
2022-12-20 16:02 ` [Patch v1 01/10] memory: tegra: add interconnect support for DRAM scaling in Tegra234 Sumit Gupta
2022-12-20 18:05   ` Dmitry Osipenko
2022-12-21  7:53     ` Sumit Gupta
2022-12-20 18:06   ` Dmitry Osipenko
2022-12-21  7:54     ` Sumit Gupta
2022-12-20 18:07   ` Dmitry Osipenko
2022-12-21  8:05     ` Sumit Gupta
2022-12-21 16:44       ` Dmitry Osipenko
2023-01-17 13:03         ` Sumit Gupta
2022-12-20 18:10   ` Dmitry Osipenko
2022-12-21  9:35     ` Sumit Gupta
2022-12-21 16:43       ` Dmitry Osipenko
2023-01-13 12:15         ` Sumit Gupta
2022-12-21  0:55   ` Dmitry Osipenko
2022-12-21  8:07     ` Sumit Gupta
2022-12-21 16:54   ` Dmitry Osipenko
2023-01-13 12:25     ` Sumit Gupta
2022-12-21 19:17   ` Dmitry Osipenko
2022-12-21 19:20   ` Dmitry Osipenko
2022-12-22 15:56     ` Dmitry Osipenko
2023-01-13 12:35       ` Sumit Gupta
2023-01-13 12:40     ` Sumit Gupta
2022-12-21 19:43   ` Dmitry Osipenko
2022-12-22 11:32   ` Krzysztof Kozlowski
2023-03-06 19:28     ` Sumit Gupta
2022-12-20 16:02 ` [Patch v1 02/10] memory: tegra: adding iso mc clients for Tegra234 Sumit Gupta
2022-12-20 16:02 ` [Patch v1 03/10] memory: tegra: add pcie " Sumit Gupta
2022-12-22 11:33   ` Krzysztof Kozlowski
2023-01-13 14:51     ` Sumit Gupta
2022-12-20 16:02 ` [Patch v1 04/10] memory: tegra: add support for software mc clients in Tegra234 Sumit Gupta
2022-12-22 11:36   ` Krzysztof Kozlowski
2023-03-06 19:41     ` Sumit Gupta
2022-12-20 16:02 ` [Patch v1 05/10] dt-bindings: tegra: add icc ids for dummy MC clients Sumit Gupta
2022-12-22 11:29   ` Krzysztof Kozlowski
2023-01-13 14:44     ` Sumit Gupta
2023-01-13 17:11   ` Krzysztof Kozlowski
2022-12-20 16:02 ` [Patch v1 06/10] arm64: tegra: Add cpu OPP tables and interconnects property Sumit Gupta
2023-01-16 16:29   ` Thierry Reding
2022-12-20 16:02 ` [Patch v1 07/10] cpufreq: Add Tegra234 to cpufreq-dt-platdev blocklist Sumit Gupta
2022-12-21  5:01   ` Viresh Kumar
2022-12-20 16:02 ` [Patch v1 08/10] cpufreq: tegra194: add OPP support and set bandwidth Sumit Gupta
2022-12-22 15:46   ` Dmitry Osipenko
2023-01-13 13:50     ` Sumit Gupta
2023-01-16 12:16       ` Dmitry Osipenko
2023-01-19 10:26         ` Thierry Reding
2023-01-19 13:01           ` Dmitry Osipenko
2023-02-06 13:31             ` Sumit Gupta
2022-12-20 16:02 ` [Patch v1 09/10] memory: tegra: get number of enabled mc channels Sumit Gupta
2022-12-22 11:37   ` Krzysztof Kozlowski
2023-01-13 15:04     ` Sumit Gupta
2023-01-16 16:30       ` Thierry Reding
2022-12-20 16:02 ` [Patch v1 10/10] memory: tegra: make cluster bw request a multiple of mc_channels Sumit Gupta

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).