From: Sumit Gupta <sumitg@nvidia.com>
To: <treding@nvidia.com>, <krzysztof.kozlowski@linaro.org>,
	<dmitry.osipenko@collabora.com>, <viresh.kumar@linaro.org>,
	<rafael@kernel.org>, <jonathanh@nvidia.com>, <robh+dt@kernel.org>,
	<lpieralisi@kernel.org>
Cc: <linux-kernel@vger.kernel.org>, <linux-tegra@vger.kernel.org>,
	<linux-pm@vger.kernel.org>, <devicetree@vger.kernel.org>,
	<linux-pci@vger.kernel.org>, <mmaddireddy@nvidia.com>,
	<kw@linux.com>, <bhelgaas@google.com>, <vidyas@nvidia.com>,
	<sanjayc@nvidia.com>, <ksitaraman@nvidia.com>, <ishah@nvidia.com>,
	<bbasu@nvidia.com>, <sumitg@nvidia.com>
Subject: [Patch v3 11/11] memory: tegra186-emc: fix interconnect registration race
Date: Mon, 20 Mar 2023 23:54:41 +0530	[thread overview]
Message-ID: <20230320182441.11904-12-sumitg@nvidia.com> (raw)
In-Reply-To: <20230320182441.11904-1-sumitg@nvidia.com>

The current interconnect provider registration interface is inherently
racy because nodes are not added until after the provider itself has
been registered. This can specifically cause racing DT lookups to fail.

Switch to using the new API where the provider is not registered until
after it has been fully initialised.
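
For illustration, below is a minimal sketch of the ordering that the new
API makes possible. It is not the Tegra driver itself; my_icc_init(),
MY_MASTER_ID and MY_SLAVE_ID are made-up names used only to show the
init -> populate -> register sequence:

	/*
	 * Illustrative sketch only. The provider is initialised first,
	 * its nodes are created and added while it is still private,
	 * and only then is it registered, so a concurrent of_icc_get()
	 * can never observe a provider with missing nodes.
	 */
	#include <linux/device.h>
	#include <linux/err.h>
	#include <linux/interconnect-provider.h>

	static int my_icc_init(struct device *dev, struct icc_provider *provider)
	{
		struct icc_node *node;
		int err;

		provider->dev = dev;

		/* Initialise internal state; the provider is not visible yet. */
		icc_provider_init(provider);

		/* Create the master node and link it to the slave ID. */
		node = icc_node_create(MY_MASTER_ID);
		if (IS_ERR(node))
			return PTR_ERR(node);
		icc_node_add(node, provider);

		err = icc_link_create(node, MY_SLAVE_ID);
		if (err)
			goto remove_nodes;

		/* Create the slave node and add it to the same provider. */
		node = icc_node_create(MY_SLAVE_ID);
		if (IS_ERR(node)) {
			err = PTR_ERR(node);
			goto remove_nodes;
		}
		icc_node_add(node, provider);

		/* Only now expose the fully populated provider to lookups. */
		err = icc_provider_register(provider);
		if (err)
			goto remove_nodes;

		return 0;

	remove_nodes:
		icc_nodes_remove(provider);
		return err;
	}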

Fixes: ("memory: tegra: add interconnect support for DRAM scaling in Tegra234")
Signed-off-by: Sumit Gupta <sumitg@nvidia.com>
---
 drivers/memory/tegra/tegra186-emc.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/memory/tegra/tegra186-emc.c b/drivers/memory/tegra/tegra186-emc.c
index 0d68a20fd376..a5a2bce01db4 100644
--- a/drivers/memory/tegra/tegra186-emc.c
+++ b/drivers/memory/tegra/tegra186-emc.c
@@ -207,15 +207,13 @@ static int tegra_emc_interconnect_init(struct tegra186_emc *emc)
 	emc->provider.xlate = tegra_emc_of_icc_xlate;
 	emc->provider.get_bw = tegra_emc_icc_get_init_bw;
 
-	err = icc_provider_add(&emc->provider);
-	if (err)
-		goto err_msg;
+	icc_provider_init(&emc->provider);
 
 	/* create External Memory Controller node */
 	node = icc_node_create(TEGRA_ICC_EMC);
 	if (IS_ERR(node)) {
 		err = PTR_ERR(node);
-		goto del_provider;
+		goto err_msg;
 	}
 
 	node->name = "External Memory Controller";
@@ -236,11 +234,13 @@ static int tegra_emc_interconnect_init(struct tegra186_emc *emc)
 	node->name = "External Memory (DRAM)";
 	icc_node_add(node, &emc->provider);
 
+	err = icc_provider_register(&emc->provider);
+	if (err)
+		goto remove_nodes;
+
 	return 0;
 remove_nodes:
 	icc_nodes_remove(&emc->provider);
-del_provider:
-	icc_provider_del(&emc->provider);
 err_msg:
 	dev_err(emc->dev, "failed to initialize ICC: %d\n", err);
 
-- 
2.17.1



Thread overview: 25+ messages
2023-03-20 18:24 [Patch v3 00/11] Tegra234 Memory interconnect support Sumit Gupta
2023-03-20 18:24 ` [Patch v3 01/11] firmware: tegra: add function to get BPMP data Sumit Gupta
2023-03-23 10:08   ` Thierry Reding
2023-03-23 12:59     ` Sumit Gupta
2023-03-20 18:24 ` [Patch v3 02/11] memory: tegra: add interconnect support for DRAM scaling in Tegra234 Sumit Gupta
2023-03-23 10:14   ` Thierry Reding
2023-03-20 18:24 ` [Patch v3 03/11] memory: tegra: add mc clients for Tegra234 Sumit Gupta
2023-03-20 18:24 ` [Patch v3 04/11] memory: tegra: add software mc clients in Tegra234 Sumit Gupta
2023-03-20 18:24 ` [Patch v3 05/11] dt-bindings: tegra: add icc ids for dummy MC clients Sumit Gupta
2023-03-20 18:24 ` [Patch v3 06/11] arm64: tegra: Add cpu OPP tables and interconnects property Sumit Gupta
2023-03-20 18:24 ` [Patch v3 07/11] cpufreq: tegra194: add OPP support and set bandwidth Sumit Gupta
2023-03-21  7:36   ` kernel test robot
2023-03-21 11:49     ` Sumit Gupta
2023-03-22 17:51       ` Krzysztof Kozlowski
2023-03-23 12:50         ` Sumit Gupta
2023-03-20 18:24 ` [Patch v3 08/11] memory: tegra: make cpu cluster bw request a multiple of mc channels Sumit Gupta
2023-03-20 18:24 ` [Patch v3 09/11] PCI: tegra194: add interconnect support in Tegra234 Sumit Gupta
2023-03-20 18:24 ` [Patch v3 10/11] memory: tegra: handle no BWMGR MRQ support in BPMP Sumit Gupta
2023-03-22 17:50   ` Krzysztof Kozlowski
2023-03-23  9:55     ` Thierry Reding
2023-03-23  9:58       ` Krzysztof Kozlowski
2023-03-23 10:02         ` Thierry Reding
2023-03-23 12:46           ` Sumit Gupta
2023-03-20 18:24 ` Sumit Gupta [this message]
2023-03-22 17:50   ` [Patch v3 11/11] memory: tegra186-emc: fix interconnect registration race Krzysztof Kozlowski
