linux-kernel.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
* [PATCH V13 00/12] PCI: tegra: Add Tegra194 PCIe support
@ 2019-07-10  6:22 Vidya Sagar
  2019-07-10  6:22 ` [PATCH V13 01/12] PCI: Add #defines for some of PCIe spec r4.0 features Vidya Sagar
                   ` (11 more replies)
  0 siblings, 12 replies; 34+ messages in thread
From: Vidya Sagar @ 2019-07-10  6:22 UTC (permalink / raw)
  To: lorenzo.pieralisi, bhelgaas, robh+dt, mark.rutland,
	thierry.reding, jonathanh, kishon, catalin.marinas, will.deacon,
	jingoohan1, gustavo.pimentel
  Cc: digetx, mperttunen, linux-pci, devicetree, linux-tegra,
	linux-kernel, linux-arm-kernel, kthota, mmaddireddy, vidyas,
	sagar.tv

Tegra194 has six PCIe controllers based on Synopsys DesignWare core.
There are two Universal PHY (UPHY) blocks with each supporting 12(HSIO:
Hisg Speed IO) and 8(NVHS: NVIDIA High Speed) lanes respectively.
Controllers:0~4 use UPHY lanes from HSIO brick whereas Controller:5 uses
UPHY lanes from NVHS brick. Lane mapping in HSIO UPHY brick to each PCIe
controller (0~4) is controlled in XBAR module by BPMP-FW. Since PCIe
core has PIPE interface, a glue module called PIPE-to-UPHY (P2U) is used
to connect each UPHY lane (applicable to both HSIO and NVHS UPHY bricks)
to PCIe controller
This patch series
- Adds support for P2U PHY driver
- Adds support for PCIe host controller
- Adds device tree nodes each PCIe controllers
- Enables nodes applicable to p2972-0000 platform
- Adds helper APIs in Designware core driver to get capability regs offset
- Adds defines for new feature registers of PCIe spec revision 4
- Makes changes in DesignWare core driver to get Tegra194 PCIe working

Testing done on P2972-0000 platform
- Able to get PCIe link up with on-board Marvel eSATA controller
- Able to get PCIe link up with NVMe cards connected to M.2 Key-M slot
- Able to do data transfers with both SATA drives and NVMe cards

Note
- Enabling x8 slot on P2972-0000 platform requires pinmux driver for Tegra194.
  It is being worked on currently and hence Controller:5 (i.e. x8 slot) is
  disabled in this patch series. A future patch series would enable this.
- This series is based on top of the following series
  Jisheng's patches to add support to .remove() in Designware sub-system
  https://patchwork.kernel.org/project/linux-pci/list/?series=98559
  (Update: Jisheng's patches are now accepted and applied for v5.2)
  My patches made on top of Jisheng's patches to export various symbols
  http://patchwork.ozlabs.org/project/linux-pci/list/?series=115671
  (Update: My above patch series is accepted and applied for v5.3)

V13:
* Addressed Bjorn's review comments for adding Gen-4 specific defines to pci_regs.h header file

V12:
* Modified the commit message of patch-3 in this series to address review
  comments from Lorenzo

V11:
* Removed device-tree patches from the series as they are applied to relevant
  Tegra specific trees by Thierry Reding.
* Included older Tegra chips to extend quirk that disables MSI interrupt being
  used for Tegra PCIe root ports.
* Addressed review comments in P2U driver file.

V10:
* Used _relaxed() versions of readl() & writel()

V9:
* Made the drivers dependent on ARCH_TEGRA_194_SOC directly
* Addressed review comments from Dmitry

V8:
* Changed P2U driver file name from pcie-p2u-tegra194.c to phy-tegra194-p2u.c
* Addressed review comments from Thierry and Rob

V7:
* Took care of review comments from Rob
* Added a quirk to disable MSI for root ports
* Removed using pcie_pme_disable_msi() API in host controller driver

V6:
* Removed patch that exports pcie_bus_config symbol
* Took care of review comments from Thierry and Rob

V5:
* Removed redundant APIs in pcie-designware-ep.c file after moving them
  to pcie-designware.c file based on Bjorn's review comments

V4:
* Rebased on top of linux-next top of the tree
* Addressed Gustavo's comments and added his Ack for some of the changes.

V3:
* Addressed review comments from Thierry

V2:
* Addressed review comments from Bjorn, Thierry, Jonathan, Rob & Kishon
* Added more patches in v2 series

Vidya Sagar (12):
  PCI: Add #defines for some of PCIe spec r4.0 features
  PCI: Disable MSI for Tegra root ports
  PCI: dwc: Perform dbi regs write lock towards the end
  PCI: dwc: Move config space capability search API
  PCI: dwc: Add ext config space capability search API
  dt-bindings: PCI: designware: Add binding for CDM register check
  PCI: dwc: Add support to enable CDM register check
  dt-bindings: Add PCIe supports-clkreq property
  dt-bindings: PCI: tegra: Add device tree support for Tegra194
  dt-bindings: PHY: P2U: Add Tegra194 P2U block
  phy: tegra: Add PCIe PIPE2UPHY support
  PCI: tegra: Add Tegra194 PCIe support

 .../bindings/pci/designware-pcie.txt          |    5 +
 .../bindings/pci/nvidia,tegra194-pcie.txt     |  155 ++
 Documentation/devicetree/bindings/pci/pci.txt |    5 +
 .../bindings/phy/phy-tegra194-p2u.txt         |   28 +
 drivers/pci/controller/dwc/Kconfig            |   10 +
 drivers/pci/controller/dwc/Makefile           |    1 +
 .../pci/controller/dwc/pcie-designware-ep.c   |   37 +-
 .../pci/controller/dwc/pcie-designware-host.c |   14 +-
 drivers/pci/controller/dwc/pcie-designware.c  |   87 +
 drivers/pci/controller/dwc/pcie-designware.h  |   12 +
 drivers/pci/controller/dwc/pcie-tegra194.c    | 1632 +++++++++++++++++
 drivers/pci/quirks.c                          |   53 +
 drivers/phy/tegra/Kconfig                     |    7 +
 drivers/phy/tegra/Makefile                    |    1 +
 drivers/phy/tegra/phy-tegra194-p2u.c          |  120 ++
 include/uapi/linux/pci_regs.h                 |   14 +-
 16 files changed, 2139 insertions(+), 42 deletions(-)
 create mode 100644 Documentation/devicetree/bindings/pci/nvidia,tegra194-pcie.txt
 create mode 100644 Documentation/devicetree/bindings/phy/phy-tegra194-p2u.txt
 create mode 100644 drivers/pci/controller/dwc/pcie-tegra194.c
 create mode 100644 drivers/phy/tegra/phy-tegra194-p2u.c

-- 
2.17.1


^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH V13 01/12] PCI: Add #defines for some of PCIe spec r4.0 features
  2019-07-10  6:22 [PATCH V13 00/12] PCI: tegra: Add Tegra194 PCIe support Vidya Sagar
@ 2019-07-10  6:22 ` Vidya Sagar
  2019-07-10 20:14   ` Bjorn Helgaas
  2019-07-10  6:22 ` [PATCH V13 02/12] PCI: Disable MSI for Tegra root ports Vidya Sagar
                   ` (10 subsequent siblings)
  11 siblings, 1 reply; 34+ messages in thread
From: Vidya Sagar @ 2019-07-10  6:22 UTC (permalink / raw)
  To: lorenzo.pieralisi, bhelgaas, robh+dt, mark.rutland,
	thierry.reding, jonathanh, kishon, catalin.marinas, will.deacon,
	jingoohan1, gustavo.pimentel
  Cc: digetx, mperttunen, linux-pci, devicetree, linux-tegra,
	linux-kernel, linux-arm-kernel, kthota, mmaddireddy, vidyas,
	sagar.tv

Add #defines only for the Data Link Feature and Physical Layer 16.0 GT/s
features as defined in PCIe spec r4.0, sec 7.7.4 for Data Link Feature and
sec 7.7.5 for Physical Layer 16.0 GT/s.

Signed-off-by: Vidya Sagar <vidyas@nvidia.com>
Reviewed-by: Thierry Reding <treding@nvidia.com>
---
V13:
* Updated commit message to include references from spec
* Removed unused defines and moved some from pcie-tegra194.c file
* Addressed review comments from Bjorn

V12:
* None

V11:
* None

V10:
* None

V9:
* None

V8:
* None

V7:
* None

V6:
* None

V5:
* None

V4:
* None

V3:
* Updated commit message and description to explicitly mention that defines are
  added only for some of the features and not all.

V2:
* None

 include/uapi/linux/pci_regs.h | 14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/include/uapi/linux/pci_regs.h b/include/uapi/linux/pci_regs.h
index f28e562d7ca8..d28d0319d932 100644
--- a/include/uapi/linux/pci_regs.h
+++ b/include/uapi/linux/pci_regs.h
@@ -713,7 +713,9 @@
 #define PCI_EXT_CAP_ID_DPC	0x1D	/* Downstream Port Containment */
 #define PCI_EXT_CAP_ID_L1SS	0x1E	/* L1 PM Substates */
 #define PCI_EXT_CAP_ID_PTM	0x1F	/* Precision Time Measurement */
-#define PCI_EXT_CAP_ID_MAX	PCI_EXT_CAP_ID_PTM
+#define PCI_EXT_CAP_ID_DLF	0x25	/* Data Link Feature */
+#define PCI_EXT_CAP_ID_PL_16GT	0x26	/* Physical Layer 16.0 GT/s */
+#define PCI_EXT_CAP_ID_MAX	PCI_EXT_CAP_ID_PL_16GT
 
 #define PCI_EXT_CAP_DSN_SIZEOF	12
 #define PCI_EXT_CAP_MCAST_ENDPOINT_SIZEOF 40
@@ -1053,4 +1055,14 @@
 #define  PCI_L1SS_CTL1_LTR_L12_TH_SCALE	0xe0000000  /* LTR_L1.2_THRESHOLD_Scale */
 #define PCI_L1SS_CTL2		0x0c	/* Control 2 Register */
 
+/* Data Link Feature */
+#define PCI_DLF_CAP		0x04	/* Capabilities Register */
+#define  PCI_DLF_EXCHANGE_ENABLE	0x80000000  /* Data Link Feature Exchange Enable */
+
+/* Physical Layer 16.0 GT/s */
+#define PCI_PL_16GT_LE_CTRL	0x20	/* Lane Equalization Control Register */
+#define  PCI_PL_16GT_LE_CTRL_DSP_TX_PRESET_MASK		0x0000000F
+#define  PCI_PL_16GT_LE_CTRL_USP_TX_PRESET_MASK		0x000000F0
+#define  PCI_PL_16GT_LE_CTRL_USP_TX_PRESET_SHIFT	4
+
 #endif /* LINUX_PCI_REGS_H */
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH V13 02/12] PCI: Disable MSI for Tegra root ports
  2019-07-10  6:22 [PATCH V13 00/12] PCI: tegra: Add Tegra194 PCIe support Vidya Sagar
  2019-07-10  6:22 ` [PATCH V13 01/12] PCI: Add #defines for some of PCIe spec r4.0 features Vidya Sagar
@ 2019-07-10  6:22 ` Vidya Sagar
  2019-07-10  6:22 ` [PATCH V13 03/12] PCI: dwc: Perform dbi regs write lock towards the end Vidya Sagar
                   ` (9 subsequent siblings)
  11 siblings, 0 replies; 34+ messages in thread
From: Vidya Sagar @ 2019-07-10  6:22 UTC (permalink / raw)
  To: lorenzo.pieralisi, bhelgaas, robh+dt, mark.rutland,
	thierry.reding, jonathanh, kishon, catalin.marinas, will.deacon,
	jingoohan1, gustavo.pimentel
  Cc: digetx, mperttunen, linux-pci, devicetree, linux-tegra,
	linux-kernel, linux-arm-kernel, kthota, mmaddireddy, vidyas,
	sagar.tv

Tegra PCIe rootports don't generate MSI interrupts for PME and AER events.
Since PCIe spec (Ref: r4.0 sec 7.7.1.2 and 7.7.2.2) doesn't support using
a mix of INTx and MSI/MSI-X, MSI needs to be disabled to avoid root ports
service drivers registering their respective ISRs with MSI interrupt and
to let only INTx be used for all events.

Signed-off-by: Vidya Sagar <vidyas@nvidia.com>
Reviewed-by: Thierry Reding <treding@nvidia.com>
---
V13:
* None

V12:
* None

V11:
* Included older Tegra chips to extend the quirk as this issue is present in
  older Tegra chips as well.

V10:
* None

V9:
* None

V8:
* Changed quirk macro to consider class code as well to avoid this quirk
  getting applied to Tegra194 when it is operating in endpoint mode. Also
  quoted relevant sections from PCIe spec in comments.

V7:
* This is a new patch

 drivers/pci/quirks.c | 53 ++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 53 insertions(+)

diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
index c66c0ca446c4..1d3cac43ecbc 100644
--- a/drivers/pci/quirks.c
+++ b/drivers/pci/quirks.c
@@ -2592,6 +2592,59 @@ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_NVIDIA,
 			PCI_DEVICE_ID_NVIDIA_NVENET_15,
 			nvenet_msi_disable);
 
+/*
+ * PCIe spec r4.0 sec 7.7.1.2 and sec 7.7.2.2 say that if MSI/MSI-X is enabled,
+ * then the device can't use INTx interrupts. Tegra's PCIe root ports don't
+ * generate MSI interrupts for PME and AER events instead only INTx interrupts
+ * are generated. Though Tegra's PCIe root ports can generate MSI interrupts
+ * for other events, since PCIe specificiation doesn't support using a mix of
+ * INTx and MSI/MSI-X, it is required to disable MSI interrupts to avoid port
+ * service drivers registering their respective ISRs for MSIs.
+ */
+static void pci_quirk_nvidia_tegra_disable_rp_msi(struct pci_dev *dev)
+{
+	dev->no_msi = 1;
+}
+DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_NVIDIA, 0x1ad0,
+			      PCI_CLASS_BRIDGE_PCI, 8,
+			      pci_quirk_nvidia_tegra_disable_rp_msi);
+DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_NVIDIA, 0x1ad1,
+			      PCI_CLASS_BRIDGE_PCI, 8,
+			      pci_quirk_nvidia_tegra_disable_rp_msi);
+DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_NVIDIA, 0x1ad2,
+			      PCI_CLASS_BRIDGE_PCI, 8,
+			      pci_quirk_nvidia_tegra_disable_rp_msi);
+DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_NVIDIA, 0x0bf0,
+			      PCI_CLASS_BRIDGE_PCI, 8,
+			      pci_quirk_nvidia_tegra_disable_rp_msi);
+DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_NVIDIA, 0x0bf1,
+			      PCI_CLASS_BRIDGE_PCI, 8,
+			      pci_quirk_nvidia_tegra_disable_rp_msi);
+DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_NVIDIA, 0x0e1c,
+			      PCI_CLASS_BRIDGE_PCI, 8,
+			      pci_quirk_nvidia_tegra_disable_rp_msi);
+DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_NVIDIA, 0x0e1d,
+			      PCI_CLASS_BRIDGE_PCI, 8,
+			      pci_quirk_nvidia_tegra_disable_rp_msi);
+DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_NVIDIA, 0x0e12,
+			      PCI_CLASS_BRIDGE_PCI, 8,
+			      pci_quirk_nvidia_tegra_disable_rp_msi);
+DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_NVIDIA, 0x0e13,
+			      PCI_CLASS_BRIDGE_PCI, 8,
+			      pci_quirk_nvidia_tegra_disable_rp_msi);
+DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_NVIDIA, 0x0fae,
+			      PCI_CLASS_BRIDGE_PCI, 8,
+			      pci_quirk_nvidia_tegra_disable_rp_msi);
+DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_NVIDIA, 0x0faf,
+			      PCI_CLASS_BRIDGE_PCI, 8,
+			      pci_quirk_nvidia_tegra_disable_rp_msi);
+DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_NVIDIA, 0x10e5,
+			      PCI_CLASS_BRIDGE_PCI, 8,
+			      pci_quirk_nvidia_tegra_disable_rp_msi);
+DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_NVIDIA, 0x10e6,
+			      PCI_CLASS_BRIDGE_PCI, 8,
+			      pci_quirk_nvidia_tegra_disable_rp_msi);
+
 /*
  * Some versions of the MCP55 bridge from Nvidia have a legacy IRQ routing
  * config register.  This register controls the routing of legacy
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH V13 03/12] PCI: dwc: Perform dbi regs write lock towards the end
  2019-07-10  6:22 [PATCH V13 00/12] PCI: tegra: Add Tegra194 PCIe support Vidya Sagar
  2019-07-10  6:22 ` [PATCH V13 01/12] PCI: Add #defines for some of PCIe spec r4.0 features Vidya Sagar
  2019-07-10  6:22 ` [PATCH V13 02/12] PCI: Disable MSI for Tegra root ports Vidya Sagar
@ 2019-07-10  6:22 ` Vidya Sagar
  2019-07-10  6:22 ` [PATCH V13 04/12] PCI: dwc: Move config space capability search API Vidya Sagar
                   ` (8 subsequent siblings)
  11 siblings, 0 replies; 34+ messages in thread
From: Vidya Sagar @ 2019-07-10  6:22 UTC (permalink / raw)
  To: lorenzo.pieralisi, bhelgaas, robh+dt, mark.rutland,
	thierry.reding, jonathanh, kishon, catalin.marinas, will.deacon,
	jingoohan1, gustavo.pimentel
  Cc: digetx, mperttunen, linux-pci, devicetree, linux-tegra,
	linux-kernel, linux-arm-kernel, kthota, mmaddireddy, vidyas,
	sagar.tv

Some of DesignWare core's DBI registers (a.k.a configuration space
registers) are write-protected with a lock without enabling which they are
read-only by default. These write-protected registers are implementation
specific. Tegra194's BAR-0 register which is at offset 0x10 in the
configuration space is an example. Current implementation in
dw_pcie_setup_rc() API attempts to unlock those write-protected registers
whenever they are updated and lock them back again for writing. This patch
attempts to unlock all such write-protected registers for writing in the
beginning of the API once and lock them back again towards the end to avoid
bloating the API with multiple unlock/lock sequences for all those
write-protected registers.

Signed-off-by: Vidya Sagar <vidyas@nvidia.com>
Reviewed-by: Thierry Reding <treding@nvidia.com>
Acked-by: Jingoo Han <jingoohan1@gmail.com>
---
V13:
* None

V12:
* Modified commit message to make it explicit that write-protected registers are
  implementation specific.

V11:
* None

V10:
* None

V9:
* None

V8:
* None

V7:
* None

V6:
* Moved write enable to the beginning of the API and write disable to the end

V5:
* None

V4:
* None

V3:
* None

V2:
* None

 drivers/pci/controller/dwc/pcie-designware-host.c | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c
index f93252d0da5b..d3156446ff27 100644
--- a/drivers/pci/controller/dwc/pcie-designware-host.c
+++ b/drivers/pci/controller/dwc/pcie-designware-host.c
@@ -628,6 +628,12 @@ void dw_pcie_setup_rc(struct pcie_port *pp)
 	u32 val, ctrl, num_ctrls;
 	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
 
+	/*
+	 * Enable DBI read-only registers for writing/updating configuration.
+	 * Write permission gets disabled towards the end of this function.
+	 */
+	dw_pcie_dbi_ro_wr_en(pci);
+
 	dw_pcie_setup(pci);
 
 	if (!pp->ops->msi_host_init) {
@@ -650,12 +656,10 @@ void dw_pcie_setup_rc(struct pcie_port *pp)
 	dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_1, 0x00000000);
 
 	/* Setup interrupt pins */
-	dw_pcie_dbi_ro_wr_en(pci);
 	val = dw_pcie_readl_dbi(pci, PCI_INTERRUPT_LINE);
 	val &= 0xffff00ff;
 	val |= 0x00000100;
 	dw_pcie_writel_dbi(pci, PCI_INTERRUPT_LINE, val);
-	dw_pcie_dbi_ro_wr_dis(pci);
 
 	/* Setup bus numbers */
 	val = dw_pcie_readl_dbi(pci, PCI_PRIMARY_BUS);
@@ -687,15 +691,13 @@ void dw_pcie_setup_rc(struct pcie_port *pp)
 
 	dw_pcie_wr_own_conf(pp, PCI_BASE_ADDRESS_0, 4, 0);
 
-	/* Enable write permission for the DBI read-only register */
-	dw_pcie_dbi_ro_wr_en(pci);
 	/* Program correct class for RC */
 	dw_pcie_wr_own_conf(pp, PCI_CLASS_DEVICE, 2, PCI_CLASS_BRIDGE_PCI);
-	/* Better disable write permission right after the update */
-	dw_pcie_dbi_ro_wr_dis(pci);
 
 	dw_pcie_rd_own_conf(pp, PCIE_LINK_WIDTH_SPEED_CONTROL, 4, &val);
 	val |= PORT_LOGIC_SPEED_CHANGE;
 	dw_pcie_wr_own_conf(pp, PCIE_LINK_WIDTH_SPEED_CONTROL, 4, val);
+
+	dw_pcie_dbi_ro_wr_dis(pci);
 }
 EXPORT_SYMBOL_GPL(dw_pcie_setup_rc);
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH V13 04/12] PCI: dwc: Move config space capability search API
  2019-07-10  6:22 [PATCH V13 00/12] PCI: tegra: Add Tegra194 PCIe support Vidya Sagar
                   ` (2 preceding siblings ...)
  2019-07-10  6:22 ` [PATCH V13 03/12] PCI: dwc: Perform dbi regs write lock towards the end Vidya Sagar
@ 2019-07-10  6:22 ` Vidya Sagar
  2019-07-10  6:22 ` [PATCH V13 05/12] PCI: dwc: Add ext " Vidya Sagar
                   ` (7 subsequent siblings)
  11 siblings, 0 replies; 34+ messages in thread
From: Vidya Sagar @ 2019-07-10  6:22 UTC (permalink / raw)
  To: lorenzo.pieralisi, bhelgaas, robh+dt, mark.rutland,
	thierry.reding, jonathanh, kishon, catalin.marinas, will.deacon,
	jingoohan1, gustavo.pimentel
  Cc: digetx, mperttunen, linux-pci, devicetree, linux-tegra,
	linux-kernel, linux-arm-kernel, kthota, mmaddireddy, vidyas,
	sagar.tv

Move PCIe config space capability search API to common DesignWare file
as this can be used by both host and ep mode codes.

Signed-off-by: Vidya Sagar <vidyas@nvidia.com>
Acked-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com>
Reviewed-by: Thierry Reding <treding@nvidia.com>
---
V13:
* None

V12:
* None

V11:
* None

V10:
* None

V9:
* None

V8:
* Changed comment to explicitly state their mere resemblance to standard APIs
  but not their operation and place of use.

V7:
* Exported dw_pcie_find_capability() API

V6:
* None

V5:
* Removed redundant APIs in pcie-designware-ep.c file after moving them
  to pcie-designware.c file based on Bjorn's comments.

V4:
* Rebased to linux-next top of the tree

V3:
* None

V2:
* Removed dw_pcie_find_next_ext_capability() API from here and made a
  separate patch for that

 .../pci/controller/dwc/pcie-designware-ep.c   | 37 +-----------------
 drivers/pci/controller/dwc/pcie-designware.c  | 39 +++++++++++++++++++
 drivers/pci/controller/dwc/pcie-designware.h  |  2 +
 3 files changed, 43 insertions(+), 35 deletions(-)

diff --git a/drivers/pci/controller/dwc/pcie-designware-ep.c b/drivers/pci/controller/dwc/pcie-designware-ep.c
index 2bf5a35c0570..65f479250087 100644
--- a/drivers/pci/controller/dwc/pcie-designware-ep.c
+++ b/drivers/pci/controller/dwc/pcie-designware-ep.c
@@ -40,39 +40,6 @@ void dw_pcie_ep_reset_bar(struct dw_pcie *pci, enum pci_barno bar)
 	__dw_pcie_ep_reset_bar(pci, bar, 0);
 }
 
-static u8 __dw_pcie_ep_find_next_cap(struct dw_pcie *pci, u8 cap_ptr,
-			      u8 cap)
-{
-	u8 cap_id, next_cap_ptr;
-	u16 reg;
-
-	if (!cap_ptr)
-		return 0;
-
-	reg = dw_pcie_readw_dbi(pci, cap_ptr);
-	cap_id = (reg & 0x00ff);
-
-	if (cap_id > PCI_CAP_ID_MAX)
-		return 0;
-
-	if (cap_id == cap)
-		return cap_ptr;
-
-	next_cap_ptr = (reg & 0xff00) >> 8;
-	return __dw_pcie_ep_find_next_cap(pci, next_cap_ptr, cap);
-}
-
-static u8 dw_pcie_ep_find_capability(struct dw_pcie *pci, u8 cap)
-{
-	u8 next_cap_ptr;
-	u16 reg;
-
-	reg = dw_pcie_readw_dbi(pci, PCI_CAPABILITY_LIST);
-	next_cap_ptr = (reg & 0x00ff);
-
-	return __dw_pcie_ep_find_next_cap(pci, next_cap_ptr, cap);
-}
-
 static int dw_pcie_ep_write_header(struct pci_epc *epc, u8 func_no,
 				   struct pci_epf_header *hdr)
 {
@@ -612,9 +579,9 @@ int dw_pcie_ep_init(struct dw_pcie_ep *ep)
 		dev_err(dev, "Failed to reserve memory for MSI/MSI-X\n");
 		return -ENOMEM;
 	}
-	ep->msi_cap = dw_pcie_ep_find_capability(pci, PCI_CAP_ID_MSI);
+	ep->msi_cap = dw_pcie_find_capability(pci, PCI_CAP_ID_MSI);
 
-	ep->msix_cap = dw_pcie_ep_find_capability(pci, PCI_CAP_ID_MSIX);
+	ep->msix_cap = dw_pcie_find_capability(pci, PCI_CAP_ID_MSIX);
 
 	offset = dw_pcie_ep_find_ext_capability(pci, PCI_EXT_CAP_ID_REBAR);
 	if (offset) {
diff --git a/drivers/pci/controller/dwc/pcie-designware.c b/drivers/pci/controller/dwc/pcie-designware.c
index 7d25102c304c..7818b4febb08 100644
--- a/drivers/pci/controller/dwc/pcie-designware.c
+++ b/drivers/pci/controller/dwc/pcie-designware.c
@@ -14,6 +14,45 @@
 
 #include "pcie-designware.h"
 
+/*
+ * These interfaces resemble the pci_find_*capability() interfaces, but these
+ * are for configuring host controllers, which are bridges *to* PCI devices but
+ * are not PCI devices themselves.
+ */
+static u8 __dw_pcie_find_next_cap(struct dw_pcie *pci, u8 cap_ptr,
+				  u8 cap)
+{
+	u8 cap_id, next_cap_ptr;
+	u16 reg;
+
+	if (!cap_ptr)
+		return 0;
+
+	reg = dw_pcie_readw_dbi(pci, cap_ptr);
+	cap_id = (reg & 0x00ff);
+
+	if (cap_id > PCI_CAP_ID_MAX)
+		return 0;
+
+	if (cap_id == cap)
+		return cap_ptr;
+
+	next_cap_ptr = (reg & 0xff00) >> 8;
+	return __dw_pcie_find_next_cap(pci, next_cap_ptr, cap);
+}
+
+u8 dw_pcie_find_capability(struct dw_pcie *pci, u8 cap)
+{
+	u8 next_cap_ptr;
+	u16 reg;
+
+	reg = dw_pcie_readw_dbi(pci, PCI_CAPABILITY_LIST);
+	next_cap_ptr = (reg & 0x00ff);
+
+	return __dw_pcie_find_next_cap(pci, next_cap_ptr, cap);
+}
+EXPORT_SYMBOL_GPL(dw_pcie_find_capability);
+
 int dw_pcie_read(void __iomem *addr, int size, u32 *val)
 {
 	if (!IS_ALIGNED((uintptr_t)addr, size)) {
diff --git a/drivers/pci/controller/dwc/pcie-designware.h b/drivers/pci/controller/dwc/pcie-designware.h
index ffed084a0b4f..d8c66a6827dc 100644
--- a/drivers/pci/controller/dwc/pcie-designware.h
+++ b/drivers/pci/controller/dwc/pcie-designware.h
@@ -251,6 +251,8 @@ struct dw_pcie {
 #define to_dw_pcie_from_ep(endpoint)   \
 		container_of((endpoint), struct dw_pcie, ep)
 
+u8 dw_pcie_find_capability(struct dw_pcie *pci, u8 cap);
+
 int dw_pcie_read(void __iomem *addr, int size, u32 *val);
 int dw_pcie_write(void __iomem *addr, int size, u32 val);
 
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH V13 05/12] PCI: dwc: Add ext config space capability search API
  2019-07-10  6:22 [PATCH V13 00/12] PCI: tegra: Add Tegra194 PCIe support Vidya Sagar
                   ` (3 preceding siblings ...)
  2019-07-10  6:22 ` [PATCH V13 04/12] PCI: dwc: Move config space capability search API Vidya Sagar
@ 2019-07-10  6:22 ` Vidya Sagar
  2019-07-10 10:37   ` Lorenzo Pieralisi
  2019-07-10  6:22 ` [PATCH V13 06/12] dt-bindings: PCI: designware: Add binding for CDM register check Vidya Sagar
                   ` (6 subsequent siblings)
  11 siblings, 1 reply; 34+ messages in thread
From: Vidya Sagar @ 2019-07-10  6:22 UTC (permalink / raw)
  To: lorenzo.pieralisi, bhelgaas, robh+dt, mark.rutland,
	thierry.reding, jonathanh, kishon, catalin.marinas, will.deacon,
	jingoohan1, gustavo.pimentel
  Cc: digetx, mperttunen, linux-pci, devicetree, linux-tegra,
	linux-kernel, linux-arm-kernel, kthota, mmaddireddy, vidyas,
	sagar.tv

Add extended configuration space capability search API using struct dw_pcie *
pointer

Signed-off-by: Vidya Sagar <vidyas@nvidia.com>
Acked-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com>
Acked-by: Thierry Reding <treding@nvidia.com>
---
V13:
* None

V12:
* None

V11:
* None

V10:
* None

V9:
* Added Acked-by from Thierry

V8:
* Changed data types of return and arguments to be inline with data being returned
  and passed.

V7:
* None

V6:
* None

V5:
* None

V4:
* None

V3:
* None

V2:
* This is a new patch in v2 series

 drivers/pci/controller/dwc/pcie-designware.c | 41 ++++++++++++++++++++
 drivers/pci/controller/dwc/pcie-designware.h |  1 +
 2 files changed, 42 insertions(+)

diff --git a/drivers/pci/controller/dwc/pcie-designware.c b/drivers/pci/controller/dwc/pcie-designware.c
index 7818b4febb08..181449e342f1 100644
--- a/drivers/pci/controller/dwc/pcie-designware.c
+++ b/drivers/pci/controller/dwc/pcie-designware.c
@@ -53,6 +53,47 @@ u8 dw_pcie_find_capability(struct dw_pcie *pci, u8 cap)
 }
 EXPORT_SYMBOL_GPL(dw_pcie_find_capability);
 
+static u16 dw_pcie_find_next_ext_capability(struct dw_pcie *pci, u16 start,
+					    u8 cap)
+{
+	u32 header;
+	int ttl;
+	int pos = PCI_CFG_SPACE_SIZE;
+
+	/* minimum 8 bytes per capability */
+	ttl = (PCI_CFG_SPACE_EXP_SIZE - PCI_CFG_SPACE_SIZE) / 8;
+
+	if (start)
+		pos = start;
+
+	header = dw_pcie_readl_dbi(pci, pos);
+	/*
+	 * If we have no capabilities, this is indicated by cap ID,
+	 * cap version and next pointer all being 0.
+	 */
+	if (header == 0)
+		return 0;
+
+	while (ttl-- > 0) {
+		if (PCI_EXT_CAP_ID(header) == cap && pos != start)
+			return pos;
+
+		pos = PCI_EXT_CAP_NEXT(header);
+		if (pos < PCI_CFG_SPACE_SIZE)
+			break;
+
+		header = dw_pcie_readl_dbi(pci, pos);
+	}
+
+	return 0;
+}
+
+u16 dw_pcie_find_ext_capability(struct dw_pcie *pci, u8 cap)
+{
+	return dw_pcie_find_next_ext_capability(pci, 0, cap);
+}
+EXPORT_SYMBOL_GPL(dw_pcie_find_ext_capability);
+
 int dw_pcie_read(void __iomem *addr, int size, u32 *val)
 {
 	if (!IS_ALIGNED((uintptr_t)addr, size)) {
diff --git a/drivers/pci/controller/dwc/pcie-designware.h b/drivers/pci/controller/dwc/pcie-designware.h
index d8c66a6827dc..11c223471416 100644
--- a/drivers/pci/controller/dwc/pcie-designware.h
+++ b/drivers/pci/controller/dwc/pcie-designware.h
@@ -252,6 +252,7 @@ struct dw_pcie {
 		container_of((endpoint), struct dw_pcie, ep)
 
 u8 dw_pcie_find_capability(struct dw_pcie *pci, u8 cap);
+u16 dw_pcie_find_ext_capability(struct dw_pcie *pci, u8 cap);
 
 int dw_pcie_read(void __iomem *addr, int size, u32 *val);
 int dw_pcie_write(void __iomem *addr, int size, u32 val);
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH V13 06/12] dt-bindings: PCI: designware: Add binding for CDM register check
  2019-07-10  6:22 [PATCH V13 00/12] PCI: tegra: Add Tegra194 PCIe support Vidya Sagar
                   ` (4 preceding siblings ...)
  2019-07-10  6:22 ` [PATCH V13 05/12] PCI: dwc: Add ext " Vidya Sagar
@ 2019-07-10  6:22 ` Vidya Sagar
  2019-07-10  6:22 ` [PATCH V13 07/12] PCI: dwc: Add support to enable " Vidya Sagar
                   ` (5 subsequent siblings)
  11 siblings, 0 replies; 34+ messages in thread
From: Vidya Sagar @ 2019-07-10  6:22 UTC (permalink / raw)
  To: lorenzo.pieralisi, bhelgaas, robh+dt, mark.rutland,
	thierry.reding, jonathanh, kishon, catalin.marinas, will.deacon,
	jingoohan1, gustavo.pimentel
  Cc: digetx, mperttunen, linux-pci, devicetree, linux-tegra,
	linux-kernel, linux-arm-kernel, kthota, mmaddireddy, vidyas,
	sagar.tv

Add support to enable CDM (Configuration Dependent Module) registers check
for any data corruption. CDM registers include standard PCIe configuration
space registers, Port Logic registers and iATU and DMA registers.
Refer Section S.4 of Synopsys DesignWare Cores PCI Express Controller Databook
Version 4.90a

Signed-off-by: Vidya Sagar <vidyas@nvidia.com>
Reviewed-by: Thierry Reding <treding@nvidia.com>
Reviewed-by: Rob Herring <robh@kernel.org>
---
V13:
* None

V12:
* None

V11:
* None

V10:
* None

V9:
* None

V8:
* None

V7:
* Changed "enable-cdm-check" to "snps,enable-cdm-check"

V6:
* None

V5:
* None

V4:
* None

V3:
* Changed flag name from 'cdm-check' to 'enable-cdm-check'
* Added info about Port Logic and DMA registers being part of CDM

V2:
* This is a new patch in v2 series

 Documentation/devicetree/bindings/pci/designware-pcie.txt | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/Documentation/devicetree/bindings/pci/designware-pcie.txt b/Documentation/devicetree/bindings/pci/designware-pcie.txt
index 5561a1c060d0..3fba04da6a59 100644
--- a/Documentation/devicetree/bindings/pci/designware-pcie.txt
+++ b/Documentation/devicetree/bindings/pci/designware-pcie.txt
@@ -34,6 +34,11 @@ Optional properties:
 - clock-names: Must include the following entries:
 	- "pcie"
 	- "pcie_bus"
+- snps,enable-cdm-check: This is a boolean property and if present enables
+   automatic checking of CDM (Configuration Dependent Module) registers
+   for data corruption. CDM registers include standard PCIe configuration
+   space registers, Port Logic registers, DMA and iATU (internal Address
+   Translation Unit) registers.
 RC mode:
 - num-viewport: number of view ports configured in hardware. If a platform
   does not specify it, the driver assumes 2.
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH V13 07/12] PCI: dwc: Add support to enable CDM register check
  2019-07-10  6:22 [PATCH V13 00/12] PCI: tegra: Add Tegra194 PCIe support Vidya Sagar
                   ` (5 preceding siblings ...)
  2019-07-10  6:22 ` [PATCH V13 06/12] dt-bindings: PCI: designware: Add binding for CDM register check Vidya Sagar
@ 2019-07-10  6:22 ` Vidya Sagar
  2019-07-10  6:22 ` [PATCH V13 08/12] dt-bindings: Add PCIe supports-clkreq property Vidya Sagar
                   ` (4 subsequent siblings)
  11 siblings, 0 replies; 34+ messages in thread
From: Vidya Sagar @ 2019-07-10  6:22 UTC (permalink / raw)
  To: lorenzo.pieralisi, bhelgaas, robh+dt, mark.rutland,
	thierry.reding, jonathanh, kishon, catalin.marinas, will.deacon,
	jingoohan1, gustavo.pimentel
  Cc: digetx, mperttunen, linux-pci, devicetree, linux-tegra,
	linux-kernel, linux-arm-kernel, kthota, mmaddireddy, vidyas,
	sagar.tv

Add support to enable CDM (Configuration Dependent Module) register check
for any data corruption based on the device-tree flag 'snps,enable-cdm-check'.

Signed-off-by: Vidya Sagar <vidyas@nvidia.com>
Acked-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com>
Reviewed-by: Thierry Reding <treding@nvidia.com>
---
V13:
* None

V12:
* None

V11:
* None

V10:
* None

V9:
* None

V8:
* None

V7:
* Changed "enable-cdm-check" to "snps,enable-cdm-check"

V6:
* None

V5:
* None

V4:
* None

V3:
* Changed code and commit description to reflect change in flag from
  'cdm-check' to 'enable-cdm-check'

V2:
* This is a new patch in v2 series

 drivers/pci/controller/dwc/pcie-designware.c | 7 +++++++
 drivers/pci/controller/dwc/pcie-designware.h | 9 +++++++++
 2 files changed, 16 insertions(+)

diff --git a/drivers/pci/controller/dwc/pcie-designware.c b/drivers/pci/controller/dwc/pcie-designware.c
index 181449e342f1..01f9227a5ade 100644
--- a/drivers/pci/controller/dwc/pcie-designware.c
+++ b/drivers/pci/controller/dwc/pcie-designware.c
@@ -546,4 +546,11 @@ void dw_pcie_setup(struct dw_pcie *pci)
 		break;
 	}
 	dw_pcie_writel_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL, val);
+
+	if (of_property_read_bool(np, "snps,enable-cdm-check")) {
+		val = dw_pcie_readl_dbi(pci, PCIE_PL_CHK_REG_CONTROL_STATUS);
+		val |= PCIE_PL_CHK_REG_CHK_REG_CONTINUOUS |
+		       PCIE_PL_CHK_REG_CHK_REG_START;
+		dw_pcie_writel_dbi(pci, PCIE_PL_CHK_REG_CONTROL_STATUS, val);
+	}
 }
diff --git a/drivers/pci/controller/dwc/pcie-designware.h b/drivers/pci/controller/dwc/pcie-designware.h
index 11c223471416..5a18e94e52c8 100644
--- a/drivers/pci/controller/dwc/pcie-designware.h
+++ b/drivers/pci/controller/dwc/pcie-designware.h
@@ -86,6 +86,15 @@
 #define PCIE_MISC_CONTROL_1_OFF		0x8BC
 #define PCIE_DBI_RO_WR_EN		BIT(0)
 
+#define PCIE_PL_CHK_REG_CONTROL_STATUS			0xB20
+#define PCIE_PL_CHK_REG_CHK_REG_START			BIT(0)
+#define PCIE_PL_CHK_REG_CHK_REG_CONTINUOUS		BIT(1)
+#define PCIE_PL_CHK_REG_CHK_REG_COMPARISON_ERROR	BIT(16)
+#define PCIE_PL_CHK_REG_CHK_REG_LOGIC_ERROR		BIT(17)
+#define PCIE_PL_CHK_REG_CHK_REG_COMPLETE		BIT(18)
+
+#define PCIE_PL_CHK_REG_ERR_ADDR			0xB28
+
 /*
  * iATU Unroll-specific register definitions
  * From 4.80 core version the address translation will be made by unroll
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH V13 08/12] dt-bindings: Add PCIe supports-clkreq property
  2019-07-10  6:22 [PATCH V13 00/12] PCI: tegra: Add Tegra194 PCIe support Vidya Sagar
                   ` (6 preceding siblings ...)
  2019-07-10  6:22 ` [PATCH V13 07/12] PCI: dwc: Add support to enable " Vidya Sagar
@ 2019-07-10  6:22 ` Vidya Sagar
  2019-07-10 15:28   ` Lorenzo Pieralisi
  2019-07-10  6:22 ` [PATCH V13 09/12] dt-bindings: PCI: tegra: Add device tree support for Tegra194 Vidya Sagar
                   ` (3 subsequent siblings)
  11 siblings, 1 reply; 34+ messages in thread
From: Vidya Sagar @ 2019-07-10  6:22 UTC (permalink / raw)
  To: lorenzo.pieralisi, bhelgaas, robh+dt, mark.rutland,
	thierry.reding, jonathanh, kishon, catalin.marinas, will.deacon,
	jingoohan1, gustavo.pimentel
  Cc: digetx, mperttunen, linux-pci, devicetree, linux-tegra,
	linux-kernel, linux-arm-kernel, kthota, mmaddireddy, vidyas,
	sagar.tv

Some host controllers need to know the existence of clkreq signal routing to
downstream devices to be able to advertise low power features like ASPM L1
substates. Without clkreq signal routing being present, enabling ASPM L1 sub
states might lead to downstream devices falling off the bus. Hence a new device
tree property 'supports-clkreq' is added to make such host controllers
aware of clkreq signal routing to downstream devices.

Signed-off-by: Vidya Sagar <vidyas@nvidia.com>
Reviewed-by: Rob Herring <robh@kernel.org>
Reviewed-by: Thierry Reding <treding@nvidia.com>
---
V13:
* None

V12:
* Rebased on top of linux-next top of the tree

V11:
* None

V10:
* None

V9:
* None

V8:
* None

V7:
* None

V6:
* s/Documentation\/devicetree/dt-bindings/ in the subject

V5:
* None

V4:
* Rebased on top of linux-next top of the tree

V3:
* None

V2:
* This is a new patch in v2 series

 Documentation/devicetree/bindings/pci/pci.txt | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/Documentation/devicetree/bindings/pci/pci.txt b/Documentation/devicetree/bindings/pci/pci.txt
index 2a5d91024059..29bcbd88f457 100644
--- a/Documentation/devicetree/bindings/pci/pci.txt
+++ b/Documentation/devicetree/bindings/pci/pci.txt
@@ -27,6 +27,11 @@ driver implementation may support the following properties:
 - reset-gpios:
    If present this property specifies PERST# GPIO. Host drivers can parse the
    GPIO and apply fundamental reset to endpoints.
+- supports-clkreq:
+   If present this property specifies that CLKREQ signal routing exists from
+   root port to downstream device and host bridge drivers can do programming
+   which depends on CLKREQ signal existence. For example, programming root port
+   not to advertise ASPM L1 Sub-States support if there is no CLKREQ signal.
 
 PCI-PCI Bridge properties
 -------------------------
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH V13 09/12] dt-bindings: PCI: tegra: Add device tree support for Tegra194
  2019-07-10  6:22 [PATCH V13 00/12] PCI: tegra: Add Tegra194 PCIe support Vidya Sagar
                   ` (7 preceding siblings ...)
  2019-07-10  6:22 ` [PATCH V13 08/12] dt-bindings: Add PCIe supports-clkreq property Vidya Sagar
@ 2019-07-10  6:22 ` Vidya Sagar
  2019-07-10  6:22 ` [PATCH V13 10/12] dt-bindings: PHY: P2U: Add Tegra194 P2U block Vidya Sagar
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 34+ messages in thread
From: Vidya Sagar @ 2019-07-10  6:22 UTC (permalink / raw)
  To: lorenzo.pieralisi, bhelgaas, robh+dt, mark.rutland,
	thierry.reding, jonathanh, kishon, catalin.marinas, will.deacon,
	jingoohan1, gustavo.pimentel
  Cc: digetx, mperttunen, linux-pci, devicetree, linux-tegra,
	linux-kernel, linux-arm-kernel, kthota, mmaddireddy, vidyas,
	sagar.tv

Add support for Tegra194 PCIe controllers. These controllers are based
on Synopsys DesignWare core IP.

Signed-off-by: Vidya Sagar <vidyas@nvidia.com>
Reviewed-by: Rob Herring <robh@kernel.org>
Acked-by: Thierry Reding <treding@nvidia.com>
---
V13:
* None

V12:
* None

V11:
* None

V10:
* None

V9:
* Added Acked-by from Thierry

V8:
* Addressed review comments from Thierry
* Modified DT example to reflect new changes

V7:
* Changed description of the property "nvidia,bpmp".
* Removed property "nvidia,disable-aspm-states".

V6:
* Removed 'max-link-speed' as it is going to be a common sub-system property
* Removed 'nvidia,init-link-speed' as there isn't much value addition
* Removed 'nvidia,wake-gpios' for now
* Addressed review comments from Thierry and Rob in general

V5:
* None

V4:
* None

V3:
* Using only 'Cx' (x-being controller number) format to represent a controller
* Changed to 'value: description' format where applicable
* Changed 'nvidia,init-speed' to 'nvidia,init-link-speed'
* Provided more documentation for 'nvidia,init-link-speed' property
* Changed 'nvidia,pex-wake' to 'nvidia,wake-gpios'

V2:
* Added documentation for 'power-domains' property
* Removed 'window1' and 'window2' properties
* Removed '_clk' and '_rst' from clock and reset names
* Dropped 'pcie' from phy-names
* Added entry for BPMP-FW handle
* Removed offsets for some of the registers and added them in code and would be pickedup based on
  controller ID
* Changed 'nvidia,max-speed' to 'max-link-speed' and is made as an optional
* Changed 'nvidia,disable-clock-request' to 'supports-clkreq' with inverted operation
* Added more documentation for 'nvidia,update-fc-fixup' property
* Removed 'nvidia,enable-power-down' and 'nvidia,plat-gpios' properties
* Added '-us' to all properties that represent time in microseconds
* Moved P2U documentation to a separate file

 .../bindings/pci/nvidia,tegra194-pcie.txt     | 155 ++++++++++++++++++
 1 file changed, 155 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/pci/nvidia,tegra194-pcie.txt

diff --git a/Documentation/devicetree/bindings/pci/nvidia,tegra194-pcie.txt b/Documentation/devicetree/bindings/pci/nvidia,tegra194-pcie.txt
new file mode 100644
index 000000000000..674e5adb2895
--- /dev/null
+++ b/Documentation/devicetree/bindings/pci/nvidia,tegra194-pcie.txt
@@ -0,0 +1,155 @@
+NVIDIA Tegra PCIe controller (Synopsys DesignWare Core based)
+
+This PCIe host controller is based on the Synopsis Designware PCIe IP
+and thus inherits all the common properties defined in designware-pcie.txt.
+
+Required properties:
+- compatible: For Tegra19x, must contain "nvidia,tegra194-pcie".
+- device_type: Must be "pci"
+- power-domains: A phandle to the node that controls power to the respective
+  PCIe controller and a specifier name for the PCIe controller. Following are
+  the specifiers for the different PCIe controllers
+    TEGRA194_POWER_DOMAIN_PCIEX8B: C0
+    TEGRA194_POWER_DOMAIN_PCIEX1A: C1
+    TEGRA194_POWER_DOMAIN_PCIEX1A: C2
+    TEGRA194_POWER_DOMAIN_PCIEX1A: C3
+    TEGRA194_POWER_DOMAIN_PCIEX4A: C4
+    TEGRA194_POWER_DOMAIN_PCIEX8A: C5
+  these specifiers are defined in
+  "include/dt-bindings/power/tegra194-powergate.h" file.
+- reg: A list of physical base address and length pairs for each set of
+  controller registers. Must contain an entry for each entry in the reg-names
+  property.
+- reg-names: Must include the following entries:
+  "appl": Controller's application logic registers
+  "config": As per the definition in designware-pcie.txt
+  "atu_dma": iATU and DMA registers. This is where the iATU (internal Address
+             Translation Unit) registers of the PCIe core are made available
+             for SW access.
+  "dbi": The aperture where root port's own configuration registers are
+         available
+- interrupts: A list of interrupt outputs of the controller. Must contain an
+  entry for each entry in the interrupt-names property.
+- interrupt-names: Must include the following entries:
+  "intr": The Tegra interrupt that is asserted for controller interrupts
+  "msi": The Tegra interrupt that is asserted when an MSI is received
+- bus-range: Range of bus numbers associated with this controller
+- #address-cells: Address representation for root ports (must be 3)
+  - cell 0 specifies the bus and device numbers of the root port:
+    [23:16]: bus number
+    [15:11]: device number
+  - cell 1 denotes the upper 32 address bits and should be 0
+  - cell 2 contains the lower 32 address bits and is used to translate to the
+    CPU address space
+- #size-cells: Size representation for root ports (must be 2)
+- ranges: Describes the translation of addresses for root ports and standard
+  PCI regions. The entries must be 7 cells each, where the first three cells
+  correspond to the address as described for the #address-cells property
+  above, the fourth and fifth cells are for the physical CPU address to
+  translate to and the sixth and seventh cells are as described for the
+  #size-cells property above.
+  - Entries setup the mapping for the standard I/O, memory and
+    prefetchable PCI regions. The first cell determines the type of region
+    that is setup:
+    - 0x81000000: I/O memory region
+    - 0x82000000: non-prefetchable memory region
+    - 0xc2000000: prefetchable memory region
+  Please refer to the standard PCI bus binding document for a more detailed
+  explanation.
+- #interrupt-cells: Size representation for interrupts (must be 1)
+- interrupt-map-mask and interrupt-map: Standard PCI IRQ mapping properties
+  Please refer to the standard PCI bus binding document for a more detailed
+  explanation.
+- clocks: Must contain an entry for each entry in clock-names.
+  See ../clocks/clock-bindings.txt for details.
+- clock-names: Must include the following entries:
+  - core
+- resets: Must contain an entry for each entry in reset-names.
+  See ../reset/reset.txt for details.
+- reset-names: Must include the following entries:
+  - apb
+  - core
+- phys: Must contain a phandle to P2U PHY for each entry in phy-names.
+- phy-names: Must include an entry for each active lane.
+  "p2u-N": where N ranges from 0 to one less than the total number of lanes
+- nvidia,bpmp: Must contain a pair of phandle to BPMP controller node followed
+  by controller-id. Following are the controller ids for each controller.
+    0: C0
+    1: C1
+    2: C2
+    3: C3
+    4: C4
+    5: C5
+- vddio-pex-ctl-supply: Regulator supply for PCIe side band signals
+
+Optional properties:
+- supports-clkreq: Refer to Documentation/devicetree/bindings/pci/pci.txt
+- nvidia,update-fc-fixup: This is a boolean property and needs to be present to
+    improve performance when a platform is designed in such a way that it
+    satisfies at least one of the following conditions thereby enabling root
+    port to exchange optimum number of FC (Flow Control) credits with
+    downstream devices
+    1. If C0/C4/C5 run at x1/x2 link widths (irrespective of speed and MPS)
+    2. If C0/C1/C2/C3/C4/C5 operate at their respective max link widths and
+       a) speed is Gen-2 and MPS is 256B
+       b) speed is >= Gen-3 with any MPS
+- nvidia,aspm-cmrt-us: Common Mode Restore Time for proper operation of ASPM
+   to be specified in microseconds
+- nvidia,aspm-pwr-on-t-us: Power On time for proper operation of ASPM to be
+   specified in microseconds
+- nvidia,aspm-l0s-entrance-latency-us: ASPM L0s entrance latency to be
+   specified in microseconds
+
+Examples:
+=========
+
+Tegra194:
+--------
+
+	pcie@14180000 {
+		compatible = "nvidia,tegra194-pcie", "snps,dw-pcie";
+		power-domains = <&bpmp TEGRA194_POWER_DOMAIN_PCIEX8B>;
+		reg = <0x00 0x14180000 0x0 0x00020000   /* appl registers (128K)      */
+		       0x00 0x38000000 0x0 0x00040000   /* configuration space (256K) */
+		       0x00 0x38040000 0x0 0x00040000>; /* iATU_DMA reg space (256K)  */
+		reg-names = "appl", "config", "atu_dma";
+
+		#address-cells = <3>;
+		#size-cells = <2>;
+		device_type = "pci";
+		num-lanes = <8>;
+		linux,pci-domain = <0>;
+
+		clocks = <&bpmp TEGRA194_CLK_PEX0_CORE_0>;
+		clock-names = "core";
+
+		resets = <&bpmp TEGRA194_RESET_PEX0_CORE_0_APB>,
+			 <&bpmp TEGRA194_RESET_PEX0_CORE_0>;
+		reset-names = "apb", "core";
+
+		interrupts = <GIC_SPI 72 IRQ_TYPE_LEVEL_HIGH>,	/* controller interrupt */
+			     <GIC_SPI 73 IRQ_TYPE_LEVEL_HIGH>;	/* MSI interrupt */
+		interrupt-names = "intr", "msi";
+
+		#interrupt-cells = <1>;
+		interrupt-map-mask = <0 0 0 0>;
+		interrupt-map = <0 0 0 0 &gic GIC_SPI 72 IRQ_TYPE_LEVEL_HIGH>;
+
+		nvidia,bpmp = <&bpmp 0>;
+
+		supports-clkreq;
+		nvidia,aspm-cmrt-us = <60>;
+		nvidia,aspm-pwr-on-t-us = <20>;
+		nvidia,aspm-l0s-entrance-latency-us = <3>;
+
+		bus-range = <0x0 0xff>;
+		ranges = <0x81000000 0x0  0x38100000 0x0  0x38100000 0x0 0x00100000    /* downstream I/O (1MB) */
+			  0x82000000 0x0  0x38200000 0x0  0x38200000 0x0 0x01E00000    /* non-prefetchable memory (30MB) */
+			  0xc2000000 0x18 0x00000000 0x18 0x00000000 0x4 0x00000000>;  /* prefetchable memory (16GB) */
+
+		vddio-pex-ctl-supply = <&vdd_1v8ao>;
+
+		phys = <&p2u_hsio_2>, <&p2u_hsio_3>, <&p2u_hsio_4>,
+		       <&p2u_hsio_5>;
+		phy-names = "p2u-0", "p2u-1", "p2u-2", "p2u-3";
+	};
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH V13 10/12] dt-bindings: PHY: P2U: Add Tegra194 P2U block
  2019-07-10  6:22 [PATCH V13 00/12] PCI: tegra: Add Tegra194 PCIe support Vidya Sagar
                   ` (8 preceding siblings ...)
  2019-07-10  6:22 ` [PATCH V13 09/12] dt-bindings: PCI: tegra: Add device tree support for Tegra194 Vidya Sagar
@ 2019-07-10  6:22 ` Vidya Sagar
  2019-07-10  6:22 ` [PATCH V13 11/12] phy: tegra: Add PCIe PIPE2UPHY support Vidya Sagar
  2019-07-10  6:22 ` [PATCH V13 12/12] PCI: tegra: Add Tegra194 PCIe support Vidya Sagar
  11 siblings, 0 replies; 34+ messages in thread
From: Vidya Sagar @ 2019-07-10  6:22 UTC (permalink / raw)
  To: lorenzo.pieralisi, bhelgaas, robh+dt, mark.rutland,
	thierry.reding, jonathanh, kishon, catalin.marinas, will.deacon,
	jingoohan1, gustavo.pimentel
  Cc: digetx, mperttunen, linux-pci, devicetree, linux-tegra,
	linux-kernel, linux-arm-kernel, kthota, mmaddireddy, vidyas,
	sagar.tv

Add support for Tegra194 P2U (PIPE to UPHY) module block which is a glue
module instantiated one for each PCIe lane between Synopsys DesignWare core
based PCIe IP and Universal PHY block.

Signed-off-by: Vidya Sagar <vidyas@nvidia.com>
Reviewed-by: Rob Herring <robh@kernel.org>
Acked-by: Thierry Reding <treding@nvidia.com>
Acked-by: Kishon Vijay Abraham I <kishon@ti.com>
---
V13:
* None

V12:
* None

V11:
* None

V10:
* None

V9:
* None

V8:
* None

V7:
* None

V6:
* Added Sob
* Changed node name from "p2u@xxxxxxxx" to "phy@xxxxxxxx"

V5:
* None

V4:
* None

V3:
* Changed node label to reflect new format that includes either 'hsio' or
  'nvhs' in its name to reflect which UPHY brick they belong to

V2:
* This is a new patch in v2 series

 .../bindings/phy/phy-tegra194-p2u.txt         | 28 +++++++++++++++++++
 1 file changed, 28 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/phy/phy-tegra194-p2u.txt

diff --git a/Documentation/devicetree/bindings/phy/phy-tegra194-p2u.txt b/Documentation/devicetree/bindings/phy/phy-tegra194-p2u.txt
new file mode 100644
index 000000000000..d23ff90baad5
--- /dev/null
+++ b/Documentation/devicetree/bindings/phy/phy-tegra194-p2u.txt
@@ -0,0 +1,28 @@
+NVIDIA Tegra194 P2U binding
+
+Tegra194 has two PHY bricks namely HSIO (High Speed IO) and NVHS (NVIDIA High
+Speed) each interfacing with 12 and 8 P2U instances respectively.
+A P2U instance is a glue logic between Synopsys DesignWare Core PCIe IP's PIPE
+interface and PHY of HSIO/NVHS bricks. Each P2U instance represents one PCIe
+lane.
+
+Required properties:
+- compatible: For Tegra19x, must contain "nvidia,tegra194-p2u".
+- reg: Should be the physical address space and length of respective each P2U
+       instance.
+- reg-names: Must include the entry "ctl".
+
+Required properties for PHY port node:
+- #phy-cells: Defined by generic PHY bindings.  Must be 0.
+
+Refer to phy/phy-bindings.txt for the generic PHY binding properties.
+
+Example:
+
+p2u_hsio_0: phy@3e10000 {
+	compatible = "nvidia,tegra194-p2u";
+	reg = <0x03e10000 0x10000>;
+	reg-names = "ctl";
+
+	#phy-cells = <0>;
+};
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH V13 11/12] phy: tegra: Add PCIe PIPE2UPHY support
  2019-07-10  6:22 [PATCH V13 00/12] PCI: tegra: Add Tegra194 PCIe support Vidya Sagar
                   ` (9 preceding siblings ...)
  2019-07-10  6:22 ` [PATCH V13 10/12] dt-bindings: PHY: P2U: Add Tegra194 P2U block Vidya Sagar
@ 2019-07-10  6:22 ` Vidya Sagar
  2019-07-10  6:22 ` [PATCH V13 12/12] PCI: tegra: Add Tegra194 PCIe support Vidya Sagar
  11 siblings, 0 replies; 34+ messages in thread
From: Vidya Sagar @ 2019-07-10  6:22 UTC (permalink / raw)
  To: lorenzo.pieralisi, bhelgaas, robh+dt, mark.rutland,
	thierry.reding, jonathanh, kishon, catalin.marinas, will.deacon,
	jingoohan1, gustavo.pimentel
  Cc: digetx, mperttunen, linux-pci, devicetree, linux-tegra,
	linux-kernel, linux-arm-kernel, kthota, mmaddireddy, vidyas,
	sagar.tv

Synopsys DesignWare core based PCIe controllers in Tegra 194 SoC interface
with Universal PHY (UPHY) module through a PIPE2UPHY (P2U) module.
For each PCIe lane of a controller, there is a P2U unit instantiated at
hardware level. This driver provides support for the programming required
for each P2U that is going to be used for a PCIe controller.

Signed-off-by: Vidya Sagar <vidyas@nvidia.com>
Acked-by: Kishon Vijay Abraham I <kishon@ti.com>
---
V13:
* None

V12:
* None

V11:
* Replaced PTR_ERR_OR_ZERO() with PTR_ERR() as the check for zero is already
  present in the code.

V10:
* Used _relaxed() versions of readl() & writel()

V9:
* Made it dependent on ARCH_TEGRA_194_SOC directly instead of ARCH_TEGRA

V8:
* Changed P2U driver file name from pcie-p2u-tegra194.c to phy-tegra194-p2u.c

V7:
* None

V6:
* Addressed review comments from Thierry

V5:
* None

V4:
* Rebased on top of linux-next top of the tree

V3:
* Replaced spaces with tabs in Kconfig file
* Sorted header file inclusion alphabetically

V2:
* Added COMPILE_TEST in Kconfig
* Removed empty phy_ops implementations
* Modified code according to DT documentation file modifications

 drivers/phy/tegra/Kconfig            |   7 ++
 drivers/phy/tegra/Makefile           |   1 +
 drivers/phy/tegra/phy-tegra194-p2u.c | 120 +++++++++++++++++++++++++++
 3 files changed, 128 insertions(+)
 create mode 100644 drivers/phy/tegra/phy-tegra194-p2u.c

diff --git a/drivers/phy/tegra/Kconfig b/drivers/phy/tegra/Kconfig
index e516967d695b..f9817c3ae85f 100644
--- a/drivers/phy/tegra/Kconfig
+++ b/drivers/phy/tegra/Kconfig
@@ -7,3 +7,10 @@ config PHY_TEGRA_XUSB
 
 	  To compile this driver as a module, choose M here: the module will
 	  be called phy-tegra-xusb.
+
+config PHY_TEGRA194_P2U
+	tristate "NVIDIA Tegra194 PIPE2UPHY PHY driver"
+	depends on ARCH_TEGRA_194_SOC || COMPILE_TEST
+	select GENERIC_PHY
+	help
+	  Enable this to support the P2U (PIPE to UPHY) that is part of Tegra 19x SOCs.
diff --git a/drivers/phy/tegra/Makefile b/drivers/phy/tegra/Makefile
index 64ccaeacb631..320dd389f34d 100644
--- a/drivers/phy/tegra/Makefile
+++ b/drivers/phy/tegra/Makefile
@@ -6,3 +6,4 @@ phy-tegra-xusb-$(CONFIG_ARCH_TEGRA_124_SOC) += xusb-tegra124.o
 phy-tegra-xusb-$(CONFIG_ARCH_TEGRA_132_SOC) += xusb-tegra124.o
 phy-tegra-xusb-$(CONFIG_ARCH_TEGRA_210_SOC) += xusb-tegra210.o
 phy-tegra-xusb-$(CONFIG_ARCH_TEGRA_186_SOC) += xusb-tegra186.o
+obj-$(CONFIG_PHY_TEGRA194_P2U) += phy-tegra194-p2u.o
diff --git a/drivers/phy/tegra/phy-tegra194-p2u.c b/drivers/phy/tegra/phy-tegra194-p2u.c
new file mode 100644
index 000000000000..7042bed9feaa
--- /dev/null
+++ b/drivers/phy/tegra/phy-tegra194-p2u.c
@@ -0,0 +1,120 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * P2U (PIPE to UPHY) driver for Tegra T194 SoC
+ *
+ * Copyright (C) 2019 NVIDIA Corporation.
+ *
+ * Author: Vidya Sagar <vidyas@nvidia.com>
+ */
+
+#include <linux/err.h>
+#include <linux/io.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/of_platform.h>
+#include <linux/phy/phy.h>
+
+#define P2U_PERIODIC_EQ_CTRL_GEN3	0xc0
+#define P2U_PERIODIC_EQ_CTRL_GEN3_PERIODIC_EQ_EN		BIT(0)
+#define P2U_PERIODIC_EQ_CTRL_GEN3_INIT_PRESET_EQ_TRAIN_EN	BIT(1)
+#define P2U_PERIODIC_EQ_CTRL_GEN4	0xc4
+#define P2U_PERIODIC_EQ_CTRL_GEN4_INIT_PRESET_EQ_TRAIN_EN	BIT(1)
+
+#define P2U_RX_DEBOUNCE_TIME				0xa4
+#define P2U_RX_DEBOUNCE_TIME_DEBOUNCE_TIMER_MASK	0xffff
+#define P2U_RX_DEBOUNCE_TIME_DEBOUNCE_TIMER_VAL		160
+
+struct tegra_p2u {
+	void __iomem *base;
+};
+
+static inline void p2u_writel(struct tegra_p2u *phy, const u32 value,
+			      const u32 reg)
+{
+	writel_relaxed(value, phy->base + reg);
+}
+
+static inline u32 p2u_readl(struct tegra_p2u *phy, const u32 reg)
+{
+	return readl_relaxed(phy->base + reg);
+}
+
+static int tegra_p2u_power_on(struct phy *x)
+{
+	struct tegra_p2u *phy = phy_get_drvdata(x);
+	u32 val;
+
+	val = p2u_readl(phy, P2U_PERIODIC_EQ_CTRL_GEN3);
+	val &= ~P2U_PERIODIC_EQ_CTRL_GEN3_PERIODIC_EQ_EN;
+	val |= P2U_PERIODIC_EQ_CTRL_GEN3_INIT_PRESET_EQ_TRAIN_EN;
+	p2u_writel(phy, val, P2U_PERIODIC_EQ_CTRL_GEN3);
+
+	val = p2u_readl(phy, P2U_PERIODIC_EQ_CTRL_GEN4);
+	val |= P2U_PERIODIC_EQ_CTRL_GEN4_INIT_PRESET_EQ_TRAIN_EN;
+	p2u_writel(phy, val, P2U_PERIODIC_EQ_CTRL_GEN4);
+
+	val = p2u_readl(phy, P2U_RX_DEBOUNCE_TIME);
+	val &= ~P2U_RX_DEBOUNCE_TIME_DEBOUNCE_TIMER_MASK;
+	val |= P2U_RX_DEBOUNCE_TIME_DEBOUNCE_TIMER_VAL;
+	p2u_writel(phy, val, P2U_RX_DEBOUNCE_TIME);
+
+	return 0;
+}
+
+static const struct phy_ops ops = {
+	.power_on = tegra_p2u_power_on,
+	.owner = THIS_MODULE,
+};
+
+static int tegra_p2u_probe(struct platform_device *pdev)
+{
+	struct phy_provider *phy_provider;
+	struct device *dev = &pdev->dev;
+	struct phy *generic_phy;
+	struct tegra_p2u *phy;
+	struct resource *res;
+
+	phy = devm_kzalloc(dev, sizeof(*phy), GFP_KERNEL);
+	if (!phy)
+		return -ENOMEM;
+
+	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "ctl");
+	phy->base = devm_ioremap_resource(dev, res);
+	if (IS_ERR(phy->base))
+		return PTR_ERR(phy->base);
+
+	platform_set_drvdata(pdev, phy);
+
+	generic_phy = devm_phy_create(dev, NULL, &ops);
+	if (IS_ERR(generic_phy))
+		return PTR_ERR(generic_phy);
+
+	phy_set_drvdata(generic_phy, phy);
+
+	phy_provider = devm_of_phy_provider_register(dev, of_phy_simple_xlate);
+	if (IS_ERR(phy_provider))
+		return PTR_ERR(phy_provider);
+
+	return 0;
+}
+
+static const struct of_device_id tegra_p2u_id_table[] = {
+	{
+		.compatible = "nvidia,tegra194-p2u",
+	},
+	{}
+};
+MODULE_DEVICE_TABLE(of, tegra_p2u_id_table);
+
+static struct platform_driver tegra_p2u_driver = {
+	.probe = tegra_p2u_probe,
+	.driver = {
+		.name = "tegra194-p2u",
+		.of_match_table = tegra_p2u_id_table,
+	},
+};
+module_platform_driver(tegra_p2u_driver);
+
+MODULE_AUTHOR("Vidya Sagar <vidyas@nvidia.com>");
+MODULE_DESCRIPTION("NVIDIA Tegra194 PIPE2UPHY PHY driver");
+MODULE_LICENSE("GPL v2");
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH V13 12/12] PCI: tegra: Add Tegra194 PCIe support
  2019-07-10  6:22 [PATCH V13 00/12] PCI: tegra: Add Tegra194 PCIe support Vidya Sagar
                   ` (10 preceding siblings ...)
  2019-07-10  6:22 ` [PATCH V13 11/12] phy: tegra: Add PCIe PIPE2UPHY support Vidya Sagar
@ 2019-07-10  6:22 ` Vidya Sagar
  2019-07-10 17:02   ` Lorenzo Pieralisi
  2019-07-11 12:54   ` Lorenzo Pieralisi
  11 siblings, 2 replies; 34+ messages in thread
From: Vidya Sagar @ 2019-07-10  6:22 UTC (permalink / raw)
  To: lorenzo.pieralisi, bhelgaas, robh+dt, mark.rutland,
	thierry.reding, jonathanh, kishon, catalin.marinas, will.deacon,
	jingoohan1, gustavo.pimentel
  Cc: digetx, mperttunen, linux-pci, devicetree, linux-tegra,
	linux-kernel, linux-arm-kernel, kthota, mmaddireddy, vidyas,
	sagar.tv

Add support for Synopsys DesignWare core IP based PCIe host controller
present in Tegra194 SoC.

Signed-off-by: Vidya Sagar <vidyas@nvidia.com>
Acked-by: Thierry Reding <treding@nvidia.com>
---
V13:
* Modified according to modifications in PATCH V13 01/12

V12:
* None

V11:
* None

V10:
* Used _relaxed() versions of readl() & writel()

V9:
* Made it dependent on ARCH_TEGRA_194_SOC directly

V8:
* Addressed review comments from Thierry

V7:
* Removed code around "nvidia,disable-aspm-states" DT property
* Refactored code to remove code duplication

V6:
* Addressed review comments from Thierry

V5:
* None

V4:
* None

V3:
* Changed 'nvidia,init-speed' to 'nvidia,init-link-speed'
* Changed 'nvidia,pex-wake' to 'nvidia,wake-gpios'
* Removed .runtime_suspend() & .runtime_resume() implementations

V2:
* Made CONFIG_PCIE_TEGRA194 as 'm' by default from its previous 'y' state
* Modified code as per changes made to DT documentation
* Refactored code to address Bjorn & Thierry's review comments
* Added goto to avoid recursion in tegra_pcie_dw_host_init() API
* Merged .scan_bus() of dw_pcie_host_ops implementation to tegra_pcie_dw_host_init() API

 drivers/pci/controller/dwc/Kconfig         |   10 +
 drivers/pci/controller/dwc/Makefile        |    1 +
 drivers/pci/controller/dwc/pcie-tegra194.c | 1632 ++++++++++++++++++++
 3 files changed, 1643 insertions(+)
 create mode 100644 drivers/pci/controller/dwc/pcie-tegra194.c

diff --git a/drivers/pci/controller/dwc/Kconfig b/drivers/pci/controller/dwc/Kconfig
index 6ea778ae4877..49475f5c42c3 100644
--- a/drivers/pci/controller/dwc/Kconfig
+++ b/drivers/pci/controller/dwc/Kconfig
@@ -220,6 +220,16 @@ config PCI_MESON
 	  and therefore the driver re-uses the DesignWare core functions to
 	  implement the driver.
 
+config PCIE_TEGRA194
+	tristate "NVIDIA Tegra194 (and later) PCIe controller"
+	depends on ARCH_TEGRA_194_SOC || COMPILE_TEST
+	depends on PCI_MSI_IRQ_DOMAIN
+	select PCIE_DW_HOST
+	select PHY_TEGRA194_P2U
+	help
+	  Say Y here if you want support for DesignWare core based PCIe host
+	  controller found in NVIDIA Tegra194 SoC.
+
 config PCIE_UNIPHIER
 	bool "Socionext UniPhier PCIe controllers"
 	depends on ARCH_UNIPHIER || COMPILE_TEST
diff --git a/drivers/pci/controller/dwc/Makefile b/drivers/pci/controller/dwc/Makefile
index b085dfd4fab7..b30336181d46 100644
--- a/drivers/pci/controller/dwc/Makefile
+++ b/drivers/pci/controller/dwc/Makefile
@@ -15,6 +15,7 @@ obj-$(CONFIG_PCIE_ARTPEC6) += pcie-artpec6.o
 obj-$(CONFIG_PCIE_KIRIN) += pcie-kirin.o
 obj-$(CONFIG_PCIE_HISI_STB) += pcie-histb.o
 obj-$(CONFIG_PCI_MESON) += pci-meson.o
+obj-$(CONFIG_PCIE_TEGRA194) += pcie-tegra194.o
 obj-$(CONFIG_PCIE_UNIPHIER) += pcie-uniphier.o
 
 # The following drivers are for devices that use the generic ACPI
diff --git a/drivers/pci/controller/dwc/pcie-tegra194.c b/drivers/pci/controller/dwc/pcie-tegra194.c
new file mode 100644
index 000000000000..189779edd976
--- /dev/null
+++ b/drivers/pci/controller/dwc/pcie-tegra194.c
@@ -0,0 +1,1632 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * PCIe host controller driver for Tegra194 SoC
+ *
+ * Copyright (C) 2019 NVIDIA Corporation.
+ *
+ * Author: Vidya Sagar <vidyas@nvidia.com>
+ */
+
+#include <linux/clk.h>
+#include <linux/debugfs.h>
+#include <linux/delay.h>
+#include <linux/gpio.h>
+#include <linux/interrupt.h>
+#include <linux/iopoll.h>
+#include <linux/kernel.h>
+#include <linux/kfifo.h>
+#include <linux/kthread.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/of_device.h>
+#include <linux/of_gpio.h>
+#include <linux/of_irq.h>
+#include <linux/of_pci.h>
+#include <linux/pci.h>
+#include <linux/pci-aspm.h>
+#include <linux/phy/phy.h>
+#include <linux/platform_device.h>
+#include <linux/pm_runtime.h>
+#include <linux/random.h>
+#include <linux/reset.h>
+#include <linux/resource.h>
+#include <linux/types.h>
+#include "pcie-designware.h"
+#include <soc/tegra/bpmp.h>
+#include <soc/tegra/bpmp-abi.h>
+#include "../../pci.h"
+
+#define APPL_PINMUX				0x0
+#define APPL_PINMUX_PEX_RST			BIT(0)
+#define APPL_PINMUX_CLKREQ_OVERRIDE_EN		BIT(2)
+#define APPL_PINMUX_CLKREQ_OVERRIDE		BIT(3)
+#define APPL_PINMUX_CLK_OUTPUT_IN_OVERRIDE_EN	BIT(4)
+#define APPL_PINMUX_CLK_OUTPUT_IN_OVERRIDE	BIT(5)
+#define APPL_PINMUX_CLKREQ_OUT_OVRD_EN		BIT(9)
+#define APPL_PINMUX_CLKREQ_OUT_OVRD		BIT(10)
+
+#define APPL_CTRL				0x4
+#define APPL_CTRL_SYS_PRE_DET_STATE		BIT(6)
+#define APPL_CTRL_LTSSM_EN			BIT(7)
+#define APPL_CTRL_HW_HOT_RST_EN			BIT(20)
+#define APPL_CTRL_HW_HOT_RST_MODE_MASK		GENMASK(1, 0)
+#define APPL_CTRL_HW_HOT_RST_MODE_SHIFT		22
+#define APPL_CTRL_HW_HOT_RST_MODE_IMDT_RST	0x1
+
+#define APPL_INTR_EN_L0_0			0x8
+#define APPL_INTR_EN_L0_0_LINK_STATE_INT_EN	BIT(0)
+#define APPL_INTR_EN_L0_0_MSI_RCV_INT_EN	BIT(4)
+#define APPL_INTR_EN_L0_0_INT_INT_EN		BIT(8)
+#define APPL_INTR_EN_L0_0_CDM_REG_CHK_INT_EN	BIT(19)
+#define APPL_INTR_EN_L0_0_SYS_INTR_EN		BIT(30)
+#define APPL_INTR_EN_L0_0_SYS_MSI_INTR_EN	BIT(31)
+
+#define APPL_INTR_STATUS_L0			0xC
+#define APPL_INTR_STATUS_L0_LINK_STATE_INT	BIT(0)
+#define APPL_INTR_STATUS_L0_INT_INT		BIT(8)
+#define APPL_INTR_STATUS_L0_CDM_REG_CHK_INT	BIT(18)
+
+#define APPL_INTR_EN_L1_0_0				0x1C
+#define APPL_INTR_EN_L1_0_0_LINK_REQ_RST_NOT_INT_EN	BIT(1)
+
+#define APPL_INTR_STATUS_L1_0_0				0x20
+#define APPL_INTR_STATUS_L1_0_0_LINK_REQ_RST_NOT_CHGED	BIT(1)
+
+#define APPL_INTR_STATUS_L1_1			0x2C
+#define APPL_INTR_STATUS_L1_2			0x30
+#define APPL_INTR_STATUS_L1_3			0x34
+#define APPL_INTR_STATUS_L1_6			0x3C
+#define APPL_INTR_STATUS_L1_7			0x40
+
+#define APPL_INTR_EN_L1_8_0			0x44
+#define APPL_INTR_EN_L1_8_BW_MGT_INT_EN		BIT(2)
+#define APPL_INTR_EN_L1_8_AUTO_BW_INT_EN	BIT(3)
+#define APPL_INTR_EN_L1_8_INTX_EN		BIT(11)
+#define APPL_INTR_EN_L1_8_AER_INT_EN		BIT(15)
+
+#define APPL_INTR_STATUS_L1_8_0			0x4C
+#define APPL_INTR_STATUS_L1_8_0_EDMA_INT_MASK	GENMASK(11, 6)
+#define APPL_INTR_STATUS_L1_8_0_BW_MGT_INT_STS	BIT(2)
+#define APPL_INTR_STATUS_L1_8_0_AUTO_BW_INT_STS	BIT(3)
+
+#define APPL_INTR_STATUS_L1_9			0x54
+#define APPL_INTR_STATUS_L1_10			0x58
+#define APPL_INTR_STATUS_L1_11			0x64
+#define APPL_INTR_STATUS_L1_13			0x74
+#define APPL_INTR_STATUS_L1_14			0x78
+#define APPL_INTR_STATUS_L1_15			0x7C
+#define APPL_INTR_STATUS_L1_17			0x88
+
+#define APPL_INTR_EN_L1_18				0x90
+#define APPL_INTR_EN_L1_18_CDM_REG_CHK_CMPLT		BIT(2)
+#define APPL_INTR_EN_L1_18_CDM_REG_CHK_CMP_ERR		BIT(1)
+#define APPL_INTR_EN_L1_18_CDM_REG_CHK_LOGIC_ERR	BIT(0)
+
+#define APPL_INTR_STATUS_L1_18				0x94
+#define APPL_INTR_STATUS_L1_18_CDM_REG_CHK_CMPLT	BIT(2)
+#define APPL_INTR_STATUS_L1_18_CDM_REG_CHK_CMP_ERR	BIT(1)
+#define APPL_INTR_STATUS_L1_18_CDM_REG_CHK_LOGIC_ERR	BIT(0)
+
+#define APPL_MSI_CTRL_2				0xB0
+
+#define APPL_LTR_MSG_1				0xC4
+#define LTR_MSG_REQ				BIT(15)
+#define LTR_MST_NO_SNOOP_SHIFT			16
+
+#define APPL_LTR_MSG_2				0xC8
+#define APPL_LTR_MSG_2_LTR_MSG_REQ_STATE	BIT(3)
+
+#define APPL_LINK_STATUS			0xCC
+#define APPL_LINK_STATUS_RDLH_LINK_UP		BIT(0)
+
+#define APPL_DEBUG				0xD0
+#define APPL_DEBUG_PM_LINKST_IN_L2_LAT		BIT(21)
+#define APPL_DEBUG_PM_LINKST_IN_L0		0x11
+#define APPL_DEBUG_LTSSM_STATE_MASK		GENMASK(8, 3)
+#define APPL_DEBUG_LTSSM_STATE_SHIFT		3
+#define LTSSM_STATE_PRE_DETECT			5
+
+#define APPL_RADM_STATUS			0xE4
+#define APPL_PM_XMT_TURNOFF_STATE		BIT(0)
+
+#define APPL_DM_TYPE				0x100
+#define APPL_DM_TYPE_MASK			GENMASK(3, 0)
+#define APPL_DM_TYPE_RP				0x4
+#define APPL_DM_TYPE_EP				0x0
+
+#define APPL_CFG_BASE_ADDR			0x104
+#define APPL_CFG_BASE_ADDR_MASK			GENMASK(31, 12)
+
+#define APPL_CFG_IATU_DMA_BASE_ADDR		0x108
+#define APPL_CFG_IATU_DMA_BASE_ADDR_MASK	GENMASK(31, 18)
+
+#define APPL_CFG_MISC				0x110
+#define APPL_CFG_MISC_SLV_EP_MODE		BIT(14)
+#define APPL_CFG_MISC_ARCACHE_MASK		GENMASK(13, 10)
+#define APPL_CFG_MISC_ARCACHE_SHIFT		10
+#define APPL_CFG_MISC_ARCACHE_VAL		3
+
+#define APPL_CFG_SLCG_OVERRIDE			0x114
+#define APPL_CFG_SLCG_OVERRIDE_SLCG_EN_MASTER	BIT(0)
+
+#define APPL_CAR_RESET_OVRD				0x12C
+#define APPL_CAR_RESET_OVRD_CYA_OVERRIDE_CORE_RST_N	BIT(0)
+
+#define IO_BASE_IO_DECODE				BIT(0)
+#define IO_BASE_IO_DECODE_BIT8				BIT(8)
+
+#define CFG_PREF_MEM_LIMIT_BASE_MEM_DECODE		BIT(0)
+#define CFG_PREF_MEM_LIMIT_BASE_MEM_LIMIT_DECODE	BIT(16)
+
+#define CFG_TIMER_CTRL_MAX_FUNC_NUM_OFF	0x718
+#define CFG_TIMER_CTRL_ACK_NAK_SHIFT	(19)
+
+#define EVENT_COUNTER_ALL_CLEAR		0x3
+#define EVENT_COUNTER_ENABLE_ALL	0x7
+#define EVENT_COUNTER_ENABLE_SHIFT	2
+#define EVENT_COUNTER_EVENT_SEL_MASK	GENMASK(7, 0)
+#define EVENT_COUNTER_EVENT_SEL_SHIFT	16
+#define EVENT_COUNTER_EVENT_Tx_L0S	0x2
+#define EVENT_COUNTER_EVENT_Rx_L0S	0x3
+#define EVENT_COUNTER_EVENT_L1		0x5
+#define EVENT_COUNTER_EVENT_L1_1	0x7
+#define EVENT_COUNTER_EVENT_L1_2	0x8
+#define EVENT_COUNTER_GROUP_SEL_SHIFT	24
+#define EVENT_COUNTER_GROUP_5		0x5
+
+#define PORT_LOGIC_ACK_F_ASPM_CTRL			0x70C
+#define ENTER_ASPM					BIT(30)
+#define L0S_ENTRANCE_LAT_SHIFT				24
+#define L0S_ENTRANCE_LAT_MASK				GENMASK(26, 24)
+#define L1_ENTRANCE_LAT_SHIFT				27
+#define L1_ENTRANCE_LAT_MASK				GENMASK(29, 27)
+#define N_FTS_SHIFT					8
+#define N_FTS_MASK					GENMASK(7, 0)
+#define N_FTS_VAL					52
+
+#define PORT_LOGIC_GEN2_CTRL				0x80C
+#define PORT_LOGIC_GEN2_CTRL_DIRECT_SPEED_CHANGE	BIT(17)
+#define FTS_MASK					GENMASK(7, 0)
+#define FTS_VAL						52
+
+#define PORT_LOGIC_MSI_CTRL_INT_0_EN		0x828
+
+#define GEN3_EQ_CONTROL_OFF			0x8a8
+#define GEN3_EQ_CONTROL_OFF_PSET_REQ_VEC_SHIFT	8
+#define GEN3_EQ_CONTROL_OFF_PSET_REQ_VEC_MASK	GENMASK(23, 8)
+#define GEN3_EQ_CONTROL_OFF_FB_MODE_MASK	GENMASK(3, 0)
+
+#define GEN3_RELATED_OFF			0x890
+#define GEN3_RELATED_OFF_GEN3_ZRXDC_NONCOMPL	BIT(0)
+#define GEN3_RELATED_OFF_GEN3_EQ_DISABLE	BIT(16)
+#define GEN3_RELATED_OFF_RATE_SHADOW_SEL_SHIFT	24
+#define GEN3_RELATED_OFF_RATE_SHADOW_SEL_MASK	GENMASK(25, 24)
+
+#define PORT_LOGIC_AMBA_ERROR_RESPONSE_DEFAULT	0x8D0
+#define AMBA_ERROR_RESPONSE_CRS_SHIFT		3
+#define AMBA_ERROR_RESPONSE_CRS_MASK		GENMASK(1, 0)
+#define AMBA_ERROR_RESPONSE_CRS_OKAY		0
+#define AMBA_ERROR_RESPONSE_CRS_OKAY_FFFFFFFF	1
+#define AMBA_ERROR_RESPONSE_CRS_OKAY_FFFF0001	2
+
+#define PORT_LOGIC_MSIX_DOORBELL			0x948
+
+#define CAP_SPCIE_CAP_OFF			0x154
+#define CAP_SPCIE_CAP_OFF_DSP_TX_PRESET0_MASK	GENMASK(3, 0)
+#define CAP_SPCIE_CAP_OFF_USP_TX_PRESET0_MASK	GENMASK(11, 8)
+#define CAP_SPCIE_CAP_OFF_USP_TX_PRESET0_SHIFT	8
+
+#define PME_ACK_TIMEOUT 10000
+
+#define LTSSM_TIMEOUT 50000	/* 50ms */
+
+#define GEN3_GEN4_EQ_PRESET_INIT	5
+
+#define GEN1_CORE_CLK_FREQ	62500000
+#define GEN2_CORE_CLK_FREQ	125000000
+#define GEN3_CORE_CLK_FREQ	250000000
+#define GEN4_CORE_CLK_FREQ	500000000
+
+static const unsigned int pcie_gen_freq[] = {
+	GEN1_CORE_CLK_FREQ,
+	GEN2_CORE_CLK_FREQ,
+	GEN3_CORE_CLK_FREQ,
+	GEN4_CORE_CLK_FREQ
+};
+
+static const u32 event_cntr_ctrl_offset[] = {
+	0x1d8,
+	0x1a8,
+	0x1a8,
+	0x1a8,
+	0x1c4,
+	0x1d8
+};
+
+static const u32 event_cntr_data_offset[] = {
+	0x1dc,
+	0x1ac,
+	0x1ac,
+	0x1ac,
+	0x1c8,
+	0x1dc
+};
+
+struct tegra_pcie_dw {
+	struct device *dev;
+	struct resource *appl_res;
+	struct resource *dbi_res;
+	struct resource *atu_dma_res;
+	void __iomem *appl_base;
+	struct clk *core_clk;
+	struct reset_control *core_apb_rst;
+	struct reset_control *core_rst;
+	struct dw_pcie pci;
+	enum dw_pcie_device_mode mode;
+	struct tegra_bpmp *bpmp;
+
+	bool supports_clkreq;
+	bool enable_cdm_check;
+	u8 init_link_width;
+	bool link_state;
+	u32 msi_ctrl_int;
+	u32 num_lanes;
+	u32 max_speed;
+	u32 cid;
+	bool update_fc_fixup;
+	u32 cfg_link_cap_l1sub;
+	u32 pcie_cap_base;
+	u32 aspm_cmrt;
+	u32 aspm_pwr_on_t;
+	u32 aspm_l0s_enter_lat;
+
+	struct regulator *pex_ctl_supply;
+
+	unsigned int phy_count;
+	struct phy **phys;
+
+	struct dentry *debugfs;
+};
+
+static inline struct tegra_pcie_dw *to_tegra_pcie(struct dw_pcie *pci)
+{
+	return container_of(pci, struct tegra_pcie_dw, pci);
+}
+
+static inline void appl_writel(struct tegra_pcie_dw *pcie, const u32 value,
+			       const u32 reg)
+{
+	writel_relaxed(value, pcie->appl_base + reg);
+}
+
+static inline u32 appl_readl(struct tegra_pcie_dw *pcie, const u32 reg)
+{
+	return readl_relaxed(pcie->appl_base + reg);
+}
+
+struct tegra_pcie_soc {
+	enum dw_pcie_device_mode mode;
+};
+
+static void apply_bad_link_workaround(struct pcie_port *pp)
+{
+	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
+	struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);
+	u32 current_link_width;
+	u16 val;
+
+	/*
+	 * NOTE:- Since this scenario is uncommon and link as such is not
+	 * stable anyway, not waiting to confirm if link is really
+	 * transitioning to Gen-2 speed
+	 */
+	val = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKSTA);
+	if (val & PCI_EXP_LNKSTA_LBMS) {
+		current_link_width = (val & PCI_EXP_LNKSTA_NLW) >>
+				     PCI_EXP_LNKSTA_NLW_SHIFT;
+		if (pcie->init_link_width > current_link_width) {
+			dev_warn(pci->dev, "PCIe link is bad, width reduced\n");
+			val = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base +
+						PCI_EXP_LNKCTL2);
+			val &= ~PCI_EXP_LNKCTL2_TLS;
+			val |= PCI_EXP_LNKCTL2_TLS_2_5GT;
+			dw_pcie_writew_dbi(pci, pcie->pcie_cap_base +
+					   PCI_EXP_LNKCTL2, val);
+
+			val = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base +
+						PCI_EXP_LNKCTL);
+			val |= PCI_EXP_LNKCTL_RL;
+			dw_pcie_writew_dbi(pci, pcie->pcie_cap_base +
+					   PCI_EXP_LNKCTL, val);
+		}
+	}
+}
+
+static irqreturn_t tegra_pcie_rp_irq_handler(struct tegra_pcie_dw *pcie)
+{
+	struct dw_pcie *pci = &pcie->pci;
+	struct pcie_port *pp = &pci->pp;
+	u32 val, tmp;
+	u16 val_w;
+
+	val = appl_readl(pcie, APPL_INTR_STATUS_L0);
+	if (val & APPL_INTR_STATUS_L0_LINK_STATE_INT) {
+		val = appl_readl(pcie, APPL_INTR_STATUS_L1_0_0);
+		if (val & APPL_INTR_STATUS_L1_0_0_LINK_REQ_RST_NOT_CHGED) {
+			appl_writel(pcie, val, APPL_INTR_STATUS_L1_0_0);
+
+			/* SBR & Surprise Link Down WAR */
+			val = appl_readl(pcie, APPL_CAR_RESET_OVRD);
+			val &= ~APPL_CAR_RESET_OVRD_CYA_OVERRIDE_CORE_RST_N;
+			appl_writel(pcie, val, APPL_CAR_RESET_OVRD);
+			udelay(1);
+			val = appl_readl(pcie, APPL_CAR_RESET_OVRD);
+			val |= APPL_CAR_RESET_OVRD_CYA_OVERRIDE_CORE_RST_N;
+			appl_writel(pcie, val, APPL_CAR_RESET_OVRD);
+
+			val = dw_pcie_readl_dbi(pci, PORT_LOGIC_GEN2_CTRL);
+			val |= PORT_LOGIC_GEN2_CTRL_DIRECT_SPEED_CHANGE;
+			dw_pcie_writel_dbi(pci, PORT_LOGIC_GEN2_CTRL, val);
+		}
+	}
+
+	if (val & APPL_INTR_STATUS_L0_INT_INT) {
+		val = appl_readl(pcie, APPL_INTR_STATUS_L1_8_0);
+		if (val & APPL_INTR_STATUS_L1_8_0_AUTO_BW_INT_STS) {
+			appl_writel(pcie,
+				    APPL_INTR_STATUS_L1_8_0_AUTO_BW_INT_STS,
+				    APPL_INTR_STATUS_L1_8_0);
+			apply_bad_link_workaround(pp);
+		}
+		if (val & APPL_INTR_STATUS_L1_8_0_BW_MGT_INT_STS) {
+			appl_writel(pcie,
+				    APPL_INTR_STATUS_L1_8_0_BW_MGT_INT_STS,
+				    APPL_INTR_STATUS_L1_8_0);
+
+			val_w = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base +
+						  PCI_EXP_LNKSTA);
+			dev_dbg(pci->dev, "Link Speed : Gen-%u\n", val_w &
+				PCI_EXP_LNKSTA_CLS);
+		}
+	}
+
+	val = appl_readl(pcie, APPL_INTR_STATUS_L0);
+	if (val & APPL_INTR_STATUS_L0_CDM_REG_CHK_INT) {
+		val = appl_readl(pcie, APPL_INTR_STATUS_L1_18);
+		tmp = dw_pcie_readl_dbi(pci, PCIE_PL_CHK_REG_CONTROL_STATUS);
+		if (val & APPL_INTR_STATUS_L1_18_CDM_REG_CHK_CMPLT) {
+			dev_info(pci->dev, "CDM check complete\n");
+			tmp |= PCIE_PL_CHK_REG_CHK_REG_COMPLETE;
+		}
+		if (val & APPL_INTR_STATUS_L1_18_CDM_REG_CHK_CMP_ERR) {
+			dev_err(pci->dev, "CDM comparison mismatch\n");
+			tmp |= PCIE_PL_CHK_REG_CHK_REG_COMPARISON_ERROR;
+		}
+		if (val & APPL_INTR_STATUS_L1_18_CDM_REG_CHK_LOGIC_ERR) {
+			dev_err(pci->dev, "CDM Logic error\n");
+			tmp |= PCIE_PL_CHK_REG_CHK_REG_LOGIC_ERROR;
+		}
+		dw_pcie_writel_dbi(pci, PCIE_PL_CHK_REG_CONTROL_STATUS, tmp);
+		tmp = dw_pcie_readl_dbi(pci, PCIE_PL_CHK_REG_ERR_ADDR);
+		dev_err(pci->dev, "CDM Error Address Offset = 0x%08X\n", tmp);
+	}
+
+	return IRQ_HANDLED;
+}
+
+static irqreturn_t tegra_pcie_irq_handler(int irq, void *arg)
+{
+	struct tegra_pcie_dw *pcie = arg;
+
+	if (pcie->mode == DW_PCIE_RC_TYPE)
+		return tegra_pcie_rp_irq_handler(pcie);
+
+	return IRQ_NONE;
+}
+
+static int tegra_pcie_dw_rd_own_conf(struct pcie_port *pp, int where, int size,
+				     u32 *val)
+{
+	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
+
+	/*
+	 * This is an endpoint mode specific register happen to appear even
+	 * when controller is operating in root port mode and system hangs
+	 * when it is accessed with link being in ASPM-L1 state.
+	 * So skip accessing it altogether
+	 */
+	if (where == PORT_LOGIC_MSIX_DOORBELL) {
+		*val = 0x00000000;
+		return PCIBIOS_SUCCESSFUL;
+	}
+
+	return dw_pcie_read(pci->dbi_base + where, size, val);
+}
+
+static int tegra_pcie_dw_wr_own_conf(struct pcie_port *pp, int where, int size,
+				     u32 val)
+{
+	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
+
+	/*
+	 * This is an endpoint mode specific register happen to appear even
+	 * when controller is operating in root port mode and system hangs
+	 * when it is accessed with link being in ASPM-L1 state.
+	 * So skip accessing it altogether
+	 */
+	if (where == PORT_LOGIC_MSIX_DOORBELL)
+		return PCIBIOS_SUCCESSFUL;
+
+	return dw_pcie_write(pci->dbi_base + where, size, val);
+}
+
+#if defined(CONFIG_PCIEASPM)
+static void disable_aspm_l11(struct tegra_pcie_dw *pcie)
+{
+	u32 val;
+
+	val = dw_pcie_readl_dbi(&pcie->pci, pcie->cfg_link_cap_l1sub);
+	val &= ~PCI_L1SS_CAP_ASPM_L1_1;
+	dw_pcie_writel_dbi(&pcie->pci, pcie->cfg_link_cap_l1sub, val);
+}
+
+static void disable_aspm_l12(struct tegra_pcie_dw *pcie)
+{
+	u32 val;
+
+	val = dw_pcie_readl_dbi(&pcie->pci, pcie->cfg_link_cap_l1sub);
+	val &= ~PCI_L1SS_CAP_ASPM_L1_2;
+	dw_pcie_writel_dbi(&pcie->pci, pcie->cfg_link_cap_l1sub, val);
+}
+
+static inline u32 event_counter_prog(struct tegra_pcie_dw *pcie, u32 event)
+{
+	u32 val;
+
+	val = dw_pcie_readl_dbi(&pcie->pci, event_cntr_ctrl_offset[pcie->cid]);
+	val &= ~(EVENT_COUNTER_EVENT_SEL_MASK << EVENT_COUNTER_EVENT_SEL_SHIFT);
+	val |= EVENT_COUNTER_GROUP_5 << EVENT_COUNTER_GROUP_SEL_SHIFT;
+	val |= event << EVENT_COUNTER_EVENT_SEL_SHIFT;
+	val |= EVENT_COUNTER_ENABLE_ALL << EVENT_COUNTER_ENABLE_SHIFT;
+	dw_pcie_writel_dbi(&pcie->pci, event_cntr_ctrl_offset[pcie->cid], val);
+	val = dw_pcie_readl_dbi(&pcie->pci, event_cntr_data_offset[pcie->cid]);
+	return val;
+}
+
+static int aspm_state_cnt(struct seq_file *s, void *data)
+{
+	struct tegra_pcie_dw *pcie = (struct tegra_pcie_dw *)
+				     dev_get_drvdata(s->private);
+	u32 val;
+
+	seq_printf(s, "Tx L0s entry count : %u\n",
+		   event_counter_prog(pcie, EVENT_COUNTER_EVENT_Tx_L0S));
+
+	seq_printf(s, "Rx L0s entry count : %u\n",
+		   event_counter_prog(pcie, EVENT_COUNTER_EVENT_Rx_L0S));
+
+	seq_printf(s, "Link L1 entry count : %u\n",
+		   event_counter_prog(pcie, EVENT_COUNTER_EVENT_L1));
+
+	seq_printf(s, "Link L1.1 entry count : %u\n",
+		   event_counter_prog(pcie, EVENT_COUNTER_EVENT_L1_1));
+
+	seq_printf(s, "Link L1.2 entry count : %u\n",
+		   event_counter_prog(pcie, EVENT_COUNTER_EVENT_L1_2));
+
+	/* Clear all counters */
+	dw_pcie_writel_dbi(&pcie->pci, event_cntr_ctrl_offset[pcie->cid],
+			   EVENT_COUNTER_ALL_CLEAR);
+
+	/* Re-enable counting */
+	val = EVENT_COUNTER_ENABLE_ALL << EVENT_COUNTER_ENABLE_SHIFT;
+	val |= EVENT_COUNTER_GROUP_5 << EVENT_COUNTER_GROUP_SEL_SHIFT;
+	dw_pcie_writel_dbi(&pcie->pci, event_cntr_ctrl_offset[pcie->cid], val);
+
+	return 0;
+}
+#endif
+
+static int init_debugfs(struct tegra_pcie_dw *pcie)
+{
+#if defined(CONFIG_PCIEASPM)
+	struct dentry *d;
+
+	d = debugfs_create_devm_seqfile(pcie->dev, "aspm_state_cnt",
+					pcie->debugfs, aspm_state_cnt);
+	if (IS_ERR_OR_NULL(d))
+		dev_err(pcie->dev,
+			"Failed to create debugfs file \"aspm_state_cnt\"\n");
+#endif
+	return 0;
+}
+
+static void tegra_pcie_enable_system_interrupts(struct pcie_port *pp)
+{
+	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
+	struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);
+	u32 val;
+	u16 val_w;
+
+	val = appl_readl(pcie, APPL_INTR_EN_L0_0);
+	val |= APPL_INTR_EN_L0_0_LINK_STATE_INT_EN;
+	appl_writel(pcie, val, APPL_INTR_EN_L0_0);
+
+	val = appl_readl(pcie, APPL_INTR_EN_L1_0_0);
+	val |= APPL_INTR_EN_L1_0_0_LINK_REQ_RST_NOT_INT_EN;
+	appl_writel(pcie, val, APPL_INTR_EN_L1_0_0);
+
+	if (pcie->enable_cdm_check) {
+		val = appl_readl(pcie, APPL_INTR_EN_L0_0);
+		val |= APPL_INTR_EN_L0_0_CDM_REG_CHK_INT_EN;
+		appl_writel(pcie, val, APPL_INTR_EN_L0_0);
+
+		val = appl_readl(pcie, APPL_INTR_EN_L1_18);
+		val |= APPL_INTR_EN_L1_18_CDM_REG_CHK_CMP_ERR;
+		val |= APPL_INTR_EN_L1_18_CDM_REG_CHK_LOGIC_ERR;
+		appl_writel(pcie, val, APPL_INTR_EN_L1_18);
+	}
+
+	val_w = dw_pcie_readw_dbi(&pcie->pci, pcie->pcie_cap_base +
+				  PCI_EXP_LNKSTA);
+	pcie->init_link_width = (val_w & PCI_EXP_LNKSTA_NLW) >>
+				PCI_EXP_LNKSTA_NLW_SHIFT;
+
+	val_w = dw_pcie_readw_dbi(&pcie->pci, pcie->pcie_cap_base +
+				  PCI_EXP_LNKCTL);
+	val_w |= PCI_EXP_LNKCTL_LBMIE;
+	dw_pcie_writew_dbi(&pcie->pci, pcie->pcie_cap_base + PCI_EXP_LNKCTL,
+			   val_w);
+}
+
+static void tegra_pcie_enable_legacy_interrupts(struct pcie_port *pp)
+{
+	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
+	struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);
+	u32 val;
+
+	/* Enable legacy interrupt generation */
+	val = appl_readl(pcie, APPL_INTR_EN_L0_0);
+	val |= APPL_INTR_EN_L0_0_SYS_INTR_EN;
+	val |= APPL_INTR_EN_L0_0_INT_INT_EN;
+	appl_writel(pcie, val, APPL_INTR_EN_L0_0);
+
+	val = appl_readl(pcie, APPL_INTR_EN_L1_8_0);
+	val |= APPL_INTR_EN_L1_8_INTX_EN;
+	val |= APPL_INTR_EN_L1_8_AUTO_BW_INT_EN;
+	val |= APPL_INTR_EN_L1_8_BW_MGT_INT_EN;
+	if (IS_ENABLED(CONFIG_PCIEAER))
+		val |= APPL_INTR_EN_L1_8_AER_INT_EN;
+	appl_writel(pcie, val, APPL_INTR_EN_L1_8_0);
+}
+
+static void tegra_pcie_enable_msi_interrupts(struct pcie_port *pp)
+{
+	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
+	struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);
+	u32 val;
+
+	dw_pcie_msi_init(pp);
+
+	/* Enable MSI interrupt generation */
+	val = appl_readl(pcie, APPL_INTR_EN_L0_0);
+	val |= APPL_INTR_EN_L0_0_SYS_MSI_INTR_EN;
+	val |= APPL_INTR_EN_L0_0_MSI_RCV_INT_EN;
+	appl_writel(pcie, val, APPL_INTR_EN_L0_0);
+}
+
+static void tegra_pcie_enable_interrupts(struct pcie_port *pp)
+{
+	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
+	struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);
+
+	/* Clear interrupt statuses before enabling interrupts */
+	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L0);
+	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_0_0);
+	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_1);
+	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_2);
+	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_3);
+	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_6);
+	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_7);
+	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_8_0);
+	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_9);
+	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_10);
+	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_11);
+	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_13);
+	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_14);
+	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_15);
+	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_17);
+
+	tegra_pcie_enable_system_interrupts(pp);
+	tegra_pcie_enable_legacy_interrupts(pp);
+	if (IS_ENABLED(CONFIG_PCI_MSI))
+		tegra_pcie_enable_msi_interrupts(pp);
+}
+
+static void config_gen3_gen4_eq_presets(struct tegra_pcie_dw *pcie)
+{
+	struct dw_pcie *pci = &pcie->pci;
+	u32 val, offset, i;
+
+	/* Program init preset */
+	for (i = 0; i < pcie->num_lanes; i++) {
+		dw_pcie_read(pci->dbi_base + CAP_SPCIE_CAP_OFF
+				 + (i * 2), 2, &val);
+		val &= ~CAP_SPCIE_CAP_OFF_DSP_TX_PRESET0_MASK;
+		val |= GEN3_GEN4_EQ_PRESET_INIT;
+		val &= ~CAP_SPCIE_CAP_OFF_USP_TX_PRESET0_MASK;
+		val |= (GEN3_GEN4_EQ_PRESET_INIT <<
+			   CAP_SPCIE_CAP_OFF_USP_TX_PRESET0_SHIFT);
+		dw_pcie_write(pci->dbi_base + CAP_SPCIE_CAP_OFF
+				 + (i * 2), 2, val);
+
+		offset = dw_pcie_find_ext_capability(pci,
+						     PCI_EXT_CAP_ID_PL_16GT) +
+				PCI_PL_16GT_LE_CTRL;
+		dw_pcie_read(pci->dbi_base + offset + i, 1, &val);
+		val &= ~PCI_PL_16GT_LE_CTRL_DSP_TX_PRESET_MASK;
+		val |= GEN3_GEN4_EQ_PRESET_INIT;
+		val &= ~PCI_PL_16GT_LE_CTRL_USP_TX_PRESET_MASK;
+		val |= (GEN3_GEN4_EQ_PRESET_INIT <<
+			PCI_PL_16GT_LE_CTRL_USP_TX_PRESET_SHIFT);
+		dw_pcie_write(pci->dbi_base + offset + i, 1, val);
+	}
+
+	val = dw_pcie_readl_dbi(pci, GEN3_RELATED_OFF);
+	val &= ~GEN3_RELATED_OFF_RATE_SHADOW_SEL_MASK;
+	dw_pcie_writel_dbi(pci, GEN3_RELATED_OFF, val);
+
+	val = dw_pcie_readl_dbi(pci, GEN3_EQ_CONTROL_OFF);
+	val &= ~GEN3_EQ_CONTROL_OFF_PSET_REQ_VEC_MASK;
+	val |= (0x3ff << GEN3_EQ_CONTROL_OFF_PSET_REQ_VEC_SHIFT);
+	val &= ~GEN3_EQ_CONTROL_OFF_FB_MODE_MASK;
+	dw_pcie_writel_dbi(pci, GEN3_EQ_CONTROL_OFF, val);
+
+	val = dw_pcie_readl_dbi(pci, GEN3_RELATED_OFF);
+	val &= ~GEN3_RELATED_OFF_RATE_SHADOW_SEL_MASK;
+	val |= (0x1 << GEN3_RELATED_OFF_RATE_SHADOW_SEL_SHIFT);
+	dw_pcie_writel_dbi(pci, GEN3_RELATED_OFF, val);
+
+	val = dw_pcie_readl_dbi(pci, GEN3_EQ_CONTROL_OFF);
+	val &= ~GEN3_EQ_CONTROL_OFF_PSET_REQ_VEC_MASK;
+	val |= (0x360 << GEN3_EQ_CONTROL_OFF_PSET_REQ_VEC_SHIFT);
+	val &= ~GEN3_EQ_CONTROL_OFF_FB_MODE_MASK;
+	dw_pcie_writel_dbi(pci, GEN3_EQ_CONTROL_OFF, val);
+
+	val = dw_pcie_readl_dbi(pci, GEN3_RELATED_OFF);
+	val &= ~GEN3_RELATED_OFF_RATE_SHADOW_SEL_MASK;
+	dw_pcie_writel_dbi(pci, GEN3_RELATED_OFF, val);
+}
+
+static int tegra_pcie_dw_host_init(struct pcie_port *pp)
+{
+	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
+	struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);
+	u32 val, tmp, offset, speed;
+	unsigned int count;
+	u16 val_w;
+
+core_init:
+	count = 200;
+#if defined(CONFIG_PCIEASPM)
+	offset = dw_pcie_find_ext_capability(pci, PCI_EXT_CAP_ID_L1SS);
+	pcie->cfg_link_cap_l1sub = offset + PCI_L1SS_CAP;
+#endif
+	val = dw_pcie_readl_dbi(pci, PCI_IO_BASE);
+	val &= ~(IO_BASE_IO_DECODE | IO_BASE_IO_DECODE_BIT8);
+	dw_pcie_writel_dbi(pci, PCI_IO_BASE, val);
+
+	val = dw_pcie_readl_dbi(pci, PCI_PREF_MEMORY_BASE);
+	val |= CFG_PREF_MEM_LIMIT_BASE_MEM_DECODE;
+	val |= CFG_PREF_MEM_LIMIT_BASE_MEM_LIMIT_DECODE;
+	dw_pcie_writel_dbi(pci, PCI_PREF_MEMORY_BASE, val);
+
+	dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, 0);
+
+	/* Configure FTS */
+	val = dw_pcie_readl_dbi(pci, PORT_LOGIC_ACK_F_ASPM_CTRL);
+	val &= ~(N_FTS_MASK << N_FTS_SHIFT);
+	val |= N_FTS_VAL << N_FTS_SHIFT;
+	dw_pcie_writel_dbi(pci, PORT_LOGIC_ACK_F_ASPM_CTRL, val);
+
+	val = dw_pcie_readl_dbi(pci, PORT_LOGIC_GEN2_CTRL);
+	val &= ~FTS_MASK;
+	val |= FTS_VAL;
+	dw_pcie_writel_dbi(pci, PORT_LOGIC_GEN2_CTRL, val);
+
+	/* Enable as 0xFFFF0001 response for CRS */
+	val = dw_pcie_readl_dbi(pci, PORT_LOGIC_AMBA_ERROR_RESPONSE_DEFAULT);
+	val &= ~(AMBA_ERROR_RESPONSE_CRS_MASK << AMBA_ERROR_RESPONSE_CRS_SHIFT);
+	val |= (AMBA_ERROR_RESPONSE_CRS_OKAY_FFFF0001 <<
+		AMBA_ERROR_RESPONSE_CRS_SHIFT);
+	dw_pcie_writel_dbi(pci, PORT_LOGIC_AMBA_ERROR_RESPONSE_DEFAULT, val);
+
+	/* Configure Max Speed from DT */
+	if (pcie->max_speed && pcie->max_speed != -EINVAL) {
+		val = dw_pcie_readl_dbi(pci, pcie->pcie_cap_base +
+					PCI_EXP_LNKCAP);
+		val &= ~PCI_EXP_LNKCAP_SLS;
+		val |= pcie->max_speed;
+		dw_pcie_writel_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKCAP,
+				   val);
+	}
+
+	/* Configure Max lane width from DT */
+	val = dw_pcie_readl_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKCAP);
+	val &= ~PCI_EXP_LNKCAP_MLW;
+	val |= (pcie->num_lanes << PCI_EXP_LNKSTA_NLW_SHIFT);
+	dw_pcie_writel_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKCAP, val);
+
+	config_gen3_gen4_eq_presets(pcie);
+
+#if defined(CONFIG_PCIEASPM)
+	/* Enable ASPM counters */
+	val = EVENT_COUNTER_ENABLE_ALL << EVENT_COUNTER_ENABLE_SHIFT;
+	val |= EVENT_COUNTER_GROUP_5 << EVENT_COUNTER_GROUP_SEL_SHIFT;
+	dw_pcie_writel_dbi(pci, event_cntr_ctrl_offset[pcie->cid], val);
+
+	/* Program T_cmrt and T_pwr_on values */
+	val = dw_pcie_readl_dbi(pci, pcie->cfg_link_cap_l1sub);
+	val &= ~(PCI_L1SS_CAP_CM_RESTORE_TIME | PCI_L1SS_CAP_P_PWR_ON_VALUE);
+	val |= (pcie->aspm_cmrt << 8);
+	val |= (pcie->aspm_pwr_on_t << 19);
+	dw_pcie_writel_dbi(pci, pcie->cfg_link_cap_l1sub, val);
+
+	/* Program L0s and L1 entrance latencies */
+	val = dw_pcie_readl_dbi(pci, PORT_LOGIC_ACK_F_ASPM_CTRL);
+	val &= ~L0S_ENTRANCE_LAT_MASK;
+	val |= (pcie->aspm_l0s_enter_lat << L0S_ENTRANCE_LAT_SHIFT);
+	val |= ENTER_ASPM;
+	dw_pcie_writel_dbi(pci, PORT_LOGIC_ACK_F_ASPM_CTRL, val);
+#endif
+	val = dw_pcie_readl_dbi(pci, GEN3_RELATED_OFF);
+	val &= ~GEN3_RELATED_OFF_GEN3_ZRXDC_NONCOMPL;
+	dw_pcie_writel_dbi(pci, GEN3_RELATED_OFF, val);
+
+	if (pcie->update_fc_fixup) {
+		val = dw_pcie_readl_dbi(pci, CFG_TIMER_CTRL_MAX_FUNC_NUM_OFF);
+		val |= 0x1 << CFG_TIMER_CTRL_ACK_NAK_SHIFT;
+		dw_pcie_writel_dbi(pci, CFG_TIMER_CTRL_MAX_FUNC_NUM_OFF, val);
+	}
+
+	dw_pcie_setup_rc(pp);
+
+	clk_set_rate(pcie->core_clk, GEN4_CORE_CLK_FREQ);
+
+	/* Assert RST */
+	val = appl_readl(pcie, APPL_PINMUX);
+	val &= ~APPL_PINMUX_PEX_RST;
+	appl_writel(pcie, val, APPL_PINMUX);
+
+	usleep_range(100, 200);
+
+	/* Enable LTSSM */
+	val = appl_readl(pcie, APPL_CTRL);
+	val |= APPL_CTRL_LTSSM_EN;
+	appl_writel(pcie, val, APPL_CTRL);
+
+	/* De-assert RST */
+	val = appl_readl(pcie, APPL_PINMUX);
+	val |= APPL_PINMUX_PEX_RST;
+	appl_writel(pcie, val, APPL_PINMUX);
+
+	msleep(100);
+
+	val_w = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKSTA);
+	while (!(val_w & PCI_EXP_LNKSTA_DLLLA)) {
+		if (count) {
+			dev_dbg(pci->dev, "Waiting for link up\n");
+			usleep_range(1000, 2000);
+			val_w = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base +
+						  PCI_EXP_LNKSTA);
+			count--;
+			continue;
+		}
+
+		val = appl_readl(pcie, APPL_DEBUG);
+		val &= APPL_DEBUG_LTSSM_STATE_MASK;
+		val >>= APPL_DEBUG_LTSSM_STATE_SHIFT;
+		tmp = appl_readl(pcie, APPL_LINK_STATUS);
+		tmp &= APPL_LINK_STATUS_RDLH_LINK_UP;
+		if (!(val == 0x11 && !tmp)) {
+			dev_info(pci->dev, "Link is down\n");
+			return 0;
+		}
+
+		dev_info(pci->dev, "Link is down in DLL");
+		dev_info(pci->dev, "Trying again with DLFE disabled\n");
+		/* Disable LTSSM */
+		val = appl_readl(pcie, APPL_CTRL);
+		val &= ~APPL_CTRL_LTSSM_EN;
+		appl_writel(pcie, val, APPL_CTRL);
+
+		reset_control_assert(pcie->core_rst);
+		reset_control_deassert(pcie->core_rst);
+
+		offset = dw_pcie_find_ext_capability(pci, PCI_EXT_CAP_ID_DLF);
+		val = dw_pcie_readl_dbi(pci, offset + PCI_DLF_CAP);
+		val &= ~PCI_DLF_EXCHANGE_ENABLE;
+		dw_pcie_writel_dbi(pci, offset, val);
+
+		/* Retry now with DLF Exchange disabled */
+		goto core_init;
+	}
+	dev_dbg(pci->dev, "Link is up\n");
+
+	speed = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKSTA) &
+		PCI_EXP_LNKSTA_CLS;
+	clk_set_rate(pcie->core_clk, pcie_gen_freq[speed - 1]);
+
+	tegra_pcie_enable_interrupts(pp);
+
+	return 0;
+}
+
+static int tegra_pcie_dw_link_up(struct dw_pcie *pci)
+{
+	struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);
+	u32 val = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKSTA);
+
+	return !!(val & PCI_EXP_LNKSTA_DLLLA);
+}
+
+static void tegra_pcie_set_msi_vec_num(struct pcie_port *pp)
+{
+	pp->num_vectors = MAX_MSI_IRQS;
+}
+
+static const struct dw_pcie_ops tegra_dw_pcie_ops = {
+	.link_up = tegra_pcie_dw_link_up,
+};
+
+static struct dw_pcie_host_ops tegra_pcie_dw_host_ops = {
+	.rd_own_conf = tegra_pcie_dw_rd_own_conf,
+	.wr_own_conf = tegra_pcie_dw_wr_own_conf,
+	.host_init = tegra_pcie_dw_host_init,
+	.set_num_vectors = tegra_pcie_set_msi_vec_num,
+};
+
+static void tegra_pcie_disable_phy(struct tegra_pcie_dw *pcie)
+{
+	unsigned int phy_count = pcie->phy_count;
+
+	while (phy_count--) {
+		phy_power_off(pcie->phys[phy_count]);
+		phy_exit(pcie->phys[phy_count]);
+	}
+}
+
+static int tegra_pcie_enable_phy(struct tegra_pcie_dw *pcie)
+{
+	unsigned int i;
+	int ret;
+
+	for (i = 0; i < pcie->phy_count; i++) {
+		ret = phy_init(pcie->phys[i]);
+		if (ret < 0)
+			goto phy_power_off;
+
+		ret = phy_power_on(pcie->phys[i]);
+		if (ret < 0)
+			goto phy_exit;
+	}
+
+	return 0;
+
+phy_power_off:
+	while (i--) {
+		phy_power_off(pcie->phys[i]);
+phy_exit:
+		phy_exit(pcie->phys[i]);
+	}
+
+	return ret;
+}
+
+static int tegra_pcie_dw_parse_dt(struct tegra_pcie_dw *pcie)
+{
+	struct device_node *np = pcie->dev->of_node;
+	int ret;
+
+	ret = of_property_read_u32(np, "nvidia,aspm-cmrt-us", &pcie->aspm_cmrt);
+	if (ret < 0) {
+		dev_info(pcie->dev, "Failed to read ASPM T_cmrt: %d\n", ret);
+		return ret;
+	}
+
+	ret = of_property_read_u32(np, "nvidia,aspm-pwr-on-t-us",
+				   &pcie->aspm_pwr_on_t);
+	if (ret < 0)
+		dev_info(pcie->dev, "Failed to read ASPM Power On time: %d\n",
+			 ret);
+
+	ret = of_property_read_u32(np, "nvidia,aspm-l0s-entrance-latency-us",
+				   &pcie->aspm_l0s_enter_lat);
+	if (ret < 0)
+		dev_info(pcie->dev,
+			 "Failed to read ASPM L0s Entrance latency: %d\n", ret);
+
+	ret = of_property_read_u32(np, "num-lanes", &pcie->num_lanes);
+	if (ret < 0) {
+		dev_err(pcie->dev, "Failed to read num-lanes: %d\n", ret);
+		return ret;
+	}
+
+	pcie->max_speed = of_pci_get_max_link_speed(np);
+
+	ret = of_property_read_u32_index(np, "nvidia,bpmp", 1, &pcie->cid);
+	if (ret) {
+		dev_err(pcie->dev, "Failed to read Controller-ID: %d\n", ret);
+		return ret;
+	}
+
+	pcie->phy_count = of_property_count_strings(np, "phy-names");
+	if (pcie->phy_count < 0) {
+		dev_err(pcie->dev, "Failed to find PHY entries: %d\n",
+			pcie->phy_count);
+		return pcie->phy_count;
+	}
+
+	if (of_property_read_bool(np, "nvidia,update-fc-fixup"))
+		pcie->update_fc_fixup = true;
+
+	pcie->supports_clkreq =
+		of_property_read_bool(pcie->dev->of_node, "supports-clkreq");
+
+	pcie->enable_cdm_check =
+		of_property_read_bool(np, "snps,enable-cdm-check");
+
+	return 0;
+}
+
+static int tegra_pcie_bpmp_set_ctrl_state(struct tegra_pcie_dw *pcie,
+					  bool enable)
+{
+	struct mrq_uphy_response resp;
+	struct tegra_bpmp_message msg;
+	struct mrq_uphy_request req;
+	int err;
+
+	if (pcie->cid == 5)
+		return 0;
+
+	memset(&req, 0, sizeof(req));
+	memset(&resp, 0, sizeof(resp));
+
+	req.cmd = CMD_UPHY_PCIE_CONTROLLER_STATE;
+	req.controller_state.pcie_controller = pcie->cid;
+	req.controller_state.enable = enable;
+
+	memset(&msg, 0, sizeof(msg));
+	msg.mrq = MRQ_UPHY;
+	msg.tx.data = &req;
+	msg.tx.size = sizeof(req);
+	msg.rx.data = &resp;
+	msg.rx.size = sizeof(resp);
+
+	if (irqs_disabled())
+		err = tegra_bpmp_transfer_atomic(pcie->bpmp, &msg);
+	else
+		err = tegra_bpmp_transfer(pcie->bpmp, &msg);
+
+	return err;
+}
+
+static void tegra_pcie_downstream_dev_to_D0(struct tegra_pcie_dw *pcie)
+{
+	struct pcie_port *pp = &pcie->pci.pp;
+	struct pci_dev *pdev;
+	struct pci_bus *child;
+
+	list_for_each_entry(child, &pp->root_bus->children, node) {
+		/* Bring downstream devices to D0 if they are not already in */
+		if (child->parent == pp->root_bus)
+			break;
+	}
+	list_for_each_entry(pdev, &child->devices, bus_list) {
+		if (PCI_SLOT(pdev->devfn) == 0) {
+			if (pci_set_power_state(pdev, PCI_D0))
+				dev_err(pcie->dev,
+					"Failed to transition %s to D0 state\n",
+					dev_name(&pdev->dev));
+		}
+	}
+}
+
+static int tegra_pcie_config_controller(struct tegra_pcie_dw *pcie,
+					bool en_hw_hot_rst)
+{
+	int ret;
+	u32 val;
+
+	ret = tegra_pcie_bpmp_set_ctrl_state(pcie, true);
+	if (ret) {
+		dev_err(pcie->dev,
+			"Failed to enable controller %u: %d\n", pcie->cid, ret);
+		return ret;
+	}
+
+	ret = regulator_enable(pcie->pex_ctl_supply);
+	if (ret < 0) {
+		dev_err(pcie->dev, "Failed to enable regulator: %d\n", ret);
+		goto fail_reg_en;
+	}
+
+	ret = clk_prepare_enable(pcie->core_clk);
+	if (ret) {
+		dev_err(pcie->dev, "Failed to enable core clock: %d\n", ret);
+		goto fail_core_clk;
+	}
+
+	ret = reset_control_deassert(pcie->core_apb_rst);
+	if (ret) {
+		dev_err(pcie->dev, "Failed to deassert core APB reset: %d\n",
+			ret);
+		goto fail_core_apb_rst;
+	}
+
+	if (en_hw_hot_rst) {
+		/* Enable HW_HOT_RST mode */
+		val = appl_readl(pcie, APPL_CTRL);
+		val &= ~(APPL_CTRL_HW_HOT_RST_MODE_MASK <<
+			 APPL_CTRL_HW_HOT_RST_MODE_SHIFT);
+		val |= APPL_CTRL_HW_HOT_RST_EN;
+		appl_writel(pcie, val, APPL_CTRL);
+	}
+
+	ret = tegra_pcie_enable_phy(pcie);
+	if (ret) {
+		dev_err(pcie->dev, "Failed to enable PHY: %d\n", ret);
+		goto fail_phy;
+	}
+
+	/* Update CFG base address */
+	appl_writel(pcie, pcie->dbi_res->start & APPL_CFG_BASE_ADDR_MASK,
+		    APPL_CFG_BASE_ADDR);
+
+	/* Configure this core for RP mode operation */
+	appl_writel(pcie, APPL_DM_TYPE_RP, APPL_DM_TYPE);
+
+	appl_writel(pcie, 0x0, APPL_CFG_SLCG_OVERRIDE);
+
+	val = appl_readl(pcie, APPL_CTRL);
+	appl_writel(pcie, val | APPL_CTRL_SYS_PRE_DET_STATE, APPL_CTRL);
+
+	val = appl_readl(pcie, APPL_CFG_MISC);
+	val |= (APPL_CFG_MISC_ARCACHE_VAL << APPL_CFG_MISC_ARCACHE_SHIFT);
+	appl_writel(pcie, val, APPL_CFG_MISC);
+
+	if (!pcie->supports_clkreq) {
+		val = appl_readl(pcie, APPL_PINMUX);
+		val |= APPL_PINMUX_CLKREQ_OUT_OVRD_EN;
+		val |= APPL_PINMUX_CLKREQ_OUT_OVRD;
+		appl_writel(pcie, val, APPL_PINMUX);
+	}
+
+	/* Update iATU_DMA base address */
+	appl_writel(pcie,
+		    pcie->atu_dma_res->start & APPL_CFG_IATU_DMA_BASE_ADDR_MASK,
+		    APPL_CFG_IATU_DMA_BASE_ADDR);
+
+	reset_control_deassert(pcie->core_rst);
+
+	pcie->pcie_cap_base = dw_pcie_find_capability(&pcie->pci,
+						      PCI_CAP_ID_EXP);
+
+#if defined(CONFIG_PCIEASPM)
+	/* Disable ASPM-L1SS advertisement as there is no CLKREQ routing */
+	if (!pcie->supports_clkreq) {
+		disable_aspm_l11(pcie);
+		disable_aspm_l12(pcie);
+	}
+#endif
+	return ret;
+
+fail_phy:
+	reset_control_assert(pcie->core_apb_rst);
+fail_core_apb_rst:
+	clk_disable_unprepare(pcie->core_clk);
+fail_core_clk:
+	regulator_disable(pcie->pex_ctl_supply);
+fail_reg_en:
+	tegra_pcie_bpmp_set_ctrl_state(pcie, false);
+
+	return ret;
+}
+
+static int __deinit_controller(struct tegra_pcie_dw *pcie)
+{
+	int ret;
+
+	ret = reset_control_assert(pcie->core_rst);
+	if (ret) {
+		dev_err(pcie->dev, "Failed to assert \"core\" reset: %d\n",
+			ret);
+		return ret;
+	}
+	tegra_pcie_disable_phy(pcie);
+	ret = reset_control_assert(pcie->core_apb_rst);
+	if (ret) {
+		dev_err(pcie->dev, "Failed to assert APB reset: %d\n", ret);
+		return ret;
+	}
+	clk_disable_unprepare(pcie->core_clk);
+	ret = regulator_disable(pcie->pex_ctl_supply);
+	if (ret) {
+		dev_err(pcie->dev, "Failed to disable regulator: %d\n", ret);
+		return ret;
+	}
+	ret = tegra_pcie_bpmp_set_ctrl_state(pcie, false);
+	if (ret) {
+		dev_err(pcie->dev, "Failed to disable controller %d: %d\n",
+			pcie->cid, ret);
+		return ret;
+	}
+	return ret;
+}
+
+static int tegra_pcie_init_controller(struct tegra_pcie_dw *pcie)
+{
+	struct dw_pcie *pci = &pcie->pci;
+	struct pcie_port *pp = &pci->pp;
+	int ret;
+
+	ret = tegra_pcie_config_controller(pcie, false);
+	if (ret < 0)
+		return ret;
+
+	pp->root_bus_nr = -1;
+	pp->ops = &tegra_pcie_dw_host_ops;
+
+	ret = dw_pcie_host_init(pp);
+	if (ret < 0) {
+		dev_err(pcie->dev, "Failed to add PCIe port: %d\n", ret);
+		goto fail_host_init;
+	}
+
+	return 0;
+
+fail_host_init:
+	return __deinit_controller(pcie);
+}
+
+static int tegra_pcie_try_link_l2(struct tegra_pcie_dw *pcie)
+{
+	u32 val;
+
+	if (!tegra_pcie_dw_link_up(&pcie->pci))
+		return 0;
+
+	val = appl_readl(pcie, APPL_RADM_STATUS);
+	val |= APPL_PM_XMT_TURNOFF_STATE;
+	appl_writel(pcie, val, APPL_RADM_STATUS);
+
+	return readl_poll_timeout_atomic(pcie->appl_base + APPL_DEBUG, val,
+				 val & APPL_DEBUG_PM_LINKST_IN_L2_LAT,
+				 1, PME_ACK_TIMEOUT);
+}
+
+static void tegra_pcie_dw_pme_turnoff(struct tegra_pcie_dw *pcie)
+{
+	u32 data;
+	int err;
+
+	if (!tegra_pcie_dw_link_up(&pcie->pci)) {
+		dev_dbg(pcie->dev, "PCIe link is not up...!\n");
+		return;
+	}
+
+	if (tegra_pcie_try_link_l2(pcie)) {
+		dev_info(pcie->dev, "Link didn't transition to L2 state\n");
+		/*
+		 * TX lane clock freq will reset to Gen1 only if link is in L2
+		 * or detect state.
+		 * So apply pex_rst to end point to force RP to go into detect
+		 * state
+		 */
+		data = appl_readl(pcie, APPL_PINMUX);
+		data &= ~APPL_PINMUX_PEX_RST;
+		appl_writel(pcie, data, APPL_PINMUX);
+
+		err = readl_poll_timeout_atomic(pcie->appl_base + APPL_DEBUG,
+						data,
+						((data &
+						APPL_DEBUG_LTSSM_STATE_MASK) >>
+						APPL_DEBUG_LTSSM_STATE_SHIFT) ==
+						LTSSM_STATE_PRE_DETECT,
+						1, LTSSM_TIMEOUT);
+		if (err) {
+			dev_info(pcie->dev, "Link didn't go to detect state\n");
+		} else {
+			/* Disable LTSSM after link is in detect state */
+			data = appl_readl(pcie, APPL_CTRL);
+			data &= ~APPL_CTRL_LTSSM_EN;
+			appl_writel(pcie, data, APPL_CTRL);
+		}
+	}
+	/*
+	 * DBI registers may not be accessible after this as PLL-E would be
+	 * down depending on how CLKREQ is pulled by end point
+	 */
+	data = appl_readl(pcie, APPL_PINMUX);
+	data |= (APPL_PINMUX_CLKREQ_OVERRIDE_EN | APPL_PINMUX_CLKREQ_OVERRIDE);
+	/* Cut REFCLK to slot */
+	data |= APPL_PINMUX_CLK_OUTPUT_IN_OVERRIDE_EN;
+	data &= ~APPL_PINMUX_CLK_OUTPUT_IN_OVERRIDE;
+	appl_writel(pcie, data, APPL_PINMUX);
+}
+
+static int tegra_pcie_deinit_controller(struct tegra_pcie_dw *pcie)
+{
+	/*
+	 * link doesn't go into L2 state with some of the endpoints with Tegra
+	 * if they are not in D0 state. So, need to make sure that immediate
+	 * downstream devices are in D0 state before sending PME_TurnOff to put
+	 * link into L2 state
+	 */
+	tegra_pcie_downstream_dev_to_D0(pcie);
+	dw_pcie_host_deinit(&pcie->pci.pp);
+	tegra_pcie_dw_pme_turnoff(pcie);
+	return __deinit_controller(pcie);
+}
+
+static int tegra_pcie_config_rp(struct tegra_pcie_dw *pcie)
+{
+	struct pcie_port *pp = &pcie->pci.pp;
+	struct device *dev = pcie->dev;
+	char *name;
+	int ret;
+
+	if (IS_ENABLED(CONFIG_PCI_MSI)) {
+		pp->msi_irq = of_irq_get_byname(dev->of_node, "msi");
+		if (!pp->msi_irq) {
+			dev_err(dev, "Failed to get MSI interrupt\n");
+			return -ENODEV;
+		}
+	}
+
+	pm_runtime_enable(dev);
+	ret = pm_runtime_get_sync(dev);
+	if (ret < 0) {
+		dev_err(dev, "Failed to get runtime sync for PCIe dev: %d\n",
+			ret);
+		pm_runtime_disable(dev);
+		return ret;
+	}
+
+	tegra_pcie_init_controller(pcie);
+
+	pcie->link_state = tegra_pcie_dw_link_up(&pcie->pci);
+
+	if (!pcie->link_state) {
+		ret = -ENOMEDIUM;
+		goto fail_host_init;
+	}
+
+	name = devm_kasprintf(dev, GFP_KERNEL, "%pOFP", dev->of_node);
+	if (!name) {
+		ret = -ENOMEM;
+		goto fail_host_init;
+	}
+
+	pcie->debugfs = debugfs_create_dir(name, NULL);
+	if (!pcie->debugfs)
+		dev_err(dev, "Failed to create debugfs\n");
+	else
+		init_debugfs(pcie);
+
+	return ret;
+
+fail_host_init:
+	tegra_pcie_deinit_controller(pcie);
+	pm_runtime_put_sync(dev);
+	pm_runtime_disable(dev);
+	return ret;
+}
+
+static const struct tegra_pcie_soc tegra_pcie_rc_of_data = {
+	.mode = DW_PCIE_RC_TYPE,
+};
+
+static const struct of_device_id tegra_pcie_dw_of_match[] = {
+	{
+		.compatible = "nvidia,tegra194-pcie",
+		.data = &tegra_pcie_rc_of_data,
+	},
+	{},
+};
+MODULE_DEVICE_TABLE(of, tegra_pcie_dw_of_match);
+
+static int tegra_pcie_dw_probe(struct platform_device *pdev)
+{
+	const struct tegra_pcie_soc *data;
+	struct device *dev = &pdev->dev;
+	struct resource *atu_dma_res;
+	struct tegra_pcie_dw *pcie;
+	struct resource *dbi_res;
+	struct pcie_port *pp;
+	struct dw_pcie *pci;
+	struct phy **phys;
+	char *name;
+	int ret;
+	u32 i;
+
+	pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL);
+	if (!pcie)
+		return -ENOMEM;
+
+	pci = &pcie->pci;
+	pci->dev = &pdev->dev;
+	pci->ops = &tegra_dw_pcie_ops;
+	pp = &pci->pp;
+	pcie->dev = &pdev->dev;
+
+	data = (struct tegra_pcie_soc *)of_device_get_match_data(dev);
+	if (!data)
+		return -EINVAL;
+	pcie->mode = (enum dw_pcie_device_mode)data->mode;
+
+	ret = tegra_pcie_dw_parse_dt(pcie);
+	if (ret < 0) {
+		dev_err(dev, "Failed to parse device tree: %d\n", ret);
+		return ret;
+	}
+
+	pcie->pex_ctl_supply = devm_regulator_get(dev, "vddio-pex-ctl");
+	if (IS_ERR(pcie->pex_ctl_supply)) {
+		dev_err(dev, "Failed to get regulator: %ld\n",
+			PTR_ERR(pcie->pex_ctl_supply));
+		return PTR_ERR(pcie->pex_ctl_supply);
+	}
+
+	pcie->core_clk = devm_clk_get(dev, "core");
+	if (IS_ERR(pcie->core_clk)) {
+		dev_err(dev, "Failed to get core clock: %ld\n",
+			PTR_ERR(pcie->core_clk));
+		return PTR_ERR(pcie->core_clk);
+	}
+
+	pcie->appl_res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
+						      "appl");
+	if (!pcie->appl_res) {
+		dev_err(dev, "Failed to find \"appl\" region\n");
+		return PTR_ERR(pcie->appl_res);
+	}
+	pcie->appl_base = devm_ioremap_resource(dev, pcie->appl_res);
+	if (IS_ERR(pcie->appl_base))
+		return PTR_ERR(pcie->appl_base);
+
+	pcie->core_apb_rst = devm_reset_control_get(dev, "apb");
+	if (IS_ERR(pcie->core_apb_rst)) {
+		dev_err(dev, "Failed to get APB reset: %ld\n",
+			PTR_ERR(pcie->core_apb_rst));
+		return PTR_ERR(pcie->core_apb_rst);
+	}
+
+	phys = devm_kcalloc(dev, pcie->phy_count, sizeof(*phys), GFP_KERNEL);
+	if (!phys)
+		return PTR_ERR(phys);
+
+	for (i = 0; i < pcie->phy_count; i++) {
+		name = kasprintf(GFP_KERNEL, "p2u-%u", i);
+		if (!name) {
+			dev_err(dev, "Failed to create P2U string\n");
+			return -ENOMEM;
+		}
+		phys[i] = devm_phy_get(dev, name);
+		kfree(name);
+		if (IS_ERR(phys[i])) {
+			ret = PTR_ERR(phys[i]);
+			dev_err(dev, "Failed to get PHY: %d\n", ret);
+			return ret;
+		}
+	}
+
+	pcie->phys = phys;
+
+	dbi_res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi");
+	if (!dbi_res) {
+		dev_err(dev, "Failed to find \"dbi\" region\n");
+		return PTR_ERR(dbi_res);
+	}
+	pcie->dbi_res = dbi_res;
+
+	pci->dbi_base = devm_ioremap_resource(dev, dbi_res);
+	if (IS_ERR(pci->dbi_base))
+		return PTR_ERR(pci->dbi_base);
+
+	/* Tegra HW locates DBI2 at a fixed offset from DBI */
+	pci->dbi_base2 = pci->dbi_base + 0x1000;
+
+	atu_dma_res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
+						   "atu_dma");
+	if (!atu_dma_res) {
+		dev_err(dev, "Failed to find \"atu_dma\" region\n");
+		return PTR_ERR(atu_dma_res);
+	}
+	pcie->atu_dma_res = atu_dma_res;
+	pci->atu_base = devm_ioremap_resource(dev, atu_dma_res);
+	if (IS_ERR(pci->atu_base))
+		return PTR_ERR(pci->atu_base);
+
+	pcie->core_rst = devm_reset_control_get(dev, "core");
+	if (IS_ERR(pcie->core_rst)) {
+		dev_err(dev, "Failed to get core reset: %ld\n",
+			PTR_ERR(pcie->core_rst));
+		return PTR_ERR(pcie->core_rst);
+	}
+
+	pp->irq = platform_get_irq_byname(pdev, "intr");
+	if (!pp->irq) {
+		dev_err(dev, "Failed to get \"intr\" interrupt\n");
+		return -ENODEV;
+	}
+
+	ret = devm_request_irq(dev, pp->irq, tegra_pcie_irq_handler,
+			       IRQF_SHARED, "tegra-pcie-intr", pcie);
+	if (ret) {
+		dev_err(dev, "Failed to request IRQ %d: %d\n", pp->irq, ret);
+		return ret;
+	}
+
+	pcie->bpmp = tegra_bpmp_get(dev);
+	if (IS_ERR(pcie->bpmp))
+		return PTR_ERR(pcie->bpmp);
+
+	platform_set_drvdata(pdev, pcie);
+
+	if (pcie->mode == DW_PCIE_RC_TYPE) {
+		ret = tegra_pcie_config_rp(pcie);
+		if (ret && ret != -ENOMEDIUM)
+			goto fail;
+		else
+			return 0;
+	}
+
+fail:
+	tegra_bpmp_put(pcie->bpmp);
+	return ret;
+}
+
+static int tegra_pcie_dw_remove(struct platform_device *pdev)
+{
+	struct tegra_pcie_dw *pcie = platform_get_drvdata(pdev);
+
+	if (pcie->mode != DW_PCIE_RC_TYPE)
+		return 0;
+
+	if (!pcie->link_state)
+		return 0;
+
+	debugfs_remove_recursive(pcie->debugfs);
+	tegra_pcie_deinit_controller(pcie);
+	pm_runtime_put_sync(pcie->dev);
+	pm_runtime_disable(pcie->dev);
+	tegra_bpmp_put(pcie->bpmp);
+
+	return 0;
+}
+
+static int tegra_pcie_dw_suspend_late(struct device *dev)
+{
+	struct tegra_pcie_dw *pcie = dev_get_drvdata(dev);
+	u32 val;
+
+	if (!pcie->link_state)
+		return 0;
+
+	/* Enable HW_HOT_RST mode */
+	val = appl_readl(pcie, APPL_CTRL);
+	val &= ~(APPL_CTRL_HW_HOT_RST_MODE_MASK <<
+		 APPL_CTRL_HW_HOT_RST_MODE_SHIFT);
+	val |= APPL_CTRL_HW_HOT_RST_EN;
+	appl_writel(pcie, val, APPL_CTRL);
+
+	return 0;
+}
+
+static int tegra_pcie_dw_suspend_noirq(struct device *dev)
+{
+	struct tegra_pcie_dw *pcie = dev_get_drvdata(dev);
+
+	if (!pcie->link_state)
+		return 0;
+
+	/* Save MSI interrupt vector */
+	pcie->msi_ctrl_int = dw_pcie_readl_dbi(&pcie->pci,
+					       PORT_LOGIC_MSI_CTRL_INT_0_EN);
+	tegra_pcie_downstream_dev_to_D0(pcie);
+	tegra_pcie_dw_pme_turnoff(pcie);
+	return __deinit_controller(pcie);
+}
+
+static int tegra_pcie_dw_resume_noirq(struct device *dev)
+{
+	struct tegra_pcie_dw *pcie = dev_get_drvdata(dev);
+	int ret;
+
+	if (!pcie->link_state)
+		return 0;
+
+	ret = tegra_pcie_config_controller(pcie, true);
+	if (ret < 0)
+		return ret;
+
+	ret = tegra_pcie_dw_host_init(&pcie->pci.pp);
+	if (ret < 0) {
+		dev_err(dev, "Failed to init host: %d\n", ret);
+		goto fail_host_init;
+	}
+
+	/* Restore MSI interrupt vector */
+	dw_pcie_writel_dbi(&pcie->pci, PORT_LOGIC_MSI_CTRL_INT_0_EN,
+			   pcie->msi_ctrl_int);
+
+	return 0;
+fail_host_init:
+	return __deinit_controller(pcie);
+}
+
+static int tegra_pcie_dw_resume_early(struct device *dev)
+{
+	struct tegra_pcie_dw *pcie = dev_get_drvdata(dev);
+	u32 val;
+
+	if (!pcie->link_state)
+		return 0;
+
+	/* Disable HW_HOT_RST mode */
+	val = appl_readl(pcie, APPL_CTRL);
+	val &= ~(APPL_CTRL_HW_HOT_RST_MODE_MASK <<
+		 APPL_CTRL_HW_HOT_RST_MODE_SHIFT);
+	val |= APPL_CTRL_HW_HOT_RST_MODE_IMDT_RST <<
+	       APPL_CTRL_HW_HOT_RST_MODE_SHIFT;
+	val &= ~APPL_CTRL_HW_HOT_RST_EN;
+	appl_writel(pcie, val, APPL_CTRL);
+
+	return 0;
+}
+
+static void tegra_pcie_dw_shutdown(struct platform_device *pdev)
+{
+	struct tegra_pcie_dw *pcie = platform_get_drvdata(pdev);
+
+	if (pcie->mode != DW_PCIE_RC_TYPE)
+		return;
+
+	if (!pcie->link_state)
+		return;
+
+	debugfs_remove_recursive(pcie->debugfs);
+	tegra_pcie_downstream_dev_to_D0(pcie);
+
+	disable_irq(pcie->pci.pp.irq);
+	if (IS_ENABLED(CONFIG_PCI_MSI))
+		disable_irq(pcie->pci.pp.msi_irq);
+
+	tegra_pcie_dw_pme_turnoff(pcie);
+	__deinit_controller(pcie);
+}
+
+static const struct dev_pm_ops tegra_pcie_dw_pm_ops = {
+	.suspend_late = tegra_pcie_dw_suspend_late,
+	.suspend_noirq = tegra_pcie_dw_suspend_noirq,
+	.resume_noirq = tegra_pcie_dw_resume_noirq,
+	.resume_early = tegra_pcie_dw_resume_early,
+};
+
+static struct platform_driver tegra_pcie_dw_driver = {
+	.probe = tegra_pcie_dw_probe,
+	.remove = tegra_pcie_dw_remove,
+	.shutdown = tegra_pcie_dw_shutdown,
+	.driver = {
+		.name	= "tegra194-pcie",
+		.pm = &tegra_pcie_dw_pm_ops,
+		.of_match_table = tegra_pcie_dw_of_match,
+	},
+};
+module_platform_driver(tegra_pcie_dw_driver);
+
+MODULE_AUTHOR("Vidya Sagar <vidyas@nvidia.com>");
+MODULE_DESCRIPTION("NVIDIA PCIe host controller driver");
+MODULE_LICENSE("GPL v2");
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* Re: [PATCH V13 05/12] PCI: dwc: Add ext config space capability search API
  2019-07-10  6:22 ` [PATCH V13 05/12] PCI: dwc: Add ext " Vidya Sagar
@ 2019-07-10 10:37   ` Lorenzo Pieralisi
  2019-07-10 11:27     ` Vidya Sagar
  0 siblings, 1 reply; 34+ messages in thread
From: Lorenzo Pieralisi @ 2019-07-10 10:37 UTC (permalink / raw)
  To: Vidya Sagar
  Cc: bhelgaas, robh+dt, mark.rutland, thierry.reding, jonathanh,
	kishon, catalin.marinas, will.deacon, jingoohan1,
	gustavo.pimentel, digetx, mperttunen, linux-pci, devicetree,
	linux-tegra, linux-kernel, linux-arm-kernel, kthota, mmaddireddy,
	sagar.tv

On Wed, Jul 10, 2019 at 11:52:05AM +0530, Vidya Sagar wrote:
> Add extended configuration space capability search API using struct dw_pcie *
> pointer

Sentences are terminated with a period and this is v13 not v1, which
proves that you do not read the commit logs you write.

I need you guys to understand that I can't rewrite commit logs all
the time, I do not want to go as far as not accepting your patches
anymore so please do pay attention to commit log details they
are as important as the code itself.

https://lore.kernel.org/linux-pci/20171026223701.GA25649@bhelgaas-glaptop.roam.corp.google.com/

Thanks,
Lorenzo

> Signed-off-by: Vidya Sagar <vidyas@nvidia.com>
> Acked-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com>
> Acked-by: Thierry Reding <treding@nvidia.com>
> ---
> V13:
> * None
> 
> V12:
> * None
> 
> V11:
> * None
> 
> V10:
> * None
> 
> V9:
> * Added Acked-by from Thierry
> 
> V8:
> * Changed data types of return and arguments to be inline with data being returned
>   and passed.
> 
> V7:
> * None
> 
> V6:
> * None
> 
> V5:
> * None
> 
> V4:
> * None
> 
> V3:
> * None
> 
> V2:
> * This is a new patch in v2 series
> 
>  drivers/pci/controller/dwc/pcie-designware.c | 41 ++++++++++++++++++++
>  drivers/pci/controller/dwc/pcie-designware.h |  1 +
>  2 files changed, 42 insertions(+)
> 
> diff --git a/drivers/pci/controller/dwc/pcie-designware.c b/drivers/pci/controller/dwc/pcie-designware.c
> index 7818b4febb08..181449e342f1 100644
> --- a/drivers/pci/controller/dwc/pcie-designware.c
> +++ b/drivers/pci/controller/dwc/pcie-designware.c
> @@ -53,6 +53,47 @@ u8 dw_pcie_find_capability(struct dw_pcie *pci, u8 cap)
>  }
>  EXPORT_SYMBOL_GPL(dw_pcie_find_capability);
>  
> +static u16 dw_pcie_find_next_ext_capability(struct dw_pcie *pci, u16 start,
> +					    u8 cap)
> +{
> +	u32 header;
> +	int ttl;
> +	int pos = PCI_CFG_SPACE_SIZE;
> +
> +	/* minimum 8 bytes per capability */
> +	ttl = (PCI_CFG_SPACE_EXP_SIZE - PCI_CFG_SPACE_SIZE) / 8;
> +
> +	if (start)
> +		pos = start;
> +
> +	header = dw_pcie_readl_dbi(pci, pos);
> +	/*
> +	 * If we have no capabilities, this is indicated by cap ID,
> +	 * cap version and next pointer all being 0.
> +	 */
> +	if (header == 0)
> +		return 0;
> +
> +	while (ttl-- > 0) {
> +		if (PCI_EXT_CAP_ID(header) == cap && pos != start)
> +			return pos;
> +
> +		pos = PCI_EXT_CAP_NEXT(header);
> +		if (pos < PCI_CFG_SPACE_SIZE)
> +			break;
> +
> +		header = dw_pcie_readl_dbi(pci, pos);
> +	}
> +
> +	return 0;
> +}
> +
> +u16 dw_pcie_find_ext_capability(struct dw_pcie *pci, u8 cap)
> +{
> +	return dw_pcie_find_next_ext_capability(pci, 0, cap);
> +}
> +EXPORT_SYMBOL_GPL(dw_pcie_find_ext_capability);
> +
>  int dw_pcie_read(void __iomem *addr, int size, u32 *val)
>  {
>  	if (!IS_ALIGNED((uintptr_t)addr, size)) {
> diff --git a/drivers/pci/controller/dwc/pcie-designware.h b/drivers/pci/controller/dwc/pcie-designware.h
> index d8c66a6827dc..11c223471416 100644
> --- a/drivers/pci/controller/dwc/pcie-designware.h
> +++ b/drivers/pci/controller/dwc/pcie-designware.h
> @@ -252,6 +252,7 @@ struct dw_pcie {
>  		container_of((endpoint), struct dw_pcie, ep)
>  
>  u8 dw_pcie_find_capability(struct dw_pcie *pci, u8 cap);
> +u16 dw_pcie_find_ext_capability(struct dw_pcie *pci, u8 cap);
>  
>  int dw_pcie_read(void __iomem *addr, int size, u32 *val);
>  int dw_pcie_write(void __iomem *addr, int size, u32 val);
> -- 
> 2.17.1
> 

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH V13 05/12] PCI: dwc: Add ext config space capability search API
  2019-07-10 10:37   ` Lorenzo Pieralisi
@ 2019-07-10 11:27     ` Vidya Sagar
  2019-07-10 14:19       ` Lorenzo Pieralisi
  0 siblings, 1 reply; 34+ messages in thread
From: Vidya Sagar @ 2019-07-10 11:27 UTC (permalink / raw)
  To: Lorenzo Pieralisi
  Cc: bhelgaas, robh+dt, mark.rutland, thierry.reding, jonathanh,
	kishon, catalin.marinas, will.deacon, jingoohan1,
	gustavo.pimentel, digetx, mperttunen, linux-pci, devicetree,
	linux-tegra, linux-kernel, linux-arm-kernel, kthota, mmaddireddy,
	sagar.tv

On 7/10/2019 4:07 PM, Lorenzo Pieralisi wrote:
> On Wed, Jul 10, 2019 at 11:52:05AM +0530, Vidya Sagar wrote:
>> Add extended configuration space capability search API using struct dw_pcie *
>> pointer
> 
> Sentences are terminated with a period and this is v13 not v1, which
> proves that you do not read the commit logs you write.
> 
> I need you guys to understand that I can't rewrite commit logs all
> the time, I do not want to go as far as not accepting your patches
> anymore so please do pay attention to commit log details they
> are as important as the code itself.
> 
> https://lore.kernel.org/linux-pci/20171026223701.GA25649@bhelgaas-glaptop.roam.corp.google.com/
My sincere apologies.
Since I didn't touch this patch much all through this series, I missed it.
I'll make a point to not make such mistakes again.
Do you want me to send a new version fixing it?

Thanks,
Vidya Sagar

> 
> Thanks,
> Lorenzo
> 
>> Signed-off-by: Vidya Sagar <vidyas@nvidia.com>
>> Acked-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com>
>> Acked-by: Thierry Reding <treding@nvidia.com>
>> ---
>> V13:
>> * None
>>
>> V12:
>> * None
>>
>> V11:
>> * None
>>
>> V10:
>> * None
>>
>> V9:
>> * Added Acked-by from Thierry
>>
>> V8:
>> * Changed data types of return and arguments to be inline with data being returned
>>    and passed.
>>
>> V7:
>> * None
>>
>> V6:
>> * None
>>
>> V5:
>> * None
>>
>> V4:
>> * None
>>
>> V3:
>> * None
>>
>> V2:
>> * This is a new patch in v2 series
>>
>>   drivers/pci/controller/dwc/pcie-designware.c | 41 ++++++++++++++++++++
>>   drivers/pci/controller/dwc/pcie-designware.h |  1 +
>>   2 files changed, 42 insertions(+)
>>
>> diff --git a/drivers/pci/controller/dwc/pcie-designware.c b/drivers/pci/controller/dwc/pcie-designware.c
>> index 7818b4febb08..181449e342f1 100644
>> --- a/drivers/pci/controller/dwc/pcie-designware.c
>> +++ b/drivers/pci/controller/dwc/pcie-designware.c
>> @@ -53,6 +53,47 @@ u8 dw_pcie_find_capability(struct dw_pcie *pci, u8 cap)
>>   }
>>   EXPORT_SYMBOL_GPL(dw_pcie_find_capability);
>>   
>> +static u16 dw_pcie_find_next_ext_capability(struct dw_pcie *pci, u16 start,
>> +					    u8 cap)
>> +{
>> +	u32 header;
>> +	int ttl;
>> +	int pos = PCI_CFG_SPACE_SIZE;
>> +
>> +	/* minimum 8 bytes per capability */
>> +	ttl = (PCI_CFG_SPACE_EXP_SIZE - PCI_CFG_SPACE_SIZE) / 8;
>> +
>> +	if (start)
>> +		pos = start;
>> +
>> +	header = dw_pcie_readl_dbi(pci, pos);
>> +	/*
>> +	 * If we have no capabilities, this is indicated by cap ID,
>> +	 * cap version and next pointer all being 0.
>> +	 */
>> +	if (header == 0)
>> +		return 0;
>> +
>> +	while (ttl-- > 0) {
>> +		if (PCI_EXT_CAP_ID(header) == cap && pos != start)
>> +			return pos;
>> +
>> +		pos = PCI_EXT_CAP_NEXT(header);
>> +		if (pos < PCI_CFG_SPACE_SIZE)
>> +			break;
>> +
>> +		header = dw_pcie_readl_dbi(pci, pos);
>> +	}
>> +
>> +	return 0;
>> +}
>> +
>> +u16 dw_pcie_find_ext_capability(struct dw_pcie *pci, u8 cap)
>> +{
>> +	return dw_pcie_find_next_ext_capability(pci, 0, cap);
>> +}
>> +EXPORT_SYMBOL_GPL(dw_pcie_find_ext_capability);
>> +
>>   int dw_pcie_read(void __iomem *addr, int size, u32 *val)
>>   {
>>   	if (!IS_ALIGNED((uintptr_t)addr, size)) {
>> diff --git a/drivers/pci/controller/dwc/pcie-designware.h b/drivers/pci/controller/dwc/pcie-designware.h
>> index d8c66a6827dc..11c223471416 100644
>> --- a/drivers/pci/controller/dwc/pcie-designware.h
>> +++ b/drivers/pci/controller/dwc/pcie-designware.h
>> @@ -252,6 +252,7 @@ struct dw_pcie {
>>   		container_of((endpoint), struct dw_pcie, ep)
>>   
>>   u8 dw_pcie_find_capability(struct dw_pcie *pci, u8 cap);
>> +u16 dw_pcie_find_ext_capability(struct dw_pcie *pci, u8 cap);
>>   
>>   int dw_pcie_read(void __iomem *addr, int size, u32 *val);
>>   int dw_pcie_write(void __iomem *addr, int size, u32 val);
>> -- 
>> 2.17.1
>>


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH V13 05/12] PCI: dwc: Add ext config space capability search API
  2019-07-10 11:27     ` Vidya Sagar
@ 2019-07-10 14:19       ` Lorenzo Pieralisi
  0 siblings, 0 replies; 34+ messages in thread
From: Lorenzo Pieralisi @ 2019-07-10 14:19 UTC (permalink / raw)
  To: Vidya Sagar
  Cc: bhelgaas, robh+dt, mark.rutland, thierry.reding, jonathanh,
	kishon, catalin.marinas, will.deacon, jingoohan1,
	gustavo.pimentel, digetx, mperttunen, linux-pci, devicetree,
	linux-tegra, linux-kernel, linux-arm-kernel, kthota, mmaddireddy,
	sagar.tv

On Wed, Jul 10, 2019 at 04:57:23PM +0530, Vidya Sagar wrote:
> On 7/10/2019 4:07 PM, Lorenzo Pieralisi wrote:
> > On Wed, Jul 10, 2019 at 11:52:05AM +0530, Vidya Sagar wrote:
> > > Add extended configuration space capability search API using struct dw_pcie *
> > > pointer
> > 
> > Sentences are terminated with a period and this is v13 not v1, which
> > proves that you do not read the commit logs you write.
> > 
> > I need you guys to understand that I can't rewrite commit logs all
> > the time, I do not want to go as far as not accepting your patches
> > anymore so please do pay attention to commit log details they
> > are as important as the code itself.
> > 
> > https://lore.kernel.org/linux-pci/20171026223701.GA25649@bhelgaas-glaptop.roam.corp.google.com/
> My sincere apologies.
> Since I didn't touch this patch much all through this series, I missed it.
> I'll make a point to not make such mistakes again.
> Do you want me to send a new version fixing it?

I will update it, I just wanted to get the point across, no
problems.

Lorenzo

> Thanks,
> Vidya Sagar
> 
> > 
> > Thanks,
> > Lorenzo
> > 
> > > Signed-off-by: Vidya Sagar <vidyas@nvidia.com>
> > > Acked-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com>
> > > Acked-by: Thierry Reding <treding@nvidia.com>
> > > ---
> > > V13:
> > > * None
> > > 
> > > V12:
> > > * None
> > > 
> > > V11:
> > > * None
> > > 
> > > V10:
> > > * None
> > > 
> > > V9:
> > > * Added Acked-by from Thierry
> > > 
> > > V8:
> > > * Changed data types of return and arguments to be inline with data being returned
> > >    and passed.
> > > 
> > > V7:
> > > * None
> > > 
> > > V6:
> > > * None
> > > 
> > > V5:
> > > * None
> > > 
> > > V4:
> > > * None
> > > 
> > > V3:
> > > * None
> > > 
> > > V2:
> > > * This is a new patch in v2 series
> > > 
> > >   drivers/pci/controller/dwc/pcie-designware.c | 41 ++++++++++++++++++++
> > >   drivers/pci/controller/dwc/pcie-designware.h |  1 +
> > >   2 files changed, 42 insertions(+)
> > > 
> > > diff --git a/drivers/pci/controller/dwc/pcie-designware.c b/drivers/pci/controller/dwc/pcie-designware.c
> > > index 7818b4febb08..181449e342f1 100644
> > > --- a/drivers/pci/controller/dwc/pcie-designware.c
> > > +++ b/drivers/pci/controller/dwc/pcie-designware.c
> > > @@ -53,6 +53,47 @@ u8 dw_pcie_find_capability(struct dw_pcie *pci, u8 cap)
> > >   }
> > >   EXPORT_SYMBOL_GPL(dw_pcie_find_capability);
> > > +static u16 dw_pcie_find_next_ext_capability(struct dw_pcie *pci, u16 start,
> > > +					    u8 cap)
> > > +{
> > > +	u32 header;
> > > +	int ttl;
> > > +	int pos = PCI_CFG_SPACE_SIZE;
> > > +
> > > +	/* minimum 8 bytes per capability */
> > > +	ttl = (PCI_CFG_SPACE_EXP_SIZE - PCI_CFG_SPACE_SIZE) / 8;
> > > +
> > > +	if (start)
> > > +		pos = start;
> > > +
> > > +	header = dw_pcie_readl_dbi(pci, pos);
> > > +	/*
> > > +	 * If we have no capabilities, this is indicated by cap ID,
> > > +	 * cap version and next pointer all being 0.
> > > +	 */
> > > +	if (header == 0)
> > > +		return 0;
> > > +
> > > +	while (ttl-- > 0) {
> > > +		if (PCI_EXT_CAP_ID(header) == cap && pos != start)
> > > +			return pos;
> > > +
> > > +		pos = PCI_EXT_CAP_NEXT(header);
> > > +		if (pos < PCI_CFG_SPACE_SIZE)
> > > +			break;
> > > +
> > > +		header = dw_pcie_readl_dbi(pci, pos);
> > > +	}
> > > +
> > > +	return 0;
> > > +}
> > > +
> > > +u16 dw_pcie_find_ext_capability(struct dw_pcie *pci, u8 cap)
> > > +{
> > > +	return dw_pcie_find_next_ext_capability(pci, 0, cap);
> > > +}
> > > +EXPORT_SYMBOL_GPL(dw_pcie_find_ext_capability);
> > > +
> > >   int dw_pcie_read(void __iomem *addr, int size, u32 *val)
> > >   {
> > >   	if (!IS_ALIGNED((uintptr_t)addr, size)) {
> > > diff --git a/drivers/pci/controller/dwc/pcie-designware.h b/drivers/pci/controller/dwc/pcie-designware.h
> > > index d8c66a6827dc..11c223471416 100644
> > > --- a/drivers/pci/controller/dwc/pcie-designware.h
> > > +++ b/drivers/pci/controller/dwc/pcie-designware.h
> > > @@ -252,6 +252,7 @@ struct dw_pcie {
> > >   		container_of((endpoint), struct dw_pcie, ep)
> > >   u8 dw_pcie_find_capability(struct dw_pcie *pci, u8 cap);
> > > +u16 dw_pcie_find_ext_capability(struct dw_pcie *pci, u8 cap);
> > >   int dw_pcie_read(void __iomem *addr, int size, u32 *val);
> > >   int dw_pcie_write(void __iomem *addr, int size, u32 val);
> > > -- 
> > > 2.17.1
> > > 
> 

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH V13 08/12] dt-bindings: Add PCIe supports-clkreq property
  2019-07-10  6:22 ` [PATCH V13 08/12] dt-bindings: Add PCIe supports-clkreq property Vidya Sagar
@ 2019-07-10 15:28   ` Lorenzo Pieralisi
  2019-07-10 17:14     ` Vidya Sagar
  0 siblings, 1 reply; 34+ messages in thread
From: Lorenzo Pieralisi @ 2019-07-10 15:28 UTC (permalink / raw)
  To: Vidya Sagar
  Cc: bhelgaas, robh+dt, mark.rutland, thierry.reding, jonathanh,
	kishon, catalin.marinas, will.deacon, jingoohan1,
	gustavo.pimentel, digetx, mperttunen, linux-pci, devicetree,
	linux-tegra, linux-kernel, linux-arm-kernel, kthota, mmaddireddy,
	sagar.tv

On Wed, Jul 10, 2019 at 11:52:08AM +0530, Vidya Sagar wrote:
> Some host controllers need to know the existence of clkreq signal routing to
> downstream devices to be able to advertise low power features like ASPM L1
> substates. Without clkreq signal routing being present, enabling ASPM L1 sub
> states might lead to downstream devices falling off the bus. Hence a new device

You mean "being disconnected from the bus" right ? I will update it.

Lorenzo

> tree property 'supports-clkreq' is added to make such host controllers
> aware of clkreq signal routing to downstream devices.
> 
> Signed-off-by: Vidya Sagar <vidyas@nvidia.com>
> Reviewed-by: Rob Herring <robh@kernel.org>
> Reviewed-by: Thierry Reding <treding@nvidia.com>
> ---
> V13:
> * None
> 
> V12:
> * Rebased on top of linux-next top of the tree
> 
> V11:
> * None
> 
> V10:
> * None
> 
> V9:
> * None
> 
> V8:
> * None
> 
> V7:
> * None
> 
> V6:
> * s/Documentation\/devicetree/dt-bindings/ in the subject
> 
> V5:
> * None
> 
> V4:
> * Rebased on top of linux-next top of the tree
> 
> V3:
> * None
> 
> V2:
> * This is a new patch in v2 series
> 
>  Documentation/devicetree/bindings/pci/pci.txt | 5 +++++
>  1 file changed, 5 insertions(+)
> 
> diff --git a/Documentation/devicetree/bindings/pci/pci.txt b/Documentation/devicetree/bindings/pci/pci.txt
> index 2a5d91024059..29bcbd88f457 100644
> --- a/Documentation/devicetree/bindings/pci/pci.txt
> +++ b/Documentation/devicetree/bindings/pci/pci.txt
> @@ -27,6 +27,11 @@ driver implementation may support the following properties:
>  - reset-gpios:
>     If present this property specifies PERST# GPIO. Host drivers can parse the
>     GPIO and apply fundamental reset to endpoints.
> +- supports-clkreq:
> +   If present this property specifies that CLKREQ signal routing exists from
> +   root port to downstream device and host bridge drivers can do programming
> +   which depends on CLKREQ signal existence. For example, programming root port
> +   not to advertise ASPM L1 Sub-States support if there is no CLKREQ signal.
>  
>  PCI-PCI Bridge properties
>  -------------------------
> -- 
> 2.17.1
> 

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH V13 12/12] PCI: tegra: Add Tegra194 PCIe support
  2019-07-10  6:22 ` [PATCH V13 12/12] PCI: tegra: Add Tegra194 PCIe support Vidya Sagar
@ 2019-07-10 17:02   ` Lorenzo Pieralisi
  2019-07-10 17:26     ` Vidya Sagar
  2019-07-11 12:54   ` Lorenzo Pieralisi
  1 sibling, 1 reply; 34+ messages in thread
From: Lorenzo Pieralisi @ 2019-07-10 17:02 UTC (permalink / raw)
  To: Vidya Sagar
  Cc: bhelgaas, robh+dt, mark.rutland, thierry.reding, jonathanh,
	kishon, catalin.marinas, will.deacon, jingoohan1,
	gustavo.pimentel, digetx, mperttunen, linux-pci, devicetree,
	linux-tegra, linux-kernel, linux-arm-kernel, kthota, mmaddireddy,
	sagar.tv

On Wed, Jul 10, 2019 at 11:52:12AM +0530, Vidya Sagar wrote:

[...]

> +#if defined(CONFIG_PCIEASPM)
> +static void disable_aspm_l11(struct tegra_pcie_dw *pcie)
> +{
> +	u32 val;
> +
> +	val = dw_pcie_readl_dbi(&pcie->pci, pcie->cfg_link_cap_l1sub);
> +	val &= ~PCI_L1SS_CAP_ASPM_L1_1;
> +	dw_pcie_writel_dbi(&pcie->pci, pcie->cfg_link_cap_l1sub, val);
> +}
> +
> +static void disable_aspm_l12(struct tegra_pcie_dw *pcie)
> +{
> +	u32 val;
> +
> +	val = dw_pcie_readl_dbi(&pcie->pci, pcie->cfg_link_cap_l1sub);
> +	val &= ~PCI_L1SS_CAP_ASPM_L1_2;
> +	dw_pcie_writel_dbi(&pcie->pci, pcie->cfg_link_cap_l1sub, val);
> +}
> +
> +static inline u32 event_counter_prog(struct tegra_pcie_dw *pcie, u32 event)
> +{
> +	u32 val;
> +
> +	val = dw_pcie_readl_dbi(&pcie->pci, event_cntr_ctrl_offset[pcie->cid]);
> +	val &= ~(EVENT_COUNTER_EVENT_SEL_MASK << EVENT_COUNTER_EVENT_SEL_SHIFT);
> +	val |= EVENT_COUNTER_GROUP_5 << EVENT_COUNTER_GROUP_SEL_SHIFT;
> +	val |= event << EVENT_COUNTER_EVENT_SEL_SHIFT;
> +	val |= EVENT_COUNTER_ENABLE_ALL << EVENT_COUNTER_ENABLE_SHIFT;
> +	dw_pcie_writel_dbi(&pcie->pci, event_cntr_ctrl_offset[pcie->cid], val);
> +	val = dw_pcie_readl_dbi(&pcie->pci, event_cntr_data_offset[pcie->cid]);
> +	return val;
> +}
> +
> +static int aspm_state_cnt(struct seq_file *s, void *data)
> +{
> +	struct tegra_pcie_dw *pcie = (struct tegra_pcie_dw *)
> +				     dev_get_drvdata(s->private);
> +	u32 val;
> +
> +	seq_printf(s, "Tx L0s entry count : %u\n",
> +		   event_counter_prog(pcie, EVENT_COUNTER_EVENT_Tx_L0S));
> +
> +	seq_printf(s, "Rx L0s entry count : %u\n",
> +		   event_counter_prog(pcie, EVENT_COUNTER_EVENT_Rx_L0S));
> +
> +	seq_printf(s, "Link L1 entry count : %u\n",
> +		   event_counter_prog(pcie, EVENT_COUNTER_EVENT_L1));
> +
> +	seq_printf(s, "Link L1.1 entry count : %u\n",
> +		   event_counter_prog(pcie, EVENT_COUNTER_EVENT_L1_1));
> +
> +	seq_printf(s, "Link L1.2 entry count : %u\n",
> +		   event_counter_prog(pcie, EVENT_COUNTER_EVENT_L1_2));
> +
> +	/* Clear all counters */
> +	dw_pcie_writel_dbi(&pcie->pci, event_cntr_ctrl_offset[pcie->cid],
> +			   EVENT_COUNTER_ALL_CLEAR);
> +
> +	/* Re-enable counting */
> +	val = EVENT_COUNTER_ENABLE_ALL << EVENT_COUNTER_ENABLE_SHIFT;
> +	val |= EVENT_COUNTER_GROUP_5 << EVENT_COUNTER_GROUP_SEL_SHIFT;
> +	dw_pcie_writel_dbi(&pcie->pci, event_cntr_ctrl_offset[pcie->cid], val);
> +
> +	return 0;
> +}
> +#endif
> +
> +static int init_debugfs(struct tegra_pcie_dw *pcie)
> +{
> +#if defined(CONFIG_PCIEASPM)
> +	struct dentry *d;
> +
> +	d = debugfs_create_devm_seqfile(pcie->dev, "aspm_state_cnt",
> +					pcie->debugfs, aspm_state_cnt);
> +	if (IS_ERR_OR_NULL(d))
> +		dev_err(pcie->dev,
> +			"Failed to create debugfs file \"aspm_state_cnt\"\n");
> +#endif
> +	return 0;
> +}

I prefer writing it:

#if defined(CONFIG_PCIEASPM)
static void disable_aspm_l11(struct tegra_pcie_dw *pcie)
{
...
}

static void disable_aspm_l12(struct tegra_pcie_dw *pcie)
{
...
}

static inline u32 event_counter_prog(struct tegra_pcie_dw *pcie, u32 event)
{
...
}

static int aspm_state_cnt(struct seq_file *s, void *data)
{
...
}

static int init_debugfs(struct tegra_pcie_dw *pcie)
{
	struct dentry *d;

	d = debugfs_create_devm_seqfile(pcie->dev, "aspm_state_cnt",
					pcie->debugfs, aspm_state_cnt);
	if (IS_ERR_OR_NULL(d))
		dev_err(pcie->dev,
			"Failed to create debugfs file \"aspm_state_cnt\"\n");
	return 0;
}
#else
static inline int init_debugfs(struct tegra_pcie_dw *pcie) { return 0; }
#endif

> +
> +static void tegra_pcie_enable_system_interrupts(struct pcie_port *pp)
> +{
> +	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
> +	struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);
> +	u32 val;
> +	u16 val_w;
> +
> +	val = appl_readl(pcie, APPL_INTR_EN_L0_0);
> +	val |= APPL_INTR_EN_L0_0_LINK_STATE_INT_EN;
> +	appl_writel(pcie, val, APPL_INTR_EN_L0_0);
> +
> +	val = appl_readl(pcie, APPL_INTR_EN_L1_0_0);
> +	val |= APPL_INTR_EN_L1_0_0_LINK_REQ_RST_NOT_INT_EN;
> +	appl_writel(pcie, val, APPL_INTR_EN_L1_0_0);
> +
> +	if (pcie->enable_cdm_check) {
> +		val = appl_readl(pcie, APPL_INTR_EN_L0_0);
> +		val |= APPL_INTR_EN_L0_0_CDM_REG_CHK_INT_EN;
> +		appl_writel(pcie, val, APPL_INTR_EN_L0_0);
> +
> +		val = appl_readl(pcie, APPL_INTR_EN_L1_18);
> +		val |= APPL_INTR_EN_L1_18_CDM_REG_CHK_CMP_ERR;
> +		val |= APPL_INTR_EN_L1_18_CDM_REG_CHK_LOGIC_ERR;
> +		appl_writel(pcie, val, APPL_INTR_EN_L1_18);
> +	}
> +
> +	val_w = dw_pcie_readw_dbi(&pcie->pci, pcie->pcie_cap_base +
> +				  PCI_EXP_LNKSTA);
> +	pcie->init_link_width = (val_w & PCI_EXP_LNKSTA_NLW) >>
> +				PCI_EXP_LNKSTA_NLW_SHIFT;
> +
> +	val_w = dw_pcie_readw_dbi(&pcie->pci, pcie->pcie_cap_base +
> +				  PCI_EXP_LNKCTL);
> +	val_w |= PCI_EXP_LNKCTL_LBMIE;
> +	dw_pcie_writew_dbi(&pcie->pci, pcie->pcie_cap_base + PCI_EXP_LNKCTL,
> +			   val_w);
> +}
> +
> +static void tegra_pcie_enable_legacy_interrupts(struct pcie_port *pp)
> +{
> +	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
> +	struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);
> +	u32 val;
> +
> +	/* Enable legacy interrupt generation */
> +	val = appl_readl(pcie, APPL_INTR_EN_L0_0);
> +	val |= APPL_INTR_EN_L0_0_SYS_INTR_EN;
> +	val |= APPL_INTR_EN_L0_0_INT_INT_EN;
> +	appl_writel(pcie, val, APPL_INTR_EN_L0_0);
> +
> +	val = appl_readl(pcie, APPL_INTR_EN_L1_8_0);
> +	val |= APPL_INTR_EN_L1_8_INTX_EN;
> +	val |= APPL_INTR_EN_L1_8_AUTO_BW_INT_EN;
> +	val |= APPL_INTR_EN_L1_8_BW_MGT_INT_EN;
> +	if (IS_ENABLED(CONFIG_PCIEAER))
> +		val |= APPL_INTR_EN_L1_8_AER_INT_EN;
> +	appl_writel(pcie, val, APPL_INTR_EN_L1_8_0);
> +}
> +
> +static void tegra_pcie_enable_msi_interrupts(struct pcie_port *pp)
> +{
> +	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
> +	struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);
> +	u32 val;
> +
> +	dw_pcie_msi_init(pp);
> +
> +	/* Enable MSI interrupt generation */
> +	val = appl_readl(pcie, APPL_INTR_EN_L0_0);
> +	val |= APPL_INTR_EN_L0_0_SYS_MSI_INTR_EN;
> +	val |= APPL_INTR_EN_L0_0_MSI_RCV_INT_EN;
> +	appl_writel(pcie, val, APPL_INTR_EN_L0_0);
> +}
> +
> +static void tegra_pcie_enable_interrupts(struct pcie_port *pp)
> +{
> +	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
> +	struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);
> +
> +	/* Clear interrupt statuses before enabling interrupts */
> +	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L0);
> +	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_0_0);
> +	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_1);
> +	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_2);
> +	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_3);
> +	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_6);
> +	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_7);
> +	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_8_0);
> +	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_9);
> +	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_10);
> +	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_11);
> +	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_13);
> +	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_14);
> +	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_15);
> +	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_17);
> +
> +	tegra_pcie_enable_system_interrupts(pp);
> +	tegra_pcie_enable_legacy_interrupts(pp);
> +	if (IS_ENABLED(CONFIG_PCI_MSI))
> +		tegra_pcie_enable_msi_interrupts(pp);
> +}
> +
> +static void config_gen3_gen4_eq_presets(struct tegra_pcie_dw *pcie)
> +{
> +	struct dw_pcie *pci = &pcie->pci;
> +	u32 val, offset, i;
> +
> +	/* Program init preset */
> +	for (i = 0; i < pcie->num_lanes; i++) {
> +		dw_pcie_read(pci->dbi_base + CAP_SPCIE_CAP_OFF
> +				 + (i * 2), 2, &val);
> +		val &= ~CAP_SPCIE_CAP_OFF_DSP_TX_PRESET0_MASK;
> +		val |= GEN3_GEN4_EQ_PRESET_INIT;
> +		val &= ~CAP_SPCIE_CAP_OFF_USP_TX_PRESET0_MASK;
> +		val |= (GEN3_GEN4_EQ_PRESET_INIT <<
> +			   CAP_SPCIE_CAP_OFF_USP_TX_PRESET0_SHIFT);
> +		dw_pcie_write(pci->dbi_base + CAP_SPCIE_CAP_OFF
> +				 + (i * 2), 2, val);
> +
> +		offset = dw_pcie_find_ext_capability(pci,
> +						     PCI_EXT_CAP_ID_PL_16GT) +
> +				PCI_PL_16GT_LE_CTRL;
> +		dw_pcie_read(pci->dbi_base + offset + i, 1, &val);
> +		val &= ~PCI_PL_16GT_LE_CTRL_DSP_TX_PRESET_MASK;
> +		val |= GEN3_GEN4_EQ_PRESET_INIT;
> +		val &= ~PCI_PL_16GT_LE_CTRL_USP_TX_PRESET_MASK;
> +		val |= (GEN3_GEN4_EQ_PRESET_INIT <<
> +			PCI_PL_16GT_LE_CTRL_USP_TX_PRESET_SHIFT);
> +		dw_pcie_write(pci->dbi_base + offset + i, 1, val);
> +	}
> +
> +	val = dw_pcie_readl_dbi(pci, GEN3_RELATED_OFF);
> +	val &= ~GEN3_RELATED_OFF_RATE_SHADOW_SEL_MASK;
> +	dw_pcie_writel_dbi(pci, GEN3_RELATED_OFF, val);
> +
> +	val = dw_pcie_readl_dbi(pci, GEN3_EQ_CONTROL_OFF);
> +	val &= ~GEN3_EQ_CONTROL_OFF_PSET_REQ_VEC_MASK;
> +	val |= (0x3ff << GEN3_EQ_CONTROL_OFF_PSET_REQ_VEC_SHIFT);
> +	val &= ~GEN3_EQ_CONTROL_OFF_FB_MODE_MASK;
> +	dw_pcie_writel_dbi(pci, GEN3_EQ_CONTROL_OFF, val);
> +
> +	val = dw_pcie_readl_dbi(pci, GEN3_RELATED_OFF);
> +	val &= ~GEN3_RELATED_OFF_RATE_SHADOW_SEL_MASK;
> +	val |= (0x1 << GEN3_RELATED_OFF_RATE_SHADOW_SEL_SHIFT);
> +	dw_pcie_writel_dbi(pci, GEN3_RELATED_OFF, val);
> +
> +	val = dw_pcie_readl_dbi(pci, GEN3_EQ_CONTROL_OFF);
> +	val &= ~GEN3_EQ_CONTROL_OFF_PSET_REQ_VEC_MASK;
> +	val |= (0x360 << GEN3_EQ_CONTROL_OFF_PSET_REQ_VEC_SHIFT);
> +	val &= ~GEN3_EQ_CONTROL_OFF_FB_MODE_MASK;
> +	dw_pcie_writel_dbi(pci, GEN3_EQ_CONTROL_OFF, val);
> +
> +	val = dw_pcie_readl_dbi(pci, GEN3_RELATED_OFF);
> +	val &= ~GEN3_RELATED_OFF_RATE_SHADOW_SEL_MASK;
> +	dw_pcie_writel_dbi(pci, GEN3_RELATED_OFF, val);
> +}
> +
> +static int tegra_pcie_dw_host_init(struct pcie_port *pp)
> +{
> +	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
> +	struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);
> +	u32 val, tmp, offset, speed;
> +	unsigned int count;
> +	u16 val_w;
> +
> +core_init:

Why should we restart from here ? What's the effect of the reset
on following registers set-up ?

> +	count = 200;
> +#if defined(CONFIG_PCIEASPM)
> +	offset = dw_pcie_find_ext_capability(pci, PCI_EXT_CAP_ID_L1SS);
> +	pcie->cfg_link_cap_l1sub = offset + PCI_L1SS_CAP;
> +#endif

Can we group it in the #ifdef above ?

> +	val = dw_pcie_readl_dbi(pci, PCI_IO_BASE);
> +	val &= ~(IO_BASE_IO_DECODE | IO_BASE_IO_DECODE_BIT8);
> +	dw_pcie_writel_dbi(pci, PCI_IO_BASE, val);
> +
> +	val = dw_pcie_readl_dbi(pci, PCI_PREF_MEMORY_BASE);
> +	val |= CFG_PREF_MEM_LIMIT_BASE_MEM_DECODE;
> +	val |= CFG_PREF_MEM_LIMIT_BASE_MEM_LIMIT_DECODE;
> +	dw_pcie_writel_dbi(pci, PCI_PREF_MEMORY_BASE, val);
> +
> +	dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, 0);
> +
> +	/* Configure FTS */
> +	val = dw_pcie_readl_dbi(pci, PORT_LOGIC_ACK_F_ASPM_CTRL);
> +	val &= ~(N_FTS_MASK << N_FTS_SHIFT);
> +	val |= N_FTS_VAL << N_FTS_SHIFT;
> +	dw_pcie_writel_dbi(pci, PORT_LOGIC_ACK_F_ASPM_CTRL, val);
> +
> +	val = dw_pcie_readl_dbi(pci, PORT_LOGIC_GEN2_CTRL);
> +	val &= ~FTS_MASK;
> +	val |= FTS_VAL;
> +	dw_pcie_writel_dbi(pci, PORT_LOGIC_GEN2_CTRL, val);
> +
> +	/* Enable as 0xFFFF0001 response for CRS */
> +	val = dw_pcie_readl_dbi(pci, PORT_LOGIC_AMBA_ERROR_RESPONSE_DEFAULT);
> +	val &= ~(AMBA_ERROR_RESPONSE_CRS_MASK << AMBA_ERROR_RESPONSE_CRS_SHIFT);
> +	val |= (AMBA_ERROR_RESPONSE_CRS_OKAY_FFFF0001 <<
> +		AMBA_ERROR_RESPONSE_CRS_SHIFT);
> +	dw_pcie_writel_dbi(pci, PORT_LOGIC_AMBA_ERROR_RESPONSE_DEFAULT, val);
> +
> +	/* Configure Max Speed from DT */
> +	if (pcie->max_speed && pcie->max_speed != -EINVAL) {
> +		val = dw_pcie_readl_dbi(pci, pcie->pcie_cap_base +
> +					PCI_EXP_LNKCAP);
> +		val &= ~PCI_EXP_LNKCAP_SLS;
> +		val |= pcie->max_speed;
> +		dw_pcie_writel_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKCAP,
> +				   val);
> +	}
> +
> +	/* Configure Max lane width from DT */
> +	val = dw_pcie_readl_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKCAP);
> +	val &= ~PCI_EXP_LNKCAP_MLW;
> +	val |= (pcie->num_lanes << PCI_EXP_LNKSTA_NLW_SHIFT);
> +	dw_pcie_writel_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKCAP, val);
> +
> +	config_gen3_gen4_eq_presets(pcie);
> +
> +#if defined(CONFIG_PCIEASPM)
> +	/* Enable ASPM counters */
> +	val = EVENT_COUNTER_ENABLE_ALL << EVENT_COUNTER_ENABLE_SHIFT;
> +	val |= EVENT_COUNTER_GROUP_5 << EVENT_COUNTER_GROUP_SEL_SHIFT;
> +	dw_pcie_writel_dbi(pci, event_cntr_ctrl_offset[pcie->cid], val);
> +
> +	/* Program T_cmrt and T_pwr_on values */
> +	val = dw_pcie_readl_dbi(pci, pcie->cfg_link_cap_l1sub);
> +	val &= ~(PCI_L1SS_CAP_CM_RESTORE_TIME | PCI_L1SS_CAP_P_PWR_ON_VALUE);
> +	val |= (pcie->aspm_cmrt << 8);
> +	val |= (pcie->aspm_pwr_on_t << 19);
> +	dw_pcie_writel_dbi(pci, pcie->cfg_link_cap_l1sub, val);
> +
> +	/* Program L0s and L1 entrance latencies */
> +	val = dw_pcie_readl_dbi(pci, PORT_LOGIC_ACK_F_ASPM_CTRL);
> +	val &= ~L0S_ENTRANCE_LAT_MASK;
> +	val |= (pcie->aspm_l0s_enter_lat << L0S_ENTRANCE_LAT_SHIFT);
> +	val |= ENTER_ASPM;
> +	dw_pcie_writel_dbi(pci, PORT_LOGIC_ACK_F_ASPM_CTRL, val);
> +#endif

It would be good to group all init guarded in CONFIG_PCIEASPM in
one function and related ifdef guard.

> +	val = dw_pcie_readl_dbi(pci, GEN3_RELATED_OFF);
> +	val &= ~GEN3_RELATED_OFF_GEN3_ZRXDC_NONCOMPL;
> +	dw_pcie_writel_dbi(pci, GEN3_RELATED_OFF, val);
> +
> +	if (pcie->update_fc_fixup) {
> +		val = dw_pcie_readl_dbi(pci, CFG_TIMER_CTRL_MAX_FUNC_NUM_OFF);
> +		val |= 0x1 << CFG_TIMER_CTRL_ACK_NAK_SHIFT;
> +		dw_pcie_writel_dbi(pci, CFG_TIMER_CTRL_MAX_FUNC_NUM_OFF, val);
> +	}
> +
> +	dw_pcie_setup_rc(pp);
> +
> +	clk_set_rate(pcie->core_clk, GEN4_CORE_CLK_FREQ);
> +
> +	/* Assert RST */
> +	val = appl_readl(pcie, APPL_PINMUX);
> +	val &= ~APPL_PINMUX_PEX_RST;
> +	appl_writel(pcie, val, APPL_PINMUX);

What's the effect of this RST on the registers programmed above ?

> +	usleep_range(100, 200);
> +
> +	/* Enable LTSSM */
> +	val = appl_readl(pcie, APPL_CTRL);
> +	val |= APPL_CTRL_LTSSM_EN;
> +	appl_writel(pcie, val, APPL_CTRL);
> +
> +	/* De-assert RST */
> +	val = appl_readl(pcie, APPL_PINMUX);
> +	val |= APPL_PINMUX_PEX_RST;
> +	appl_writel(pcie, val, APPL_PINMUX);
> +
> +	msleep(100);
> +
> +	val_w = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKSTA);
> +	while (!(val_w & PCI_EXP_LNKSTA_DLLLA)) {
> +		if (count) {
> +			dev_dbg(pci->dev, "Waiting for link up\n");
> +			usleep_range(1000, 2000);
> +			val_w = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base +
> +						  PCI_EXP_LNKSTA);
> +			count--;
> +			continue;
> +		}
> +
> +		val = appl_readl(pcie, APPL_DEBUG);
> +		val &= APPL_DEBUG_LTSSM_STATE_MASK;
> +		val >>= APPL_DEBUG_LTSSM_STATE_SHIFT;
> +		tmp = appl_readl(pcie, APPL_LINK_STATUS);
> +		tmp &= APPL_LINK_STATUS_RDLH_LINK_UP;
> +		if (!(val == 0x11 && !tmp)) {
> +			dev_info(pci->dev, "Link is down\n");
> +			return 0;
> +		}
> +
> +		dev_info(pci->dev, "Link is down in DLL");
> +		dev_info(pci->dev, "Trying again with DLFE disabled\n");
> +		/* Disable LTSSM */
> +		val = appl_readl(pcie, APPL_CTRL);
> +		val &= ~APPL_CTRL_LTSSM_EN;
> +		appl_writel(pcie, val, APPL_CTRL);
> +
> +		reset_control_assert(pcie->core_rst);
> +		reset_control_deassert(pcie->core_rst);
> +
> +		offset = dw_pcie_find_ext_capability(pci, PCI_EXT_CAP_ID_DLF);
> +		val = dw_pcie_readl_dbi(pci, offset + PCI_DLF_CAP);
> +		val &= ~PCI_DLF_EXCHANGE_ENABLE;
> +		dw_pcie_writel_dbi(pci, offset, val);
> +
> +		/* Retry now with DLF Exchange disabled */
> +		goto core_init;

See above.

> +	}
> +	dev_dbg(pci->dev, "Link is up\n");
> +
> +	speed = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKSTA) &
> +		PCI_EXP_LNKSTA_CLS;
> +	clk_set_rate(pcie->core_clk, pcie_gen_freq[speed - 1]);
> +
> +	tegra_pcie_enable_interrupts(pp);
> +
> +	return 0;
> +}
> +
> +static int tegra_pcie_dw_link_up(struct dw_pcie *pci)
> +{
> +	struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);
> +	u32 val = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKSTA);
> +
> +	return !!(val & PCI_EXP_LNKSTA_DLLLA);
> +}
> +
> +static void tegra_pcie_set_msi_vec_num(struct pcie_port *pp)
> +{
> +	pp->num_vectors = MAX_MSI_IRQS;
> +}
> +
> +static const struct dw_pcie_ops tegra_dw_pcie_ops = {
> +	.link_up = tegra_pcie_dw_link_up,
> +};
> +
> +static struct dw_pcie_host_ops tegra_pcie_dw_host_ops = {
> +	.rd_own_conf = tegra_pcie_dw_rd_own_conf,
> +	.wr_own_conf = tegra_pcie_dw_wr_own_conf,
> +	.host_init = tegra_pcie_dw_host_init,
> +	.set_num_vectors = tegra_pcie_set_msi_vec_num,
> +};
> +
> +static void tegra_pcie_disable_phy(struct tegra_pcie_dw *pcie)
> +{
> +	unsigned int phy_count = pcie->phy_count;
> +
> +	while (phy_count--) {
> +		phy_power_off(pcie->phys[phy_count]);
> +		phy_exit(pcie->phys[phy_count]);
> +	}
> +}
> +
> +static int tegra_pcie_enable_phy(struct tegra_pcie_dw *pcie)
> +{
> +	unsigned int i;
> +	int ret;
> +
> +	for (i = 0; i < pcie->phy_count; i++) {
> +		ret = phy_init(pcie->phys[i]);
> +		if (ret < 0)
> +			goto phy_power_off;
> +
> +		ret = phy_power_on(pcie->phys[i]);
> +		if (ret < 0)
> +			goto phy_exit;
> +	}
> +
> +	return 0;
> +
> +phy_power_off:
> +	while (i--) {
> +		phy_power_off(pcie->phys[i]);
> +phy_exit:
> +		phy_exit(pcie->phys[i]);
> +	}
> +
> +	return ret;
> +}
> +
> +static int tegra_pcie_dw_parse_dt(struct tegra_pcie_dw *pcie)
> +{
> +	struct device_node *np = pcie->dev->of_node;
> +	int ret;
> +
> +	ret = of_property_read_u32(np, "nvidia,aspm-cmrt-us", &pcie->aspm_cmrt);
> +	if (ret < 0) {
> +		dev_info(pcie->dev, "Failed to read ASPM T_cmrt: %d\n", ret);
> +		return ret;
> +	}
> +
> +	ret = of_property_read_u32(np, "nvidia,aspm-pwr-on-t-us",
> +				   &pcie->aspm_pwr_on_t);
> +	if (ret < 0)
> +		dev_info(pcie->dev, "Failed to read ASPM Power On time: %d\n",
> +			 ret);
> +
> +	ret = of_property_read_u32(np, "nvidia,aspm-l0s-entrance-latency-us",
> +				   &pcie->aspm_l0s_enter_lat);
> +	if (ret < 0)
> +		dev_info(pcie->dev,
> +			 "Failed to read ASPM L0s Entrance latency: %d\n", ret);
> +
> +	ret = of_property_read_u32(np, "num-lanes", &pcie->num_lanes);
> +	if (ret < 0) {
> +		dev_err(pcie->dev, "Failed to read num-lanes: %d\n", ret);
> +		return ret;
> +	}
> +
> +	pcie->max_speed = of_pci_get_max_link_speed(np);
> +
> +	ret = of_property_read_u32_index(np, "nvidia,bpmp", 1, &pcie->cid);
> +	if (ret) {
> +		dev_err(pcie->dev, "Failed to read Controller-ID: %d\n", ret);
> +		return ret;
> +	}
> +
> +	pcie->phy_count = of_property_count_strings(np, "phy-names");
> +	if (pcie->phy_count < 0) {
> +		dev_err(pcie->dev, "Failed to find PHY entries: %d\n",
> +			pcie->phy_count);
> +		return pcie->phy_count;
> +	}
> +
> +	if (of_property_read_bool(np, "nvidia,update-fc-fixup"))
> +		pcie->update_fc_fixup = true;
> +
> +	pcie->supports_clkreq =
> +		of_property_read_bool(pcie->dev->of_node, "supports-clkreq");
> +
> +	pcie->enable_cdm_check =
> +		of_property_read_bool(np, "snps,enable-cdm-check");
> +
> +	return 0;
> +}
> +
> +static int tegra_pcie_bpmp_set_ctrl_state(struct tegra_pcie_dw *pcie,
> +					  bool enable)
> +{
> +	struct mrq_uphy_response resp;
> +	struct tegra_bpmp_message msg;
> +	struct mrq_uphy_request req;
> +	int err;
> +
> +	if (pcie->cid == 5)
> +		return 0;
> +
> +	memset(&req, 0, sizeof(req));
> +	memset(&resp, 0, sizeof(resp));
> +
> +	req.cmd = CMD_UPHY_PCIE_CONTROLLER_STATE;
> +	req.controller_state.pcie_controller = pcie->cid;
> +	req.controller_state.enable = enable;
> +
> +	memset(&msg, 0, sizeof(msg));
> +	msg.mrq = MRQ_UPHY;
> +	msg.tx.data = &req;
> +	msg.tx.size = sizeof(req);
> +	msg.rx.data = &resp;
> +	msg.rx.size = sizeof(resp);
> +
> +	if (irqs_disabled())
> +		err = tegra_bpmp_transfer_atomic(pcie->bpmp, &msg);
> +	else
> +		err = tegra_bpmp_transfer(pcie->bpmp, &msg);
> +
> +	return err;
> +}
> +
> +static void tegra_pcie_downstream_dev_to_D0(struct tegra_pcie_dw *pcie)
> +{
> +	struct pcie_port *pp = &pcie->pci.pp;
> +	struct pci_dev *pdev;
> +	struct pci_bus *child;
> +
> +	list_for_each_entry(child, &pp->root_bus->children, node) {
> +		/* Bring downstream devices to D0 if they are not already in */
> +		if (child->parent == pp->root_bus)
> +			break;
> +	}
> +	list_for_each_entry(pdev, &child->devices, bus_list) {
> +		if (PCI_SLOT(pdev->devfn) == 0) {
> +			if (pci_set_power_state(pdev, PCI_D0))
> +				dev_err(pcie->dev,
> +					"Failed to transition %s to D0 state\n",
> +					dev_name(&pdev->dev));
> +		}
> +	}
> +}
> +
> +static int tegra_pcie_config_controller(struct tegra_pcie_dw *pcie,
> +					bool en_hw_hot_rst)
> +{
> +	int ret;
> +	u32 val;
> +
> +	ret = tegra_pcie_bpmp_set_ctrl_state(pcie, true);
> +	if (ret) {
> +		dev_err(pcie->dev,
> +			"Failed to enable controller %u: %d\n", pcie->cid, ret);
> +		return ret;
> +	}
> +
> +	ret = regulator_enable(pcie->pex_ctl_supply);
> +	if (ret < 0) {
> +		dev_err(pcie->dev, "Failed to enable regulator: %d\n", ret);
> +		goto fail_reg_en;
> +	}
> +
> +	ret = clk_prepare_enable(pcie->core_clk);
> +	if (ret) {
> +		dev_err(pcie->dev, "Failed to enable core clock: %d\n", ret);
> +		goto fail_core_clk;
> +	}
> +
> +	ret = reset_control_deassert(pcie->core_apb_rst);
> +	if (ret) {
> +		dev_err(pcie->dev, "Failed to deassert core APB reset: %d\n",
> +			ret);
> +		goto fail_core_apb_rst;
> +	}
> +
> +	if (en_hw_hot_rst) {
> +		/* Enable HW_HOT_RST mode */
> +		val = appl_readl(pcie, APPL_CTRL);
> +		val &= ~(APPL_CTRL_HW_HOT_RST_MODE_MASK <<
> +			 APPL_CTRL_HW_HOT_RST_MODE_SHIFT);
> +		val |= APPL_CTRL_HW_HOT_RST_EN;
> +		appl_writel(pcie, val, APPL_CTRL);
> +	}
> +
> +	ret = tegra_pcie_enable_phy(pcie);
> +	if (ret) {
> +		dev_err(pcie->dev, "Failed to enable PHY: %d\n", ret);
> +		goto fail_phy;
> +	}
> +
> +	/* Update CFG base address */
> +	appl_writel(pcie, pcie->dbi_res->start & APPL_CFG_BASE_ADDR_MASK,
> +		    APPL_CFG_BASE_ADDR);
> +
> +	/* Configure this core for RP mode operation */
> +	appl_writel(pcie, APPL_DM_TYPE_RP, APPL_DM_TYPE);
> +
> +	appl_writel(pcie, 0x0, APPL_CFG_SLCG_OVERRIDE);
> +
> +	val = appl_readl(pcie, APPL_CTRL);
> +	appl_writel(pcie, val | APPL_CTRL_SYS_PRE_DET_STATE, APPL_CTRL);
> +
> +	val = appl_readl(pcie, APPL_CFG_MISC);
> +	val |= (APPL_CFG_MISC_ARCACHE_VAL << APPL_CFG_MISC_ARCACHE_SHIFT);
> +	appl_writel(pcie, val, APPL_CFG_MISC);
> +
> +	if (!pcie->supports_clkreq) {
> +		val = appl_readl(pcie, APPL_PINMUX);
> +		val |= APPL_PINMUX_CLKREQ_OUT_OVRD_EN;
> +		val |= APPL_PINMUX_CLKREQ_OUT_OVRD;
> +		appl_writel(pcie, val, APPL_PINMUX);
> +	}
> +
> +	/* Update iATU_DMA base address */
> +	appl_writel(pcie,
> +		    pcie->atu_dma_res->start & APPL_CFG_IATU_DMA_BASE_ADDR_MASK,
> +		    APPL_CFG_IATU_DMA_BASE_ADDR);
> +
> +	reset_control_deassert(pcie->core_rst);
> +
> +	pcie->pcie_cap_base = dw_pcie_find_capability(&pcie->pci,
> +						      PCI_CAP_ID_EXP);
> +
> +#if defined(CONFIG_PCIEASPM)
> +	/* Disable ASPM-L1SS advertisement as there is no CLKREQ routing */
> +	if (!pcie->supports_clkreq) {
> +		disable_aspm_l11(pcie);
> +		disable_aspm_l12(pcie);
> +	}
> +#endif
> +	return ret;
> +
> +fail_phy:
> +	reset_control_assert(pcie->core_apb_rst);
> +fail_core_apb_rst:
> +	clk_disable_unprepare(pcie->core_clk);
> +fail_core_clk:
> +	regulator_disable(pcie->pex_ctl_supply);
> +fail_reg_en:
> +	tegra_pcie_bpmp_set_ctrl_state(pcie, false);
> +
> +	return ret;
> +}
> +
> +static int __deinit_controller(struct tegra_pcie_dw *pcie)
> +{
> +	int ret;
> +
> +	ret = reset_control_assert(pcie->core_rst);
> +	if (ret) {
> +		dev_err(pcie->dev, "Failed to assert \"core\" reset: %d\n",
> +			ret);
> +		return ret;
> +	}
> +	tegra_pcie_disable_phy(pcie);
> +	ret = reset_control_assert(pcie->core_apb_rst);
> +	if (ret) {
> +		dev_err(pcie->dev, "Failed to assert APB reset: %d\n", ret);
> +		return ret;
> +	}
> +	clk_disable_unprepare(pcie->core_clk);
> +	ret = regulator_disable(pcie->pex_ctl_supply);
> +	if (ret) {
> +		dev_err(pcie->dev, "Failed to disable regulator: %d\n", ret);
> +		return ret;
> +	}
> +	ret = tegra_pcie_bpmp_set_ctrl_state(pcie, false);
> +	if (ret) {
> +		dev_err(pcie->dev, "Failed to disable controller %d: %d\n",
> +			pcie->cid, ret);
> +		return ret;
> +	}
> +	return ret;
> +}
> +
> +static int tegra_pcie_init_controller(struct tegra_pcie_dw *pcie)
> +{
> +	struct dw_pcie *pci = &pcie->pci;
> +	struct pcie_port *pp = &pci->pp;
> +	int ret;
> +
> +	ret = tegra_pcie_config_controller(pcie, false);
> +	if (ret < 0)
> +		return ret;
> +
> +	pp->root_bus_nr = -1;

Useless, drop it, it is initialized in dw_pcie_host_init().

> +	pp->ops = &tegra_pcie_dw_host_ops;
> +
> +	ret = dw_pcie_host_init(pp);
> +	if (ret < 0) {
> +		dev_err(pcie->dev, "Failed to add PCIe port: %d\n", ret);
> +		goto fail_host_init;
> +	}
> +
> +	return 0;
> +
> +fail_host_init:
> +	return __deinit_controller(pcie);
> +}
> +
> +static int tegra_pcie_try_link_l2(struct tegra_pcie_dw *pcie)
> +{
> +	u32 val;
> +
> +	if (!tegra_pcie_dw_link_up(&pcie->pci))
> +		return 0;
> +
> +	val = appl_readl(pcie, APPL_RADM_STATUS);
> +	val |= APPL_PM_XMT_TURNOFF_STATE;
> +	appl_writel(pcie, val, APPL_RADM_STATUS);
> +
> +	return readl_poll_timeout_atomic(pcie->appl_base + APPL_DEBUG, val,
> +				 val & APPL_DEBUG_PM_LINKST_IN_L2_LAT,
> +				 1, PME_ACK_TIMEOUT);
> +}
> +
> +static void tegra_pcie_dw_pme_turnoff(struct tegra_pcie_dw *pcie)
> +{
> +	u32 data;
> +	int err;
> +
> +	if (!tegra_pcie_dw_link_up(&pcie->pci)) {
> +		dev_dbg(pcie->dev, "PCIe link is not up...!\n");
> +		return;
> +	}
> +
> +	if (tegra_pcie_try_link_l2(pcie)) {
> +		dev_info(pcie->dev, "Link didn't transition to L2 state\n");
> +		/*
> +		 * TX lane clock freq will reset to Gen1 only if link is in L2
> +		 * or detect state.
> +		 * So apply pex_rst to end point to force RP to go into detect
> +		 * state
> +		 */
> +		data = appl_readl(pcie, APPL_PINMUX);
> +		data &= ~APPL_PINMUX_PEX_RST;
> +		appl_writel(pcie, data, APPL_PINMUX);
> +
> +		err = readl_poll_timeout_atomic(pcie->appl_base + APPL_DEBUG,
> +						data,
> +						((data &
> +						APPL_DEBUG_LTSSM_STATE_MASK) >>
> +						APPL_DEBUG_LTSSM_STATE_SHIFT) ==
> +						LTSSM_STATE_PRE_DETECT,
> +						1, LTSSM_TIMEOUT);
> +		if (err) {
> +			dev_info(pcie->dev, "Link didn't go to detect state\n");
> +		} else {
> +			/* Disable LTSSM after link is in detect state */
> +			data = appl_readl(pcie, APPL_CTRL);
> +			data &= ~APPL_CTRL_LTSSM_EN;
> +			appl_writel(pcie, data, APPL_CTRL);
> +		}
> +	}
> +	/*
> +	 * DBI registers may not be accessible after this as PLL-E would be
> +	 * down depending on how CLKREQ is pulled by end point
> +	 */
> +	data = appl_readl(pcie, APPL_PINMUX);
> +	data |= (APPL_PINMUX_CLKREQ_OVERRIDE_EN | APPL_PINMUX_CLKREQ_OVERRIDE);
> +	/* Cut REFCLK to slot */
> +	data |= APPL_PINMUX_CLK_OUTPUT_IN_OVERRIDE_EN;
> +	data &= ~APPL_PINMUX_CLK_OUTPUT_IN_OVERRIDE;
> +	appl_writel(pcie, data, APPL_PINMUX);
> +}
> +
> +static int tegra_pcie_deinit_controller(struct tegra_pcie_dw *pcie)
> +{
> +	/*
> +	 * link doesn't go into L2 state with some of the endpoints with Tegra
> +	 * if they are not in D0 state. So, need to make sure that immediate
> +	 * downstream devices are in D0 state before sending PME_TurnOff to put
> +	 * link into L2 state
> +	 */
> +	tegra_pcie_downstream_dev_to_D0(pcie);
> +	dw_pcie_host_deinit(&pcie->pci.pp);
> +	tegra_pcie_dw_pme_turnoff(pcie);
> +	return __deinit_controller(pcie);
> +}
> +
> +static int tegra_pcie_config_rp(struct tegra_pcie_dw *pcie)
> +{
> +	struct pcie_port *pp = &pcie->pci.pp;
> +	struct device *dev = pcie->dev;
> +	char *name;
> +	int ret;
> +
> +	if (IS_ENABLED(CONFIG_PCI_MSI)) {
> +		pp->msi_irq = of_irq_get_byname(dev->of_node, "msi");
> +		if (!pp->msi_irq) {
> +			dev_err(dev, "Failed to get MSI interrupt\n");
> +			return -ENODEV;
> +		}
> +	}
> +
> +	pm_runtime_enable(dev);
> +	ret = pm_runtime_get_sync(dev);
> +	if (ret < 0) {
> +		dev_err(dev, "Failed to get runtime sync for PCIe dev: %d\n",
> +			ret);
> +		pm_runtime_disable(dev);
> +		return ret;
> +	}
> +
> +	tegra_pcie_init_controller(pcie);
> +
> +	pcie->link_state = tegra_pcie_dw_link_up(&pcie->pci);
> +
> +	if (!pcie->link_state) {
> +		ret = -ENOMEDIUM;
> +		goto fail_host_init;
> +	}
> +
> +	name = devm_kasprintf(dev, GFP_KERNEL, "%pOFP", dev->of_node);
> +	if (!name) {
> +		ret = -ENOMEM;
> +		goto fail_host_init;
> +	}
> +
> +	pcie->debugfs = debugfs_create_dir(name, NULL);
> +	if (!pcie->debugfs)
> +		dev_err(dev, "Failed to create debugfs\n");
> +	else
> +		init_debugfs(pcie);
> +
> +	return ret;
> +
> +fail_host_init:
> +	tegra_pcie_deinit_controller(pcie);
> +	pm_runtime_put_sync(dev);
> +	pm_runtime_disable(dev);
> +	return ret;
> +}
> +
> +static const struct tegra_pcie_soc tegra_pcie_rc_of_data = {
> +	.mode = DW_PCIE_RC_TYPE,
> +};
> +
> +static const struct of_device_id tegra_pcie_dw_of_match[] = {
> +	{
> +		.compatible = "nvidia,tegra194-pcie",
> +		.data = &tegra_pcie_rc_of_data,
> +	},
> +	{},
> +};
> +MODULE_DEVICE_TABLE(of, tegra_pcie_dw_of_match);

Move it closer to end of file where MODULE_DESCRIPTION() et al are.

> +
> +static int tegra_pcie_dw_probe(struct platform_device *pdev)
> +{
> +	const struct tegra_pcie_soc *data;
> +	struct device *dev = &pdev->dev;
> +	struct resource *atu_dma_res;
> +	struct tegra_pcie_dw *pcie;
> +	struct resource *dbi_res;
> +	struct pcie_port *pp;
> +	struct dw_pcie *pci;
> +	struct phy **phys;
> +	char *name;
> +	int ret;
> +	u32 i;
> +
> +	pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL);
> +	if (!pcie)
> +		return -ENOMEM;
> +
> +	pci = &pcie->pci;
> +	pci->dev = &pdev->dev;
> +	pci->ops = &tegra_dw_pcie_ops;
> +	pp = &pci->pp;
> +	pcie->dev = &pdev->dev;
> +
> +	data = (struct tegra_pcie_soc *)of_device_get_match_data(dev);
> +	if (!data)
> +		return -EINVAL;
> +	pcie->mode = (enum dw_pcie_device_mode)data->mode;
> +
> +	ret = tegra_pcie_dw_parse_dt(pcie);
> +	if (ret < 0) {
> +		dev_err(dev, "Failed to parse device tree: %d\n", ret);
> +		return ret;
> +	}
> +
> +	pcie->pex_ctl_supply = devm_regulator_get(dev, "vddio-pex-ctl");
> +	if (IS_ERR(pcie->pex_ctl_supply)) {
> +		dev_err(dev, "Failed to get regulator: %ld\n",
> +			PTR_ERR(pcie->pex_ctl_supply));
> +		return PTR_ERR(pcie->pex_ctl_supply);
> +	}
> +
> +	pcie->core_clk = devm_clk_get(dev, "core");
> +	if (IS_ERR(pcie->core_clk)) {
> +		dev_err(dev, "Failed to get core clock: %ld\n",
> +			PTR_ERR(pcie->core_clk));
> +		return PTR_ERR(pcie->core_clk);
> +	}
> +
> +	pcie->appl_res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
> +						      "appl");
> +	if (!pcie->appl_res) {
> +		dev_err(dev, "Failed to find \"appl\" region\n");
> +		return PTR_ERR(pcie->appl_res);
> +	}

Add an empty line.

> +	pcie->appl_base = devm_ioremap_resource(dev, pcie->appl_res);
> +	if (IS_ERR(pcie->appl_base))
> +		return PTR_ERR(pcie->appl_base);
> +
> +	pcie->core_apb_rst = devm_reset_control_get(dev, "apb");
> +	if (IS_ERR(pcie->core_apb_rst)) {
> +		dev_err(dev, "Failed to get APB reset: %ld\n",
> +			PTR_ERR(pcie->core_apb_rst));
> +		return PTR_ERR(pcie->core_apb_rst);
> +	}
> +
> +	phys = devm_kcalloc(dev, pcie->phy_count, sizeof(*phys), GFP_KERNEL);
> +	if (!phys)
> +		return PTR_ERR(phys);
> +
> +	for (i = 0; i < pcie->phy_count; i++) {
> +		name = kasprintf(GFP_KERNEL, "p2u-%u", i);
> +		if (!name) {
> +			dev_err(dev, "Failed to create P2U string\n");
> +			return -ENOMEM;
> +		}
> +		phys[i] = devm_phy_get(dev, name);
> +		kfree(name);
> +		if (IS_ERR(phys[i])) {
> +			ret = PTR_ERR(phys[i]);
> +			dev_err(dev, "Failed to get PHY: %d\n", ret);
> +			return ret;
> +		}
> +	}
> +
> +	pcie->phys = phys;
> +
> +	dbi_res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi");
> +	if (!dbi_res) {
> +		dev_err(dev, "Failed to find \"dbi\" region\n");
> +		return PTR_ERR(dbi_res);
> +	}
> +	pcie->dbi_res = dbi_res;
> +
> +	pci->dbi_base = devm_ioremap_resource(dev, dbi_res);
> +	if (IS_ERR(pci->dbi_base))
> +		return PTR_ERR(pci->dbi_base);
> +
> +	/* Tegra HW locates DBI2 at a fixed offset from DBI */
> +	pci->dbi_base2 = pci->dbi_base + 0x1000;
> +
> +	atu_dma_res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
> +						   "atu_dma");
> +	if (!atu_dma_res) {
> +		dev_err(dev, "Failed to find \"atu_dma\" region\n");
> +		return PTR_ERR(atu_dma_res);
> +	}
> +	pcie->atu_dma_res = atu_dma_res;

Add an empty line.

I have just skimmed over it, I need more time to review the driver more
thoroughly, I will try to go over it tomorrow and we can see if we can
hit v5.3 but I can't guarantee anything, sorry but there was a
significant backlog on the PCI patch queue.

The rest of the series is fine but it is useless to merge it
without this patch so let's see how it goes.

Lorenzo

> +	pci->atu_base = devm_ioremap_resource(dev, atu_dma_res);
> +	if (IS_ERR(pci->atu_base))
> +		return PTR_ERR(pci->atu_base);
> +
> +	pcie->core_rst = devm_reset_control_get(dev, "core");
> +	if (IS_ERR(pcie->core_rst)) {
> +		dev_err(dev, "Failed to get core reset: %ld\n",
> +			PTR_ERR(pcie->core_rst));
> +		return PTR_ERR(pcie->core_rst);
> +	}
> +
> +	pp->irq = platform_get_irq_byname(pdev, "intr");
> +	if (!pp->irq) {
> +		dev_err(dev, "Failed to get \"intr\" interrupt\n");
> +		return -ENODEV;
> +	}
> +
> +	ret = devm_request_irq(dev, pp->irq, tegra_pcie_irq_handler,
> +			       IRQF_SHARED, "tegra-pcie-intr", pcie);
> +	if (ret) {
> +		dev_err(dev, "Failed to request IRQ %d: %d\n", pp->irq, ret);
> +		return ret;
> +	}
> +
> +	pcie->bpmp = tegra_bpmp_get(dev);
> +	if (IS_ERR(pcie->bpmp))
> +		return PTR_ERR(pcie->bpmp);
> +
> +	platform_set_drvdata(pdev, pcie);
> +
> +	if (pcie->mode == DW_PCIE_RC_TYPE) {
> +		ret = tegra_pcie_config_rp(pcie);
> +		if (ret && ret != -ENOMEDIUM)
> +			goto fail;
> +		else
> +			return 0;
> +	}
> +
> +fail:
> +	tegra_bpmp_put(pcie->bpmp);
> +	return ret;
> +}
> +
> +static int tegra_pcie_dw_remove(struct platform_device *pdev)
> +{
> +	struct tegra_pcie_dw *pcie = platform_get_drvdata(pdev);
> +
> +	if (pcie->mode != DW_PCIE_RC_TYPE)
> +		return 0;
> +
> +	if (!pcie->link_state)
> +		return 0;
> +
> +	debugfs_remove_recursive(pcie->debugfs);
> +	tegra_pcie_deinit_controller(pcie);
> +	pm_runtime_put_sync(pcie->dev);
> +	pm_runtime_disable(pcie->dev);
> +	tegra_bpmp_put(pcie->bpmp);
> +
> +	return 0;
> +}
> +
> +static int tegra_pcie_dw_suspend_late(struct device *dev)
> +{
> +	struct tegra_pcie_dw *pcie = dev_get_drvdata(dev);
> +	u32 val;
> +
> +	if (!pcie->link_state)
> +		return 0;
> +
> +	/* Enable HW_HOT_RST mode */
> +	val = appl_readl(pcie, APPL_CTRL);
> +	val &= ~(APPL_CTRL_HW_HOT_RST_MODE_MASK <<
> +		 APPL_CTRL_HW_HOT_RST_MODE_SHIFT);
> +	val |= APPL_CTRL_HW_HOT_RST_EN;
> +	appl_writel(pcie, val, APPL_CTRL);
> +
> +	return 0;
> +}
> +
> +static int tegra_pcie_dw_suspend_noirq(struct device *dev)
> +{
> +	struct tegra_pcie_dw *pcie = dev_get_drvdata(dev);
> +
> +	if (!pcie->link_state)
> +		return 0;
> +
> +	/* Save MSI interrupt vector */
> +	pcie->msi_ctrl_int = dw_pcie_readl_dbi(&pcie->pci,
> +					       PORT_LOGIC_MSI_CTRL_INT_0_EN);
> +	tegra_pcie_downstream_dev_to_D0(pcie);
> +	tegra_pcie_dw_pme_turnoff(pcie);
> +	return __deinit_controller(pcie);
> +}
> +
> +static int tegra_pcie_dw_resume_noirq(struct device *dev)
> +{
> +	struct tegra_pcie_dw *pcie = dev_get_drvdata(dev);
> +	int ret;
> +
> +	if (!pcie->link_state)
> +		return 0;
> +
> +	ret = tegra_pcie_config_controller(pcie, true);
> +	if (ret < 0)
> +		return ret;
> +
> +	ret = tegra_pcie_dw_host_init(&pcie->pci.pp);
> +	if (ret < 0) {
> +		dev_err(dev, "Failed to init host: %d\n", ret);
> +		goto fail_host_init;
> +	}
> +
> +	/* Restore MSI interrupt vector */
> +	dw_pcie_writel_dbi(&pcie->pci, PORT_LOGIC_MSI_CTRL_INT_0_EN,
> +			   pcie->msi_ctrl_int);
> +
> +	return 0;
> +fail_host_init:
> +	return __deinit_controller(pcie);
> +}
> +
> +static int tegra_pcie_dw_resume_early(struct device *dev)
> +{
> +	struct tegra_pcie_dw *pcie = dev_get_drvdata(dev);
> +	u32 val;
> +
> +	if (!pcie->link_state)
> +		return 0;
> +
> +	/* Disable HW_HOT_RST mode */
> +	val = appl_readl(pcie, APPL_CTRL);
> +	val &= ~(APPL_CTRL_HW_HOT_RST_MODE_MASK <<
> +		 APPL_CTRL_HW_HOT_RST_MODE_SHIFT);
> +	val |= APPL_CTRL_HW_HOT_RST_MODE_IMDT_RST <<
> +	       APPL_CTRL_HW_HOT_RST_MODE_SHIFT;
> +	val &= ~APPL_CTRL_HW_HOT_RST_EN;
> +	appl_writel(pcie, val, APPL_CTRL);
> +
> +	return 0;
> +}
> +
> +static void tegra_pcie_dw_shutdown(struct platform_device *pdev)
> +{
> +	struct tegra_pcie_dw *pcie = platform_get_drvdata(pdev);
> +
> +	if (pcie->mode != DW_PCIE_RC_TYPE)
> +		return;
> +
> +	if (!pcie->link_state)
> +		return;
> +
> +	debugfs_remove_recursive(pcie->debugfs);
> +	tegra_pcie_downstream_dev_to_D0(pcie);
> +
> +	disable_irq(pcie->pci.pp.irq);
> +	if (IS_ENABLED(CONFIG_PCI_MSI))
> +		disable_irq(pcie->pci.pp.msi_irq);
> +
> +	tegra_pcie_dw_pme_turnoff(pcie);
> +	__deinit_controller(pcie);
> +}
> +
> +static const struct dev_pm_ops tegra_pcie_dw_pm_ops = {
> +	.suspend_late = tegra_pcie_dw_suspend_late,
> +	.suspend_noirq = tegra_pcie_dw_suspend_noirq,
> +	.resume_noirq = tegra_pcie_dw_resume_noirq,
> +	.resume_early = tegra_pcie_dw_resume_early,
> +};
> +
> +static struct platform_driver tegra_pcie_dw_driver = {
> +	.probe = tegra_pcie_dw_probe,
> +	.remove = tegra_pcie_dw_remove,
> +	.shutdown = tegra_pcie_dw_shutdown,
> +	.driver = {
> +		.name	= "tegra194-pcie",
> +		.pm = &tegra_pcie_dw_pm_ops,
> +		.of_match_table = tegra_pcie_dw_of_match,
> +	},
> +};
> +module_platform_driver(tegra_pcie_dw_driver);
> +
> +MODULE_AUTHOR("Vidya Sagar <vidyas@nvidia.com>");
> +MODULE_DESCRIPTION("NVIDIA PCIe host controller driver");
> +MODULE_LICENSE("GPL v2");
> -- 
> 2.17.1
> 

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH V13 08/12] dt-bindings: Add PCIe supports-clkreq property
  2019-07-10 15:28   ` Lorenzo Pieralisi
@ 2019-07-10 17:14     ` Vidya Sagar
  0 siblings, 0 replies; 34+ messages in thread
From: Vidya Sagar @ 2019-07-10 17:14 UTC (permalink / raw)
  To: Lorenzo Pieralisi
  Cc: bhelgaas, robh+dt, mark.rutland, thierry.reding, jonathanh,
	kishon, catalin.marinas, will.deacon, jingoohan1,
	gustavo.pimentel, digetx, mperttunen, linux-pci, devicetree,
	linux-tegra, linux-kernel, linux-arm-kernel, kthota, mmaddireddy,
	sagar.tv

On 7/10/2019 8:58 PM, Lorenzo Pieralisi wrote:
> On Wed, Jul 10, 2019 at 11:52:08AM +0530, Vidya Sagar wrote:
>> Some host controllers need to know the existence of clkreq signal routing to
>> downstream devices to be able to advertise low power features like ASPM L1
>> substates. Without clkreq signal routing being present, enabling ASPM L1 sub
>> states might lead to downstream devices falling off the bus. Hence a new device
> 
> You mean "being disconnected from the bus" right ? I will update it.
Yes. I meant the same.

> 
> Lorenzo
> 
>> tree property 'supports-clkreq' is added to make such host controllers
>> aware of clkreq signal routing to downstream devices.
>>
>> Signed-off-by: Vidya Sagar <vidyas@nvidia.com>
>> Reviewed-by: Rob Herring <robh@kernel.org>
>> Reviewed-by: Thierry Reding <treding@nvidia.com>
>> ---
>> V13:
>> * None
>>
>> V12:
>> * Rebased on top of linux-next top of the tree
>>
>> V11:
>> * None
>>
>> V10:
>> * None
>>
>> V9:
>> * None
>>
>> V8:
>> * None
>>
>> V7:
>> * None
>>
>> V6:
>> * s/Documentation\/devicetree/dt-bindings/ in the subject
>>
>> V5:
>> * None
>>
>> V4:
>> * Rebased on top of linux-next top of the tree
>>
>> V3:
>> * None
>>
>> V2:
>> * This is a new patch in v2 series
>>
>>   Documentation/devicetree/bindings/pci/pci.txt | 5 +++++
>>   1 file changed, 5 insertions(+)
>>
>> diff --git a/Documentation/devicetree/bindings/pci/pci.txt b/Documentation/devicetree/bindings/pci/pci.txt
>> index 2a5d91024059..29bcbd88f457 100644
>> --- a/Documentation/devicetree/bindings/pci/pci.txt
>> +++ b/Documentation/devicetree/bindings/pci/pci.txt
>> @@ -27,6 +27,11 @@ driver implementation may support the following properties:
>>   - reset-gpios:
>>      If present this property specifies PERST# GPIO. Host drivers can parse the
>>      GPIO and apply fundamental reset to endpoints.
>> +- supports-clkreq:
>> +   If present this property specifies that CLKREQ signal routing exists from
>> +   root port to downstream device and host bridge drivers can do programming
>> +   which depends on CLKREQ signal existence. For example, programming root port
>> +   not to advertise ASPM L1 Sub-States support if there is no CLKREQ signal.
>>   
>>   PCI-PCI Bridge properties
>>   -------------------------
>> -- 
>> 2.17.1
>>


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH V13 12/12] PCI: tegra: Add Tegra194 PCIe support
  2019-07-10 17:02   ` Lorenzo Pieralisi
@ 2019-07-10 17:26     ` Vidya Sagar
  0 siblings, 0 replies; 34+ messages in thread
From: Vidya Sagar @ 2019-07-10 17:26 UTC (permalink / raw)
  To: Lorenzo Pieralisi
  Cc: bhelgaas, robh+dt, mark.rutland, thierry.reding, jonathanh,
	kishon, catalin.marinas, will.deacon, jingoohan1,
	gustavo.pimentel, digetx, mperttunen, linux-pci, devicetree,
	linux-tegra, linux-kernel, linux-arm-kernel, kthota, mmaddireddy,
	sagar.tv

On 7/10/2019 10:32 PM, Lorenzo Pieralisi wrote:
> On Wed, Jul 10, 2019 at 11:52:12AM +0530, Vidya Sagar wrote:
> 
> [...]
> 
>> +#if defined(CONFIG_PCIEASPM)
>> +static void disable_aspm_l11(struct tegra_pcie_dw *pcie)
>> +{
>> +	u32 val;
>> +
>> +	val = dw_pcie_readl_dbi(&pcie->pci, pcie->cfg_link_cap_l1sub);
>> +	val &= ~PCI_L1SS_CAP_ASPM_L1_1;
>> +	dw_pcie_writel_dbi(&pcie->pci, pcie->cfg_link_cap_l1sub, val);
>> +}
>> +
>> +static void disable_aspm_l12(struct tegra_pcie_dw *pcie)
>> +{
>> +	u32 val;
>> +
>> +	val = dw_pcie_readl_dbi(&pcie->pci, pcie->cfg_link_cap_l1sub);
>> +	val &= ~PCI_L1SS_CAP_ASPM_L1_2;
>> +	dw_pcie_writel_dbi(&pcie->pci, pcie->cfg_link_cap_l1sub, val);
>> +}
>> +
>> +static inline u32 event_counter_prog(struct tegra_pcie_dw *pcie, u32 event)
>> +{
>> +	u32 val;
>> +
>> +	val = dw_pcie_readl_dbi(&pcie->pci, event_cntr_ctrl_offset[pcie->cid]);
>> +	val &= ~(EVENT_COUNTER_EVENT_SEL_MASK << EVENT_COUNTER_EVENT_SEL_SHIFT);
>> +	val |= EVENT_COUNTER_GROUP_5 << EVENT_COUNTER_GROUP_SEL_SHIFT;
>> +	val |= event << EVENT_COUNTER_EVENT_SEL_SHIFT;
>> +	val |= EVENT_COUNTER_ENABLE_ALL << EVENT_COUNTER_ENABLE_SHIFT;
>> +	dw_pcie_writel_dbi(&pcie->pci, event_cntr_ctrl_offset[pcie->cid], val);
>> +	val = dw_pcie_readl_dbi(&pcie->pci, event_cntr_data_offset[pcie->cid]);
>> +	return val;
>> +}
>> +
>> +static int aspm_state_cnt(struct seq_file *s, void *data)
>> +{
>> +	struct tegra_pcie_dw *pcie = (struct tegra_pcie_dw *)
>> +				     dev_get_drvdata(s->private);
>> +	u32 val;
>> +
>> +	seq_printf(s, "Tx L0s entry count : %u\n",
>> +		   event_counter_prog(pcie, EVENT_COUNTER_EVENT_Tx_L0S));
>> +
>> +	seq_printf(s, "Rx L0s entry count : %u\n",
>> +		   event_counter_prog(pcie, EVENT_COUNTER_EVENT_Rx_L0S));
>> +
>> +	seq_printf(s, "Link L1 entry count : %u\n",
>> +		   event_counter_prog(pcie, EVENT_COUNTER_EVENT_L1));
>> +
>> +	seq_printf(s, "Link L1.1 entry count : %u\n",
>> +		   event_counter_prog(pcie, EVENT_COUNTER_EVENT_L1_1));
>> +
>> +	seq_printf(s, "Link L1.2 entry count : %u\n",
>> +		   event_counter_prog(pcie, EVENT_COUNTER_EVENT_L1_2));
>> +
>> +	/* Clear all counters */
>> +	dw_pcie_writel_dbi(&pcie->pci, event_cntr_ctrl_offset[pcie->cid],
>> +			   EVENT_COUNTER_ALL_CLEAR);
>> +
>> +	/* Re-enable counting */
>> +	val = EVENT_COUNTER_ENABLE_ALL << EVENT_COUNTER_ENABLE_SHIFT;
>> +	val |= EVENT_COUNTER_GROUP_5 << EVENT_COUNTER_GROUP_SEL_SHIFT;
>> +	dw_pcie_writel_dbi(&pcie->pci, event_cntr_ctrl_offset[pcie->cid], val);
>> +
>> +	return 0;
>> +}
>> +#endif
>> +
>> +static int init_debugfs(struct tegra_pcie_dw *pcie)
>> +{
>> +#if defined(CONFIG_PCIEASPM)
>> +	struct dentry *d;
>> +
>> +	d = debugfs_create_devm_seqfile(pcie->dev, "aspm_state_cnt",
>> +					pcie->debugfs, aspm_state_cnt);
>> +	if (IS_ERR_OR_NULL(d))
>> +		dev_err(pcie->dev,
>> +			"Failed to create debugfs file \"aspm_state_cnt\"\n");
>> +#endif
>> +	return 0;
>> +}
> 
> I prefer writing it:
> 
> #if defined(CONFIG_PCIEASPM)
> static void disable_aspm_l11(struct tegra_pcie_dw *pcie)
> {
> ...
> }
> 
> static void disable_aspm_l12(struct tegra_pcie_dw *pcie)
> {
> ...
> }
> 
> static inline u32 event_counter_prog(struct tegra_pcie_dw *pcie, u32 event)
> {
> ...
> }
> 
> static int aspm_state_cnt(struct seq_file *s, void *data)
> {
> ...
> }
> 
> static int init_debugfs(struct tegra_pcie_dw *pcie)
> {
> 	struct dentry *d;
> 
> 	d = debugfs_create_devm_seqfile(pcie->dev, "aspm_state_cnt",
> 					pcie->debugfs, aspm_state_cnt);
> 	if (IS_ERR_OR_NULL(d))
> 		dev_err(pcie->dev,
> 			"Failed to create debugfs file \"aspm_state_cnt\"\n");
> 	return 0;
> }
> #else
> static inline int init_debugfs(struct tegra_pcie_dw *pcie) { return 0; }
> #endif
I'll address it in the next patch.

> 
>> +
>> +static void tegra_pcie_enable_system_interrupts(struct pcie_port *pp)
>> +{
>> +	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
>> +	struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);
>> +	u32 val;
>> +	u16 val_w;
>> +
>> +	val = appl_readl(pcie, APPL_INTR_EN_L0_0);
>> +	val |= APPL_INTR_EN_L0_0_LINK_STATE_INT_EN;
>> +	appl_writel(pcie, val, APPL_INTR_EN_L0_0);
>> +
>> +	val = appl_readl(pcie, APPL_INTR_EN_L1_0_0);
>> +	val |= APPL_INTR_EN_L1_0_0_LINK_REQ_RST_NOT_INT_EN;
>> +	appl_writel(pcie, val, APPL_INTR_EN_L1_0_0);
>> +
>> +	if (pcie->enable_cdm_check) {
>> +		val = appl_readl(pcie, APPL_INTR_EN_L0_0);
>> +		val |= APPL_INTR_EN_L0_0_CDM_REG_CHK_INT_EN;
>> +		appl_writel(pcie, val, APPL_INTR_EN_L0_0);
>> +
>> +		val = appl_readl(pcie, APPL_INTR_EN_L1_18);
>> +		val |= APPL_INTR_EN_L1_18_CDM_REG_CHK_CMP_ERR;
>> +		val |= APPL_INTR_EN_L1_18_CDM_REG_CHK_LOGIC_ERR;
>> +		appl_writel(pcie, val, APPL_INTR_EN_L1_18);
>> +	}
>> +
>> +	val_w = dw_pcie_readw_dbi(&pcie->pci, pcie->pcie_cap_base +
>> +				  PCI_EXP_LNKSTA);
>> +	pcie->init_link_width = (val_w & PCI_EXP_LNKSTA_NLW) >>
>> +				PCI_EXP_LNKSTA_NLW_SHIFT;
>> +
>> +	val_w = dw_pcie_readw_dbi(&pcie->pci, pcie->pcie_cap_base +
>> +				  PCI_EXP_LNKCTL);
>> +	val_w |= PCI_EXP_LNKCTL_LBMIE;
>> +	dw_pcie_writew_dbi(&pcie->pci, pcie->pcie_cap_base + PCI_EXP_LNKCTL,
>> +			   val_w);
>> +}
>> +
>> +static void tegra_pcie_enable_legacy_interrupts(struct pcie_port *pp)
>> +{
>> +	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
>> +	struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);
>> +	u32 val;
>> +
>> +	/* Enable legacy interrupt generation */
>> +	val = appl_readl(pcie, APPL_INTR_EN_L0_0);
>> +	val |= APPL_INTR_EN_L0_0_SYS_INTR_EN;
>> +	val |= APPL_INTR_EN_L0_0_INT_INT_EN;
>> +	appl_writel(pcie, val, APPL_INTR_EN_L0_0);
>> +
>> +	val = appl_readl(pcie, APPL_INTR_EN_L1_8_0);
>> +	val |= APPL_INTR_EN_L1_8_INTX_EN;
>> +	val |= APPL_INTR_EN_L1_8_AUTO_BW_INT_EN;
>> +	val |= APPL_INTR_EN_L1_8_BW_MGT_INT_EN;
>> +	if (IS_ENABLED(CONFIG_PCIEAER))
>> +		val |= APPL_INTR_EN_L1_8_AER_INT_EN;
>> +	appl_writel(pcie, val, APPL_INTR_EN_L1_8_0);
>> +}
>> +
>> +static void tegra_pcie_enable_msi_interrupts(struct pcie_port *pp)
>> +{
>> +	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
>> +	struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);
>> +	u32 val;
>> +
>> +	dw_pcie_msi_init(pp);
>> +
>> +	/* Enable MSI interrupt generation */
>> +	val = appl_readl(pcie, APPL_INTR_EN_L0_0);
>> +	val |= APPL_INTR_EN_L0_0_SYS_MSI_INTR_EN;
>> +	val |= APPL_INTR_EN_L0_0_MSI_RCV_INT_EN;
>> +	appl_writel(pcie, val, APPL_INTR_EN_L0_0);
>> +}
>> +
>> +static void tegra_pcie_enable_interrupts(struct pcie_port *pp)
>> +{
>> +	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
>> +	struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);
>> +
>> +	/* Clear interrupt statuses before enabling interrupts */
>> +	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L0);
>> +	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_0_0);
>> +	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_1);
>> +	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_2);
>> +	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_3);
>> +	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_6);
>> +	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_7);
>> +	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_8_0);
>> +	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_9);
>> +	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_10);
>> +	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_11);
>> +	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_13);
>> +	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_14);
>> +	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_15);
>> +	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_17);
>> +
>> +	tegra_pcie_enable_system_interrupts(pp);
>> +	tegra_pcie_enable_legacy_interrupts(pp);
>> +	if (IS_ENABLED(CONFIG_PCI_MSI))
>> +		tegra_pcie_enable_msi_interrupts(pp);
>> +}
>> +
>> +static void config_gen3_gen4_eq_presets(struct tegra_pcie_dw *pcie)
>> +{
>> +	struct dw_pcie *pci = &pcie->pci;
>> +	u32 val, offset, i;
>> +
>> +	/* Program init preset */
>> +	for (i = 0; i < pcie->num_lanes; i++) {
>> +		dw_pcie_read(pci->dbi_base + CAP_SPCIE_CAP_OFF
>> +				 + (i * 2), 2, &val);
>> +		val &= ~CAP_SPCIE_CAP_OFF_DSP_TX_PRESET0_MASK;
>> +		val |= GEN3_GEN4_EQ_PRESET_INIT;
>> +		val &= ~CAP_SPCIE_CAP_OFF_USP_TX_PRESET0_MASK;
>> +		val |= (GEN3_GEN4_EQ_PRESET_INIT <<
>> +			   CAP_SPCIE_CAP_OFF_USP_TX_PRESET0_SHIFT);
>> +		dw_pcie_write(pci->dbi_base + CAP_SPCIE_CAP_OFF
>> +				 + (i * 2), 2, val);
>> +
>> +		offset = dw_pcie_find_ext_capability(pci,
>> +						     PCI_EXT_CAP_ID_PL_16GT) +
>> +				PCI_PL_16GT_LE_CTRL;
>> +		dw_pcie_read(pci->dbi_base + offset + i, 1, &val);
>> +		val &= ~PCI_PL_16GT_LE_CTRL_DSP_TX_PRESET_MASK;
>> +		val |= GEN3_GEN4_EQ_PRESET_INIT;
>> +		val &= ~PCI_PL_16GT_LE_CTRL_USP_TX_PRESET_MASK;
>> +		val |= (GEN3_GEN4_EQ_PRESET_INIT <<
>> +			PCI_PL_16GT_LE_CTRL_USP_TX_PRESET_SHIFT);
>> +		dw_pcie_write(pci->dbi_base + offset + i, 1, val);
>> +	}
>> +
>> +	val = dw_pcie_readl_dbi(pci, GEN3_RELATED_OFF);
>> +	val &= ~GEN3_RELATED_OFF_RATE_SHADOW_SEL_MASK;
>> +	dw_pcie_writel_dbi(pci, GEN3_RELATED_OFF, val);
>> +
>> +	val = dw_pcie_readl_dbi(pci, GEN3_EQ_CONTROL_OFF);
>> +	val &= ~GEN3_EQ_CONTROL_OFF_PSET_REQ_VEC_MASK;
>> +	val |= (0x3ff << GEN3_EQ_CONTROL_OFF_PSET_REQ_VEC_SHIFT);
>> +	val &= ~GEN3_EQ_CONTROL_OFF_FB_MODE_MASK;
>> +	dw_pcie_writel_dbi(pci, GEN3_EQ_CONTROL_OFF, val);
>> +
>> +	val = dw_pcie_readl_dbi(pci, GEN3_RELATED_OFF);
>> +	val &= ~GEN3_RELATED_OFF_RATE_SHADOW_SEL_MASK;
>> +	val |= (0x1 << GEN3_RELATED_OFF_RATE_SHADOW_SEL_SHIFT);
>> +	dw_pcie_writel_dbi(pci, GEN3_RELATED_OFF, val);
>> +
>> +	val = dw_pcie_readl_dbi(pci, GEN3_EQ_CONTROL_OFF);
>> +	val &= ~GEN3_EQ_CONTROL_OFF_PSET_REQ_VEC_MASK;
>> +	val |= (0x360 << GEN3_EQ_CONTROL_OFF_PSET_REQ_VEC_SHIFT);
>> +	val &= ~GEN3_EQ_CONTROL_OFF_FB_MODE_MASK;
>> +	dw_pcie_writel_dbi(pci, GEN3_EQ_CONTROL_OFF, val);
>> +
>> +	val = dw_pcie_readl_dbi(pci, GEN3_RELATED_OFF);
>> +	val &= ~GEN3_RELATED_OFF_RATE_SHADOW_SEL_MASK;
>> +	dw_pcie_writel_dbi(pci, GEN3_RELATED_OFF, val);
>> +}
>> +
>> +static int tegra_pcie_dw_host_init(struct pcie_port *pp)
>> +{
>> +	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
>> +	struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);
>> +	u32 val, tmp, offset, speed;
>> +	unsigned int count;
>> +	u16 val_w;
>> +
>> +core_init:
> 
> Why should we restart from here ? What's the effect of the reset
> on following registers set-up ?
before jumping to 'core_init' label, PCIe IP core is put through reset cycle and
registers in the core get their default values and hence need to be programmed again.

> 
>> +	count = 200;
>> +#if defined(CONFIG_PCIEASPM)
>> +	offset = dw_pcie_find_ext_capability(pci, PCI_EXT_CAP_ID_L1SS);
>> +	pcie->cfg_link_cap_l1sub = offset + PCI_L1SS_CAP;
>> +#endif
> 
> Can we group it in the #ifdef above ?
I'll address it in the next patch.

> 
>> +	val = dw_pcie_readl_dbi(pci, PCI_IO_BASE);
>> +	val &= ~(IO_BASE_IO_DECODE | IO_BASE_IO_DECODE_BIT8);
>> +	dw_pcie_writel_dbi(pci, PCI_IO_BASE, val);
>> +
>> +	val = dw_pcie_readl_dbi(pci, PCI_PREF_MEMORY_BASE);
>> +	val |= CFG_PREF_MEM_LIMIT_BASE_MEM_DECODE;
>> +	val |= CFG_PREF_MEM_LIMIT_BASE_MEM_LIMIT_DECODE;
>> +	dw_pcie_writel_dbi(pci, PCI_PREF_MEMORY_BASE, val);
>> +
>> +	dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, 0);
>> +
>> +	/* Configure FTS */
>> +	val = dw_pcie_readl_dbi(pci, PORT_LOGIC_ACK_F_ASPM_CTRL);
>> +	val &= ~(N_FTS_MASK << N_FTS_SHIFT);
>> +	val |= N_FTS_VAL << N_FTS_SHIFT;
>> +	dw_pcie_writel_dbi(pci, PORT_LOGIC_ACK_F_ASPM_CTRL, val);
>> +
>> +	val = dw_pcie_readl_dbi(pci, PORT_LOGIC_GEN2_CTRL);
>> +	val &= ~FTS_MASK;
>> +	val |= FTS_VAL;
>> +	dw_pcie_writel_dbi(pci, PORT_LOGIC_GEN2_CTRL, val);
>> +
>> +	/* Enable as 0xFFFF0001 response for CRS */
>> +	val = dw_pcie_readl_dbi(pci, PORT_LOGIC_AMBA_ERROR_RESPONSE_DEFAULT);
>> +	val &= ~(AMBA_ERROR_RESPONSE_CRS_MASK << AMBA_ERROR_RESPONSE_CRS_SHIFT);
>> +	val |= (AMBA_ERROR_RESPONSE_CRS_OKAY_FFFF0001 <<
>> +		AMBA_ERROR_RESPONSE_CRS_SHIFT);
>> +	dw_pcie_writel_dbi(pci, PORT_LOGIC_AMBA_ERROR_RESPONSE_DEFAULT, val);
>> +
>> +	/* Configure Max Speed from DT */
>> +	if (pcie->max_speed && pcie->max_speed != -EINVAL) {
>> +		val = dw_pcie_readl_dbi(pci, pcie->pcie_cap_base +
>> +					PCI_EXP_LNKCAP);
>> +		val &= ~PCI_EXP_LNKCAP_SLS;
>> +		val |= pcie->max_speed;
>> +		dw_pcie_writel_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKCAP,
>> +				   val);
>> +	}
>> +
>> +	/* Configure Max lane width from DT */
>> +	val = dw_pcie_readl_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKCAP);
>> +	val &= ~PCI_EXP_LNKCAP_MLW;
>> +	val |= (pcie->num_lanes << PCI_EXP_LNKSTA_NLW_SHIFT);
>> +	dw_pcie_writel_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKCAP, val);
>> +
>> +	config_gen3_gen4_eq_presets(pcie);
>> +
>> +#if defined(CONFIG_PCIEASPM)
>> +	/* Enable ASPM counters */
>> +	val = EVENT_COUNTER_ENABLE_ALL << EVENT_COUNTER_ENABLE_SHIFT;
>> +	val |= EVENT_COUNTER_GROUP_5 << EVENT_COUNTER_GROUP_SEL_SHIFT;
>> +	dw_pcie_writel_dbi(pci, event_cntr_ctrl_offset[pcie->cid], val);
>> +
>> +	/* Program T_cmrt and T_pwr_on values */
>> +	val = dw_pcie_readl_dbi(pci, pcie->cfg_link_cap_l1sub);
>> +	val &= ~(PCI_L1SS_CAP_CM_RESTORE_TIME | PCI_L1SS_CAP_P_PWR_ON_VALUE);
>> +	val |= (pcie->aspm_cmrt << 8);
>> +	val |= (pcie->aspm_pwr_on_t << 19);
>> +	dw_pcie_writel_dbi(pci, pcie->cfg_link_cap_l1sub, val);
>> +
>> +	/* Program L0s and L1 entrance latencies */
>> +	val = dw_pcie_readl_dbi(pci, PORT_LOGIC_ACK_F_ASPM_CTRL);
>> +	val &= ~L0S_ENTRANCE_LAT_MASK;
>> +	val |= (pcie->aspm_l0s_enter_lat << L0S_ENTRANCE_LAT_SHIFT);
>> +	val |= ENTER_ASPM;
>> +	dw_pcie_writel_dbi(pci, PORT_LOGIC_ACK_F_ASPM_CTRL, val);
>> +#endif
> 
> It would be good to group all init guarded in CONFIG_PCIEASPM in
> one function and related ifdef guard.
> 
>> +	val = dw_pcie_readl_dbi(pci, GEN3_RELATED_OFF);
>> +	val &= ~GEN3_RELATED_OFF_GEN3_ZRXDC_NONCOMPL;
>> +	dw_pcie_writel_dbi(pci, GEN3_RELATED_OFF, val);
>> +
>> +	if (pcie->update_fc_fixup) {
>> +		val = dw_pcie_readl_dbi(pci, CFG_TIMER_CTRL_MAX_FUNC_NUM_OFF);
>> +		val |= 0x1 << CFG_TIMER_CTRL_ACK_NAK_SHIFT;
>> +		dw_pcie_writel_dbi(pci, CFG_TIMER_CTRL_MAX_FUNC_NUM_OFF, val);
>> +	}
>> +
>> +	dw_pcie_setup_rc(pp);
>> +
>> +	clk_set_rate(pcie->core_clk, GEN4_CORE_CLK_FREQ);
>> +
>> +	/* Assert RST */
>> +	val = appl_readl(pcie, APPL_PINMUX);
>> +	val &= ~APPL_PINMUX_PEX_RST;
>> +	appl_writel(pcie, val, APPL_PINMUX);
> 
> What's the effect of this RST on the registers programmed above ?
This register is software interface to toggle PERST sideband signal going to
downstream devices. Nothing happens to the registers programmed above because
of applying fundamental reset to downstream devices.

> 
>> +	usleep_range(100, 200);
>> +
>> +	/* Enable LTSSM */
>> +	val = appl_readl(pcie, APPL_CTRL);
>> +	val |= APPL_CTRL_LTSSM_EN;
>> +	appl_writel(pcie, val, APPL_CTRL);
>> +
>> +	/* De-assert RST */
>> +	val = appl_readl(pcie, APPL_PINMUX);
>> +	val |= APPL_PINMUX_PEX_RST;
>> +	appl_writel(pcie, val, APPL_PINMUX);
>> +
>> +	msleep(100);
>> +
>> +	val_w = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKSTA);
>> +	while (!(val_w & PCI_EXP_LNKSTA_DLLLA)) {
>> +		if (count) {
>> +			dev_dbg(pci->dev, "Waiting for link up\n");
>> +			usleep_range(1000, 2000);
>> +			val_w = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base +
>> +						  PCI_EXP_LNKSTA);
>> +			count--;
>> +			continue;
>> +		}
>> +
>> +		val = appl_readl(pcie, APPL_DEBUG);
>> +		val &= APPL_DEBUG_LTSSM_STATE_MASK;
>> +		val >>= APPL_DEBUG_LTSSM_STATE_SHIFT;
>> +		tmp = appl_readl(pcie, APPL_LINK_STATUS);
>> +		tmp &= APPL_LINK_STATUS_RDLH_LINK_UP;
>> +		if (!(val == 0x11 && !tmp)) {
>> +			dev_info(pci->dev, "Link is down\n");
>> +			return 0;
>> +		}
>> +
>> +		dev_info(pci->dev, "Link is down in DLL");
>> +		dev_info(pci->dev, "Trying again with DLFE disabled\n");
>> +		/* Disable LTSSM */
>> +		val = appl_readl(pcie, APPL_CTRL);
>> +		val &= ~APPL_CTRL_LTSSM_EN;
>> +		appl_writel(pcie, val, APPL_CTRL);
>> +
>> +		reset_control_assert(pcie->core_rst);
>> +		reset_control_deassert(pcie->core_rst);
>> +
>> +		offset = dw_pcie_find_ext_capability(pci, PCI_EXT_CAP_ID_DLF);
>> +		val = dw_pcie_readl_dbi(pci, offset + PCI_DLF_CAP);
>> +		val &= ~PCI_DLF_EXCHANGE_ENABLE;
>> +		dw_pcie_writel_dbi(pci, offset, val);
>> +
>> +		/* Retry now with DLF Exchange disabled */
>> +		goto core_init;
> 
> See above.
> 
>> +	}
>> +	dev_dbg(pci->dev, "Link is up\n");
>> +
>> +	speed = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKSTA) &
>> +		PCI_EXP_LNKSTA_CLS;
>> +	clk_set_rate(pcie->core_clk, pcie_gen_freq[speed - 1]);
>> +
>> +	tegra_pcie_enable_interrupts(pp);
>> +
>> +	return 0;
>> +}
>> +
>> +static int tegra_pcie_dw_link_up(struct dw_pcie *pci)
>> +{
>> +	struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);
>> +	u32 val = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKSTA);
>> +
>> +	return !!(val & PCI_EXP_LNKSTA_DLLLA);
>> +}
>> +
>> +static void tegra_pcie_set_msi_vec_num(struct pcie_port *pp)
>> +{
>> +	pp->num_vectors = MAX_MSI_IRQS;
>> +}
>> +
>> +static const struct dw_pcie_ops tegra_dw_pcie_ops = {
>> +	.link_up = tegra_pcie_dw_link_up,
>> +};
>> +
>> +static struct dw_pcie_host_ops tegra_pcie_dw_host_ops = {
>> +	.rd_own_conf = tegra_pcie_dw_rd_own_conf,
>> +	.wr_own_conf = tegra_pcie_dw_wr_own_conf,
>> +	.host_init = tegra_pcie_dw_host_init,
>> +	.set_num_vectors = tegra_pcie_set_msi_vec_num,
>> +};
>> +
>> +static void tegra_pcie_disable_phy(struct tegra_pcie_dw *pcie)
>> +{
>> +	unsigned int phy_count = pcie->phy_count;
>> +
>> +	while (phy_count--) {
>> +		phy_power_off(pcie->phys[phy_count]);
>> +		phy_exit(pcie->phys[phy_count]);
>> +	}
>> +}
>> +
>> +static int tegra_pcie_enable_phy(struct tegra_pcie_dw *pcie)
>> +{
>> +	unsigned int i;
>> +	int ret;
>> +
>> +	for (i = 0; i < pcie->phy_count; i++) {
>> +		ret = phy_init(pcie->phys[i]);
>> +		if (ret < 0)
>> +			goto phy_power_off;
>> +
>> +		ret = phy_power_on(pcie->phys[i]);
>> +		if (ret < 0)
>> +			goto phy_exit;
>> +	}
>> +
>> +	return 0;
>> +
>> +phy_power_off:
>> +	while (i--) {
>> +		phy_power_off(pcie->phys[i]);
>> +phy_exit:
>> +		phy_exit(pcie->phys[i]);
>> +	}
>> +
>> +	return ret;
>> +}
>> +
>> +static int tegra_pcie_dw_parse_dt(struct tegra_pcie_dw *pcie)
>> +{
>> +	struct device_node *np = pcie->dev->of_node;
>> +	int ret;
>> +
>> +	ret = of_property_read_u32(np, "nvidia,aspm-cmrt-us", &pcie->aspm_cmrt);
>> +	if (ret < 0) {
>> +		dev_info(pcie->dev, "Failed to read ASPM T_cmrt: %d\n", ret);
>> +		return ret;
>> +	}
>> +
>> +	ret = of_property_read_u32(np, "nvidia,aspm-pwr-on-t-us",
>> +				   &pcie->aspm_pwr_on_t);
>> +	if (ret < 0)
>> +		dev_info(pcie->dev, "Failed to read ASPM Power On time: %d\n",
>> +			 ret);
>> +
>> +	ret = of_property_read_u32(np, "nvidia,aspm-l0s-entrance-latency-us",
>> +				   &pcie->aspm_l0s_enter_lat);
>> +	if (ret < 0)
>> +		dev_info(pcie->dev,
>> +			 "Failed to read ASPM L0s Entrance latency: %d\n", ret);
>> +
>> +	ret = of_property_read_u32(np, "num-lanes", &pcie->num_lanes);
>> +	if (ret < 0) {
>> +		dev_err(pcie->dev, "Failed to read num-lanes: %d\n", ret);
>> +		return ret;
>> +	}
>> +
>> +	pcie->max_speed = of_pci_get_max_link_speed(np);
>> +
>> +	ret = of_property_read_u32_index(np, "nvidia,bpmp", 1, &pcie->cid);
>> +	if (ret) {
>> +		dev_err(pcie->dev, "Failed to read Controller-ID: %d\n", ret);
>> +		return ret;
>> +	}
>> +
>> +	pcie->phy_count = of_property_count_strings(np, "phy-names");
>> +	if (pcie->phy_count < 0) {
>> +		dev_err(pcie->dev, "Failed to find PHY entries: %d\n",
>> +			pcie->phy_count);
>> +		return pcie->phy_count;
>> +	}
>> +
>> +	if (of_property_read_bool(np, "nvidia,update-fc-fixup"))
>> +		pcie->update_fc_fixup = true;
>> +
>> +	pcie->supports_clkreq =
>> +		of_property_read_bool(pcie->dev->of_node, "supports-clkreq");
>> +
>> +	pcie->enable_cdm_check =
>> +		of_property_read_bool(np, "snps,enable-cdm-check");
>> +
>> +	return 0;
>> +}
>> +
>> +static int tegra_pcie_bpmp_set_ctrl_state(struct tegra_pcie_dw *pcie,
>> +					  bool enable)
>> +{
>> +	struct mrq_uphy_response resp;
>> +	struct tegra_bpmp_message msg;
>> +	struct mrq_uphy_request req;
>> +	int err;
>> +
>> +	if (pcie->cid == 5)
>> +		return 0;
>> +
>> +	memset(&req, 0, sizeof(req));
>> +	memset(&resp, 0, sizeof(resp));
>> +
>> +	req.cmd = CMD_UPHY_PCIE_CONTROLLER_STATE;
>> +	req.controller_state.pcie_controller = pcie->cid;
>> +	req.controller_state.enable = enable;
>> +
>> +	memset(&msg, 0, sizeof(msg));
>> +	msg.mrq = MRQ_UPHY;
>> +	msg.tx.data = &req;
>> +	msg.tx.size = sizeof(req);
>> +	msg.rx.data = &resp;
>> +	msg.rx.size = sizeof(resp);
>> +
>> +	if (irqs_disabled())
>> +		err = tegra_bpmp_transfer_atomic(pcie->bpmp, &msg);
>> +	else
>> +		err = tegra_bpmp_transfer(pcie->bpmp, &msg);
>> +
>> +	return err;
>> +}
>> +
>> +static void tegra_pcie_downstream_dev_to_D0(struct tegra_pcie_dw *pcie)
>> +{
>> +	struct pcie_port *pp = &pcie->pci.pp;
>> +	struct pci_dev *pdev;
>> +	struct pci_bus *child;
>> +
>> +	list_for_each_entry(child, &pp->root_bus->children, node) {
>> +		/* Bring downstream devices to D0 if they are not already in */
>> +		if (child->parent == pp->root_bus)
>> +			break;
>> +	}
>> +	list_for_each_entry(pdev, &child->devices, bus_list) {
>> +		if (PCI_SLOT(pdev->devfn) == 0) {
>> +			if (pci_set_power_state(pdev, PCI_D0))
>> +				dev_err(pcie->dev,
>> +					"Failed to transition %s to D0 state\n",
>> +					dev_name(&pdev->dev));
>> +		}
>> +	}
>> +}
>> +
>> +static int tegra_pcie_config_controller(struct tegra_pcie_dw *pcie,
>> +					bool en_hw_hot_rst)
>> +{
>> +	int ret;
>> +	u32 val;
>> +
>> +	ret = tegra_pcie_bpmp_set_ctrl_state(pcie, true);
>> +	if (ret) {
>> +		dev_err(pcie->dev,
>> +			"Failed to enable controller %u: %d\n", pcie->cid, ret);
>> +		return ret;
>> +	}
>> +
>> +	ret = regulator_enable(pcie->pex_ctl_supply);
>> +	if (ret < 0) {
>> +		dev_err(pcie->dev, "Failed to enable regulator: %d\n", ret);
>> +		goto fail_reg_en;
>> +	}
>> +
>> +	ret = clk_prepare_enable(pcie->core_clk);
>> +	if (ret) {
>> +		dev_err(pcie->dev, "Failed to enable core clock: %d\n", ret);
>> +		goto fail_core_clk;
>> +	}
>> +
>> +	ret = reset_control_deassert(pcie->core_apb_rst);
>> +	if (ret) {
>> +		dev_err(pcie->dev, "Failed to deassert core APB reset: %d\n",
>> +			ret);
>> +		goto fail_core_apb_rst;
>> +	}
>> +
>> +	if (en_hw_hot_rst) {
>> +		/* Enable HW_HOT_RST mode */
>> +		val = appl_readl(pcie, APPL_CTRL);
>> +		val &= ~(APPL_CTRL_HW_HOT_RST_MODE_MASK <<
>> +			 APPL_CTRL_HW_HOT_RST_MODE_SHIFT);
>> +		val |= APPL_CTRL_HW_HOT_RST_EN;
>> +		appl_writel(pcie, val, APPL_CTRL);
>> +	}
>> +
>> +	ret = tegra_pcie_enable_phy(pcie);
>> +	if (ret) {
>> +		dev_err(pcie->dev, "Failed to enable PHY: %d\n", ret);
>> +		goto fail_phy;
>> +	}
>> +
>> +	/* Update CFG base address */
>> +	appl_writel(pcie, pcie->dbi_res->start & APPL_CFG_BASE_ADDR_MASK,
>> +		    APPL_CFG_BASE_ADDR);
>> +
>> +	/* Configure this core for RP mode operation */
>> +	appl_writel(pcie, APPL_DM_TYPE_RP, APPL_DM_TYPE);
>> +
>> +	appl_writel(pcie, 0x0, APPL_CFG_SLCG_OVERRIDE);
>> +
>> +	val = appl_readl(pcie, APPL_CTRL);
>> +	appl_writel(pcie, val | APPL_CTRL_SYS_PRE_DET_STATE, APPL_CTRL);
>> +
>> +	val = appl_readl(pcie, APPL_CFG_MISC);
>> +	val |= (APPL_CFG_MISC_ARCACHE_VAL << APPL_CFG_MISC_ARCACHE_SHIFT);
>> +	appl_writel(pcie, val, APPL_CFG_MISC);
>> +
>> +	if (!pcie->supports_clkreq) {
>> +		val = appl_readl(pcie, APPL_PINMUX);
>> +		val |= APPL_PINMUX_CLKREQ_OUT_OVRD_EN;
>> +		val |= APPL_PINMUX_CLKREQ_OUT_OVRD;
>> +		appl_writel(pcie, val, APPL_PINMUX);
>> +	}
>> +
>> +	/* Update iATU_DMA base address */
>> +	appl_writel(pcie,
>> +		    pcie->atu_dma_res->start & APPL_CFG_IATU_DMA_BASE_ADDR_MASK,
>> +		    APPL_CFG_IATU_DMA_BASE_ADDR);
>> +
>> +	reset_control_deassert(pcie->core_rst);
>> +
>> +	pcie->pcie_cap_base = dw_pcie_find_capability(&pcie->pci,
>> +						      PCI_CAP_ID_EXP);
>> +
>> +#if defined(CONFIG_PCIEASPM)
>> +	/* Disable ASPM-L1SS advertisement as there is no CLKREQ routing */
>> +	if (!pcie->supports_clkreq) {
>> +		disable_aspm_l11(pcie);
>> +		disable_aspm_l12(pcie);
>> +	}
>> +#endif
>> +	return ret;
>> +
>> +fail_phy:
>> +	reset_control_assert(pcie->core_apb_rst);
>> +fail_core_apb_rst:
>> +	clk_disable_unprepare(pcie->core_clk);
>> +fail_core_clk:
>> +	regulator_disable(pcie->pex_ctl_supply);
>> +fail_reg_en:
>> +	tegra_pcie_bpmp_set_ctrl_state(pcie, false);
>> +
>> +	return ret;
>> +}
>> +
>> +static int __deinit_controller(struct tegra_pcie_dw *pcie)
>> +{
>> +	int ret;
>> +
>> +	ret = reset_control_assert(pcie->core_rst);
>> +	if (ret) {
>> +		dev_err(pcie->dev, "Failed to assert \"core\" reset: %d\n",
>> +			ret);
>> +		return ret;
>> +	}
>> +	tegra_pcie_disable_phy(pcie);
>> +	ret = reset_control_assert(pcie->core_apb_rst);
>> +	if (ret) {
>> +		dev_err(pcie->dev, "Failed to assert APB reset: %d\n", ret);
>> +		return ret;
>> +	}
>> +	clk_disable_unprepare(pcie->core_clk);
>> +	ret = regulator_disable(pcie->pex_ctl_supply);
>> +	if (ret) {
>> +		dev_err(pcie->dev, "Failed to disable regulator: %d\n", ret);
>> +		return ret;
>> +	}
>> +	ret = tegra_pcie_bpmp_set_ctrl_state(pcie, false);
>> +	if (ret) {
>> +		dev_err(pcie->dev, "Failed to disable controller %d: %d\n",
>> +			pcie->cid, ret);
>> +		return ret;
>> +	}
>> +	return ret;
>> +}
>> +
>> +static int tegra_pcie_init_controller(struct tegra_pcie_dw *pcie)
>> +{
>> +	struct dw_pcie *pci = &pcie->pci;
>> +	struct pcie_port *pp = &pci->pp;
>> +	int ret;
>> +
>> +	ret = tegra_pcie_config_controller(pcie, false);
>> +	if (ret < 0)
>> +		return ret;
>> +
>> +	pp->root_bus_nr = -1;
> 
> Useless, drop it, it is initialized in dw_pcie_host_init().
Ok.

> 
>> +	pp->ops = &tegra_pcie_dw_host_ops;
>> +
>> +	ret = dw_pcie_host_init(pp);
>> +	if (ret < 0) {
>> +		dev_err(pcie->dev, "Failed to add PCIe port: %d\n", ret);
>> +		goto fail_host_init;
>> +	}
>> +
>> +	return 0;
>> +
>> +fail_host_init:
>> +	return __deinit_controller(pcie);
>> +}
>> +
>> +static int tegra_pcie_try_link_l2(struct tegra_pcie_dw *pcie)
>> +{
>> +	u32 val;
>> +
>> +	if (!tegra_pcie_dw_link_up(&pcie->pci))
>> +		return 0;
>> +
>> +	val = appl_readl(pcie, APPL_RADM_STATUS);
>> +	val |= APPL_PM_XMT_TURNOFF_STATE;
>> +	appl_writel(pcie, val, APPL_RADM_STATUS);
>> +
>> +	return readl_poll_timeout_atomic(pcie->appl_base + APPL_DEBUG, val,
>> +				 val & APPL_DEBUG_PM_LINKST_IN_L2_LAT,
>> +				 1, PME_ACK_TIMEOUT);
>> +}
>> +
>> +static void tegra_pcie_dw_pme_turnoff(struct tegra_pcie_dw *pcie)
>> +{
>> +	u32 data;
>> +	int err;
>> +
>> +	if (!tegra_pcie_dw_link_up(&pcie->pci)) {
>> +		dev_dbg(pcie->dev, "PCIe link is not up...!\n");
>> +		return;
>> +	}
>> +
>> +	if (tegra_pcie_try_link_l2(pcie)) {
>> +		dev_info(pcie->dev, "Link didn't transition to L2 state\n");
>> +		/*
>> +		 * TX lane clock freq will reset to Gen1 only if link is in L2
>> +		 * or detect state.
>> +		 * So apply pex_rst to end point to force RP to go into detect
>> +		 * state
>> +		 */
>> +		data = appl_readl(pcie, APPL_PINMUX);
>> +		data &= ~APPL_PINMUX_PEX_RST;
>> +		appl_writel(pcie, data, APPL_PINMUX);
>> +
>> +		err = readl_poll_timeout_atomic(pcie->appl_base + APPL_DEBUG,
>> +						data,
>> +						((data &
>> +						APPL_DEBUG_LTSSM_STATE_MASK) >>
>> +						APPL_DEBUG_LTSSM_STATE_SHIFT) ==
>> +						LTSSM_STATE_PRE_DETECT,
>> +						1, LTSSM_TIMEOUT);
>> +		if (err) {
>> +			dev_info(pcie->dev, "Link didn't go to detect state\n");
>> +		} else {
>> +			/* Disable LTSSM after link is in detect state */
>> +			data = appl_readl(pcie, APPL_CTRL);
>> +			data &= ~APPL_CTRL_LTSSM_EN;
>> +			appl_writel(pcie, data, APPL_CTRL);
>> +		}
>> +	}
>> +	/*
>> +	 * DBI registers may not be accessible after this as PLL-E would be
>> +	 * down depending on how CLKREQ is pulled by end point
>> +	 */
>> +	data = appl_readl(pcie, APPL_PINMUX);
>> +	data |= (APPL_PINMUX_CLKREQ_OVERRIDE_EN | APPL_PINMUX_CLKREQ_OVERRIDE);
>> +	/* Cut REFCLK to slot */
>> +	data |= APPL_PINMUX_CLK_OUTPUT_IN_OVERRIDE_EN;
>> +	data &= ~APPL_PINMUX_CLK_OUTPUT_IN_OVERRIDE;
>> +	appl_writel(pcie, data, APPL_PINMUX);
>> +}
>> +
>> +static int tegra_pcie_deinit_controller(struct tegra_pcie_dw *pcie)
>> +{
>> +	/*
>> +	 * link doesn't go into L2 state with some of the endpoints with Tegra
>> +	 * if they are not in D0 state. So, need to make sure that immediate
>> +	 * downstream devices are in D0 state before sending PME_TurnOff to put
>> +	 * link into L2 state
>> +	 */
>> +	tegra_pcie_downstream_dev_to_D0(pcie);
>> +	dw_pcie_host_deinit(&pcie->pci.pp);
>> +	tegra_pcie_dw_pme_turnoff(pcie);
>> +	return __deinit_controller(pcie);
>> +}
>> +
>> +static int tegra_pcie_config_rp(struct tegra_pcie_dw *pcie)
>> +{
>> +	struct pcie_port *pp = &pcie->pci.pp;
>> +	struct device *dev = pcie->dev;
>> +	char *name;
>> +	int ret;
>> +
>> +	if (IS_ENABLED(CONFIG_PCI_MSI)) {
>> +		pp->msi_irq = of_irq_get_byname(dev->of_node, "msi");
>> +		if (!pp->msi_irq) {
>> +			dev_err(dev, "Failed to get MSI interrupt\n");
>> +			return -ENODEV;
>> +		}
>> +	}
>> +
>> +	pm_runtime_enable(dev);
>> +	ret = pm_runtime_get_sync(dev);
>> +	if (ret < 0) {
>> +		dev_err(dev, "Failed to get runtime sync for PCIe dev: %d\n",
>> +			ret);
>> +		pm_runtime_disable(dev);
>> +		return ret;
>> +	}
>> +
>> +	tegra_pcie_init_controller(pcie);
>> +
>> +	pcie->link_state = tegra_pcie_dw_link_up(&pcie->pci);
>> +
>> +	if (!pcie->link_state) {
>> +		ret = -ENOMEDIUM;
>> +		goto fail_host_init;
>> +	}
>> +
>> +	name = devm_kasprintf(dev, GFP_KERNEL, "%pOFP", dev->of_node);
>> +	if (!name) {
>> +		ret = -ENOMEM;
>> +		goto fail_host_init;
>> +	}
>> +
>> +	pcie->debugfs = debugfs_create_dir(name, NULL);
>> +	if (!pcie->debugfs)
>> +		dev_err(dev, "Failed to create debugfs\n");
>> +	else
>> +		init_debugfs(pcie);
>> +
>> +	return ret;
>> +
>> +fail_host_init:
>> +	tegra_pcie_deinit_controller(pcie);
>> +	pm_runtime_put_sync(dev);
>> +	pm_runtime_disable(dev);
>> +	return ret;
>> +}
>> +
>> +static const struct tegra_pcie_soc tegra_pcie_rc_of_data = {
>> +	.mode = DW_PCIE_RC_TYPE,
>> +};
>> +
>> +static const struct of_device_id tegra_pcie_dw_of_match[] = {
>> +	{
>> +		.compatible = "nvidia,tegra194-pcie",
>> +		.data = &tegra_pcie_rc_of_data,
>> +	},
>> +	{},
>> +};
>> +MODULE_DEVICE_TABLE(of, tegra_pcie_dw_of_match);
> 
> Move it closer to end of file where MODULE_DESCRIPTION() et al are.
Ok.

> 
>> +
>> +static int tegra_pcie_dw_probe(struct platform_device *pdev)
>> +{
>> +	const struct tegra_pcie_soc *data;
>> +	struct device *dev = &pdev->dev;
>> +	struct resource *atu_dma_res;
>> +	struct tegra_pcie_dw *pcie;
>> +	struct resource *dbi_res;
>> +	struct pcie_port *pp;
>> +	struct dw_pcie *pci;
>> +	struct phy **phys;
>> +	char *name;
>> +	int ret;
>> +	u32 i;
>> +
>> +	pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL);
>> +	if (!pcie)
>> +		return -ENOMEM;
>> +
>> +	pci = &pcie->pci;
>> +	pci->dev = &pdev->dev;
>> +	pci->ops = &tegra_dw_pcie_ops;
>> +	pp = &pci->pp;
>> +	pcie->dev = &pdev->dev;
>> +
>> +	data = (struct tegra_pcie_soc *)of_device_get_match_data(dev);
>> +	if (!data)
>> +		return -EINVAL;
>> +	pcie->mode = (enum dw_pcie_device_mode)data->mode;
>> +
>> +	ret = tegra_pcie_dw_parse_dt(pcie);
>> +	if (ret < 0) {
>> +		dev_err(dev, "Failed to parse device tree: %d\n", ret);
>> +		return ret;
>> +	}
>> +
>> +	pcie->pex_ctl_supply = devm_regulator_get(dev, "vddio-pex-ctl");
>> +	if (IS_ERR(pcie->pex_ctl_supply)) {
>> +		dev_err(dev, "Failed to get regulator: %ld\n",
>> +			PTR_ERR(pcie->pex_ctl_supply));
>> +		return PTR_ERR(pcie->pex_ctl_supply);
>> +	}
>> +
>> +	pcie->core_clk = devm_clk_get(dev, "core");
>> +	if (IS_ERR(pcie->core_clk)) {
>> +		dev_err(dev, "Failed to get core clock: %ld\n",
>> +			PTR_ERR(pcie->core_clk));
>> +		return PTR_ERR(pcie->core_clk);
>> +	}
>> +
>> +	pcie->appl_res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
>> +						      "appl");
>> +	if (!pcie->appl_res) {
>> +		dev_err(dev, "Failed to find \"appl\" region\n");
>> +		return PTR_ERR(pcie->appl_res);
>> +	}
> 
> Add an empty line.
Ok.

> 
>> +	pcie->appl_base = devm_ioremap_resource(dev, pcie->appl_res);
>> +	if (IS_ERR(pcie->appl_base))
>> +		return PTR_ERR(pcie->appl_base);
>> +
>> +	pcie->core_apb_rst = devm_reset_control_get(dev, "apb");
>> +	if (IS_ERR(pcie->core_apb_rst)) {
>> +		dev_err(dev, "Failed to get APB reset: %ld\n",
>> +			PTR_ERR(pcie->core_apb_rst));
>> +		return PTR_ERR(pcie->core_apb_rst);
>> +	}
>> +
>> +	phys = devm_kcalloc(dev, pcie->phy_count, sizeof(*phys), GFP_KERNEL);
>> +	if (!phys)
>> +		return PTR_ERR(phys);
>> +
>> +	for (i = 0; i < pcie->phy_count; i++) {
>> +		name = kasprintf(GFP_KERNEL, "p2u-%u", i);
>> +		if (!name) {
>> +			dev_err(dev, "Failed to create P2U string\n");
>> +			return -ENOMEM;
>> +		}
>> +		phys[i] = devm_phy_get(dev, name);
>> +		kfree(name);
>> +		if (IS_ERR(phys[i])) {
>> +			ret = PTR_ERR(phys[i]);
>> +			dev_err(dev, "Failed to get PHY: %d\n", ret);
>> +			return ret;
>> +		}
>> +	}
>> +
>> +	pcie->phys = phys;
>> +
>> +	dbi_res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi");
>> +	if (!dbi_res) {
>> +		dev_err(dev, "Failed to find \"dbi\" region\n");
>> +		return PTR_ERR(dbi_res);
>> +	}
>> +	pcie->dbi_res = dbi_res;
>> +
>> +	pci->dbi_base = devm_ioremap_resource(dev, dbi_res);
>> +	if (IS_ERR(pci->dbi_base))
>> +		return PTR_ERR(pci->dbi_base);
>> +
>> +	/* Tegra HW locates DBI2 at a fixed offset from DBI */
>> +	pci->dbi_base2 = pci->dbi_base + 0x1000;
>> +
>> +	atu_dma_res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
>> +						   "atu_dma");
>> +	if (!atu_dma_res) {
>> +		dev_err(dev, "Failed to find \"atu_dma\" region\n");
>> +		return PTR_ERR(atu_dma_res);
>> +	}
>> +	pcie->atu_dma_res = atu_dma_res;
> 
> Add an empty line.
> 
> I have just skimmed over it, I need more time to review the driver more
> thoroughly, I will try to go over it tomorrow and we can see if we can
> hit v5.3 but I can't guarantee anything, sorry but there was a
> significant backlog on the PCI patch queue.
> 
> The rest of the series is fine but it is useless to merge it
> without this patch so let's see how it goes.
Ok. I'll wait for your thorough review and then push next patch.
Thanks for the review.

> 
> Lorenzo
> 
>> +	pci->atu_base = devm_ioremap_resource(dev, atu_dma_res);
>> +	if (IS_ERR(pci->atu_base))
>> +		return PTR_ERR(pci->atu_base);
>> +
>> +	pcie->core_rst = devm_reset_control_get(dev, "core");
>> +	if (IS_ERR(pcie->core_rst)) {
>> +		dev_err(dev, "Failed to get core reset: %ld\n",
>> +			PTR_ERR(pcie->core_rst));
>> +		return PTR_ERR(pcie->core_rst);
>> +	}
>> +
>> +	pp->irq = platform_get_irq_byname(pdev, "intr");
>> +	if (!pp->irq) {
>> +		dev_err(dev, "Failed to get \"intr\" interrupt\n");
>> +		return -ENODEV;
>> +	}
>> +
>> +	ret = devm_request_irq(dev, pp->irq, tegra_pcie_irq_handler,
>> +			       IRQF_SHARED, "tegra-pcie-intr", pcie);
>> +	if (ret) {
>> +		dev_err(dev, "Failed to request IRQ %d: %d\n", pp->irq, ret);
>> +		return ret;
>> +	}
>> +
>> +	pcie->bpmp = tegra_bpmp_get(dev);
>> +	if (IS_ERR(pcie->bpmp))
>> +		return PTR_ERR(pcie->bpmp);
>> +
>> +	platform_set_drvdata(pdev, pcie);
>> +
>> +	if (pcie->mode == DW_PCIE_RC_TYPE) {
>> +		ret = tegra_pcie_config_rp(pcie);
>> +		if (ret && ret != -ENOMEDIUM)
>> +			goto fail;
>> +		else
>> +			return 0;
>> +	}
>> +
>> +fail:
>> +	tegra_bpmp_put(pcie->bpmp);
>> +	return ret;
>> +}
>> +
>> +static int tegra_pcie_dw_remove(struct platform_device *pdev)
>> +{
>> +	struct tegra_pcie_dw *pcie = platform_get_drvdata(pdev);
>> +
>> +	if (pcie->mode != DW_PCIE_RC_TYPE)
>> +		return 0;
>> +
>> +	if (!pcie->link_state)
>> +		return 0;
>> +
>> +	debugfs_remove_recursive(pcie->debugfs);
>> +	tegra_pcie_deinit_controller(pcie);
>> +	pm_runtime_put_sync(pcie->dev);
>> +	pm_runtime_disable(pcie->dev);
>> +	tegra_bpmp_put(pcie->bpmp);
>> +
>> +	return 0;
>> +}
>> +
>> +static int tegra_pcie_dw_suspend_late(struct device *dev)
>> +{
>> +	struct tegra_pcie_dw *pcie = dev_get_drvdata(dev);
>> +	u32 val;
>> +
>> +	if (!pcie->link_state)
>> +		return 0;
>> +
>> +	/* Enable HW_HOT_RST mode */
>> +	val = appl_readl(pcie, APPL_CTRL);
>> +	val &= ~(APPL_CTRL_HW_HOT_RST_MODE_MASK <<
>> +		 APPL_CTRL_HW_HOT_RST_MODE_SHIFT);
>> +	val |= APPL_CTRL_HW_HOT_RST_EN;
>> +	appl_writel(pcie, val, APPL_CTRL);
>> +
>> +	return 0;
>> +}
>> +
>> +static int tegra_pcie_dw_suspend_noirq(struct device *dev)
>> +{
>> +	struct tegra_pcie_dw *pcie = dev_get_drvdata(dev);
>> +
>> +	if (!pcie->link_state)
>> +		return 0;
>> +
>> +	/* Save MSI interrupt vector */
>> +	pcie->msi_ctrl_int = dw_pcie_readl_dbi(&pcie->pci,
>> +					       PORT_LOGIC_MSI_CTRL_INT_0_EN);
>> +	tegra_pcie_downstream_dev_to_D0(pcie);
>> +	tegra_pcie_dw_pme_turnoff(pcie);
>> +	return __deinit_controller(pcie);
>> +}
>> +
>> +static int tegra_pcie_dw_resume_noirq(struct device *dev)
>> +{
>> +	struct tegra_pcie_dw *pcie = dev_get_drvdata(dev);
>> +	int ret;
>> +
>> +	if (!pcie->link_state)
>> +		return 0;
>> +
>> +	ret = tegra_pcie_config_controller(pcie, true);
>> +	if (ret < 0)
>> +		return ret;
>> +
>> +	ret = tegra_pcie_dw_host_init(&pcie->pci.pp);
>> +	if (ret < 0) {
>> +		dev_err(dev, "Failed to init host: %d\n", ret);
>> +		goto fail_host_init;
>> +	}
>> +
>> +	/* Restore MSI interrupt vector */
>> +	dw_pcie_writel_dbi(&pcie->pci, PORT_LOGIC_MSI_CTRL_INT_0_EN,
>> +			   pcie->msi_ctrl_int);
>> +
>> +	return 0;
>> +fail_host_init:
>> +	return __deinit_controller(pcie);
>> +}
>> +
>> +static int tegra_pcie_dw_resume_early(struct device *dev)
>> +{
>> +	struct tegra_pcie_dw *pcie = dev_get_drvdata(dev);
>> +	u32 val;
>> +
>> +	if (!pcie->link_state)
>> +		return 0;
>> +
>> +	/* Disable HW_HOT_RST mode */
>> +	val = appl_readl(pcie, APPL_CTRL);
>> +	val &= ~(APPL_CTRL_HW_HOT_RST_MODE_MASK <<
>> +		 APPL_CTRL_HW_HOT_RST_MODE_SHIFT);
>> +	val |= APPL_CTRL_HW_HOT_RST_MODE_IMDT_RST <<
>> +	       APPL_CTRL_HW_HOT_RST_MODE_SHIFT;
>> +	val &= ~APPL_CTRL_HW_HOT_RST_EN;
>> +	appl_writel(pcie, val, APPL_CTRL);
>> +
>> +	return 0;
>> +}
>> +
>> +static void tegra_pcie_dw_shutdown(struct platform_device *pdev)
>> +{
>> +	struct tegra_pcie_dw *pcie = platform_get_drvdata(pdev);
>> +
>> +	if (pcie->mode != DW_PCIE_RC_TYPE)
>> +		return;
>> +
>> +	if (!pcie->link_state)
>> +		return;
>> +
>> +	debugfs_remove_recursive(pcie->debugfs);
>> +	tegra_pcie_downstream_dev_to_D0(pcie);
>> +
>> +	disable_irq(pcie->pci.pp.irq);
>> +	if (IS_ENABLED(CONFIG_PCI_MSI))
>> +		disable_irq(pcie->pci.pp.msi_irq);
>> +
>> +	tegra_pcie_dw_pme_turnoff(pcie);
>> +	__deinit_controller(pcie);
>> +}
>> +
>> +static const struct dev_pm_ops tegra_pcie_dw_pm_ops = {
>> +	.suspend_late = tegra_pcie_dw_suspend_late,
>> +	.suspend_noirq = tegra_pcie_dw_suspend_noirq,
>> +	.resume_noirq = tegra_pcie_dw_resume_noirq,
>> +	.resume_early = tegra_pcie_dw_resume_early,
>> +};
>> +
>> +static struct platform_driver tegra_pcie_dw_driver = {
>> +	.probe = tegra_pcie_dw_probe,
>> +	.remove = tegra_pcie_dw_remove,
>> +	.shutdown = tegra_pcie_dw_shutdown,
>> +	.driver = {
>> +		.name	= "tegra194-pcie",
>> +		.pm = &tegra_pcie_dw_pm_ops,
>> +		.of_match_table = tegra_pcie_dw_of_match,
>> +	},
>> +};
>> +module_platform_driver(tegra_pcie_dw_driver);
>> +
>> +MODULE_AUTHOR("Vidya Sagar <vidyas@nvidia.com>");
>> +MODULE_DESCRIPTION("NVIDIA PCIe host controller driver");
>> +MODULE_LICENSE("GPL v2");
>> -- 
>> 2.17.1
>>


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH V13 01/12] PCI: Add #defines for some of PCIe spec r4.0 features
  2019-07-10  6:22 ` [PATCH V13 01/12] PCI: Add #defines for some of PCIe spec r4.0 features Vidya Sagar
@ 2019-07-10 20:14   ` Bjorn Helgaas
  0 siblings, 0 replies; 34+ messages in thread
From: Bjorn Helgaas @ 2019-07-10 20:14 UTC (permalink / raw)
  To: Vidya Sagar
  Cc: lorenzo.pieralisi, robh+dt, mark.rutland, thierry.reding,
	jonathanh, kishon, catalin.marinas, will.deacon, jingoohan1,
	gustavo.pimentel, digetx, mperttunen, linux-pci, devicetree,
	linux-tegra, linux-kernel, linux-arm-kernel, kthota, mmaddireddy,
	sagar.tv

On Wed, Jul 10, 2019 at 11:52:01AM +0530, Vidya Sagar wrote:
> Add #defines only for the Data Link Feature and Physical Layer 16.0 GT/s
> features as defined in PCIe spec r4.0, sec 7.7.4 for Data Link Feature and
> sec 7.7.5 for Physical Layer 16.0 GT/s.
> 
> Signed-off-by: Vidya Sagar <vidyas@nvidia.com>
> Reviewed-by: Thierry Reding <treding@nvidia.com>

Acked-by: Bjorn Helgaas <bhelgaas@google.com>

Looks good, thanks!

> ---
> V13:
> * Updated commit message to include references from spec
> * Removed unused defines and moved some from pcie-tegra194.c file
> * Addressed review comments from Bjorn
> 
> V12:
> * None
> 
> V11:
> * None
> 
> V10:
> * None
> 
> V9:
> * None
> 
> V8:
> * None
> 
> V7:
> * None
> 
> V6:
> * None
> 
> V5:
> * None
> 
> V4:
> * None
> 
> V3:
> * Updated commit message and description to explicitly mention that defines are
>   added only for some of the features and not all.
> 
> V2:
> * None
> 
>  include/uapi/linux/pci_regs.h | 14 +++++++++++++-
>  1 file changed, 13 insertions(+), 1 deletion(-)
> 
> diff --git a/include/uapi/linux/pci_regs.h b/include/uapi/linux/pci_regs.h
> index f28e562d7ca8..d28d0319d932 100644
> --- a/include/uapi/linux/pci_regs.h
> +++ b/include/uapi/linux/pci_regs.h
> @@ -713,7 +713,9 @@
>  #define PCI_EXT_CAP_ID_DPC	0x1D	/* Downstream Port Containment */
>  #define PCI_EXT_CAP_ID_L1SS	0x1E	/* L1 PM Substates */
>  #define PCI_EXT_CAP_ID_PTM	0x1F	/* Precision Time Measurement */
> -#define PCI_EXT_CAP_ID_MAX	PCI_EXT_CAP_ID_PTM
> +#define PCI_EXT_CAP_ID_DLF	0x25	/* Data Link Feature */
> +#define PCI_EXT_CAP_ID_PL_16GT	0x26	/* Physical Layer 16.0 GT/s */
> +#define PCI_EXT_CAP_ID_MAX	PCI_EXT_CAP_ID_PL_16GT
>  
>  #define PCI_EXT_CAP_DSN_SIZEOF	12
>  #define PCI_EXT_CAP_MCAST_ENDPOINT_SIZEOF 40
> @@ -1053,4 +1055,14 @@
>  #define  PCI_L1SS_CTL1_LTR_L12_TH_SCALE	0xe0000000  /* LTR_L1.2_THRESHOLD_Scale */
>  #define PCI_L1SS_CTL2		0x0c	/* Control 2 Register */
>  
> +/* Data Link Feature */
> +#define PCI_DLF_CAP		0x04	/* Capabilities Register */
> +#define  PCI_DLF_EXCHANGE_ENABLE	0x80000000  /* Data Link Feature Exchange Enable */
> +
> +/* Physical Layer 16.0 GT/s */
> +#define PCI_PL_16GT_LE_CTRL	0x20	/* Lane Equalization Control Register */
> +#define  PCI_PL_16GT_LE_CTRL_DSP_TX_PRESET_MASK		0x0000000F
> +#define  PCI_PL_16GT_LE_CTRL_USP_TX_PRESET_MASK		0x000000F0
> +#define  PCI_PL_16GT_LE_CTRL_USP_TX_PRESET_SHIFT	4
> +
>  #endif /* LINUX_PCI_REGS_H */
> -- 
> 2.17.1
> 

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH V13 12/12] PCI: tegra: Add Tegra194 PCIe support
  2019-07-10  6:22 ` [PATCH V13 12/12] PCI: tegra: Add Tegra194 PCIe support Vidya Sagar
  2019-07-10 17:02   ` Lorenzo Pieralisi
@ 2019-07-11 12:54   ` Lorenzo Pieralisi
  2019-07-12 15:32     ` Vidya Sagar
  1 sibling, 1 reply; 34+ messages in thread
From: Lorenzo Pieralisi @ 2019-07-11 12:54 UTC (permalink / raw)
  To: Vidya Sagar
  Cc: bhelgaas, robh+dt, mark.rutland, thierry.reding, jonathanh,
	kishon, catalin.marinas, will.deacon, jingoohan1,
	gustavo.pimentel, digetx, mperttunen, linux-pci, devicetree,
	linux-tegra, linux-kernel, linux-arm-kernel, kthota, mmaddireddy,
	sagar.tv

On Wed, Jul 10, 2019 at 11:52:12AM +0530, Vidya Sagar wrote:

[...]

> diff --git a/drivers/pci/controller/dwc/pcie-tegra194.c b/drivers/pci/controller/dwc/pcie-tegra194.c
> new file mode 100644
> index 000000000000..189779edd976
> --- /dev/null
> +++ b/drivers/pci/controller/dwc/pcie-tegra194.c
> @@ -0,0 +1,1632 @@
> +// SPDX-License-Identifier: GPL-2.0+
> +/*
> + * PCIe host controller driver for Tegra194 SoC
> + *
> + * Copyright (C) 2019 NVIDIA Corporation.
> + *
> + * Author: Vidya Sagar <vidyas@nvidia.com>
> + */
> +
> +#include <linux/clk.h>
> +#include <linux/debugfs.h>
> +#include <linux/delay.h>
> +#include <linux/gpio.h>
> +#include <linux/interrupt.h>
> +#include <linux/iopoll.h>
> +#include <linux/kernel.h>
> +#include <linux/kfifo.h>
> +#include <linux/kthread.h>

What do you need these two headers for ?

[...]

> +struct tegra_pcie_dw {
> +	struct device *dev;
> +	struct resource *appl_res;
> +	struct resource *dbi_res;
> +	struct resource *atu_dma_res;
> +	void __iomem *appl_base;
> +	struct clk *core_clk;
> +	struct reset_control *core_apb_rst;
> +	struct reset_control *core_rst;
> +	struct dw_pcie pci;
> +	enum dw_pcie_device_mode mode;
> +	struct tegra_bpmp *bpmp;
> +
> +	bool supports_clkreq;
> +	bool enable_cdm_check;
> +	u8 init_link_width;
> +	bool link_state;
> +	u32 msi_ctrl_int;
> +	u32 num_lanes;
> +	u32 max_speed;
> +	u32 cid;
> +	bool update_fc_fixup;
> +	u32 cfg_link_cap_l1sub;
> +	u32 pcie_cap_base;
> +	u32 aspm_cmrt;
> +	u32 aspm_pwr_on_t;
> +	u32 aspm_l0s_enter_lat;o

Nit: You should try to group variables according to their usage,
it would make sense to group booleans together.

> +	struct regulator *pex_ctl_supply;
> +
> +	unsigned int phy_count;
> +	struct phy **phys;
> +
> +	struct dentry *debugfs;
> +};
> +
> +static inline struct tegra_pcie_dw *to_tegra_pcie(struct dw_pcie *pci)
> +{
> +	return container_of(pci, struct tegra_pcie_dw, pci);
> +}
> +
> +static inline void appl_writel(struct tegra_pcie_dw *pcie, const u32 value,
> +			       const u32 reg)
> +{
> +	writel_relaxed(value, pcie->appl_base + reg);
> +}
> +
> +static inline u32 appl_readl(struct tegra_pcie_dw *pcie, const u32 reg)
> +{
> +	return readl_relaxed(pcie->appl_base + reg);
> +}
> +
> +struct tegra_pcie_soc {
> +	enum dw_pcie_device_mode mode;
> +};
> +
> +static void apply_bad_link_workaround(struct pcie_port *pp)
> +{
> +	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
> +	struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);
> +	u32 current_link_width;
> +	u16 val;
> +
> +	/*
> +	 * NOTE:- Since this scenario is uncommon and link as such is not
> +	 * stable anyway, not waiting to confirm if link is really
> +	 * transitioning to Gen-2 speed
> +	 */
> +	val = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKSTA);
> +	if (val & PCI_EXP_LNKSTA_LBMS) {
> +		current_link_width = (val & PCI_EXP_LNKSTA_NLW) >>
> +				     PCI_EXP_LNKSTA_NLW_SHIFT;
> +		if (pcie->init_link_width > current_link_width) {
> +			dev_warn(pci->dev, "PCIe link is bad, width reduced\n");
> +			val = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base +
> +						PCI_EXP_LNKCTL2);
> +			val &= ~PCI_EXP_LNKCTL2_TLS;
> +			val |= PCI_EXP_LNKCTL2_TLS_2_5GT;
> +			dw_pcie_writew_dbi(pci, pcie->pcie_cap_base +
> +					   PCI_EXP_LNKCTL2, val);
> +
> +			val = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base +
> +						PCI_EXP_LNKCTL);
> +			val |= PCI_EXP_LNKCTL_RL;
> +			dw_pcie_writew_dbi(pci, pcie->pcie_cap_base +
> +					   PCI_EXP_LNKCTL, val);
> +		}
> +	}
> +}
> +
> +static irqreturn_t tegra_pcie_rp_irq_handler(struct tegra_pcie_dw *pcie)
> +{
> +	struct dw_pcie *pci = &pcie->pci;
> +	struct pcie_port *pp = &pci->pp;
> +	u32 val, tmp;
> +	u16 val_w;
> +
> +	val = appl_readl(pcie, APPL_INTR_STATUS_L0);
> +	if (val & APPL_INTR_STATUS_L0_LINK_STATE_INT) {
> +		val = appl_readl(pcie, APPL_INTR_STATUS_L1_0_0);
> +		if (val & APPL_INTR_STATUS_L1_0_0_LINK_REQ_RST_NOT_CHGED) {
> +			appl_writel(pcie, val, APPL_INTR_STATUS_L1_0_0);
> +
> +			/* SBR & Surprise Link Down WAR */
> +			val = appl_readl(pcie, APPL_CAR_RESET_OVRD);
> +			val &= ~APPL_CAR_RESET_OVRD_CYA_OVERRIDE_CORE_RST_N;
> +			appl_writel(pcie, val, APPL_CAR_RESET_OVRD);
> +			udelay(1);
> +			val = appl_readl(pcie, APPL_CAR_RESET_OVRD);
> +			val |= APPL_CAR_RESET_OVRD_CYA_OVERRIDE_CORE_RST_N;
> +			appl_writel(pcie, val, APPL_CAR_RESET_OVRD);
> +
> +			val = dw_pcie_readl_dbi(pci, PORT_LOGIC_GEN2_CTRL);
> +			val |= PORT_LOGIC_GEN2_CTRL_DIRECT_SPEED_CHANGE;
> +			dw_pcie_writel_dbi(pci, PORT_LOGIC_GEN2_CTRL, val);
> +		}
> +	}
> +
> +	if (val & APPL_INTR_STATUS_L0_INT_INT) {
> +		val = appl_readl(pcie, APPL_INTR_STATUS_L1_8_0);
> +		if (val & APPL_INTR_STATUS_L1_8_0_AUTO_BW_INT_STS) {
> +			appl_writel(pcie,
> +				    APPL_INTR_STATUS_L1_8_0_AUTO_BW_INT_STS,
> +				    APPL_INTR_STATUS_L1_8_0);
> +			apply_bad_link_workaround(pp);
> +		}
> +		if (val & APPL_INTR_STATUS_L1_8_0_BW_MGT_INT_STS) {
> +			appl_writel(pcie,
> +				    APPL_INTR_STATUS_L1_8_0_BW_MGT_INT_STS,
> +				    APPL_INTR_STATUS_L1_8_0);
> +
> +			val_w = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base +
> +						  PCI_EXP_LNKSTA);
> +			dev_dbg(pci->dev, "Link Speed : Gen-%u\n", val_w &
> +				PCI_EXP_LNKSTA_CLS);
> +		}
> +	}
> +
> +	val = appl_readl(pcie, APPL_INTR_STATUS_L0);
> +	if (val & APPL_INTR_STATUS_L0_CDM_REG_CHK_INT) {
> +		val = appl_readl(pcie, APPL_INTR_STATUS_L1_18);
> +		tmp = dw_pcie_readl_dbi(pci, PCIE_PL_CHK_REG_CONTROL_STATUS);
> +		if (val & APPL_INTR_STATUS_L1_18_CDM_REG_CHK_CMPLT) {
> +			dev_info(pci->dev, "CDM check complete\n");
> +			tmp |= PCIE_PL_CHK_REG_CHK_REG_COMPLETE;
> +		}
> +		if (val & APPL_INTR_STATUS_L1_18_CDM_REG_CHK_CMP_ERR) {
> +			dev_err(pci->dev, "CDM comparison mismatch\n");
> +			tmp |= PCIE_PL_CHK_REG_CHK_REG_COMPARISON_ERROR;
> +		}
> +		if (val & APPL_INTR_STATUS_L1_18_CDM_REG_CHK_LOGIC_ERR) {
> +			dev_err(pci->dev, "CDM Logic error\n");
> +			tmp |= PCIE_PL_CHK_REG_CHK_REG_LOGIC_ERROR;
> +		}
> +		dw_pcie_writel_dbi(pci, PCIE_PL_CHK_REG_CONTROL_STATUS, tmp);
> +		tmp = dw_pcie_readl_dbi(pci, PCIE_PL_CHK_REG_ERR_ADDR);
> +		dev_err(pci->dev, "CDM Error Address Offset = 0x%08X\n", tmp);
> +	}
> +
> +	return IRQ_HANDLED;
> +}
> +
> +static irqreturn_t tegra_pcie_irq_handler(int irq, void *arg)
> +{
> +	struct tegra_pcie_dw *pcie = arg;
> +
> +	if (pcie->mode == DW_PCIE_RC_TYPE)
> +		return tegra_pcie_rp_irq_handler(pcie);

What's the point of registering the handler if mode != DW_PCIE_RC_TYPE ?

> +	return IRQ_NONE;
> +}
> +
> +static int tegra_pcie_dw_rd_own_conf(struct pcie_port *pp, int where, int size,
> +				     u32 *val)
> +{
> +	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
> +
> +	/*
> +	 * This is an endpoint mode specific register happen to appear even
> +	 * when controller is operating in root port mode and system hangs
> +	 * when it is accessed with link being in ASPM-L1 state.
> +	 * So skip accessing it altogether
> +	 */
> +	if (where == PORT_LOGIC_MSIX_DOORBELL) {
> +		*val = 0x00000000;
> +		return PCIBIOS_SUCCESSFUL;
> +	}
> +
> +	return dw_pcie_read(pci->dbi_base + where, size, val);
> +}
> +
> +static int tegra_pcie_dw_wr_own_conf(struct pcie_port *pp, int where, int size,
> +				     u32 val)
> +{
> +	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
> +
> +	/*
> +	 * This is an endpoint mode specific register happen to appear even
> +	 * when controller is operating in root port mode and system hangs
> +	 * when it is accessed with link being in ASPM-L1 state.
> +	 * So skip accessing it altogether
> +	 */
> +	if (where == PORT_LOGIC_MSIX_DOORBELL)
> +		return PCIBIOS_SUCCESSFUL;
> +
> +	return dw_pcie_write(pci->dbi_base + where, size, val);
> +}
> +
> +#if defined(CONFIG_PCIEASPM)
> +static void disable_aspm_l11(struct tegra_pcie_dw *pcie)
> +{
> +	u32 val;
> +
> +	val = dw_pcie_readl_dbi(&pcie->pci, pcie->cfg_link_cap_l1sub);
> +	val &= ~PCI_L1SS_CAP_ASPM_L1_1;
> +	dw_pcie_writel_dbi(&pcie->pci, pcie->cfg_link_cap_l1sub, val);
> +}
> +
> +static void disable_aspm_l12(struct tegra_pcie_dw *pcie)
> +{
> +	u32 val;
> +
> +	val = dw_pcie_readl_dbi(&pcie->pci, pcie->cfg_link_cap_l1sub);
> +	val &= ~PCI_L1SS_CAP_ASPM_L1_2;
> +	dw_pcie_writel_dbi(&pcie->pci, pcie->cfg_link_cap_l1sub, val);
> +}
> +
> +static inline u32 event_counter_prog(struct tegra_pcie_dw *pcie, u32 event)
> +{
> +	u32 val;
> +
> +	val = dw_pcie_readl_dbi(&pcie->pci, event_cntr_ctrl_offset[pcie->cid]);
> +	val &= ~(EVENT_COUNTER_EVENT_SEL_MASK << EVENT_COUNTER_EVENT_SEL_SHIFT);
> +	val |= EVENT_COUNTER_GROUP_5 << EVENT_COUNTER_GROUP_SEL_SHIFT;
> +	val |= event << EVENT_COUNTER_EVENT_SEL_SHIFT;
> +	val |= EVENT_COUNTER_ENABLE_ALL << EVENT_COUNTER_ENABLE_SHIFT;
> +	dw_pcie_writel_dbi(&pcie->pci, event_cntr_ctrl_offset[pcie->cid], val);
> +	val = dw_pcie_readl_dbi(&pcie->pci, event_cntr_data_offset[pcie->cid]);
> +	return val;
> +}
> +
> +static int aspm_state_cnt(struct seq_file *s, void *data)
> +{
> +	struct tegra_pcie_dw *pcie = (struct tegra_pcie_dw *)
> +				     dev_get_drvdata(s->private);
> +	u32 val;
> +
> +	seq_printf(s, "Tx L0s entry count : %u\n",
> +		   event_counter_prog(pcie, EVENT_COUNTER_EVENT_Tx_L0S));
> +
> +	seq_printf(s, "Rx L0s entry count : %u\n",
> +		   event_counter_prog(pcie, EVENT_COUNTER_EVENT_Rx_L0S));
> +
> +	seq_printf(s, "Link L1 entry count : %u\n",
> +		   event_counter_prog(pcie, EVENT_COUNTER_EVENT_L1));
> +
> +	seq_printf(s, "Link L1.1 entry count : %u\n",
> +		   event_counter_prog(pcie, EVENT_COUNTER_EVENT_L1_1));
> +
> +	seq_printf(s, "Link L1.2 entry count : %u\n",
> +		   event_counter_prog(pcie, EVENT_COUNTER_EVENT_L1_2));
> +
> +	/* Clear all counters */
> +	dw_pcie_writel_dbi(&pcie->pci, event_cntr_ctrl_offset[pcie->cid],
> +			   EVENT_COUNTER_ALL_CLEAR);
> +
> +	/* Re-enable counting */
> +	val = EVENT_COUNTER_ENABLE_ALL << EVENT_COUNTER_ENABLE_SHIFT;
> +	val |= EVENT_COUNTER_GROUP_5 << EVENT_COUNTER_GROUP_SEL_SHIFT;
> +	dw_pcie_writel_dbi(&pcie->pci, event_cntr_ctrl_offset[pcie->cid], val);
> +
> +	return 0;
> +}
> +#endif
> +
> +static int init_debugfs(struct tegra_pcie_dw *pcie)
> +{
> +#if defined(CONFIG_PCIEASPM)
> +	struct dentry *d;
> +
> +	d = debugfs_create_devm_seqfile(pcie->dev, "aspm_state_cnt",
> +					pcie->debugfs, aspm_state_cnt);
> +	if (IS_ERR_OR_NULL(d))
> +		dev_err(pcie->dev,
> +			"Failed to create debugfs file \"aspm_state_cnt\"\n");
> +#endif
> +	return 0;
> +}
> +
> +static void tegra_pcie_enable_system_interrupts(struct pcie_port *pp)
> +{
> +	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
> +	struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);
> +	u32 val;
> +	u16 val_w;
> +
> +	val = appl_readl(pcie, APPL_INTR_EN_L0_0);
> +	val |= APPL_INTR_EN_L0_0_LINK_STATE_INT_EN;
> +	appl_writel(pcie, val, APPL_INTR_EN_L0_0);
> +
> +	val = appl_readl(pcie, APPL_INTR_EN_L1_0_0);
> +	val |= APPL_INTR_EN_L1_0_0_LINK_REQ_RST_NOT_INT_EN;
> +	appl_writel(pcie, val, APPL_INTR_EN_L1_0_0);
> +
> +	if (pcie->enable_cdm_check) {
> +		val = appl_readl(pcie, APPL_INTR_EN_L0_0);
> +		val |= APPL_INTR_EN_L0_0_CDM_REG_CHK_INT_EN;
> +		appl_writel(pcie, val, APPL_INTR_EN_L0_0);
> +
> +		val = appl_readl(pcie, APPL_INTR_EN_L1_18);
> +		val |= APPL_INTR_EN_L1_18_CDM_REG_CHK_CMP_ERR;
> +		val |= APPL_INTR_EN_L1_18_CDM_REG_CHK_LOGIC_ERR;
> +		appl_writel(pcie, val, APPL_INTR_EN_L1_18);
> +	}
> +
> +	val_w = dw_pcie_readw_dbi(&pcie->pci, pcie->pcie_cap_base +
> +				  PCI_EXP_LNKSTA);
> +	pcie->init_link_width = (val_w & PCI_EXP_LNKSTA_NLW) >>
> +				PCI_EXP_LNKSTA_NLW_SHIFT;
> +
> +	val_w = dw_pcie_readw_dbi(&pcie->pci, pcie->pcie_cap_base +
> +				  PCI_EXP_LNKCTL);
> +	val_w |= PCI_EXP_LNKCTL_LBMIE;
> +	dw_pcie_writew_dbi(&pcie->pci, pcie->pcie_cap_base + PCI_EXP_LNKCTL,
> +			   val_w);
> +}
> +
> +static void tegra_pcie_enable_legacy_interrupts(struct pcie_port *pp)
> +{
> +	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
> +	struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);
> +	u32 val;
> +
> +	/* Enable legacy interrupt generation */
> +	val = appl_readl(pcie, APPL_INTR_EN_L0_0);
> +	val |= APPL_INTR_EN_L0_0_SYS_INTR_EN;
> +	val |= APPL_INTR_EN_L0_0_INT_INT_EN;
> +	appl_writel(pcie, val, APPL_INTR_EN_L0_0);
> +
> +	val = appl_readl(pcie, APPL_INTR_EN_L1_8_0);
> +	val |= APPL_INTR_EN_L1_8_INTX_EN;
> +	val |= APPL_INTR_EN_L1_8_AUTO_BW_INT_EN;
> +	val |= APPL_INTR_EN_L1_8_BW_MGT_INT_EN;
> +	if (IS_ENABLED(CONFIG_PCIEAER))
> +		val |= APPL_INTR_EN_L1_8_AER_INT_EN;
> +	appl_writel(pcie, val, APPL_INTR_EN_L1_8_0);
> +}
> +
> +static void tegra_pcie_enable_msi_interrupts(struct pcie_port *pp)
> +{
> +	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
> +	struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);
> +	u32 val;
> +
> +	dw_pcie_msi_init(pp);
> +
> +	/* Enable MSI interrupt generation */
> +	val = appl_readl(pcie, APPL_INTR_EN_L0_0);
> +	val |= APPL_INTR_EN_L0_0_SYS_MSI_INTR_EN;
> +	val |= APPL_INTR_EN_L0_0_MSI_RCV_INT_EN;
> +	appl_writel(pcie, val, APPL_INTR_EN_L0_0);
> +}
> +
> +static void tegra_pcie_enable_interrupts(struct pcie_port *pp)
> +{
> +	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
> +	struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);
> +
> +	/* Clear interrupt statuses before enabling interrupts */
> +	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L0);
> +	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_0_0);
> +	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_1);
> +	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_2);
> +	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_3);
> +	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_6);
> +	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_7);
> +	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_8_0);
> +	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_9);
> +	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_10);
> +	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_11);
> +	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_13);
> +	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_14);
> +	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_15);
> +	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_17);
> +
> +	tegra_pcie_enable_system_interrupts(pp);
> +	tegra_pcie_enable_legacy_interrupts(pp);
> +	if (IS_ENABLED(CONFIG_PCI_MSI))
> +		tegra_pcie_enable_msi_interrupts(pp);
> +}
> +
> +static void config_gen3_gen4_eq_presets(struct tegra_pcie_dw *pcie)
> +{
> +	struct dw_pcie *pci = &pcie->pci;
> +	u32 val, offset, i;
> +
> +	/* Program init preset */
> +	for (i = 0; i < pcie->num_lanes; i++) {
> +		dw_pcie_read(pci->dbi_base + CAP_SPCIE_CAP_OFF
> +				 + (i * 2), 2, &val);
> +		val &= ~CAP_SPCIE_CAP_OFF_DSP_TX_PRESET0_MASK;
> +		val |= GEN3_GEN4_EQ_PRESET_INIT;
> +		val &= ~CAP_SPCIE_CAP_OFF_USP_TX_PRESET0_MASK;
> +		val |= (GEN3_GEN4_EQ_PRESET_INIT <<
> +			   CAP_SPCIE_CAP_OFF_USP_TX_PRESET0_SHIFT);
> +		dw_pcie_write(pci->dbi_base + CAP_SPCIE_CAP_OFF
> +				 + (i * 2), 2, val);
> +
> +		offset = dw_pcie_find_ext_capability(pci,
> +						     PCI_EXT_CAP_ID_PL_16GT) +
> +				PCI_PL_16GT_LE_CTRL;
> +		dw_pcie_read(pci->dbi_base + offset + i, 1, &val);
> +		val &= ~PCI_PL_16GT_LE_CTRL_DSP_TX_PRESET_MASK;
> +		val |= GEN3_GEN4_EQ_PRESET_INIT;
> +		val &= ~PCI_PL_16GT_LE_CTRL_USP_TX_PRESET_MASK;
> +		val |= (GEN3_GEN4_EQ_PRESET_INIT <<
> +			PCI_PL_16GT_LE_CTRL_USP_TX_PRESET_SHIFT);
> +		dw_pcie_write(pci->dbi_base + offset + i, 1, val);
> +	}
> +
> +	val = dw_pcie_readl_dbi(pci, GEN3_RELATED_OFF);
> +	val &= ~GEN3_RELATED_OFF_RATE_SHADOW_SEL_MASK;
> +	dw_pcie_writel_dbi(pci, GEN3_RELATED_OFF, val);
> +
> +	val = dw_pcie_readl_dbi(pci, GEN3_EQ_CONTROL_OFF);
> +	val &= ~GEN3_EQ_CONTROL_OFF_PSET_REQ_VEC_MASK;
> +	val |= (0x3ff << GEN3_EQ_CONTROL_OFF_PSET_REQ_VEC_SHIFT);
> +	val &= ~GEN3_EQ_CONTROL_OFF_FB_MODE_MASK;
> +	dw_pcie_writel_dbi(pci, GEN3_EQ_CONTROL_OFF, val);
> +
> +	val = dw_pcie_readl_dbi(pci, GEN3_RELATED_OFF);
> +	val &= ~GEN3_RELATED_OFF_RATE_SHADOW_SEL_MASK;
> +	val |= (0x1 << GEN3_RELATED_OFF_RATE_SHADOW_SEL_SHIFT);
> +	dw_pcie_writel_dbi(pci, GEN3_RELATED_OFF, val);
> +
> +	val = dw_pcie_readl_dbi(pci, GEN3_EQ_CONTROL_OFF);
> +	val &= ~GEN3_EQ_CONTROL_OFF_PSET_REQ_VEC_MASK;
> +	val |= (0x360 << GEN3_EQ_CONTROL_OFF_PSET_REQ_VEC_SHIFT);
> +	val &= ~GEN3_EQ_CONTROL_OFF_FB_MODE_MASK;
> +	dw_pcie_writel_dbi(pci, GEN3_EQ_CONTROL_OFF, val);
> +
> +	val = dw_pcie_readl_dbi(pci, GEN3_RELATED_OFF);
> +	val &= ~GEN3_RELATED_OFF_RATE_SHADOW_SEL_MASK;
> +	dw_pcie_writel_dbi(pci, GEN3_RELATED_OFF, val);
> +}
> +
> +static int tegra_pcie_dw_host_init(struct pcie_port *pp)
> +{
> +	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
> +	struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);
> +	u32 val, tmp, offset, speed;
> +	unsigned int count;
> +	u16 val_w;
> +
> +core_init:

I think it would be cleaner to include all registers programming
within a function and we remove this label (and goto) below.

> +	count = 200;

It would be easier to read if we could inizialize this value closer
to where it is actually used. Actually explaining why 200 as value
was chosen would be appreciated.

> +#if defined(CONFIG_PCIEASPM)
> +	offset = dw_pcie_find_ext_capability(pci, PCI_EXT_CAP_ID_L1SS);
> +	pcie->cfg_link_cap_l1sub = offset + PCI_L1SS_CAP;
> +#endif

Please try to factor this out within a single #ifdef guard.

> +	val = dw_pcie_readl_dbi(pci, PCI_IO_BASE);
> +	val &= ~(IO_BASE_IO_DECODE | IO_BASE_IO_DECODE_BIT8);
> +	dw_pcie_writel_dbi(pci, PCI_IO_BASE, val);
> +
> +	val = dw_pcie_readl_dbi(pci, PCI_PREF_MEMORY_BASE);
> +	val |= CFG_PREF_MEM_LIMIT_BASE_MEM_DECODE;
> +	val |= CFG_PREF_MEM_LIMIT_BASE_MEM_LIMIT_DECODE;
> +	dw_pcie_writel_dbi(pci, PCI_PREF_MEMORY_BASE, val);
> +
> +	dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, 0);
> +
> +	/* Configure FTS */
> +	val = dw_pcie_readl_dbi(pci, PORT_LOGIC_ACK_F_ASPM_CTRL);
> +	val &= ~(N_FTS_MASK << N_FTS_SHIFT);
> +	val |= N_FTS_VAL << N_FTS_SHIFT;
> +	dw_pcie_writel_dbi(pci, PORT_LOGIC_ACK_F_ASPM_CTRL, val);
> +
> +	val = dw_pcie_readl_dbi(pci, PORT_LOGIC_GEN2_CTRL);
> +	val &= ~FTS_MASK;
> +	val |= FTS_VAL;
> +	dw_pcie_writel_dbi(pci, PORT_LOGIC_GEN2_CTRL, val);
> +
> +	/* Enable as 0xFFFF0001 response for CRS */
> +	val = dw_pcie_readl_dbi(pci, PORT_LOGIC_AMBA_ERROR_RESPONSE_DEFAULT);
> +	val &= ~(AMBA_ERROR_RESPONSE_CRS_MASK << AMBA_ERROR_RESPONSE_CRS_SHIFT);
> +	val |= (AMBA_ERROR_RESPONSE_CRS_OKAY_FFFF0001 <<
> +		AMBA_ERROR_RESPONSE_CRS_SHIFT);
> +	dw_pcie_writel_dbi(pci, PORT_LOGIC_AMBA_ERROR_RESPONSE_DEFAULT, val);
> +
> +	/* Configure Max Speed from DT */
> +	if (pcie->max_speed && pcie->max_speed != -EINVAL) {
> +		val = dw_pcie_readl_dbi(pci, pcie->pcie_cap_base +
> +					PCI_EXP_LNKCAP);
> +		val &= ~PCI_EXP_LNKCAP_SLS;
> +		val |= pcie->max_speed;
> +		dw_pcie_writel_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKCAP,
> +				   val);
> +	}
> +
> +	/* Configure Max lane width from DT */
> +	val = dw_pcie_readl_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKCAP);
> +	val &= ~PCI_EXP_LNKCAP_MLW;
> +	val |= (pcie->num_lanes << PCI_EXP_LNKSTA_NLW_SHIFT);
> +	dw_pcie_writel_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKCAP, val);
> +
> +	config_gen3_gen4_eq_presets(pcie);
> +
> +#if defined(CONFIG_PCIEASPM)
> +	/* Enable ASPM counters */
> +	val = EVENT_COUNTER_ENABLE_ALL << EVENT_COUNTER_ENABLE_SHIFT;
> +	val |= EVENT_COUNTER_GROUP_5 << EVENT_COUNTER_GROUP_SEL_SHIFT;
> +	dw_pcie_writel_dbi(pci, event_cntr_ctrl_offset[pcie->cid], val);
> +
> +	/* Program T_cmrt and T_pwr_on values */
> +	val = dw_pcie_readl_dbi(pci, pcie->cfg_link_cap_l1sub);
> +	val &= ~(PCI_L1SS_CAP_CM_RESTORE_TIME | PCI_L1SS_CAP_P_PWR_ON_VALUE);
> +	val |= (pcie->aspm_cmrt << 8);
> +	val |= (pcie->aspm_pwr_on_t << 19);
> +	dw_pcie_writel_dbi(pci, pcie->cfg_link_cap_l1sub, val);
> +
> +	/* Program L0s and L1 entrance latencies */
> +	val = dw_pcie_readl_dbi(pci, PORT_LOGIC_ACK_F_ASPM_CTRL);
> +	val &= ~L0S_ENTRANCE_LAT_MASK;
> +	val |= (pcie->aspm_l0s_enter_lat << L0S_ENTRANCE_LAT_SHIFT);
> +	val |= ENTER_ASPM;
> +	dw_pcie_writel_dbi(pci, PORT_LOGIC_ACK_F_ASPM_CTRL, val);
> +#endif

See above.

> +	val = dw_pcie_readl_dbi(pci, GEN3_RELATED_OFF);
> +	val &= ~GEN3_RELATED_OFF_GEN3_ZRXDC_NONCOMPL;
> +	dw_pcie_writel_dbi(pci, GEN3_RELATED_OFF, val);
> +
> +	if (pcie->update_fc_fixup) {
> +		val = dw_pcie_readl_dbi(pci, CFG_TIMER_CTRL_MAX_FUNC_NUM_OFF);
> +		val |= 0x1 << CFG_TIMER_CTRL_ACK_NAK_SHIFT;
> +		dw_pcie_writel_dbi(pci, CFG_TIMER_CTRL_MAX_FUNC_NUM_OFF, val);
> +	}
> +
> +	dw_pcie_setup_rc(pp);
> +
> +	clk_set_rate(pcie->core_clk, GEN4_CORE_CLK_FREQ);
> +
> +	/* Assert RST */
> +	val = appl_readl(pcie, APPL_PINMUX);
> +	val &= ~APPL_PINMUX_PEX_RST;
> +	appl_writel(pcie, val, APPL_PINMUX);
> +
> +	usleep_range(100, 200);
> +
> +	/* Enable LTSSM */
> +	val = appl_readl(pcie, APPL_CTRL);
> +	val |= APPL_CTRL_LTSSM_EN;
> +	appl_writel(pcie, val, APPL_CTRL);
> +
> +	/* De-assert RST */
> +	val = appl_readl(pcie, APPL_PINMUX);
> +	val |= APPL_PINMUX_PEX_RST;
> +	appl_writel(pcie, val, APPL_PINMUX);
> +
> +	msleep(100);
> +
> +	val_w = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKSTA);
> +	while (!(val_w & PCI_EXP_LNKSTA_DLLLA)) {
> +		if (count) {
> +			dev_dbg(pci->dev, "Waiting for link up\n");
> +			usleep_range(1000, 2000);
> +			val_w = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base +
> +						  PCI_EXP_LNKSTA);
> +			count--;
> +			continue;
> +		}
> +
> +		val = appl_readl(pcie, APPL_DEBUG);
> +		val &= APPL_DEBUG_LTSSM_STATE_MASK;
> +		val >>= APPL_DEBUG_LTSSM_STATE_SHIFT;
> +		tmp = appl_readl(pcie, APPL_LINK_STATUS);
> +		tmp &= APPL_LINK_STATUS_RDLH_LINK_UP;
> +		if (!(val == 0x11 && !tmp)) {
> +			dev_info(pci->dev, "Link is down\n");
> +			return 0;
> +		}
> +
> +		dev_info(pci->dev, "Link is down in DLL");
> +		dev_info(pci->dev, "Trying again with DLFE disabled\n");
> +		/* Disable LTSSM */
> +		val = appl_readl(pcie, APPL_CTRL);
> +		val &= ~APPL_CTRL_LTSSM_EN;
> +		appl_writel(pcie, val, APPL_CTRL);
> +
> +		reset_control_assert(pcie->core_rst);
> +		reset_control_deassert(pcie->core_rst);
> +
> +		offset = dw_pcie_find_ext_capability(pci, PCI_EXT_CAP_ID_DLF);
> +		val = dw_pcie_readl_dbi(pci, offset + PCI_DLF_CAP);
> +		val &= ~PCI_DLF_EXCHANGE_ENABLE;
> +		dw_pcie_writel_dbi(pci, offset, val);
> +
> +		/* Retry now with DLF Exchange disabled */
> +		goto core_init;

This goto upwards honestly is a bit horrible. Use some functions
to make the code more readable.

> +	}
> +	dev_dbg(pci->dev, "Link is up\n");
> +
> +	speed = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKSTA) &
> +		PCI_EXP_LNKSTA_CLS;
> +	clk_set_rate(pcie->core_clk, pcie_gen_freq[speed - 1]);
> +
> +	tegra_pcie_enable_interrupts(pp);
> +
> +	return 0;
> +}
> +
> +static int tegra_pcie_dw_link_up(struct dw_pcie *pci)
> +{
> +	struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);
> +	u32 val = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKSTA);
> +
> +	return !!(val & PCI_EXP_LNKSTA_DLLLA);

It looks like this can be common code reusable by other DWC drivers ?

Not mandatory, just a hint.

> +}
> +
> +static void tegra_pcie_set_msi_vec_num(struct pcie_port *pp)
> +{
> +	pp->num_vectors = MAX_MSI_IRQS;
> +}
> +
> +static const struct dw_pcie_ops tegra_dw_pcie_ops = {
> +	.link_up = tegra_pcie_dw_link_up,
> +};
> +
> +static struct dw_pcie_host_ops tegra_pcie_dw_host_ops = {
> +	.rd_own_conf = tegra_pcie_dw_rd_own_conf,
> +	.wr_own_conf = tegra_pcie_dw_wr_own_conf,
> +	.host_init = tegra_pcie_dw_host_init,
> +	.set_num_vectors = tegra_pcie_set_msi_vec_num,
> +};
> +
> +static void tegra_pcie_disable_phy(struct tegra_pcie_dw *pcie)
> +{
> +	unsigned int phy_count = pcie->phy_count;
> +
> +	while (phy_count--) {
> +		phy_power_off(pcie->phys[phy_count]);
> +		phy_exit(pcie->phys[phy_count]);
> +	}
> +}
> +
> +static int tegra_pcie_enable_phy(struct tegra_pcie_dw *pcie)
> +{
> +	unsigned int i;
> +	int ret;
> +
> +	for (i = 0; i < pcie->phy_count; i++) {
> +		ret = phy_init(pcie->phys[i]);
> +		if (ret < 0)
> +			goto phy_power_off;
> +
> +		ret = phy_power_on(pcie->phys[i]);
> +		if (ret < 0)
> +			goto phy_exit;
> +	}
> +
> +	return 0;
> +
> +phy_power_off:
> +	while (i--) {
> +		phy_power_off(pcie->phys[i]);
> +phy_exit:
> +		phy_exit(pcie->phys[i]);
> +	}
> +
> +	return ret;
> +}
> +
> +static int tegra_pcie_dw_parse_dt(struct tegra_pcie_dw *pcie)
> +{
> +	struct device_node *np = pcie->dev->of_node;
> +	int ret;
> +
> +	ret = of_property_read_u32(np, "nvidia,aspm-cmrt-us", &pcie->aspm_cmrt);
> +	if (ret < 0) {
> +		dev_info(pcie->dev, "Failed to read ASPM T_cmrt: %d\n", ret);
> +		return ret;
> +	}
> +
> +	ret = of_property_read_u32(np, "nvidia,aspm-pwr-on-t-us",
> +				   &pcie->aspm_pwr_on_t);
> +	if (ret < 0)
> +		dev_info(pcie->dev, "Failed to read ASPM Power On time: %d\n",
> +			 ret);
> +
> +	ret = of_property_read_u32(np, "nvidia,aspm-l0s-entrance-latency-us",
> +				   &pcie->aspm_l0s_enter_lat);
> +	if (ret < 0)
> +		dev_info(pcie->dev,
> +			 "Failed to read ASPM L0s Entrance latency: %d\n", ret);
> +
> +	ret = of_property_read_u32(np, "num-lanes", &pcie->num_lanes);
> +	if (ret < 0) {
> +		dev_err(pcie->dev, "Failed to read num-lanes: %d\n", ret);
> +		return ret;
> +	}
> +
> +	pcie->max_speed = of_pci_get_max_link_speed(np);
> +
> +	ret = of_property_read_u32_index(np, "nvidia,bpmp", 1, &pcie->cid);
> +	if (ret) {
> +		dev_err(pcie->dev, "Failed to read Controller-ID: %d\n", ret);
> +		return ret;
> +	}
> +
> +	pcie->phy_count = of_property_count_strings(np, "phy-names");
> +	if (pcie->phy_count < 0) {
> +		dev_err(pcie->dev, "Failed to find PHY entries: %d\n",
> +			pcie->phy_count);
> +		return pcie->phy_count;
> +	}
> +
> +	if (of_property_read_bool(np, "nvidia,update-fc-fixup"))
> +		pcie->update_fc_fixup = true;
> +
> +	pcie->supports_clkreq =
> +		of_property_read_bool(pcie->dev->of_node, "supports-clkreq");
> +
> +	pcie->enable_cdm_check =
> +		of_property_read_bool(np, "snps,enable-cdm-check");
> +
> +	return 0;
> +}
> +
> +static int tegra_pcie_bpmp_set_ctrl_state(struct tegra_pcie_dw *pcie,
> +					  bool enable)
> +{
> +	struct mrq_uphy_response resp;
> +	struct tegra_bpmp_message msg;
> +	struct mrq_uphy_request req;
> +	int err;
> +
> +	if (pcie->cid == 5)
> +		return 0;

What's wrong with cid == 5 ? Explain please.

> +	memset(&req, 0, sizeof(req));
> +	memset(&resp, 0, sizeof(resp));
> +
> +	req.cmd = CMD_UPHY_PCIE_CONTROLLER_STATE;
> +	req.controller_state.pcie_controller = pcie->cid;
> +	req.controller_state.enable = enable;
> +
> +	memset(&msg, 0, sizeof(msg));
> +	msg.mrq = MRQ_UPHY;
> +	msg.tx.data = &req;
> +	msg.tx.size = sizeof(req);
> +	msg.rx.data = &resp;
> +	msg.rx.size = sizeof(resp);
> +
> +	if (irqs_disabled())

Can you explain to me what this check is meant to achieve please ?

> +		err = tegra_bpmp_transfer_atomic(pcie->bpmp, &msg);
> +	else
> +		err = tegra_bpmp_transfer(pcie->bpmp, &msg);
> +
> +	return err;
> +}
> +
> +static void tegra_pcie_downstream_dev_to_D0(struct tegra_pcie_dw *pcie)
> +{
> +	struct pcie_port *pp = &pcie->pci.pp;
> +	struct pci_dev *pdev;
> +	struct pci_bus *child;
> +
> +	list_for_each_entry(child, &pp->root_bus->children, node) {
> +		/* Bring downstream devices to D0 if they are not already in */
> +		if (child->parent == pp->root_bus)
> +			break;
> +	}
> +	list_for_each_entry(pdev, &child->devices, bus_list) {
> +		if (PCI_SLOT(pdev->devfn) == 0) {
> +			if (pci_set_power_state(pdev, PCI_D0))
> +				dev_err(pcie->dev,
> +					"Failed to transition %s to D0 state\n",
> +					dev_name(&pdev->dev));
> +		}
> +	}
> +}
> +
> +static int tegra_pcie_config_controller(struct tegra_pcie_dw *pcie,
> +					bool en_hw_hot_rst)
> +{
> +	int ret;
> +	u32 val;
> +
> +	ret = tegra_pcie_bpmp_set_ctrl_state(pcie, true);
> +	if (ret) {
> +		dev_err(pcie->dev,
> +			"Failed to enable controller %u: %d\n", pcie->cid, ret);
> +		return ret;
> +	}
> +
> +	ret = regulator_enable(pcie->pex_ctl_supply);
> +	if (ret < 0) {
> +		dev_err(pcie->dev, "Failed to enable regulator: %d\n", ret);
> +		goto fail_reg_en;
> +	}
> +
> +	ret = clk_prepare_enable(pcie->core_clk);
> +	if (ret) {
> +		dev_err(pcie->dev, "Failed to enable core clock: %d\n", ret);
> +		goto fail_core_clk;
> +	}
> +
> +	ret = reset_control_deassert(pcie->core_apb_rst);
> +	if (ret) {
> +		dev_err(pcie->dev, "Failed to deassert core APB reset: %d\n",
> +			ret);
> +		goto fail_core_apb_rst;
> +	}
> +
> +	if (en_hw_hot_rst) {
> +		/* Enable HW_HOT_RST mode */
> +		val = appl_readl(pcie, APPL_CTRL);
> +		val &= ~(APPL_CTRL_HW_HOT_RST_MODE_MASK <<
> +			 APPL_CTRL_HW_HOT_RST_MODE_SHIFT);
> +		val |= APPL_CTRL_HW_HOT_RST_EN;
> +		appl_writel(pcie, val, APPL_CTRL);
> +	}
> +
> +	ret = tegra_pcie_enable_phy(pcie);
> +	if (ret) {
> +		dev_err(pcie->dev, "Failed to enable PHY: %d\n", ret);
> +		goto fail_phy;
> +	}
> +
> +	/* Update CFG base address */
> +	appl_writel(pcie, pcie->dbi_res->start & APPL_CFG_BASE_ADDR_MASK,
> +		    APPL_CFG_BASE_ADDR);
> +
> +	/* Configure this core for RP mode operation */
> +	appl_writel(pcie, APPL_DM_TYPE_RP, APPL_DM_TYPE);
> +
> +	appl_writel(pcie, 0x0, APPL_CFG_SLCG_OVERRIDE);
> +
> +	val = appl_readl(pcie, APPL_CTRL);
> +	appl_writel(pcie, val | APPL_CTRL_SYS_PRE_DET_STATE, APPL_CTRL);
> +
> +	val = appl_readl(pcie, APPL_CFG_MISC);
> +	val |= (APPL_CFG_MISC_ARCACHE_VAL << APPL_CFG_MISC_ARCACHE_SHIFT);
> +	appl_writel(pcie, val, APPL_CFG_MISC);
> +
> +	if (!pcie->supports_clkreq) {
> +		val = appl_readl(pcie, APPL_PINMUX);
> +		val |= APPL_PINMUX_CLKREQ_OUT_OVRD_EN;
> +		val |= APPL_PINMUX_CLKREQ_OUT_OVRD;
> +		appl_writel(pcie, val, APPL_PINMUX);
> +	}
> +
> +	/* Update iATU_DMA base address */
> +	appl_writel(pcie,
> +		    pcie->atu_dma_res->start & APPL_CFG_IATU_DMA_BASE_ADDR_MASK,
> +		    APPL_CFG_IATU_DMA_BASE_ADDR);
> +
> +	reset_control_deassert(pcie->core_rst);
> +
> +	pcie->pcie_cap_base = dw_pcie_find_capability(&pcie->pci,
> +						      PCI_CAP_ID_EXP);
> +
> +#if defined(CONFIG_PCIEASPM)
> +	/* Disable ASPM-L1SS advertisement as there is no CLKREQ routing */
> +	if (!pcie->supports_clkreq) {
> +		disable_aspm_l11(pcie);
> +		disable_aspm_l12(pcie);
> +	}
> +#endif
> +	return ret;
> +
> +fail_phy:
> +	reset_control_assert(pcie->core_apb_rst);
> +fail_core_apb_rst:
> +	clk_disable_unprepare(pcie->core_clk);
> +fail_core_clk:
> +	regulator_disable(pcie->pex_ctl_supply);
> +fail_reg_en:
> +	tegra_pcie_bpmp_set_ctrl_state(pcie, false);
> +
> +	return ret;
> +}
> +
> +static int __deinit_controller(struct tegra_pcie_dw *pcie)
> +{
> +	int ret;
> +
> +	ret = reset_control_assert(pcie->core_rst);
> +	if (ret) {
> +		dev_err(pcie->dev, "Failed to assert \"core\" reset: %d\n",
> +			ret);
> +		return ret;
> +	}
> +	tegra_pcie_disable_phy(pcie);
> +	ret = reset_control_assert(pcie->core_apb_rst);
> +	if (ret) {
> +		dev_err(pcie->dev, "Failed to assert APB reset: %d\n", ret);
> +		return ret;
> +	}
> +	clk_disable_unprepare(pcie->core_clk);
> +	ret = regulator_disable(pcie->pex_ctl_supply);
> +	if (ret) {
> +		dev_err(pcie->dev, "Failed to disable regulator: %d\n", ret);
> +		return ret;
> +	}
> +	ret = tegra_pcie_bpmp_set_ctrl_state(pcie, false);
> +	if (ret) {
> +		dev_err(pcie->dev, "Failed to disable controller %d: %d\n",
> +			pcie->cid, ret);
> +		return ret;
> +	}
> +	return ret;
> +}
> +
> +static int tegra_pcie_init_controller(struct tegra_pcie_dw *pcie)
> +{
> +	struct dw_pcie *pci = &pcie->pci;
> +	struct pcie_port *pp = &pci->pp;
> +	int ret;
> +
> +	ret = tegra_pcie_config_controller(pcie, false);
> +	if (ret < 0)
> +		return ret;
> +
> +	pp->root_bus_nr = -1;
> +	pp->ops = &tegra_pcie_dw_host_ops;
> +
> +	ret = dw_pcie_host_init(pp);
> +	if (ret < 0) {
> +		dev_err(pcie->dev, "Failed to add PCIe port: %d\n", ret);
> +		goto fail_host_init;
> +	}
> +
> +	return 0;
> +
> +fail_host_init:
> +	return __deinit_controller(pcie);
> +}
> +
> +static int tegra_pcie_try_link_l2(struct tegra_pcie_dw *pcie)
> +{
> +	u32 val;
> +
> +	if (!tegra_pcie_dw_link_up(&pcie->pci))
> +		return 0;
> +
> +	val = appl_readl(pcie, APPL_RADM_STATUS);
> +	val |= APPL_PM_XMT_TURNOFF_STATE;
> +	appl_writel(pcie, val, APPL_RADM_STATUS);
> +
> +	return readl_poll_timeout_atomic(pcie->appl_base + APPL_DEBUG, val,
> +				 val & APPL_DEBUG_PM_LINKST_IN_L2_LAT,
> +				 1, PME_ACK_TIMEOUT);
> +}
> +
> +static void tegra_pcie_dw_pme_turnoff(struct tegra_pcie_dw *pcie)
> +{
> +	u32 data;
> +	int err;
> +
> +	if (!tegra_pcie_dw_link_up(&pcie->pci)) {
> +		dev_dbg(pcie->dev, "PCIe link is not up...!\n");
> +		return;
> +	}
> +
> +	if (tegra_pcie_try_link_l2(pcie)) {
> +		dev_info(pcie->dev, "Link didn't transition to L2 state\n");
> +		/*
> +		 * TX lane clock freq will reset to Gen1 only if link is in L2
> +		 * or detect state.
> +		 * So apply pex_rst to end point to force RP to go into detect
> +		 * state
> +		 */
> +		data = appl_readl(pcie, APPL_PINMUX);
> +		data &= ~APPL_PINMUX_PEX_RST;
> +		appl_writel(pcie, data, APPL_PINMUX);
> +
> +		err = readl_poll_timeout_atomic(pcie->appl_base + APPL_DEBUG,
> +						data,
> +						((data &
> +						APPL_DEBUG_LTSSM_STATE_MASK) >>
> +						APPL_DEBUG_LTSSM_STATE_SHIFT) ==
> +						LTSSM_STATE_PRE_DETECT,
> +						1, LTSSM_TIMEOUT);
> +		if (err) {
> +			dev_info(pcie->dev, "Link didn't go to detect state\n");
> +		} else {
> +			/* Disable LTSSM after link is in detect state */
> +			data = appl_readl(pcie, APPL_CTRL);
> +			data &= ~APPL_CTRL_LTSSM_EN;
> +			appl_writel(pcie, data, APPL_CTRL);
> +		}
> +	}
> +	/*
> +	 * DBI registers may not be accessible after this as PLL-E would be
> +	 * down depending on how CLKREQ is pulled by end point
> +	 */
> +	data = appl_readl(pcie, APPL_PINMUX);
> +	data |= (APPL_PINMUX_CLKREQ_OVERRIDE_EN | APPL_PINMUX_CLKREQ_OVERRIDE);
> +	/* Cut REFCLK to slot */
> +	data |= APPL_PINMUX_CLK_OUTPUT_IN_OVERRIDE_EN;
> +	data &= ~APPL_PINMUX_CLK_OUTPUT_IN_OVERRIDE;
> +	appl_writel(pcie, data, APPL_PINMUX);
> +}
> +
> +static int tegra_pcie_deinit_controller(struct tegra_pcie_dw *pcie)
> +{
> +	/*
> +	 * link doesn't go into L2 state with some of the endpoints with Tegra
> +	 * if they are not in D0 state. So, need to make sure that immediate
> +	 * downstream devices are in D0 state before sending PME_TurnOff to put
> +	 * link into L2 state
> +	 */
> +	tegra_pcie_downstream_dev_to_D0(pcie);
> +	dw_pcie_host_deinit(&pcie->pci.pp);
> +	tegra_pcie_dw_pme_turnoff(pcie);
> +	return __deinit_controller(pcie);
> +}
> +
> +static int tegra_pcie_config_rp(struct tegra_pcie_dw *pcie)
> +{
> +	struct pcie_port *pp = &pcie->pci.pp;
> +	struct device *dev = pcie->dev;
> +	char *name;
> +	int ret;
> +
> +	if (IS_ENABLED(CONFIG_PCI_MSI)) {
> +		pp->msi_irq = of_irq_get_byname(dev->of_node, "msi");
> +		if (!pp->msi_irq) {
> +			dev_err(dev, "Failed to get MSI interrupt\n");
> +			return -ENODEV;
> +		}
> +	}
> +
> +	pm_runtime_enable(dev);
> +	ret = pm_runtime_get_sync(dev);
> +	if (ret < 0) {
> +		dev_err(dev, "Failed to get runtime sync for PCIe dev: %d\n",
> +			ret);
> +		pm_runtime_disable(dev);
> +		return ret;
> +	}
> +
> +	tegra_pcie_init_controller(pcie);
> +
> +	pcie->link_state = tegra_pcie_dw_link_up(&pcie->pci);
> +
> +	if (!pcie->link_state) {
> +		ret = -ENOMEDIUM;
> +		goto fail_host_init;
> +	}
> +
> +	name = devm_kasprintf(dev, GFP_KERNEL, "%pOFP", dev->of_node);
> +	if (!name) {
> +		ret = -ENOMEM;
> +		goto fail_host_init;
> +	}
> +
> +	pcie->debugfs = debugfs_create_dir(name, NULL);
> +	if (!pcie->debugfs)
> +		dev_err(dev, "Failed to create debugfs\n");
> +	else
> +		init_debugfs(pcie);
> +
> +	return ret;
> +
> +fail_host_init:
> +	tegra_pcie_deinit_controller(pcie);
> +	pm_runtime_put_sync(dev);
> +	pm_runtime_disable(dev);
> +	return ret;
> +}
> +
> +static const struct tegra_pcie_soc tegra_pcie_rc_of_data = {
> +	.mode = DW_PCIE_RC_TYPE,
> +};
> +
> +static const struct of_device_id tegra_pcie_dw_of_match[] = {
> +	{
> +		.compatible = "nvidia,tegra194-pcie",
> +		.data = &tegra_pcie_rc_of_data,
> +	},
> +	{},
> +};
> +MODULE_DEVICE_TABLE(of, tegra_pcie_dw_of_match);
> +
> +static int tegra_pcie_dw_probe(struct platform_device *pdev)
> +{
> +	const struct tegra_pcie_soc *data;
> +	struct device *dev = &pdev->dev;
> +	struct resource *atu_dma_res;
> +	struct tegra_pcie_dw *pcie;
> +	struct resource *dbi_res;
> +	struct pcie_port *pp;
> +	struct dw_pcie *pci;
> +	struct phy **phys;
> +	char *name;
> +	int ret;
> +	u32 i;
> +
> +	pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL);
> +	if (!pcie)
> +		return -ENOMEM;
> +
> +	pci = &pcie->pci;
> +	pci->dev = &pdev->dev;
> +	pci->ops = &tegra_dw_pcie_ops;
> +	pp = &pci->pp;
> +	pcie->dev = &pdev->dev;
> +
> +	data = (struct tegra_pcie_soc *)of_device_get_match_data(dev);
> +	if (!data)
> +		return -EINVAL;
> +	pcie->mode = (enum dw_pcie_device_mode)data->mode;
> +
> +	ret = tegra_pcie_dw_parse_dt(pcie);
> +	if (ret < 0) {
> +		dev_err(dev, "Failed to parse device tree: %d\n", ret);
> +		return ret;
> +	}
> +
> +	pcie->pex_ctl_supply = devm_regulator_get(dev, "vddio-pex-ctl");
> +	if (IS_ERR(pcie->pex_ctl_supply)) {
> +		dev_err(dev, "Failed to get regulator: %ld\n",
> +			PTR_ERR(pcie->pex_ctl_supply));
> +		return PTR_ERR(pcie->pex_ctl_supply);
> +	}
> +
> +	pcie->core_clk = devm_clk_get(dev, "core");
> +	if (IS_ERR(pcie->core_clk)) {
> +		dev_err(dev, "Failed to get core clock: %ld\n",
> +			PTR_ERR(pcie->core_clk));
> +		return PTR_ERR(pcie->core_clk);
> +	}
> +
> +	pcie->appl_res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
> +						      "appl");
> +	if (!pcie->appl_res) {
> +		dev_err(dev, "Failed to find \"appl\" region\n");
> +		return PTR_ERR(pcie->appl_res);
> +	}
> +	pcie->appl_base = devm_ioremap_resource(dev, pcie->appl_res);
> +	if (IS_ERR(pcie->appl_base))
> +		return PTR_ERR(pcie->appl_base);
> +
> +	pcie->core_apb_rst = devm_reset_control_get(dev, "apb");
> +	if (IS_ERR(pcie->core_apb_rst)) {
> +		dev_err(dev, "Failed to get APB reset: %ld\n",
> +			PTR_ERR(pcie->core_apb_rst));
> +		return PTR_ERR(pcie->core_apb_rst);
> +	}
> +
> +	phys = devm_kcalloc(dev, pcie->phy_count, sizeof(*phys), GFP_KERNEL);
> +	if (!phys)
> +		return PTR_ERR(phys);
> +
> +	for (i = 0; i < pcie->phy_count; i++) {
> +		name = kasprintf(GFP_KERNEL, "p2u-%u", i);
> +		if (!name) {
> +			dev_err(dev, "Failed to create P2U string\n");
> +			return -ENOMEM;
> +		}
> +		phys[i] = devm_phy_get(dev, name);
> +		kfree(name);
> +		if (IS_ERR(phys[i])) {
> +			ret = PTR_ERR(phys[i]);
> +			dev_err(dev, "Failed to get PHY: %d\n", ret);
> +			return ret;
> +		}
> +	}
> +
> +	pcie->phys = phys;
> +
> +	dbi_res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi");
> +	if (!dbi_res) {
> +		dev_err(dev, "Failed to find \"dbi\" region\n");
> +		return PTR_ERR(dbi_res);
> +	}
> +	pcie->dbi_res = dbi_res;
> +
> +	pci->dbi_base = devm_ioremap_resource(dev, dbi_res);
> +	if (IS_ERR(pci->dbi_base))
> +		return PTR_ERR(pci->dbi_base);
> +
> +	/* Tegra HW locates DBI2 at a fixed offset from DBI */
> +	pci->dbi_base2 = pci->dbi_base + 0x1000;
> +
> +	atu_dma_res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
> +						   "atu_dma");
> +	if (!atu_dma_res) {
> +		dev_err(dev, "Failed to find \"atu_dma\" region\n");
> +		return PTR_ERR(atu_dma_res);
> +	}
> +	pcie->atu_dma_res = atu_dma_res;
> +	pci->atu_base = devm_ioremap_resource(dev, atu_dma_res);
> +	if (IS_ERR(pci->atu_base))
> +		return PTR_ERR(pci->atu_base);
> +
> +	pcie->core_rst = devm_reset_control_get(dev, "core");
> +	if (IS_ERR(pcie->core_rst)) {
> +		dev_err(dev, "Failed to get core reset: %ld\n",
> +			PTR_ERR(pcie->core_rst));
> +		return PTR_ERR(pcie->core_rst);
> +	}
> +
> +	pp->irq = platform_get_irq_byname(pdev, "intr");
> +	if (!pp->irq) {
> +		dev_err(dev, "Failed to get \"intr\" interrupt\n");
> +		return -ENODEV;
> +	}
> +
> +	ret = devm_request_irq(dev, pp->irq, tegra_pcie_irq_handler,
> +			       IRQF_SHARED, "tegra-pcie-intr", pcie);
> +	if (ret) {
> +		dev_err(dev, "Failed to request IRQ %d: %d\n", pp->irq, ret);
> +		return ret;
> +	}
> +
> +	pcie->bpmp = tegra_bpmp_get(dev);
> +	if (IS_ERR(pcie->bpmp))
> +		return PTR_ERR(pcie->bpmp);
> +
> +	platform_set_drvdata(pdev, pcie);
> +
> +	if (pcie->mode == DW_PCIE_RC_TYPE) {
> +		ret = tegra_pcie_config_rp(pcie);
> +		if (ret && ret != -ENOMEDIUM)
> +			goto fail;
> +		else
> +			return 0;

So if the link is not up we still go ahead and make probe
succeed. What for ?

> +	}
> +
> +fail:
> +	tegra_bpmp_put(pcie->bpmp);
> +	return ret;
> +}
> +
> +static int tegra_pcie_dw_remove(struct platform_device *pdev)
> +{
> +	struct tegra_pcie_dw *pcie = platform_get_drvdata(pdev);
> +
> +	if (pcie->mode != DW_PCIE_RC_TYPE)
> +		return 0;
> +
> +	if (!pcie->link_state)
> +		return 0;
> +
> +	debugfs_remove_recursive(pcie->debugfs);
> +	tegra_pcie_deinit_controller(pcie);
> +	pm_runtime_put_sync(pcie->dev);
> +	pm_runtime_disable(pcie->dev);
> +	tegra_bpmp_put(pcie->bpmp);
> +
> +	return 0;
> +}
> +
> +static int tegra_pcie_dw_suspend_late(struct device *dev)
> +{
> +	struct tegra_pcie_dw *pcie = dev_get_drvdata(dev);
> +	u32 val;
> +
> +	if (!pcie->link_state)
> +		return 0;
> +
> +	/* Enable HW_HOT_RST mode */
> +	val = appl_readl(pcie, APPL_CTRL);
> +	val &= ~(APPL_CTRL_HW_HOT_RST_MODE_MASK <<
> +		 APPL_CTRL_HW_HOT_RST_MODE_SHIFT);
> +	val |= APPL_CTRL_HW_HOT_RST_EN;
> +	appl_writel(pcie, val, APPL_CTRL);
> +
> +	return 0;
> +}
> +
> +static int tegra_pcie_dw_suspend_noirq(struct device *dev)
> +{
> +	struct tegra_pcie_dw *pcie = dev_get_drvdata(dev);
> +
> +	if (!pcie->link_state)
> +		return 0;
> +
> +	/* Save MSI interrupt vector */
> +	pcie->msi_ctrl_int = dw_pcie_readl_dbi(&pcie->pci,
> +					       PORT_LOGIC_MSI_CTRL_INT_0_EN);
> +	tegra_pcie_downstream_dev_to_D0(pcie);

I think this requires some comments. AFAIU this is allowed by
the PCI specs (PCI Express Base 4.0 r1.0 September 29-2017,
5.2 Link State Power Management). However, I would like to
understand how this plays with the D state the devices are left
in upon system suspend.

"As the following example illustrates, it is also possible to remove
power without first placing all Functions into D3Hot".

I assume that's what happens on this platform to allow L2 entry but
again, this needs clarification.

Lorenzo

> +	tegra_pcie_dw_pme_turnoff(pcie);
> +	return __deinit_controller(pcie);
> +}
> +
> +static int tegra_pcie_dw_resume_noirq(struct device *dev)
> +{
> +	struct tegra_pcie_dw *pcie = dev_get_drvdata(dev);
> +	int ret;
> +
> +	if (!pcie->link_state)
> +		return 0;
> +
> +	ret = tegra_pcie_config_controller(pcie, true);
> +	if (ret < 0)
> +		return ret;
> +
> +	ret = tegra_pcie_dw_host_init(&pcie->pci.pp);
> +	if (ret < 0) {
> +		dev_err(dev, "Failed to init host: %d\n", ret);
> +		goto fail_host_init;
> +	}
> +
> +	/* Restore MSI interrupt vector */
> +	dw_pcie_writel_dbi(&pcie->pci, PORT_LOGIC_MSI_CTRL_INT_0_EN,
> +			   pcie->msi_ctrl_int);
> +
> +	return 0;
> +fail_host_init:
> +	return __deinit_controller(pcie);
> +}
> +
> +static int tegra_pcie_dw_resume_early(struct device *dev)
> +{
> +	struct tegra_pcie_dw *pcie = dev_get_drvdata(dev);
> +	u32 val;
> +
> +	if (!pcie->link_state)
> +		return 0;
> +
> +	/* Disable HW_HOT_RST mode */
> +	val = appl_readl(pcie, APPL_CTRL);
> +	val &= ~(APPL_CTRL_HW_HOT_RST_MODE_MASK <<
> +		 APPL_CTRL_HW_HOT_RST_MODE_SHIFT);
> +	val |= APPL_CTRL_HW_HOT_RST_MODE_IMDT_RST <<
> +	       APPL_CTRL_HW_HOT_RST_MODE_SHIFT;
> +	val &= ~APPL_CTRL_HW_HOT_RST_EN;
> +	appl_writel(pcie, val, APPL_CTRL);
> +
> +	return 0;
> +}
> +
> +static void tegra_pcie_dw_shutdown(struct platform_device *pdev)
> +{
> +	struct tegra_pcie_dw *pcie = platform_get_drvdata(pdev);
> +
> +	if (pcie->mode != DW_PCIE_RC_TYPE)
> +		return;
> +
> +	if (!pcie->link_state)
> +		return;
> +
> +	debugfs_remove_recursive(pcie->debugfs);
> +	tegra_pcie_downstream_dev_to_D0(pcie);
> +
> +	disable_irq(pcie->pci.pp.irq);
> +	if (IS_ENABLED(CONFIG_PCI_MSI))
> +		disable_irq(pcie->pci.pp.msi_irq);
> +
> +	tegra_pcie_dw_pme_turnoff(pcie);
> +	__deinit_controller(pcie);
> +}
> +
> +static const struct dev_pm_ops tegra_pcie_dw_pm_ops = {
> +	.suspend_late = tegra_pcie_dw_suspend_late,
> +	.suspend_noirq = tegra_pcie_dw_suspend_noirq,
> +	.resume_noirq = tegra_pcie_dw_resume_noirq,
> +	.resume_early = tegra_pcie_dw_resume_early,
> +};
> +
> +static struct platform_driver tegra_pcie_dw_driver = {
> +	.probe = tegra_pcie_dw_probe,
> +	.remove = tegra_pcie_dw_remove,
> +	.shutdown = tegra_pcie_dw_shutdown,
> +	.driver = {
> +		.name	= "tegra194-pcie",
> +		.pm = &tegra_pcie_dw_pm_ops,
> +		.of_match_table = tegra_pcie_dw_of_match,
> +	},
> +};
> +module_platform_driver(tegra_pcie_dw_driver);
> +
> +MODULE_AUTHOR("Vidya Sagar <vidyas@nvidia.com>");
> +MODULE_DESCRIPTION("NVIDIA PCIe host controller driver");
> +MODULE_LICENSE("GPL v2");
> -- 
> 2.17.1
> 

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH V13 12/12] PCI: tegra: Add Tegra194 PCIe support
  2019-07-11 12:54   ` Lorenzo Pieralisi
@ 2019-07-12 15:32     ` Vidya Sagar
  2019-07-12 16:07       ` Lorenzo Pieralisi
  0 siblings, 1 reply; 34+ messages in thread
From: Vidya Sagar @ 2019-07-12 15:32 UTC (permalink / raw)
  To: Lorenzo Pieralisi
  Cc: bhelgaas, robh+dt, mark.rutland, thierry.reding, jonathanh,
	kishon, catalin.marinas, will.deacon, jingoohan1,
	gustavo.pimentel, digetx, mperttunen, linux-pci, devicetree,
	linux-tegra, linux-kernel, linux-arm-kernel, kthota, mmaddireddy,
	sagar.tv

On 7/11/2019 6:24 PM, Lorenzo Pieralisi wrote:
> On Wed, Jul 10, 2019 at 11:52:12AM +0530, Vidya Sagar wrote:
> 
> [...]
> 
>> diff --git a/drivers/pci/controller/dwc/pcie-tegra194.c b/drivers/pci/controller/dwc/pcie-tegra194.c
>> new file mode 100644
>> index 000000000000..189779edd976
>> --- /dev/null
>> +++ b/drivers/pci/controller/dwc/pcie-tegra194.c
>> @@ -0,0 +1,1632 @@
>> +// SPDX-License-Identifier: GPL-2.0+
>> +/*
>> + * PCIe host controller driver for Tegra194 SoC
>> + *
>> + * Copyright (C) 2019 NVIDIA Corporation.
>> + *
>> + * Author: Vidya Sagar <vidyas@nvidia.com>
>> + */
>> +
>> +#include <linux/clk.h>
>> +#include <linux/debugfs.h>
>> +#include <linux/delay.h>
>> +#include <linux/gpio.h>
>> +#include <linux/interrupt.h>
>> +#include <linux/iopoll.h>
>> +#include <linux/kernel.h>
>> +#include <linux/kfifo.h>
>> +#include <linux/kthread.h>
> 
> What do you need these two headers for ?
These were added for a different reason. They are not required
and I'll remove them.

> 
> [...]
> 
>> +struct tegra_pcie_dw {
>> +	struct device *dev;
>> +	struct resource *appl_res;
>> +	struct resource *dbi_res;
>> +	struct resource *atu_dma_res;
>> +	void __iomem *appl_base;
>> +	struct clk *core_clk;
>> +	struct reset_control *core_apb_rst;
>> +	struct reset_control *core_rst;
>> +	struct dw_pcie pci;
>> +	enum dw_pcie_device_mode mode;
>> +	struct tegra_bpmp *bpmp;
>> +
>> +	bool supports_clkreq;
>> +	bool enable_cdm_check;
>> +	u8 init_link_width;
>> +	bool link_state;
>> +	u32 msi_ctrl_int;
>> +	u32 num_lanes;
>> +	u32 max_speed;
>> +	u32 cid;
>> +	bool update_fc_fixup;
>> +	u32 cfg_link_cap_l1sub;
>> +	u32 pcie_cap_base;
>> +	u32 aspm_cmrt;
>> +	u32 aspm_pwr_on_t;
>> +	u32 aspm_l0s_enter_lat;o
> 
> Nit: You should try to group variables according to their usage,
> it would make sense to group booleans together.
Ok.

> 
>> +	struct regulator *pex_ctl_supply;
>> +
>> +	unsigned int phy_count;
>> +	struct phy **phys;
>> +
>> +	struct dentry *debugfs;
>> +};
>> +
>> +static inline struct tegra_pcie_dw *to_tegra_pcie(struct dw_pcie *pci)
>> +{
>> +	return container_of(pci, struct tegra_pcie_dw, pci);
>> +}
>> +
>> +static inline void appl_writel(struct tegra_pcie_dw *pcie, const u32 value,
>> +			       const u32 reg)
>> +{
>> +	writel_relaxed(value, pcie->appl_base + reg);
>> +}
>> +
>> +static inline u32 appl_readl(struct tegra_pcie_dw *pcie, const u32 reg)
>> +{
>> +	return readl_relaxed(pcie->appl_base + reg);
>> +}
>> +
>> +struct tegra_pcie_soc {
>> +	enum dw_pcie_device_mode mode;
>> +};
>> +
>> +static void apply_bad_link_workaround(struct pcie_port *pp)
>> +{
>> +	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
>> +	struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);
>> +	u32 current_link_width;
>> +	u16 val;
>> +
>> +	/*
>> +	 * NOTE:- Since this scenario is uncommon and link as such is not
>> +	 * stable anyway, not waiting to confirm if link is really
>> +	 * transitioning to Gen-2 speed
>> +	 */
>> +	val = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKSTA);
>> +	if (val & PCI_EXP_LNKSTA_LBMS) {
>> +		current_link_width = (val & PCI_EXP_LNKSTA_NLW) >>
>> +				     PCI_EXP_LNKSTA_NLW_SHIFT;
>> +		if (pcie->init_link_width > current_link_width) {
>> +			dev_warn(pci->dev, "PCIe link is bad, width reduced\n");
>> +			val = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base +
>> +						PCI_EXP_LNKCTL2);
>> +			val &= ~PCI_EXP_LNKCTL2_TLS;
>> +			val |= PCI_EXP_LNKCTL2_TLS_2_5GT;
>> +			dw_pcie_writew_dbi(pci, pcie->pcie_cap_base +
>> +					   PCI_EXP_LNKCTL2, val);
>> +
>> +			val = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base +
>> +						PCI_EXP_LNKCTL);
>> +			val |= PCI_EXP_LNKCTL_RL;
>> +			dw_pcie_writew_dbi(pci, pcie->pcie_cap_base +
>> +					   PCI_EXP_LNKCTL, val);
>> +		}
>> +	}
>> +}
>> +
>> +static irqreturn_t tegra_pcie_rp_irq_handler(struct tegra_pcie_dw *pcie)
>> +{
>> +	struct dw_pcie *pci = &pcie->pci;
>> +	struct pcie_port *pp = &pci->pp;
>> +	u32 val, tmp;
>> +	u16 val_w;
>> +
>> +	val = appl_readl(pcie, APPL_INTR_STATUS_L0);
>> +	if (val & APPL_INTR_STATUS_L0_LINK_STATE_INT) {
>> +		val = appl_readl(pcie, APPL_INTR_STATUS_L1_0_0);
>> +		if (val & APPL_INTR_STATUS_L1_0_0_LINK_REQ_RST_NOT_CHGED) {
>> +			appl_writel(pcie, val, APPL_INTR_STATUS_L1_0_0);
>> +
>> +			/* SBR & Surprise Link Down WAR */
>> +			val = appl_readl(pcie, APPL_CAR_RESET_OVRD);
>> +			val &= ~APPL_CAR_RESET_OVRD_CYA_OVERRIDE_CORE_RST_N;
>> +			appl_writel(pcie, val, APPL_CAR_RESET_OVRD);
>> +			udelay(1);
>> +			val = appl_readl(pcie, APPL_CAR_RESET_OVRD);
>> +			val |= APPL_CAR_RESET_OVRD_CYA_OVERRIDE_CORE_RST_N;
>> +			appl_writel(pcie, val, APPL_CAR_RESET_OVRD);
>> +
>> +			val = dw_pcie_readl_dbi(pci, PORT_LOGIC_GEN2_CTRL);
>> +			val |= PORT_LOGIC_GEN2_CTRL_DIRECT_SPEED_CHANGE;
>> +			dw_pcie_writel_dbi(pci, PORT_LOGIC_GEN2_CTRL, val);
>> +		}
>> +	}
>> +
>> +	if (val & APPL_INTR_STATUS_L0_INT_INT) {
>> +		val = appl_readl(pcie, APPL_INTR_STATUS_L1_8_0);
>> +		if (val & APPL_INTR_STATUS_L1_8_0_AUTO_BW_INT_STS) {
>> +			appl_writel(pcie,
>> +				    APPL_INTR_STATUS_L1_8_0_AUTO_BW_INT_STS,
>> +				    APPL_INTR_STATUS_L1_8_0);
>> +			apply_bad_link_workaround(pp);
>> +		}
>> +		if (val & APPL_INTR_STATUS_L1_8_0_BW_MGT_INT_STS) {
>> +			appl_writel(pcie,
>> +				    APPL_INTR_STATUS_L1_8_0_BW_MGT_INT_STS,
>> +				    APPL_INTR_STATUS_L1_8_0);
>> +
>> +			val_w = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base +
>> +						  PCI_EXP_LNKSTA);
>> +			dev_dbg(pci->dev, "Link Speed : Gen-%u\n", val_w &
>> +				PCI_EXP_LNKSTA_CLS);
>> +		}
>> +	}
>> +
>> +	val = appl_readl(pcie, APPL_INTR_STATUS_L0);
>> +	if (val & APPL_INTR_STATUS_L0_CDM_REG_CHK_INT) {
>> +		val = appl_readl(pcie, APPL_INTR_STATUS_L1_18);
>> +		tmp = dw_pcie_readl_dbi(pci, PCIE_PL_CHK_REG_CONTROL_STATUS);
>> +		if (val & APPL_INTR_STATUS_L1_18_CDM_REG_CHK_CMPLT) {
>> +			dev_info(pci->dev, "CDM check complete\n");
>> +			tmp |= PCIE_PL_CHK_REG_CHK_REG_COMPLETE;
>> +		}
>> +		if (val & APPL_INTR_STATUS_L1_18_CDM_REG_CHK_CMP_ERR) {
>> +			dev_err(pci->dev, "CDM comparison mismatch\n");
>> +			tmp |= PCIE_PL_CHK_REG_CHK_REG_COMPARISON_ERROR;
>> +		}
>> +		if (val & APPL_INTR_STATUS_L1_18_CDM_REG_CHK_LOGIC_ERR) {
>> +			dev_err(pci->dev, "CDM Logic error\n");
>> +			tmp |= PCIE_PL_CHK_REG_CHK_REG_LOGIC_ERROR;
>> +		}
>> +		dw_pcie_writel_dbi(pci, PCIE_PL_CHK_REG_CONTROL_STATUS, tmp);
>> +		tmp = dw_pcie_readl_dbi(pci, PCIE_PL_CHK_REG_ERR_ADDR);
>> +		dev_err(pci->dev, "CDM Error Address Offset = 0x%08X\n", tmp);
>> +	}
>> +
>> +	return IRQ_HANDLED;
>> +}
>> +
>> +static irqreturn_t tegra_pcie_irq_handler(int irq, void *arg)
>> +{
>> +	struct tegra_pcie_dw *pcie = arg;
>> +
>> +	if (pcie->mode == DW_PCIE_RC_TYPE)
>> +		return tegra_pcie_rp_irq_handler(pcie);
> 
> What's the point of registering the handler if mode != DW_PCIE_RC_TYPE ?
Currently this driver supports only root port mode but we have a plan to add
support for endpoint mode (as Tegra194 as dual mode controllers) also in future
and when that happens, we'll have a corresponding tegra_pcie_ep_irq_handler()
to take care of ep specific interrupts.

> 
>> +	return IRQ_NONE;
>> +}
>> +
>> +static int tegra_pcie_dw_rd_own_conf(struct pcie_port *pp, int where, int size,
>> +				     u32 *val)
>> +{
>> +	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
>> +
>> +	/*
>> +	 * This is an endpoint mode specific register happen to appear even
>> +	 * when controller is operating in root port mode and system hangs
>> +	 * when it is accessed with link being in ASPM-L1 state.
>> +	 * So skip accessing it altogether
>> +	 */
>> +	if (where == PORT_LOGIC_MSIX_DOORBELL) {
>> +		*val = 0x00000000;
>> +		return PCIBIOS_SUCCESSFUL;
>> +	}
>> +
>> +	return dw_pcie_read(pci->dbi_base + where, size, val);
>> +}
>> +
>> +static int tegra_pcie_dw_wr_own_conf(struct pcie_port *pp, int where, int size,
>> +				     u32 val)
>> +{
>> +	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
>> +
>> +	/*
>> +	 * This is an endpoint mode specific register happen to appear even
>> +	 * when controller is operating in root port mode and system hangs
>> +	 * when it is accessed with link being in ASPM-L1 state.
>> +	 * So skip accessing it altogether
>> +	 */
>> +	if (where == PORT_LOGIC_MSIX_DOORBELL)
>> +		return PCIBIOS_SUCCESSFUL;
>> +
>> +	return dw_pcie_write(pci->dbi_base + where, size, val);
>> +}
>> +
>> +#if defined(CONFIG_PCIEASPM)
>> +static void disable_aspm_l11(struct tegra_pcie_dw *pcie)
>> +{
>> +	u32 val;
>> +
>> +	val = dw_pcie_readl_dbi(&pcie->pci, pcie->cfg_link_cap_l1sub);
>> +	val &= ~PCI_L1SS_CAP_ASPM_L1_1;
>> +	dw_pcie_writel_dbi(&pcie->pci, pcie->cfg_link_cap_l1sub, val);
>> +}
>> +
>> +static void disable_aspm_l12(struct tegra_pcie_dw *pcie)
>> +{
>> +	u32 val;
>> +
>> +	val = dw_pcie_readl_dbi(&pcie->pci, pcie->cfg_link_cap_l1sub);
>> +	val &= ~PCI_L1SS_CAP_ASPM_L1_2;
>> +	dw_pcie_writel_dbi(&pcie->pci, pcie->cfg_link_cap_l1sub, val);
>> +}
>> +
>> +static inline u32 event_counter_prog(struct tegra_pcie_dw *pcie, u32 event)
>> +{
>> +	u32 val;
>> +
>> +	val = dw_pcie_readl_dbi(&pcie->pci, event_cntr_ctrl_offset[pcie->cid]);
>> +	val &= ~(EVENT_COUNTER_EVENT_SEL_MASK << EVENT_COUNTER_EVENT_SEL_SHIFT);
>> +	val |= EVENT_COUNTER_GROUP_5 << EVENT_COUNTER_GROUP_SEL_SHIFT;
>> +	val |= event << EVENT_COUNTER_EVENT_SEL_SHIFT;
>> +	val |= EVENT_COUNTER_ENABLE_ALL << EVENT_COUNTER_ENABLE_SHIFT;
>> +	dw_pcie_writel_dbi(&pcie->pci, event_cntr_ctrl_offset[pcie->cid], val);
>> +	val = dw_pcie_readl_dbi(&pcie->pci, event_cntr_data_offset[pcie->cid]);
>> +	return val;
>> +}
>> +
>> +static int aspm_state_cnt(struct seq_file *s, void *data)
>> +{
>> +	struct tegra_pcie_dw *pcie = (struct tegra_pcie_dw *)
>> +				     dev_get_drvdata(s->private);
>> +	u32 val;
>> +
>> +	seq_printf(s, "Tx L0s entry count : %u\n",
>> +		   event_counter_prog(pcie, EVENT_COUNTER_EVENT_Tx_L0S));
>> +
>> +	seq_printf(s, "Rx L0s entry count : %u\n",
>> +		   event_counter_prog(pcie, EVENT_COUNTER_EVENT_Rx_L0S));
>> +
>> +	seq_printf(s, "Link L1 entry count : %u\n",
>> +		   event_counter_prog(pcie, EVENT_COUNTER_EVENT_L1));
>> +
>> +	seq_printf(s, "Link L1.1 entry count : %u\n",
>> +		   event_counter_prog(pcie, EVENT_COUNTER_EVENT_L1_1));
>> +
>> +	seq_printf(s, "Link L1.2 entry count : %u\n",
>> +		   event_counter_prog(pcie, EVENT_COUNTER_EVENT_L1_2));
>> +
>> +	/* Clear all counters */
>> +	dw_pcie_writel_dbi(&pcie->pci, event_cntr_ctrl_offset[pcie->cid],
>> +			   EVENT_COUNTER_ALL_CLEAR);
>> +
>> +	/* Re-enable counting */
>> +	val = EVENT_COUNTER_ENABLE_ALL << EVENT_COUNTER_ENABLE_SHIFT;
>> +	val |= EVENT_COUNTER_GROUP_5 << EVENT_COUNTER_GROUP_SEL_SHIFT;
>> +	dw_pcie_writel_dbi(&pcie->pci, event_cntr_ctrl_offset[pcie->cid], val);
>> +
>> +	return 0;
>> +}
>> +#endif
>> +
>> +static int init_debugfs(struct tegra_pcie_dw *pcie)
>> +{
>> +#if defined(CONFIG_PCIEASPM)
>> +	struct dentry *d;
>> +
>> +	d = debugfs_create_devm_seqfile(pcie->dev, "aspm_state_cnt",
>> +					pcie->debugfs, aspm_state_cnt);
>> +	if (IS_ERR_OR_NULL(d))
>> +		dev_err(pcie->dev,
>> +			"Failed to create debugfs file \"aspm_state_cnt\"\n");
>> +#endif
>> +	return 0;
>> +}
>> +
>> +static void tegra_pcie_enable_system_interrupts(struct pcie_port *pp)
>> +{
>> +	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
>> +	struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);
>> +	u32 val;
>> +	u16 val_w;
>> +
>> +	val = appl_readl(pcie, APPL_INTR_EN_L0_0);
>> +	val |= APPL_INTR_EN_L0_0_LINK_STATE_INT_EN;
>> +	appl_writel(pcie, val, APPL_INTR_EN_L0_0);
>> +
>> +	val = appl_readl(pcie, APPL_INTR_EN_L1_0_0);
>> +	val |= APPL_INTR_EN_L1_0_0_LINK_REQ_RST_NOT_INT_EN;
>> +	appl_writel(pcie, val, APPL_INTR_EN_L1_0_0);
>> +
>> +	if (pcie->enable_cdm_check) {
>> +		val = appl_readl(pcie, APPL_INTR_EN_L0_0);
>> +		val |= APPL_INTR_EN_L0_0_CDM_REG_CHK_INT_EN;
>> +		appl_writel(pcie, val, APPL_INTR_EN_L0_0);
>> +
>> +		val = appl_readl(pcie, APPL_INTR_EN_L1_18);
>> +		val |= APPL_INTR_EN_L1_18_CDM_REG_CHK_CMP_ERR;
>> +		val |= APPL_INTR_EN_L1_18_CDM_REG_CHK_LOGIC_ERR;
>> +		appl_writel(pcie, val, APPL_INTR_EN_L1_18);
>> +	}
>> +
>> +	val_w = dw_pcie_readw_dbi(&pcie->pci, pcie->pcie_cap_base +
>> +				  PCI_EXP_LNKSTA);
>> +	pcie->init_link_width = (val_w & PCI_EXP_LNKSTA_NLW) >>
>> +				PCI_EXP_LNKSTA_NLW_SHIFT;
>> +
>> +	val_w = dw_pcie_readw_dbi(&pcie->pci, pcie->pcie_cap_base +
>> +				  PCI_EXP_LNKCTL);
>> +	val_w |= PCI_EXP_LNKCTL_LBMIE;
>> +	dw_pcie_writew_dbi(&pcie->pci, pcie->pcie_cap_base + PCI_EXP_LNKCTL,
>> +			   val_w);
>> +}
>> +
>> +static void tegra_pcie_enable_legacy_interrupts(struct pcie_port *pp)
>> +{
>> +	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
>> +	struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);
>> +	u32 val;
>> +
>> +	/* Enable legacy interrupt generation */
>> +	val = appl_readl(pcie, APPL_INTR_EN_L0_0);
>> +	val |= APPL_INTR_EN_L0_0_SYS_INTR_EN;
>> +	val |= APPL_INTR_EN_L0_0_INT_INT_EN;
>> +	appl_writel(pcie, val, APPL_INTR_EN_L0_0);
>> +
>> +	val = appl_readl(pcie, APPL_INTR_EN_L1_8_0);
>> +	val |= APPL_INTR_EN_L1_8_INTX_EN;
>> +	val |= APPL_INTR_EN_L1_8_AUTO_BW_INT_EN;
>> +	val |= APPL_INTR_EN_L1_8_BW_MGT_INT_EN;
>> +	if (IS_ENABLED(CONFIG_PCIEAER))
>> +		val |= APPL_INTR_EN_L1_8_AER_INT_EN;
>> +	appl_writel(pcie, val, APPL_INTR_EN_L1_8_0);
>> +}
>> +
>> +static void tegra_pcie_enable_msi_interrupts(struct pcie_port *pp)
>> +{
>> +	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
>> +	struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);
>> +	u32 val;
>> +
>> +	dw_pcie_msi_init(pp);
>> +
>> +	/* Enable MSI interrupt generation */
>> +	val = appl_readl(pcie, APPL_INTR_EN_L0_0);
>> +	val |= APPL_INTR_EN_L0_0_SYS_MSI_INTR_EN;
>> +	val |= APPL_INTR_EN_L0_0_MSI_RCV_INT_EN;
>> +	appl_writel(pcie, val, APPL_INTR_EN_L0_0);
>> +}
>> +
>> +static void tegra_pcie_enable_interrupts(struct pcie_port *pp)
>> +{
>> +	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
>> +	struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);
>> +
>> +	/* Clear interrupt statuses before enabling interrupts */
>> +	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L0);
>> +	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_0_0);
>> +	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_1);
>> +	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_2);
>> +	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_3);
>> +	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_6);
>> +	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_7);
>> +	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_8_0);
>> +	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_9);
>> +	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_10);
>> +	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_11);
>> +	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_13);
>> +	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_14);
>> +	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_15);
>> +	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_17);
>> +
>> +	tegra_pcie_enable_system_interrupts(pp);
>> +	tegra_pcie_enable_legacy_interrupts(pp);
>> +	if (IS_ENABLED(CONFIG_PCI_MSI))
>> +		tegra_pcie_enable_msi_interrupts(pp);
>> +}
>> +
>> +static void config_gen3_gen4_eq_presets(struct tegra_pcie_dw *pcie)
>> +{
>> +	struct dw_pcie *pci = &pcie->pci;
>> +	u32 val, offset, i;
>> +
>> +	/* Program init preset */
>> +	for (i = 0; i < pcie->num_lanes; i++) {
>> +		dw_pcie_read(pci->dbi_base + CAP_SPCIE_CAP_OFF
>> +				 + (i * 2), 2, &val);
>> +		val &= ~CAP_SPCIE_CAP_OFF_DSP_TX_PRESET0_MASK;
>> +		val |= GEN3_GEN4_EQ_PRESET_INIT;
>> +		val &= ~CAP_SPCIE_CAP_OFF_USP_TX_PRESET0_MASK;
>> +		val |= (GEN3_GEN4_EQ_PRESET_INIT <<
>> +			   CAP_SPCIE_CAP_OFF_USP_TX_PRESET0_SHIFT);
>> +		dw_pcie_write(pci->dbi_base + CAP_SPCIE_CAP_OFF
>> +				 + (i * 2), 2, val);
>> +
>> +		offset = dw_pcie_find_ext_capability(pci,
>> +						     PCI_EXT_CAP_ID_PL_16GT) +
>> +				PCI_PL_16GT_LE_CTRL;
>> +		dw_pcie_read(pci->dbi_base + offset + i, 1, &val);
>> +		val &= ~PCI_PL_16GT_LE_CTRL_DSP_TX_PRESET_MASK;
>> +		val |= GEN3_GEN4_EQ_PRESET_INIT;
>> +		val &= ~PCI_PL_16GT_LE_CTRL_USP_TX_PRESET_MASK;
>> +		val |= (GEN3_GEN4_EQ_PRESET_INIT <<
>> +			PCI_PL_16GT_LE_CTRL_USP_TX_PRESET_SHIFT);
>> +		dw_pcie_write(pci->dbi_base + offset + i, 1, val);
>> +	}
>> +
>> +	val = dw_pcie_readl_dbi(pci, GEN3_RELATED_OFF);
>> +	val &= ~GEN3_RELATED_OFF_RATE_SHADOW_SEL_MASK;
>> +	dw_pcie_writel_dbi(pci, GEN3_RELATED_OFF, val);
>> +
>> +	val = dw_pcie_readl_dbi(pci, GEN3_EQ_CONTROL_OFF);
>> +	val &= ~GEN3_EQ_CONTROL_OFF_PSET_REQ_VEC_MASK;
>> +	val |= (0x3ff << GEN3_EQ_CONTROL_OFF_PSET_REQ_VEC_SHIFT);
>> +	val &= ~GEN3_EQ_CONTROL_OFF_FB_MODE_MASK;
>> +	dw_pcie_writel_dbi(pci, GEN3_EQ_CONTROL_OFF, val);
>> +
>> +	val = dw_pcie_readl_dbi(pci, GEN3_RELATED_OFF);
>> +	val &= ~GEN3_RELATED_OFF_RATE_SHADOW_SEL_MASK;
>> +	val |= (0x1 << GEN3_RELATED_OFF_RATE_SHADOW_SEL_SHIFT);
>> +	dw_pcie_writel_dbi(pci, GEN3_RELATED_OFF, val);
>> +
>> +	val = dw_pcie_readl_dbi(pci, GEN3_EQ_CONTROL_OFF);
>> +	val &= ~GEN3_EQ_CONTROL_OFF_PSET_REQ_VEC_MASK;
>> +	val |= (0x360 << GEN3_EQ_CONTROL_OFF_PSET_REQ_VEC_SHIFT);
>> +	val &= ~GEN3_EQ_CONTROL_OFF_FB_MODE_MASK;
>> +	dw_pcie_writel_dbi(pci, GEN3_EQ_CONTROL_OFF, val);
>> +
>> +	val = dw_pcie_readl_dbi(pci, GEN3_RELATED_OFF);
>> +	val &= ~GEN3_RELATED_OFF_RATE_SHADOW_SEL_MASK;
>> +	dw_pcie_writel_dbi(pci, GEN3_RELATED_OFF, val);
>> +}
>> +
>> +static int tegra_pcie_dw_host_init(struct pcie_port *pp)
>> +{
>> +	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
>> +	struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);
>> +	u32 val, tmp, offset, speed;
>> +	unsigned int count;
>> +	u16 val_w;
>> +
>> +core_init:
> 
> I think it would be cleaner to include all registers programming
> within a function and we remove this label (and goto) below.
Some background:
As per spec r4.0 v1.0, downstream ports that support 16.0 GT/s must
support Scaled Flow Control (sec 3.4.2) and Tegra194's downstream ports
being 16.0 GT/s capable, enable scaled flow control by having
Data Link Feature (sec 7.7.4) enabled by default. There is one
endpoint (ASMedia USB3.0 controller) that doesn't link up with root port
because of this (i.e. DLF being enabled). The way we are detecting this situation
is to check for partial linkup i.e. one of application logic registers (i.e. from
"appl" region) says link is up but the same is not reflected in configuration space
of root port. Recommendation from our hardware team in this situation is to disable
DLF in root port and try link up with endpoint again.  To achieve this, we put the
core through reset cycle, disable DLF and proceed with configuring all other registers
and check for link up.

Initially in Patch-1, I didn't have goto statement but a recursion with depth-1 (as the
above situation occurs only once). It was reviewed @ http://patchwork.ozlabs.org/patch/1065707/
and Thierry said it would be simpler to use a goto instead of calling the same
function again. So, I modified the code accordingly. Please do let me know if you strongly
feel we should call tegra_pcie_dw_host_init() instead of goto here. I'll change it.

> 
>> +	count = 200;
> 
> It would be easier to read if we could inizialize this value closer
> to where it is actually used. Actually explaining why 200 as value
> was chosen would be appreciated.
Ok.

> 
>> +#if defined(CONFIG_PCIEASPM)
>> +	offset = dw_pcie_find_ext_capability(pci, PCI_EXT_CAP_ID_L1SS);
>> +	pcie->cfg_link_cap_l1sub = offset + PCI_L1SS_CAP;
>> +#endif
> 
> Please try to factor this out within a single #ifdef guard.
Ok.

> 
>> +	val = dw_pcie_readl_dbi(pci, PCI_IO_BASE);
>> +	val &= ~(IO_BASE_IO_DECODE | IO_BASE_IO_DECODE_BIT8);
>> +	dw_pcie_writel_dbi(pci, PCI_IO_BASE, val);
>> +
>> +	val = dw_pcie_readl_dbi(pci, PCI_PREF_MEMORY_BASE);
>> +	val |= CFG_PREF_MEM_LIMIT_BASE_MEM_DECODE;
>> +	val |= CFG_PREF_MEM_LIMIT_BASE_MEM_LIMIT_DECODE;
>> +	dw_pcie_writel_dbi(pci, PCI_PREF_MEMORY_BASE, val);
>> +
>> +	dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, 0);
>> +
>> +	/* Configure FTS */
>> +	val = dw_pcie_readl_dbi(pci, PORT_LOGIC_ACK_F_ASPM_CTRL);
>> +	val &= ~(N_FTS_MASK << N_FTS_SHIFT);
>> +	val |= N_FTS_VAL << N_FTS_SHIFT;
>> +	dw_pcie_writel_dbi(pci, PORT_LOGIC_ACK_F_ASPM_CTRL, val);
>> +
>> +	val = dw_pcie_readl_dbi(pci, PORT_LOGIC_GEN2_CTRL);
>> +	val &= ~FTS_MASK;
>> +	val |= FTS_VAL;
>> +	dw_pcie_writel_dbi(pci, PORT_LOGIC_GEN2_CTRL, val);
>> +
>> +	/* Enable as 0xFFFF0001 response for CRS */
>> +	val = dw_pcie_readl_dbi(pci, PORT_LOGIC_AMBA_ERROR_RESPONSE_DEFAULT);
>> +	val &= ~(AMBA_ERROR_RESPONSE_CRS_MASK << AMBA_ERROR_RESPONSE_CRS_SHIFT);
>> +	val |= (AMBA_ERROR_RESPONSE_CRS_OKAY_FFFF0001 <<
>> +		AMBA_ERROR_RESPONSE_CRS_SHIFT);
>> +	dw_pcie_writel_dbi(pci, PORT_LOGIC_AMBA_ERROR_RESPONSE_DEFAULT, val);
>> +
>> +	/* Configure Max Speed from DT */
>> +	if (pcie->max_speed && pcie->max_speed != -EINVAL) {
>> +		val = dw_pcie_readl_dbi(pci, pcie->pcie_cap_base +
>> +					PCI_EXP_LNKCAP);
>> +		val &= ~PCI_EXP_LNKCAP_SLS;
>> +		val |= pcie->max_speed;
>> +		dw_pcie_writel_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKCAP,
>> +				   val);
>> +	}
>> +
>> +	/* Configure Max lane width from DT */
>> +	val = dw_pcie_readl_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKCAP);
>> +	val &= ~PCI_EXP_LNKCAP_MLW;
>> +	val |= (pcie->num_lanes << PCI_EXP_LNKSTA_NLW_SHIFT);
>> +	dw_pcie_writel_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKCAP, val);
>> +
>> +	config_gen3_gen4_eq_presets(pcie);
>> +
>> +#if defined(CONFIG_PCIEASPM)
>> +	/* Enable ASPM counters */
>> +	val = EVENT_COUNTER_ENABLE_ALL << EVENT_COUNTER_ENABLE_SHIFT;
>> +	val |= EVENT_COUNTER_GROUP_5 << EVENT_COUNTER_GROUP_SEL_SHIFT;
>> +	dw_pcie_writel_dbi(pci, event_cntr_ctrl_offset[pcie->cid], val);
>> +
>> +	/* Program T_cmrt and T_pwr_on values */
>> +	val = dw_pcie_readl_dbi(pci, pcie->cfg_link_cap_l1sub);
>> +	val &= ~(PCI_L1SS_CAP_CM_RESTORE_TIME | PCI_L1SS_CAP_P_PWR_ON_VALUE);
>> +	val |= (pcie->aspm_cmrt << 8);
>> +	val |= (pcie->aspm_pwr_on_t << 19);
>> +	dw_pcie_writel_dbi(pci, pcie->cfg_link_cap_l1sub, val);
>> +
>> +	/* Program L0s and L1 entrance latencies */
>> +	val = dw_pcie_readl_dbi(pci, PORT_LOGIC_ACK_F_ASPM_CTRL);
>> +	val &= ~L0S_ENTRANCE_LAT_MASK;
>> +	val |= (pcie->aspm_l0s_enter_lat << L0S_ENTRANCE_LAT_SHIFT);
>> +	val |= ENTER_ASPM;
>> +	dw_pcie_writel_dbi(pci, PORT_LOGIC_ACK_F_ASPM_CTRL, val);
>> +#endif
> 
> See above.
Ok.

> 
>> +	val = dw_pcie_readl_dbi(pci, GEN3_RELATED_OFF);
>> +	val &= ~GEN3_RELATED_OFF_GEN3_ZRXDC_NONCOMPL;
>> +	dw_pcie_writel_dbi(pci, GEN3_RELATED_OFF, val);
>> +
>> +	if (pcie->update_fc_fixup) {
>> +		val = dw_pcie_readl_dbi(pci, CFG_TIMER_CTRL_MAX_FUNC_NUM_OFF);
>> +		val |= 0x1 << CFG_TIMER_CTRL_ACK_NAK_SHIFT;
>> +		dw_pcie_writel_dbi(pci, CFG_TIMER_CTRL_MAX_FUNC_NUM_OFF, val);
>> +	}
>> +
>> +	dw_pcie_setup_rc(pp);
>> +
>> +	clk_set_rate(pcie->core_clk, GEN4_CORE_CLK_FREQ);
>> +
>> +	/* Assert RST */
>> +	val = appl_readl(pcie, APPL_PINMUX);
>> +	val &= ~APPL_PINMUX_PEX_RST;
>> +	appl_writel(pcie, val, APPL_PINMUX);
>> +
>> +	usleep_range(100, 200);
>> +
>> +	/* Enable LTSSM */
>> +	val = appl_readl(pcie, APPL_CTRL);
>> +	val |= APPL_CTRL_LTSSM_EN;
>> +	appl_writel(pcie, val, APPL_CTRL);
>> +
>> +	/* De-assert RST */
>> +	val = appl_readl(pcie, APPL_PINMUX);
>> +	val |= APPL_PINMUX_PEX_RST;
>> +	appl_writel(pcie, val, APPL_PINMUX);
>> +
>> +	msleep(100);
>> +
>> +	val_w = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKSTA);
>> +	while (!(val_w & PCI_EXP_LNKSTA_DLLLA)) {
>> +		if (count) {
>> +			dev_dbg(pci->dev, "Waiting for link up\n");
>> +			usleep_range(1000, 2000);
>> +			val_w = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base +
>> +						  PCI_EXP_LNKSTA);
>> +			count--;
>> +			continue;
>> +		}
>> +
>> +		val = appl_readl(pcie, APPL_DEBUG);
>> +		val &= APPL_DEBUG_LTSSM_STATE_MASK;
>> +		val >>= APPL_DEBUG_LTSSM_STATE_SHIFT;
>> +		tmp = appl_readl(pcie, APPL_LINK_STATUS);
>> +		tmp &= APPL_LINK_STATUS_RDLH_LINK_UP;
>> +		if (!(val == 0x11 && !tmp)) {
>> +			dev_info(pci->dev, "Link is down\n");
>> +			return 0;
>> +		}
>> +
>> +		dev_info(pci->dev, "Link is down in DLL");
>> +		dev_info(pci->dev, "Trying again with DLFE disabled\n");
>> +		/* Disable LTSSM */
>> +		val = appl_readl(pcie, APPL_CTRL);
>> +		val &= ~APPL_CTRL_LTSSM_EN;
>> +		appl_writel(pcie, val, APPL_CTRL);
>> +
>> +		reset_control_assert(pcie->core_rst);
>> +		reset_control_deassert(pcie->core_rst);
>> +
>> +		offset = dw_pcie_find_ext_capability(pci, PCI_EXT_CAP_ID_DLF);
>> +		val = dw_pcie_readl_dbi(pci, offset + PCI_DLF_CAP);
>> +		val &= ~PCI_DLF_EXCHANGE_ENABLE;
>> +		dw_pcie_writel_dbi(pci, offset, val);
>> +
>> +		/* Retry now with DLF Exchange disabled */
>> +		goto core_init;
> 
> This goto upwards honestly is a bit horrible. Use some functions
> to make the code more readable.
Updated my comments above.

> 
>> +	}
>> +	dev_dbg(pci->dev, "Link is up\n");
>> +
>> +	speed = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKSTA) &
>> +		PCI_EXP_LNKSTA_CLS;
>> +	clk_set_rate(pcie->core_clk, pcie_gen_freq[speed - 1]);
>> +
>> +	tegra_pcie_enable_interrupts(pp);
>> +
>> +	return 0;
>> +}
>> +
>> +static int tegra_pcie_dw_link_up(struct dw_pcie *pci)
>> +{
>> +	struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);
>> +	u32 val = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKSTA);
>> +
>> +	return !!(val & PCI_EXP_LNKSTA_DLLLA);
> 
> It looks like this can be common code reusable by other DWC drivers ?
> 
> Not mandatory, just a hint.
Well, each DesignWare core based implementation has a different way to find out
if the link is up and hence this hook is provided by DWC framework to let host controller
drivers implement their implementation specific link up check. In Tegra194, we are just
relying on spec defined configuration space register for link up.

> 
>> +}
>> +
>> +static void tegra_pcie_set_msi_vec_num(struct pcie_port *pp)
>> +{
>> +	pp->num_vectors = MAX_MSI_IRQS;
>> +}
>> +
>> +static const struct dw_pcie_ops tegra_dw_pcie_ops = {
>> +	.link_up = tegra_pcie_dw_link_up,
>> +};
>> +
>> +static struct dw_pcie_host_ops tegra_pcie_dw_host_ops = {
>> +	.rd_own_conf = tegra_pcie_dw_rd_own_conf,
>> +	.wr_own_conf = tegra_pcie_dw_wr_own_conf,
>> +	.host_init = tegra_pcie_dw_host_init,
>> +	.set_num_vectors = tegra_pcie_set_msi_vec_num,
>> +};
>> +
>> +static void tegra_pcie_disable_phy(struct tegra_pcie_dw *pcie)
>> +{
>> +	unsigned int phy_count = pcie->phy_count;
>> +
>> +	while (phy_count--) {
>> +		phy_power_off(pcie->phys[phy_count]);
>> +		phy_exit(pcie->phys[phy_count]);
>> +	}
>> +}
>> +
>> +static int tegra_pcie_enable_phy(struct tegra_pcie_dw *pcie)
>> +{
>> +	unsigned int i;
>> +	int ret;
>> +
>> +	for (i = 0; i < pcie->phy_count; i++) {
>> +		ret = phy_init(pcie->phys[i]);
>> +		if (ret < 0)
>> +			goto phy_power_off;
>> +
>> +		ret = phy_power_on(pcie->phys[i]);
>> +		if (ret < 0)
>> +			goto phy_exit;
>> +	}
>> +
>> +	return 0;
>> +
>> +phy_power_off:
>> +	while (i--) {
>> +		phy_power_off(pcie->phys[i]);
>> +phy_exit:
>> +		phy_exit(pcie->phys[i]);
>> +	}
>> +
>> +	return ret;
>> +}
>> +
>> +static int tegra_pcie_dw_parse_dt(struct tegra_pcie_dw *pcie)
>> +{
>> +	struct device_node *np = pcie->dev->of_node;
>> +	int ret;
>> +
>> +	ret = of_property_read_u32(np, "nvidia,aspm-cmrt-us", &pcie->aspm_cmrt);
>> +	if (ret < 0) {
>> +		dev_info(pcie->dev, "Failed to read ASPM T_cmrt: %d\n", ret);
>> +		return ret;
>> +	}
>> +
>> +	ret = of_property_read_u32(np, "nvidia,aspm-pwr-on-t-us",
>> +				   &pcie->aspm_pwr_on_t);
>> +	if (ret < 0)
>> +		dev_info(pcie->dev, "Failed to read ASPM Power On time: %d\n",
>> +			 ret);
>> +
>> +	ret = of_property_read_u32(np, "nvidia,aspm-l0s-entrance-latency-us",
>> +				   &pcie->aspm_l0s_enter_lat);
>> +	if (ret < 0)
>> +		dev_info(pcie->dev,
>> +			 "Failed to read ASPM L0s Entrance latency: %d\n", ret);
>> +
>> +	ret = of_property_read_u32(np, "num-lanes", &pcie->num_lanes);
>> +	if (ret < 0) {
>> +		dev_err(pcie->dev, "Failed to read num-lanes: %d\n", ret);
>> +		return ret;
>> +	}
>> +
>> +	pcie->max_speed = of_pci_get_max_link_speed(np);
>> +
>> +	ret = of_property_read_u32_index(np, "nvidia,bpmp", 1, &pcie->cid);
>> +	if (ret) {
>> +		dev_err(pcie->dev, "Failed to read Controller-ID: %d\n", ret);
>> +		return ret;
>> +	}
>> +
>> +	pcie->phy_count = of_property_count_strings(np, "phy-names");
>> +	if (pcie->phy_count < 0) {
>> +		dev_err(pcie->dev, "Failed to find PHY entries: %d\n",
>> +			pcie->phy_count);
>> +		return pcie->phy_count;
>> +	}
>> +
>> +	if (of_property_read_bool(np, "nvidia,update-fc-fixup"))
>> +		pcie->update_fc_fixup = true;
>> +
>> +	pcie->supports_clkreq =
>> +		of_property_read_bool(pcie->dev->of_node, "supports-clkreq");
>> +
>> +	pcie->enable_cdm_check =
>> +		of_property_read_bool(np, "snps,enable-cdm-check");
>> +
>> +	return 0;
>> +}
>> +
>> +static int tegra_pcie_bpmp_set_ctrl_state(struct tegra_pcie_dw *pcie,
>> +					  bool enable)
>> +{
>> +	struct mrq_uphy_response resp;
>> +	struct tegra_bpmp_message msg;
>> +	struct mrq_uphy_request req;
>> +	int err;
>> +
>> +	if (pcie->cid == 5)
>> +		return 0;
> 
> What's wrong with cid == 5 ? Explain please.
Controller with ID=5 doesn't need any programming to enable it which is
done here through calling firmware API.

> 
>> +	memset(&req, 0, sizeof(req));
>> +	memset(&resp, 0, sizeof(resp));
>> +
>> +	req.cmd = CMD_UPHY_PCIE_CONTROLLER_STATE;
>> +	req.controller_state.pcie_controller = pcie->cid;
>> +	req.controller_state.enable = enable;
>> +
>> +	memset(&msg, 0, sizeof(msg));
>> +	msg.mrq = MRQ_UPHY;
>> +	msg.tx.data = &req;
>> +	msg.tx.size = sizeof(req);
>> +	msg.rx.data = &resp;
>> +	msg.rx.size = sizeof(resp);
>> +
>> +	if (irqs_disabled())
> 
> Can you explain to me what this check is meant to achieve please ?
Firmware interface provides different APIs to be called when there are no
interrupts enabled in the system (noirq context) and otherwise hence
checking that situation here and calling appropriate API.

> 
>> +		err = tegra_bpmp_transfer_atomic(pcie->bpmp, &msg);
>> +	else
>> +		err = tegra_bpmp_transfer(pcie->bpmp, &msg);
>> +
>> +	return err;
>> +}
>> +
>> +static void tegra_pcie_downstream_dev_to_D0(struct tegra_pcie_dw *pcie)
>> +{
>> +	struct pcie_port *pp = &pcie->pci.pp;
>> +	struct pci_dev *pdev;
>> +	struct pci_bus *child;
>> +
>> +	list_for_each_entry(child, &pp->root_bus->children, node) {
>> +		/* Bring downstream devices to D0 if they are not already in */
>> +		if (child->parent == pp->root_bus)
>> +			break;
>> +	}
>> +	list_for_each_entry(pdev, &child->devices, bus_list) {
>> +		if (PCI_SLOT(pdev->devfn) == 0) {
>> +			if (pci_set_power_state(pdev, PCI_D0))
>> +				dev_err(pcie->dev,
>> +					"Failed to transition %s to D0 state\n",
>> +					dev_name(&pdev->dev));
>> +		}
>> +	}
>> +}
>> +
>> +static int tegra_pcie_config_controller(struct tegra_pcie_dw *pcie,
>> +					bool en_hw_hot_rst)
>> +{
>> +	int ret;
>> +	u32 val;
>> +
>> +	ret = tegra_pcie_bpmp_set_ctrl_state(pcie, true);
>> +	if (ret) {
>> +		dev_err(pcie->dev,
>> +			"Failed to enable controller %u: %d\n", pcie->cid, ret);
>> +		return ret;
>> +	}
>> +
>> +	ret = regulator_enable(pcie->pex_ctl_supply);
>> +	if (ret < 0) {
>> +		dev_err(pcie->dev, "Failed to enable regulator: %d\n", ret);
>> +		goto fail_reg_en;
>> +	}
>> +
>> +	ret = clk_prepare_enable(pcie->core_clk);
>> +	if (ret) {
>> +		dev_err(pcie->dev, "Failed to enable core clock: %d\n", ret);
>> +		goto fail_core_clk;
>> +	}
>> +
>> +	ret = reset_control_deassert(pcie->core_apb_rst);
>> +	if (ret) {
>> +		dev_err(pcie->dev, "Failed to deassert core APB reset: %d\n",
>> +			ret);
>> +		goto fail_core_apb_rst;
>> +	}
>> +
>> +	if (en_hw_hot_rst) {
>> +		/* Enable HW_HOT_RST mode */
>> +		val = appl_readl(pcie, APPL_CTRL);
>> +		val &= ~(APPL_CTRL_HW_HOT_RST_MODE_MASK <<
>> +			 APPL_CTRL_HW_HOT_RST_MODE_SHIFT);
>> +		val |= APPL_CTRL_HW_HOT_RST_EN;
>> +		appl_writel(pcie, val, APPL_CTRL);
>> +	}
>> +
>> +	ret = tegra_pcie_enable_phy(pcie);
>> +	if (ret) {
>> +		dev_err(pcie->dev, "Failed to enable PHY: %d\n", ret);
>> +		goto fail_phy;
>> +	}
>> +
>> +	/* Update CFG base address */
>> +	appl_writel(pcie, pcie->dbi_res->start & APPL_CFG_BASE_ADDR_MASK,
>> +		    APPL_CFG_BASE_ADDR);
>> +
>> +	/* Configure this core for RP mode operation */
>> +	appl_writel(pcie, APPL_DM_TYPE_RP, APPL_DM_TYPE);
>> +
>> +	appl_writel(pcie, 0x0, APPL_CFG_SLCG_OVERRIDE);
>> +
>> +	val = appl_readl(pcie, APPL_CTRL);
>> +	appl_writel(pcie, val | APPL_CTRL_SYS_PRE_DET_STATE, APPL_CTRL);
>> +
>> +	val = appl_readl(pcie, APPL_CFG_MISC);
>> +	val |= (APPL_CFG_MISC_ARCACHE_VAL << APPL_CFG_MISC_ARCACHE_SHIFT);
>> +	appl_writel(pcie, val, APPL_CFG_MISC);
>> +
>> +	if (!pcie->supports_clkreq) {
>> +		val = appl_readl(pcie, APPL_PINMUX);
>> +		val |= APPL_PINMUX_CLKREQ_OUT_OVRD_EN;
>> +		val |= APPL_PINMUX_CLKREQ_OUT_OVRD;
>> +		appl_writel(pcie, val, APPL_PINMUX);
>> +	}
>> +
>> +	/* Update iATU_DMA base address */
>> +	appl_writel(pcie,
>> +		    pcie->atu_dma_res->start & APPL_CFG_IATU_DMA_BASE_ADDR_MASK,
>> +		    APPL_CFG_IATU_DMA_BASE_ADDR);
>> +
>> +	reset_control_deassert(pcie->core_rst);
>> +
>> +	pcie->pcie_cap_base = dw_pcie_find_capability(&pcie->pci,
>> +						      PCI_CAP_ID_EXP);
>> +
>> +#if defined(CONFIG_PCIEASPM)
>> +	/* Disable ASPM-L1SS advertisement as there is no CLKREQ routing */
>> +	if (!pcie->supports_clkreq) {
>> +		disable_aspm_l11(pcie);
>> +		disable_aspm_l12(pcie);
>> +	}
>> +#endif
>> +	return ret;
>> +
>> +fail_phy:
>> +	reset_control_assert(pcie->core_apb_rst);
>> +fail_core_apb_rst:
>> +	clk_disable_unprepare(pcie->core_clk);
>> +fail_core_clk:
>> +	regulator_disable(pcie->pex_ctl_supply);
>> +fail_reg_en:
>> +	tegra_pcie_bpmp_set_ctrl_state(pcie, false);
>> +
>> +	return ret;
>> +}
>> +
>> +static int __deinit_controller(struct tegra_pcie_dw *pcie)
>> +{
>> +	int ret;
>> +
>> +	ret = reset_control_assert(pcie->core_rst);
>> +	if (ret) {
>> +		dev_err(pcie->dev, "Failed to assert \"core\" reset: %d\n",
>> +			ret);
>> +		return ret;
>> +	}
>> +	tegra_pcie_disable_phy(pcie);
>> +	ret = reset_control_assert(pcie->core_apb_rst);
>> +	if (ret) {
>> +		dev_err(pcie->dev, "Failed to assert APB reset: %d\n", ret);
>> +		return ret;
>> +	}
>> +	clk_disable_unprepare(pcie->core_clk);
>> +	ret = regulator_disable(pcie->pex_ctl_supply);
>> +	if (ret) {
>> +		dev_err(pcie->dev, "Failed to disable regulator: %d\n", ret);
>> +		return ret;
>> +	}
>> +	ret = tegra_pcie_bpmp_set_ctrl_state(pcie, false);
>> +	if (ret) {
>> +		dev_err(pcie->dev, "Failed to disable controller %d: %d\n",
>> +			pcie->cid, ret);
>> +		return ret;
>> +	}
>> +	return ret;
>> +}
>> +
>> +static int tegra_pcie_init_controller(struct tegra_pcie_dw *pcie)
>> +{
>> +	struct dw_pcie *pci = &pcie->pci;
>> +	struct pcie_port *pp = &pci->pp;
>> +	int ret;
>> +
>> +	ret = tegra_pcie_config_controller(pcie, false);
>> +	if (ret < 0)
>> +		return ret;
>> +
>> +	pp->root_bus_nr = -1;
>> +	pp->ops = &tegra_pcie_dw_host_ops;
>> +
>> +	ret = dw_pcie_host_init(pp);
>> +	if (ret < 0) {
>> +		dev_err(pcie->dev, "Failed to add PCIe port: %d\n", ret);
>> +		goto fail_host_init;
>> +	}
>> +
>> +	return 0;
>> +
>> +fail_host_init:
>> +	return __deinit_controller(pcie);
>> +}
>> +
>> +static int tegra_pcie_try_link_l2(struct tegra_pcie_dw *pcie)
>> +{
>> +	u32 val;
>> +
>> +	if (!tegra_pcie_dw_link_up(&pcie->pci))
>> +		return 0;
>> +
>> +	val = appl_readl(pcie, APPL_RADM_STATUS);
>> +	val |= APPL_PM_XMT_TURNOFF_STATE;
>> +	appl_writel(pcie, val, APPL_RADM_STATUS);
>> +
>> +	return readl_poll_timeout_atomic(pcie->appl_base + APPL_DEBUG, val,
>> +				 val & APPL_DEBUG_PM_LINKST_IN_L2_LAT,
>> +				 1, PME_ACK_TIMEOUT);
>> +}
>> +
>> +static void tegra_pcie_dw_pme_turnoff(struct tegra_pcie_dw *pcie)
>> +{
>> +	u32 data;
>> +	int err;
>> +
>> +	if (!tegra_pcie_dw_link_up(&pcie->pci)) {
>> +		dev_dbg(pcie->dev, "PCIe link is not up...!\n");
>> +		return;
>> +	}
>> +
>> +	if (tegra_pcie_try_link_l2(pcie)) {
>> +		dev_info(pcie->dev, "Link didn't transition to L2 state\n");
>> +		/*
>> +		 * TX lane clock freq will reset to Gen1 only if link is in L2
>> +		 * or detect state.
>> +		 * So apply pex_rst to end point to force RP to go into detect
>> +		 * state
>> +		 */
>> +		data = appl_readl(pcie, APPL_PINMUX);
>> +		data &= ~APPL_PINMUX_PEX_RST;
>> +		appl_writel(pcie, data, APPL_PINMUX);
>> +
>> +		err = readl_poll_timeout_atomic(pcie->appl_base + APPL_DEBUG,
>> +						data,
>> +						((data &
>> +						APPL_DEBUG_LTSSM_STATE_MASK) >>
>> +						APPL_DEBUG_LTSSM_STATE_SHIFT) ==
>> +						LTSSM_STATE_PRE_DETECT,
>> +						1, LTSSM_TIMEOUT);
>> +		if (err) {
>> +			dev_info(pcie->dev, "Link didn't go to detect state\n");
>> +		} else {
>> +			/* Disable LTSSM after link is in detect state */
>> +			data = appl_readl(pcie, APPL_CTRL);
>> +			data &= ~APPL_CTRL_LTSSM_EN;
>> +			appl_writel(pcie, data, APPL_CTRL);
>> +		}
>> +	}
>> +	/*
>> +	 * DBI registers may not be accessible after this as PLL-E would be
>> +	 * down depending on how CLKREQ is pulled by end point
>> +	 */
>> +	data = appl_readl(pcie, APPL_PINMUX);
>> +	data |= (APPL_PINMUX_CLKREQ_OVERRIDE_EN | APPL_PINMUX_CLKREQ_OVERRIDE);
>> +	/* Cut REFCLK to slot */
>> +	data |= APPL_PINMUX_CLK_OUTPUT_IN_OVERRIDE_EN;
>> +	data &= ~APPL_PINMUX_CLK_OUTPUT_IN_OVERRIDE;
>> +	appl_writel(pcie, data, APPL_PINMUX);
>> +}
>> +
>> +static int tegra_pcie_deinit_controller(struct tegra_pcie_dw *pcie)
>> +{
>> +	/*
>> +	 * link doesn't go into L2 state with some of the endpoints with Tegra
>> +	 * if they are not in D0 state. So, need to make sure that immediate
>> +	 * downstream devices are in D0 state before sending PME_TurnOff to put
>> +	 * link into L2 state
>> +	 */
>> +	tegra_pcie_downstream_dev_to_D0(pcie);
>> +	dw_pcie_host_deinit(&pcie->pci.pp);
>> +	tegra_pcie_dw_pme_turnoff(pcie);
>> +	return __deinit_controller(pcie);
>> +}
>> +
>> +static int tegra_pcie_config_rp(struct tegra_pcie_dw *pcie)
>> +{
>> +	struct pcie_port *pp = &pcie->pci.pp;
>> +	struct device *dev = pcie->dev;
>> +	char *name;
>> +	int ret;
>> +
>> +	if (IS_ENABLED(CONFIG_PCI_MSI)) {
>> +		pp->msi_irq = of_irq_get_byname(dev->of_node, "msi");
>> +		if (!pp->msi_irq) {
>> +			dev_err(dev, "Failed to get MSI interrupt\n");
>> +			return -ENODEV;
>> +		}
>> +	}
>> +
>> +	pm_runtime_enable(dev);
>> +	ret = pm_runtime_get_sync(dev);
>> +	if (ret < 0) {
>> +		dev_err(dev, "Failed to get runtime sync for PCIe dev: %d\n",
>> +			ret);
>> +		pm_runtime_disable(dev);
>> +		return ret;
>> +	}
>> +
>> +	tegra_pcie_init_controller(pcie);
>> +
>> +	pcie->link_state = tegra_pcie_dw_link_up(&pcie->pci);
>> +
>> +	if (!pcie->link_state) {
>> +		ret = -ENOMEDIUM;
>> +		goto fail_host_init;
>> +	}
>> +
>> +	name = devm_kasprintf(dev, GFP_KERNEL, "%pOFP", dev->of_node);
>> +	if (!name) {
>> +		ret = -ENOMEM;
>> +		goto fail_host_init;
>> +	}
>> +
>> +	pcie->debugfs = debugfs_create_dir(name, NULL);
>> +	if (!pcie->debugfs)
>> +		dev_err(dev, "Failed to create debugfs\n");
>> +	else
>> +		init_debugfs(pcie);
>> +
>> +	return ret;
>> +
>> +fail_host_init:
>> +	tegra_pcie_deinit_controller(pcie);
>> +	pm_runtime_put_sync(dev);
>> +	pm_runtime_disable(dev);
>> +	return ret;
>> +}
>> +
>> +static const struct tegra_pcie_soc tegra_pcie_rc_of_data = {
>> +	.mode = DW_PCIE_RC_TYPE,
>> +};
>> +
>> +static const struct of_device_id tegra_pcie_dw_of_match[] = {
>> +	{
>> +		.compatible = "nvidia,tegra194-pcie",
>> +		.data = &tegra_pcie_rc_of_data,
>> +	},
>> +	{},
>> +};
>> +MODULE_DEVICE_TABLE(of, tegra_pcie_dw_of_match);
>> +
>> +static int tegra_pcie_dw_probe(struct platform_device *pdev)
>> +{
>> +	const struct tegra_pcie_soc *data;
>> +	struct device *dev = &pdev->dev;
>> +	struct resource *atu_dma_res;
>> +	struct tegra_pcie_dw *pcie;
>> +	struct resource *dbi_res;
>> +	struct pcie_port *pp;
>> +	struct dw_pcie *pci;
>> +	struct phy **phys;
>> +	char *name;
>> +	int ret;
>> +	u32 i;
>> +
>> +	pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL);
>> +	if (!pcie)
>> +		return -ENOMEM;
>> +
>> +	pci = &pcie->pci;
>> +	pci->dev = &pdev->dev;
>> +	pci->ops = &tegra_dw_pcie_ops;
>> +	pp = &pci->pp;
>> +	pcie->dev = &pdev->dev;
>> +
>> +	data = (struct tegra_pcie_soc *)of_device_get_match_data(dev);
>> +	if (!data)
>> +		return -EINVAL;
>> +	pcie->mode = (enum dw_pcie_device_mode)data->mode;
>> +
>> +	ret = tegra_pcie_dw_parse_dt(pcie);
>> +	if (ret < 0) {
>> +		dev_err(dev, "Failed to parse device tree: %d\n", ret);
>> +		return ret;
>> +	}
>> +
>> +	pcie->pex_ctl_supply = devm_regulator_get(dev, "vddio-pex-ctl");
>> +	if (IS_ERR(pcie->pex_ctl_supply)) {
>> +		dev_err(dev, "Failed to get regulator: %ld\n",
>> +			PTR_ERR(pcie->pex_ctl_supply));
>> +		return PTR_ERR(pcie->pex_ctl_supply);
>> +	}
>> +
>> +	pcie->core_clk = devm_clk_get(dev, "core");
>> +	if (IS_ERR(pcie->core_clk)) {
>> +		dev_err(dev, "Failed to get core clock: %ld\n",
>> +			PTR_ERR(pcie->core_clk));
>> +		return PTR_ERR(pcie->core_clk);
>> +	}
>> +
>> +	pcie->appl_res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
>> +						      "appl");
>> +	if (!pcie->appl_res) {
>> +		dev_err(dev, "Failed to find \"appl\" region\n");
>> +		return PTR_ERR(pcie->appl_res);
>> +	}
>> +	pcie->appl_base = devm_ioremap_resource(dev, pcie->appl_res);
>> +	if (IS_ERR(pcie->appl_base))
>> +		return PTR_ERR(pcie->appl_base);
>> +
>> +	pcie->core_apb_rst = devm_reset_control_get(dev, "apb");
>> +	if (IS_ERR(pcie->core_apb_rst)) {
>> +		dev_err(dev, "Failed to get APB reset: %ld\n",
>> +			PTR_ERR(pcie->core_apb_rst));
>> +		return PTR_ERR(pcie->core_apb_rst);
>> +	}
>> +
>> +	phys = devm_kcalloc(dev, pcie->phy_count, sizeof(*phys), GFP_KERNEL);
>> +	if (!phys)
>> +		return PTR_ERR(phys);
>> +
>> +	for (i = 0; i < pcie->phy_count; i++) {
>> +		name = kasprintf(GFP_KERNEL, "p2u-%u", i);
>> +		if (!name) {
>> +			dev_err(dev, "Failed to create P2U string\n");
>> +			return -ENOMEM;
>> +		}
>> +		phys[i] = devm_phy_get(dev, name);
>> +		kfree(name);
>> +		if (IS_ERR(phys[i])) {
>> +			ret = PTR_ERR(phys[i]);
>> +			dev_err(dev, "Failed to get PHY: %d\n", ret);
>> +			return ret;
>> +		}
>> +	}
>> +
>> +	pcie->phys = phys;
>> +
>> +	dbi_res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi");
>> +	if (!dbi_res) {
>> +		dev_err(dev, "Failed to find \"dbi\" region\n");
>> +		return PTR_ERR(dbi_res);
>> +	}
>> +	pcie->dbi_res = dbi_res;
>> +
>> +	pci->dbi_base = devm_ioremap_resource(dev, dbi_res);
>> +	if (IS_ERR(pci->dbi_base))
>> +		return PTR_ERR(pci->dbi_base);
>> +
>> +	/* Tegra HW locates DBI2 at a fixed offset from DBI */
>> +	pci->dbi_base2 = pci->dbi_base + 0x1000;
>> +
>> +	atu_dma_res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
>> +						   "atu_dma");
>> +	if (!atu_dma_res) {
>> +		dev_err(dev, "Failed to find \"atu_dma\" region\n");
>> +		return PTR_ERR(atu_dma_res);
>> +	}
>> +	pcie->atu_dma_res = atu_dma_res;
>> +	pci->atu_base = devm_ioremap_resource(dev, atu_dma_res);
>> +	if (IS_ERR(pci->atu_base))
>> +		return PTR_ERR(pci->atu_base);
>> +
>> +	pcie->core_rst = devm_reset_control_get(dev, "core");
>> +	if (IS_ERR(pcie->core_rst)) {
>> +		dev_err(dev, "Failed to get core reset: %ld\n",
>> +			PTR_ERR(pcie->core_rst));
>> +		return PTR_ERR(pcie->core_rst);
>> +	}
>> +
>> +	pp->irq = platform_get_irq_byname(pdev, "intr");
>> +	if (!pp->irq) {
>> +		dev_err(dev, "Failed to get \"intr\" interrupt\n");
>> +		return -ENODEV;
>> +	}
>> +
>> +	ret = devm_request_irq(dev, pp->irq, tegra_pcie_irq_handler,
>> +			       IRQF_SHARED, "tegra-pcie-intr", pcie);
>> +	if (ret) {
>> +		dev_err(dev, "Failed to request IRQ %d: %d\n", pp->irq, ret);
>> +		return ret;
>> +	}
>> +
>> +	pcie->bpmp = tegra_bpmp_get(dev);
>> +	if (IS_ERR(pcie->bpmp))
>> +		return PTR_ERR(pcie->bpmp);
>> +
>> +	platform_set_drvdata(pdev, pcie);
>> +
>> +	if (pcie->mode == DW_PCIE_RC_TYPE) {
>> +		ret = tegra_pcie_config_rp(pcie);
>> +		if (ret && ret != -ENOMEDIUM)
>> +			goto fail;
>> +		else
>> +			return 0;
> 
> So if the link is not up we still go ahead and make probe
> succeed. What for ?
We may need root port to be available to support hot-plugging of endpoint
devices, so, we don't fail the probe.

> 
>> +	}
>> +
>> +fail:
>> +	tegra_bpmp_put(pcie->bpmp);
>> +	return ret;
>> +}
>> +
>> +static int tegra_pcie_dw_remove(struct platform_device *pdev)
>> +{
>> +	struct tegra_pcie_dw *pcie = platform_get_drvdata(pdev);
>> +
>> +	if (pcie->mode != DW_PCIE_RC_TYPE)
>> +		return 0;
>> +
>> +	if (!pcie->link_state)
>> +		return 0;
>> +
>> +	debugfs_remove_recursive(pcie->debugfs);
>> +	tegra_pcie_deinit_controller(pcie);
>> +	pm_runtime_put_sync(pcie->dev);
>> +	pm_runtime_disable(pcie->dev);
>> +	tegra_bpmp_put(pcie->bpmp);
>> +
>> +	return 0;
>> +}
>> +
>> +static int tegra_pcie_dw_suspend_late(struct device *dev)
>> +{
>> +	struct tegra_pcie_dw *pcie = dev_get_drvdata(dev);
>> +	u32 val;
>> +
>> +	if (!pcie->link_state)
>> +		return 0;
>> +
>> +	/* Enable HW_HOT_RST mode */
>> +	val = appl_readl(pcie, APPL_CTRL);
>> +	val &= ~(APPL_CTRL_HW_HOT_RST_MODE_MASK <<
>> +		 APPL_CTRL_HW_HOT_RST_MODE_SHIFT);
>> +	val |= APPL_CTRL_HW_HOT_RST_EN;
>> +	appl_writel(pcie, val, APPL_CTRL);
>> +
>> +	return 0;
>> +}
>> +
>> +static int tegra_pcie_dw_suspend_noirq(struct device *dev)
>> +{
>> +	struct tegra_pcie_dw *pcie = dev_get_drvdata(dev);
>> +
>> +	if (!pcie->link_state)
>> +		return 0;
>> +
>> +	/* Save MSI interrupt vector */
>> +	pcie->msi_ctrl_int = dw_pcie_readl_dbi(&pcie->pci,
>> +					       PORT_LOGIC_MSI_CTRL_INT_0_EN);
>> +	tegra_pcie_downstream_dev_to_D0(pcie);
> 
> I think this requires some comments. AFAIU this is allowed by
> the PCI specs (PCI Express Base 4.0 r1.0 September 29-2017,
> 5.2 Link State Power Management). However, I would like to
> understand how this plays with the D state the devices are left
> in upon system suspend.
> 
> "As the following example illustrates, it is also possible to remove
> power without first placing all Functions into D3Hot".
> 
> I assume that's what happens on this platform to allow L2 entry but
> again, this needs clarification.
Yes. It is true that in the case of Tegra194, it brings devices back to D0
before putting link to L2 state.

> 
> Lorenzo
> 
>> +	tegra_pcie_dw_pme_turnoff(pcie);
>> +	return __deinit_controller(pcie);
>> +}
>> +
>> +static int tegra_pcie_dw_resume_noirq(struct device *dev)
>> +{
>> +	struct tegra_pcie_dw *pcie = dev_get_drvdata(dev);
>> +	int ret;
>> +
>> +	if (!pcie->link_state)
>> +		return 0;
>> +
>> +	ret = tegra_pcie_config_controller(pcie, true);
>> +	if (ret < 0)
>> +		return ret;
>> +
>> +	ret = tegra_pcie_dw_host_init(&pcie->pci.pp);
>> +	if (ret < 0) {
>> +		dev_err(dev, "Failed to init host: %d\n", ret);
>> +		goto fail_host_init;
>> +	}
>> +
>> +	/* Restore MSI interrupt vector */
>> +	dw_pcie_writel_dbi(&pcie->pci, PORT_LOGIC_MSI_CTRL_INT_0_EN,
>> +			   pcie->msi_ctrl_int);
>> +
>> +	return 0;
>> +fail_host_init:
>> +	return __deinit_controller(pcie);
>> +}
>> +
>> +static int tegra_pcie_dw_resume_early(struct device *dev)
>> +{
>> +	struct tegra_pcie_dw *pcie = dev_get_drvdata(dev);
>> +	u32 val;
>> +
>> +	if (!pcie->link_state)
>> +		return 0;
>> +
>> +	/* Disable HW_HOT_RST mode */
>> +	val = appl_readl(pcie, APPL_CTRL);
>> +	val &= ~(APPL_CTRL_HW_HOT_RST_MODE_MASK <<
>> +		 APPL_CTRL_HW_HOT_RST_MODE_SHIFT);
>> +	val |= APPL_CTRL_HW_HOT_RST_MODE_IMDT_RST <<
>> +	       APPL_CTRL_HW_HOT_RST_MODE_SHIFT;
>> +	val &= ~APPL_CTRL_HW_HOT_RST_EN;
>> +	appl_writel(pcie, val, APPL_CTRL);
>> +
>> +	return 0;
>> +}
>> +
>> +static void tegra_pcie_dw_shutdown(struct platform_device *pdev)
>> +{
>> +	struct tegra_pcie_dw *pcie = platform_get_drvdata(pdev);
>> +
>> +	if (pcie->mode != DW_PCIE_RC_TYPE)
>> +		return;
>> +
>> +	if (!pcie->link_state)
>> +		return;
>> +
>> +	debugfs_remove_recursive(pcie->debugfs);
>> +	tegra_pcie_downstream_dev_to_D0(pcie);
>> +
>> +	disable_irq(pcie->pci.pp.irq);
>> +	if (IS_ENABLED(CONFIG_PCI_MSI))
>> +		disable_irq(pcie->pci.pp.msi_irq);
>> +
>> +	tegra_pcie_dw_pme_turnoff(pcie);
>> +	__deinit_controller(pcie);
>> +}
>> +
>> +static const struct dev_pm_ops tegra_pcie_dw_pm_ops = {
>> +	.suspend_late = tegra_pcie_dw_suspend_late,
>> +	.suspend_noirq = tegra_pcie_dw_suspend_noirq,
>> +	.resume_noirq = tegra_pcie_dw_resume_noirq,
>> +	.resume_early = tegra_pcie_dw_resume_early,
>> +};
>> +
>> +static struct platform_driver tegra_pcie_dw_driver = {
>> +	.probe = tegra_pcie_dw_probe,
>> +	.remove = tegra_pcie_dw_remove,
>> +	.shutdown = tegra_pcie_dw_shutdown,
>> +	.driver = {
>> +		.name	= "tegra194-pcie",
>> +		.pm = &tegra_pcie_dw_pm_ops,
>> +		.of_match_table = tegra_pcie_dw_of_match,
>> +	},
>> +};
>> +module_platform_driver(tegra_pcie_dw_driver);
>> +
>> +MODULE_AUTHOR("Vidya Sagar <vidyas@nvidia.com>");
>> +MODULE_DESCRIPTION("NVIDIA PCIe host controller driver");
>> +MODULE_LICENSE("GPL v2");
>> -- 
>> 2.17.1
>>


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH V13 12/12] PCI: tegra: Add Tegra194 PCIe support
  2019-07-12 15:32     ` Vidya Sagar
@ 2019-07-12 16:07       ` Lorenzo Pieralisi
  2019-07-13  7:04         ` Vidya Sagar
  0 siblings, 1 reply; 34+ messages in thread
From: Lorenzo Pieralisi @ 2019-07-12 16:07 UTC (permalink / raw)
  To: Vidya Sagar
  Cc: bhelgaas, robh+dt, mark.rutland, thierry.reding, jonathanh,
	kishon, catalin.marinas, will.deacon, jingoohan1,
	gustavo.pimentel, digetx, mperttunen, linux-pci, devicetree,
	linux-tegra, linux-kernel, linux-arm-kernel, kthota, mmaddireddy,
	sagar.tv

On Fri, Jul 12, 2019 at 09:02:49PM +0530, Vidya Sagar wrote:

[...]

> > > +static irqreturn_t tegra_pcie_irq_handler(int irq, void *arg)
> > > +{
> > > +	struct tegra_pcie_dw *pcie = arg;
> > > +
> > > +	if (pcie->mode == DW_PCIE_RC_TYPE)
> > > +		return tegra_pcie_rp_irq_handler(pcie);
> > 
> > What's the point of registering the handler if mode != DW_PCIE_RC_TYPE ?
> Currently this driver supports only root port mode but we have a plan
> to add support for endpoint mode (as Tegra194 as dual mode
> controllers) also in future and when that happens, we'll have a
> corresponding tegra_pcie_ep_irq_handler() to take care of ep specific
> interrupts.

Sure, that's why you should add tegra_pcie_dw->mode when it is needed,
not in this patch.

[...]

> > > +static int tegra_pcie_dw_host_init(struct pcie_port *pp)
> > > +{
> > > +	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
> > > +	struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);
> > > +	u32 val, tmp, offset, speed;
> > > +	unsigned int count;
> > > +	u16 val_w;
> > > +
> > > +core_init:
> > 
> > I think it would be cleaner to include all registers programming
> > within a function and we remove this label (and goto) below.
> Some background: As per spec r4.0 v1.0, downstream ports that support
> 16.0 GT/s must support Scaled Flow Control (sec 3.4.2) and Tegra194's
> downstream ports being 16.0 GT/s capable, enable scaled flow control
> by having Data Link Feature (sec 7.7.4) enabled by default. There is
> one endpoint (ASMedia USB3.0 controller) that doesn't link up with
> root port because of this (i.e. DLF being enabled). The way we are
> detecting this situation is to check for partial linkup i.e. one of
> application logic registers (i.e. from "appl" region) says link is up
> but the same is not reflected in configuration space of root port.
> Recommendation from our hardware team in this situation is to disable
> DLF in root port and try link up with endpoint again.  To achieve
> this, we put the core through reset cycle, disable DLF and proceed
> with configuring all other registers and check for link up.
> 
> Initially in Patch-1, I didn't have goto statement but a recursion
> with depth-1 (as the above situation occurs only once). It was
> reviewed @ http://patchwork.ozlabs.org/patch/1065707/ and Thierry said
> it would be simpler to use a goto instead of calling the same function
> again. So, I modified the code accordingly. Please do let me know if
> you strongly feel we should call tegra_pcie_dw_host_init() instead of
> goto here. I'll change it.

I did not say we should call tegra_pcie_dw_host_init(), sorry for
not being clear. What I asked is factoring out registers programming
in a function and call it where core_init: label is and call it
again if DLF enablement causes link up to fail.

[...]

> > > +static int tegra_pcie_bpmp_set_ctrl_state(struct tegra_pcie_dw *pcie,
> > > +					  bool enable)
> > > +{
> > > +	struct mrq_uphy_response resp;
> > > +	struct tegra_bpmp_message msg;
> > > +	struct mrq_uphy_request req;
> > > +	int err;
> > > +
> > > +	if (pcie->cid == 5)
> > > +		return 0;
> > 
> > What's wrong with cid == 5 ? Explain please.
> Controller with ID=5 doesn't need any programming to enable it which is
> done here through calling firmware API.
> 
> > 
> > > +	memset(&req, 0, sizeof(req));
> > > +	memset(&resp, 0, sizeof(resp));
> > > +
> > > +	req.cmd = CMD_UPHY_PCIE_CONTROLLER_STATE;
> > > +	req.controller_state.pcie_controller = pcie->cid;
> > > +	req.controller_state.enable = enable;
> > > +
> > > +	memset(&msg, 0, sizeof(msg));
> > > +	msg.mrq = MRQ_UPHY;
> > > +	msg.tx.data = &req;
> > > +	msg.tx.size = sizeof(req);
> > > +	msg.rx.data = &resp;
> > > +	msg.rx.size = sizeof(resp);
> > > +
> > > +	if (irqs_disabled())
> > 
> > Can you explain to me what this check is meant to achieve please ?
> Firmware interface provides different APIs to be called when there are
> no interrupts enabled in the system (noirq context) and otherwise
> hence checking that situation here and calling appropriate API.

That's what I am questioning. Being called from {suspend/resume}_noirq()
callbacks (if that's the code path this check caters for) does not mean
irqs_disabled() == true.

Actually, if tegra_bpmp_transfer() requires IRQs to be enabled you may
even end up in a situation where that blocking call does not wake up
because the IRQ in question was disabled in the NOIRQ suspend/resume
phase.

[...]

> > > +static int tegra_pcie_dw_probe(struct platform_device *pdev)
> > > +{
> > > +	const struct tegra_pcie_soc *data;
> > > +	struct device *dev = &pdev->dev;
> > > +	struct resource *atu_dma_res;
> > > +	struct tegra_pcie_dw *pcie;
> > > +	struct resource *dbi_res;
> > > +	struct pcie_port *pp;
> > > +	struct dw_pcie *pci;
> > > +	struct phy **phys;
> > > +	char *name;
> > > +	int ret;
> > > +	u32 i;
> > > +
> > > +	pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL);
> > > +	if (!pcie)
> > > +		return -ENOMEM;
> > > +
> > > +	pci = &pcie->pci;
> > > +	pci->dev = &pdev->dev;
> > > +	pci->ops = &tegra_dw_pcie_ops;
> > > +	pp = &pci->pp;
> > > +	pcie->dev = &pdev->dev;
> > > +
> > > +	data = (struct tegra_pcie_soc *)of_device_get_match_data(dev);
> > > +	if (!data)
> > > +		return -EINVAL;
> > > +	pcie->mode = (enum dw_pcie_device_mode)data->mode;
> > > +
> > > +	ret = tegra_pcie_dw_parse_dt(pcie);
> > > +	if (ret < 0) {
> > > +		dev_err(dev, "Failed to parse device tree: %d\n", ret);
> > > +		return ret;
> > > +	}
> > > +
> > > +	pcie->pex_ctl_supply = devm_regulator_get(dev, "vddio-pex-ctl");
> > > +	if (IS_ERR(pcie->pex_ctl_supply)) {
> > > +		dev_err(dev, "Failed to get regulator: %ld\n",
> > > +			PTR_ERR(pcie->pex_ctl_supply));
> > > +		return PTR_ERR(pcie->pex_ctl_supply);
> > > +	}
> > > +
> > > +	pcie->core_clk = devm_clk_get(dev, "core");
> > > +	if (IS_ERR(pcie->core_clk)) {
> > > +		dev_err(dev, "Failed to get core clock: %ld\n",
> > > +			PTR_ERR(pcie->core_clk));
> > > +		return PTR_ERR(pcie->core_clk);
> > > +	}
> > > +
> > > +	pcie->appl_res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
> > > +						      "appl");
> > > +	if (!pcie->appl_res) {
> > > +		dev_err(dev, "Failed to find \"appl\" region\n");
> > > +		return PTR_ERR(pcie->appl_res);
> > > +	}
> > > +	pcie->appl_base = devm_ioremap_resource(dev, pcie->appl_res);
> > > +	if (IS_ERR(pcie->appl_base))
> > > +		return PTR_ERR(pcie->appl_base);
> > > +
> > > +	pcie->core_apb_rst = devm_reset_control_get(dev, "apb");
> > > +	if (IS_ERR(pcie->core_apb_rst)) {
> > > +		dev_err(dev, "Failed to get APB reset: %ld\n",
> > > +			PTR_ERR(pcie->core_apb_rst));
> > > +		return PTR_ERR(pcie->core_apb_rst);
> > > +	}
> > > +
> > > +	phys = devm_kcalloc(dev, pcie->phy_count, sizeof(*phys), GFP_KERNEL);
> > > +	if (!phys)
> > > +		return PTR_ERR(phys);
> > > +
> > > +	for (i = 0; i < pcie->phy_count; i++) {
> > > +		name = kasprintf(GFP_KERNEL, "p2u-%u", i);
> > > +		if (!name) {
> > > +			dev_err(dev, "Failed to create P2U string\n");
> > > +			return -ENOMEM;
> > > +		}
> > > +		phys[i] = devm_phy_get(dev, name);
> > > +		kfree(name);
> > > +		if (IS_ERR(phys[i])) {
> > > +			ret = PTR_ERR(phys[i]);
> > > +			dev_err(dev, "Failed to get PHY: %d\n", ret);
> > > +			return ret;
> > > +		}
> > > +	}
> > > +
> > > +	pcie->phys = phys;
> > > +
> > > +	dbi_res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi");
> > > +	if (!dbi_res) {
> > > +		dev_err(dev, "Failed to find \"dbi\" region\n");
> > > +		return PTR_ERR(dbi_res);
> > > +	}
> > > +	pcie->dbi_res = dbi_res;
> > > +
> > > +	pci->dbi_base = devm_ioremap_resource(dev, dbi_res);
> > > +	if (IS_ERR(pci->dbi_base))
> > > +		return PTR_ERR(pci->dbi_base);
> > > +
> > > +	/* Tegra HW locates DBI2 at a fixed offset from DBI */
> > > +	pci->dbi_base2 = pci->dbi_base + 0x1000;
> > > +
> > > +	atu_dma_res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
> > > +						   "atu_dma");
> > > +	if (!atu_dma_res) {
> > > +		dev_err(dev, "Failed to find \"atu_dma\" region\n");
> > > +		return PTR_ERR(atu_dma_res);
> > > +	}
> > > +	pcie->atu_dma_res = atu_dma_res;
> > > +	pci->atu_base = devm_ioremap_resource(dev, atu_dma_res);
> > > +	if (IS_ERR(pci->atu_base))
> > > +		return PTR_ERR(pci->atu_base);
> > > +
> > > +	pcie->core_rst = devm_reset_control_get(dev, "core");
> > > +	if (IS_ERR(pcie->core_rst)) {
> > > +		dev_err(dev, "Failed to get core reset: %ld\n",
> > > +			PTR_ERR(pcie->core_rst));
> > > +		return PTR_ERR(pcie->core_rst);
> > > +	}
> > > +
> > > +	pp->irq = platform_get_irq_byname(pdev, "intr");
> > > +	if (!pp->irq) {
> > > +		dev_err(dev, "Failed to get \"intr\" interrupt\n");
> > > +		return -ENODEV;
> > > +	}
> > > +
> > > +	ret = devm_request_irq(dev, pp->irq, tegra_pcie_irq_handler,
> > > +			       IRQF_SHARED, "tegra-pcie-intr", pcie);
> > > +	if (ret) {
> > > +		dev_err(dev, "Failed to request IRQ %d: %d\n", pp->irq, ret);
> > > +		return ret;
> > > +	}
> > > +
> > > +	pcie->bpmp = tegra_bpmp_get(dev);
> > > +	if (IS_ERR(pcie->bpmp))
> > > +		return PTR_ERR(pcie->bpmp);
> > > +
> > > +	platform_set_drvdata(pdev, pcie);
> > > +
> > > +	if (pcie->mode == DW_PCIE_RC_TYPE) {
> > > +		ret = tegra_pcie_config_rp(pcie);
> > > +		if (ret && ret != -ENOMEDIUM)
> > > +			goto fail;
> > > +		else
> > > +			return 0;
> > 
> > So if the link is not up we still go ahead and make probe
> > succeed. What for ?
> We may need root port to be available to support hot-plugging of
> endpoint devices, so, we don't fail the probe.

We need it or we don't. If you do support hotplugging of endpoint
devices point me at the code, otherwise link up failure means
failure to probe.

> > > +	}
> > > +
> > > +fail:
> > > +	tegra_bpmp_put(pcie->bpmp);
> > > +	return ret;
> > > +}
> > > +
> > > +static int tegra_pcie_dw_remove(struct platform_device *pdev)
> > > +{
> > > +	struct tegra_pcie_dw *pcie = platform_get_drvdata(pdev);
> > > +
> > > +	if (pcie->mode != DW_PCIE_RC_TYPE)
> > > +		return 0;
> > > +
> > > +	if (!pcie->link_state)
> > > +		return 0;
> > > +
> > > +	debugfs_remove_recursive(pcie->debugfs);
> > > +	tegra_pcie_deinit_controller(pcie);
> > > +	pm_runtime_put_sync(pcie->dev);
> > > +	pm_runtime_disable(pcie->dev);
> > > +	tegra_bpmp_put(pcie->bpmp);
> > > +
> > > +	return 0;
> > > +}
> > > +
> > > +static int tegra_pcie_dw_suspend_late(struct device *dev)
> > > +{
> > > +	struct tegra_pcie_dw *pcie = dev_get_drvdata(dev);
> > > +	u32 val;
> > > +
> > > +	if (!pcie->link_state)
> > > +		return 0;
> > > +
> > > +	/* Enable HW_HOT_RST mode */
> > > +	val = appl_readl(pcie, APPL_CTRL);
> > > +	val &= ~(APPL_CTRL_HW_HOT_RST_MODE_MASK <<
> > > +		 APPL_CTRL_HW_HOT_RST_MODE_SHIFT);
> > > +	val |= APPL_CTRL_HW_HOT_RST_EN;
> > > +	appl_writel(pcie, val, APPL_CTRL);
> > > +
> > > +	return 0;
> > > +}
> > > +
> > > +static int tegra_pcie_dw_suspend_noirq(struct device *dev)
> > > +{
> > > +	struct tegra_pcie_dw *pcie = dev_get_drvdata(dev);
> > > +
> > > +	if (!pcie->link_state)
> > > +		return 0;
> > > +
> > > +	/* Save MSI interrupt vector */
> > > +	pcie->msi_ctrl_int = dw_pcie_readl_dbi(&pcie->pci,
> > > +					       PORT_LOGIC_MSI_CTRL_INT_0_EN);
> > > +	tegra_pcie_downstream_dev_to_D0(pcie);
> > 
> > I think this requires some comments. AFAIU this is allowed by
> > the PCI specs (PCI Express Base 4.0 r1.0 September 29-2017,
> > 5.2 Link State Power Management). However, I would like to
> > understand how this plays with the D state the devices are left
> > in upon system suspend.
> > 
> > "As the following example illustrates, it is also possible to remove
> > power without first placing all Functions into D3Hot".
> > 
> > I assume that's what happens on this platform to allow L2 entry but
> > again, this needs clarification.
> Yes. It is true that in the case of Tegra194, it brings devices back
> to D0 before putting link to L2 state.

Comment it, extensively, so that anyone reading the code understands
why it is done so and what happens to devices then.

Lorenzo

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH V13 12/12] PCI: tegra: Add Tegra194 PCIe support
  2019-07-12 16:07       ` Lorenzo Pieralisi
@ 2019-07-13  7:04         ` Vidya Sagar
  2019-07-16 11:22           ` Lorenzo Pieralisi
  0 siblings, 1 reply; 34+ messages in thread
From: Vidya Sagar @ 2019-07-13  7:04 UTC (permalink / raw)
  To: Lorenzo Pieralisi
  Cc: bhelgaas, robh+dt, mark.rutland, thierry.reding, jonathanh,
	kishon, catalin.marinas, will.deacon, jingoohan1,
	gustavo.pimentel, digetx, mperttunen, linux-pci, devicetree,
	linux-tegra, linux-kernel, linux-arm-kernel, kthota, mmaddireddy,
	sagar.tv

On 7/12/2019 9:37 PM, Lorenzo Pieralisi wrote:
> On Fri, Jul 12, 2019 at 09:02:49PM +0530, Vidya Sagar wrote:
> 
> [...]
> 
>>>> +static irqreturn_t tegra_pcie_irq_handler(int irq, void *arg)
>>>> +{
>>>> +	struct tegra_pcie_dw *pcie = arg;
>>>> +
>>>> +	if (pcie->mode == DW_PCIE_RC_TYPE)
>>>> +		return tegra_pcie_rp_irq_handler(pcie);
>>>
>>> What's the point of registering the handler if mode != DW_PCIE_RC_TYPE ?
>> Currently this driver supports only root port mode but we have a plan
>> to add support for endpoint mode (as Tegra194 as dual mode
>> controllers) also in future and when that happens, we'll have a
>> corresponding tegra_pcie_ep_irq_handler() to take care of ep specific
>> interrupts.
> 
> Sure, that's why you should add tegra_pcie_dw->mode when it is needed,
> not in this patch.
Ok.

> 
> [...]
> 
>>>> +static int tegra_pcie_dw_host_init(struct pcie_port *pp)
>>>> +{
>>>> +	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
>>>> +	struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);
>>>> +	u32 val, tmp, offset, speed;
>>>> +	unsigned int count;
>>>> +	u16 val_w;
>>>> +
>>>> +core_init:
>>>
>>> I think it would be cleaner to include all registers programming
>>> within a function and we remove this label (and goto) below.
>> Some background: As per spec r4.0 v1.0, downstream ports that support
>> 16.0 GT/s must support Scaled Flow Control (sec 3.4.2) and Tegra194's
>> downstream ports being 16.0 GT/s capable, enable scaled flow control
>> by having Data Link Feature (sec 7.7.4) enabled by default. There is
>> one endpoint (ASMedia USB3.0 controller) that doesn't link up with
>> root port because of this (i.e. DLF being enabled). The way we are
>> detecting this situation is to check for partial linkup i.e. one of
>> application logic registers (i.e. from "appl" region) says link is up
>> but the same is not reflected in configuration space of root port.
>> Recommendation from our hardware team in this situation is to disable
>> DLF in root port and try link up with endpoint again.  To achieve
>> this, we put the core through reset cycle, disable DLF and proceed
>> with configuring all other registers and check for link up.
>>
>> Initially in Patch-1, I didn't have goto statement but a recursion
>> with depth-1 (as the above situation occurs only once). It was
>> reviewed @ http://patchwork.ozlabs.org/patch/1065707/ and Thierry said
>> it would be simpler to use a goto instead of calling the same function
>> again. So, I modified the code accordingly. Please do let me know if
>> you strongly feel we should call tegra_pcie_dw_host_init() instead of
>> goto here. I'll change it.
> 
> I did not say we should call tegra_pcie_dw_host_init(), sorry for
> not being clear. What I asked is factoring out registers programming
> in a function and call it where core_init: label is and call it
> again if DLF enablement causes link up to fail.
Ok.

> 
> [...]
> 
>>>> +static int tegra_pcie_bpmp_set_ctrl_state(struct tegra_pcie_dw *pcie,
>>>> +					  bool enable)
>>>> +{
>>>> +	struct mrq_uphy_response resp;
>>>> +	struct tegra_bpmp_message msg;
>>>> +	struct mrq_uphy_request req;
>>>> +	int err;
>>>> +
>>>> +	if (pcie->cid == 5)
>>>> +		return 0;
>>>
>>> What's wrong with cid == 5 ? Explain please.
>> Controller with ID=5 doesn't need any programming to enable it which is
>> done here through calling firmware API.
>>
>>>
>>>> +	memset(&req, 0, sizeof(req));
>>>> +	memset(&resp, 0, sizeof(resp));
>>>> +
>>>> +	req.cmd = CMD_UPHY_PCIE_CONTROLLER_STATE;
>>>> +	req.controller_state.pcie_controller = pcie->cid;
>>>> +	req.controller_state.enable = enable;
>>>> +
>>>> +	memset(&msg, 0, sizeof(msg));
>>>> +	msg.mrq = MRQ_UPHY;
>>>> +	msg.tx.data = &req;
>>>> +	msg.tx.size = sizeof(req);
>>>> +	msg.rx.data = &resp;
>>>> +	msg.rx.size = sizeof(resp);
>>>> +
>>>> +	if (irqs_disabled())
>>>
>>> Can you explain to me what this check is meant to achieve please ?
>> Firmware interface provides different APIs to be called when there are
>> no interrupts enabled in the system (noirq context) and otherwise
>> hence checking that situation here and calling appropriate API.
> 
> That's what I am questioning. Being called from {suspend/resume}_noirq()
> callbacks (if that's the code path this check caters for) does not mean
> irqs_disabled() == true.
Agree.
Actually, I got a hint of having this check from the following.
Both tegra_bpmp_transfer_atomic() and tegra_bpmp_transfer() are indirectly
called by APIs registered with .master_xfer() and .master_xfer_atomic() hooks of
struct i2c_algorithm and the decision to call which one of these is made using the
following check in i2c-core.h file.
static inline bool i2c_in_atomic_xfer_mode(void)
{
	return system_state > SYSTEM_RUNNING && irqs_disabled();
}
I think I should use this condition as is IIUC.
Please let me know if there are any concerns with this.

> 
> Actually, if tegra_bpmp_transfer() requires IRQs to be enabled you may
> even end up in a situation where that blocking call does not wake up
> because the IRQ in question was disabled in the NOIRQ suspend/resume
> phase.
> 
> [...]
> 
>>>> +static int tegra_pcie_dw_probe(struct platform_device *pdev)
>>>> +{
>>>> +	const struct tegra_pcie_soc *data;
>>>> +	struct device *dev = &pdev->dev;
>>>> +	struct resource *atu_dma_res;
>>>> +	struct tegra_pcie_dw *pcie;
>>>> +	struct resource *dbi_res;
>>>> +	struct pcie_port *pp;
>>>> +	struct dw_pcie *pci;
>>>> +	struct phy **phys;
>>>> +	char *name;
>>>> +	int ret;
>>>> +	u32 i;
>>>> +
>>>> +	pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL);
>>>> +	if (!pcie)
>>>> +		return -ENOMEM;
>>>> +
>>>> +	pci = &pcie->pci;
>>>> +	pci->dev = &pdev->dev;
>>>> +	pci->ops = &tegra_dw_pcie_ops;
>>>> +	pp = &pci->pp;
>>>> +	pcie->dev = &pdev->dev;
>>>> +
>>>> +	data = (struct tegra_pcie_soc *)of_device_get_match_data(dev);
>>>> +	if (!data)
>>>> +		return -EINVAL;
>>>> +	pcie->mode = (enum dw_pcie_device_mode)data->mode;
>>>> +
>>>> +	ret = tegra_pcie_dw_parse_dt(pcie);
>>>> +	if (ret < 0) {
>>>> +		dev_err(dev, "Failed to parse device tree: %d\n", ret);
>>>> +		return ret;
>>>> +	}
>>>> +
>>>> +	pcie->pex_ctl_supply = devm_regulator_get(dev, "vddio-pex-ctl");
>>>> +	if (IS_ERR(pcie->pex_ctl_supply)) {
>>>> +		dev_err(dev, "Failed to get regulator: %ld\n",
>>>> +			PTR_ERR(pcie->pex_ctl_supply));
>>>> +		return PTR_ERR(pcie->pex_ctl_supply);
>>>> +	}
>>>> +
>>>> +	pcie->core_clk = devm_clk_get(dev, "core");
>>>> +	if (IS_ERR(pcie->core_clk)) {
>>>> +		dev_err(dev, "Failed to get core clock: %ld\n",
>>>> +			PTR_ERR(pcie->core_clk));
>>>> +		return PTR_ERR(pcie->core_clk);
>>>> +	}
>>>> +
>>>> +	pcie->appl_res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
>>>> +						      "appl");
>>>> +	if (!pcie->appl_res) {
>>>> +		dev_err(dev, "Failed to find \"appl\" region\n");
>>>> +		return PTR_ERR(pcie->appl_res);
>>>> +	}
>>>> +	pcie->appl_base = devm_ioremap_resource(dev, pcie->appl_res);
>>>> +	if (IS_ERR(pcie->appl_base))
>>>> +		return PTR_ERR(pcie->appl_base);
>>>> +
>>>> +	pcie->core_apb_rst = devm_reset_control_get(dev, "apb");
>>>> +	if (IS_ERR(pcie->core_apb_rst)) {
>>>> +		dev_err(dev, "Failed to get APB reset: %ld\n",
>>>> +			PTR_ERR(pcie->core_apb_rst));
>>>> +		return PTR_ERR(pcie->core_apb_rst);
>>>> +	}
>>>> +
>>>> +	phys = devm_kcalloc(dev, pcie->phy_count, sizeof(*phys), GFP_KERNEL);
>>>> +	if (!phys)
>>>> +		return PTR_ERR(phys);
>>>> +
>>>> +	for (i = 0; i < pcie->phy_count; i++) {
>>>> +		name = kasprintf(GFP_KERNEL, "p2u-%u", i);
>>>> +		if (!name) {
>>>> +			dev_err(dev, "Failed to create P2U string\n");
>>>> +			return -ENOMEM;
>>>> +		}
>>>> +		phys[i] = devm_phy_get(dev, name);
>>>> +		kfree(name);
>>>> +		if (IS_ERR(phys[i])) {
>>>> +			ret = PTR_ERR(phys[i]);
>>>> +			dev_err(dev, "Failed to get PHY: %d\n", ret);
>>>> +			return ret;
>>>> +		}
>>>> +	}
>>>> +
>>>> +	pcie->phys = phys;
>>>> +
>>>> +	dbi_res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi");
>>>> +	if (!dbi_res) {
>>>> +		dev_err(dev, "Failed to find \"dbi\" region\n");
>>>> +		return PTR_ERR(dbi_res);
>>>> +	}
>>>> +	pcie->dbi_res = dbi_res;
>>>> +
>>>> +	pci->dbi_base = devm_ioremap_resource(dev, dbi_res);
>>>> +	if (IS_ERR(pci->dbi_base))
>>>> +		return PTR_ERR(pci->dbi_base);
>>>> +
>>>> +	/* Tegra HW locates DBI2 at a fixed offset from DBI */
>>>> +	pci->dbi_base2 = pci->dbi_base + 0x1000;
>>>> +
>>>> +	atu_dma_res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
>>>> +						   "atu_dma");
>>>> +	if (!atu_dma_res) {
>>>> +		dev_err(dev, "Failed to find \"atu_dma\" region\n");
>>>> +		return PTR_ERR(atu_dma_res);
>>>> +	}
>>>> +	pcie->atu_dma_res = atu_dma_res;
>>>> +	pci->atu_base = devm_ioremap_resource(dev, atu_dma_res);
>>>> +	if (IS_ERR(pci->atu_base))
>>>> +		return PTR_ERR(pci->atu_base);
>>>> +
>>>> +	pcie->core_rst = devm_reset_control_get(dev, "core");
>>>> +	if (IS_ERR(pcie->core_rst)) {
>>>> +		dev_err(dev, "Failed to get core reset: %ld\n",
>>>> +			PTR_ERR(pcie->core_rst));
>>>> +		return PTR_ERR(pcie->core_rst);
>>>> +	}
>>>> +
>>>> +	pp->irq = platform_get_irq_byname(pdev, "intr");
>>>> +	if (!pp->irq) {
>>>> +		dev_err(dev, "Failed to get \"intr\" interrupt\n");
>>>> +		return -ENODEV;
>>>> +	}
>>>> +
>>>> +	ret = devm_request_irq(dev, pp->irq, tegra_pcie_irq_handler,
>>>> +			       IRQF_SHARED, "tegra-pcie-intr", pcie);
>>>> +	if (ret) {
>>>> +		dev_err(dev, "Failed to request IRQ %d: %d\n", pp->irq, ret);
>>>> +		return ret;
>>>> +	}
>>>> +
>>>> +	pcie->bpmp = tegra_bpmp_get(dev);
>>>> +	if (IS_ERR(pcie->bpmp))
>>>> +		return PTR_ERR(pcie->bpmp);
>>>> +
>>>> +	platform_set_drvdata(pdev, pcie);
>>>> +
>>>> +	if (pcie->mode == DW_PCIE_RC_TYPE) {
>>>> +		ret = tegra_pcie_config_rp(pcie);
>>>> +		if (ret && ret != -ENOMEDIUM)
>>>> +			goto fail;
>>>> +		else
>>>> +			return 0;
>>>
>>> So if the link is not up we still go ahead and make probe
>>> succeed. What for ?
>> We may need root port to be available to support hot-plugging of
>> endpoint devices, so, we don't fail the probe.
> 
> We need it or we don't. If you do support hotplugging of endpoint
> devices point me at the code, otherwise link up failure means
> failure to probe.
Currently hotplugging of endpoint is not supported, but it is one of the use cases that
we may add support for in future.
But, why should we fail probe if link up doesn't happen? As such, nothing went wrong in
terms of root port initialization right?
I checked other DWC based implementations and following are not failing the probe
pci-dra7xx.c, pcie-armada8k.c, pcie-artpec6.c, pcie-histb.c, pcie-kirin.c, pcie-spear13xx.c,
pci-exynos.c, pci-imx6.c, pci-keystone.c, pci-layerscape.c

Although following do fail the probe if link is not up.
pcie-qcom.c, pcie-uniphier.c, pci-meson.c

So, to me, it looks more like a choice we can make whether to fail the probe or not and in this
case we are choosing not to fail.
> 
>>>> +	}
>>>> +
>>>> +fail:
>>>> +	tegra_bpmp_put(pcie->bpmp);
>>>> +	return ret;
>>>> +}
>>>> +
>>>> +static int tegra_pcie_dw_remove(struct platform_device *pdev)
>>>> +{
>>>> +	struct tegra_pcie_dw *pcie = platform_get_drvdata(pdev);
>>>> +
>>>> +	if (pcie->mode != DW_PCIE_RC_TYPE)
>>>> +		return 0;
>>>> +
>>>> +	if (!pcie->link_state)
>>>> +		return 0;
>>>> +
>>>> +	debugfs_remove_recursive(pcie->debugfs);
>>>> +	tegra_pcie_deinit_controller(pcie);
>>>> +	pm_runtime_put_sync(pcie->dev);
>>>> +	pm_runtime_disable(pcie->dev);
>>>> +	tegra_bpmp_put(pcie->bpmp);
>>>> +
>>>> +	return 0;
>>>> +}
>>>> +
>>>> +static int tegra_pcie_dw_suspend_late(struct device *dev)
>>>> +{
>>>> +	struct tegra_pcie_dw *pcie = dev_get_drvdata(dev);
>>>> +	u32 val;
>>>> +
>>>> +	if (!pcie->link_state)
>>>> +		return 0;
>>>> +
>>>> +	/* Enable HW_HOT_RST mode */
>>>> +	val = appl_readl(pcie, APPL_CTRL);
>>>> +	val &= ~(APPL_CTRL_HW_HOT_RST_MODE_MASK <<
>>>> +		 APPL_CTRL_HW_HOT_RST_MODE_SHIFT);
>>>> +	val |= APPL_CTRL_HW_HOT_RST_EN;
>>>> +	appl_writel(pcie, val, APPL_CTRL);
>>>> +
>>>> +	return 0;
>>>> +}
>>>> +
>>>> +static int tegra_pcie_dw_suspend_noirq(struct device *dev)
>>>> +{
>>>> +	struct tegra_pcie_dw *pcie = dev_get_drvdata(dev);
>>>> +
>>>> +	if (!pcie->link_state)
>>>> +		return 0;
>>>> +
>>>> +	/* Save MSI interrupt vector */
>>>> +	pcie->msi_ctrl_int = dw_pcie_readl_dbi(&pcie->pci,
>>>> +					       PORT_LOGIC_MSI_CTRL_INT_0_EN);
>>>> +	tegra_pcie_downstream_dev_to_D0(pcie);
>>>
>>> I think this requires some comments. AFAIU this is allowed by
>>> the PCI specs (PCI Express Base 4.0 r1.0 September 29-2017,
>>> 5.2 Link State Power Management). However, I would like to
>>> understand how this plays with the D state the devices are left
>>> in upon system suspend.
>>>
>>> "As the following example illustrates, it is also possible to remove
>>> power without first placing all Functions into D3Hot".
>>>
>>> I assume that's what happens on this platform to allow L2 entry but
>>> again, this needs clarification.
>> Yes. It is true that in the case of Tegra194, it brings devices back
>> to D0 before putting link to L2 state.
> 
> Comment it, extensively, so that anyone reading the code understands
> why it is done so and what happens to devices then.
I added comment in tegra_pcie_deinit_controller() where this API is called
but I think I should move the comment inside tegra_pcie_downstream_dev_to_D0()
API and also add more info to it.
I'll take care of it in the next patch.

> 
> Lorenzo
> 


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH V13 12/12] PCI: tegra: Add Tegra194 PCIe support
  2019-07-13  7:04         ` Vidya Sagar
@ 2019-07-16 11:22           ` Lorenzo Pieralisi
  2019-07-16 19:00             ` Bjorn Helgaas
  2019-07-23 14:44             ` Vidya Sagar
  0 siblings, 2 replies; 34+ messages in thread
From: Lorenzo Pieralisi @ 2019-07-16 11:22 UTC (permalink / raw)
  To: Vidya Sagar
  Cc: bhelgaas, robh+dt, mark.rutland, thierry.reding, jonathanh,
	kishon, catalin.marinas, will.deacon, jingoohan1,
	gustavo.pimentel, digetx, mperttunen, linux-pci, devicetree,
	linux-tegra, linux-kernel, linux-arm-kernel, kthota, mmaddireddy,
	sagar.tv

On Sat, Jul 13, 2019 at 12:34:34PM +0530, Vidya Sagar wrote:

[...]

> > > > > +static int tegra_pcie_bpmp_set_ctrl_state(struct tegra_pcie_dw *pcie,
> > > > > +					  bool enable)
> > > > > +{
> > > > > +	struct mrq_uphy_response resp;
> > > > > +	struct tegra_bpmp_message msg;
> > > > > +	struct mrq_uphy_request req;
> > > > > +	int err;
> > > > > +
> > > > > +	if (pcie->cid == 5)
> > > > > +		return 0;
> > > > 
> > > > What's wrong with cid == 5 ? Explain please.
> > > Controller with ID=5 doesn't need any programming to enable it which is
> > > done here through calling firmware API.
> > > 
> > > > 
> > > > > +	memset(&req, 0, sizeof(req));
> > > > > +	memset(&resp, 0, sizeof(resp));
> > > > > +
> > > > > +	req.cmd = CMD_UPHY_PCIE_CONTROLLER_STATE;
> > > > > +	req.controller_state.pcie_controller = pcie->cid;
> > > > > +	req.controller_state.enable = enable;
> > > > > +
> > > > > +	memset(&msg, 0, sizeof(msg));
> > > > > +	msg.mrq = MRQ_UPHY;
> > > > > +	msg.tx.data = &req;
> > > > > +	msg.tx.size = sizeof(req);
> > > > > +	msg.rx.data = &resp;
> > > > > +	msg.rx.size = sizeof(resp);
> > > > > +
> > > > > +	if (irqs_disabled())
> > > > 
> > > > Can you explain to me what this check is meant to achieve please ?
> > > Firmware interface provides different APIs to be called when there are
> > > no interrupts enabled in the system (noirq context) and otherwise
> > > hence checking that situation here and calling appropriate API.
> > 
> > That's what I am questioning. Being called from {suspend/resume}_noirq()
> > callbacks (if that's the code path this check caters for) does not mean
> > irqs_disabled() == true.
> Agree.
> Actually, I got a hint of having this check from the following.
> Both tegra_bpmp_transfer_atomic() and tegra_bpmp_transfer() are indirectly
> called by APIs registered with .master_xfer() and .master_xfer_atomic() hooks of
> struct i2c_algorithm and the decision to call which one of these is made using the
> following check in i2c-core.h file.
> static inline bool i2c_in_atomic_xfer_mode(void)
> {
> 	return system_state > SYSTEM_RUNNING && irqs_disabled();
> }
> I think I should use this condition as is IIUC.
> Please let me know if there are any concerns with this.

It is not a concern, it is just that I don't understand how this code
can be called with IRQs disabled, if you can give me an execution path I
am happy to leave the check there. On top of that, when called from
suspend NOIRQ context, it is likely to use the blocking API (because
IRQs aren't disabled at CPU level) behind which there is most certainly
an IRQ required to wake the thread up and if the IRQ in question was
disabled in the suspend NOIRQ phase this code is likely to deadlock.

I want to make sure we can justify adding this check, I do not
want to add it because we think it can be needed when it may not
be needed at all (and it gets copy and pasted over and over again
in other drivers).

> > Actually, if tegra_bpmp_transfer() requires IRQs to be enabled you may
> > even end up in a situation where that blocking call does not wake up
> > because the IRQ in question was disabled in the NOIRQ suspend/resume
> > phase.
> > 
> > [...]
> > 
> > > > > +static int tegra_pcie_dw_probe(struct platform_device *pdev)
> > > > > +{
> > > > > +	const struct tegra_pcie_soc *data;
> > > > > +	struct device *dev = &pdev->dev;
> > > > > +	struct resource *atu_dma_res;
> > > > > +	struct tegra_pcie_dw *pcie;
> > > > > +	struct resource *dbi_res;
> > > > > +	struct pcie_port *pp;
> > > > > +	struct dw_pcie *pci;
> > > > > +	struct phy **phys;
> > > > > +	char *name;
> > > > > +	int ret;
> > > > > +	u32 i;
> > > > > +
> > > > > +	pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL);
> > > > > +	if (!pcie)
> > > > > +		return -ENOMEM;
> > > > > +
> > > > > +	pci = &pcie->pci;
> > > > > +	pci->dev = &pdev->dev;
> > > > > +	pci->ops = &tegra_dw_pcie_ops;
> > > > > +	pp = &pci->pp;
> > > > > +	pcie->dev = &pdev->dev;
> > > > > +
> > > > > +	data = (struct tegra_pcie_soc *)of_device_get_match_data(dev);
> > > > > +	if (!data)
> > > > > +		return -EINVAL;
> > > > > +	pcie->mode = (enum dw_pcie_device_mode)data->mode;
> > > > > +
> > > > > +	ret = tegra_pcie_dw_parse_dt(pcie);
> > > > > +	if (ret < 0) {
> > > > > +		dev_err(dev, "Failed to parse device tree: %d\n", ret);
> > > > > +		return ret;
> > > > > +	}
> > > > > +
> > > > > +	pcie->pex_ctl_supply = devm_regulator_get(dev, "vddio-pex-ctl");
> > > > > +	if (IS_ERR(pcie->pex_ctl_supply)) {
> > > > > +		dev_err(dev, "Failed to get regulator: %ld\n",
> > > > > +			PTR_ERR(pcie->pex_ctl_supply));
> > > > > +		return PTR_ERR(pcie->pex_ctl_supply);
> > > > > +	}
> > > > > +
> > > > > +	pcie->core_clk = devm_clk_get(dev, "core");
> > > > > +	if (IS_ERR(pcie->core_clk)) {
> > > > > +		dev_err(dev, "Failed to get core clock: %ld\n",
> > > > > +			PTR_ERR(pcie->core_clk));
> > > > > +		return PTR_ERR(pcie->core_clk);
> > > > > +	}
> > > > > +
> > > > > +	pcie->appl_res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
> > > > > +						      "appl");
> > > > > +	if (!pcie->appl_res) {
> > > > > +		dev_err(dev, "Failed to find \"appl\" region\n");
> > > > > +		return PTR_ERR(pcie->appl_res);
> > > > > +	}
> > > > > +	pcie->appl_base = devm_ioremap_resource(dev, pcie->appl_res);
> > > > > +	if (IS_ERR(pcie->appl_base))
> > > > > +		return PTR_ERR(pcie->appl_base);
> > > > > +
> > > > > +	pcie->core_apb_rst = devm_reset_control_get(dev, "apb");
> > > > > +	if (IS_ERR(pcie->core_apb_rst)) {
> > > > > +		dev_err(dev, "Failed to get APB reset: %ld\n",
> > > > > +			PTR_ERR(pcie->core_apb_rst));
> > > > > +		return PTR_ERR(pcie->core_apb_rst);
> > > > > +	}
> > > > > +
> > > > > +	phys = devm_kcalloc(dev, pcie->phy_count, sizeof(*phys), GFP_KERNEL);
> > > > > +	if (!phys)
> > > > > +		return PTR_ERR(phys);
> > > > > +
> > > > > +	for (i = 0; i < pcie->phy_count; i++) {
> > > > > +		name = kasprintf(GFP_KERNEL, "p2u-%u", i);
> > > > > +		if (!name) {
> > > > > +			dev_err(dev, "Failed to create P2U string\n");
> > > > > +			return -ENOMEM;
> > > > > +		}
> > > > > +		phys[i] = devm_phy_get(dev, name);
> > > > > +		kfree(name);
> > > > > +		if (IS_ERR(phys[i])) {
> > > > > +			ret = PTR_ERR(phys[i]);
> > > > > +			dev_err(dev, "Failed to get PHY: %d\n", ret);
> > > > > +			return ret;
> > > > > +		}
> > > > > +	}
> > > > > +
> > > > > +	pcie->phys = phys;
> > > > > +
> > > > > +	dbi_res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi");
> > > > > +	if (!dbi_res) {
> > > > > +		dev_err(dev, "Failed to find \"dbi\" region\n");
> > > > > +		return PTR_ERR(dbi_res);
> > > > > +	}
> > > > > +	pcie->dbi_res = dbi_res;
> > > > > +
> > > > > +	pci->dbi_base = devm_ioremap_resource(dev, dbi_res);
> > > > > +	if (IS_ERR(pci->dbi_base))
> > > > > +		return PTR_ERR(pci->dbi_base);
> > > > > +
> > > > > +	/* Tegra HW locates DBI2 at a fixed offset from DBI */
> > > > > +	pci->dbi_base2 = pci->dbi_base + 0x1000;
> > > > > +
> > > > > +	atu_dma_res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
> > > > > +						   "atu_dma");
> > > > > +	if (!atu_dma_res) {
> > > > > +		dev_err(dev, "Failed to find \"atu_dma\" region\n");
> > > > > +		return PTR_ERR(atu_dma_res);
> > > > > +	}
> > > > > +	pcie->atu_dma_res = atu_dma_res;
> > > > > +	pci->atu_base = devm_ioremap_resource(dev, atu_dma_res);
> > > > > +	if (IS_ERR(pci->atu_base))
> > > > > +		return PTR_ERR(pci->atu_base);
> > > > > +
> > > > > +	pcie->core_rst = devm_reset_control_get(dev, "core");
> > > > > +	if (IS_ERR(pcie->core_rst)) {
> > > > > +		dev_err(dev, "Failed to get core reset: %ld\n",
> > > > > +			PTR_ERR(pcie->core_rst));
> > > > > +		return PTR_ERR(pcie->core_rst);
> > > > > +	}
> > > > > +
> > > > > +	pp->irq = platform_get_irq_byname(pdev, "intr");
> > > > > +	if (!pp->irq) {
> > > > > +		dev_err(dev, "Failed to get \"intr\" interrupt\n");
> > > > > +		return -ENODEV;
> > > > > +	}
> > > > > +
> > > > > +	ret = devm_request_irq(dev, pp->irq, tegra_pcie_irq_handler,
> > > > > +			       IRQF_SHARED, "tegra-pcie-intr", pcie);
> > > > > +	if (ret) {
> > > > > +		dev_err(dev, "Failed to request IRQ %d: %d\n", pp->irq, ret);
> > > > > +		return ret;
> > > > > +	}
> > > > > +
> > > > > +	pcie->bpmp = tegra_bpmp_get(dev);
> > > > > +	if (IS_ERR(pcie->bpmp))
> > > > > +		return PTR_ERR(pcie->bpmp);
> > > > > +
> > > > > +	platform_set_drvdata(pdev, pcie);
> > > > > +
> > > > > +	if (pcie->mode == DW_PCIE_RC_TYPE) {
> > > > > +		ret = tegra_pcie_config_rp(pcie);
> > > > > +		if (ret && ret != -ENOMEDIUM)
> > > > > +			goto fail;
> > > > > +		else
> > > > > +			return 0;
> > > > 
> > > > So if the link is not up we still go ahead and make probe
> > > > succeed. What for ?
> > > We may need root port to be available to support hot-plugging of
> > > endpoint devices, so, we don't fail the probe.
> > 
> > We need it or we don't. If you do support hotplugging of endpoint
> > devices point me at the code, otherwise link up failure means
> > failure to probe.
> Currently hotplugging of endpoint is not supported, but it is one of
> the use cases that we may add support for in future. 

You should elaborate on this, I do not understand what you mean,
either the root port(s) supports hotplug or it does not.

> But, why should we fail probe if link up doesn't happen? As such,
> nothing went wrong in terms of root port initialization right?  I
> checked other DWC based implementations and following are not failing
> the probe pci-dra7xx.c, pcie-armada8k.c, pcie-artpec6.c, pcie-histb.c,
> pcie-kirin.c, pcie-spear13xx.c, pci-exynos.c, pci-imx6.c,
> pci-keystone.c, pci-layerscape.c
> 
> Although following do fail the probe if link is not up.  pcie-qcom.c,
> pcie-uniphier.c, pci-meson.c
> 
> So, to me, it looks more like a choice we can make whether to fail the
> probe or not and in this case we are choosing not to fail.

I disagree. I had an offline chat with Bjorn and whether link-up should
fail the probe or not depends on whether the root port(s) is hotplug
capable or not and this in turn relies on the root port "Slot
implemented" bit in the PCI Express capabilities register.

It is a choice but it should be based on evidence.

Lorenzo

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH V13 12/12] PCI: tegra: Add Tegra194 PCIe support
  2019-07-16 11:22           ` Lorenzo Pieralisi
@ 2019-07-16 19:00             ` Bjorn Helgaas
  2019-07-23 14:28               ` Vidya Sagar
  2019-07-23 14:44             ` Vidya Sagar
  1 sibling, 1 reply; 34+ messages in thread
From: Bjorn Helgaas @ 2019-07-16 19:00 UTC (permalink / raw)
  To: Lorenzo Pieralisi
  Cc: Vidya Sagar, robh+dt, mark.rutland, thierry.reding, jonathanh,
	kishon, catalin.marinas, will.deacon, jingoohan1,
	gustavo.pimentel, digetx, mperttunen, linux-pci, devicetree,
	linux-tegra, linux-kernel, linux-arm-kernel, kthota, mmaddireddy,
	sagar.tv

On Tue, Jul 16, 2019 at 12:22:25PM +0100, Lorenzo Pieralisi wrote:
> On Sat, Jul 13, 2019 at 12:34:34PM +0530, Vidya Sagar wrote:

> > > > > So if the link is not up we still go ahead and make probe
> > > > > succeed. What for ?
> > > > We may need root port to be available to support hot-plugging of
> > > > endpoint devices, so, we don't fail the probe.
> > > 
> > > We need it or we don't. If you do support hotplugging of endpoint
> > > devices point me at the code, otherwise link up failure means
> > > failure to probe.
> > Currently hotplugging of endpoint is not supported, but it is one of
> > the use cases that we may add support for in future. 
> 
> You should elaborate on this, I do not understand what you mean,
> either the root port(s) supports hotplug or it does not.
> 
> > But, why should we fail probe if link up doesn't happen? As such,
> > nothing went wrong in terms of root port initialization right?  I
> > checked other DWC based implementations and following are not failing
> > the probe pci-dra7xx.c, pcie-armada8k.c, pcie-artpec6.c, pcie-histb.c,
> > pcie-kirin.c, pcie-spear13xx.c, pci-exynos.c, pci-imx6.c,
> > pci-keystone.c, pci-layerscape.c
> > 
> > Although following do fail the probe if link is not up.  pcie-qcom.c,
> > pcie-uniphier.c, pci-meson.c
> > 
> > So, to me, it looks more like a choice we can make whether to fail the
> > probe or not and in this case we are choosing not to fail.
> 
> I disagree. I had an offline chat with Bjorn and whether link-up should
> fail the probe or not depends on whether the root port(s) is hotplug
> capable or not and this in turn relies on the root port "Slot
> implemented" bit in the PCI Express capabilities register.

There might be a little more we can talk about in this regard.  I did
bring up the "Slot implemented" bit, but after thinking about it more,
I don't really think the host bridge driver should be looking at that.
That's a PCIe concept, and it's really *downstream* from the host
bridge itself.  The host bridge is logically a device on the CPU bus,
not the PCI bus.

I'm starting to think that the host bridge driver probe should be
disconnected from question of whether the root port links are up.

Logically, the host bridge driver connects the CPU bus to a PCI root
bus, so it converts CPU-side accesses to PCI config, memory, or I/O
port transactions.  Given that, the PCI core can enumerate devices on
the root bus and downstream buses.

Devices on the root bus typically include Root Ports, but might also
include endpoints, Root Complex Integrated Endpoints, Root Complex
Event Collectors, etc.  I think in principle, we would want the host
bridge probe to succeed so we can use these devices even if none of
the Root Ports have a link.

If a Root Port is present, I think users will expect to see it in the
"lspci" output, even if its downstream link is not up.  That will
enable things like manually poking the Root Port via "setpci" for
debug.  And if it has a connector, the generic pciehp should be able
to handle hot-add events without any special help from the host bridge
driver.

On ACPI systems there is no concept of the host bridge driver probe
failing because of lack of link on a Root Port.  If a Root Port
doesn't have an operational link, we still keep the pci_root.c driver,
and we'll enumerate the Root Port itself.  So I tend to think DT
systems should behave the same way, i.e., the driver probe should
succeed unless it fails to allocate resources or something similar.  I
think this is analogous to a NIC or USB adapter driver, where the
probe succeeds even if there's no network cable or USB device
attached.

Bjorn

^ permalink raw reply	[flat|nested] 34+ messages in thread

* RE: [PATCH V13 12/12] PCI: tegra: Add Tegra194 PCIe support
  2019-07-16 19:00             ` Bjorn Helgaas
@ 2019-07-23 14:28               ` Vidya Sagar
  0 siblings, 0 replies; 34+ messages in thread
From: Vidya Sagar @ 2019-07-23 14:28 UTC (permalink / raw)
  To: Bjorn Helgaas, Lorenzo Pieralisi
  Cc: robh+dt, mark.rutland, thierry.reding, Jonathan Hunter, kishon,
	catalin.marinas, will.deacon, jingoohan1, gustavo.pimentel,
	digetx, Mikko Perttunen, linux-pci, devicetree, linux-tegra,
	linux-kernel, linux-arm-kernel, Krishna Thota,
	Manikanta Maddireddy, sagar.tv



> -----Original Message-----
> From: devicetree-owner@vger.kernel.org <devicetree-owner@vger.kernel.org>
> On Behalf Of Bjorn Helgaas
> Sent: Wednesday, July 17, 2019 12:30 AM
> To: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
> Cc: Vidya Sagar <vidyas@nvidia.com>; robh+dt@kernel.org;
> mark.rutland@arm.com; thierry.reding@gmail.com; Jonathan Hunter
> <jonathanh@nvidia.com>; kishon@ti.com; catalin.marinas@arm.com;
> will.deacon@arm.com; jingoohan1@gmail.com;
> gustavo.pimentel@synopsys.com; digetx@gmail.com; Mikko Perttunen
> <mperttunen@nvidia.com>; linux-pci@vger.kernel.org;
> devicetree@vger.kernel.org; linux-tegra@vger.kernel.org; linux-
> kernel@vger.kernel.org; linux-arm-kernel@lists.infradead.org; Krishna Thota
> <kthota@nvidia.com>; Manikanta Maddireddy <mmaddireddy@nvidia.com>;
> sagar.tv@gmail.com
> Subject: Re: [PATCH V13 12/12] PCI: tegra: Add Tegra194 PCIe support
> 
> On Tue, Jul 16, 2019 at 12:22:25PM +0100, Lorenzo Pieralisi wrote:
> > On Sat, Jul 13, 2019 at 12:34:34PM +0530, Vidya Sagar wrote:
> 
> > > > > > So if the link is not up we still go ahead and make probe
> > > > > > succeed. What for ?
> > > > > We may need root port to be available to support hot-plugging of
> > > > > endpoint devices, so, we don't fail the probe.
> > > >
> > > > We need it or we don't. If you do support hotplugging of endpoint
> > > > devices point me at the code, otherwise link up failure means
> > > > failure to probe.
> > > Currently hotplugging of endpoint is not supported, but it is one of
> > > the use cases that we may add support for in future.
> >
> > You should elaborate on this, I do not understand what you mean,
> > either the root port(s) supports hotplug or it does not.
> >
> > > But, why should we fail probe if link up doesn't happen? As such,
> > > nothing went wrong in terms of root port initialization right?  I
> > > checked other DWC based implementations and following are not
> > > failing the probe pci-dra7xx.c, pcie-armada8k.c, pcie-artpec6.c,
> > > pcie-histb.c, pcie-kirin.c, pcie-spear13xx.c, pci-exynos.c,
> > > pci-imx6.c, pci-keystone.c, pci-layerscape.c
> > >
> > > Although following do fail the probe if link is not up.
> > > pcie-qcom.c, pcie-uniphier.c, pci-meson.c
> > >
> > > So, to me, it looks more like a choice we can make whether to fail
> > > the probe or not and in this case we are choosing not to fail.
> >
> > I disagree. I had an offline chat with Bjorn and whether link-up
> > should fail the probe or not depends on whether the root port(s) is
> > hotplug capable or not and this in turn relies on the root port "Slot
> > implemented" bit in the PCI Express capabilities register.
> 
> There might be a little more we can talk about in this regard.  I did bring up the
> "Slot implemented" bit, but after thinking about it more, I don't really think the
> host bridge driver should be looking at that.
> That's a PCIe concept, and it's really *downstream* from the host bridge itself.
> The host bridge is logically a device on the CPU bus, not the PCI bus.
> 
> I'm starting to think that the host bridge driver probe should be disconnected
> from question of whether the root port links are up.
> 
> Logically, the host bridge driver connects the CPU bus to a PCI root bus, so it
> converts CPU-side accesses to PCI config, memory, or I/O port transactions.
> Given that, the PCI core can enumerate devices on the root bus and downstream
> buses.
> 
> Devices on the root bus typically include Root Ports, but might also include
> endpoints, Root Complex Integrated Endpoints, Root Complex Event Collectors,
> etc.  I think in principle, we would want the host bridge probe to succeed so we
> can use these devices even if none of the Root Ports have a link.
> 
> If a Root Port is present, I think users will expect to see it in the "lspci" output,
> even if its downstream link is not up.  That will enable things like manually
> poking the Root Port via "setpci" for debug.  And if it has a connector, the
> generic pciehp should be able to handle hot-add events without any special help
> from the host bridge driver.
> 
> On ACPI systems there is no concept of the host bridge driver probe failing
> because of lack of link on a Root Port.  If a Root Port doesn't have an
> operational link, we still keep the pci_root.c driver, and we'll enumerate the
> Root Port itself.  So I tend to think DT systems should behave the same way, i.e.,
> the driver probe should succeed unless it fails to allocate resources or something
> similar.  I think this is analogous to a NIC or USB adapter driver, where the probe
> succeeds even if there's no network cable or USB device attached.
> 
> Bjorn
Thanks Bjorn for your valuable inputs. I hope we are good here to not power down host
even if there are no endpoints detected.

- Vidya Sagar

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH V13 12/12] PCI: tegra: Add Tegra194 PCIe support
  2019-07-16 11:22           ` Lorenzo Pieralisi
  2019-07-16 19:00             ` Bjorn Helgaas
@ 2019-07-23 14:44             ` Vidya Sagar
  2019-07-30 15:49               ` Lorenzo Pieralisi
  1 sibling, 1 reply; 34+ messages in thread
From: Vidya Sagar @ 2019-07-23 14:44 UTC (permalink / raw)
  To: Lorenzo Pieralisi
  Cc: bhelgaas, robh+dt, mark.rutland, thierry.reding, jonathanh,
	kishon, catalin.marinas, will.deacon, jingoohan1,
	gustavo.pimentel, digetx, mperttunen, linux-pci, devicetree,
	linux-tegra, linux-kernel, linux-arm-kernel, kthota, mmaddireddy,
	sagar.tv

On 7/16/2019 4:52 PM, Lorenzo Pieralisi wrote:
> On Sat, Jul 13, 2019 at 12:34:34PM +0530, Vidya Sagar wrote:
> 
> [...]
> 
>>>>>> +static int tegra_pcie_bpmp_set_ctrl_state(struct tegra_pcie_dw *pcie,
>>>>>> +					  bool enable)
>>>>>> +{
>>>>>> +	struct mrq_uphy_response resp;
>>>>>> +	struct tegra_bpmp_message msg;
>>>>>> +	struct mrq_uphy_request req;
>>>>>> +	int err;
>>>>>> +
>>>>>> +	if (pcie->cid == 5)
>>>>>> +		return 0;
>>>>>
>>>>> What's wrong with cid == 5 ? Explain please.
>>>> Controller with ID=5 doesn't need any programming to enable it which is
>>>> done here through calling firmware API.
>>>>
>>>>>
>>>>>> +	memset(&req, 0, sizeof(req));
>>>>>> +	memset(&resp, 0, sizeof(resp));
>>>>>> +
>>>>>> +	req.cmd = CMD_UPHY_PCIE_CONTROLLER_STATE;
>>>>>> +	req.controller_state.pcie_controller = pcie->cid;
>>>>>> +	req.controller_state.enable = enable;
>>>>>> +
>>>>>> +	memset(&msg, 0, sizeof(msg));
>>>>>> +	msg.mrq = MRQ_UPHY;
>>>>>> +	msg.tx.data = &req;
>>>>>> +	msg.tx.size = sizeof(req);
>>>>>> +	msg.rx.data = &resp;
>>>>>> +	msg.rx.size = sizeof(resp);
>>>>>> +
>>>>>> +	if (irqs_disabled())
>>>>>
>>>>> Can you explain to me what this check is meant to achieve please ?
>>>> Firmware interface provides different APIs to be called when there are
>>>> no interrupts enabled in the system (noirq context) and otherwise
>>>> hence checking that situation here and calling appropriate API.
>>>
>>> That's what I am questioning. Being called from {suspend/resume}_noirq()
>>> callbacks (if that's the code path this check caters for) does not mean
>>> irqs_disabled() == true.
>> Agree.
>> Actually, I got a hint of having this check from the following.
>> Both tegra_bpmp_transfer_atomic() and tegra_bpmp_transfer() are indirectly
>> called by APIs registered with .master_xfer() and .master_xfer_atomic() hooks of
>> struct i2c_algorithm and the decision to call which one of these is made using the
>> following check in i2c-core.h file.
>> static inline bool i2c_in_atomic_xfer_mode(void)
>> {
>> 	return system_state > SYSTEM_RUNNING && irqs_disabled();
>> }
>> I think I should use this condition as is IIUC.
>> Please let me know if there are any concerns with this.
> 
> It is not a concern, it is just that I don't understand how this code
> can be called with IRQs disabled, if you can give me an execution path I
> am happy to leave the check there. On top of that, when called from
> suspend NOIRQ context, it is likely to use the blocking API (because
> IRQs aren't disabled at CPU level) behind which there is most certainly
> an IRQ required to wake the thread up and if the IRQ in question was
> disabled in the suspend NOIRQ phase this code is likely to deadlock.
> 
> I want to make sure we can justify adding this check, I do not
> want to add it because we think it can be needed when it may not
> be needed at all (and it gets copy and pasted over and over again
> in other drivers).
I had a discussion internally about this and the prescribed usage of these APIs
seem to be that
use tegra_bpmp_transfer() in .probe() and other paths where interrupts are
enabled as this API needs interrupts to be enabled for its working.
Use tegra_bpmp_transfer_atomic() surrounded by local_irq_save()/local_irq_restore()
in other paths where interrupt servicing is disabled.
I'll go ahead and make next patch series with this if this looks fine to you.

> 
>>> Actually, if tegra_bpmp_transfer() requires IRQs to be enabled you may
>>> even end up in a situation where that blocking call does not wake up
>>> because the IRQ in question was disabled in the NOIRQ suspend/resume
>>> phase.
>>>
>>> [...]
>>>
>>>>>> +static int tegra_pcie_dw_probe(struct platform_device *pdev)
>>>>>> +{
>>>>>> +	const struct tegra_pcie_soc *data;
>>>>>> +	struct device *dev = &pdev->dev;
>>>>>> +	struct resource *atu_dma_res;
>>>>>> +	struct tegra_pcie_dw *pcie;
>>>>>> +	struct resource *dbi_res;
>>>>>> +	struct pcie_port *pp;
>>>>>> +	struct dw_pcie *pci;
>>>>>> +	struct phy **phys;
>>>>>> +	char *name;
>>>>>> +	int ret;
>>>>>> +	u32 i;
>>>>>> +
>>>>>> +	pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL);
>>>>>> +	if (!pcie)
>>>>>> +		return -ENOMEM;
>>>>>> +
>>>>>> +	pci = &pcie->pci;
>>>>>> +	pci->dev = &pdev->dev;
>>>>>> +	pci->ops = &tegra_dw_pcie_ops;
>>>>>> +	pp = &pci->pp;
>>>>>> +	pcie->dev = &pdev->dev;
>>>>>> +
>>>>>> +	data = (struct tegra_pcie_soc *)of_device_get_match_data(dev);
>>>>>> +	if (!data)
>>>>>> +		return -EINVAL;
>>>>>> +	pcie->mode = (enum dw_pcie_device_mode)data->mode;
>>>>>> +
>>>>>> +	ret = tegra_pcie_dw_parse_dt(pcie);
>>>>>> +	if (ret < 0) {
>>>>>> +		dev_err(dev, "Failed to parse device tree: %d\n", ret);
>>>>>> +		return ret;
>>>>>> +	}
>>>>>> +
>>>>>> +	pcie->pex_ctl_supply = devm_regulator_get(dev, "vddio-pex-ctl");
>>>>>> +	if (IS_ERR(pcie->pex_ctl_supply)) {
>>>>>> +		dev_err(dev, "Failed to get regulator: %ld\n",
>>>>>> +			PTR_ERR(pcie->pex_ctl_supply));
>>>>>> +		return PTR_ERR(pcie->pex_ctl_supply);
>>>>>> +	}
>>>>>> +
>>>>>> +	pcie->core_clk = devm_clk_get(dev, "core");
>>>>>> +	if (IS_ERR(pcie->core_clk)) {
>>>>>> +		dev_err(dev, "Failed to get core clock: %ld\n",
>>>>>> +			PTR_ERR(pcie->core_clk));
>>>>>> +		return PTR_ERR(pcie->core_clk);
>>>>>> +	}
>>>>>> +
>>>>>> +	pcie->appl_res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
>>>>>> +						      "appl");
>>>>>> +	if (!pcie->appl_res) {
>>>>>> +		dev_err(dev, "Failed to find \"appl\" region\n");
>>>>>> +		return PTR_ERR(pcie->appl_res);
>>>>>> +	}
>>>>>> +	pcie->appl_base = devm_ioremap_resource(dev, pcie->appl_res);
>>>>>> +	if (IS_ERR(pcie->appl_base))
>>>>>> +		return PTR_ERR(pcie->appl_base);
>>>>>> +
>>>>>> +	pcie->core_apb_rst = devm_reset_control_get(dev, "apb");
>>>>>> +	if (IS_ERR(pcie->core_apb_rst)) {
>>>>>> +		dev_err(dev, "Failed to get APB reset: %ld\n",
>>>>>> +			PTR_ERR(pcie->core_apb_rst));
>>>>>> +		return PTR_ERR(pcie->core_apb_rst);
>>>>>> +	}
>>>>>> +
>>>>>> +	phys = devm_kcalloc(dev, pcie->phy_count, sizeof(*phys), GFP_KERNEL);
>>>>>> +	if (!phys)
>>>>>> +		return PTR_ERR(phys);
>>>>>> +
>>>>>> +	for (i = 0; i < pcie->phy_count; i++) {
>>>>>> +		name = kasprintf(GFP_KERNEL, "p2u-%u", i);
>>>>>> +		if (!name) {
>>>>>> +			dev_err(dev, "Failed to create P2U string\n");
>>>>>> +			return -ENOMEM;
>>>>>> +		}
>>>>>> +		phys[i] = devm_phy_get(dev, name);
>>>>>> +		kfree(name);
>>>>>> +		if (IS_ERR(phys[i])) {
>>>>>> +			ret = PTR_ERR(phys[i]);
>>>>>> +			dev_err(dev, "Failed to get PHY: %d\n", ret);
>>>>>> +			return ret;
>>>>>> +		}
>>>>>> +	}
>>>>>> +
>>>>>> +	pcie->phys = phys;
>>>>>> +
>>>>>> +	dbi_res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi");
>>>>>> +	if (!dbi_res) {
>>>>>> +		dev_err(dev, "Failed to find \"dbi\" region\n");
>>>>>> +		return PTR_ERR(dbi_res);
>>>>>> +	}
>>>>>> +	pcie->dbi_res = dbi_res;
>>>>>> +
>>>>>> +	pci->dbi_base = devm_ioremap_resource(dev, dbi_res);
>>>>>> +	if (IS_ERR(pci->dbi_base))
>>>>>> +		return PTR_ERR(pci->dbi_base);
>>>>>> +
>>>>>> +	/* Tegra HW locates DBI2 at a fixed offset from DBI */
>>>>>> +	pci->dbi_base2 = pci->dbi_base + 0x1000;
>>>>>> +
>>>>>> +	atu_dma_res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
>>>>>> +						   "atu_dma");
>>>>>> +	if (!atu_dma_res) {
>>>>>> +		dev_err(dev, "Failed to find \"atu_dma\" region\n");
>>>>>> +		return PTR_ERR(atu_dma_res);
>>>>>> +	}
>>>>>> +	pcie->atu_dma_res = atu_dma_res;
>>>>>> +	pci->atu_base = devm_ioremap_resource(dev, atu_dma_res);
>>>>>> +	if (IS_ERR(pci->atu_base))
>>>>>> +		return PTR_ERR(pci->atu_base);
>>>>>> +
>>>>>> +	pcie->core_rst = devm_reset_control_get(dev, "core");
>>>>>> +	if (IS_ERR(pcie->core_rst)) {
>>>>>> +		dev_err(dev, "Failed to get core reset: %ld\n",
>>>>>> +			PTR_ERR(pcie->core_rst));
>>>>>> +		return PTR_ERR(pcie->core_rst);
>>>>>> +	}
>>>>>> +
>>>>>> +	pp->irq = platform_get_irq_byname(pdev, "intr");
>>>>>> +	if (!pp->irq) {
>>>>>> +		dev_err(dev, "Failed to get \"intr\" interrupt\n");
>>>>>> +		return -ENODEV;
>>>>>> +	}
>>>>>> +
>>>>>> +	ret = devm_request_irq(dev, pp->irq, tegra_pcie_irq_handler,
>>>>>> +			       IRQF_SHARED, "tegra-pcie-intr", pcie);
>>>>>> +	if (ret) {
>>>>>> +		dev_err(dev, "Failed to request IRQ %d: %d\n", pp->irq, ret);
>>>>>> +		return ret;
>>>>>> +	}
>>>>>> +
>>>>>> +	pcie->bpmp = tegra_bpmp_get(dev);
>>>>>> +	if (IS_ERR(pcie->bpmp))
>>>>>> +		return PTR_ERR(pcie->bpmp);
>>>>>> +
>>>>>> +	platform_set_drvdata(pdev, pcie);
>>>>>> +
>>>>>> +	if (pcie->mode == DW_PCIE_RC_TYPE) {
>>>>>> +		ret = tegra_pcie_config_rp(pcie);
>>>>>> +		if (ret && ret != -ENOMEDIUM)
>>>>>> +			goto fail;
>>>>>> +		else
>>>>>> +			return 0;
>>>>>
>>>>> So if the link is not up we still go ahead and make probe
>>>>> succeed. What for ?
>>>> We may need root port to be available to support hot-plugging of
>>>> endpoint devices, so, we don't fail the probe.
>>>
>>> We need it or we don't. If you do support hotplugging of endpoint
>>> devices point me at the code, otherwise link up failure means
>>> failure to probe.
>> Currently hotplugging of endpoint is not supported, but it is one of
>> the use cases that we may add support for in future.
> 
> You should elaborate on this, I do not understand what you mean,
> either the root port(s) supports hotplug or it does not.
> 
>> But, why should we fail probe if link up doesn't happen? As such,
>> nothing went wrong in terms of root port initialization right?  I
>> checked other DWC based implementations and following are not failing
>> the probe pci-dra7xx.c, pcie-armada8k.c, pcie-artpec6.c, pcie-histb.c,
>> pcie-kirin.c, pcie-spear13xx.c, pci-exynos.c, pci-imx6.c,
>> pci-keystone.c, pci-layerscape.c
>>
>> Although following do fail the probe if link is not up.  pcie-qcom.c,
>> pcie-uniphier.c, pci-meson.c
>>
>> So, to me, it looks more like a choice we can make whether to fail the
>> probe or not and in this case we are choosing not to fail.
> 
> I disagree. I had an offline chat with Bjorn and whether link-up should
> fail the probe or not depends on whether the root port(s) is hotplug
> capable or not and this in turn relies on the root port "Slot
> implemented" bit in the PCI Express capabilities register.
> 
> It is a choice but it should be based on evidence.
> 
> Lorenzo
With Bjorn's latest comment on top of this, I think we are good not to fail
the probe here.

- Vidya Sagar
> 


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH V13 12/12] PCI: tegra: Add Tegra194 PCIe support
  2019-07-23 14:44             ` Vidya Sagar
@ 2019-07-30 15:49               ` Lorenzo Pieralisi
  2019-08-02 12:06                 ` Vidya Sagar
  0 siblings, 1 reply; 34+ messages in thread
From: Lorenzo Pieralisi @ 2019-07-30 15:49 UTC (permalink / raw)
  To: Vidya Sagar
  Cc: bhelgaas, robh+dt, mark.rutland, thierry.reding, jonathanh,
	kishon, catalin.marinas, will.deacon, jingoohan1,
	gustavo.pimentel, digetx, mperttunen, linux-pci, devicetree,
	linux-tegra, linux-kernel, linux-arm-kernel, kthota, mmaddireddy,
	sagar.tv

On Tue, Jul 23, 2019 at 08:14:08PM +0530, Vidya Sagar wrote:
> On 7/16/2019 4:52 PM, Lorenzo Pieralisi wrote:
> > On Sat, Jul 13, 2019 at 12:34:34PM +0530, Vidya Sagar wrote:
> > 
> > [...]
> > 
> > > > > > > +static int tegra_pcie_bpmp_set_ctrl_state(struct tegra_pcie_dw *pcie,
> > > > > > > +					  bool enable)
> > > > > > > +{
> > > > > > > +	struct mrq_uphy_response resp;
> > > > > > > +	struct tegra_bpmp_message msg;
> > > > > > > +	struct mrq_uphy_request req;
> > > > > > > +	int err;
> > > > > > > +
> > > > > > > +	if (pcie->cid == 5)
> > > > > > > +		return 0;
> > > > > > 
> > > > > > What's wrong with cid == 5 ? Explain please.
> > > > > Controller with ID=5 doesn't need any programming to enable it which is
> > > > > done here through calling firmware API.
> > > > > 
> > > > > > 
> > > > > > > +	memset(&req, 0, sizeof(req));
> > > > > > > +	memset(&resp, 0, sizeof(resp));
> > > > > > > +
> > > > > > > +	req.cmd = CMD_UPHY_PCIE_CONTROLLER_STATE;
> > > > > > > +	req.controller_state.pcie_controller = pcie->cid;
> > > > > > > +	req.controller_state.enable = enable;
> > > > > > > +
> > > > > > > +	memset(&msg, 0, sizeof(msg));
> > > > > > > +	msg.mrq = MRQ_UPHY;
> > > > > > > +	msg.tx.data = &req;
> > > > > > > +	msg.tx.size = sizeof(req);
> > > > > > > +	msg.rx.data = &resp;
> > > > > > > +	msg.rx.size = sizeof(resp);
> > > > > > > +
> > > > > > > +	if (irqs_disabled())
> > > > > > 
> > > > > > Can you explain to me what this check is meant to achieve please ?
> > > > > Firmware interface provides different APIs to be called when there are
> > > > > no interrupts enabled in the system (noirq context) and otherwise
> > > > > hence checking that situation here and calling appropriate API.
> > > > 
> > > > That's what I am questioning. Being called from {suspend/resume}_noirq()
> > > > callbacks (if that's the code path this check caters for) does not mean
> > > > irqs_disabled() == true.
> > > Agree.
> > > Actually, I got a hint of having this check from the following.
> > > Both tegra_bpmp_transfer_atomic() and tegra_bpmp_transfer() are indirectly
> > > called by APIs registered with .master_xfer() and .master_xfer_atomic() hooks of
> > > struct i2c_algorithm and the decision to call which one of these is made using the
> > > following check in i2c-core.h file.
> > > static inline bool i2c_in_atomic_xfer_mode(void)
> > > {
> > > 	return system_state > SYSTEM_RUNNING && irqs_disabled();
> > > }
> > > I think I should use this condition as is IIUC.
> > > Please let me know if there are any concerns with this.
> > 
> > It is not a concern, it is just that I don't understand how this code
> > can be called with IRQs disabled, if you can give me an execution path I
> > am happy to leave the check there. On top of that, when called from
> > suspend NOIRQ context, it is likely to use the blocking API (because
> > IRQs aren't disabled at CPU level) behind which there is most certainly
> > an IRQ required to wake the thread up and if the IRQ in question was
> > disabled in the suspend NOIRQ phase this code is likely to deadlock.
> > 
> > I want to make sure we can justify adding this check, I do not
> > want to add it because we think it can be needed when it may not
> > be needed at all (and it gets copy and pasted over and over again
> > in other drivers).
> I had a discussion internally about this and the prescribed usage of these APIs
> seem to be that
> use tegra_bpmp_transfer() in .probe() and other paths where interrupts are
> enabled as this API needs interrupts to be enabled for its working.
> Use tegra_bpmp_transfer_atomic() surrounded by local_irq_save()/local_irq_restore()
> in other paths where interrupt servicing is disabled.

Why tegra_bpmp_transfer_atomic() needs IRQs to be disabled ? And why
is it needed in this piece of code where IRQs are _never_ disabled
at CPU level ?

IRQs are enabled when you call a suspend_noirq() callback, so the
blocking API can be used as long as the IRQ descriptor backing
the IRQ that will wake-up the blocked call is marked as
IRQF_NO_SUSPEND.

The problem is not IRQs enabled/disabled at CPU level, the problem is
the IRQ descriptor of the IRQ required to handle the blocking BPMP call,
mark it as IRQF_NO_SUSPEND and remove the tegra_bpmp_transfer_atomic()
call from this code (or please give me a concrete example pinpointing
why it is needed).

Thanks,
Lorenzo

> I'll go ahead and make next patch series with this if this looks fine to you.
> 
> > 
> > > > Actually, if tegra_bpmp_transfer() requires IRQs to be enabled you may
> > > > even end up in a situation where that blocking call does not wake up
> > > > because the IRQ in question was disabled in the NOIRQ suspend/resume
> > > > phase.
> > > > 
> > > > [...]
> > > > 
> > > > > > > +static int tegra_pcie_dw_probe(struct platform_device *pdev)
> > > > > > > +{
> > > > > > > +	const struct tegra_pcie_soc *data;
> > > > > > > +	struct device *dev = &pdev->dev;
> > > > > > > +	struct resource *atu_dma_res;
> > > > > > > +	struct tegra_pcie_dw *pcie;
> > > > > > > +	struct resource *dbi_res;
> > > > > > > +	struct pcie_port *pp;
> > > > > > > +	struct dw_pcie *pci;
> > > > > > > +	struct phy **phys;
> > > > > > > +	char *name;
> > > > > > > +	int ret;
> > > > > > > +	u32 i;
> > > > > > > +
> > > > > > > +	pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL);
> > > > > > > +	if (!pcie)
> > > > > > > +		return -ENOMEM;
> > > > > > > +
> > > > > > > +	pci = &pcie->pci;
> > > > > > > +	pci->dev = &pdev->dev;
> > > > > > > +	pci->ops = &tegra_dw_pcie_ops;
> > > > > > > +	pp = &pci->pp;
> > > > > > > +	pcie->dev = &pdev->dev;
> > > > > > > +
> > > > > > > +	data = (struct tegra_pcie_soc *)of_device_get_match_data(dev);
> > > > > > > +	if (!data)
> > > > > > > +		return -EINVAL;
> > > > > > > +	pcie->mode = (enum dw_pcie_device_mode)data->mode;
> > > > > > > +
> > > > > > > +	ret = tegra_pcie_dw_parse_dt(pcie);
> > > > > > > +	if (ret < 0) {
> > > > > > > +		dev_err(dev, "Failed to parse device tree: %d\n", ret);
> > > > > > > +		return ret;
> > > > > > > +	}
> > > > > > > +
> > > > > > > +	pcie->pex_ctl_supply = devm_regulator_get(dev, "vddio-pex-ctl");
> > > > > > > +	if (IS_ERR(pcie->pex_ctl_supply)) {
> > > > > > > +		dev_err(dev, "Failed to get regulator: %ld\n",
> > > > > > > +			PTR_ERR(pcie->pex_ctl_supply));
> > > > > > > +		return PTR_ERR(pcie->pex_ctl_supply);
> > > > > > > +	}
> > > > > > > +
> > > > > > > +	pcie->core_clk = devm_clk_get(dev, "core");
> > > > > > > +	if (IS_ERR(pcie->core_clk)) {
> > > > > > > +		dev_err(dev, "Failed to get core clock: %ld\n",
> > > > > > > +			PTR_ERR(pcie->core_clk));
> > > > > > > +		return PTR_ERR(pcie->core_clk);
> > > > > > > +	}
> > > > > > > +
> > > > > > > +	pcie->appl_res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
> > > > > > > +						      "appl");
> > > > > > > +	if (!pcie->appl_res) {
> > > > > > > +		dev_err(dev, "Failed to find \"appl\" region\n");
> > > > > > > +		return PTR_ERR(pcie->appl_res);
> > > > > > > +	}
> > > > > > > +	pcie->appl_base = devm_ioremap_resource(dev, pcie->appl_res);
> > > > > > > +	if (IS_ERR(pcie->appl_base))
> > > > > > > +		return PTR_ERR(pcie->appl_base);
> > > > > > > +
> > > > > > > +	pcie->core_apb_rst = devm_reset_control_get(dev, "apb");
> > > > > > > +	if (IS_ERR(pcie->core_apb_rst)) {
> > > > > > > +		dev_err(dev, "Failed to get APB reset: %ld\n",
> > > > > > > +			PTR_ERR(pcie->core_apb_rst));
> > > > > > > +		return PTR_ERR(pcie->core_apb_rst);
> > > > > > > +	}
> > > > > > > +
> > > > > > > +	phys = devm_kcalloc(dev, pcie->phy_count, sizeof(*phys), GFP_KERNEL);
> > > > > > > +	if (!phys)
> > > > > > > +		return PTR_ERR(phys);
> > > > > > > +
> > > > > > > +	for (i = 0; i < pcie->phy_count; i++) {
> > > > > > > +		name = kasprintf(GFP_KERNEL, "p2u-%u", i);
> > > > > > > +		if (!name) {
> > > > > > > +			dev_err(dev, "Failed to create P2U string\n");
> > > > > > > +			return -ENOMEM;
> > > > > > > +		}
> > > > > > > +		phys[i] = devm_phy_get(dev, name);
> > > > > > > +		kfree(name);
> > > > > > > +		if (IS_ERR(phys[i])) {
> > > > > > > +			ret = PTR_ERR(phys[i]);
> > > > > > > +			dev_err(dev, "Failed to get PHY: %d\n", ret);
> > > > > > > +			return ret;
> > > > > > > +		}
> > > > > > > +	}
> > > > > > > +
> > > > > > > +	pcie->phys = phys;
> > > > > > > +
> > > > > > > +	dbi_res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi");
> > > > > > > +	if (!dbi_res) {
> > > > > > > +		dev_err(dev, "Failed to find \"dbi\" region\n");
> > > > > > > +		return PTR_ERR(dbi_res);
> > > > > > > +	}
> > > > > > > +	pcie->dbi_res = dbi_res;
> > > > > > > +
> > > > > > > +	pci->dbi_base = devm_ioremap_resource(dev, dbi_res);
> > > > > > > +	if (IS_ERR(pci->dbi_base))
> > > > > > > +		return PTR_ERR(pci->dbi_base);
> > > > > > > +
> > > > > > > +	/* Tegra HW locates DBI2 at a fixed offset from DBI */
> > > > > > > +	pci->dbi_base2 = pci->dbi_base + 0x1000;
> > > > > > > +
> > > > > > > +	atu_dma_res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
> > > > > > > +						   "atu_dma");
> > > > > > > +	if (!atu_dma_res) {
> > > > > > > +		dev_err(dev, "Failed to find \"atu_dma\" region\n");
> > > > > > > +		return PTR_ERR(atu_dma_res);
> > > > > > > +	}
> > > > > > > +	pcie->atu_dma_res = atu_dma_res;
> > > > > > > +	pci->atu_base = devm_ioremap_resource(dev, atu_dma_res);
> > > > > > > +	if (IS_ERR(pci->atu_base))
> > > > > > > +		return PTR_ERR(pci->atu_base);
> > > > > > > +
> > > > > > > +	pcie->core_rst = devm_reset_control_get(dev, "core");
> > > > > > > +	if (IS_ERR(pcie->core_rst)) {
> > > > > > > +		dev_err(dev, "Failed to get core reset: %ld\n",
> > > > > > > +			PTR_ERR(pcie->core_rst));
> > > > > > > +		return PTR_ERR(pcie->core_rst);
> > > > > > > +	}
> > > > > > > +
> > > > > > > +	pp->irq = platform_get_irq_byname(pdev, "intr");
> > > > > > > +	if (!pp->irq) {
> > > > > > > +		dev_err(dev, "Failed to get \"intr\" interrupt\n");
> > > > > > > +		return -ENODEV;
> > > > > > > +	}
> > > > > > > +
> > > > > > > +	ret = devm_request_irq(dev, pp->irq, tegra_pcie_irq_handler,
> > > > > > > +			       IRQF_SHARED, "tegra-pcie-intr", pcie);
> > > > > > > +	if (ret) {
> > > > > > > +		dev_err(dev, "Failed to request IRQ %d: %d\n", pp->irq, ret);
> > > > > > > +		return ret;
> > > > > > > +	}
> > > > > > > +
> > > > > > > +	pcie->bpmp = tegra_bpmp_get(dev);
> > > > > > > +	if (IS_ERR(pcie->bpmp))
> > > > > > > +		return PTR_ERR(pcie->bpmp);
> > > > > > > +
> > > > > > > +	platform_set_drvdata(pdev, pcie);
> > > > > > > +
> > > > > > > +	if (pcie->mode == DW_PCIE_RC_TYPE) {
> > > > > > > +		ret = tegra_pcie_config_rp(pcie);
> > > > > > > +		if (ret && ret != -ENOMEDIUM)
> > > > > > > +			goto fail;
> > > > > > > +		else
> > > > > > > +			return 0;
> > > > > > 
> > > > > > So if the link is not up we still go ahead and make probe
> > > > > > succeed. What for ?
> > > > > We may need root port to be available to support hot-plugging of
> > > > > endpoint devices, so, we don't fail the probe.
> > > > 
> > > > We need it or we don't. If you do support hotplugging of endpoint
> > > > devices point me at the code, otherwise link up failure means
> > > > failure to probe.
> > > Currently hotplugging of endpoint is not supported, but it is one of
> > > the use cases that we may add support for in future.
> > 
> > You should elaborate on this, I do not understand what you mean,
> > either the root port(s) supports hotplug or it does not.
> > 
> > > But, why should we fail probe if link up doesn't happen? As such,
> > > nothing went wrong in terms of root port initialization right?  I
> > > checked other DWC based implementations and following are not failing
> > > the probe pci-dra7xx.c, pcie-armada8k.c, pcie-artpec6.c, pcie-histb.c,
> > > pcie-kirin.c, pcie-spear13xx.c, pci-exynos.c, pci-imx6.c,
> > > pci-keystone.c, pci-layerscape.c
> > > 
> > > Although following do fail the probe if link is not up.  pcie-qcom.c,
> > > pcie-uniphier.c, pci-meson.c
> > > 
> > > So, to me, it looks more like a choice we can make whether to fail the
> > > probe or not and in this case we are choosing not to fail.
> > 
> > I disagree. I had an offline chat with Bjorn and whether link-up should
> > fail the probe or not depends on whether the root port(s) is hotplug
> > capable or not and this in turn relies on the root port "Slot
> > implemented" bit in the PCI Express capabilities register.
> > 
> > It is a choice but it should be based on evidence.
> > 
> > Lorenzo
> With Bjorn's latest comment on top of this, I think we are good not to fail
> the probe here.
> 
> - Vidya Sagar
> > 
> 

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH V13 12/12] PCI: tegra: Add Tegra194 PCIe support
  2019-07-30 15:49               ` Lorenzo Pieralisi
@ 2019-08-02 12:06                 ` Vidya Sagar
  2019-08-05 14:01                   ` Lorenzo Pieralisi
  0 siblings, 1 reply; 34+ messages in thread
From: Vidya Sagar @ 2019-08-02 12:06 UTC (permalink / raw)
  To: Lorenzo Pieralisi
  Cc: bhelgaas, robh+dt, mark.rutland, thierry.reding, jonathanh,
	kishon, catalin.marinas, will.deacon, jingoohan1,
	gustavo.pimentel, digetx, mperttunen, linux-pci, devicetree,
	linux-tegra, linux-kernel, linux-arm-kernel, kthota, mmaddireddy,
	sagar.tv

On 7/30/2019 9:19 PM, Lorenzo Pieralisi wrote:
> On Tue, Jul 23, 2019 at 08:14:08PM +0530, Vidya Sagar wrote:
>> On 7/16/2019 4:52 PM, Lorenzo Pieralisi wrote:
>>> On Sat, Jul 13, 2019 at 12:34:34PM +0530, Vidya Sagar wrote:
>>>
>>> [...]
>>>
>>>>>>>> +static int tegra_pcie_bpmp_set_ctrl_state(struct tegra_pcie_dw *pcie,
>>>>>>>> +					  bool enable)
>>>>>>>> +{
>>>>>>>> +	struct mrq_uphy_response resp;
>>>>>>>> +	struct tegra_bpmp_message msg;
>>>>>>>> +	struct mrq_uphy_request req;
>>>>>>>> +	int err;
>>>>>>>> +
>>>>>>>> +	if (pcie->cid == 5)
>>>>>>>> +		return 0;
>>>>>>>
>>>>>>> What's wrong with cid == 5 ? Explain please.
>>>>>> Controller with ID=5 doesn't need any programming to enable it which is
>>>>>> done here through calling firmware API.
>>>>>>
>>>>>>>
>>>>>>>> +	memset(&req, 0, sizeof(req));
>>>>>>>> +	memset(&resp, 0, sizeof(resp));
>>>>>>>> +
>>>>>>>> +	req.cmd = CMD_UPHY_PCIE_CONTROLLER_STATE;
>>>>>>>> +	req.controller_state.pcie_controller = pcie->cid;
>>>>>>>> +	req.controller_state.enable = enable;
>>>>>>>> +
>>>>>>>> +	memset(&msg, 0, sizeof(msg));
>>>>>>>> +	msg.mrq = MRQ_UPHY;
>>>>>>>> +	msg.tx.data = &req;
>>>>>>>> +	msg.tx.size = sizeof(req);
>>>>>>>> +	msg.rx.data = &resp;
>>>>>>>> +	msg.rx.size = sizeof(resp);
>>>>>>>> +
>>>>>>>> +	if (irqs_disabled())
>>>>>>>
>>>>>>> Can you explain to me what this check is meant to achieve please ?
>>>>>> Firmware interface provides different APIs to be called when there are
>>>>>> no interrupts enabled in the system (noirq context) and otherwise
>>>>>> hence checking that situation here and calling appropriate API.
>>>>>
>>>>> That's what I am questioning. Being called from {suspend/resume}_noirq()
>>>>> callbacks (if that's the code path this check caters for) does not mean
>>>>> irqs_disabled() == true.
>>>> Agree.
>>>> Actually, I got a hint of having this check from the following.
>>>> Both tegra_bpmp_transfer_atomic() and tegra_bpmp_transfer() are indirectly
>>>> called by APIs registered with .master_xfer() and .master_xfer_atomic() hooks of
>>>> struct i2c_algorithm and the decision to call which one of these is made using the
>>>> following check in i2c-core.h file.
>>>> static inline bool i2c_in_atomic_xfer_mode(void)
>>>> {
>>>> 	return system_state > SYSTEM_RUNNING && irqs_disabled();
>>>> }
>>>> I think I should use this condition as is IIUC.
>>>> Please let me know if there are any concerns with this.
>>>
>>> It is not a concern, it is just that I don't understand how this code
>>> can be called with IRQs disabled, if you can give me an execution path I
>>> am happy to leave the check there. On top of that, when called from
>>> suspend NOIRQ context, it is likely to use the blocking API (because
>>> IRQs aren't disabled at CPU level) behind which there is most certainly
>>> an IRQ required to wake the thread up and if the IRQ in question was
>>> disabled in the suspend NOIRQ phase this code is likely to deadlock.
>>>
>>> I want to make sure we can justify adding this check, I do not
>>> want to add it because we think it can be needed when it may not
>>> be needed at all (and it gets copy and pasted over and over again
>>> in other drivers).
>> I had a discussion internally about this and the prescribed usage of these APIs
>> seem to be that
>> use tegra_bpmp_transfer() in .probe() and other paths where interrupts are
>> enabled as this API needs interrupts to be enabled for its working.
>> Use tegra_bpmp_transfer_atomic() surrounded by local_irq_save()/local_irq_restore()
>> in other paths where interrupt servicing is disabled.
> 
> Why tegra_bpmp_transfer_atomic() needs IRQs to be disabled ? And why
> is it needed in this piece of code where IRQs are _never_ disabled
> at CPU level ?
> 
> IRQs are enabled when you call a suspend_noirq() callback, so the
> blocking API can be used as long as the IRQ descriptor backing
> the IRQ that will wake-up the blocked call is marked as
> IRQF_NO_SUSPEND.
> 
> The problem is not IRQs enabled/disabled at CPU level, the problem is
> the IRQ descriptor of the IRQ required to handle the blocking BPMP call,
> mark it as IRQF_NO_SUSPEND and remove the tegra_bpmp_transfer_atomic()
> call from this code (or please give me a concrete example pinpointing
> why it is needed).
Ideally, using tegra_bpmp_transfer() alone in all paths (.probe() as well as .resume_noirq())
should have worked as the corresponding IRQ is already flagged as IRQF_NO_SUSPEND, but,
because of the way BPMP-FW driver in kernel making its interface available through .resume_early(),
tegra_bpmp_transfer() wasn't working as expected and I pushed a patch (CC'ing you) at
http://patchwork.ozlabs.org/patch/1140973/ to make it .resume_noirq() from .resume_early().
With that in place, we can just use tegra_bpmp_trasnfer().
I'll push a new patch with this change once my BPMP-FW driver patch is approved.

Thanks,
Vidya Sagar
> 
> Thanks,
> Lorenzo
> 
>> I'll go ahead and make next patch series with this if this looks fine to you.
>>
>>>
>>>>> Actually, if tegra_bpmp_transfer() requires IRQs to be enabled you may
>>>>> even end up in a situation where that blocking call does not wake up
>>>>> because the IRQ in question was disabled in the NOIRQ suspend/resume
>>>>> phase.
>>>>>
>>>>> [...]
>>>>>
>>>>>>>> +static int tegra_pcie_dw_probe(struct platform_device *pdev)
>>>>>>>> +{
>>>>>>>> +	const struct tegra_pcie_soc *data;
>>>>>>>> +	struct device *dev = &pdev->dev;
>>>>>>>> +	struct resource *atu_dma_res;
>>>>>>>> +	struct tegra_pcie_dw *pcie;
>>>>>>>> +	struct resource *dbi_res;
>>>>>>>> +	struct pcie_port *pp;
>>>>>>>> +	struct dw_pcie *pci;
>>>>>>>> +	struct phy **phys;
>>>>>>>> +	char *name;
>>>>>>>> +	int ret;
>>>>>>>> +	u32 i;
>>>>>>>> +
>>>>>>>> +	pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL);
>>>>>>>> +	if (!pcie)
>>>>>>>> +		return -ENOMEM;
>>>>>>>> +
>>>>>>>> +	pci = &pcie->pci;
>>>>>>>> +	pci->dev = &pdev->dev;
>>>>>>>> +	pci->ops = &tegra_dw_pcie_ops;
>>>>>>>> +	pp = &pci->pp;
>>>>>>>> +	pcie->dev = &pdev->dev;
>>>>>>>> +
>>>>>>>> +	data = (struct tegra_pcie_soc *)of_device_get_match_data(dev);
>>>>>>>> +	if (!data)
>>>>>>>> +		return -EINVAL;
>>>>>>>> +	pcie->mode = (enum dw_pcie_device_mode)data->mode;
>>>>>>>> +
>>>>>>>> +	ret = tegra_pcie_dw_parse_dt(pcie);
>>>>>>>> +	if (ret < 0) {
>>>>>>>> +		dev_err(dev, "Failed to parse device tree: %d\n", ret);
>>>>>>>> +		return ret;
>>>>>>>> +	}
>>>>>>>> +
>>>>>>>> +	pcie->pex_ctl_supply = devm_regulator_get(dev, "vddio-pex-ctl");
>>>>>>>> +	if (IS_ERR(pcie->pex_ctl_supply)) {
>>>>>>>> +		dev_err(dev, "Failed to get regulator: %ld\n",
>>>>>>>> +			PTR_ERR(pcie->pex_ctl_supply));
>>>>>>>> +		return PTR_ERR(pcie->pex_ctl_supply);
>>>>>>>> +	}
>>>>>>>> +
>>>>>>>> +	pcie->core_clk = devm_clk_get(dev, "core");
>>>>>>>> +	if (IS_ERR(pcie->core_clk)) {
>>>>>>>> +		dev_err(dev, "Failed to get core clock: %ld\n",
>>>>>>>> +			PTR_ERR(pcie->core_clk));
>>>>>>>> +		return PTR_ERR(pcie->core_clk);
>>>>>>>> +	}
>>>>>>>> +
>>>>>>>> +	pcie->appl_res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
>>>>>>>> +						      "appl");
>>>>>>>> +	if (!pcie->appl_res) {
>>>>>>>> +		dev_err(dev, "Failed to find \"appl\" region\n");
>>>>>>>> +		return PTR_ERR(pcie->appl_res);
>>>>>>>> +	}
>>>>>>>> +	pcie->appl_base = devm_ioremap_resource(dev, pcie->appl_res);
>>>>>>>> +	if (IS_ERR(pcie->appl_base))
>>>>>>>> +		return PTR_ERR(pcie->appl_base);
>>>>>>>> +
>>>>>>>> +	pcie->core_apb_rst = devm_reset_control_get(dev, "apb");
>>>>>>>> +	if (IS_ERR(pcie->core_apb_rst)) {
>>>>>>>> +		dev_err(dev, "Failed to get APB reset: %ld\n",
>>>>>>>> +			PTR_ERR(pcie->core_apb_rst));
>>>>>>>> +		return PTR_ERR(pcie->core_apb_rst);
>>>>>>>> +	}
>>>>>>>> +
>>>>>>>> +	phys = devm_kcalloc(dev, pcie->phy_count, sizeof(*phys), GFP_KERNEL);
>>>>>>>> +	if (!phys)
>>>>>>>> +		return PTR_ERR(phys);
>>>>>>>> +
>>>>>>>> +	for (i = 0; i < pcie->phy_count; i++) {
>>>>>>>> +		name = kasprintf(GFP_KERNEL, "p2u-%u", i);
>>>>>>>> +		if (!name) {
>>>>>>>> +			dev_err(dev, "Failed to create P2U string\n");
>>>>>>>> +			return -ENOMEM;
>>>>>>>> +		}
>>>>>>>> +		phys[i] = devm_phy_get(dev, name);
>>>>>>>> +		kfree(name);
>>>>>>>> +		if (IS_ERR(phys[i])) {
>>>>>>>> +			ret = PTR_ERR(phys[i]);
>>>>>>>> +			dev_err(dev, "Failed to get PHY: %d\n", ret);
>>>>>>>> +			return ret;
>>>>>>>> +		}
>>>>>>>> +	}
>>>>>>>> +
>>>>>>>> +	pcie->phys = phys;
>>>>>>>> +
>>>>>>>> +	dbi_res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi");
>>>>>>>> +	if (!dbi_res) {
>>>>>>>> +		dev_err(dev, "Failed to find \"dbi\" region\n");
>>>>>>>> +		return PTR_ERR(dbi_res);
>>>>>>>> +	}
>>>>>>>> +	pcie->dbi_res = dbi_res;
>>>>>>>> +
>>>>>>>> +	pci->dbi_base = devm_ioremap_resource(dev, dbi_res);
>>>>>>>> +	if (IS_ERR(pci->dbi_base))
>>>>>>>> +		return PTR_ERR(pci->dbi_base);
>>>>>>>> +
>>>>>>>> +	/* Tegra HW locates DBI2 at a fixed offset from DBI */
>>>>>>>> +	pci->dbi_base2 = pci->dbi_base + 0x1000;
>>>>>>>> +
>>>>>>>> +	atu_dma_res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
>>>>>>>> +						   "atu_dma");
>>>>>>>> +	if (!atu_dma_res) {
>>>>>>>> +		dev_err(dev, "Failed to find \"atu_dma\" region\n");
>>>>>>>> +		return PTR_ERR(atu_dma_res);
>>>>>>>> +	}
>>>>>>>> +	pcie->atu_dma_res = atu_dma_res;
>>>>>>>> +	pci->atu_base = devm_ioremap_resource(dev, atu_dma_res);
>>>>>>>> +	if (IS_ERR(pci->atu_base))
>>>>>>>> +		return PTR_ERR(pci->atu_base);
>>>>>>>> +
>>>>>>>> +	pcie->core_rst = devm_reset_control_get(dev, "core");
>>>>>>>> +	if (IS_ERR(pcie->core_rst)) {
>>>>>>>> +		dev_err(dev, "Failed to get core reset: %ld\n",
>>>>>>>> +			PTR_ERR(pcie->core_rst));
>>>>>>>> +		return PTR_ERR(pcie->core_rst);
>>>>>>>> +	}
>>>>>>>> +
>>>>>>>> +	pp->irq = platform_get_irq_byname(pdev, "intr");
>>>>>>>> +	if (!pp->irq) {
>>>>>>>> +		dev_err(dev, "Failed to get \"intr\" interrupt\n");
>>>>>>>> +		return -ENODEV;
>>>>>>>> +	}
>>>>>>>> +
>>>>>>>> +	ret = devm_request_irq(dev, pp->irq, tegra_pcie_irq_handler,
>>>>>>>> +			       IRQF_SHARED, "tegra-pcie-intr", pcie);
>>>>>>>> +	if (ret) {
>>>>>>>> +		dev_err(dev, "Failed to request IRQ %d: %d\n", pp->irq, ret);
>>>>>>>> +		return ret;
>>>>>>>> +	}
>>>>>>>> +
>>>>>>>> +	pcie->bpmp = tegra_bpmp_get(dev);
>>>>>>>> +	if (IS_ERR(pcie->bpmp))
>>>>>>>> +		return PTR_ERR(pcie->bpmp);
>>>>>>>> +
>>>>>>>> +	platform_set_drvdata(pdev, pcie);
>>>>>>>> +
>>>>>>>> +	if (pcie->mode == DW_PCIE_RC_TYPE) {
>>>>>>>> +		ret = tegra_pcie_config_rp(pcie);
>>>>>>>> +		if (ret && ret != -ENOMEDIUM)
>>>>>>>> +			goto fail;
>>>>>>>> +		else
>>>>>>>> +			return 0;
>>>>>>>
>>>>>>> So if the link is not up we still go ahead and make probe
>>>>>>> succeed. What for ?
>>>>>> We may need root port to be available to support hot-plugging of
>>>>>> endpoint devices, so, we don't fail the probe.
>>>>>
>>>>> We need it or we don't. If you do support hotplugging of endpoint
>>>>> devices point me at the code, otherwise link up failure means
>>>>> failure to probe.
>>>> Currently hotplugging of endpoint is not supported, but it is one of
>>>> the use cases that we may add support for in future.
>>>
>>> You should elaborate on this, I do not understand what you mean,
>>> either the root port(s) supports hotplug or it does not.
>>>
>>>> But, why should we fail probe if link up doesn't happen? As such,
>>>> nothing went wrong in terms of root port initialization right?  I
>>>> checked other DWC based implementations and following are not failing
>>>> the probe pci-dra7xx.c, pcie-armada8k.c, pcie-artpec6.c, pcie-histb.c,
>>>> pcie-kirin.c, pcie-spear13xx.c, pci-exynos.c, pci-imx6.c,
>>>> pci-keystone.c, pci-layerscape.c
>>>>
>>>> Although following do fail the probe if link is not up.  pcie-qcom.c,
>>>> pcie-uniphier.c, pci-meson.c
>>>>
>>>> So, to me, it looks more like a choice we can make whether to fail the
>>>> probe or not and in this case we are choosing not to fail.
>>>
>>> I disagree. I had an offline chat with Bjorn and whether link-up should
>>> fail the probe or not depends on whether the root port(s) is hotplug
>>> capable or not and this in turn relies on the root port "Slot
>>> implemented" bit in the PCI Express capabilities register.
>>>
>>> It is a choice but it should be based on evidence.
>>>
>>> Lorenzo
>> With Bjorn's latest comment on top of this, I think we are good not to fail
>> the probe here.
>>
>> - Vidya Sagar
>>>
>>


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH V13 12/12] PCI: tegra: Add Tegra194 PCIe support
  2019-08-02 12:06                 ` Vidya Sagar
@ 2019-08-05 14:01                   ` Lorenzo Pieralisi
  2019-08-05 16:54                     ` Vidya Sagar
  0 siblings, 1 reply; 34+ messages in thread
From: Lorenzo Pieralisi @ 2019-08-05 14:01 UTC (permalink / raw)
  To: Vidya Sagar
  Cc: bhelgaas, robh+dt, mark.rutland, thierry.reding, jonathanh,
	kishon, catalin.marinas, will.deacon, jingoohan1,
	gustavo.pimentel, digetx, mperttunen, linux-pci, devicetree,
	linux-tegra, linux-kernel, linux-arm-kernel, kthota, mmaddireddy,
	sagar.tv

On Fri, Aug 02, 2019 at 05:36:43PM +0530, Vidya Sagar wrote:
> On 7/30/2019 9:19 PM, Lorenzo Pieralisi wrote:
> > On Tue, Jul 23, 2019 at 08:14:08PM +0530, Vidya Sagar wrote:
> > > On 7/16/2019 4:52 PM, Lorenzo Pieralisi wrote:
> > > > On Sat, Jul 13, 2019 at 12:34:34PM +0530, Vidya Sagar wrote:
> > > > 
> > > > [...]
> > > > 
> > > > > > > > > +static int tegra_pcie_bpmp_set_ctrl_state(struct tegra_pcie_dw *pcie,
> > > > > > > > > +					  bool enable)
> > > > > > > > > +{
> > > > > > > > > +	struct mrq_uphy_response resp;
> > > > > > > > > +	struct tegra_bpmp_message msg;
> > > > > > > > > +	struct mrq_uphy_request req;
> > > > > > > > > +	int err;
> > > > > > > > > +
> > > > > > > > > +	if (pcie->cid == 5)
> > > > > > > > > +		return 0;
> > > > > > > > 
> > > > > > > > What's wrong with cid == 5 ? Explain please.
> > > > > > > Controller with ID=5 doesn't need any programming to enable it which is
> > > > > > > done here through calling firmware API.
> > > > > > > 
> > > > > > > > 
> > > > > > > > > +	memset(&req, 0, sizeof(req));
> > > > > > > > > +	memset(&resp, 0, sizeof(resp));
> > > > > > > > > +
> > > > > > > > > +	req.cmd = CMD_UPHY_PCIE_CONTROLLER_STATE;
> > > > > > > > > +	req.controller_state.pcie_controller = pcie->cid;
> > > > > > > > > +	req.controller_state.enable = enable;
> > > > > > > > > +
> > > > > > > > > +	memset(&msg, 0, sizeof(msg));
> > > > > > > > > +	msg.mrq = MRQ_UPHY;
> > > > > > > > > +	msg.tx.data = &req;
> > > > > > > > > +	msg.tx.size = sizeof(req);
> > > > > > > > > +	msg.rx.data = &resp;
> > > > > > > > > +	msg.rx.size = sizeof(resp);
> > > > > > > > > +
> > > > > > > > > +	if (irqs_disabled())
> > > > > > > > 
> > > > > > > > Can you explain to me what this check is meant to achieve please ?
> > > > > > > Firmware interface provides different APIs to be called when there are
> > > > > > > no interrupts enabled in the system (noirq context) and otherwise
> > > > > > > hence checking that situation here and calling appropriate API.
> > > > > > 
> > > > > > That's what I am questioning. Being called from {suspend/resume}_noirq()
> > > > > > callbacks (if that's the code path this check caters for) does not mean
> > > > > > irqs_disabled() == true.
> > > > > Agree.
> > > > > Actually, I got a hint of having this check from the following.
> > > > > Both tegra_bpmp_transfer_atomic() and tegra_bpmp_transfer() are indirectly
> > > > > called by APIs registered with .master_xfer() and .master_xfer_atomic() hooks of
> > > > > struct i2c_algorithm and the decision to call which one of these is made using the
> > > > > following check in i2c-core.h file.
> > > > > static inline bool i2c_in_atomic_xfer_mode(void)
> > > > > {
> > > > > 	return system_state > SYSTEM_RUNNING && irqs_disabled();
> > > > > }
> > > > > I think I should use this condition as is IIUC.
> > > > > Please let me know if there are any concerns with this.
> > > > 
> > > > It is not a concern, it is just that I don't understand how this code
> > > > can be called with IRQs disabled, if you can give me an execution path I
> > > > am happy to leave the check there. On top of that, when called from
> > > > suspend NOIRQ context, it is likely to use the blocking API (because
> > > > IRQs aren't disabled at CPU level) behind which there is most certainly
> > > > an IRQ required to wake the thread up and if the IRQ in question was
> > > > disabled in the suspend NOIRQ phase this code is likely to deadlock.
> > > > 
> > > > I want to make sure we can justify adding this check, I do not
> > > > want to add it because we think it can be needed when it may not
> > > > be needed at all (and it gets copy and pasted over and over again
> > > > in other drivers).
> > > I had a discussion internally about this and the prescribed usage of these APIs
> > > seem to be that
> > > use tegra_bpmp_transfer() in .probe() and other paths where interrupts are
> > > enabled as this API needs interrupts to be enabled for its working.
> > > Use tegra_bpmp_transfer_atomic() surrounded by local_irq_save()/local_irq_restore()
> > > in other paths where interrupt servicing is disabled.
> > 
> > Why tegra_bpmp_transfer_atomic() needs IRQs to be disabled ? And why
> > is it needed in this piece of code where IRQs are _never_ disabled
> > at CPU level ?
> > 
> > IRQs are enabled when you call a suspend_noirq() callback, so the
> > blocking API can be used as long as the IRQ descriptor backing
> > the IRQ that will wake-up the blocked call is marked as
> > IRQF_NO_SUSPEND.
> > 
> > The problem is not IRQs enabled/disabled at CPU level, the problem is
> > the IRQ descriptor of the IRQ required to handle the blocking BPMP call,
> > mark it as IRQF_NO_SUSPEND and remove the tegra_bpmp_transfer_atomic()
> > call from this code (or please give me a concrete example pinpointing
> > why it is needed).
> Ideally, using tegra_bpmp_transfer() alone in all paths (.probe() as
> well as .resume_noirq()) should have worked as the corresponding IRQ
> is already flagged as IRQF_NO_SUSPEND, but, because of the way BPMP-FW
> driver in kernel making its interface available through
> .resume_early(), tegra_bpmp_transfer() wasn't working as expected and
> I pushed a patch (CC'ing you) at
> http://patchwork.ozlabs.org/patch/1140973/ to make it .resume_noirq()
> from .resume_early().  With that in place, we can just use
> tegra_bpmp_trasnfer().  I'll push a new patch with this change once my
> BPMP-FW driver patch is approved.

Does this leave you with a resume_noirq() callbacks ordering issue to
sort out ?

a.k.a How will you guarantee that the BPMP will resume before the host
bridge ?

Thanks,
Lorenzo

> Thanks,
> Vidya Sagar
> > 
> > Thanks,
> > Lorenzo
> > 
> > > I'll go ahead and make next patch series with this if this looks fine to you.
> > > 
> > > > 
> > > > > > Actually, if tegra_bpmp_transfer() requires IRQs to be enabled you may
> > > > > > even end up in a situation where that blocking call does not wake up
> > > > > > because the IRQ in question was disabled in the NOIRQ suspend/resume
> > > > > > phase.
> > > > > > 
> > > > > > [...]
> > > > > > 
> > > > > > > > > +static int tegra_pcie_dw_probe(struct platform_device *pdev)
> > > > > > > > > +{
> > > > > > > > > +	const struct tegra_pcie_soc *data;
> > > > > > > > > +	struct device *dev = &pdev->dev;
> > > > > > > > > +	struct resource *atu_dma_res;
> > > > > > > > > +	struct tegra_pcie_dw *pcie;
> > > > > > > > > +	struct resource *dbi_res;
> > > > > > > > > +	struct pcie_port *pp;
> > > > > > > > > +	struct dw_pcie *pci;
> > > > > > > > > +	struct phy **phys;
> > > > > > > > > +	char *name;
> > > > > > > > > +	int ret;
> > > > > > > > > +	u32 i;
> > > > > > > > > +
> > > > > > > > > +	pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL);
> > > > > > > > > +	if (!pcie)
> > > > > > > > > +		return -ENOMEM;
> > > > > > > > > +
> > > > > > > > > +	pci = &pcie->pci;
> > > > > > > > > +	pci->dev = &pdev->dev;
> > > > > > > > > +	pci->ops = &tegra_dw_pcie_ops;
> > > > > > > > > +	pp = &pci->pp;
> > > > > > > > > +	pcie->dev = &pdev->dev;
> > > > > > > > > +
> > > > > > > > > +	data = (struct tegra_pcie_soc *)of_device_get_match_data(dev);
> > > > > > > > > +	if (!data)
> > > > > > > > > +		return -EINVAL;
> > > > > > > > > +	pcie->mode = (enum dw_pcie_device_mode)data->mode;
> > > > > > > > > +
> > > > > > > > > +	ret = tegra_pcie_dw_parse_dt(pcie);
> > > > > > > > > +	if (ret < 0) {
> > > > > > > > > +		dev_err(dev, "Failed to parse device tree: %d\n", ret);
> > > > > > > > > +		return ret;
> > > > > > > > > +	}
> > > > > > > > > +
> > > > > > > > > +	pcie->pex_ctl_supply = devm_regulator_get(dev, "vddio-pex-ctl");
> > > > > > > > > +	if (IS_ERR(pcie->pex_ctl_supply)) {
> > > > > > > > > +		dev_err(dev, "Failed to get regulator: %ld\n",
> > > > > > > > > +			PTR_ERR(pcie->pex_ctl_supply));
> > > > > > > > > +		return PTR_ERR(pcie->pex_ctl_supply);
> > > > > > > > > +	}
> > > > > > > > > +
> > > > > > > > > +	pcie->core_clk = devm_clk_get(dev, "core");
> > > > > > > > > +	if (IS_ERR(pcie->core_clk)) {
> > > > > > > > > +		dev_err(dev, "Failed to get core clock: %ld\n",
> > > > > > > > > +			PTR_ERR(pcie->core_clk));
> > > > > > > > > +		return PTR_ERR(pcie->core_clk);
> > > > > > > > > +	}
> > > > > > > > > +
> > > > > > > > > +	pcie->appl_res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
> > > > > > > > > +						      "appl");
> > > > > > > > > +	if (!pcie->appl_res) {
> > > > > > > > > +		dev_err(dev, "Failed to find \"appl\" region\n");
> > > > > > > > > +		return PTR_ERR(pcie->appl_res);
> > > > > > > > > +	}
> > > > > > > > > +	pcie->appl_base = devm_ioremap_resource(dev, pcie->appl_res);
> > > > > > > > > +	if (IS_ERR(pcie->appl_base))
> > > > > > > > > +		return PTR_ERR(pcie->appl_base);
> > > > > > > > > +
> > > > > > > > > +	pcie->core_apb_rst = devm_reset_control_get(dev, "apb");
> > > > > > > > > +	if (IS_ERR(pcie->core_apb_rst)) {
> > > > > > > > > +		dev_err(dev, "Failed to get APB reset: %ld\n",
> > > > > > > > > +			PTR_ERR(pcie->core_apb_rst));
> > > > > > > > > +		return PTR_ERR(pcie->core_apb_rst);
> > > > > > > > > +	}
> > > > > > > > > +
> > > > > > > > > +	phys = devm_kcalloc(dev, pcie->phy_count, sizeof(*phys), GFP_KERNEL);
> > > > > > > > > +	if (!phys)
> > > > > > > > > +		return PTR_ERR(phys);
> > > > > > > > > +
> > > > > > > > > +	for (i = 0; i < pcie->phy_count; i++) {
> > > > > > > > > +		name = kasprintf(GFP_KERNEL, "p2u-%u", i);
> > > > > > > > > +		if (!name) {
> > > > > > > > > +			dev_err(dev, "Failed to create P2U string\n");
> > > > > > > > > +			return -ENOMEM;
> > > > > > > > > +		}
> > > > > > > > > +		phys[i] = devm_phy_get(dev, name);
> > > > > > > > > +		kfree(name);
> > > > > > > > > +		if (IS_ERR(phys[i])) {
> > > > > > > > > +			ret = PTR_ERR(phys[i]);
> > > > > > > > > +			dev_err(dev, "Failed to get PHY: %d\n", ret);
> > > > > > > > > +			return ret;
> > > > > > > > > +		}
> > > > > > > > > +	}
> > > > > > > > > +
> > > > > > > > > +	pcie->phys = phys;
> > > > > > > > > +
> > > > > > > > > +	dbi_res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi");
> > > > > > > > > +	if (!dbi_res) {
> > > > > > > > > +		dev_err(dev, "Failed to find \"dbi\" region\n");
> > > > > > > > > +		return PTR_ERR(dbi_res);
> > > > > > > > > +	}
> > > > > > > > > +	pcie->dbi_res = dbi_res;
> > > > > > > > > +
> > > > > > > > > +	pci->dbi_base = devm_ioremap_resource(dev, dbi_res);
> > > > > > > > > +	if (IS_ERR(pci->dbi_base))
> > > > > > > > > +		return PTR_ERR(pci->dbi_base);
> > > > > > > > > +
> > > > > > > > > +	/* Tegra HW locates DBI2 at a fixed offset from DBI */
> > > > > > > > > +	pci->dbi_base2 = pci->dbi_base + 0x1000;
> > > > > > > > > +
> > > > > > > > > +	atu_dma_res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
> > > > > > > > > +						   "atu_dma");
> > > > > > > > > +	if (!atu_dma_res) {
> > > > > > > > > +		dev_err(dev, "Failed to find \"atu_dma\" region\n");
> > > > > > > > > +		return PTR_ERR(atu_dma_res);
> > > > > > > > > +	}
> > > > > > > > > +	pcie->atu_dma_res = atu_dma_res;
> > > > > > > > > +	pci->atu_base = devm_ioremap_resource(dev, atu_dma_res);
> > > > > > > > > +	if (IS_ERR(pci->atu_base))
> > > > > > > > > +		return PTR_ERR(pci->atu_base);
> > > > > > > > > +
> > > > > > > > > +	pcie->core_rst = devm_reset_control_get(dev, "core");
> > > > > > > > > +	if (IS_ERR(pcie->core_rst)) {
> > > > > > > > > +		dev_err(dev, "Failed to get core reset: %ld\n",
> > > > > > > > > +			PTR_ERR(pcie->core_rst));
> > > > > > > > > +		return PTR_ERR(pcie->core_rst);
> > > > > > > > > +	}
> > > > > > > > > +
> > > > > > > > > +	pp->irq = platform_get_irq_byname(pdev, "intr");
> > > > > > > > > +	if (!pp->irq) {
> > > > > > > > > +		dev_err(dev, "Failed to get \"intr\" interrupt\n");
> > > > > > > > > +		return -ENODEV;
> > > > > > > > > +	}
> > > > > > > > > +
> > > > > > > > > +	ret = devm_request_irq(dev, pp->irq, tegra_pcie_irq_handler,
> > > > > > > > > +			       IRQF_SHARED, "tegra-pcie-intr", pcie);
> > > > > > > > > +	if (ret) {
> > > > > > > > > +		dev_err(dev, "Failed to request IRQ %d: %d\n", pp->irq, ret);
> > > > > > > > > +		return ret;
> > > > > > > > > +	}
> > > > > > > > > +
> > > > > > > > > +	pcie->bpmp = tegra_bpmp_get(dev);
> > > > > > > > > +	if (IS_ERR(pcie->bpmp))
> > > > > > > > > +		return PTR_ERR(pcie->bpmp);
> > > > > > > > > +
> > > > > > > > > +	platform_set_drvdata(pdev, pcie);
> > > > > > > > > +
> > > > > > > > > +	if (pcie->mode == DW_PCIE_RC_TYPE) {
> > > > > > > > > +		ret = tegra_pcie_config_rp(pcie);
> > > > > > > > > +		if (ret && ret != -ENOMEDIUM)
> > > > > > > > > +			goto fail;
> > > > > > > > > +		else
> > > > > > > > > +			return 0;
> > > > > > > > 
> > > > > > > > So if the link is not up we still go ahead and make probe
> > > > > > > > succeed. What for ?
> > > > > > > We may need root port to be available to support hot-plugging of
> > > > > > > endpoint devices, so, we don't fail the probe.
> > > > > > 
> > > > > > We need it or we don't. If you do support hotplugging of endpoint
> > > > > > devices point me at the code, otherwise link up failure means
> > > > > > failure to probe.
> > > > > Currently hotplugging of endpoint is not supported, but it is one of
> > > > > the use cases that we may add support for in future.
> > > > 
> > > > You should elaborate on this, I do not understand what you mean,
> > > > either the root port(s) supports hotplug or it does not.
> > > > 
> > > > > But, why should we fail probe if link up doesn't happen? As such,
> > > > > nothing went wrong in terms of root port initialization right?  I
> > > > > checked other DWC based implementations and following are not failing
> > > > > the probe pci-dra7xx.c, pcie-armada8k.c, pcie-artpec6.c, pcie-histb.c,
> > > > > pcie-kirin.c, pcie-spear13xx.c, pci-exynos.c, pci-imx6.c,
> > > > > pci-keystone.c, pci-layerscape.c
> > > > > 
> > > > > Although following do fail the probe if link is not up.  pcie-qcom.c,
> > > > > pcie-uniphier.c, pci-meson.c
> > > > > 
> > > > > So, to me, it looks more like a choice we can make whether to fail the
> > > > > probe or not and in this case we are choosing not to fail.
> > > > 
> > > > I disagree. I had an offline chat with Bjorn and whether link-up should
> > > > fail the probe or not depends on whether the root port(s) is hotplug
> > > > capable or not and this in turn relies on the root port "Slot
> > > > implemented" bit in the PCI Express capabilities register.
> > > > 
> > > > It is a choice but it should be based on evidence.
> > > > 
> > > > Lorenzo
> > > With Bjorn's latest comment on top of this, I think we are good not to fail
> > > the probe here.
> > > 
> > > - Vidya Sagar
> > > > 
> > > 
> 

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH V13 12/12] PCI: tegra: Add Tegra194 PCIe support
  2019-08-05 14:01                   ` Lorenzo Pieralisi
@ 2019-08-05 16:54                     ` Vidya Sagar
  2019-08-06 14:51                       ` Lorenzo Pieralisi
  0 siblings, 1 reply; 34+ messages in thread
From: Vidya Sagar @ 2019-08-05 16:54 UTC (permalink / raw)
  To: Lorenzo Pieralisi
  Cc: bhelgaas, robh+dt, mark.rutland, thierry.reding, jonathanh,
	kishon, catalin.marinas, will.deacon, jingoohan1,
	gustavo.pimentel, digetx, mperttunen, linux-pci, devicetree,
	linux-tegra, linux-kernel, linux-arm-kernel, kthota, mmaddireddy,
	sagar.tv

On 8/5/2019 7:31 PM, Lorenzo Pieralisi wrote:
> On Fri, Aug 02, 2019 at 05:36:43PM +0530, Vidya Sagar wrote:
>> On 7/30/2019 9:19 PM, Lorenzo Pieralisi wrote:
>>> On Tue, Jul 23, 2019 at 08:14:08PM +0530, Vidya Sagar wrote:
>>>> On 7/16/2019 4:52 PM, Lorenzo Pieralisi wrote:
>>>>> On Sat, Jul 13, 2019 at 12:34:34PM +0530, Vidya Sagar wrote:
>>>>>
>>>>> [...]
>>>>>
>>>>>>>>>> +static int tegra_pcie_bpmp_set_ctrl_state(struct tegra_pcie_dw *pcie,
>>>>>>>>>> +					  bool enable)
>>>>>>>>>> +{
>>>>>>>>>> +	struct mrq_uphy_response resp;
>>>>>>>>>> +	struct tegra_bpmp_message msg;
>>>>>>>>>> +	struct mrq_uphy_request req;
>>>>>>>>>> +	int err;
>>>>>>>>>> +
>>>>>>>>>> +	if (pcie->cid == 5)
>>>>>>>>>> +		return 0;
>>>>>>>>>
>>>>>>>>> What's wrong with cid == 5 ? Explain please.
>>>>>>>> Controller with ID=5 doesn't need any programming to enable it which is
>>>>>>>> done here through calling firmware API.
>>>>>>>>
>>>>>>>>>
>>>>>>>>>> +	memset(&req, 0, sizeof(req));
>>>>>>>>>> +	memset(&resp, 0, sizeof(resp));
>>>>>>>>>> +
>>>>>>>>>> +	req.cmd = CMD_UPHY_PCIE_CONTROLLER_STATE;
>>>>>>>>>> +	req.controller_state.pcie_controller = pcie->cid;
>>>>>>>>>> +	req.controller_state.enable = enable;
>>>>>>>>>> +
>>>>>>>>>> +	memset(&msg, 0, sizeof(msg));
>>>>>>>>>> +	msg.mrq = MRQ_UPHY;
>>>>>>>>>> +	msg.tx.data = &req;
>>>>>>>>>> +	msg.tx.size = sizeof(req);
>>>>>>>>>> +	msg.rx.data = &resp;
>>>>>>>>>> +	msg.rx.size = sizeof(resp);
>>>>>>>>>> +
>>>>>>>>>> +	if (irqs_disabled())
>>>>>>>>>
>>>>>>>>> Can you explain to me what this check is meant to achieve please ?
>>>>>>>> Firmware interface provides different APIs to be called when there are
>>>>>>>> no interrupts enabled in the system (noirq context) and otherwise
>>>>>>>> hence checking that situation here and calling appropriate API.
>>>>>>>
>>>>>>> That's what I am questioning. Being called from {suspend/resume}_noirq()
>>>>>>> callbacks (if that's the code path this check caters for) does not mean
>>>>>>> irqs_disabled() == true.
>>>>>> Agree.
>>>>>> Actually, I got a hint of having this check from the following.
>>>>>> Both tegra_bpmp_transfer_atomic() and tegra_bpmp_transfer() are indirectly
>>>>>> called by APIs registered with .master_xfer() and .master_xfer_atomic() hooks of
>>>>>> struct i2c_algorithm and the decision to call which one of these is made using the
>>>>>> following check in i2c-core.h file.
>>>>>> static inline bool i2c_in_atomic_xfer_mode(void)
>>>>>> {
>>>>>> 	return system_state > SYSTEM_RUNNING && irqs_disabled();
>>>>>> }
>>>>>> I think I should use this condition as is IIUC.
>>>>>> Please let me know if there are any concerns with this.
>>>>>
>>>>> It is not a concern, it is just that I don't understand how this code
>>>>> can be called with IRQs disabled, if you can give me an execution path I
>>>>> am happy to leave the check there. On top of that, when called from
>>>>> suspend NOIRQ context, it is likely to use the blocking API (because
>>>>> IRQs aren't disabled at CPU level) behind which there is most certainly
>>>>> an IRQ required to wake the thread up and if the IRQ in question was
>>>>> disabled in the suspend NOIRQ phase this code is likely to deadlock.
>>>>>
>>>>> I want to make sure we can justify adding this check, I do not
>>>>> want to add it because we think it can be needed when it may not
>>>>> be needed at all (and it gets copy and pasted over and over again
>>>>> in other drivers).
>>>> I had a discussion internally about this and the prescribed usage of these APIs
>>>> seem to be that
>>>> use tegra_bpmp_transfer() in .probe() and other paths where interrupts are
>>>> enabled as this API needs interrupts to be enabled for its working.
>>>> Use tegra_bpmp_transfer_atomic() surrounded by local_irq_save()/local_irq_restore()
>>>> in other paths where interrupt servicing is disabled.
>>>
>>> Why tegra_bpmp_transfer_atomic() needs IRQs to be disabled ? And why
>>> is it needed in this piece of code where IRQs are _never_ disabled
>>> at CPU level ?
>>>
>>> IRQs are enabled when you call a suspend_noirq() callback, so the
>>> blocking API can be used as long as the IRQ descriptor backing
>>> the IRQ that will wake-up the blocked call is marked as
>>> IRQF_NO_SUSPEND.
>>>
>>> The problem is not IRQs enabled/disabled at CPU level, the problem is
>>> the IRQ descriptor of the IRQ required to handle the blocking BPMP call,
>>> mark it as IRQF_NO_SUSPEND and remove the tegra_bpmp_transfer_atomic()
>>> call from this code (or please give me a concrete example pinpointing
>>> why it is needed).
>> Ideally, using tegra_bpmp_transfer() alone in all paths (.probe() as
>> well as .resume_noirq()) should have worked as the corresponding IRQ
>> is already flagged as IRQF_NO_SUSPEND, but, because of the way BPMP-FW
>> driver in kernel making its interface available through
>> .resume_early(), tegra_bpmp_transfer() wasn't working as expected and
>> I pushed a patch (CC'ing you) at
>> http://patchwork.ozlabs.org/patch/1140973/ to make it .resume_noirq()
>> from .resume_early().  With that in place, we can just use
>> tegra_bpmp_trasnfer().  I'll push a new patch with this change once my
>> BPMP-FW driver patch is approved.
> 
> Does this leave you with a resume_noirq() callbacks ordering issue to
> sort out ?
Not really.

> 
> a.k.a How will you guarantee that the BPMP will resume before the host
> bridge ?
It is already taken care of in the following way.
PCIe controller's device-tree node has an entry with a phandle of BPMP-FW's node
to get a handle of it and PCIe driver uses tegra_bpmp_get() API for that.
This API returns -EPROBE_DEFER if BPMP-FW's driver is not ready yet, which guarantees
that PCIe driver gets loaded only after BPMP-FW's driver and this order is followed
during noirq phase also.

> 
> Thanks,
> Lorenzo
> 
>> Thanks,
>> Vidya Sagar
>>>
>>> Thanks,
>>> Lorenzo
>>>
>>>> I'll go ahead and make next patch series with this if this looks fine to you.
>>>>
>>>>>
>>>>>>> Actually, if tegra_bpmp_transfer() requires IRQs to be enabled you may
>>>>>>> even end up in a situation where that blocking call does not wake up
>>>>>>> because the IRQ in question was disabled in the NOIRQ suspend/resume
>>>>>>> phase.
>>>>>>>
>>>>>>> [...]
>>>>>>>
>>>>>>>>>> +static int tegra_pcie_dw_probe(struct platform_device *pdev)
>>>>>>>>>> +{
>>>>>>>>>> +	const struct tegra_pcie_soc *data;
>>>>>>>>>> +	struct device *dev = &pdev->dev;
>>>>>>>>>> +	struct resource *atu_dma_res;
>>>>>>>>>> +	struct tegra_pcie_dw *pcie;
>>>>>>>>>> +	struct resource *dbi_res;
>>>>>>>>>> +	struct pcie_port *pp;
>>>>>>>>>> +	struct dw_pcie *pci;
>>>>>>>>>> +	struct phy **phys;
>>>>>>>>>> +	char *name;
>>>>>>>>>> +	int ret;
>>>>>>>>>> +	u32 i;
>>>>>>>>>> +
>>>>>>>>>> +	pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL);
>>>>>>>>>> +	if (!pcie)
>>>>>>>>>> +		return -ENOMEM;
>>>>>>>>>> +
>>>>>>>>>> +	pci = &pcie->pci;
>>>>>>>>>> +	pci->dev = &pdev->dev;
>>>>>>>>>> +	pci->ops = &tegra_dw_pcie_ops;
>>>>>>>>>> +	pp = &pci->pp;
>>>>>>>>>> +	pcie->dev = &pdev->dev;
>>>>>>>>>> +
>>>>>>>>>> +	data = (struct tegra_pcie_soc *)of_device_get_match_data(dev);
>>>>>>>>>> +	if (!data)
>>>>>>>>>> +		return -EINVAL;
>>>>>>>>>> +	pcie->mode = (enum dw_pcie_device_mode)data->mode;
>>>>>>>>>> +
>>>>>>>>>> +	ret = tegra_pcie_dw_parse_dt(pcie);
>>>>>>>>>> +	if (ret < 0) {
>>>>>>>>>> +		dev_err(dev, "Failed to parse device tree: %d\n", ret);
>>>>>>>>>> +		return ret;
>>>>>>>>>> +	}
>>>>>>>>>> +
>>>>>>>>>> +	pcie->pex_ctl_supply = devm_regulator_get(dev, "vddio-pex-ctl");
>>>>>>>>>> +	if (IS_ERR(pcie->pex_ctl_supply)) {
>>>>>>>>>> +		dev_err(dev, "Failed to get regulator: %ld\n",
>>>>>>>>>> +			PTR_ERR(pcie->pex_ctl_supply));
>>>>>>>>>> +		return PTR_ERR(pcie->pex_ctl_supply);
>>>>>>>>>> +	}
>>>>>>>>>> +
>>>>>>>>>> +	pcie->core_clk = devm_clk_get(dev, "core");
>>>>>>>>>> +	if (IS_ERR(pcie->core_clk)) {
>>>>>>>>>> +		dev_err(dev, "Failed to get core clock: %ld\n",
>>>>>>>>>> +			PTR_ERR(pcie->core_clk));
>>>>>>>>>> +		return PTR_ERR(pcie->core_clk);
>>>>>>>>>> +	}
>>>>>>>>>> +
>>>>>>>>>> +	pcie->appl_res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
>>>>>>>>>> +						      "appl");
>>>>>>>>>> +	if (!pcie->appl_res) {
>>>>>>>>>> +		dev_err(dev, "Failed to find \"appl\" region\n");
>>>>>>>>>> +		return PTR_ERR(pcie->appl_res);
>>>>>>>>>> +	}
>>>>>>>>>> +	pcie->appl_base = devm_ioremap_resource(dev, pcie->appl_res);
>>>>>>>>>> +	if (IS_ERR(pcie->appl_base))
>>>>>>>>>> +		return PTR_ERR(pcie->appl_base);
>>>>>>>>>> +
>>>>>>>>>> +	pcie->core_apb_rst = devm_reset_control_get(dev, "apb");
>>>>>>>>>> +	if (IS_ERR(pcie->core_apb_rst)) {
>>>>>>>>>> +		dev_err(dev, "Failed to get APB reset: %ld\n",
>>>>>>>>>> +			PTR_ERR(pcie->core_apb_rst));
>>>>>>>>>> +		return PTR_ERR(pcie->core_apb_rst);
>>>>>>>>>> +	}
>>>>>>>>>> +
>>>>>>>>>> +	phys = devm_kcalloc(dev, pcie->phy_count, sizeof(*phys), GFP_KERNEL);
>>>>>>>>>> +	if (!phys)
>>>>>>>>>> +		return PTR_ERR(phys);
>>>>>>>>>> +
>>>>>>>>>> +	for (i = 0; i < pcie->phy_count; i++) {
>>>>>>>>>> +		name = kasprintf(GFP_KERNEL, "p2u-%u", i);
>>>>>>>>>> +		if (!name) {
>>>>>>>>>> +			dev_err(dev, "Failed to create P2U string\n");
>>>>>>>>>> +			return -ENOMEM;
>>>>>>>>>> +		}
>>>>>>>>>> +		phys[i] = devm_phy_get(dev, name);
>>>>>>>>>> +		kfree(name);
>>>>>>>>>> +		if (IS_ERR(phys[i])) {
>>>>>>>>>> +			ret = PTR_ERR(phys[i]);
>>>>>>>>>> +			dev_err(dev, "Failed to get PHY: %d\n", ret);
>>>>>>>>>> +			return ret;
>>>>>>>>>> +		}
>>>>>>>>>> +	}
>>>>>>>>>> +
>>>>>>>>>> +	pcie->phys = phys;
>>>>>>>>>> +
>>>>>>>>>> +	dbi_res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi");
>>>>>>>>>> +	if (!dbi_res) {
>>>>>>>>>> +		dev_err(dev, "Failed to find \"dbi\" region\n");
>>>>>>>>>> +		return PTR_ERR(dbi_res);
>>>>>>>>>> +	}
>>>>>>>>>> +	pcie->dbi_res = dbi_res;
>>>>>>>>>> +
>>>>>>>>>> +	pci->dbi_base = devm_ioremap_resource(dev, dbi_res);
>>>>>>>>>> +	if (IS_ERR(pci->dbi_base))
>>>>>>>>>> +		return PTR_ERR(pci->dbi_base);
>>>>>>>>>> +
>>>>>>>>>> +	/* Tegra HW locates DBI2 at a fixed offset from DBI */
>>>>>>>>>> +	pci->dbi_base2 = pci->dbi_base + 0x1000;
>>>>>>>>>> +
>>>>>>>>>> +	atu_dma_res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
>>>>>>>>>> +						   "atu_dma");
>>>>>>>>>> +	if (!atu_dma_res) {
>>>>>>>>>> +		dev_err(dev, "Failed to find \"atu_dma\" region\n");
>>>>>>>>>> +		return PTR_ERR(atu_dma_res);
>>>>>>>>>> +	}
>>>>>>>>>> +	pcie->atu_dma_res = atu_dma_res;
>>>>>>>>>> +	pci->atu_base = devm_ioremap_resource(dev, atu_dma_res);
>>>>>>>>>> +	if (IS_ERR(pci->atu_base))
>>>>>>>>>> +		return PTR_ERR(pci->atu_base);
>>>>>>>>>> +
>>>>>>>>>> +	pcie->core_rst = devm_reset_control_get(dev, "core");
>>>>>>>>>> +	if (IS_ERR(pcie->core_rst)) {
>>>>>>>>>> +		dev_err(dev, "Failed to get core reset: %ld\n",
>>>>>>>>>> +			PTR_ERR(pcie->core_rst));
>>>>>>>>>> +		return PTR_ERR(pcie->core_rst);
>>>>>>>>>> +	}
>>>>>>>>>> +
>>>>>>>>>> +	pp->irq = platform_get_irq_byname(pdev, "intr");
>>>>>>>>>> +	if (!pp->irq) {
>>>>>>>>>> +		dev_err(dev, "Failed to get \"intr\" interrupt\n");
>>>>>>>>>> +		return -ENODEV;
>>>>>>>>>> +	}
>>>>>>>>>> +
>>>>>>>>>> +	ret = devm_request_irq(dev, pp->irq, tegra_pcie_irq_handler,
>>>>>>>>>> +			       IRQF_SHARED, "tegra-pcie-intr", pcie);
>>>>>>>>>> +	if (ret) {
>>>>>>>>>> +		dev_err(dev, "Failed to request IRQ %d: %d\n", pp->irq, ret);
>>>>>>>>>> +		return ret;
>>>>>>>>>> +	}
>>>>>>>>>> +
>>>>>>>>>> +	pcie->bpmp = tegra_bpmp_get(dev);
>>>>>>>>>> +	if (IS_ERR(pcie->bpmp))
>>>>>>>>>> +		return PTR_ERR(pcie->bpmp);
>>>>>>>>>> +
>>>>>>>>>> +	platform_set_drvdata(pdev, pcie);
>>>>>>>>>> +
>>>>>>>>>> +	if (pcie->mode == DW_PCIE_RC_TYPE) {
>>>>>>>>>> +		ret = tegra_pcie_config_rp(pcie);
>>>>>>>>>> +		if (ret && ret != -ENOMEDIUM)
>>>>>>>>>> +			goto fail;
>>>>>>>>>> +		else
>>>>>>>>>> +			return 0;
>>>>>>>>>
>>>>>>>>> So if the link is not up we still go ahead and make probe
>>>>>>>>> succeed. What for ?
>>>>>>>> We may need root port to be available to support hot-plugging of
>>>>>>>> endpoint devices, so, we don't fail the probe.
>>>>>>>
>>>>>>> We need it or we don't. If you do support hotplugging of endpoint
>>>>>>> devices point me at the code, otherwise link up failure means
>>>>>>> failure to probe.
>>>>>> Currently hotplugging of endpoint is not supported, but it is one of
>>>>>> the use cases that we may add support for in future.
>>>>>
>>>>> You should elaborate on this, I do not understand what you mean,
>>>>> either the root port(s) supports hotplug or it does not.
>>>>>
>>>>>> But, why should we fail probe if link up doesn't happen? As such,
>>>>>> nothing went wrong in terms of root port initialization right?  I
>>>>>> checked other DWC based implementations and following are not failing
>>>>>> the probe pci-dra7xx.c, pcie-armada8k.c, pcie-artpec6.c, pcie-histb.c,
>>>>>> pcie-kirin.c, pcie-spear13xx.c, pci-exynos.c, pci-imx6.c,
>>>>>> pci-keystone.c, pci-layerscape.c
>>>>>>
>>>>>> Although following do fail the probe if link is not up.  pcie-qcom.c,
>>>>>> pcie-uniphier.c, pci-meson.c
>>>>>>
>>>>>> So, to me, it looks more like a choice we can make whether to fail the
>>>>>> probe or not and in this case we are choosing not to fail.
>>>>>
>>>>> I disagree. I had an offline chat with Bjorn and whether link-up should
>>>>> fail the probe or not depends on whether the root port(s) is hotplug
>>>>> capable or not and this in turn relies on the root port "Slot
>>>>> implemented" bit in the PCI Express capabilities register.
>>>>>
>>>>> It is a choice but it should be based on evidence.
>>>>>
>>>>> Lorenzo
>>>> With Bjorn's latest comment on top of this, I think we are good not to fail
>>>> the probe here.
>>>>
>>>> - Vidya Sagar
>>>>>
>>>>
>>


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH V13 12/12] PCI: tegra: Add Tegra194 PCIe support
  2019-08-05 16:54                     ` Vidya Sagar
@ 2019-08-06 14:51                       ` Lorenzo Pieralisi
  0 siblings, 0 replies; 34+ messages in thread
From: Lorenzo Pieralisi @ 2019-08-06 14:51 UTC (permalink / raw)
  To: Vidya Sagar
  Cc: bhelgaas, robh+dt, mark.rutland, thierry.reding, jonathanh,
	kishon, catalin.marinas, will.deacon, jingoohan1,
	gustavo.pimentel, digetx, mperttunen, linux-pci, devicetree,
	linux-tegra, linux-kernel, linux-arm-kernel, kthota, mmaddireddy,
	sagar.tv

On Mon, Aug 05, 2019 at 10:24:42PM +0530, Vidya Sagar wrote:

[...]

> > > > IRQs are enabled when you call a suspend_noirq() callback, so the
> > > > blocking API can be used as long as the IRQ descriptor backing
> > > > the IRQ that will wake-up the blocked call is marked as
> > > > IRQF_NO_SUSPEND.
> > > > 
> > > > The problem is not IRQs enabled/disabled at CPU level, the problem is
> > > > the IRQ descriptor of the IRQ required to handle the blocking BPMP call,
> > > > mark it as IRQF_NO_SUSPEND and remove the tegra_bpmp_transfer_atomic()
> > > > call from this code (or please give me a concrete example pinpointing
> > > > why it is needed).
> > > Ideally, using tegra_bpmp_transfer() alone in all paths (.probe() as
> > > well as .resume_noirq()) should have worked as the corresponding IRQ
> > > is already flagged as IRQF_NO_SUSPEND, but, because of the way BPMP-FW
> > > driver in kernel making its interface available through
> > > .resume_early(), tegra_bpmp_transfer() wasn't working as expected and
> > > I pushed a patch (CC'ing you) at
> > > http://patchwork.ozlabs.org/patch/1140973/ to make it .resume_noirq()
> > > from .resume_early().  With that in place, we can just use
> > > tegra_bpmp_trasnfer().  I'll push a new patch with this change once my
> > > BPMP-FW driver patch is approved.
> > 
> > Does this leave you with a resume_noirq() callbacks ordering issue to
> > sort out ?
> Not really.
> 
> > 
> > a.k.a How will you guarantee that the BPMP will resume before the host
> > bridge ?
> It is already taken care of in the following way.  PCIe controller's
> device-tree node has an entry with a phandle of BPMP-FW's node to get
> a handle of it and PCIe driver uses tegra_bpmp_get() API for that.
> This API returns -EPROBE_DEFER if BPMP-FW's driver is not ready yet,
> which guarantees that PCIe driver gets loaded only after BPMP-FW's
> driver and this order is followed during noirq phase also.

OK, thanks, this makes much more sense than the original code.

Lorenzo

> > Thanks,
> > Lorenzo
> > 
> > > Thanks,
> > > Vidya Sagar
> > > > 
> > > > Thanks,
> > > > Lorenzo
> > > > 
> > > > > I'll go ahead and make next patch series with this if this looks fine to you.
> > > > > 
> > > > > > 
> > > > > > > > Actually, if tegra_bpmp_transfer() requires IRQs to be enabled you may
> > > > > > > > even end up in a situation where that blocking call does not wake up
> > > > > > > > because the IRQ in question was disabled in the NOIRQ suspend/resume
> > > > > > > > phase.
> > > > > > > > 
> > > > > > > > [...]
> > > > > > > > 
> > > > > > > > > > > +static int tegra_pcie_dw_probe(struct platform_device *pdev)
> > > > > > > > > > > +{
> > > > > > > > > > > +	const struct tegra_pcie_soc *data;
> > > > > > > > > > > +	struct device *dev = &pdev->dev;
> > > > > > > > > > > +	struct resource *atu_dma_res;
> > > > > > > > > > > +	struct tegra_pcie_dw *pcie;
> > > > > > > > > > > +	struct resource *dbi_res;
> > > > > > > > > > > +	struct pcie_port *pp;
> > > > > > > > > > > +	struct dw_pcie *pci;
> > > > > > > > > > > +	struct phy **phys;
> > > > > > > > > > > +	char *name;
> > > > > > > > > > > +	int ret;
> > > > > > > > > > > +	u32 i;
> > > > > > > > > > > +
> > > > > > > > > > > +	pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL);
> > > > > > > > > > > +	if (!pcie)
> > > > > > > > > > > +		return -ENOMEM;
> > > > > > > > > > > +
> > > > > > > > > > > +	pci = &pcie->pci;
> > > > > > > > > > > +	pci->dev = &pdev->dev;
> > > > > > > > > > > +	pci->ops = &tegra_dw_pcie_ops;
> > > > > > > > > > > +	pp = &pci->pp;
> > > > > > > > > > > +	pcie->dev = &pdev->dev;
> > > > > > > > > > > +
> > > > > > > > > > > +	data = (struct tegra_pcie_soc *)of_device_get_match_data(dev);
> > > > > > > > > > > +	if (!data)
> > > > > > > > > > > +		return -EINVAL;
> > > > > > > > > > > +	pcie->mode = (enum dw_pcie_device_mode)data->mode;
> > > > > > > > > > > +
> > > > > > > > > > > +	ret = tegra_pcie_dw_parse_dt(pcie);
> > > > > > > > > > > +	if (ret < 0) {
> > > > > > > > > > > +		dev_err(dev, "Failed to parse device tree: %d\n", ret);
> > > > > > > > > > > +		return ret;
> > > > > > > > > > > +	}
> > > > > > > > > > > +
> > > > > > > > > > > +	pcie->pex_ctl_supply = devm_regulator_get(dev, "vddio-pex-ctl");
> > > > > > > > > > > +	if (IS_ERR(pcie->pex_ctl_supply)) {
> > > > > > > > > > > +		dev_err(dev, "Failed to get regulator: %ld\n",
> > > > > > > > > > > +			PTR_ERR(pcie->pex_ctl_supply));
> > > > > > > > > > > +		return PTR_ERR(pcie->pex_ctl_supply);
> > > > > > > > > > > +	}
> > > > > > > > > > > +
> > > > > > > > > > > +	pcie->core_clk = devm_clk_get(dev, "core");
> > > > > > > > > > > +	if (IS_ERR(pcie->core_clk)) {
> > > > > > > > > > > +		dev_err(dev, "Failed to get core clock: %ld\n",
> > > > > > > > > > > +			PTR_ERR(pcie->core_clk));
> > > > > > > > > > > +		return PTR_ERR(pcie->core_clk);
> > > > > > > > > > > +	}
> > > > > > > > > > > +
> > > > > > > > > > > +	pcie->appl_res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
> > > > > > > > > > > +						      "appl");
> > > > > > > > > > > +	if (!pcie->appl_res) {
> > > > > > > > > > > +		dev_err(dev, "Failed to find \"appl\" region\n");
> > > > > > > > > > > +		return PTR_ERR(pcie->appl_res);
> > > > > > > > > > > +	}
> > > > > > > > > > > +	pcie->appl_base = devm_ioremap_resource(dev, pcie->appl_res);
> > > > > > > > > > > +	if (IS_ERR(pcie->appl_base))
> > > > > > > > > > > +		return PTR_ERR(pcie->appl_base);
> > > > > > > > > > > +
> > > > > > > > > > > +	pcie->core_apb_rst = devm_reset_control_get(dev, "apb");
> > > > > > > > > > > +	if (IS_ERR(pcie->core_apb_rst)) {
> > > > > > > > > > > +		dev_err(dev, "Failed to get APB reset: %ld\n",
> > > > > > > > > > > +			PTR_ERR(pcie->core_apb_rst));
> > > > > > > > > > > +		return PTR_ERR(pcie->core_apb_rst);
> > > > > > > > > > > +	}
> > > > > > > > > > > +
> > > > > > > > > > > +	phys = devm_kcalloc(dev, pcie->phy_count, sizeof(*phys), GFP_KERNEL);
> > > > > > > > > > > +	if (!phys)
> > > > > > > > > > > +		return PTR_ERR(phys);
> > > > > > > > > > > +
> > > > > > > > > > > +	for (i = 0; i < pcie->phy_count; i++) {
> > > > > > > > > > > +		name = kasprintf(GFP_KERNEL, "p2u-%u", i);
> > > > > > > > > > > +		if (!name) {
> > > > > > > > > > > +			dev_err(dev, "Failed to create P2U string\n");
> > > > > > > > > > > +			return -ENOMEM;
> > > > > > > > > > > +		}
> > > > > > > > > > > +		phys[i] = devm_phy_get(dev, name);
> > > > > > > > > > > +		kfree(name);
> > > > > > > > > > > +		if (IS_ERR(phys[i])) {
> > > > > > > > > > > +			ret = PTR_ERR(phys[i]);
> > > > > > > > > > > +			dev_err(dev, "Failed to get PHY: %d\n", ret);
> > > > > > > > > > > +			return ret;
> > > > > > > > > > > +		}
> > > > > > > > > > > +	}
> > > > > > > > > > > +
> > > > > > > > > > > +	pcie->phys = phys;
> > > > > > > > > > > +
> > > > > > > > > > > +	dbi_res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi");
> > > > > > > > > > > +	if (!dbi_res) {
> > > > > > > > > > > +		dev_err(dev, "Failed to find \"dbi\" region\n");
> > > > > > > > > > > +		return PTR_ERR(dbi_res);
> > > > > > > > > > > +	}
> > > > > > > > > > > +	pcie->dbi_res = dbi_res;
> > > > > > > > > > > +
> > > > > > > > > > > +	pci->dbi_base = devm_ioremap_resource(dev, dbi_res);
> > > > > > > > > > > +	if (IS_ERR(pci->dbi_base))
> > > > > > > > > > > +		return PTR_ERR(pci->dbi_base);
> > > > > > > > > > > +
> > > > > > > > > > > +	/* Tegra HW locates DBI2 at a fixed offset from DBI */
> > > > > > > > > > > +	pci->dbi_base2 = pci->dbi_base + 0x1000;
> > > > > > > > > > > +
> > > > > > > > > > > +	atu_dma_res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
> > > > > > > > > > > +						   "atu_dma");
> > > > > > > > > > > +	if (!atu_dma_res) {
> > > > > > > > > > > +		dev_err(dev, "Failed to find \"atu_dma\" region\n");
> > > > > > > > > > > +		return PTR_ERR(atu_dma_res);
> > > > > > > > > > > +	}
> > > > > > > > > > > +	pcie->atu_dma_res = atu_dma_res;
> > > > > > > > > > > +	pci->atu_base = devm_ioremap_resource(dev, atu_dma_res);
> > > > > > > > > > > +	if (IS_ERR(pci->atu_base))
> > > > > > > > > > > +		return PTR_ERR(pci->atu_base);
> > > > > > > > > > > +
> > > > > > > > > > > +	pcie->core_rst = devm_reset_control_get(dev, "core");
> > > > > > > > > > > +	if (IS_ERR(pcie->core_rst)) {
> > > > > > > > > > > +		dev_err(dev, "Failed to get core reset: %ld\n",
> > > > > > > > > > > +			PTR_ERR(pcie->core_rst));
> > > > > > > > > > > +		return PTR_ERR(pcie->core_rst);
> > > > > > > > > > > +	}
> > > > > > > > > > > +
> > > > > > > > > > > +	pp->irq = platform_get_irq_byname(pdev, "intr");
> > > > > > > > > > > +	if (!pp->irq) {
> > > > > > > > > > > +		dev_err(dev, "Failed to get \"intr\" interrupt\n");
> > > > > > > > > > > +		return -ENODEV;
> > > > > > > > > > > +	}
> > > > > > > > > > > +
> > > > > > > > > > > +	ret = devm_request_irq(dev, pp->irq, tegra_pcie_irq_handler,
> > > > > > > > > > > +			       IRQF_SHARED, "tegra-pcie-intr", pcie);
> > > > > > > > > > > +	if (ret) {
> > > > > > > > > > > +		dev_err(dev, "Failed to request IRQ %d: %d\n", pp->irq, ret);
> > > > > > > > > > > +		return ret;
> > > > > > > > > > > +	}
> > > > > > > > > > > +
> > > > > > > > > > > +	pcie->bpmp = tegra_bpmp_get(dev);
> > > > > > > > > > > +	if (IS_ERR(pcie->bpmp))
> > > > > > > > > > > +		return PTR_ERR(pcie->bpmp);
> > > > > > > > > > > +
> > > > > > > > > > > +	platform_set_drvdata(pdev, pcie);
> > > > > > > > > > > +
> > > > > > > > > > > +	if (pcie->mode == DW_PCIE_RC_TYPE) {
> > > > > > > > > > > +		ret = tegra_pcie_config_rp(pcie);
> > > > > > > > > > > +		if (ret && ret != -ENOMEDIUM)
> > > > > > > > > > > +			goto fail;
> > > > > > > > > > > +		else
> > > > > > > > > > > +			return 0;
> > > > > > > > > > 
> > > > > > > > > > So if the link is not up we still go ahead and make probe
> > > > > > > > > > succeed. What for ?
> > > > > > > > > We may need root port to be available to support hot-plugging of
> > > > > > > > > endpoint devices, so, we don't fail the probe.
> > > > > > > > 
> > > > > > > > We need it or we don't. If you do support hotplugging of endpoint
> > > > > > > > devices point me at the code, otherwise link up failure means
> > > > > > > > failure to probe.
> > > > > > > Currently hotplugging of endpoint is not supported, but it is one of
> > > > > > > the use cases that we may add support for in future.
> > > > > > 
> > > > > > You should elaborate on this, I do not understand what you mean,
> > > > > > either the root port(s) supports hotplug or it does not.
> > > > > > 
> > > > > > > But, why should we fail probe if link up doesn't happen? As such,
> > > > > > > nothing went wrong in terms of root port initialization right?  I
> > > > > > > checked other DWC based implementations and following are not failing
> > > > > > > the probe pci-dra7xx.c, pcie-armada8k.c, pcie-artpec6.c, pcie-histb.c,
> > > > > > > pcie-kirin.c, pcie-spear13xx.c, pci-exynos.c, pci-imx6.c,
> > > > > > > pci-keystone.c, pci-layerscape.c
> > > > > > > 
> > > > > > > Although following do fail the probe if link is not up.  pcie-qcom.c,
> > > > > > > pcie-uniphier.c, pci-meson.c
> > > > > > > 
> > > > > > > So, to me, it looks more like a choice we can make whether to fail the
> > > > > > > probe or not and in this case we are choosing not to fail.
> > > > > > 
> > > > > > I disagree. I had an offline chat with Bjorn and whether link-up should
> > > > > > fail the probe or not depends on whether the root port(s) is hotplug
> > > > > > capable or not and this in turn relies on the root port "Slot
> > > > > > implemented" bit in the PCI Express capabilities register.
> > > > > > 
> > > > > > It is a choice but it should be based on evidence.
> > > > > > 
> > > > > > Lorenzo
> > > > > With Bjorn's latest comment on top of this, I think we are good not to fail
> > > > > the probe here.
> > > > > 
> > > > > - Vidya Sagar
> > > > > > 
> > > > > 
> > > 
> 

^ permalink raw reply	[flat|nested] 34+ messages in thread

end of thread, other threads:[~2019-08-06 14:51 UTC | newest]

Thread overview: 34+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-07-10  6:22 [PATCH V13 00/12] PCI: tegra: Add Tegra194 PCIe support Vidya Sagar
2019-07-10  6:22 ` [PATCH V13 01/12] PCI: Add #defines for some of PCIe spec r4.0 features Vidya Sagar
2019-07-10 20:14   ` Bjorn Helgaas
2019-07-10  6:22 ` [PATCH V13 02/12] PCI: Disable MSI for Tegra root ports Vidya Sagar
2019-07-10  6:22 ` [PATCH V13 03/12] PCI: dwc: Perform dbi regs write lock towards the end Vidya Sagar
2019-07-10  6:22 ` [PATCH V13 04/12] PCI: dwc: Move config space capability search API Vidya Sagar
2019-07-10  6:22 ` [PATCH V13 05/12] PCI: dwc: Add ext " Vidya Sagar
2019-07-10 10:37   ` Lorenzo Pieralisi
2019-07-10 11:27     ` Vidya Sagar
2019-07-10 14:19       ` Lorenzo Pieralisi
2019-07-10  6:22 ` [PATCH V13 06/12] dt-bindings: PCI: designware: Add binding for CDM register check Vidya Sagar
2019-07-10  6:22 ` [PATCH V13 07/12] PCI: dwc: Add support to enable " Vidya Sagar
2019-07-10  6:22 ` [PATCH V13 08/12] dt-bindings: Add PCIe supports-clkreq property Vidya Sagar
2019-07-10 15:28   ` Lorenzo Pieralisi
2019-07-10 17:14     ` Vidya Sagar
2019-07-10  6:22 ` [PATCH V13 09/12] dt-bindings: PCI: tegra: Add device tree support for Tegra194 Vidya Sagar
2019-07-10  6:22 ` [PATCH V13 10/12] dt-bindings: PHY: P2U: Add Tegra194 P2U block Vidya Sagar
2019-07-10  6:22 ` [PATCH V13 11/12] phy: tegra: Add PCIe PIPE2UPHY support Vidya Sagar
2019-07-10  6:22 ` [PATCH V13 12/12] PCI: tegra: Add Tegra194 PCIe support Vidya Sagar
2019-07-10 17:02   ` Lorenzo Pieralisi
2019-07-10 17:26     ` Vidya Sagar
2019-07-11 12:54   ` Lorenzo Pieralisi
2019-07-12 15:32     ` Vidya Sagar
2019-07-12 16:07       ` Lorenzo Pieralisi
2019-07-13  7:04         ` Vidya Sagar
2019-07-16 11:22           ` Lorenzo Pieralisi
2019-07-16 19:00             ` Bjorn Helgaas
2019-07-23 14:28               ` Vidya Sagar
2019-07-23 14:44             ` Vidya Sagar
2019-07-30 15:49               ` Lorenzo Pieralisi
2019-08-02 12:06                 ` Vidya Sagar
2019-08-05 14:01                   ` Lorenzo Pieralisi
2019-08-05 16:54                     ` Vidya Sagar
2019-08-06 14:51                       ` Lorenzo Pieralisi

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).