Linux-PCI Archive on lore.kernel.org
* [PATCH v3 0/2] PCI: Add Intel PCIe Driver and respective dt-binding yaml file
@ 2019-09-04 10:10 Dilip Kota
  2019-09-04 10:10 ` [PATCH v3 1/2] dt-bindings: PCI: intel: Add YAML schemas for the PCIe RC controller Dilip Kota
  2019-09-04 10:10 ` [PATCH v3 2/2] dwc: PCI: intel: Intel PCIe RC controller driver Dilip Kota
  0 siblings, 2 replies; 37+ messages in thread
From: Dilip Kota @ 2019-09-04 10:10 UTC (permalink / raw)
  To: jingoohan1, gustavo.pimentel, lorenzo.pieralisi, robh,
	martin.blumenstingl, linux-pci, hch, devicetree
  Cc: linux-kernel, andriy.shevchenko, cheol.yong.kim, chuanhua.lei,
	qi-ming.wu, Dilip Kota

The Intel PCIe controller is based on the Synopsys DesignWare
PCIe core; it uses the DesignWare framework for host
initialization along with Intel application-specific register
configuration.

Changes on v3:
        Compared to v2, the map_irq() patch is removed as it is no
          longer required for the Intel PCIe driver; the driver now does
          platform specific interrupt configuration during core
          initialization. The subject line was changed accordingly.
        Addressed v2 review comments on the DT binding and the PCIe driver

Dilip Kota (2):
  dt-bindings: PCI: intel: Add YAML schemas for the PCIe RC controller
  dwc: PCI: intel: Intel PCIe RC controller driver

 .../devicetree/bindings/pci/intel,lgm-pcie.yaml    | 137 ++++
 drivers/pci/controller/dwc/Kconfig                 |  13 +
 drivers/pci/controller/dwc/Makefile                |   1 +
 drivers/pci/controller/dwc/pcie-intel-axi.c        | 840 +++++++++++++++++++++
 4 files changed, 991 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/pci/intel,lgm-pcie.yaml
 create mode 100644 drivers/pci/controller/dwc/pcie-intel-axi.c

-- 
2.11.0


^ permalink raw reply	[flat|nested] 37+ messages in thread

* [PATCH v3 1/2] dt-bindings: PCI: intel: Add YAML schemas for the PCIe RC controller
  2019-09-04 10:10 [PATCH v3 0/2] PCI: Add Intel PCIe Driver and respective dt-binding yaml file Dilip Kota
@ 2019-09-04 10:10 ` Dilip Kota
  2019-09-05  2:23   ` Chuan Hua, Lei
                     ` (2 more replies)
  2019-09-04 10:10 ` [PATCH v3 2/2] dwc: PCI: intel: Intel PCIe RC controller driver Dilip Kota
  1 sibling, 3 replies; 37+ messages in thread
From: Dilip Kota @ 2019-09-04 10:10 UTC (permalink / raw)
  To: jingoohan1, gustavo.pimentel, lorenzo.pieralisi, robh,
	martin.blumenstingl, linux-pci, hch, devicetree
  Cc: linux-kernel, andriy.shevchenko, cheol.yong.kim, chuanhua.lei,
	qi-ming.wu, Dilip Kota

The Intel PCIe RC controller is a Synopsys DesignWare based
PCIe core. Add YAML schemas for the PCIe controller in RC mode
present in the Intel Universal Gateway SoC.

Signed-off-by: Dilip Kota <eswara.kota@linux.intel.com>
---
changes on v3:
	Add the appropriate SPDX License-Identifier
	Rename 'intel,rst-interval' to 'reset-assert-ms'
	Add additionalProperties: false
	Rename phy-names to 'pciephy'
	Remove the dtsi node split of SoC and board in the example
	Add #interrupt-cells = <1>; or else interrupt parsing will fail
	Name yaml file with compatible name

 .../devicetree/bindings/pci/intel,lgm-pcie.yaml    | 137 +++++++++++++++++++++
 1 file changed, 137 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/pci/intel,lgm-pcie.yaml

diff --git a/Documentation/devicetree/bindings/pci/intel,lgm-pcie.yaml b/Documentation/devicetree/bindings/pci/intel,lgm-pcie.yaml
new file mode 100644
index 000000000000..5e5cc7fd66cd
--- /dev/null
+++ b/Documentation/devicetree/bindings/pci/intel,lgm-pcie.yaml
@@ -0,0 +1,137 @@
+# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/pci/intel,lgm-pcie.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Intel AXI bus based PCI express root complex
+
+maintainers:
+  - Dilip Kota <eswara.kota@linux.intel.com>
+
+properties:
+  compatible:
+    const: intel,lgm-pcie
+
+  device_type:
+    const: pci
+
+  "#address-cells":
+    const: 3
+
+  "#size-cells":
+    const: 2
+
+  reg:
+    items:
+      - description: Controller control and status registers.
+      - description: PCIe configuration registers.
+      - description: Controller application registers.
+
+  reg-names:
+    items:
+      - const: dbi
+      - const: config
+      - const: app
+
+  ranges:
+    description: Ranges for the PCI memory and I/O regions.
+
+  resets:
+    maxItems: 1
+
+  clocks:
+    description: PCIe registers interface clock.
+
+  phys:
+    maxItems: 1
+
+  phy-names:
+    const: pciephy
+
+  reset-gpios:
+    maxItems: 1
+
+  num-lanes:
+    description: Number of lanes to use for this port.
+
+  linux,pci-domain:
+    $ref: /schemas/types.yaml#/definitions/uint32
+    description: PCI domain ID.
+
+  interrupts:
+    description: PCIe core integrated miscellaneous interrupt.
+
+  '#interrupt-cells':
+    const: 1
+
+  interrupt-map-mask:
+    description: Standard PCI IRQ mapping properties.
+
+  interrupt-map:
+    description: Standard PCI IRQ mapping properties.
+
+  max-link-speed:
+    description: Specify PCI Gen for link capability.
+
+  bus-range:
+    description: Range of bus numbers associated with this controller.
+
+  reset-assert-ms:
+    description: |
+      Device reset interval in ms.
+      Some devices need an interval of up to 500ms. By default it is 100ms.
+
+required:
+  - compatible
+  - device_type
+  - reg
+  - reg-names
+  - ranges
+  - resets
+  - clocks
+  - phys
+  - phy-names
+  - reset-gpios
+  - num-lanes
+  - linux,pci-domain
+  - interrupts
+  - interrupt-map
+  - interrupt-map-mask
+
+additionalProperties: false
+
+examples:
+  - |
+    pcie10:pcie@d0e00000 {
+      compatible = "intel,lgm-pcie";
+      device_type = "pci";
+      #address-cells = <3>;
+      #size-cells = <2>;
+      reg = <
+            0xd0e00000 0x1000
+            0xd2000000 0x800000
+            0xd0a41000 0x1000
+            >;
+      reg-names = "dbi", "config", "app";
+      linux,pci-domain = <0>;
+      max-link-speed = <4>;
+      bus-range = <0x00 0x08>;
+      interrupt-parent = <&ioapic1>;
+      interrupts = <67 1>;
+      #interrupt-cells = <1>;
+      interrupt-map-mask = <0 0 0 0x7>;
+      interrupt-map = <0 0 0 1 &ioapic1 27 1>,
+                      <0 0 0 2 &ioapic1 28 1>,
+                      <0 0 0 3 &ioapic1 29 1>,
+                      <0 0 0 4 &ioapic1 30 1>;
+      ranges = <0x02000000 0 0xd4000000 0xd4000000 0 0x04000000>;
+      resets = <&rcu0 0x50 0>;
+      clocks = <&cgu0 LGM_GCLK_PCIE10>;
+      phys = <&cb0phy0>;
+      phy-names = "pciephy";
+      status = "okay";
+      reset-assert-ms = <500>;
+      reset-gpios = <&gpio0 3 GPIO_ACTIVE_LOW>;
+      num-lanes = <2>;
+    };
-- 
2.11.0


^ permalink raw reply	[flat|nested] 37+ messages in thread

* [PATCH v3 2/2] dwc: PCI: intel: Intel PCIe RC controller driver
  2019-09-04 10:10 [PATCH v3 0/2] PCI: Add Intel PCIe Driver and respective dt-binding yaml file Dilip Kota
  2019-09-04 10:10 ` [PATCH v3 1/2] dt-bindings: PCI: intel: Add YAML schemas for the PCIe RC controller Dilip Kota
@ 2019-09-04 10:10 ` Dilip Kota
  2019-09-04 13:05   ` Andy Shevchenko
                     ` (2 more replies)
  1 sibling, 3 replies; 37+ messages in thread
From: Dilip Kota @ 2019-09-04 10:10 UTC (permalink / raw)
  To: jingoohan1, gustavo.pimentel, lorenzo.pieralisi, robh,
	martin.blumenstingl, linux-pci, hch, devicetree
  Cc: linux-kernel, andriy.shevchenko, cheol.yong.kim, chuanhua.lei,
	qi-ming.wu, Dilip Kota

Add support for the PCIe RC controller on the Intel Universal
Gateway SoC. The PCIe controller is based on the Synopsys
DesignWare PCIe core.

Signed-off-by: Dilip Kota <eswara.kota@linux.intel.com>
---
changes on v3:
	Rename PCIe app logic registers with PCIE_APP prefix.
	PCIE_IOP_CTRL configuration is not required. Remove respective code.
	Remove wrapper functions for clk enable/disable APIs.
	Use platform_get_resource_byname() instead of
	  devm_platform_ioremap_resource() to be similar with DWC framework.
	Rename phy name to "pciephy".
	Modify debug message in msi_init() callback to be more specific.
	Remove map_irq() callback.
	Enable the INTx interrupts at the time of PCIe initialization.
	Reduce memory fragmentation by using variable "struct dw_pcie pci"
	  instead of allocating memory.
	Reduce the delay to 100us during endpoint initialization in
	  intel_pcie_ep_rst_init().
	Call dw_pcie_host_deinit() during remove() instead of directly
	  calling PCIe core APIs.
	Rename "intel,rst-interval" to "reset-assert-ms".
	Remove unused APP logic Interrupt bit macro definitions.
	Use dwc framework's dw_pcie_setup_rc() for PCIe host specific
	  configuration instead of redefining the same functionality in
	  the driver.
	Move the whole DT parsing specific code to intel_pcie_get_resources()

 drivers/pci/controller/dwc/Kconfig          |  13 +
 drivers/pci/controller/dwc/Makefile         |   1 +
 drivers/pci/controller/dwc/pcie-intel-axi.c | 840 ++++++++++++++++++++++++++++
 3 files changed, 854 insertions(+)
 create mode 100644 drivers/pci/controller/dwc/pcie-intel-axi.c

diff --git a/drivers/pci/controller/dwc/Kconfig b/drivers/pci/controller/dwc/Kconfig
index 6ea778ae4877..e44b9b6a6390 100644
--- a/drivers/pci/controller/dwc/Kconfig
+++ b/drivers/pci/controller/dwc/Kconfig
@@ -82,6 +82,19 @@ config PCIE_DW_PLAT_EP
 	  order to enable device-specific features PCI_DW_PLAT_EP must be
 	  selected.
 
+config PCIE_INTEL_AXI
+        bool "Intel AHB/AXI PCIe host controller support"
+        depends on PCI_MSI
+        depends on PCI
+        depends on OF
+        select PCIE_DW_HOST
+        help
+          Say 'Y' here to enable support for Intel AHB/AXI PCIe Host
+	  controller driver.
+	  The Intel PCIe controller is based on the Synopsys Designware
+	  pcie core and therefore uses the Designware core functions to
+	  implement the driver.
+
 config PCI_EXYNOS
 	bool "Samsung Exynos PCIe controller"
 	depends on SOC_EXYNOS5440 || COMPILE_TEST
diff --git a/drivers/pci/controller/dwc/Makefile b/drivers/pci/controller/dwc/Makefile
index b085dfd4fab7..46e656ebdf90 100644
--- a/drivers/pci/controller/dwc/Makefile
+++ b/drivers/pci/controller/dwc/Makefile
@@ -3,6 +3,7 @@ obj-$(CONFIG_PCIE_DW) += pcie-designware.o
 obj-$(CONFIG_PCIE_DW_HOST) += pcie-designware-host.o
 obj-$(CONFIG_PCIE_DW_EP) += pcie-designware-ep.o
 obj-$(CONFIG_PCIE_DW_PLAT) += pcie-designware-plat.o
+obj-$(CONFIG_PCIE_INTEL_AXI) += pcie-intel-axi.o
 obj-$(CONFIG_PCI_DRA7XX) += pci-dra7xx.o
 obj-$(CONFIG_PCI_EXYNOS) += pci-exynos.o
 obj-$(CONFIG_PCI_IMX6) += pci-imx6.o
diff --git a/drivers/pci/controller/dwc/pcie-intel-axi.c b/drivers/pci/controller/dwc/pcie-intel-axi.c
new file mode 100644
index 000000000000..75607ce03ebf
--- /dev/null
+++ b/drivers/pci/controller/dwc/pcie-intel-axi.c
@@ -0,0 +1,840 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * PCIe host controller driver for Intel AXI PCIe Bridge
+ *
+ * Copyright (c) 2019 Intel Corporation.
+ */
+
+#include <linux/bitfield.h>
+#include <linux/clk.h>
+#include <linux/gpio/consumer.h>
+#include <linux/interrupt.h>
+#include <linux/iopoll.h>
+#include <linux/of_irq.h>
+#include <linux/of_pci.h>
+#include <linux/of_platform.h>
+#include <linux/phy/phy.h>
+#include <linux/platform_device.h>
+#include <linux/reset.h>
+
+#include "../../pci.h"
+#include "pcie-designware.h"
+
+#define PCIE_CCRID				0x8
+
+#define PCIE_LCAP				0x7C
+#define PCIE_LCAP_MAX_LINK_SPEED		GENMASK(3, 0)
+#define PCIE_LCAP_MAX_LENGTH_WIDTH		GENMASK(9, 4)
+
+/* Link Control and Status Register */
+#define PCIE_LCTLSTS				0x80
+#define PCIE_LCTLSTS_ASPM_ENABLE		GENMASK(1, 0)
+#define PCIE_LCTLSTS_RCB128			BIT(3)
+#define PCIE_LCTLSTS_LINK_DISABLE		BIT(4)
+#define PCIE_LCTLSTS_COM_CLK_CFG		BIT(6)
+#define PCIE_LCTLSTS_HW_AW_DIS			BIT(9)
+#define PCIE_LCTLSTS_LINK_SPEED			GENMASK(19, 16)
+#define PCIE_LCTLSTS_NEGOTIATED_LINK_WIDTH	GENMASK(25, 20)
+#define PCIE_LCTLSTS_SLOT_CLK_CFG		BIT(28)
+
+#define PCIE_LCTLSTS2				0xA0
+#define PCIE_LCTLSTS2_TGT_LINK_SPEED		GENMASK(3, 0)
+#define PCIE_LCTLSTS2_TGT_LINK_SPEED_25GT	0x1
+#define PCIE_LCTLSTS2_TGT_LINK_SPEED_5GT	0x2
+#define PCIE_LCTLSTS2_TGT_LINK_SPEED_8GT	0x3
+#define PCIE_LCTLSTS2_TGT_LINK_SPEED_16GT	0x4
+#define PCIE_LCTLSTS2_HW_AUTO_DIS		BIT(5)
+
+/* Ack Frequency Register */
+#define PCIE_AFR				0x70C
+#define PCIE_AFR_FTS_NUM			GENMASK(15, 8)
+#define PCIE_AFR_COM_FTS_NUM			GENMASK(23, 16)
+#define PCIE_AFR_GEN12_FTS_NUM_DFT		(SZ_128 - 1)
+#define PCIE_AFR_GEN3_FTS_NUM_DFT		180
+#define PCIE_AFR_GEN4_FTS_NUM_DFT		196
+
+#define PCIE_PLCR_DLL_LINK_EN			BIT(5)
+#define PCIE_PORT_LOGIC_FTS			GENMASK(7, 0)
+#define PCIE_PORT_LOGIC_DFT_FTS_NUM		(SZ_128 - 1)
+
+#define PCIE_MISC_CTRL				0x8BC
+#define PCIE_MISC_CTRL_DBI_RO_WR_EN		BIT(0)
+
+#define PCIE_MULTI_LANE_CTRL			0x8C0
+#define PCIE_UPCONFIG_SUPPORT			BIT(7)
+#define PCIE_DIRECT_LINK_WIDTH_CHANGE		BIT(6)
+#define PCIE_TARGET_LINK_WIDTH			GENMASK(5, 0)
+
+/* APP RC Core Control Register */
+#define PCIE_APP_CCR				0x10
+#define PCIE_APP_CCR_LTSSM_ENABLE		BIT(0)
+
+/* PCIe Message Control */
+#define PCIE_APP_MSG_CR				0x30
+#define PCIE_APP_MSG_XMT_PM_TURNOFF		BIT(0)
+
+/* PCIe Power Management Control */
+#define PCIE_APP_PMC				0x44
+#define PCIE_APP_PMC_IN_L2			BIT(20)
+
+/* Interrupt Enable Register */
+#define PCIE_APP_IRNEN				0xF4
+#define PCIE_APP_IRNCR				0xF8
+#define PCIE_APP_IRN_AER_REPORT			BIT(0)
+#define PCIE_APP_IRN_PME			BIT(2)
+#define PCIE_APP_IRN_RX_VDM_MSG			BIT(4)
+#define PCIE_APP_IRN_PM_TO_ACK			BIT(9)
+#define PCIE_APP_IRN_LINK_AUTO_BW_STAT		BIT(11)
+#define PCIE_APP_IRN_BW_MGT			BIT(12)
+#define PCIE_APP_IRN_MSG_LTR			BIT(18)
+#define PCIE_APP_IRN_SYS_ERR_RC			BIT(29)
+
+#define PCIE_APP_INTX_OFST	12
+#define PCIE_APP_IRN_INT	(PCIE_APP_IRN_AER_REPORT | PCIE_APP_IRN_PME | \
+			PCIE_APP_IRN_RX_VDM_MSG | PCIE_APP_IRN_SYS_ERR_RC | \
+			PCIE_APP_IRN_PM_TO_ACK | PCIE_APP_IRN_MSG_LTR | \
+			PCIE_APP_IRN_BW_MGT | PCIE_APP_IRN_LINK_AUTO_BW_STAT | \
+			(PCIE_APP_INTX_OFST + PCI_INTERRUPT_INTA) | \
+			(PCIE_APP_INTX_OFST + PCI_INTERRUPT_INTB) | \
+			(PCIE_APP_INTX_OFST + PCI_INTERRUPT_INTC) | \
+			(PCIE_APP_INTX_OFST + PCI_INTERRUPT_INTD))
+
+#define BUS_IATU_OFFS		SZ_256M
+#define RST_INTRVL_DFT_MS	100
+enum {
+	PCIE_LINK_SPEED_AUTO = 0,
+	PCIE_LINK_SPEED_GEN1,
+	PCIE_LINK_SPEED_GEN2,
+	PCIE_LINK_SPEED_GEN3,
+	PCIE_LINK_SPEED_GEN4,
+};
+
+struct intel_pcie_soc {
+	unsigned int pcie_ver;
+	unsigned int pcie_atu_offset;
+	u32 num_viewport;
+};
+
+struct intel_pcie_port {
+	struct dw_pcie		pci;
+	unsigned int		id; /* Physical RC Index */
+	void __iomem		*app_base;
+	struct gpio_desc	*reset_gpio;
+	u32			rst_interval;
+	u32			max_speed;
+	u32			link_gen;
+	u32			max_width;
+	u32			lanes;
+	struct clk		*core_clk;
+	struct reset_control	*core_rst;
+	struct phy		*phy;
+};
+
+static void pcie_update_bits(void __iomem *base, u32 mask, u32 val, u32 ofs)
+{
+	u32 orig, tmp;
+
+	orig = readl(base + ofs);
+
+	tmp = (orig & ~mask) | (val & mask);
+
+	if (tmp != orig)
+		writel(tmp, base + ofs);
+}
+
+static inline u32 pcie_app_rd(struct intel_pcie_port *lpp, u32 ofs)
+{
+	return readl(lpp->app_base + ofs);
+}
+
+static inline void pcie_app_wr(struct intel_pcie_port *lpp, u32 val, u32 ofs)
+{
+	writel(val, lpp->app_base + ofs);
+}
+
+static void pcie_app_wr_mask(struct intel_pcie_port *lpp,
+			     u32 mask, u32 val, u32 ofs)
+{
+	pcie_update_bits(lpp->app_base, mask, val, ofs);
+}
+
+static inline u32 pcie_rc_cfg_rd(struct intel_pcie_port *lpp, u32 ofs)
+{
+	return dw_pcie_readl_dbi(&lpp->pci, ofs);
+}
+
+static inline void pcie_rc_cfg_wr(struct intel_pcie_port *lpp, u32 val, u32 ofs)
+{
+	dw_pcie_writel_dbi(&lpp->pci, ofs, val);
+}
+
+static void pcie_rc_cfg_wr_mask(struct intel_pcie_port *lpp,
+				u32 mask, u32 val, u32 ofs)
+{
+	pcie_update_bits(lpp->pci.dbi_base, mask, val, ofs);
+}
+
+static void intel_pcie_mem_iatu(struct intel_pcie_port *lpp)
+{
+	struct pcie_port *pp = &lpp->pci.pp;
+	phys_addr_t cpu_addr = pp->mem_base;
+
+	dw_pcie_prog_outbound_atu(&lpp->pci, PCIE_ATU_REGION_INDEX0,
+				  PCIE_ATU_TYPE_MEM, cpu_addr,
+				  pp->mem_base, pp->mem_size);
+}
+
+static void intel_pcie_ltssm_enable(struct intel_pcie_port *lpp)
+{
+	pcie_app_wr_mask(lpp, PCIE_APP_CCR_LTSSM_ENABLE,
+			 PCIE_APP_CCR_LTSSM_ENABLE, PCIE_APP_CCR);
+}
+
+static void intel_pcie_ltssm_disable(struct intel_pcie_port *lpp)
+{
+	pcie_app_wr_mask(lpp, PCIE_APP_CCR_LTSSM_ENABLE, 0, PCIE_APP_CCR);
+}
+
+static const char *pcie_link_gen_to_str(int gen)
+{
+	switch (gen) {
+	case PCIE_LINK_SPEED_GEN1:
+		return "2.5";
+	case PCIE_LINK_SPEED_GEN2:
+		return "5.0";
+	case PCIE_LINK_SPEED_GEN3:
+		return "8.0";
+	case PCIE_LINK_SPEED_GEN4:
+		return "16.0";
+	default:
+		return "???";
+	}
+}
+
+static void intel_pcie_link_setup(struct intel_pcie_port *lpp)
+{
+	u32 val;
+
+	val = pcie_rc_cfg_rd(lpp, PCIE_LCAP);
+	lpp->max_speed = FIELD_GET(PCIE_LCAP_MAX_LINK_SPEED, val);
+	lpp->max_width = FIELD_GET(PCIE_LCAP_MAX_LENGTH_WIDTH, val);
+
+	val = pcie_rc_cfg_rd(lpp, PCIE_LCTLSTS);
+
+	val &= ~(PCIE_LCTLSTS_LINK_DISABLE | PCIE_LCTLSTS_ASPM_ENABLE);
+	val |= (PCIE_LCTLSTS_SLOT_CLK_CFG | PCIE_LCTLSTS_COM_CLK_CFG |
+		PCIE_LCTLSTS_RCB128);
+	pcie_rc_cfg_wr(lpp, val, PCIE_LCTLSTS);
+}
+
+static void intel_pcie_max_speed_setup(struct intel_pcie_port *lpp)
+{
+	u32 reg, val;
+
+	reg = pcie_rc_cfg_rd(lpp, PCIE_LCTLSTS2);
+	switch (lpp->link_gen) {
+	case PCIE_LINK_SPEED_GEN1:
+		reg &= ~PCIE_LCTLSTS2_TGT_LINK_SPEED;
+		reg |= PCIE_LCTLSTS2_HW_AUTO_DIS |
+			PCIE_LCTLSTS2_TGT_LINK_SPEED_25GT;
+		break;
+	case PCIE_LINK_SPEED_GEN2:
+		reg &= ~PCIE_LCTLSTS2_TGT_LINK_SPEED;
+		reg |= PCIE_LCTLSTS2_HW_AUTO_DIS |
+			PCIE_LCTLSTS2_TGT_LINK_SPEED_5GT;
+		break;
+	case PCIE_LINK_SPEED_GEN3:
+		reg &= ~PCIE_LCTLSTS2_TGT_LINK_SPEED;
+		reg |= PCIE_LCTLSTS2_HW_AUTO_DIS |
+			PCIE_LCTLSTS2_TGT_LINK_SPEED_8GT;
+		break;
+	case PCIE_LINK_SPEED_GEN4:
+		reg &= ~PCIE_LCTLSTS2_TGT_LINK_SPEED;
+		reg |= PCIE_LCTLSTS2_HW_AUTO_DIS |
+			PCIE_LCTLSTS2_TGT_LINK_SPEED_16GT;
+		break;
+	default:
+		/* Use hardware capability */
+		val = pcie_rc_cfg_rd(lpp, PCIE_LCAP);
+		val = FIELD_GET(PCIE_LCAP_MAX_LINK_SPEED, val);
+		reg &= ~PCIE_LCTLSTS2_HW_AUTO_DIS;
+		reg |= val;
+		break;
+	}
+	pcie_rc_cfg_wr(lpp, reg, PCIE_LCTLSTS2);
+}
+
+static void intel_pcie_speed_change_enable(struct intel_pcie_port *lpp)
+{
+	u32 mask, val;
+
+	mask = PORT_LOGIC_SPEED_CHANGE | PCIE_PORT_LOGIC_FTS;
+	val = PORT_LOGIC_SPEED_CHANGE | PCIE_PORT_LOGIC_DFT_FTS_NUM;
+
+	pcie_rc_cfg_wr_mask(lpp, mask, val, PCIE_LINK_WIDTH_SPEED_CONTROL);
+}
+
+static void intel_pcie_speed_change_disable(struct intel_pcie_port *lpp)
+{
+	pcie_rc_cfg_wr_mask(lpp, PORT_LOGIC_SPEED_CHANGE, 0,
+			    PCIE_LINK_WIDTH_SPEED_CONTROL);
+}
+
+static void intel_pcie_max_link_width_setup(struct intel_pcie_port *lpp)
+{
+	u32 mask, val;
+
+	/* HW auto bandwidth negotiation must be enabled */
+	pcie_rc_cfg_wr_mask(lpp, PCIE_LCTLSTS_HW_AW_DIS, 0, PCIE_LCTLSTS);
+
+	mask = PCIE_DIRECT_LINK_WIDTH_CHANGE | PCIE_TARGET_LINK_WIDTH;
+	val = PCIE_DIRECT_LINK_WIDTH_CHANGE | lpp->lanes;
+	pcie_rc_cfg_wr_mask(lpp, mask, val, PCIE_MULTI_LANE_CTRL);
+}
+
+static void intel_pcie_port_logic_setup(struct intel_pcie_port *lpp)
+{
+	u32 val, mask, fts;
+
+	switch (lpp->max_speed) {
+	case PCIE_LINK_SPEED_GEN1:
+	case PCIE_LINK_SPEED_GEN2:
+		fts = PCIE_AFR_GEN12_FTS_NUM_DFT;
+		break;
+	case PCIE_LINK_SPEED_GEN3:
+		fts = PCIE_AFR_GEN3_FTS_NUM_DFT;
+		break;
+	case PCIE_LINK_SPEED_GEN4:
+		fts = PCIE_AFR_GEN4_FTS_NUM_DFT;
+		break;
+	default:
+		fts = PCIE_AFR_GEN12_FTS_NUM_DFT;
+		break;
+	}
+	mask = PCIE_AFR_FTS_NUM | PCIE_AFR_COM_FTS_NUM;
+	val = FIELD_PREP(PCIE_AFR_FTS_NUM, fts) |
+	       FIELD_PREP(PCIE_AFR_COM_FTS_NUM, fts);
+	pcie_rc_cfg_wr_mask(lpp, mask, val, PCIE_AFR);
+
+	/* Port Link Control Register */
+	pcie_rc_cfg_wr_mask(lpp, PCIE_PLCR_DLL_LINK_EN,
+			    PCIE_PLCR_DLL_LINK_EN, PCIE_PORT_LINK_CONTROL);
+}
+
+static void intel_pcie_upconfig_setup(struct intel_pcie_port *lpp)
+{
+	pcie_rc_cfg_wr_mask(lpp, PCIE_UPCONFIG_SUPPORT,
+			    PCIE_UPCONFIG_SUPPORT, PCIE_MULTI_LANE_CTRL);
+}
+
+static void intel_pcie_rc_setup(struct intel_pcie_port *lpp)
+{
+	intel_pcie_ltssm_disable(lpp);
+	intel_pcie_link_setup(lpp);
+	dw_pcie_setup_rc(&lpp->pci.pp);
+	intel_pcie_upconfig_setup(lpp);
+
+	intel_pcie_max_speed_setup(lpp);
+	intel_pcie_speed_change_enable(lpp);
+	intel_pcie_port_logic_setup(lpp);
+	intel_pcie_mem_iatu(lpp);
+}
+
+static int intel_pcie_ep_rst_init(struct intel_pcie_port *lpp)
+{
+	struct device *dev = lpp->pci.dev;
+	int ret;
+
+	lpp->reset_gpio = devm_gpiod_get(dev, "reset", GPIOD_OUT_LOW);
+	if (IS_ERR(lpp->reset_gpio)) {
+		ret = PTR_ERR(lpp->reset_gpio);
+		if (ret != -EPROBE_DEFER)
+			dev_err(dev, "failed to request PCIe GPIO: %d\n", ret);
+		return ret;
+	}
+	/* Make initial reset last for 100us */
+	usleep_range(100, 200);
+
+	return 0;
+}
+
+static void intel_pcie_core_rst_assert(struct intel_pcie_port *lpp)
+{
+	reset_control_assert(lpp->core_rst);
+}
+
+static void intel_pcie_core_rst_deassert(struct intel_pcie_port *lpp)
+{
+	/*
+	 * A one-microsecond delay to make sure the reset pulse is
+	 * wide enough so that the core reset is clean.
+	 */
+	udelay(1);
+	reset_control_deassert(lpp->core_rst);
+
+	/*
+	 * On some SoCs the core reset also resets the PHY, so more
+	 * delay is needed to make sure the reset process is done.
+	 */
+	usleep_range(1000, 2000);
+}
+
+static void intel_pcie_device_rst_assert(struct intel_pcie_port *lpp)
+{
+	gpiod_set_value_cansleep(lpp->reset_gpio, 1);
+}
+
+static void intel_pcie_device_rst_deassert(struct intel_pcie_port *lpp)
+{
+	msleep(lpp->rst_interval);
+	gpiod_set_value_cansleep(lpp->reset_gpio, 0);
+}
+
+static int intel_pcie_app_logic_setup(struct intel_pcie_port *lpp)
+{
+	intel_pcie_device_rst_deassert(lpp);
+	intel_pcie_ltssm_enable(lpp);
+
+	return dw_pcie_wait_for_link(&lpp->pci);
+}
+
+static irqreturn_t intel_pcie_core_isr(int irq, void *arg)
+{
+	struct intel_pcie_port *lpp = arg;
+	u32 val, reg;
+
+	reg = pcie_app_rd(lpp, PCIE_APP_IRNCR);
+	val = reg & PCIE_APP_IRN_INT;
+
+	pcie_app_wr(lpp, val, PCIE_APP_IRNCR);
+
+	trace_printk("PCIe misc interrupt status 0x%x\n", reg);
+	return IRQ_HANDLED;
+}
+
+static int intel_pcie_setup_irq(struct intel_pcie_port *lpp)
+{
+	struct device *dev = lpp->pci.dev;
+	struct platform_device *pdev;
+	char *irq_name;
+	int irq, ret;
+
+	pdev = to_platform_device(dev);
+	irq = platform_get_irq(pdev, 0);
+	if (irq < 0) {
+		dev_err(dev, "missing sys integrated irq resource\n");
+		return irq;
+	}
+
+	irq_name = devm_kasprintf(dev, GFP_KERNEL, "pcie_misc%d", lpp->id);
+	if (!irq_name) {
+		dev_err(dev, "failed to alloc irq name\n");
+		return -ENOMEM;
+	}
+
+	ret = devm_request_irq(dev, irq, intel_pcie_core_isr,
+			       IRQF_SHARED, irq_name, lpp);
+	if (ret) {
+		dev_err(dev, "request irq %d failed\n", irq);
+		return ret;
+	}
+	/* Enable integrated interrupts */
+	pcie_app_wr_mask(lpp, PCIE_APP_IRN_INT,
+			 PCIE_APP_IRN_INT, PCIE_APP_IRNEN);
+
+	return ret;
+}
+
+static void intel_pcie_core_irq_disable(struct intel_pcie_port *lpp)
+{
+	pcie_app_wr(lpp, 0, PCIE_APP_IRNEN);
+	pcie_app_wr(lpp, PCIE_APP_IRN_INT,  PCIE_APP_IRNCR);
+}
+
+static int intel_pcie_get_resources(struct platform_device *pdev)
+{
+	struct intel_pcie_port *lpp;
+	struct resource *res;
+	struct dw_pcie *pci;
+	struct device *dev;
+	int ret;
+
+	lpp = platform_get_drvdata(pdev);
+	pci = &lpp->pci;
+	dev = pci->dev;
+
+	ret = device_property_read_u32(dev, "linux,pci-domain", &lpp->id);
+	if (ret) {
+		dev_err(dev, "failed to get domain id, errno %d\n", ret);
+		return ret;
+	}
+
+	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi");
+	if (!res)
+		return -ENXIO;
+
+	pci->dbi_base = devm_ioremap_resource(dev, res);
+	if (IS_ERR(pci->dbi_base))
+		return PTR_ERR(pci->dbi_base);
+
+	lpp->core_clk = devm_clk_get(dev, NULL);
+	if (IS_ERR(lpp->core_clk)) {
+		ret = PTR_ERR(lpp->core_clk);
+		if (ret != -EPROBE_DEFER)
+			dev_err(dev, "failed to get clks: %d\n", ret);
+		return ret;
+	}
+
+	lpp->core_rst = devm_reset_control_get(dev, NULL);
+	if (IS_ERR(lpp->core_rst)) {
+		ret = PTR_ERR(lpp->core_rst);
+		if (ret != -EPROBE_DEFER)
+			dev_err(dev, "failed to get resets: %d\n", ret);
+		return ret;
+	}
+
+	ret = device_property_match_string(dev, "device_type", "pci");
+	if (ret) {
+		dev_err(dev, "failed to find pci device type: %d\n", ret);
+		return ret;
+	}
+
+	if (device_property_read_u32(dev, "reset-assert-ms",
+				     &lpp->rst_interval))
+		lpp->rst_interval = RST_INTRVL_DFT_MS;
+
+	if (device_property_read_u32(dev, "max-link-speed", &lpp->link_gen))
+		lpp->link_gen = 0; /* Fallback to auto */
+
+	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "app");
+	if (!res)
+		return -ENXIO;
+
+	lpp->app_base = devm_ioremap_resource(dev, res);
+	if (IS_ERR(lpp->app_base))
+		return PTR_ERR(lpp->app_base);
+
+	ret = intel_pcie_ep_rst_init(lpp);
+	if (ret)
+		return ret;
+
+	lpp->phy = devm_phy_get(dev, "pciephy");
+	if (IS_ERR(lpp->phy)) {
+		ret = PTR_ERR(lpp->phy);
+		if (ret != -EPROBE_DEFER)
+			dev_err(dev, "couldn't get pcie-phy: %d\n", ret);
+		return ret;
+	}
+	return 0;
+}
+
+static void intel_pcie_deinit_phy(struct intel_pcie_port *lpp)
+{
+	phy_exit(lpp->phy);
+}
+
+static int intel_pcie_wait_l2(struct intel_pcie_port *lpp)
+{
+	u32 value;
+	int ret;
+
+	if (lpp->max_speed < PCIE_LINK_SPEED_GEN3)
+		return 0;
+
+	/* Send PME_TURN_OFF message */
+	pcie_app_wr_mask(lpp, PCIE_APP_MSG_XMT_PM_TURNOFF,
+			 PCIE_APP_MSG_XMT_PM_TURNOFF, PCIE_APP_MSG_CR);
+
+	/* Read PMC status and wait for falling into L2 link state */
+	ret = readl_poll_timeout(lpp->app_base + PCIE_APP_PMC, value,
+				 (value & PCIE_APP_PMC_IN_L2), 20,
+				 jiffies_to_usecs(5 * HZ));
+	if (ret)
+		dev_err(lpp->pci.dev, "PCIe link enter L2 timeout!\n");
+
+	return ret;
+}
+
+static void intel_pcie_turn_off(struct intel_pcie_port *lpp)
+{
+	if (dw_pcie_link_up(&lpp->pci))
+		intel_pcie_wait_l2(lpp);
+
+	/* Put EP in reset state */
+	intel_pcie_device_rst_assert(lpp);
+	pcie_rc_cfg_wr_mask(lpp, PCI_COMMAND_MEMORY, 0, PCI_COMMAND);
+}
+
+static int intel_pcie_host_setup(struct intel_pcie_port *lpp)
+{
+	int ret;
+
+	intel_pcie_core_rst_assert(lpp);
+	intel_pcie_device_rst_assert(lpp);
+
+	ret = phy_init(lpp->phy);
+	if (ret)
+		return ret;
+
+	intel_pcie_core_rst_deassert(lpp);
+
+	ret = clk_prepare_enable(lpp->core_clk);
+	if (ret) {
+		dev_err(lpp->pci.dev, "Core clock enable failed: %d\n", ret);
+		goto clk_err;
+	}
+
+	intel_pcie_rc_setup(lpp);
+	ret = intel_pcie_app_logic_setup(lpp);
+	if (ret)
+		goto app_init_err;
+
+	ret = intel_pcie_setup_irq(lpp);
+	if (!ret)
+		return ret;
+
+	intel_pcie_turn_off(lpp);
+app_init_err:
+	clk_disable_unprepare(lpp->core_clk);
+clk_err:
+	intel_pcie_core_rst_assert(lpp);
+	intel_pcie_deinit_phy(lpp);
+	return ret;
+}
+
+static ssize_t
+pcie_link_status_show(struct device *dev, struct device_attribute *attr,
+		      char *buf)
+{
+	u32 reg, width, gen;
+	struct intel_pcie_port *lpp;
+
+	lpp = dev_get_drvdata(dev);
+
+	reg = pcie_rc_cfg_rd(lpp, PCIE_LCTLSTS);
+	width = FIELD_GET(PCIE_LCTLSTS_NEGOTIATED_LINK_WIDTH, reg);
+	gen = FIELD_GET(PCIE_LCTLSTS_LINK_SPEED, reg);
+	if (gen > lpp->max_speed)
+		return -EINVAL;
+
+	return sprintf(buf, "Port %2u Width x%u Speed %s GT/s\n", lpp->id,
+		       width, pcie_link_gen_to_str(gen));
+}
+static DEVICE_ATTR_RO(pcie_link_status);
+
+static ssize_t pcie_speed_store(struct device *dev,
+				struct device_attribute *attr,
+				const char *buf, size_t len)
+{
+	struct intel_pcie_port *lpp;
+	unsigned long val;
+	int ret;
+
+	lpp = dev_get_drvdata(dev);
+
+	ret = kstrtoul(buf, 10, &val);
+	if (ret)
+		return ret;
+
+	if (val > lpp->max_speed)
+		return -EINVAL;
+
+	lpp->link_gen = val;
+	intel_pcie_max_speed_setup(lpp);
+	intel_pcie_speed_change_disable(lpp);
+	intel_pcie_speed_change_enable(lpp);
+
+	return len;
+}
+static DEVICE_ATTR_WO(pcie_speed);
+
+/*
+ * Link width change on the fly is not always successful.
+ * It also depends on the partner.
+ */
+static ssize_t pcie_width_store(struct device *dev,
+				struct device_attribute *attr,
+				const char *buf, size_t len)
+{
+	struct intel_pcie_port *lpp;
+	unsigned long val;
+
+	lpp = dev_get_drvdata(dev);
+
+	if (kstrtoul(buf, 10, &val))
+		return -EINVAL;
+
+	if (val > lpp->max_width)
+		return -EINVAL;
+
+	lpp->lanes = val;
+	intel_pcie_max_link_width_setup(lpp);
+
+	return len;
+}
+static DEVICE_ATTR_WO(pcie_width);
+
+static struct attribute *pcie_cfg_attrs[] = {
+	&dev_attr_pcie_link_status.attr,
+	&dev_attr_pcie_speed.attr,
+	&dev_attr_pcie_width.attr,
+	NULL,
+};
+ATTRIBUTE_GROUPS(pcie_cfg);
+
+static int intel_pcie_sysfs_init(struct intel_pcie_port *lpp)
+{
+	return devm_device_add_groups(lpp->pci.dev, pcie_cfg_groups);
+}
+
+static void __intel_pcie_remove(struct intel_pcie_port *lpp)
+{
+	pcie_rc_cfg_wr_mask(lpp, PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER,
+			    0, PCI_COMMAND);
+	intel_pcie_core_irq_disable(lpp);
+	intel_pcie_turn_off(lpp);
+	clk_disable_unprepare(lpp->core_clk);
+	intel_pcie_core_rst_assert(lpp);
+	intel_pcie_deinit_phy(lpp);
+}
+
+static int intel_pcie_remove(struct platform_device *pdev)
+{
+	struct intel_pcie_port *lpp = platform_get_drvdata(pdev);
+	struct pcie_port *pp = &lpp->pci.pp;
+
+	dw_pcie_host_deinit(pp);
+	__intel_pcie_remove(lpp);
+
+	return 0;
+}
+
+static int __maybe_unused intel_pcie_suspend_noirq(struct device *dev)
+{
+	struct intel_pcie_port *lpp = dev_get_drvdata(dev);
+	int ret;
+
+	intel_pcie_core_irq_disable(lpp);
+	ret = intel_pcie_wait_l2(lpp);
+	if (ret)
+		return ret;
+
+	intel_pcie_deinit_phy(lpp);
+	clk_disable_unprepare(lpp->core_clk);
+	return ret;
+}
+
+static int __maybe_unused intel_pcie_resume_noirq(struct device *dev)
+{
+	struct intel_pcie_port *lpp = dev_get_drvdata(dev);
+
+	return intel_pcie_host_setup(lpp);
+}
+
+static int intel_pcie_rc_init(struct pcie_port *pp)
+{
+	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
+	struct intel_pcie_port *lpp = dev_get_drvdata(pci->dev);
+	int ret;
+
+	/* RC/host initialization */
+	ret = intel_pcie_host_setup(lpp);
+	if (ret)
+		return ret;
+	ret = intel_pcie_sysfs_init(lpp);
+	if (ret)
+		__intel_pcie_remove(lpp);
+	return ret;
+}
+
+int intel_pcie_msi_init(struct pcie_port *pp)
+{
+	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
+
+	dev_dbg(pci->dev, "PCIe MSI/MSIx is handled by MSI in x86 processor\n");
+	return 0;
+}
+
+u64 intel_pcie_cpu_addr(struct dw_pcie *pcie, u64 cpu_addr)
+{
+	return cpu_addr + BUS_IATU_OFFS;
+}
+
+static const struct dw_pcie_ops intel_pcie_ops = {
+	.cpu_addr_fixup = intel_pcie_cpu_addr,
+};
+
+static const struct dw_pcie_host_ops intel_pcie_dw_ops = {
+	.host_init =		intel_pcie_rc_init,
+	.msi_host_init =	intel_pcie_msi_init,
+};
+
+static const struct intel_pcie_soc pcie_data = {
+	.pcie_ver =		0x520A,
+	.pcie_atu_offset =	0xC0000,
+	.num_viewport =		3,
+};
+
+static int intel_pcie_probe(struct platform_device *pdev)
+{
+	const struct intel_pcie_soc *data;
+	struct device *dev = &pdev->dev;
+	struct intel_pcie_port *lpp;
+	struct pcie_port *pp;
+	struct dw_pcie *pci;
+	int ret;
+
+	lpp = devm_kzalloc(dev, sizeof(*lpp), GFP_KERNEL);
+	if (!lpp)
+		return -ENOMEM;
+
+	platform_set_drvdata(pdev, lpp);
+	pci = &lpp->pci;
+	pci->dev = dev;
+	pp = &pci->pp;
+
+	ret = intel_pcie_get_resources(pdev);
+	if (ret)
+		return ret;
+
+	data = device_get_match_data(dev);
+	pci->ops = &intel_pcie_ops;
+	pci->version = data->pcie_ver;
+	pci->atu_base = pci->dbi_base + data->pcie_atu_offset;
+	pp->ops = &intel_pcie_dw_ops;
+
+	ret = dw_pcie_host_init(pp);
+	if (ret) {
+		dev_err(dev, "cannot initialize host\n");
+		return ret;
+	}
+	/*
+	 * Intel PCIe doesn't configure an IO region, so configure the
+	 * viewports not to access it during register accesses.
+	 */
+	pci->num_viewport = data->num_viewport;
+	dev_info(dev,
+		 "Intel AXI PCIe Root Complex Port %d Init Done\n", lpp->id);
+	return ret;
+}
+
+static const struct dev_pm_ops intel_pcie_pm_ops = {
+	SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(intel_pcie_suspend_noirq,
+				      intel_pcie_resume_noirq)
+};
+
+static const struct of_device_id of_intel_pcie_match[] = {
+	{ .compatible = "intel,lgm-pcie", .data = &pcie_data },
+	{}
+};
+
+static struct platform_driver intel_pcie_driver = {
+	.probe = intel_pcie_probe,
+	.remove = intel_pcie_remove,
+	.driver = {
+		.name = "intel-lgm-pcie",
+		.of_match_table = of_intel_pcie_match,
+		.pm = &intel_pcie_pm_ops,
+	},
+};
+builtin_platform_driver(intel_pcie_driver);
-- 
2.11.0



* Re: [PATCH v3 2/2] dwc: PCI: intel: Intel PCIe RC controller driver
  2019-09-04 10:10 ` [PATCH v3 2/2] dwc: PCI: intel: Intel PCIe RC controller driver Dilip Kota
@ 2019-09-04 13:05   ` Andy Shevchenko
  2019-09-06 10:39     ` Dilip Kota
  2019-09-05  2:30   ` Chuan Hua, Lei
  2019-09-05 10:45   ` Andrew Murray
  2 siblings, 1 reply; 37+ messages in thread
From: Andy Shevchenko @ 2019-09-04 13:05 UTC (permalink / raw)
  To: Dilip Kota
  Cc: jingoohan1, gustavo.pimentel, lorenzo.pieralisi, robh,
	martin.blumenstingl, linux-pci, hch, devicetree, linux-kernel,
	cheol.yong.kim, chuanhua.lei, qi-ming.wu

On Wed, Sep 04, 2019 at 06:10:31PM +0800, Dilip Kota wrote:
> Add support to PCIe RC controller on Intel Universal
> Gateway SoC. PCIe controller is based on Synopsys
> Designware pci core.

Thanks for an update. My comments below.

> +config PCIE_INTEL_AXI
> +        bool "Intel AHB/AXI PCIe host controller support"
> +        depends on PCI_MSI
> +        depends on PCI
> +        depends on OF
> +        select PCIE_DW_HOST
> +        help
> +          Say 'Y' here to enable support for Intel AHB/AXI PCIe Host
> +	  controller driver.
> +	  The Intel PCIe controller is based on the Synopsys Designware
> +	  pcie core and therefore uses the Designware core functions to
> +	  implement the driver.

This hunk is full of indentation issues (the help text mixes spaces and tabs).
Please see how it's done in other drivers that are already part of the
upstream kernel.

> +#define PCIE_APP_INTX_OFST	12
> +#define PCIE_APP_IRN_INT	(PCIE_APP_IRN_AER_REPORT | PCIE_APP_IRN_PME | \

It would be slightly easier to read if the value started on the next line.

> +			PCIE_APP_IRN_RX_VDM_MSG | PCIE_APP_IRN_SYS_ERR_RC | \
> +			PCIE_APP_IRN_PM_TO_ACK | PCIE_APP_IRN_MSG_LTR | \
> +			PCIE_APP_IRN_BW_MGT | PCIE_APP_IRN_LINK_AUTO_BW_STAT | \
> +			(PCIE_APP_INTX_OFST + PCI_INTERRUPT_INTA) | \
> +			(PCIE_APP_INTX_OFST + PCI_INTERRUPT_INTB) | \
> +			(PCIE_APP_INTX_OFST + PCI_INTERRUPT_INTC) | \
> +			(PCIE_APP_INTX_OFST + PCI_INTERRUPT_INTD))

> +static void intel_pcie_link_setup(struct intel_pcie_port *lpp)
> +{
> +	u32 val;
> +
> +	val = pcie_rc_cfg_rd(lpp, PCIE_LCAP);
> +	lpp->max_speed = FIELD_GET(PCIE_LCAP_MAX_LINK_SPEED, val);
> +	lpp->max_width = FIELD_GET(PCIE_LCAP_MAX_LENGTH_WIDTH, val);
> +
> +	val = pcie_rc_cfg_rd(lpp, PCIE_LCTLSTS);
> +
> +	val &= ~(PCIE_LCTLSTS_LINK_DISABLE | PCIE_LCTLSTS_ASPM_ENABLE);

> +	val |= (PCIE_LCTLSTS_SLOT_CLK_CFG | PCIE_LCTLSTS_COM_CLK_CFG |
> +		PCIE_LCTLSTS_RCB128);

Unnecessary parentheses.

> +	pcie_rc_cfg_wr(lpp, val, PCIE_LCTLSTS);
> +}

> +	switch (lpp->max_speed) {
> +	case PCIE_LINK_SPEED_GEN1:
> +	case PCIE_LINK_SPEED_GEN2:
> +		fts = PCIE_AFR_GEN12_FTS_NUM_DFT;
> +		break;

You may group this with the default case, right?

> +	case PCIE_LINK_SPEED_GEN3:
> +		fts = PCIE_AFR_GEN3_FTS_NUM_DFT;
> +		break;
> +	case PCIE_LINK_SPEED_GEN4:
> +		fts = PCIE_AFR_GEN4_FTS_NUM_DFT;
> +		break;
> +	default:
> +		fts = PCIE_AFR_GEN12_FTS_NUM_DFT;
> +		break;
> +	}

> +	trace_printk("PCIe misc interrupt status 0x%x\n", reg);

Hmm... Doesn't it give you a BIG warning?

kernel/trace/trace.c:trace_printk_init_buffers()

> +	return IRQ_HANDLED;
> +}

Perhaps the PCI needs some trace points instead.

> +static int intel_pcie_setup_irq(struct intel_pcie_port *lpp)
> +{
> +	struct device *dev = lpp->pci.dev;
> +	struct platform_device *pdev;
> +	char *irq_name;
> +	int irq, ret;
> +
> +	pdev = to_platform_device(dev);
> +	irq = platform_get_irq(pdev, 0);
> +	if (irq < 0) {

> +		dev_err(dev, "missing sys integrated irq resource\n");

Redundant since this cycle: platform_get_irq() now prints an error message
itself.

> +		return irq;
> +	}

> +	irq_name = devm_kasprintf(dev, GFP_KERNEL, "pcie_misc%d", lpp->id);
> +	if (!irq_name) {

> +		dev_err(dev, "failed to alloc irq name\n");

Not very useful. initcall_debug shows an error code.

> +		return -ENOMEM;
> +	}
> +
> +	ret = devm_request_irq(dev, irq, intel_pcie_core_isr,
> +			       IRQF_SHARED, irq_name, lpp);
> +	if (ret) {
> +		dev_err(dev, "request irq %d failed\n", irq);
> +		return ret;
> +	}

Add a blank line here before the comment.

> +	/* Enable integrated interrupts */
> +	pcie_app_wr_mask(lpp, PCIE_APP_IRN_INT,
> +			 PCIE_APP_IRN_INT, PCIE_APP_IRNEN);

At least one parameter can be located on the previous line.

> +	return ret;
> +}

> +static int intel_pcie_get_resources(struct platform_device *pdev)
> +{
> +	struct intel_pcie_port *lpp;
> +	struct resource *res;
> +	struct dw_pcie *pci;
> +	struct device *dev;
> +	int ret;
> +

> +	lpp = platform_get_drvdata(pdev);
> +	pci = &lpp->pci;
> +	dev = pci->dev;

You may save a few LOCs here and there by adding these initializers to the
corresponding definition blocks.

> +
> +	ret = device_property_read_u32(dev, "linux,pci-domain", &lpp->id);
> +	if (ret) {
> +		dev_err(dev, "failed to get domain id, errno %d\n", ret);
> +		return ret;
> +	}
> +
> +	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi");

> +	if (!res)
> +		return -ENXIO;

Redundant. It's done in the call below: devm_ioremap_resource() already
checks for a NULL resource.

> +
> +	pci->dbi_base = devm_ioremap_resource(dev, res);
> +	if (IS_ERR(pci->dbi_base))
> +		return PTR_ERR(pci->dbi_base);

> +
> +	lpp->core_clk = devm_clk_get(dev, NULL);
> +	if (IS_ERR(lpp->core_clk)) {
> +		ret = PTR_ERR(lpp->core_clk);
> +		if (ret != -EPROBE_DEFER)
> +			dev_err(dev, "failed to get clks: %d\n", ret);
> +		return ret;
> +	}
> +
> +	lpp->core_rst = devm_reset_control_get(dev, NULL);
> +	if (IS_ERR(lpp->core_rst)) {
> +		ret = PTR_ERR(lpp->core_rst);
> +		if (ret != -EPROBE_DEFER)
> +			dev_err(dev, "failed to get resets: %d\n", ret);
> +		return ret;
> +	}
> +
> +	ret = device_property_match_string(dev, "device_type", "pci");
> +	if (ret) {
> +		dev_err(dev, "failed to find pci device type: %d\n", ret);
> +		return ret;
> +	}
> +

> +	if (device_property_read_u32(dev, "reset-assert-ms",
> +				     &lpp->rst_interval))

I would rather leave it on one line.


> +		lpp->rst_interval = RST_INTRVL_DFT_MS;
> +
> +	if (device_property_read_u32(dev, "max-link-speed", &lpp->link_gen))
> +		lpp->link_gen = 0; /* Fallback to auto */
> +
> +	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "app");

> +	if (!res)
> +		return -ENXIO;

Redundant.

> +
> +	lpp->app_base = devm_ioremap_resource(dev, res);
> +	if (IS_ERR(lpp->app_base))
> +		return PTR_ERR(lpp->app_base);
> +
> +	ret = intel_pcie_ep_rst_init(lpp);
> +	if (ret)
> +		return ret;
> +
> +	lpp->phy = devm_phy_get(dev, "pciephy");
> +	if (IS_ERR(lpp->phy)) {
> +		ret = PTR_ERR(lpp->phy);
> +		if (ret != -EPROBE_DEFER)
> +			dev_err(dev, "couldn't get pcie-phy: %d\n", ret);
> +		return ret;
> +	}
> +	return 0;
> +}

> +static int intel_pcie_host_setup(struct intel_pcie_port *lpp)
> +{
> +	int ret;
> +
> +	intel_pcie_core_rst_assert(lpp);
> +	intel_pcie_device_rst_assert(lpp);
> +
> +	ret = phy_init(lpp->phy);
> +	if (ret)
> +		return ret;
> +
> +	intel_pcie_core_rst_deassert(lpp);
> +
> +	ret = clk_prepare_enable(lpp->core_clk);
> +	if (ret) {
> +		dev_err(lpp->pci.dev, "Core clock enable failed: %d\n", ret);
> +		goto clk_err;
> +	}
> +
> +	intel_pcie_rc_setup(lpp);
> +	ret = intel_pcie_app_logic_setup(lpp);
> +	if (ret)
> +		goto app_init_err;
> +
> +	ret = intel_pcie_setup_irq(lpp);
> +	if (!ret)
> +		return ret;
> +
> +	intel_pcie_turn_off(lpp);
> +app_init_err:
> +	clk_disable_unprepare(lpp->core_clk);
> +clk_err:
> +	intel_pcie_core_rst_assert(lpp);

> +	intel_pcie_deinit_phy(lpp);

Why not phy_exit()?

> +	return ret;
> +}

> +static ssize_t pcie_width_store(struct device *dev,
> +				struct device_attribute *attr,
> +				const char *buf, size_t len)
> +{
> +	struct intel_pcie_port *lpp;
> +	unsigned long val;
> +
> +	lpp = dev_get_drvdata(dev);
> +

> +	if (kstrtoul(buf, 10, &val))

> +		return -EINVAL;

Why shadow the error code? Return what kstrtoul() gave you.

> +
> +	if (val > lpp->max_width)
> +		return -EINVAL;
> +
> +	lpp->lanes = val;
> +	intel_pcie_max_link_width_setup(lpp);
> +
> +	return len;
> +}

> +	pcie_rc_cfg_wr_mask(lpp, PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER,
> +			    0, PCI_COMMAND);

0 pretty much fits on the previous line.

You need to fix your editor settings.

> +int intel_pcie_msi_init(struct pcie_port *pp)
> +{
> +	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
> +

> +	dev_dbg(pci->dev, "PCIe MSI/MSIx is handled by MSI in x86 processor\n");

Is it a useful message?

> +	return 0;
> +}

> +	dev_info(dev,
> +		 "Intel AXI PCIe Root Complex Port %d Init Done\n", lpp->id);

Same here; the literal can be placed on the previous line.

-- 
With Best Regards,
Andy Shevchenko




* Re: [PATCH v3 1/2] dt-bindings: PCI: intel: Add YAML schemas for the PCIe RC controller
  2019-09-04 10:10 ` [PATCH v3 1/2] dt-bindings: PCI: intel: Add YAML schemas for the PCIe RC controller Dilip Kota
@ 2019-09-05  2:23   ` Chuan Hua, Lei
  2019-09-06 10:39     ` Dilip Kota
  2019-09-05 20:31   ` Martin Blumenstingl
  2019-09-17 18:40   ` Rob Herring
  2 siblings, 1 reply; 37+ messages in thread
From: Chuan Hua, Lei @ 2019-09-05  2:23 UTC (permalink / raw)
  To: Dilip Kota, jingoohan1, gustavo.pimentel, lorenzo.pieralisi,
	robh, martin.blumenstingl, linux-pci, hch, devicetree
  Cc: linux-kernel, andriy.shevchenko, cheol.yong.kim, qi-ming.wu

Hi Dilip,

On 9/4/2019 6:10 PM, Dilip Kota wrote:
> The Intel PCIe RC controller is Synopsys Designware
> based PCIe core. Add YAML schemas for PCIe in RC mode
> present in Intel Universal Gateway soc.
>
> Signed-off-by: Dilip Kota <eswara.kota@linux.intel.com>
> ---
> changes on v3:
> 	Add the appropriate License-Identifier
> 	Rename intel,rst-interval to 'reset-assert-us'
rst-interval to reset-assert-ms ("reset-assert-us" above looks like a typo)
> 	Add additionalProperties: false
> 	Rename phy-names to 'pciephy'
> 	Remove the dtsi node split of SoC and board in the example
> 	Add #interrupt-cells = <1>; or else interrupt parsing will fail
> 	Name yaml file with compatible name
>
>   .../devicetree/bindings/pci/intel,lgm-pcie.yaml    | 137 +++++++++++++++++++++
>   1 file changed, 137 insertions(+)
>   create mode 100644 Documentation/devicetree/bindings/pci/intel,lgm-pcie.yaml
>
> diff --git a/Documentation/devicetree/bindings/pci/intel,lgm-pcie.yaml b/Documentation/devicetree/bindings/pci/intel,lgm-pcie.yaml
> new file mode 100644
> index 000000000000..5e5cc7fd66cd
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/pci/intel,lgm-pcie.yaml
> @@ -0,0 +1,137 @@
> +# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
> +%YAML 1.2
> +---
> +$id: http://devicetree.org/schemas/pci/intel-pcie.yaml#
> +$schema: http://devicetree.org/meta-schemas/core.yaml#
> +
> +title: Intel AXI bus based PCI express root complex
> +
> +maintainers:
> +  - Dilip Kota <eswara.kota@linux.intel.com>
> +
> +properties:
> +  compatible:
> +    const: intel,lgm-pcie
> +
> +  device_type:
> +    const: pci
> +
> +  "#address-cells":
> +    const: 3
> +
> +  "#size-cells":
> +    const: 2
> +
> +  reg:
> +    items:
> +      - description: Controller control and status registers.
> +      - description: PCIe configuration registers.
> +      - description: Controller application registers.
> +
> +  reg-names:
> +    items:
> +      - const: dbi
> +      - const: config
> +      - const: app
> +
> +  ranges:
> +    description: Ranges for the PCI memory and I/O regions.
> +
> +  resets:
> +    maxItems: 1
> +
> +  clocks:
> +    description: PCIe registers interface clock.
> +
> +  phys:
> +    maxItems: 1
> +
> +  phy-names:
> +    const: pciephy
> +
> +  reset-gpios:
> +    maxItems: 1
> +
> +  num-lanes:
> +    description: Number of lanes to use for this port.
> +
> +  linux,pci-domain:
> +    $ref: /schemas/types.yaml#/definitions/uint32
> +    description: PCI domain ID.
> +
> +  interrupts:
> +    description: PCIe core integrated miscellaneous interrupt.
> +
> +  '#interrupt-cells':
> +    const: 1
> +
> +  interrupt-map-mask:
> +    description: Standard PCI IRQ mapping properties.
> +
> +  interrupt-map:
> +    description: Standard PCI IRQ mapping properties.
> +
> +  max-link-speed:
> +    description: Specify PCI Gen for link capability.
> +
> +  bus-range:
> +    description: Range of bus numbers associated with this controller.
> +
> +  reset-assert-ms:
> +    description: |
> +      Device reset interval in ms.
> +      Some devices need an interval of up to 500 ms. By default it is 100 ms.
> +
> +required:
> +  - compatible
> +  - device_type
> +  - reg
> +  - reg-names
> +  - ranges
> +  - resets
> +  - clocks
> +  - phys
> +  - phy-names
> +  - reset-gpios
> +  - num-lanes
> +  - linux,pci-domain
> +  - interrupts
> +  - interrupt-map
> +  - interrupt-map-mask
> +
> +additionalProperties: false
> +
> +examples:
> +  - |
> +    pcie10:pcie@d0e00000 {
> +      compatible = "intel,lgm-pcie";
> +      device_type = "pci";
> +      #address-cells = <3>;
> +      #size-cells = <2>;
> +      reg = <
> +            0xd0e00000 0x1000
> +            0xd2000000 0x800000
> +            0xd0a41000 0x1000
> +            >;
> +      reg-names = "dbi", "config", "app";
> +      linux,pci-domain = <0>;
> +      max-link-speed = <4>;
> +      bus-range = <0x00 0x08>;
> +      interrupt-parent = <&ioapic1>;
> +      interrupts = <67 1>;
> +      #interrupt-cells = <1>;
> +      interrupt-map-mask = <0 0 0 0x7>;
> +      interrupt-map = <0 0 0 1 &ioapic1 27 1>,
> +                      <0 0 0 2 &ioapic1 28 1>,
> +                      <0 0 0 3 &ioapic1 29 1>,
> +                      <0 0 0 4 &ioapic1 30 1>;
> +      ranges = <0x02000000 0 0xd4000000 0xd4000000 0 0x04000000>;
> +      resets = <&rcu0 0x50 0>;
> +      clocks = <&cgu0 LGM_GCLK_PCIE10>;
> +      phys = <&cb0phy0>;
> +      phy-names = "pciephy";
> +      status = "okay";
> +      reset-assert-ms = <500>;
> +      reset-gpios = <&gpio0 3 GPIO_ACTIVE_LOW>;
> +      num-lanes = <2>;
> +    };


* Re: [PATCH v3 2/2] dwc: PCI: intel: Intel PCIe RC controller driver
  2019-09-04 10:10 ` [PATCH v3 2/2] dwc: PCI: intel: Intel PCIe RC controller driver Dilip Kota
  2019-09-04 13:05   ` Andy Shevchenko
@ 2019-09-05  2:30   ` Chuan Hua, Lei
  2019-09-05 10:45   ` Andrew Murray
  2 siblings, 0 replies; 37+ messages in thread
From: Chuan Hua, Lei @ 2019-09-05  2:30 UTC (permalink / raw)
  To: Dilip Kota, jingoohan1, gustavo.pimentel, lorenzo.pieralisi,
	robh, martin.blumenstingl, linux-pci, hch, devicetree
  Cc: linux-kernel, andriy.shevchenko, cheol.yong.kim, qi-ming.wu

Hi Dilip,

On 9/4/2019 6:10 PM, Dilip Kota wrote:
> Add support to PCIe RC controller on Intel Universal
> Gateway SoC. PCIe controller is based on Synopsys
> Designware pci core.
>
> Signed-off-by: Dilip Kota <eswara.kota@linux.intel.com>
> ---
> changes on v3:
> 	Rename PCIe app logic registers with PCIE_APP prefix.
> 	PCIE_IOP_CTRL configuration is not required. Remove respective code.
> 	Remove wrapper functions for clk enable/disable APIs.
> 	Use platform_get_resource_byname() instead of
> 	  devm_platform_ioremap_resource() to be similar with DWC framework.
> 	Rename phy name to "pciephy".
> 	Modify debug message in msi_init() callback to be more specific.
> 	Remove map_irq() callback.
> 	Enable the INTx interrupts at the time of PCIe initialization.
> 	Reduce memory fragmentation by using variable "struct dw_pcie pci"
> 	  instead of allocating memory.
> 	Reduce the delay to 100us during enpoint initialization
> 	  intel_pcie_ep_rst_init().
> 	Call  dw_pcie_host_deinit() during remove() instead of directly
> 	  calling PCIe core APIs.
> 	Rename "intel,rst-interval" to "reset-assert-ms".
> 	Remove unused APP logic Interrupt bit macro definitions.
>   	Use dwc framework's dw_pcie_setup_rc() for PCIe host specific
> 	 configuration instead of redefining the same functionality in
> 	 the driver.
> 	Move the whole DT parsing specific code to intel_pcie_get_resources()
>
>   drivers/pci/controller/dwc/Kconfig          |  13 +
>   drivers/pci/controller/dwc/Makefile         |   1 +
>   drivers/pci/controller/dwc/pcie-intel-axi.c | 840 ++++++++++++++++++++++++++++
>   3 files changed, 854 insertions(+)
>   create mode 100644 drivers/pci/controller/dwc/pcie-intel-axi.c
>
> diff --git a/drivers/pci/controller/dwc/Kconfig b/drivers/pci/controller/dwc/Kconfig
> index 6ea778ae4877..e44b9b6a6390 100644
> --- a/drivers/pci/controller/dwc/Kconfig
> +++ b/drivers/pci/controller/dwc/Kconfig
> @@ -82,6 +82,19 @@ config PCIE_DW_PLAT_EP
>   	  order to enable device-specific features PCI_DW_PLAT_EP must be
>   	  selected.
>   
> +config PCIE_INTEL_AXI
> +        bool "Intel AHB/AXI PCIe host controller support"
> +        depends on PCI_MSI
> +        depends on PCI
> +        depends on OF
> +        select PCIE_DW_HOST
> +        help
> +          Say 'Y' here to enable support for Intel AHB/AXI PCIe Host
> +	  controller driver.
> +	  The Intel PCIe controller is based on the Synopsys Designware
> +	  pcie core and therefore uses the Designware core functions to
> +	  implement the driver.
> +
>   config PCI_EXYNOS
>   	bool "Samsung Exynos PCIe controller"
>   	depends on SOC_EXYNOS5440 || COMPILE_TEST
> diff --git a/drivers/pci/controller/dwc/Makefile b/drivers/pci/controller/dwc/Makefile
> index b085dfd4fab7..46e656ebdf90 100644
> --- a/drivers/pci/controller/dwc/Makefile
> +++ b/drivers/pci/controller/dwc/Makefile
> @@ -3,6 +3,7 @@ obj-$(CONFIG_PCIE_DW) += pcie-designware.o
>   obj-$(CONFIG_PCIE_DW_HOST) += pcie-designware-host.o
>   obj-$(CONFIG_PCIE_DW_EP) += pcie-designware-ep.o
>   obj-$(CONFIG_PCIE_DW_PLAT) += pcie-designware-plat.o
> +obj-$(CONFIG_PCIE_INTEL_AXI) += pcie-intel-axi.o
>   obj-$(CONFIG_PCI_DRA7XX) += pci-dra7xx.o
>   obj-$(CONFIG_PCI_EXYNOS) += pci-exynos.o
>   obj-$(CONFIG_PCI_IMX6) += pci-imx6.o
> diff --git a/drivers/pci/controller/dwc/pcie-intel-axi.c b/drivers/pci/controller/dwc/pcie-intel-axi.c
> new file mode 100644
> index 000000000000..75607ce03ebf
> --- /dev/null
> +++ b/drivers/pci/controller/dwc/pcie-intel-axi.c
> @@ -0,0 +1,840 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * PCIe host controller driver for Intel AXI PCIe Bridge
> + *
> + * Copyright (c) 2019 Intel Corporation.
> + */
> +
> +#include <linux/bitfield.h>
> +#include <linux/clk.h>
> +#include <linux/gpio/consumer.h>
> +#include <linux/interrupt.h>
> +#include <linux/iopoll.h>
> +#include <linux/of_irq.h>
> +#include <linux/of_pci.h>
> +#include <linux/of_platform.h>
> +#include <linux/phy/phy.h>
> +#include <linux/platform_device.h>
> +#include <linux/reset.h>
> +
> +#include "../../pci.h"
> +#include "pcie-designware.h"
> +
> +#define PCIE_CCRID				0x8
> +
> +#define PCIE_LCAP				0x7C
> +#define PCIE_LCAP_MAX_LINK_SPEED		GENMASK(3, 0)
> +#define PCIE_LCAP_MAX_LENGTH_WIDTH		GENMASK(9, 4)
> +
> +/* Link Control and Status Register */
> +#define PCIE_LCTLSTS				0x80
> +#define PCIE_LCTLSTS_ASPM_ENABLE		GENMASK(1, 0)
> +#define PCIE_LCTLSTS_RCB128			BIT(3)
> +#define PCIE_LCTLSTS_LINK_DISABLE		BIT(4)
> +#define PCIE_LCTLSTS_COM_CLK_CFG		BIT(6)
> +#define PCIE_LCTLSTS_HW_AW_DIS			BIT(9)
> +#define PCIE_LCTLSTS_LINK_SPEED			GENMASK(19, 16)
> +#define PCIE_LCTLSTS_NEGOTIATED_LINK_WIDTH	GENMASK(25, 20)
> +#define PCIE_LCTLSTS_SLOT_CLK_CFG		BIT(28)
> +
> +#define PCIE_LCTLSTS2				0xA0
> +#define PCIE_LCTLSTS2_TGT_LINK_SPEED		GENMASK(3, 0)
> +#define PCIE_LCTLSTS2_TGT_LINK_SPEED_25GT	0x1
> +#define PCIE_LCTLSTS2_TGT_LINK_SPEED_5GT	0x2
> +#define PCIE_LCTLSTS2_TGT_LINK_SPEED_8GT	0x3
> +#define PCIE_LCTLSTS2_TGT_LINK_SPEED_16GT	0x4
> +#define PCIE_LCTLSTS2_HW_AUTO_DIS		BIT(5)
> +
> +/* Ack Frequency Register */
> +#define PCIE_AFR				0x70C
> +#define PCIE_AFR_FTS_NUM			GENMASK(15, 8)
> +#define PCIE_AFR_COM_FTS_NUM			GENMASK(23, 16)
> +#define PCIE_AFR_GEN12_FTS_NUM_DFT		(SZ_128 - 1)
> +#define PCIE_AFR_GEN3_FTS_NUM_DFT		180
> +#define PCIE_AFR_GEN4_FTS_NUM_DFT		196
> +
> +#define PCIE_PLCR_DLL_LINK_EN			BIT(5)
> +#define PCIE_PORT_LOGIC_FTS			GENMASK(7, 0)
> +#define PCIE_PORT_LOGIC_DFT_FTS_NUM		(SZ_128 - 1)
> +
> +#define PCIE_MISC_CTRL				0x8BC
> +#define PCIE_MISC_CTRL_DBI_RO_WR_EN		BIT(0)
> +
> +#define PCIE_MULTI_LANE_CTRL			0x8C0
> +#define PCIE_UPCONFIG_SUPPORT			BIT(7)
> +#define PCIE_DIRECT_LINK_WIDTH_CHANGE		BIT(6)
> +#define PCIE_TARGET_LINK_WIDTH			GENMASK(5, 0)
> +
> +/* APP RC Core Control Register */
> +#define PCIE_APP_CCR				0x10
> +#define PCIE_APP_CCR_LTSSM_ENABLE		BIT(0)
> +
> +/* PCIe Message Control */
> +#define PCIE_APP_MSG_CR				0x30
> +#define PCIE_APP_MSG_XMT_PM_TURNOFF		BIT(0)
> +
> +/* PCIe Power Management Control */
> +#define PCIE_APP_PMC				0x44
> +#define PCIE_APP_PMC_IN_L2			BIT(20)
> +
> +/* Interrupt Enable Register */
> +#define PCIE_APP_IRNEN				0xF4
> +#define PCIE_APP_IRNCR				0xF8
> +#define PCIE_APP_IRN_AER_REPORT			BIT(0)
> +#define PCIE_APP_IRN_PME			BIT(2)
> +#define PCIE_APP_IRN_RX_VDM_MSG			BIT(4)
> +#define PCIE_APP_IRN_PM_TO_ACK			BIT(9)
> +#define PCIE_APP_IRN_LINK_AUTO_BW_STAT		BIT(11)
> +#define PCIE_APP_IRN_BW_MGT			BIT(12)
> +#define PCIE_APP_IRN_MSG_LTR			BIT(18)
> +#define PCIE_APP_IRN_SYS_ERR_RC			BIT(29)
> +
> +#define PCIE_APP_INTX_OFST	12
> +#define PCIE_APP_IRN_INT	(PCIE_APP_IRN_AER_REPORT | PCIE_APP_IRN_PME | \
> +			PCIE_APP_IRN_RX_VDM_MSG | PCIE_APP_IRN_SYS_ERR_RC | \
> +			PCIE_APP_IRN_PM_TO_ACK | PCIE_APP_IRN_MSG_LTR | \
> +			PCIE_APP_IRN_BW_MGT | PCIE_APP_IRN_LINK_AUTO_BW_STAT | \
> +			(PCIE_APP_INTX_OFST + PCI_INTERRUPT_INTA) | \
> +			(PCIE_APP_INTX_OFST + PCI_INTERRUPT_INTB) | \
> +			(PCIE_APP_INTX_OFST + PCI_INTERRUPT_INTC) | \
> +			(PCIE_APP_INTX_OFST + PCI_INTERRUPT_INTD))
> +
> +#define BUS_IATU_OFFS		SZ_256M
> +#define RST_INTRVL_DFT_MS	100
> +enum {
> +	PCIE_LINK_SPEED_AUTO = 0,
> +	PCIE_LINK_SPEED_GEN1,
> +	PCIE_LINK_SPEED_GEN2,
> +	PCIE_LINK_SPEED_GEN3,
> +	PCIE_LINK_SPEED_GEN4,
> +};
> +
> +struct intel_pcie_soc {
> +	unsigned int pcie_ver;
> +	unsigned int pcie_atu_offset;
> +	u32 num_viewport;
> +};
> +
> +struct intel_pcie_port {
> +	struct dw_pcie		pci;
> +	unsigned int		id; /* Physical RC Index */
> +	void __iomem		*app_base;
> +	struct gpio_desc	*reset_gpio;
> +	u32			rst_interval;
> +	u32			max_speed;
> +	u32			link_gen;
> +	u32			max_width;
> +	u32			lanes;
> +	struct clk		*core_clk;
> +	struct reset_control	*core_rst;
> +	struct phy		*phy;
> +};
> +
> +static void pcie_update_bits(void __iomem *base, u32 mask, u32 val, u32 ofs)
> +{
> +	u32 orig, tmp;
> +
> +	orig = readl(base + ofs);
> +
> +	tmp = (orig & ~mask) | (val & mask);
> +
> +	if (tmp != orig)
> +		writel(tmp, base + ofs);
> +}
> +
> +static inline u32 pcie_app_rd(struct intel_pcie_port *lpp, u32 ofs)
> +{
> +	return readl(lpp->app_base + ofs);
> +}
> +
> +static inline void pcie_app_wr(struct intel_pcie_port *lpp, u32 val, u32 ofs)
> +{
> +	writel(val, lpp->app_base + ofs);
> +}
> +
> +static void pcie_app_wr_mask(struct intel_pcie_port *lpp,
> +			     u32 mask, u32 val, u32 ofs)
> +{
> +	pcie_update_bits(lpp->app_base, mask, val, ofs);
> +}
> +
> +static inline u32 pcie_rc_cfg_rd(struct intel_pcie_port *lpp, u32 ofs)
> +{
> +	return dw_pcie_readl_dbi(&lpp->pci, ofs);
> +}
> +
> +static inline void pcie_rc_cfg_wr(struct intel_pcie_port *lpp, u32 val, u32 ofs)
> +{
> +	dw_pcie_writel_dbi(&lpp->pci, ofs, val);
> +}
> +
> +static void pcie_rc_cfg_wr_mask(struct intel_pcie_port *lpp,
> +				u32 mask, u32 val, u32 ofs)
> +{
> +	pcie_update_bits(lpp->pci.dbi_base, mask, val, ofs);
> +}
> +
> +static void intel_pcie_mem_iatu(struct intel_pcie_port *lpp)
> +{
> +	struct pcie_port *pp = &lpp->pci.pp;
> +	phys_addr_t cpu_addr = pp->mem_base;
> +
> +	dw_pcie_prog_outbound_atu(&lpp->pci, PCIE_ATU_REGION_INDEX0,
> +				  PCIE_ATU_TYPE_MEM, cpu_addr,
> +				  pp->mem_base, pp->mem_size);
> +}
> +
> +static void intel_pcie_ltssm_enable(struct intel_pcie_port *lpp)
> +{
> +	pcie_app_wr_mask(lpp, PCIE_APP_CCR_LTSSM_ENABLE,
> +			 PCIE_APP_CCR_LTSSM_ENABLE, PCIE_APP_CCR);
> +}
> +
> +static void intel_pcie_ltssm_disable(struct intel_pcie_port *lpp)
> +{
> +	pcie_app_wr_mask(lpp, PCIE_APP_CCR_LTSSM_ENABLE, 0, PCIE_APP_CCR);
> +}
> +
> +static const char *pcie_link_gen_to_str(int gen)
> +{
> +	switch (gen) {
> +	case PCIE_LINK_SPEED_GEN1:
> +		return "2.5";
> +	case PCIE_LINK_SPEED_GEN2:
> +		return "5.0";
> +	case PCIE_LINK_SPEED_GEN3:
> +		return "8.0";
> +	case PCIE_LINK_SPEED_GEN4:
> +		return "16.0";
> +	default:
> +		return "???";
> +	}
> +}
> +
> +static void intel_pcie_link_setup(struct intel_pcie_port *lpp)
> +{
> +	u32 val;
> +
> +	val = pcie_rc_cfg_rd(lpp, PCIE_LCAP);
> +	lpp->max_speed = FIELD_GET(PCIE_LCAP_MAX_LINK_SPEED, val);
> +	lpp->max_width = FIELD_GET(PCIE_LCAP_MAX_LENGTH_WIDTH, val);
> +
> +	val = pcie_rc_cfg_rd(lpp, PCIE_LCTLSTS);
> +
> +	val &= ~(PCIE_LCTLSTS_LINK_DISABLE | PCIE_LCTLSTS_ASPM_ENABLE);
> +	val |= (PCIE_LCTLSTS_SLOT_CLK_CFG | PCIE_LCTLSTS_COM_CLK_CFG |
> +		PCIE_LCTLSTS_RCB128);
> +	pcie_rc_cfg_wr(lpp, val, PCIE_LCTLSTS);
> +}
> +
> +static void intel_pcie_max_speed_setup(struct intel_pcie_port *lpp)
> +{
> +	u32 reg, val;
> +
> +	reg = pcie_rc_cfg_rd(lpp, PCIE_LCTLSTS2);
> +	switch (lpp->link_gen) {
> +	case PCIE_LINK_SPEED_GEN1:
> +		reg &= ~PCIE_LCTLSTS2_TGT_LINK_SPEED;
> +		reg |= PCIE_LCTLSTS2_HW_AUTO_DIS |
> +			PCIE_LCTLSTS2_TGT_LINK_SPEED_25GT;
> +		break;
> +	case PCIE_LINK_SPEED_GEN2:
> +		reg &= ~PCIE_LCTLSTS2_TGT_LINK_SPEED;
> +		reg |= PCIE_LCTLSTS2_HW_AUTO_DIS |
> +			PCIE_LCTLSTS2_TGT_LINK_SPEED_5GT;
> +		break;
> +	case PCIE_LINK_SPEED_GEN3:
> +		reg &= ~PCIE_LCTLSTS2_TGT_LINK_SPEED;
> +		reg |= PCIE_LCTLSTS2_HW_AUTO_DIS |
> +			PCIE_LCTLSTS2_TGT_LINK_SPEED_8GT;
> +		break;
> +	case PCIE_LINK_SPEED_GEN4:
> +		reg &= ~PCIE_LCTLSTS2_TGT_LINK_SPEED;
> +		reg |= PCIE_LCTLSTS2_HW_AUTO_DIS |
> +			PCIE_LCTLSTS2_TGT_LINK_SPEED_16GT;
> +		break;
> +	default:
> +		/* Use hardware capability */
> +		val = pcie_rc_cfg_rd(lpp, PCIE_LCAP);
> +		val = FIELD_GET(PCIE_LCAP_MAX_LINK_SPEED, val);
> +		reg &= ~PCIE_LCTLSTS2_HW_AUTO_DIS;
> +		reg |= val;
> +		break;
> +	}
> +	pcie_rc_cfg_wr(lpp, reg, PCIE_LCTLSTS2);
> +}
> +
> +static void intel_pcie_speed_change_enable(struct intel_pcie_port *lpp)
> +{
> +	u32 mask, val;
> +
> +	mask = PORT_LOGIC_SPEED_CHANGE | PCIE_PORT_LOGIC_FTS;
> +	val = PORT_LOGIC_SPEED_CHANGE | PCIE_PORT_LOGIC_DFT_FTS_NUM;
> +
> +	pcie_rc_cfg_wr_mask(lpp, mask, val, PCIE_LINK_WIDTH_SPEED_CONTROL);
> +}
> +
> +static void intel_pcie_speed_change_disable(struct intel_pcie_port *lpp)
> +{
> +	pcie_rc_cfg_wr_mask(lpp, PORT_LOGIC_SPEED_CHANGE, 0,
> +			    PCIE_LINK_WIDTH_SPEED_CONTROL);
> +}
> +
> +static void intel_pcie_max_link_width_setup(struct intel_pcie_port *lpp)
> +{
> +	u32 mask, val;
> +
> +	/* HW auto bandwidth negotiation must be enabled */
> +	pcie_rc_cfg_wr_mask(lpp, PCIE_LCTLSTS_HW_AW_DIS, 0, PCIE_LCTLSTS);
> +
> +	mask = PCIE_DIRECT_LINK_WIDTH_CHANGE | PCIE_TARGET_LINK_WIDTH;
> +	val = PCIE_DIRECT_LINK_WIDTH_CHANGE | lpp->lanes;
> +	pcie_rc_cfg_wr_mask(lpp, mask, val, PCIE_MULTI_LANE_CTRL);
> +}
> +
> +static void intel_pcie_port_logic_setup(struct intel_pcie_port *lpp)
> +{
> +	u32 val, mask, fts;
> +
> +	switch (lpp->max_speed) {
> +	case PCIE_LINK_SPEED_GEN1:
> +	case PCIE_LINK_SPEED_GEN2:
> +		fts = PCIE_AFR_GEN12_FTS_NUM_DFT;
> +		break;
> +	case PCIE_LINK_SPEED_GEN3:
> +		fts = PCIE_AFR_GEN3_FTS_NUM_DFT;
> +		break;
> +	case PCIE_LINK_SPEED_GEN4:
> +		fts = PCIE_AFR_GEN4_FTS_NUM_DFT;
> +		break;
> +	default:
> +		fts = PCIE_AFR_GEN12_FTS_NUM_DFT;
> +		break;
> +	}
> +	mask = PCIE_AFR_FTS_NUM | PCIE_AFR_COM_FTS_NUM;
> +	val = FIELD_PREP(PCIE_AFR_FTS_NUM, fts) |
> +	       FIELD_PREP(PCIE_AFR_COM_FTS_NUM, fts);
> +	pcie_rc_cfg_wr_mask(lpp, mask, val, PCIE_AFR);
> +
> +	/* Port Link Control Register */
> +	pcie_rc_cfg_wr_mask(lpp, PCIE_PLCR_DLL_LINK_EN,
> +			    PCIE_PLCR_DLL_LINK_EN, PCIE_PORT_LINK_CONTROL);
> +}
> +
> +static void intel_pcie_upconfig_setup(struct intel_pcie_port *lpp)
> +{
> +	pcie_rc_cfg_wr_mask(lpp, PCIE_UPCONFIG_SUPPORT,
> +			    PCIE_UPCONFIG_SUPPORT, PCIE_MULTI_LANE_CTRL);
> +}
> +
> +static void intel_pcie_rc_setup(struct intel_pcie_port *lpp)
> +{
> +	intel_pcie_ltssm_disable(lpp);
> +	intel_pcie_link_setup(lpp);
> +	dw_pcie_setup_rc(&lpp->pci.pp);
> +	intel_pcie_upconfig_setup(lpp);
> +
> +	intel_pcie_max_speed_setup(lpp);
> +	intel_pcie_speed_change_enable(lpp);
> +	intel_pcie_port_logic_setup(lpp);
> +	intel_pcie_mem_iatu(lpp);
> +}
> +
> +static int intel_pcie_ep_rst_init(struct intel_pcie_port *lpp)
> +{
> +	struct device *dev = lpp->pci.dev;
> +	int ret;
> +
> +	lpp->reset_gpio = devm_gpiod_get(dev, "reset", GPIOD_OUT_LOW);
> +	if (IS_ERR(lpp->reset_gpio)) {
> +		ret = PTR_ERR(lpp->reset_gpio);
> +		if (ret != -EPROBE_DEFER)
> +			dev_err(dev, "failed to request PCIe GPIO: %d\n", ret);
> +		return ret;
> +	}
> +	/* Make initial reset last for 100us */
> +	usleep_range(100, 200);
> +
> +	return 0;
> +}
> +
> +static void intel_pcie_core_rst_assert(struct intel_pcie_port *lpp)
> +{
> +	reset_control_assert(lpp->core_rst);
> +}
> +
> +static void intel_pcie_core_rst_deassert(struct intel_pcie_port *lpp)
> +{
> +	/*
> +	 * One micro-second delay to make sure the reset pulse
> +	 * wide enough so that core reset is clean.
> +	 */
> +	udelay(1);
> +	reset_control_deassert(lpp->core_rst);
> +
> +	/*
> +	 * Some SoC core reset also reset PHY, more delay needed
> +	 * to make sure the reset process is done.
> +	 */
> +	usleep_range(1000, 2000);
> +}
> +
> +static void intel_pcie_device_rst_assert(struct intel_pcie_port *lpp)
> +{
> +	gpiod_set_value_cansleep(lpp->reset_gpio, 1);
> +}
> +
> +static void intel_pcie_device_rst_deassert(struct intel_pcie_port *lpp)
> +{
> +	msleep(lpp->rst_interval);
> +	gpiod_set_value_cansleep(lpp->reset_gpio, 0);
> +}
> +
> +static int intel_pcie_app_logic_setup(struct intel_pcie_port *lpp)
> +{
> +	intel_pcie_device_rst_deassert(lpp);
> +	intel_pcie_ltssm_enable(lpp);
> +
> +	return dw_pcie_wait_for_link(&lpp->pci);
> +}
> +
> +static irqreturn_t intel_pcie_core_isr(int irq, void *arg)
> +{
> +	struct intel_pcie_port *lpp = arg;
> +	u32 val, reg;
> +
> +	reg = pcie_app_rd(lpp, PCIE_APP_IRNCR);
> +	val = reg & PCIE_APP_IRN_INT;
> +
> +	pcie_app_wr(lpp, val, PCIE_APP_IRNCR);
> +
> +	trace_printk("PCIe misc interrupt status 0x%x\n", reg);
> +	return IRQ_HANDLED;
> +}
> +
> +static int intel_pcie_setup_irq(struct intel_pcie_port *lpp)
> +{
> +	struct device *dev = lpp->pci.dev;
> +	struct platform_device *pdev;
> +	char *irq_name;
> +	int irq, ret;
> +
> +	pdev = to_platform_device(dev);
> +	irq = platform_get_irq(pdev, 0);
> +	if (irq < 0) {
> +		dev_err(dev, "missing sys integrated irq resource\n");
> +		return irq;
> +	}
> +
> +	irq_name = devm_kasprintf(dev, GFP_KERNEL, "pcie_misc%d", lpp->id);
> +	if (!irq_name) {
> +		dev_err(dev, "failed to alloc irq name\n");
> +		return -ENOMEM;
> +	}
> +
> +	ret = devm_request_irq(dev, irq, intel_pcie_core_isr,
> +			       IRQF_SHARED, irq_name, lpp);
> +	if (ret) {
> +		dev_err(dev, "request irq %d failed\n", irq);
> +		return ret;
> +	}
> +	/* Enable integrated interrupts */
> +	pcie_app_wr_mask(lpp, PCIE_APP_IRN_INT,
> +			 PCIE_APP_IRN_INT, PCIE_APP_IRNEN);
> +
> +	return ret;
> +}
> +
> +static void intel_pcie_core_irq_disable(struct intel_pcie_port *lpp)
> +{
> +	pcie_app_wr(lpp, 0, PCIE_APP_IRNEN);
> +	pcie_app_wr(lpp, PCIE_APP_IRN_INT,  PCIE_APP_IRNCR);
> +}
> +
> +static int intel_pcie_get_resources(struct platform_device *pdev)
> +{
> +	struct intel_pcie_port *lpp;
> +	struct resource *res;
> +	struct dw_pcie *pci;
> +	struct device *dev;
> +	int ret;
> +
> +	lpp = platform_get_drvdata(pdev);
> +	pci = &lpp->pci;
> +	dev = pci->dev;
> +
> +	ret = device_property_read_u32(dev, "linux,pci-domain", &lpp->id);
> +	if (ret) {
> +		dev_err(dev, "failed to get domain id, errno %d\n", ret);
> +		return ret;
> +	}
> +
> +	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi");
> +	if (!res)
> +		return -ENXIO;
> +
> +	pci->dbi_base = devm_ioremap_resource(dev, res);
> +	if (IS_ERR(pci->dbi_base))
> +		return PTR_ERR(pci->dbi_base);
> +
> +	lpp->core_clk = devm_clk_get(dev, NULL);
> +	if (IS_ERR(lpp->core_clk)) {
> +		ret = PTR_ERR(lpp->core_clk);
> +		if (ret != -EPROBE_DEFER)
> +			dev_err(dev, "failed to get clks: %d\n", ret);
> +		return ret;
> +	}
> +
> +	lpp->core_rst = devm_reset_control_get(dev, NULL);
> +	if (IS_ERR(lpp->core_rst)) {
> +		ret = PTR_ERR(lpp->core_rst);
> +		if (ret != -EPROBE_DEFER)
> +			dev_err(dev, "failed to get resets: %d\n", ret);
> +		return ret;
> +	}
> +
> +	ret = device_property_match_string(dev, "device_type", "pci");
> +	if (ret) {
> +		dev_err(dev, "failed to find pci device type: %d\n", ret);
> +		return ret;
> +	}
> +
> +	if (device_property_read_u32(dev, "reset-assert-ms",
> +				     &lpp->rst_interval))
> +		lpp->rst_interval = RST_INTRVL_DFT_MS;
> +
> +	if (device_property_read_u32(dev, "max-link-speed", &lpp->link_gen))
> +		lpp->link_gen = 0; /* Fallback to auto */
> +
> +	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "app");
> +	if (!res)
> +		return -ENXIO;
> +
> +	lpp->app_base = devm_ioremap_resource(dev, res);
> +	if (IS_ERR(lpp->app_base))
> +		return PTR_ERR(lpp->app_base);
> +
> +	ret = intel_pcie_ep_rst_init(lpp);
> +	if (ret)
> +		return ret;
> +
> +	lpp->phy = devm_phy_get(dev, "pciephy");
> +	if (IS_ERR(lpp->phy)) {
> +		ret = PTR_ERR(lpp->phy);
> +		if (ret != -EPROBE_DEFER)
> +			dev_err(dev, "couldn't get pcie-phy: %d\n", ret);
> +		return ret;
> +	}
> +	return 0;
> +}
> +
> +static void intel_pcie_deinit_phy(struct intel_pcie_port *lpp)
> +{
> +	phy_exit(lpp->phy);
> +}
> +
> +static int intel_pcie_wait_l2(struct intel_pcie_port *lpp)
> +{
> +	u32 value;
> +	int ret;
> +
> +	if (lpp->max_speed < PCIE_LINK_SPEED_GEN3)
> +		return 0;
> +
> +	/* Send PME_TURN_OFF message */
> +	pcie_app_wr_mask(lpp, PCIE_APP_MSG_XMT_PM_TURNOFF,
> +			 PCIE_APP_MSG_XMT_PM_TURNOFF, PCIE_APP_MSG_CR);
> +
> +	/* Read PMC status and wait for falling into L2 link state */
> +	ret = readl_poll_timeout(lpp->app_base + PCIE_APP_PMC, value,
> +				 (value & PCIE_APP_PMC_IN_L2), 20,
> +				 jiffies_to_usecs(5 * HZ));
> +	if (ret)
> +		dev_err(lpp->pci.dev, "PCIe link enter L2 timeout!\n");
> +
> +	return ret;
> +}
> +
> +static void intel_pcie_turn_off(struct intel_pcie_port *lpp)
> +{
> +	if (dw_pcie_link_up(&lpp->pci))
> +		intel_pcie_wait_l2(lpp);
> +
> +	/* Put EP in reset state */
> +	intel_pcie_device_rst_assert(lpp);
> +	pcie_rc_cfg_wr_mask(lpp, PCI_COMMAND_MEMORY, 0, PCI_COMMAND);
> +}
> +
> +static int intel_pcie_host_setup(struct intel_pcie_port *lpp)
> +{
> +	int ret;
> +
> +	intel_pcie_core_rst_assert(lpp);
> +	intel_pcie_device_rst_assert(lpp);
> +
> +	ret = phy_init(lpp->phy);
> +	if (ret)
> +		return ret;
> +
> +	intel_pcie_core_rst_deassert(lpp);
> +
> +	ret = clk_prepare_enable(lpp->core_clk);
> +	if (ret) {
> +		dev_err(lpp->pci.dev, "Core clock enable failed: %d\n", ret);
> +		goto clk_err;
> +	}
> +
> +	intel_pcie_rc_setup(lpp);
> +	ret = intel_pcie_app_logic_setup(lpp);
> +	if (ret)
> +		goto app_init_err;
> +
> +	ret = intel_pcie_setup_irq(lpp);
> +	if (!ret)
> +		return ret;
> +
> +	intel_pcie_turn_off(lpp);
> +app_init_err:
> +	clk_disable_unprepare(lpp->core_clk);
> +clk_err:
> +	intel_pcie_core_rst_assert(lpp);
> +	intel_pcie_deinit_phy(lpp);
> +	return ret;
> +}
> +
> +static ssize_t
> +pcie_link_status_show(struct device *dev, struct device_attribute *attr,
> +		      char *buf)
> +{
> +	u32 reg, width, gen;
> +	struct intel_pcie_port *lpp;
> +
> +	lpp = dev_get_drvdata(dev);
> +
> +	reg = pcie_rc_cfg_rd(lpp, PCIE_LCTLSTS);
> +	width = FIELD_GET(PCIE_LCTLSTS_NEGOTIATED_LINK_WIDTH, reg);
> +	gen = FIELD_GET(PCIE_LCTLSTS_LINK_SPEED, reg);
> +	if (gen > lpp->max_speed)
> +		return -EINVAL;
> +
> +	return sprintf(buf, "Port %2u Width x%u Speed %s GT/s\n", lpp->id,
> +		       width, pcie_link_gen_to_str(gen));
> +}
> +static DEVICE_ATTR_RO(pcie_link_status);
> +
> +static ssize_t pcie_speed_store(struct device *dev,
> +				struct device_attribute *attr,
> +				const char *buf, size_t len)
> +{
> +	struct intel_pcie_port *lpp;
> +	unsigned long val;
> +	int ret;
> +
> +	lpp = dev_get_drvdata(dev);
> +
> +	ret = kstrtoul(buf, 10, &val);
> +	if (ret)
> +		return ret;
> +
> +	if (val > lpp->max_speed)
> +		return -EINVAL;
> +
> +	lpp->link_gen = val;
> +	intel_pcie_max_speed_setup(lpp);
> +	intel_pcie_speed_change_disable(lpp);
> +	intel_pcie_speed_change_enable(lpp);
> +
> +	return len;
> +}
> +static DEVICE_ATTR_WO(pcie_speed);
> +
> +/*
> + * Link width change on the fly is not always successful.
> + * It also depends on the partner.
> + */
> +static ssize_t pcie_width_store(struct device *dev,
> +				struct device_attribute *attr,
> +				const char *buf, size_t len)
> +{
> +	struct intel_pcie_port *lpp;
> +	unsigned long val;
> +
> +	lpp = dev_get_drvdata(dev);
> +
> +	if (kstrtoul(buf, 10, &val))
> +		return -EINVAL;
> +
> +	if (val > lpp->max_width)
> +		return -EINVAL;
> +
> +	lpp->lanes = val;
> +	intel_pcie_max_link_width_setup(lpp);
> +
> +	return len;
> +}
> +static DEVICE_ATTR_WO(pcie_width);
> +
> +static struct attribute *pcie_cfg_attrs[] = {
> +	&dev_attr_pcie_link_status.attr,
> +	&dev_attr_pcie_speed.attr,
> +	&dev_attr_pcie_width.attr,
> +	NULL,
> +};
> +ATTRIBUTE_GROUPS(pcie_cfg);
> +
> +static int intel_pcie_sysfs_init(struct intel_pcie_port *lpp)
> +{
> +	return devm_device_add_groups(lpp->pci.dev, pcie_cfg_groups);
> +}
> +
> +static void __intel_pcie_remove(struct intel_pcie_port *lpp)
> +{
> +	pcie_rc_cfg_wr_mask(lpp, PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER,
> +			    0, PCI_COMMAND);
Does DWC handle this?
> +	intel_pcie_core_irq_disable(lpp);
> +	intel_pcie_turn_off(lpp);
> +	clk_disable_unprepare(lpp->core_clk);
> +	intel_pcie_core_rst_assert(lpp);
> +	intel_pcie_deinit_phy(lpp);
> +}
> +
> +static int intel_pcie_remove(struct platform_device *pdev)
> +{
> +	struct intel_pcie_port *lpp = platform_get_drvdata(pdev);
> +	struct pcie_port *pp = &lpp->pci.pp;
> +
> +	dw_pcie_host_deinit(pp);
> +	__intel_pcie_remove(lpp);
> +
> +	return 0;
> +}
> +
> +static int __maybe_unused intel_pcie_suspend_noirq(struct device *dev)
> +{
> +	struct intel_pcie_port *lpp = dev_get_drvdata(dev);
> +	int ret;
> +
> +	intel_pcie_core_irq_disable(lpp);
> +	ret = intel_pcie_wait_l2(lpp);
> +	if (ret)
> +		return ret;
> +
> +	intel_pcie_deinit_phy(lpp);
> +	clk_disable_unprepare(lpp->core_clk);
> +	return ret;
> +}
> +
> +static int __maybe_unused intel_pcie_resume_noirq(struct device *dev)
> +{
> +	struct intel_pcie_port *lpp = dev_get_drvdata(dev);
> +
> +	return intel_pcie_host_setup(lpp);
> +}
> +
> +static int intel_pcie_rc_init(struct pcie_port *pp)
> +{
> +	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
> +	struct intel_pcie_port *lpp = dev_get_drvdata(pci->dev);
> +	int ret;
> +
> +	/* RC/host initialization */
Remove this comment
> +	ret = intel_pcie_host_setup(lpp);
> +	if (ret)
> +		return ret;
> +	ret = intel_pcie_sysfs_init(lpp);
> +	if (ret)
> +		__intel_pcie_remove(lpp);
> +	return ret;
> +}
> +
> +int intel_pcie_msi_init(struct pcie_port *pp)
> +{
> +	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
> +
> +	dev_dbg(pci->dev, "PCIe MSI/MSIx is handled by MSI in x86 processor\n");
> +	return 0;
> +}
> +
> +u64 intel_pcie_cpu_addr(struct dw_pcie *pcie, u64 cpu_addr)
> +{
> +	return cpu_addr + BUS_IATU_OFFS;
> +}
> +
> +static const struct dw_pcie_ops intel_pcie_ops = {
> +	.cpu_addr_fixup = intel_pcie_cpu_addr,
> +};
> +
> +static const struct dw_pcie_host_ops intel_pcie_dw_ops = {
> +	.host_init =		intel_pcie_rc_init,
> +	.msi_host_init =	intel_pcie_msi_init,
> +};
> +
> +static const struct intel_pcie_soc pcie_data = {
> +	.pcie_ver =		0x520A,
> +	.pcie_atu_offset =	0xC0000,
> +	.num_viewport =		3,
> +};
> +
> +static int intel_pcie_probe(struct platform_device *pdev)
> +{
> +	const struct intel_pcie_soc *data;
> +	struct device *dev = &pdev->dev;
> +	struct intel_pcie_port *lpp;
> +	struct pcie_port *pp;
> +	struct dw_pcie *pci;
> +	int ret;
> +
> +	lpp = devm_kzalloc(dev, sizeof(*lpp), GFP_KERNEL);
> +	if (!lpp)
> +		return -ENOMEM;
> +
> +	platform_set_drvdata(pdev, lpp);
> +	pci = &lpp->pci;
> +	pci->dev = dev;
> +	pp = &pci->pp;
> +
> +	ret = intel_pcie_get_resources(pdev);
> +	if (ret)
> +		return ret;
> +
> +	data = device_get_match_data(dev);
> +	pci->ops = &intel_pcie_ops;
> +	pci->version = data->pcie_ver;
> +	pci->atu_base = pci->dbi_base + data->pcie_atu_offset;
> +	pp->ops = &intel_pcie_dw_ops;
> +
> +	ret = dw_pcie_host_init(pp);
> +	if (ret) {
> +		dev_err(dev, "cannot initialize host\n");
> +		return ret;
> +	}
> +	/*
> +	 * Intel PCIe doesn't configure the IO region, so configure
> +	 * the viewport not to access the IO region during register
> +	 * read/write operations.
> +	 */
> +	pci->num_viewport = data->num_viewport;
> +	dev_info(dev,
> +		 "Intel AXI PCIe Root Complex Port %d Init Done\n", lpp->id);
> +	return ret;
> +}
> +
> +static const struct dev_pm_ops intel_pcie_pm_ops = {
> +	SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(intel_pcie_suspend_noirq,
> +				      intel_pcie_resume_noirq)
> +};
> +
> +static const struct of_device_id of_intel_pcie_match[] = {
> +	{ .compatible = "intel,lgm-pcie", .data = &pcie_data },
> +	{}
> +};
> +
> +static struct platform_driver intel_pcie_driver = {
> +	.probe = intel_pcie_probe,
> +	.remove = intel_pcie_remove,
> +	.driver = {
> +		.name = "intel-lgm-pcie",
> +		.of_match_table = of_intel_pcie_match,
> +		.pm = &intel_pcie_pm_ops,
> +	},
> +};
> +builtin_platform_driver(intel_pcie_driver);


* Re: [PATCH v3 2/2] dwc: PCI: intel: Intel PCIe RC controller driver
  2019-09-04 10:10 ` [PATCH v3 2/2] dwc: PCI: intel: Intel PCIe RC controller driver Dilip Kota
  2019-09-04 13:05   ` Andy Shevchenko
  2019-09-05  2:30   ` Chuan Hua, Lei
@ 2019-09-05 10:45   ` Andrew Murray
  2019-09-05 11:26     ` Christoph Hellwig
                       ` (2 more replies)
  2 siblings, 3 replies; 37+ messages in thread
From: Andrew Murray @ 2019-09-05 10:45 UTC (permalink / raw)
  To: Dilip Kota
  Cc: jingoohan1, gustavo.pimentel, lorenzo.pieralisi, robh,
	martin.blumenstingl, linux-pci, hch, devicetree, linux-kernel,
	andriy.shevchenko, cheol.yong.kim, chuanhua.lei, qi-ming.wu

On Wed, Sep 04, 2019 at 06:10:31PM +0800, Dilip Kota wrote:
> Add support for the PCIe RC controller on the Intel Universal
> Gateway SoC. The PCIe controller is based on the Synopsys
> DesignWare PCIe core.
> 
> Signed-off-by: Dilip Kota <eswara.kota@linux.intel.com>
> ---

Hi Dilip,

Thanks for the patch, initial feedback below:

> changes on v3:
> 	Rename PCIe app logic registers with PCIE_APP prefix.
> 	PCIE_IOP_CTRL configuration is not required. Remove respective code.
> 	Remove wrapper functions for clk enable/disable APIs.
> 	Use platform_get_resource_byname() instead of
> 	  devm_platform_ioremap_resource() to be consistent with the DWC framework.
> 	Rename phy name to "pciephy".
> 	Modify debug message in msi_init() callback to be more specific.
> 	Remove map_irq() callback.
> 	Enable the INTx interrupts at the time of PCIe initialization.
> 	Reduce memory fragmentation by using variable "struct dw_pcie pci"
> 	  instead of allocating memory.
> 	Reduce the delay to 100us during endpoint initialization
> 	  intel_pcie_ep_rst_init().
> 	Call  dw_pcie_host_deinit() during remove() instead of directly
> 	  calling PCIe core APIs.
> 	Rename "intel,rst-interval" to "reset-assert-ms".
> 	Remove unused APP logic Interrupt bit macro definitions.
>  	Use dwc framework's dw_pcie_setup_rc() for PCIe host specific
> 	 configuration instead of redefining the same functionality in
> 	 the driver.
> 	Move the whole DT parsing specific code to intel_pcie_get_resources()
> 
>  drivers/pci/controller/dwc/Kconfig          |  13 +
>  drivers/pci/controller/dwc/Makefile         |   1 +
>  drivers/pci/controller/dwc/pcie-intel-axi.c | 840 ++++++++++++++++++++++++++++
>  3 files changed, 854 insertions(+)
>  create mode 100644 drivers/pci/controller/dwc/pcie-intel-axi.c
> 
> diff --git a/drivers/pci/controller/dwc/Kconfig b/drivers/pci/controller/dwc/Kconfig
> index 6ea778ae4877..e44b9b6a6390 100644
> --- a/drivers/pci/controller/dwc/Kconfig
> +++ b/drivers/pci/controller/dwc/Kconfig
> @@ -82,6 +82,19 @@ config PCIE_DW_PLAT_EP
>  	  order to enable device-specific features PCI_DW_PLAT_EP must be
>  	  selected.
>  
> +config PCIE_INTEL_AXI
> +        bool "Intel AHB/AXI PCIe host controller support"
> +        depends on PCI_MSI
> +        depends on PCI

I don't think the PCI dependency is required here.

I'm also not sure why PCI_MSI is required, we select PCIE_DW_HOST which
depends on PCI_MSI_IRQ_DOMAIN which depends on PCI_MSI.

> +        depends on OF
> +        select PCIE_DW_HOST
> +        help
> +          Say 'Y' here to enable support for Intel AHB/AXI PCIe Host
> +	  controller driver.
> +	  The Intel PCIe controller is based on the Synopsys Designware
> +	  pcie core and therefore uses the Designware core functions to
> +	  implement the driver.

I can see this description is similar to others in the same Kconfig,
however I'm not sure what value a user gains by knowing implementation
details - it's helpful to know that PCIE_INTEL_AXI is based on the
Designware core, but is it helpful to know that the Designware core
functions are used?

> +
>  config PCI_EXYNOS
>  	bool "Samsung Exynos PCIe controller"
>  	depends on SOC_EXYNOS5440 || COMPILE_TEST
> diff --git a/drivers/pci/controller/dwc/Makefile b/drivers/pci/controller/dwc/Makefile
> index b085dfd4fab7..46e656ebdf90 100644
> --- a/drivers/pci/controller/dwc/Makefile
> +++ b/drivers/pci/controller/dwc/Makefile
> @@ -3,6 +3,7 @@ obj-$(CONFIG_PCIE_DW) += pcie-designware.o
>  obj-$(CONFIG_PCIE_DW_HOST) += pcie-designware-host.o
>  obj-$(CONFIG_PCIE_DW_EP) += pcie-designware-ep.o
>  obj-$(CONFIG_PCIE_DW_PLAT) += pcie-designware-plat.o
> +obj-$(CONFIG_PCIE_INTEL_AXI) += pcie-intel-axi.o
>  obj-$(CONFIG_PCI_DRA7XX) += pci-dra7xx.o
>  obj-$(CONFIG_PCI_EXYNOS) += pci-exynos.o
>  obj-$(CONFIG_PCI_IMX6) += pci-imx6.o
> diff --git a/drivers/pci/controller/dwc/pcie-intel-axi.c b/drivers/pci/controller/dwc/pcie-intel-axi.c
> new file mode 100644
> index 000000000000..75607ce03ebf
> --- /dev/null
> +++ b/drivers/pci/controller/dwc/pcie-intel-axi.c
> @@ -0,0 +1,840 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * PCIe host controller driver for Intel AXI PCIe Bridge
> + *
> + * Copyright (c) 2019 Intel Corporation.
> + */
> +
> +#include <linux/bitfield.h>
> +#include <linux/clk.h>
> +#include <linux/gpio/consumer.h>
> +#include <linux/interrupt.h>
> +#include <linux/iopoll.h>
> +#include <linux/of_irq.h>
> +#include <linux/of_pci.h>
> +#include <linux/of_platform.h>
> +#include <linux/phy/phy.h>
> +#include <linux/platform_device.h>
> +#include <linux/reset.h>
> +
> +#include "../../pci.h"

Please remove this - it isn't needed.

> +#include "pcie-designware.h"
> +
> +#define PCIE_CCRID				0x8
> +
> +#define PCIE_LCAP				0x7C
> +#define PCIE_LCAP_MAX_LINK_SPEED		GENMASK(3, 0)
> +#define PCIE_LCAP_MAX_LENGTH_WIDTH		GENMASK(9, 4)

These look like the standard PCI Link Capabilities Register,
can you use the standard ones defined in pci_regs.h? (see
PCI_EXP_LNKCAP_SLS_*).

> +
> +/* Link Control and Status Register */
> +#define PCIE_LCTLSTS				0x80
> +#define PCIE_LCTLSTS_ASPM_ENABLE		GENMASK(1, 0)
> +#define PCIE_LCTLSTS_RCB128			BIT(3)
> +#define PCIE_LCTLSTS_LINK_DISABLE		BIT(4)
> +#define PCIE_LCTLSTS_COM_CLK_CFG		BIT(6)
> +#define PCIE_LCTLSTS_HW_AW_DIS			BIT(9)
> +#define PCIE_LCTLSTS_LINK_SPEED			GENMASK(19, 16)
> +#define PCIE_LCTLSTS_NEGOTIATED_LINK_WIDTH	GENMASK(25, 20)
> +#define PCIE_LCTLSTS_SLOT_CLK_CFG		BIT(28)

These look like the standard Link Control and Link Status register, can
you use the standard ones defined in pci_regs.h? (see PCI_EXP_LNKCTL_*
and PCI_EXP_LNKSTA_*).

> +
> +#define PCIE_LCTLSTS2				0xA0
> +#define PCIE_LCTLSTS2_TGT_LINK_SPEED		GENMASK(3, 0)
> +#define PCIE_LCTLSTS2_TGT_LINK_SPEED_25GT	0x1
> +#define PCIE_LCTLSTS2_TGT_LINK_SPEED_5GT	0x2
> +#define PCIE_LCTLSTS2_TGT_LINK_SPEED_8GT	0x3
> +#define PCIE_LCTLSTS2_TGT_LINK_SPEED_16GT	0x4

And these look like the standard Link Control Register 2 (see
PCI_EXP_LNKCTL2_*).

Most of the #defines above and below look just like standard PCI defines
(for example those found in pci_regs.h) - both in terms of their name and
what they do. Ideally where the functionality is the same or very similar,
then we should use the standard PCI defines in (pci_regs.h). This helps
readability/understanding, helps to identify duplicate code and reduces
the size of your patch.

The capability offsets (e.g. PCIE_LCTLSTS2) are also standard. The
offsets you define are shifted by an additional 0x70. E.g.
PCIE_LCTLSTS2 = 0xA0, and PCI_EXP_LNKCTL2 = 0x30. Perhaps abstracting
this offset will allow you to use the standard PCI defines?

I haven't looked in much detail at the remainder of the defines, but
the same rationale should apply. 

> +#define PCIE_LCTLSTS2_HW_AUTO_DIS		BIT(5)
> +
> +/* Ack Frequency Register */
> +#define PCIE_AFR				0x70C
> +#define PCIE_AFR_FTS_NUM			GENMASK(15, 8)
> +#define PCIE_AFR_COM_FTS_NUM			GENMASK(23, 16)
> +#define PCIE_AFR_GEN12_FTS_NUM_DFT		(SZ_128 - 1)
> +#define PCIE_AFR_GEN3_FTS_NUM_DFT		180
> +#define PCIE_AFR_GEN4_FTS_NUM_DFT		196
> +
> +#define PCIE_PLCR_DLL_LINK_EN			BIT(5)
> +#define PCIE_PORT_LOGIC_FTS			GENMASK(7, 0)
> +#define PCIE_PORT_LOGIC_DFT_FTS_NUM		(SZ_128 - 1)
> +
> +#define PCIE_MISC_CTRL				0x8BC
> +#define PCIE_MISC_CTRL_DBI_RO_WR_EN		BIT(0)
> +
> +#define PCIE_MULTI_LANE_CTRL			0x8C0
> +#define PCIE_UPCONFIG_SUPPORT			BIT(7)
> +#define PCIE_DIRECT_LINK_WIDTH_CHANGE		BIT(6)
> +#define PCIE_TARGET_LINK_WIDTH			GENMASK(5, 0)
> +
> +/* APP RC Core Control Register */
> +#define PCIE_APP_CCR				0x10
> +#define PCIE_APP_CCR_LTSSM_ENABLE		BIT(0)
> +
> +/* PCIe Message Control */
> +#define PCIE_APP_MSG_CR				0x30
> +#define PCIE_APP_MSG_XMT_PM_TURNOFF		BIT(0)
> +
> +/* PCIe Power Management Control */
> +#define PCIE_APP_PMC				0x44
> +#define PCIE_APP_PMC_IN_L2			BIT(20)
> +
> +/* Interrupt Enable Register */
> +#define PCIE_APP_IRNEN				0xF4
> +#define PCIE_APP_IRNCR				0xF8
> +#define PCIE_APP_IRN_AER_REPORT			BIT(0)
> +#define PCIE_APP_IRN_PME			BIT(2)
> +#define PCIE_APP_IRN_RX_VDM_MSG			BIT(4)
> +#define PCIE_APP_IRN_PM_TO_ACK			BIT(9)
> +#define PCIE_APP_IRN_LINK_AUTO_BW_STAT		BIT(11)
> +#define PCIE_APP_IRN_BW_MGT			BIT(12)
> +#define PCIE_APP_IRN_MSG_LTR			BIT(18)
> +#define PCIE_APP_IRN_SYS_ERR_RC			BIT(29)
> +
> +#define PCIE_APP_INTX_OFST	12
> +#define PCIE_APP_IRN_INT	(PCIE_APP_IRN_AER_REPORT | PCIE_APP_IRN_PME | \
> +			PCIE_APP_IRN_RX_VDM_MSG | PCIE_APP_IRN_SYS_ERR_RC | \
> +			PCIE_APP_IRN_PM_TO_ACK | PCIE_APP_IRN_MSG_LTR | \
> +			PCIE_APP_IRN_BW_MGT | PCIE_APP_IRN_LINK_AUTO_BW_STAT | \
> +			(PCIE_APP_INTX_OFST + PCI_INTERRUPT_INTA) | \
> +			(PCIE_APP_INTX_OFST + PCI_INTERRUPT_INTB) | \
> +			(PCIE_APP_INTX_OFST + PCI_INTERRUPT_INTC) | \
> +			(PCIE_APP_INTX_OFST + PCI_INTERRUPT_INTD))
> +
> +#define BUS_IATU_OFFS		SZ_256M
> +#define RST_INTRVL_DFT_MS	100
> +enum {
> +	PCIE_LINK_SPEED_AUTO = 0,
> +	PCIE_LINK_SPEED_GEN1,
> +	PCIE_LINK_SPEED_GEN2,
> +	PCIE_LINK_SPEED_GEN3,
> +	PCIE_LINK_SPEED_GEN4,
> +};
> +
> +struct intel_pcie_soc {
> +	unsigned int pcie_ver;
> +	unsigned int pcie_atu_offset;
> +	u32 num_viewport;
> +};
> +
> +struct intel_pcie_port {
> +	struct dw_pcie		pci;
> +	unsigned int		id; /* Physical RC Index */
> +	void __iomem		*app_base;
> +	struct gpio_desc	*reset_gpio;
> +	u32			rst_interval;
> +	u32			max_speed;
> +	u32			link_gen;
> +	u32			max_width;
> +	u32			lanes;
> +	struct clk		*core_clk;
> +	struct reset_control	*core_rst;
> +	struct phy		*phy;
> +};
> +
> +static void pcie_update_bits(void __iomem *base, u32 mask, u32 val, u32 ofs)
> +{
> +	u32 orig, tmp;
> +
> +	orig = readl(base + ofs);
> +
> +	tmp = (orig & ~mask) | (val & mask);
> +
> +	if (tmp != orig)
> +		writel(tmp, base + ofs);
> +}
> +
> +static inline u32 pcie_app_rd(struct intel_pcie_port *lpp, u32 ofs)
> +{
> +	return readl(lpp->app_base + ofs);
> +}
> +
> +static inline void pcie_app_wr(struct intel_pcie_port *lpp, u32 val, u32 ofs)
> +{
> +	writel(val, lpp->app_base + ofs);
> +}
> +
> +static void pcie_app_wr_mask(struct intel_pcie_port *lpp,
> +			     u32 mask, u32 val, u32 ofs)
> +{
> +	pcie_update_bits(lpp->app_base, mask, val, ofs);
> +}
> +
> +static inline u32 pcie_rc_cfg_rd(struct intel_pcie_port *lpp, u32 ofs)
> +{
> +	return dw_pcie_readl_dbi(&lpp->pci, ofs);
> +}
> +
> +static inline void pcie_rc_cfg_wr(struct intel_pcie_port *lpp, u32 val, u32 ofs)
> +{
> +	dw_pcie_writel_dbi(&lpp->pci, ofs, val);
> +}
> +
> +static void pcie_rc_cfg_wr_mask(struct intel_pcie_port *lpp,
> +				u32 mask, u32 val, u32 ofs)
> +{
> +	pcie_update_bits(lpp->pci.dbi_base, mask, val, ofs);
> +}
> +
> +static void intel_pcie_mem_iatu(struct intel_pcie_port *lpp)
> +{
> +	struct pcie_port *pp = &lpp->pci.pp;
> +	phys_addr_t cpu_addr = pp->mem_base;
> +
> +	dw_pcie_prog_outbound_atu(&lpp->pci, PCIE_ATU_REGION_INDEX0,
> +				  PCIE_ATU_TYPE_MEM, cpu_addr,
> +				  pp->mem_base, pp->mem_size);
> +}
> +
> +static void intel_pcie_ltssm_enable(struct intel_pcie_port *lpp)
> +{
> +	pcie_app_wr_mask(lpp, PCIE_APP_CCR_LTSSM_ENABLE,
> +			 PCIE_APP_CCR_LTSSM_ENABLE, PCIE_APP_CCR);
> +}
> +
> +static void intel_pcie_ltssm_disable(struct intel_pcie_port *lpp)
> +{
> +	pcie_app_wr_mask(lpp, PCIE_APP_CCR_LTSSM_ENABLE, 0, PCIE_APP_CCR);
> +}
> +
> +static const char *pcie_link_gen_to_str(int gen)
> +{
> +	switch (gen) {
> +	case PCIE_LINK_SPEED_GEN1:
> +		return "2.5";
> +	case PCIE_LINK_SPEED_GEN2:
> +		return "5.0";
> +	case PCIE_LINK_SPEED_GEN3:
> +		return "8.0";
> +	case PCIE_LINK_SPEED_GEN4:
> +		return "16.0";
> +	default:
> +		return "???";
> +	}
> +}
> +
> +static void intel_pcie_link_setup(struct intel_pcie_port *lpp)
> +{
> +	u32 val;
> +
> +	val = pcie_rc_cfg_rd(lpp, PCIE_LCAP);
> +	lpp->max_speed = FIELD_GET(PCIE_LCAP_MAX_LINK_SPEED, val);
> +	lpp->max_width = FIELD_GET(PCIE_LCAP_MAX_LENGTH_WIDTH, val);
> +
> +	val = pcie_rc_cfg_rd(lpp, PCIE_LCTLSTS);
> +
> +	val &= ~(PCIE_LCTLSTS_LINK_DISABLE | PCIE_LCTLSTS_ASPM_ENABLE);
> +	val |= (PCIE_LCTLSTS_SLOT_CLK_CFG | PCIE_LCTLSTS_COM_CLK_CFG |
> +		PCIE_LCTLSTS_RCB128);
> +	pcie_rc_cfg_wr(lpp, val, PCIE_LCTLSTS);
> +}
> +
> +static void intel_pcie_max_speed_setup(struct intel_pcie_port *lpp)
> +{
> +	u32 reg, val;
> +
> +	reg = pcie_rc_cfg_rd(lpp, PCIE_LCTLSTS2);
> +	switch (lpp->link_gen) {
> +	case PCIE_LINK_SPEED_GEN1:
> +		reg &= ~PCIE_LCTLSTS2_TGT_LINK_SPEED;
> +		reg |= PCIE_LCTLSTS2_HW_AUTO_DIS |
> +			PCIE_LCTLSTS2_TGT_LINK_SPEED_25GT;
> +		break;
> +	case PCIE_LINK_SPEED_GEN2:
> +		reg &= ~PCIE_LCTLSTS2_TGT_LINK_SPEED;
> +		reg |= PCIE_LCTLSTS2_HW_AUTO_DIS |
> +			PCIE_LCTLSTS2_TGT_LINK_SPEED_5GT;
> +		break;
> +	case PCIE_LINK_SPEED_GEN3:
> +		reg &= ~PCIE_LCTLSTS2_TGT_LINK_SPEED;
> +		reg |= PCIE_LCTLSTS2_HW_AUTO_DIS |
> +			PCIE_LCTLSTS2_TGT_LINK_SPEED_8GT;
> +		break;
> +	case PCIE_LINK_SPEED_GEN4:
> +		reg &= ~PCIE_LCTLSTS2_TGT_LINK_SPEED;
> +		reg |= PCIE_LCTLSTS2_HW_AUTO_DIS |
> +			PCIE_LCTLSTS2_TGT_LINK_SPEED_16GT;
> +		break;
> +	default:
> +		/* Use hardware capability */
> +		val = pcie_rc_cfg_rd(lpp, PCIE_LCAP);
> +		val = FIELD_GET(PCIE_LCAP_MAX_LINK_SPEED, val);
> +		reg &= ~PCIE_LCTLSTS2_HW_AUTO_DIS;
> +		reg |= val;
> +		break;
> +	}
> +	pcie_rc_cfg_wr(lpp, reg, PCIE_LCTLSTS2);
> +}
> +
> +static void intel_pcie_speed_change_enable(struct intel_pcie_port *lpp)
> +{
> +	u32 mask, val;
> +
> +	mask = PORT_LOGIC_SPEED_CHANGE | PCIE_PORT_LOGIC_FTS;
> +	val = PORT_LOGIC_SPEED_CHANGE | PCIE_PORT_LOGIC_DFT_FTS_NUM;
> +
> +	pcie_rc_cfg_wr_mask(lpp, mask, val, PCIE_LINK_WIDTH_SPEED_CONTROL);
> +}
> +
> +static void intel_pcie_speed_change_disable(struct intel_pcie_port *lpp)
> +{
> +	pcie_rc_cfg_wr_mask(lpp, PORT_LOGIC_SPEED_CHANGE, 0,
> +			    PCIE_LINK_WIDTH_SPEED_CONTROL);
> +}
> +
> +static void intel_pcie_max_link_width_setup(struct intel_pcie_port *lpp)
> +{
> +	u32 mask, val;
> +
> +	/* HW auto bandwidth negotiation must be enabled */
> +	pcie_rc_cfg_wr_mask(lpp, PCIE_LCTLSTS_HW_AW_DIS, 0, PCIE_LCTLSTS);
> +
> +	mask = PCIE_DIRECT_LINK_WIDTH_CHANGE | PCIE_TARGET_LINK_WIDTH;
> +	val = PCIE_DIRECT_LINK_WIDTH_CHANGE | lpp->lanes;
> +	pcie_rc_cfg_wr_mask(lpp, mask, val, PCIE_MULTI_LANE_CTRL);

Is this identical functionality to the writing of PCIE_PORT_LINK_CONTROL
in dw_pcie_setup?

I ask because if the user sets num-lanes in the DT, will it have the
desired effect?

> +}
> +
> +static void intel_pcie_port_logic_setup(struct intel_pcie_port *lpp)
> +{
> +	u32 val, mask, fts;
> +
> +	switch (lpp->max_speed) {
> +	case PCIE_LINK_SPEED_GEN1:
> +	case PCIE_LINK_SPEED_GEN2:
> +		fts = PCIE_AFR_GEN12_FTS_NUM_DFT;
> +		break;
> +	case PCIE_LINK_SPEED_GEN3:
> +		fts = PCIE_AFR_GEN3_FTS_NUM_DFT;
> +		break;
> +	case PCIE_LINK_SPEED_GEN4:
> +		fts = PCIE_AFR_GEN4_FTS_NUM_DFT;
> +		break;
> +	default:
> +		fts = PCIE_AFR_GEN12_FTS_NUM_DFT;
> +		break;
> +	}
> +	mask = PCIE_AFR_FTS_NUM | PCIE_AFR_COM_FTS_NUM;
> +	val = FIELD_PREP(PCIE_AFR_FTS_NUM, fts) |
> +	       FIELD_PREP(PCIE_AFR_COM_FTS_NUM, fts);
> +	pcie_rc_cfg_wr_mask(lpp, mask, val, PCIE_AFR);
> +
> +	/* Port Link Control Register */
> +	pcie_rc_cfg_wr_mask(lpp, PCIE_PLCR_DLL_LINK_EN,
> +			    PCIE_PLCR_DLL_LINK_EN, PCIE_PORT_LINK_CONTROL);
> +}
> +
> +static void intel_pcie_upconfig_setup(struct intel_pcie_port *lpp)
> +{
> +	pcie_rc_cfg_wr_mask(lpp, PCIE_UPCONFIG_SUPPORT,
> +			    PCIE_UPCONFIG_SUPPORT, PCIE_MULTI_LANE_CTRL);
> +}
> +
> +static void intel_pcie_rc_setup(struct intel_pcie_port *lpp)
> +{
> +	intel_pcie_ltssm_disable(lpp);
> +	intel_pcie_link_setup(lpp);
> +	dw_pcie_setup_rc(&lpp->pci.pp);
> +	intel_pcie_upconfig_setup(lpp);
> +
> +	intel_pcie_max_speed_setup(lpp);
> +	intel_pcie_speed_change_enable(lpp);
> +	intel_pcie_port_logic_setup(lpp);
> +	intel_pcie_mem_iatu(lpp);

Doesn't dw_pcie_setup_rc do the same as intel_pcie_mem_iatu?

> +}
> +
> +static int intel_pcie_ep_rst_init(struct intel_pcie_port *lpp)
> +{
> +	struct device *dev = lpp->pci.dev;
> +	int ret;
> +
> +	lpp->reset_gpio = devm_gpiod_get(dev, "reset", GPIOD_OUT_LOW);
> +	if (IS_ERR(lpp->reset_gpio)) {
> +		ret = PTR_ERR(lpp->reset_gpio);
> +		if (ret != -EPROBE_DEFER)
> +			dev_err(dev, "failed to request PCIe GPIO: %d\n", ret);
> +		return ret;
> +	}
> +	/* Make initial reset last for 100us */
> +	usleep_range(100, 200);
> +
> +	return 0;
> +}
> +
> +static void intel_pcie_core_rst_assert(struct intel_pcie_port *lpp)
> +{
> +	reset_control_assert(lpp->core_rst);
> +}
> +
> +static void intel_pcie_core_rst_deassert(struct intel_pcie_port *lpp)
> +{
> +	/*
> +	 * One micro-second delay to make sure the reset pulse
> +	 * wide enough so that core reset is clean.
> +	 */
> +	udelay(1);
> +	reset_control_deassert(lpp->core_rst);
> +
> +	/*
> +	 * Some SoC core reset also reset PHY, more delay needed
> +	 * to make sure the reset process is done.
> +	 */
> +	usleep_range(1000, 2000);
> +}
> +
> +static void intel_pcie_device_rst_assert(struct intel_pcie_port *lpp)
> +{
> +	gpiod_set_value_cansleep(lpp->reset_gpio, 1);
> +}
> +
> +static void intel_pcie_device_rst_deassert(struct intel_pcie_port *lpp)
> +{
> +	msleep(lpp->rst_interval);
> +	gpiod_set_value_cansleep(lpp->reset_gpio, 0);
> +}
> +
> +static int intel_pcie_app_logic_setup(struct intel_pcie_port *lpp)
> +{
> +	intel_pcie_device_rst_deassert(lpp);
> +	intel_pcie_ltssm_enable(lpp);
> +
> +	return dw_pcie_wait_for_link(&lpp->pci);
> +}
> +
> +static irqreturn_t intel_pcie_core_isr(int irq, void *arg)
> +{
> +	struct intel_pcie_port *lpp = arg;
> +	u32 val, reg;
> +
> +	reg = pcie_app_rd(lpp, PCIE_APP_IRNCR);
> +	val = reg & PCIE_APP_IRN_INT;
> +
> +	pcie_app_wr(lpp, val, PCIE_APP_IRNCR);
> +
> +	trace_printk("PCIe misc interrupt status 0x%x\n", reg);
> +	return IRQ_HANDLED;
> +}

Why do we bother handling this interrupt?

> +
> +static int intel_pcie_setup_irq(struct intel_pcie_port *lpp)
> +{
> +	struct device *dev = lpp->pci.dev;
> +	struct platform_device *pdev;
> +	char *irq_name;
> +	int irq, ret;
> +
> +	pdev = to_platform_device(dev);
> +	irq = platform_get_irq(pdev, 0);
> +	if (irq < 0) {
> +		dev_err(dev, "missing sys integrated irq resource\n");
> +		return irq;
> +	}
> +
> +	irq_name = devm_kasprintf(dev, GFP_KERNEL, "pcie_misc%d", lpp->id);
> +	if (!irq_name) {
> +		dev_err(dev, "failed to alloc irq name\n");
> +		return -ENOMEM;
> +	}
> +
> +	ret = devm_request_irq(dev, irq, intel_pcie_core_isr,
> +			       IRQF_SHARED, irq_name, lpp);
> +	if (ret) {
> +		dev_err(dev, "request irq %d failed\n", irq);
> +		return ret;
> +	}
> +	/* Enable integrated interrupts */
> +	pcie_app_wr_mask(lpp, PCIE_APP_IRN_INT,
> +			 PCIE_APP_IRN_INT, PCIE_APP_IRNEN);
> +
> +	return ret;
> +}
> +
> +static void intel_pcie_core_irq_disable(struct intel_pcie_port *lpp)
> +{
> +	pcie_app_wr(lpp, 0, PCIE_APP_IRNEN);
> +	pcie_app_wr(lpp, PCIE_APP_IRN_INT,  PCIE_APP_IRNCR);
> +}
> +
> +static int intel_pcie_get_resources(struct platform_device *pdev)
> +{
> +	struct intel_pcie_port *lpp;
> +	struct resource *res;
> +	struct dw_pcie *pci;
> +	struct device *dev;
> +	int ret;
> +
> +	lpp = platform_get_drvdata(pdev);
> +	pci = &lpp->pci;
> +	dev = pci->dev;
> +
> +	ret = device_property_read_u32(dev, "linux,pci-domain", &lpp->id);
> +	if (ret) {
> +		dev_err(dev, "failed to get domain id, errno %d\n", ret);
> +		return ret;
> +	}
> +
> +	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi");
> +	if (!res)
> +		return -ENXIO;
> +
> +	pci->dbi_base = devm_ioremap_resource(dev, res);
> +	if (IS_ERR(pci->dbi_base))
> +		return PTR_ERR(pci->dbi_base);
> +
> +	lpp->core_clk = devm_clk_get(dev, NULL);
> +	if (IS_ERR(lpp->core_clk)) {
> +		ret = PTR_ERR(lpp->core_clk);
> +		if (ret != -EPROBE_DEFER)
> +			dev_err(dev, "failed to get clks: %d\n", ret);
> +		return ret;
> +	}
> +
> +	lpp->core_rst = devm_reset_control_get(dev, NULL);
> +	if (IS_ERR(lpp->core_rst)) {
> +		ret = PTR_ERR(lpp->core_rst);
> +		if (ret != -EPROBE_DEFER)
> +			dev_err(dev, "failed to get resets: %d\n", ret);
> +		return ret;
> +	}
> +
> +	ret = device_property_match_string(dev, "device_type", "pci");
> +	if (ret) {
> +		dev_err(dev, "failed to find pci device type: %d\n", ret);
> +		return ret;
> +	}
> +
> +	if (device_property_read_u32(dev, "reset-assert-ms",
> +				     &lpp->rst_interval))
> +		lpp->rst_interval = RST_INTRVL_DFT_MS;
> +
> +	if (device_property_read_u32(dev, "max-link-speed", &lpp->link_gen))
> +		lpp->link_gen = 0; /* Fallback to auto */

Is it possible to use of_pci_get_max_link_speed here instead?

> +
> +	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "app");
> +	if (!res)
> +		return -ENXIO;
> +
> +	lpp->app_base = devm_ioremap_resource(dev, res);
> +	if (IS_ERR(lpp->app_base))
> +		return PTR_ERR(lpp->app_base);
> +
> +	ret = intel_pcie_ep_rst_init(lpp);
> +	if (ret)
> +		return ret;

Given that this is called from a function '..._get_resources' I don't think
we should be resetting anything here.

> +
> +	lpp->phy = devm_phy_get(dev, "pciephy");
> +	if (IS_ERR(lpp->phy)) {
> +		ret = PTR_ERR(lpp->phy);
> +		if (ret != -EPROBE_DEFER)
> +			dev_err(dev, "couldn't get pcie-phy: %d\n", ret);
> +		return ret;
> +	}
> +	return 0;
> +}
> +
> +static void intel_pcie_deinit_phy(struct intel_pcie_port *lpp)
> +{
> +	phy_exit(lpp->phy);
> +}
> +
> +static int intel_pcie_wait_l2(struct intel_pcie_port *lpp)
> +{
> +	u32 value;
> +	int ret;
> +
> +	if (lpp->max_speed < PCIE_LINK_SPEED_GEN3)
> +		return 0;
> +
> +	/* Send PME_TURN_OFF message */
> +	pcie_app_wr_mask(lpp, PCIE_APP_MSG_XMT_PM_TURNOFF,
> +			 PCIE_APP_MSG_XMT_PM_TURNOFF, PCIE_APP_MSG_CR);
> +
> +	/* Read PMC status and wait for falling into L2 link state */
> +	ret = readl_poll_timeout(lpp->app_base + PCIE_APP_PMC, value,
> +				 (value & PCIE_APP_PMC_IN_L2), 20,
> +				 jiffies_to_usecs(5 * HZ));
> +	if (ret)
> +		dev_err(lpp->pci.dev, "PCIe link enter L2 timeout!\n");
> +
> +	return ret;
> +}
> +
> +static void intel_pcie_turn_off(struct intel_pcie_port *lpp)
> +{
> +	if (dw_pcie_link_up(&lpp->pci))
> +		intel_pcie_wait_l2(lpp);
> +
> +	/* Put EP in reset state */

EP?

> +	intel_pcie_device_rst_assert(lpp);
> +	pcie_rc_cfg_wr_mask(lpp, PCI_COMMAND_MEMORY, 0, PCI_COMMAND);
> +}
> +
> +static int intel_pcie_host_setup(struct intel_pcie_port *lpp)
> +{
> +	int ret;
> +
> +	intel_pcie_core_rst_assert(lpp);
> +	intel_pcie_device_rst_assert(lpp);
> +
> +	ret = phy_init(lpp->phy);
> +	if (ret)
> +		return ret;
> +
> +	intel_pcie_core_rst_deassert(lpp);
> +
> +	ret = clk_prepare_enable(lpp->core_clk);
> +	if (ret) {
> +		dev_err(lpp->pci.dev, "Core clock enable failed: %d\n", ret);
> +		goto clk_err;
> +	}
> +
> +	intel_pcie_rc_setup(lpp);
> +	ret = intel_pcie_app_logic_setup(lpp);
> +	if (ret)
> +		goto app_init_err;
> +
> +	ret = intel_pcie_setup_irq(lpp);
> +	if (!ret)
> +		return ret;
> +
> +	intel_pcie_turn_off(lpp);
> +app_init_err:
> +	clk_disable_unprepare(lpp->core_clk);
> +clk_err:
> +	intel_pcie_core_rst_assert(lpp);
> +	intel_pcie_deinit_phy(lpp);
> +	return ret;
> +}
> +
> +static ssize_t
> +pcie_link_status_show(struct device *dev, struct device_attribute *attr,
> +		      char *buf)
> +{
> +	u32 reg, width, gen;
> +	struct intel_pcie_port *lpp;
> +
> +	lpp = dev_get_drvdata(dev);
> +
> +	reg = pcie_rc_cfg_rd(lpp, PCIE_LCTLSTS);
> +	width = FIELD_GET(PCIE_LCTLSTS_NEGOTIATED_LINK_WIDTH, reg);
> +	gen = FIELD_GET(PCIE_LCTLSTS_LINK_SPEED, reg);
> +	if (gen > lpp->max_speed)
> +		return -EINVAL;
> +
> +	return sprintf(buf, "Port %2u Width x%u Speed %s GT/s\n", lpp->id,
> +		       width, pcie_link_gen_to_str(gen));
> +}
> +static DEVICE_ATTR_RO(pcie_link_status);
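As an aside, the FIELD_GET() decode in pcie_link_status_show() above is plain
mask-and-shift. A quick sketch of the same decode outside the kernel - the
mask values here are illustrative, following the standard Link Control/Status
layout (current link speed in bits 19:16, negotiated width in bits 25:20),
not this driver's actual defines:

```python
def field_get(mask: int, reg: int) -> int:
    """Equivalent of the kernel's FIELD_GET(): extract the field
    selected by a contiguous bitmask and shift it down to bit 0."""
    shift = (mask & -mask).bit_length() - 1  # position of lowest set bit
    return (reg & mask) >> shift

# Illustrative masks for the Link Control/Status dword; the Link Status
# half occupies the upper 16 bits.
LINK_SPEED = 0x000F0000  # current link speed, bits 19:16
LINK_WIDTH = 0x03F00000  # negotiated link width, bits 25:20

reg = 0x00110040  # example value: speed field = 1, width field = 1
print(field_get(LINK_SPEED, reg))  # 1 -> Gen1 (2.5 GT/s)
print(field_get(LINK_WIDTH, reg))  # 1 -> x1 link
```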
> +
> +static ssize_t pcie_speed_store(struct device *dev,
> +				struct device_attribute *attr,
> +				const char *buf, size_t len)
> +{
> +	struct intel_pcie_port *lpp;
> +	unsigned long val;
> +	int ret;
> +
> +	lpp = dev_get_drvdata(dev);
> +
> +	ret = kstrtoul(buf, 10, &val);
> +	if (ret)
> +		return ret;
> +
> +	if (val > lpp->max_speed)
> +		return -EINVAL;
> +
> +	lpp->link_gen = val;
> +	intel_pcie_max_speed_setup(lpp);
> +	intel_pcie_speed_change_disable(lpp);
> +	intel_pcie_speed_change_enable(lpp);
> +
> +	return len;
> +}
> +static DEVICE_ATTR_WO(pcie_speed);
> +
> +/*
> + * Link width change on the fly is not always successful.
> + * It also depends on the partner.
> + */
> +static ssize_t pcie_width_store(struct device *dev,
> +				struct device_attribute *attr,
> +				const char *buf, size_t len)
> +{
> +	struct intel_pcie_port *lpp;
> +	unsigned long val;
> +
> +	lpp = dev_get_drvdata(dev);
> +
> +	if (kstrtoul(buf, 10, &val))
> +		return -EINVAL;
> +
> +	if (val > lpp->max_width)
> +		return -EINVAL;
> +
> +	lpp->lanes = val;
> +	intel_pcie_max_link_width_setup(lpp);
> +
> +	return len;
> +}
> +static DEVICE_ATTR_WO(pcie_width);

You mentioned that a use-case for changing width/speed on the fly was to
control power consumption (and that this also helps with debugging). As I
understand it, there is no current support for this in the kernel - yet it
is something that would provide value.

I haven't looked in much detail, however as I understand it the PCI spec
allows an upstream partner to change the link speed and retrain. (I'm not
sure about link width). Given that we already have
[current,max]_link_[speed,width] in sysfs for each PCI device, it would
seem natural to extend this to allow for writing a max width or speed.

So ideally this type of thing would be moved to the core or at least in
the dwc driver. This way the benefits can be applied to more devices on
larger PCI fabrics - Though perhaps others here will have a different view
and I'm keen to hear them.

I'm keen to limit the differences between the DWC controller drivers and
unify common code - thus it would be helpful to have a justification as to
why this is only relevant for this controller.

For user-space-only control, it is possible to achieve what you have here
with userspace utilities (something like setpci), assuming the
standard-looking registers you currently access are exposed in the normal
config space way - through PCIe capabilities.

My suggestion would be to drop these changes and later add something that
can benefit more devices. In any case if these changes stay within this
driver then it would be helpful to move them to a separate patch.

> +
> +static struct attribute *pcie_cfg_attrs[] = {
> +	&dev_attr_pcie_link_status.attr,
> +	&dev_attr_pcie_speed.attr,
> +	&dev_attr_pcie_width.attr,
> +	NULL,
> +};
> +ATTRIBUTE_GROUPS(pcie_cfg);
> +
> +static int intel_pcie_sysfs_init(struct intel_pcie_port *lpp)
> +{
> +	return devm_device_add_groups(lpp->pci.dev, pcie_cfg_groups);
> +}
> +
> +static void __intel_pcie_remove(struct intel_pcie_port *lpp)
> +{
> +	pcie_rc_cfg_wr_mask(lpp, PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER,
> +			    0, PCI_COMMAND);
> +	intel_pcie_core_irq_disable(lpp);
> +	intel_pcie_turn_off(lpp);
> +	clk_disable_unprepare(lpp->core_clk);
> +	intel_pcie_core_rst_assert(lpp);
> +	intel_pcie_deinit_phy(lpp);
> +}
> +
> +static int intel_pcie_remove(struct platform_device *pdev)
> +{
> +	struct intel_pcie_port *lpp = platform_get_drvdata(pdev);
> +	struct pcie_port *pp = &lpp->pci.pp;
> +
> +	dw_pcie_host_deinit(pp);
> +	__intel_pcie_remove(lpp);
> +
> +	return 0;
> +}
> +
> +static int __maybe_unused intel_pcie_suspend_noirq(struct device *dev)
> +{
> +	struct intel_pcie_port *lpp = dev_get_drvdata(dev);
> +	int ret;
> +
> +	intel_pcie_core_irq_disable(lpp);
> +	ret = intel_pcie_wait_l2(lpp);
> +	if (ret)
> +		return ret;
> +
> +	intel_pcie_deinit_phy(lpp);
> +	clk_disable_unprepare(lpp->core_clk);
> +	return ret;
> +}
> +
> +static int __maybe_unused intel_pcie_resume_noirq(struct device *dev)
> +{
> +	struct intel_pcie_port *lpp = dev_get_drvdata(dev);
> +
> +	return intel_pcie_host_setup(lpp);
> +}
> +
> +static int intel_pcie_rc_init(struct pcie_port *pp)
> +{
> +	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
> +	struct intel_pcie_port *lpp = dev_get_drvdata(pci->dev);
> +	int ret;
> +
> +	/* RC/host initialization */
> +	ret = intel_pcie_host_setup(lpp);
> +	if (ret)
> +		return ret;

Insert a blank line here.

> +	ret = intel_pcie_sysfs_init(lpp);
> +	if (ret)
> +		__intel_pcie_remove(lpp);
> +	return ret;
> +}
> +
> +int intel_pcie_msi_init(struct pcie_port *pp)
> +{
> +	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
> +
> +	dev_dbg(pci->dev, "PCIe MSI/MSIx is handled by MSI in x86 processor\n");

What about other processors? (Noting that this driver doesn't depend on
any specific ARCH in the Kconfig).

> +	return 0;
> +}
> +
> +u64 intel_pcie_cpu_addr(struct dw_pcie *pcie, u64 cpu_addr)
> +{
> +	return cpu_addr + BUS_IATU_OFFS;
> +}
> +
> +static const struct dw_pcie_ops intel_pcie_ops = {
> +	.cpu_addr_fixup = intel_pcie_cpu_addr,
> +};
> +
> +static const struct dw_pcie_host_ops intel_pcie_dw_ops = {
> +	.host_init =		intel_pcie_rc_init,
> +	.msi_host_init =	intel_pcie_msi_init,
> +};
> +
> +static const struct intel_pcie_soc pcie_data = {
> +	.pcie_ver =		0x520A,
> +	.pcie_atu_offset =	0xC0000,
> +	.num_viewport =		3,
> +};
> +
> +static int intel_pcie_probe(struct platform_device *pdev)
> +{
> +	const struct intel_pcie_soc *data;
> +	struct device *dev = &pdev->dev;
> +	struct intel_pcie_port *lpp;
> +	struct pcie_port *pp;
> +	struct dw_pcie *pci;
> +	int ret;
> +
> +	lpp = devm_kzalloc(dev, sizeof(*lpp), GFP_KERNEL);
> +	if (!lpp)
> +		return -ENOMEM;
> +
> +	platform_set_drvdata(pdev, lpp);
> +	pci = &lpp->pci;
> +	pci->dev = dev;
> +	pp = &pci->pp;
> +
> +	ret = intel_pcie_get_resources(pdev);
> +	if (ret)
> +		return ret;
> +
> +	data = device_get_match_data(dev);
> +	pci->ops = &intel_pcie_ops;
> +	pci->version = data->pcie_ver;
> +	pci->atu_base = pci->dbi_base + data->pcie_atu_offset;
> +	pp->ops = &intel_pcie_dw_ops;
> +
> +	ret = dw_pcie_host_init(pp);
> +	if (ret) {
> +		dev_err(dev, "cannot initialize host\n");
> +		return ret;
> +	}

Add a blank line after the closing brace.

> +	/* Intel PCIe doesn't configure IO region, so configure
> +	 * viewport to not to access IO region during register
> +	 * read write operations.
> +	 */
> +	pci->num_viewport = data->num_viewport;
> +	dev_info(dev,
> +		 "Intel AXI PCIe Root Complex Port %d Init Done\n", lpp->id);
> +	return ret;
> +}
> +
> +static const struct dev_pm_ops intel_pcie_pm_ops = {
> +	SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(intel_pcie_suspend_noirq,
> +				      intel_pcie_resume_noirq)
> +};
> +
> +static const struct of_device_id of_intel_pcie_match[] = {
> +	{ .compatible = "intel,lgm-pcie", .data = &pcie_data },
> +	{}
> +};
> +
> +static struct platform_driver intel_pcie_driver = {
> +	.probe = intel_pcie_probe,
> +	.remove = intel_pcie_remove,
> +	.driver = {
> +		.name = "intel-lgm-pcie",

Is there a reason why we use intel-lgm-pcie here and pcie-intel-axi
elsewhere? What does lgm mean?

Thanks,

Andrew Murray

> +		.of_match_table = of_intel_pcie_match,
> +		.pm = &intel_pcie_pm_ops,
> +	},
> +};
> +builtin_platform_driver(intel_pcie_driver);
> -- 
> 2.11.0
> 

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v3 2/2] dwc: PCI: intel: Intel PCIe RC controller driver
  2019-09-05 10:45   ` Andrew Murray
@ 2019-09-05 11:26     ` Christoph Hellwig
  2019-09-05 11:40     ` Andy Shevchenko
  2019-09-06 10:58     ` Dilip Kota
  2 siblings, 0 replies; 37+ messages in thread
From: Christoph Hellwig @ 2019-09-05 11:26 UTC (permalink / raw)
  To: Andrew Murray
  Cc: Dilip Kota, jingoohan1, gustavo.pimentel, lorenzo.pieralisi,
	robh, martin.blumenstingl, linux-pci, hch, devicetree,
	linux-kernel, andriy.shevchenko, cheol.yong.kim, chuanhua.lei,
	qi-ming.wu

On Thu, Sep 05, 2019 at 11:45:18AM +0100, Andrew Murray wrote:
> > +        depends on OF
> > +        select PCIE_DW_HOST
> > +        help
> > +          Say 'Y' here to enable support for Intel AHB/AXI PCIe Host
> > +	  controller driver.
> > +	  The Intel PCIe controller is based on the Synopsys Designware
> > +	  pcie core and therefore uses the Designware core functions to
> > +	  implement the driver.
> 
> I can see this description is similar to others in the same Kconfig,
> however I'm not sure what value a user gains by knowing implementation
> details - it's helpful to know that PCIE_INTEL_AXI is based on the
> Designware core, but is it helpful to know that the Designware core
> functions are used?

Not really.  What would be extremely useful for a user is what Intel
SOC contains this IP block, or the fact that most of those were
actually produced by Lantiq, a company bought by Intel.


* Re: [PATCH v3 2/2] dwc: PCI: intel: Intel PCIe RC controller driver
  2019-09-05 10:45   ` Andrew Murray
  2019-09-05 11:26     ` Christoph Hellwig
@ 2019-09-05 11:40     ` Andy Shevchenko
  2019-09-12  7:01       ` Dilip Kota
  2019-09-06 10:58     ` Dilip Kota
  2 siblings, 1 reply; 37+ messages in thread
From: Andy Shevchenko @ 2019-09-05 11:40 UTC (permalink / raw)
  To: Andrew Murray
  Cc: Dilip Kota, jingoohan1, gustavo.pimentel, lorenzo.pieralisi,
	robh, martin.blumenstingl, linux-pci, hch, devicetree,
	linux-kernel, cheol.yong.kim, chuanhua.lei, qi-ming.wu

On Thu, Sep 05, 2019 at 11:45:18AM +0100, Andrew Murray wrote:
> On Wed, Sep 04, 2019 at 06:10:31PM +0800, Dilip Kota wrote:
> > Add support to PCIe RC controller on Intel Universal
> > Gateway SoC. PCIe controller is based on Synopsys
> > Designware pci core.

> > +config PCIE_INTEL_AXI

I think the name here is too generic. Classical x86 does not seem to use this.

> > +        bool "Intel AHB/AXI PCIe host controller support"
> > +        depends on PCI_MSI
> > +        depends on PCI
> > +        depends on OF
> > +        select PCIE_DW_HOST
> > +        help
> > +          Say 'Y' here to enable support for Intel AHB/AXI PCIe Host
> > +	  controller driver.
> > +	  The Intel PCIe controller is based on the Synopsys Designware
> > +	  pcie core and therefore uses the Designware core functions to
> > +	  implement the driver.

-- 
With Best Regards,
Andy Shevchenko




* Re: [PATCH v3 1/2] dt-bindings: PCI: intel: Add YAML schemas for the PCIe RC controller
  2019-09-04 10:10 ` [PATCH v3 1/2] dt-bindings: PCI: intel: Add YAML schemas for the PCIe RC controller Dilip Kota
  2019-09-05  2:23   ` Chuan Hua, Lei
@ 2019-09-05 20:31   ` Martin Blumenstingl
  2019-09-06  3:22     ` Chuan Hua, Lei
  2019-09-06  9:19     ` Andy Shevchenko
  2019-09-17 18:40   ` Rob Herring
  2 siblings, 2 replies; 37+ messages in thread
From: Martin Blumenstingl @ 2019-09-05 20:31 UTC (permalink / raw)
  To: Dilip Kota
  Cc: jingoohan1, gustavo.pimentel, lorenzo.pieralisi, robh, linux-pci,
	hch, devicetree, linux-kernel, andriy.shevchenko, cheol.yong.kim,
	chuanhua.lei, qi-ming.wu

Hi Dilip,

On Wed, Sep 4, 2019 at 12:11 PM Dilip Kota <eswara.kota@linux.intel.com> wrote:
[...]
> +properties:
> +  compatible:
> +    const: intel,lgm-pcie
should we add the "snps,dw-pcie" here (and in the example below) as well?
(this is what for example
Documentation/devicetree/bindings/pci/amlogic,meson-pcie.txt does)

[...]
> +  phy-names:
> +    const: pciephy
the most popular choice in Documentation/devicetree/bindings/pci/ is "pcie-phy"
if Rob is happy with "pciephy" (which is already part of two other
bindings) then I'm happy with "pciephy" as well

> +  num-lanes:
> +    description: Number of lanes to use for this port.
are there SoCs with more than 2 lanes?
you can list the allowed values in an enum so "num-lanes = <16>"
causes an error when someone accidentally has this in their .dts (and
runs the dt-bindings validation)

[...]
> +  reset-assert-ms:
maybe add:
  $ref: /schemas/types.yaml#/definitions/uint32

> +    description: |
> +      Device reset interval in ms.
> +      Some devices need an interval upto 500ms. By default it is 100ms.
> +
> +required:
> +  - compatible
> +  - device_type
> +  - reg
> +  - reg-names
> +  - ranges
> +  - resets
> +  - clocks
> +  - phys
> +  - phy-names
> +  - reset-gpios
> +  - num-lanes
> +  - linux,pci-domain
> +  - interrupts
> +  - interrupt-map
> +  - interrupt-map-mask
> +
> +additionalProperties: false
> +
> +examples:
> +  - |
> +    pcie10:pcie@d0e00000 {
> +      compatible = "intel,lgm-pcie";
> +      device_type = "pci";
> +      #address-cells = <3>;
> +      #size-cells = <2>;
> +      reg = <
> +            0xd0e00000 0x1000
> +            0xd2000000 0x800000
> +            0xd0a41000 0x1000
> +            >;
> +      reg-names = "dbi", "config", "app";
> +      linux,pci-domain = <0>;
> +      max-link-speed = <4>;
> +      bus-range = <0x00 0x08>;
> +      interrupt-parent = <&ioapic1>;
> +      interrupts = <67 1>;
> +      #interrupt-cells = <1>;
> +      interrupt-map-mask = <0 0 0 0x7>;
> +      interrupt-map = <0 0 0 1 &ioapic1 27 1>,
> +                      <0 0 0 2 &ioapic1 28 1>,
> +                      <0 0 0 3 &ioapic1 29 1>,
> +                      <0 0 0 4 &ioapic1 30 1>;
is the "1" in the interrupts and interrupt-map properties IRQ_TYPE_EDGE_RISING?
you can use these macros in this example as well, see
Documentation/devicetree/bindings/iio/accel/adi,adxl372.yaml for
example


Martin


* Re: [PATCH v3 1/2] dt-bindings: PCI: intel: Add YAML schemas for the PCIe RC controller
  2019-09-05 20:31   ` Martin Blumenstingl
@ 2019-09-06  3:22     ` Chuan Hua, Lei
  2019-09-06 17:17       ` Martin Blumenstingl
  2019-09-06  9:19     ` Andy Shevchenko
  1 sibling, 1 reply; 37+ messages in thread
From: Chuan Hua, Lei @ 2019-09-06  3:22 UTC (permalink / raw)
  To: Martin Blumenstingl, Dilip Kota
  Cc: jingoohan1, gustavo.pimentel, lorenzo.pieralisi, robh, linux-pci,
	hch, devicetree, linux-kernel, andriy.shevchenko, cheol.yong.kim,
	qi-ming.wu


On 9/6/2019 4:31 AM, Martin Blumenstingl wrote:
> Hi Dilip,
>
> On Wed, Sep 4, 2019 at 12:11 PM Dilip Kota <eswara.kota@linux.intel.com> wrote:
> [...]
>> +properties:
>> +  compatible:
>> +    const: intel,lgm-pcie
> should we add the "snps,dw-pcie" here (and in the example below) as well?
> (this is what for example
> Documentation/devicetree/bindings/pci/amlogic,meson-pcie.txt does)
Thanks for pointing this out. We should add it.
>
> [...]
>> +  phy-names:
>> +    const: pciephy
> the most popular choice in Documentation/devicetree/bindings/pci/ is "pcie-phy"
> if Rob is happy with "pciephy" (which is already part of two other
> bindings) then I'm happy with "pciephy" as well
Agree.
>
>> +  num-lanes:
>> +    description: Number of lanes to use for this port.
> are there SoCs with more than 2 lanes?
> you can list the allowed values in an enum so "num-lanes = <16>"
> causes an error when someone accidentally has this in their .dts (and
> runs the dt-bindings validation)
Our SoC (LGM) supports a single lane or dual lanes; this also depends
on the board, so I wonder if we should put this into a board-specific dts.
To make multiple lanes work properly, it also depends on the PHY mode.
In my internal version, I put it into the board dts.
>
> [...]
>> +  reset-assert-ms:
> maybe add:
>    $ref: /schemas/types.yaml#/definitions/uint32
Agree
>> +    description: |
>> +      Device reset interval in ms.
>> +      Some devices need an interval upto 500ms. By default it is 100ms.
>> +
>> +required:
>> +  - compatible
>> +  - device_type
>> +  - reg
>> +  - reg-names
>> +  - ranges
>> +  - resets
>> +  - clocks
>> +  - phys
>> +  - phy-names
>> +  - reset-gpios
>> +  - num-lanes
>> +  - linux,pci-domain
>> +  - interrupts
>> +  - interrupt-map
>> +  - interrupt-map-mask
>> +
>> +additionalProperties: false
>> +
>> +examples:
>> +  - |
>> +    pcie10:pcie@d0e00000 {
>> +      compatible = "intel,lgm-pcie";
>> +      device_type = "pci";
>> +      #address-cells = <3>;
>> +      #size-cells = <2>;
>> +      reg = <
>> +            0xd0e00000 0x1000
>> +            0xd2000000 0x800000
>> +            0xd0a41000 0x1000
>> +            >;
>> +      reg-names = "dbi", "config", "app";
>> +      linux,pci-domain = <0>;
>> +      max-link-speed = <4>;
>> +      bus-range = <0x00 0x08>;
>> +      interrupt-parent = <&ioapic1>;
>> +      interrupts = <67 1>;
>> +      #interrupt-cells = <1>;
>> +      interrupt-map-mask = <0 0 0 0x7>;
>> +      interrupt-map = <0 0 0 1 &ioapic1 27 1>,
>> +                      <0 0 0 2 &ioapic1 28 1>,
>> +                      <0 0 0 3 &ioapic1 29 1>,
>> +                      <0 0 0 4 &ioapic1 30 1>;
> is the "1" in the interrupts and interrupt-map properties IRQ_TYPE_EDGE_RISING?
> you can use these macros in this example as well, see
> Documentation/devicetree/bindings/iio/accel/adi,adxl372.yaml for
> example

No, the 1 here is an index into of_ioapic_type[] from arch/x86/kernel/devicetree.c:

static struct of_ioapic_type of_ioapic_type[] =
{
     {
         .out_type    = IRQ_TYPE_EDGE_RISING,
         .trigger    = IOAPIC_EDGE,
         .polarity    = 1,
     },
     {
         .out_type    = IRQ_TYPE_LEVEL_LOW,
         .trigger    = IOAPIC_LEVEL,
         .polarity    = 0,
     },
     {
         .out_type    = IRQ_TYPE_LEVEL_HIGH,
         .trigger    = IOAPIC_LEVEL,
         .polarity    = 1,
     },
     {
         .out_type    = IRQ_TYPE_EDGE_FALLING,
         .trigger    = IOAPIC_EDGE,
         .polarity    = 0,
     },
};

static int dt_irqdomain_alloc(struct irq_domain *domain, unsigned int virq,
                   unsigned int nr_irqs, void *arg)
{
     struct irq_fwspec *fwspec = (struct irq_fwspec *)arg;
     struct of_ioapic_type *it;
     struct irq_alloc_info tmp;
     int type_index;

     if (WARN_ON(fwspec->param_count < 2))
         return -EINVAL;

     type_index = fwspec->param[1]; // index.
     if (type_index >= ARRAY_SIZE(of_ioapic_type))
         return -EINVAL;

I would not say this definition is user-friendly, but it is how x86
handles it at the moment.
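So, read against the quoted table, the second interrupt cell in the example
binding is an index rather than an IRQ_TYPE_* flag (an interpretation of the
code above, worth double-checking against the actual ioapic binding):

```
/* x86 ioapic specifier: <GSI index>, where the index selects an
 * of_ioapic_type entry:
 *   0 = edge rising, 1 = level low, 2 = level high, 3 = edge falling
 */
interrupts = <67 1>;    /* GSI 67, index 1 => IRQ_TYPE_LEVEL_LOW */
```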

>
> Martin


* Re: [PATCH v3 1/2] dt-bindings: PCI: intel: Add YAML schemas for the PCIe RC controller
  2019-09-05 20:31   ` Martin Blumenstingl
  2019-09-06  3:22     ` Chuan Hua, Lei
@ 2019-09-06  9:19     ` Andy Shevchenko
  2019-09-09  6:52       ` Dilip Kota
  2019-09-17 18:33       ` Rob Herring
  1 sibling, 2 replies; 37+ messages in thread
From: Andy Shevchenko @ 2019-09-06  9:19 UTC (permalink / raw)
  To: Martin Blumenstingl
  Cc: Dilip Kota, jingoohan1, gustavo.pimentel, lorenzo.pieralisi,
	robh, linux-pci, hch, devicetree, linux-kernel, cheol.yong.kim,
	chuanhua.lei, qi-ming.wu

On Thu, Sep 05, 2019 at 10:31:29PM +0200, Martin Blumenstingl wrote:
> On Wed, Sep 4, 2019 at 12:11 PM Dilip Kota <eswara.kota@linux.intel.com> wrote:

> > +  phy-names:
> > +    const: pciephy
> the most popular choice in Documentation/devicetree/bindings/pci/ is "pcie-phy"
> if Rob is happy with "pciephy" (which is already part of two other
> bindings) then I'm happy with "pciephy" as well

I'm not Rob, though I consider it more practical to have a hyphenated variant.

-- 
With Best Regards,
Andy Shevchenko




* Re: [PATCH v3 1/2] dt-bindings: PCI: intel: Add YAML schemas for the PCIe RC controller
  2019-09-05  2:23   ` Chuan Hua, Lei
@ 2019-09-06 10:39     ` Dilip Kota
  0 siblings, 0 replies; 37+ messages in thread
From: Dilip Kota @ 2019-09-06 10:39 UTC (permalink / raw)
  To: Chuan Hua, Lei, jingoohan1, gustavo.pimentel, lorenzo.pieralisi,
	robh, martin.blumenstingl, linux-pci, hch, devicetree
  Cc: linux-kernel, andriy.shevchenko, cheol.yong.kim, qi-ming.wu

Hi Chuan Hua,

On 9/5/2019 10:23 AM, Chuan Hua, Lei wrote:
> Hi Dilip,
>
> On 9/4/2019 6:10 PM, Dilip Kota wrote:
>> The Intel PCIe RC controller is Synopsys Designware
>> based PCIe core. Add YAML schemas for PCIe in RC mode
>> present in Intel Universal Gateway soc.
>>
>> Signed-off-by: Dilip Kota <eswara.kota@linux.intel.com>
>> ---
>> changes on v3:
>>     Add the appropriate License-Identifier
>>     Rename intel,rst-interval to 'reset-assert-us'
> rst-interval to reset-assert-ms ('reset-assert-us' should be a typo)

Sure, I will fix it; that's a typo.
Thanks for pointing it out.

Regards
Dilip




* Re: [PATCH v3 2/2] dwc: PCI: intel: Intel PCIe RC controller driver
  2019-09-04 13:05   ` Andy Shevchenko
@ 2019-09-06 10:39     ` Dilip Kota
  0 siblings, 0 replies; 37+ messages in thread
From: Dilip Kota @ 2019-09-06 10:39 UTC (permalink / raw)
  To: Andy Shevchenko
  Cc: jingoohan1, gustavo.pimentel, lorenzo.pieralisi, robh,
	martin.blumenstingl, linux-pci, hch, devicetree, linux-kernel,
	cheol.yong.kim, chuanhua.lei, qi-ming.wu

Hi Andy,

Thanks for the review comments, please find my response inline.

On 9/4/2019 9:05 PM, Andy Shevchenko wrote:
> On Wed, Sep 04, 2019 at 06:10:31PM +0800, Dilip Kota wrote:
>> Add support to PCIe RC controller on Intel Universal
>> Gateway SoC. PCIe controller is based on Synopsys
>> Designware pci core.
> Thanks for an update. My comments below.
>
>> +config PCIE_INTEL_AXI
>> +        bool "Intel AHB/AXI PCIe host controller support"
>> +        depends on PCI_MSI
>> +        depends on PCI
>> +        depends on OF
>> +        select PCIE_DW_HOST
>> +        help
>> +          Say 'Y' here to enable support for Intel AHB/AXI PCIe Host
>> +	  controller driver.
>> +	  The Intel PCIe controller is based on the Synopsys Designware
>> +	  pcie core and therefore uses the Designware core functions to
>> +	  implement the driver.
> This hunk is full of indentation issues. Please, see how it's done in other
> cases which are already part of upstream kernel.
Sure, I will fix it.
>
>> +#define PCIE_APP_INTX_OFST	12
>> +#define PCIE_APP_IRN_INT	(PCIE_APP_IRN_AER_REPORT | PCIE_APP_IRN_PME | \
> It would be slightly easier to read if the value started on the next line.
Agree, I will update it in the next patch revision.
>
>> +			PCIE_APP_IRN_RX_VDM_MSG | PCIE_APP_IRN_SYS_ERR_RC | \
>> +			PCIE_APP_IRN_PM_TO_ACK | PCIE_APP_IRN_MSG_LTR | \
>> +			PCIE_APP_IRN_BW_MGT | PCIE_APP_IRN_LINK_AUTO_BW_STAT | \
>> +			(PCIE_APP_INTX_OFST + PCI_INTERRUPT_INTA) | \
>> +			(PCIE_APP_INTX_OFST + PCI_INTERRUPT_INTB) | \
>> +			(PCIE_APP_INTX_OFST + PCI_INTERRUPT_INTC) | \
>> +			(PCIE_APP_INTX_OFST + PCI_INTERRUPT_INTD))
>> +static void intel_pcie_link_setup(struct intel_pcie_port *lpp)
>> +{
>> +	u32 val;
>> +
>> +	val = pcie_rc_cfg_rd(lpp, PCIE_LCAP);
>> +	lpp->max_speed = FIELD_GET(PCIE_LCAP_MAX_LINK_SPEED, val);
>> +	lpp->max_width = FIELD_GET(PCIE_LCAP_MAX_LENGTH_WIDTH, val);
>> +
>> +	val = pcie_rc_cfg_rd(lpp, PCIE_LCTLSTS);
>> +
>> +	val &= ~(PCIE_LCTLSTS_LINK_DISABLE | PCIE_LCTLSTS_ASPM_ENABLE);
>> +	val |= (PCIE_LCTLSTS_SLOT_CLK_CFG | PCIE_LCTLSTS_COM_CLK_CFG |
>> +		PCIE_LCTLSTS_RCB128);
> Unnecessary parentheses.
Sure, I will fix it.
>
>> +	pcie_rc_cfg_wr(lpp, val, PCIE_LCTLSTS);
>> +}
>> +	switch (lpp->max_speed) {
>> +	case PCIE_LINK_SPEED_GEN1:
>> +	case PCIE_LINK_SPEED_GEN2:
>> +		fts = PCIE_AFR_GEN12_FTS_NUM_DFT;
>> +		break;
> You may group this with default case, right?
Sure, I am OK to group it with the default case. I will update it in the
next patch revision.
>
>> +	case PCIE_LINK_SPEED_GEN3:
>> +		fts = PCIE_AFR_GEN3_FTS_NUM_DFT;
>> +		break;
>> +	case PCIE_LINK_SPEED_GEN4:
>> +		fts = PCIE_AFR_GEN4_FTS_NUM_DFT;
>> +		break;
>> +	default:
>> +		fts = PCIE_AFR_GEN12_FTS_NUM_DFT;
>> +		break;
>> +	}
>> +	trace_printk("PCIe misc interrupt status 0x%x\n", reg);
> Hmm... Doesn't it give you a BIG warning?
>
> kernel/trace/trace.c:trace_printk_init_buffers()
I hope this should be fine as it comes in debug builds only.
>
>> +	return IRQ_HANDLED;
>> +}
> Perhaps the PCI needs some trace points instead.
>
>> +static int intel_pcie_setup_irq(struct intel_pcie_port *lpp)
>> +{
>> +	struct device *dev = lpp->pci.dev;
>> +	struct platform_device *pdev;
>> +	char *irq_name;
>> +	int irq, ret;
>> +
>> +	pdev = to_platform_device(dev);
>> +	irq = platform_get_irq(pdev, 0);
>> +	if (irq < 0) {
>> +		dev_err(dev, "missing sys integrated irq resource\n");
> Redundant since this cycle.
platform_get_irq() does not print any error message, so I kept an error
message here.

>
>> +		return irq;
>> +	}
>> +	irq_name = devm_kasprintf(dev, GFP_KERNEL, "pcie_misc%d", lpp->id);
>> +	if (!irq_name) {
>> +		dev_err(dev, "failed to alloc irq name\n");
> Not very useful. initcall_debug shows an error code.
Thanks for pointing it out. I will remove this debug print.
>
>> +		return -ENOMEM;
>> +	}
>> +
>> +	ret = devm_request_irq(dev, irq, intel_pcie_core_isr,
>> +			       IRQF_SHARED, irq_name, lpp);
>> +	if (ret) {
>> +		dev_err(dev, "request irq %d failed\n", irq);
>> +		return ret;
>> +	}
> + blank line.
Agree, will fix it.
>
>> +	/* Enable integrated interrupts */
>> +	pcie_app_wr_mask(lpp, PCIE_APP_IRN_INT,
>> +			 PCIE_APP_IRN_INT, PCIE_APP_IRNEN);
> At least one parameter can be located on the previous line.
Agree, will fix it.
>
>> +	return ret;
>> +}
>> +static int intel_pcie_get_resources(struct platform_device *pdev)
>> +{
>> +	struct intel_pcie_port *lpp;
>> +	struct resource *res;
>> +	struct dw_pcie *pci;
>> +	struct device *dev;
>> +	int ret;
>> +
>> +	lpp = platform_get_drvdata(pdev);
>> +	pci = &lpp->pci;
>> +	dev = pci->dev;
> You may save a few LOCs here and there by adding these to the
> corresponding definition blocks.
Agree, will fix it.
>
>> +
>> +	ret = device_property_read_u32(dev, "linux,pci-domain", &lpp->id);
>> +	if (ret) {
>> +		dev_err(dev, "failed to get domain id, errno %d\n", ret);
>> +		return ret;
>> +	}
>> +
>> +	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi");
>> +	if (!res)
>> +		return -ENXIO;
> Redundant. It's done in below call.
Agree, will fix it.
>
>> +
>> +	pci->dbi_base = devm_ioremap_resource(dev, res);
>> +	if (IS_ERR(pci->dbi_base))
>> +		return PTR_ERR(pci->dbi_base);
>> +
>> +	lpp->core_clk = devm_clk_get(dev, NULL);
>> +	if (IS_ERR(lpp->core_clk)) {
>> +		ret = PTR_ERR(lpp->core_clk);
>> +		if (ret != -EPROBE_DEFER)
>> +			dev_err(dev, "failed to get clks: %d\n", ret);
>> +		return ret;
>> +	}
>> +
>> +	lpp->core_rst = devm_reset_control_get(dev, NULL);
>> +	if (IS_ERR(lpp->core_rst)) {
>> +		ret = PTR_ERR(lpp->core_rst);
>> +		if (ret != -EPROBE_DEFER)
>> +			dev_err(dev, "failed to get resets: %d\n", ret);
>> +		return ret;
>> +	}
>> +
>> +	ret = device_property_match_string(dev, "device_type", "pci");
>> +	if (ret) {
>> +		dev_err(dev, "failed to find pci device type: %d\n", ret);
>> +		return ret;
>> +	}
>> +
>> +	if (device_property_read_u32(dev, "reset-assert-ms",
>> +				     &lpp->rst_interval))
> I would rather leave it on one line.
Agree, will fix it.
>
>
>> +		lpp->rst_interval = RST_INTRVL_DFT_MS;
>> +
>> +	if (device_property_read_u32(dev, "max-link-speed", &lpp->link_gen))
>> +		lpp->link_gen = 0; /* Fallback to auto */
>> +
>> +	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "app");
>> +	if (!res)
>> +		return -ENXIO;
> Redundant.
Agree, will fix it.
>
>> +
>> +	lpp->app_base = devm_ioremap_resource(dev, res);
>> +	if (IS_ERR(lpp->app_base))
>> +		return PTR_ERR(lpp->app_base);
>> +
>> +	ret = intel_pcie_ep_rst_init(lpp);
>> +	if (ret)
>> +		return ret;
>> +
>> +	lpp->phy = devm_phy_get(dev, "pciephy");
>> +	if (IS_ERR(lpp->phy)) {
>> +		ret = PTR_ERR(lpp->phy);
>> +		if (ret != -EPROBE_DEFER)
>> +			dev_err(dev, "couldn't get pcie-phy: %d\n", ret);
>> +		return ret;
>> +	}
>> +	return 0;
>> +}
>> +static int intel_pcie_host_setup(struct intel_pcie_port *lpp)
>> +{
>> +	int ret;
>> +
>> +	intel_pcie_core_rst_assert(lpp);
>> +	intel_pcie_device_rst_assert(lpp);
>> +
>> +	ret = phy_init(lpp->phy);
>> +	if (ret)
>> +		return ret;
>> +
>> +	intel_pcie_core_rst_deassert(lpp);
>> +
>> +	ret = clk_prepare_enable(lpp->core_clk);
>> +	if (ret) {
>> +		dev_err(lpp->pci.dev, "Core clock enable failed: %d\n", ret);
>> +		goto clk_err;
>> +	}
>> +
>> +	intel_pcie_rc_setup(lpp);
>> +	ret = intel_pcie_app_logic_setup(lpp);
>> +	if (ret)
>> +		goto app_init_err;
>> +
>> +	ret = intel_pcie_setup_irq(lpp);
>> +	if (!ret)
>> +		return ret;
>> +
>> +	intel_pcie_turn_off(lpp);
>> +app_init_err:
>> +	clk_disable_unprepare(lpp->core_clk);
>> +clk_err:
>> +	intel_pcie_core_rst_assert(lpp);
>> +	intel_pcie_deinit_phy(lpp);
> Why not phy_exit()?
intel_pcie_deinit_phy() calls phy_exit().
>
>> +	return ret;
>> +}
>> +static ssize_t pcie_width_store(struct device *dev,
>> +				struct device_attribute *attr,
>> +				const char *buf, size_t len)
>> +{
>> +	struct intel_pcie_port *lpp;
>> +	unsigned long val;
>> +
>> +	lpp = dev_get_drvdata(dev);
>> +
>> +	if (kstrtoul(buf, 10, &val))
>> +		return -EINVAL;
> Why shadowing error code?
Thanks for pointing it out; will fix it.
>
>> +
>> +	if (val > lpp->max_width)
>> +		return -EINVAL;
>> +
>> +	lpp->lanes = val;
>> +	intel_pcie_max_link_width_setup(lpp);
>> +
>> +	return len;
>> +}
>> +	pcie_rc_cfg_wr_mask(lpp, PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER,
>> +			    0, PCI_COMMAND);
> 0 pretty much fits on the previous line.
Agree, will fix it.
>
> You need to fix your editor settings.
>
>> +int intel_pcie_msi_init(struct pcie_port *pp)
>> +{
>> +	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
>> +
>> +	dev_dbg(pci->dev, "PCIe MSI/MSIx is handled by MSI in x86 processor\n");
> Is it useful message?
Yes, it is required because the MSI controller can belong to the PCIe RC
or be a separate one. In this case the MSI controller is the x86
processor's.
>
>> +	return 0;
>> +}
>> +	dev_info(dev,
>> +		 "Intel AXI PCIe Root Complex Port %d Init Done\n", lpp->id);
> Same, literal can be placed on previous line.
Agree, will fix it.


Regards,

Dilip

>

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v3 2/2] dwc: PCI: intel: Intel PCIe RC controller driver
  2019-09-05 10:45   ` Andrew Murray
  2019-09-05 11:26     ` Christoph Hellwig
  2019-09-05 11:40     ` Andy Shevchenko
@ 2019-09-06 10:58     ` Dilip Kota
  2019-09-06 11:20       ` Andrew Murray
  2 siblings, 1 reply; 37+ messages in thread
From: Dilip Kota @ 2019-09-06 10:58 UTC (permalink / raw)
  To: Andrew Murray
  Cc: jingoohan1, gustavo.pimentel, lorenzo.pieralisi, robh,
	martin.blumenstingl, linux-pci, hch, devicetree, linux-kernel,
	andriy.shevchenko, cheol.yong.kim, chuanhua.lei, qi-ming.wu

Hi Andrew Murray,

Thanks for the review. Please find my response inline.

On 9/5/2019 6:45 PM, Andrew Murray wrote:
> On Wed, Sep 04, 2019 at 06:10:31PM +0800, Dilip Kota wrote:
>> Add support for the PCIe RC controller on the Intel Universal
>> Gateway SoC. The PCIe controller is based on the Synopsys
>> DesignWare PCIe core.
>>
>> Signed-off-by: Dilip Kota <eswara.kota@linux.intel.com>
>> ---
> Hi Dilip,
>
> Thanks for the patch, initial feedback below:
>
>> changes on v3:
>> 	Rename PCIe app logic registers with PCIE_APP prefix.
>> 	PCIE_IOP_CTRL configuration is not required. Remove respective code.
>> 	Remove wrapper functions for clk enable/disable APIs.
>> 	Use platform_get_resource_byname() instead of
>> 	  devm_platform_ioremap_resource() to be similar with DWC framework.
>> 	Rename phy name to "pciephy".
>> 	Modify debug message in msi_init() callback to be more specific.
>> 	Remove map_irq() callback.
>> 	Enable the INTx interrupts at the time of PCIe initialization.
>> 	Reduce memory fragmentation by using variable "struct dw_pcie pci"
>> 	  instead of allocating memory.
>> 	Reduce the delay to 100us during endpoint initialization
>> 	  intel_pcie_ep_rst_init().
>> 	Call  dw_pcie_host_deinit() during remove() instead of directly
>> 	  calling PCIe core APIs.
>> 	Rename "intel,rst-interval" to "reset-assert-ms".
>> 	Remove unused APP logic Interrupt bit macro definitions.
>>   	Use dwc framework's dw_pcie_setup_rc() for PCIe host specific
>> 	 configuration instead of redefining the same functionality in
>> 	 the driver.
>> 	Move the whole DT parsing specific code to intel_pcie_get_resources()
>>
>>   drivers/pci/controller/dwc/Kconfig          |  13 +
>>   drivers/pci/controller/dwc/Makefile         |   1 +
>>   drivers/pci/controller/dwc/pcie-intel-axi.c | 840 ++++++++++++++++++++++++++++
>>   3 files changed, 854 insertions(+)
>>   create mode 100644 drivers/pci/controller/dwc/pcie-intel-axi.c
>>
>> diff --git a/drivers/pci/controller/dwc/Kconfig b/drivers/pci/controller/dwc/Kconfig
>> index 6ea778ae4877..e44b9b6a6390 100644
>> --- a/drivers/pci/controller/dwc/Kconfig
>> +++ b/drivers/pci/controller/dwc/Kconfig
>> @@ -82,6 +82,19 @@ config PCIE_DW_PLAT_EP
>>   	  order to enable device-specific features PCI_DW_PLAT_EP must be
>>   	  selected.
>>   
>> +config PCIE_INTEL_AXI
>> +        bool "Intel AHB/AXI PCIe host controller support"
>> +        depends on PCI_MSI
>> +        depends on PCI
> I don't think the PCI dependency is required here.
>
> I'm also not sure why PCI_MSI is required, we select PCIE_DW_HOST which
> depends on PCI_MSI_IRQ_DOMAIN which depends on PCI_MSI.
Agree, the dependency on PCI and PCI_MSI can be removed. I will remove
them in the next patch revision.
>
>> +        depends on OF
>> +        select PCIE_DW_HOST
>> +        help
>> +          Say 'Y' here to enable support for Intel AHB/AXI PCIe Host
>> +	  controller driver.
>> +	  The Intel PCIe controller is based on the Synopsys Designware
>> +	  pcie core and therefore uses the Designware core functions to
>> +	  implement the driver.
> I can see this description is similar to others in the same Kconfig,
> however I'm not sure what value a user gains by knowing implementation
> details - it's helpful to know that PCIE_INTEL_AXI is based on the
> Designware core, but is it helpful to know that the Designware core
> functions are used?
>
>> +
>>   config PCI_EXYNOS
>>   	bool "Samsung Exynos PCIe controller"
>>   	depends on SOC_EXYNOS5440 || COMPILE_TEST
>> diff --git a/drivers/pci/controller/dwc/Makefile b/drivers/pci/controller/dwc/Makefile
>> index b085dfd4fab7..46e656ebdf90 100644
>> --- a/drivers/pci/controller/dwc/Makefile
>> +++ b/drivers/pci/controller/dwc/Makefile
>> @@ -3,6 +3,7 @@ obj-$(CONFIG_PCIE_DW) += pcie-designware.o
>>   obj-$(CONFIG_PCIE_DW_HOST) += pcie-designware-host.o
>>   obj-$(CONFIG_PCIE_DW_EP) += pcie-designware-ep.o
>>   obj-$(CONFIG_PCIE_DW_PLAT) += pcie-designware-plat.o
>> +obj-$(CONFIG_PCIE_INTEL_AXI) += pcie-intel-axi.o
>>   obj-$(CONFIG_PCI_DRA7XX) += pci-dra7xx.o
>>   obj-$(CONFIG_PCI_EXYNOS) += pci-exynos.o
>>   obj-$(CONFIG_PCI_IMX6) += pci-imx6.o
>> diff --git a/drivers/pci/controller/dwc/pcie-intel-axi.c b/drivers/pci/controller/dwc/pcie-intel-axi.c
>> new file mode 100644
>> index 000000000000..75607ce03ebf
>> --- /dev/null
>> +++ b/drivers/pci/controller/dwc/pcie-intel-axi.c
>> @@ -0,0 +1,840 @@
>> +// SPDX-License-Identifier: GPL-2.0
>> +/*
>> + * PCIe host controller driver for Intel AXI PCIe Bridge
>> + *
>> + * Copyright (c) 2019 Intel Corporation.
>> + */
>> +
>> +#include <linux/bitfield.h>
>> +#include <linux/clk.h>
>> +#include <linux/gpio/consumer.h>
>> +#include <linux/interrupt.h>
>> +#include <linux/iopoll.h>
>> +#include <linux/of_irq.h>
>> +#include <linux/of_pci.h>
>> +#include <linux/of_platform.h>
>> +#include <linux/phy/phy.h>
>> +#include <linux/platform_device.h>
>> +#include <linux/reset.h>
>> +
>> +#include "../../pci.h"
> Please remove this - it isn't needed.
Ok, will remove it in the next patch revision.
>
>> +#include "pcie-designware.h"
>> +
>> +#define PCIE_CCRID				0x8
>> +
>> +#define PCIE_LCAP				0x7C
>> +#define PCIE_LCAP_MAX_LINK_SPEED		GENMASK(3, 0)
>> +#define PCIE_LCAP_MAX_LENGTH_WIDTH		GENMASK(9, 4)
> These look like the standard PCI Link Capabilities Register,
> can you use the standard ones defined in pci_regs.h? (see
> PCI_EXP_LNKCAP_SLS_*).
>
>> +
>> +/* Link Control and Status Register */
>> +#define PCIE_LCTLSTS				0x80
>> +#define PCIE_LCTLSTS_ASPM_ENABLE		GENMASK(1, 0)
>> +#define PCIE_LCTLSTS_RCB128			BIT(3)
>> +#define PCIE_LCTLSTS_LINK_DISABLE		BIT(4)
>> +#define PCIE_LCTLSTS_COM_CLK_CFG		BIT(6)
>> +#define PCIE_LCTLSTS_HW_AW_DIS			BIT(9)
>> +#define PCIE_LCTLSTS_LINK_SPEED			GENMASK(19, 16)
>> +#define PCIE_LCTLSTS_NEGOTIATED_LINK_WIDTH	GENMASK(25, 20)
>> +#define PCIE_LCTLSTS_SLOT_CLK_CFG		BIT(28)
> These look like the standard Link Control and Link Status register, can
> you use the standard ones defined in pci_regs.h? (see PCI_EXP_LNKCTL_*
> and PCI_EXP_LNKSTA_*).
>
>> +
>> +#define PCIE_LCTLSTS2				0xA0
>> +#define PCIE_LCTLSTS2_TGT_LINK_SPEED		GENMASK(3, 0)
>> +#define PCIE_LCTLSTS2_TGT_LINK_SPEED_25GT	0x1
>> +#define PCIE_LCTLSTS2_TGT_LINK_SPEED_5GT	0x2
>> +#define PCIE_LCTLSTS2_TGT_LINK_SPEED_8GT	0x3
>> +#define PCIE_LCTLSTS2_TGT_LINK_SPEED_16GT	0x4
> And these look like the standard Link Control Register 2 (see
> PCI_EXP_LNKCTL2_*).
>
> Most of the #defines above and below look just like standard PCI defines
> (for example those found in pci_regs.h) - both in terms of their name and
> what they do. Ideally where the functionality is the same or very similar,
> then we should use the standard PCI defines in (pci_regs.h). This helps
> readability/understanding, helps to identify duplicate code and reduces
> the size of your patch.
>
> Also the capability offsets (e.g. PCIE_LCTLSTS2) are also standard. The
> offsets you define are offset by an additional 0x70. E.g.i
> PCIE_LCTLSTS2 = 0xA0, and PCI_EXP_LNKCTL2 = 0x30. Perhaps abstracting
> this offset will allow you to use the standard pci defines?
>
> I haven't looked in much detail at the remainder of the defines, but
> the same rationale should apply.
Agree, that's a good point. I will define an offset macro and use the
macros defined in pci_regs.h.
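Concretely, the reuse could look like the sketch below. The pci_regs.h values are copied in so it compiles standalone, and the fixed 0x70 capability offset is an assumption taken from the review comment (e.g. PCIE_LCTLSTS2 at 0xA0 is PCI_EXP_LNKCTL2 at 0x30 plus 0x70):

```c
/* Values copied from include/uapi/linux/pci_regs.h so this sketch is
 * self-contained. */
#define PCI_EXP_LNKCAP	0x0c	/* Link Capabilities */
#define PCI_EXP_LNKCTL	0x10	/* Link Control / Link Status */
#define PCI_EXP_LNKCTL2	0x30	/* Link Control 2 / Link Status 2 */

/* Assumed: the controller's PCIe capability block sits at a fixed 0x70
 * offset in its config space, per the reviewer's observation. */
#define PCIE_CAP_OFST	0x70

/* Map a standard capability-relative offset to this controller's
 * absolute config-space offset. */
static unsigned int pcie_cap_reg(unsigned int std_ofs)
{
	return PCIE_CAP_OFST + std_ofs;
}
```

With this, the driver-private PCIE_LCAP (0x7C), PCIE_LCTLSTS (0x80) and PCIE_LCTLSTS2 (0xA0) defines collapse into `pcie_cap_reg(PCI_EXP_LNKCAP)` and friends, so the field masks from pci_regs.h can be reused as well.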
>
>> +#define PCIE_LCTLSTS2_HW_AUTO_DIS		BIT(5)
>> +
>> +/* Ack Frequency Register */
>> +#define PCIE_AFR				0x70C
>> +#define PCIE_AFR_FTS_NUM			GENMASK(15, 8)
>> +#define PCIE_AFR_COM_FTS_NUM			GENMASK(23, 16)
>> +#define PCIE_AFR_GEN12_FTS_NUM_DFT		(SZ_128 - 1)
>> +#define PCIE_AFR_GEN3_FTS_NUM_DFT		180
>> +#define PCIE_AFR_GEN4_FTS_NUM_DFT		196
>> +
>> +#define PCIE_PLCR_DLL_LINK_EN			BIT(5)
>> +#define PCIE_PORT_LOGIC_FTS			GENMASK(7, 0)
>> +#define PCIE_PORT_LOGIC_DFT_FTS_NUM		(SZ_128 - 1)
>> +
>> +#define PCIE_MISC_CTRL				0x8BC
>> +#define PCIE_MISC_CTRL_DBI_RO_WR_EN		BIT(0)
>> +
>> +#define PCIE_MULTI_LANE_CTRL			0x8C0
>> +#define PCIE_UPCONFIG_SUPPORT			BIT(7)
>> +#define PCIE_DIRECT_LINK_WIDTH_CHANGE		BIT(6)
>> +#define PCIE_TARGET_LINK_WIDTH			GENMASK(5, 0)
>> +
>> +/* APP RC Core Control Register */
>> +#define PCIE_APP_CCR				0x10
>> +#define PCIE_APP_CCR_LTSSM_ENABLE		BIT(0)
>> +
>> +/* PCIe Message Control */
>> +#define PCIE_APP_MSG_CR				0x30
>> +#define PCIE_APP_MSG_XMT_PM_TURNOFF		BIT(0)
>> +
>> +/* PCIe Power Management Control */
>> +#define PCIE_APP_PMC				0x44
>> +#define PCIE_APP_PMC_IN_L2			BIT(20)
>> +
>> +/* Interrupt Enable Register */
>> +#define PCIE_APP_IRNEN				0xF4
>> +#define PCIE_APP_IRNCR				0xF8
>> +#define PCIE_APP_IRN_AER_REPORT			BIT(0)
>> +#define PCIE_APP_IRN_PME			BIT(2)
>> +#define PCIE_APP_IRN_RX_VDM_MSG			BIT(4)
>> +#define PCIE_APP_IRN_PM_TO_ACK			BIT(9)
>> +#define PCIE_APP_IRN_LINK_AUTO_BW_STAT		BIT(11)
>> +#define PCIE_APP_IRN_BW_MGT			BIT(12)
>> +#define PCIE_APP_IRN_MSG_LTR			BIT(18)
>> +#define PCIE_APP_IRN_SYS_ERR_RC			BIT(29)
>> +
>> +#define PCIE_APP_INTX_OFST	12
>> +#define PCIE_APP_IRN_INT	(PCIE_APP_IRN_AER_REPORT | PCIE_APP_IRN_PME | \
>> +			PCIE_APP_IRN_RX_VDM_MSG | PCIE_APP_IRN_SYS_ERR_RC | \
>> +			PCIE_APP_IRN_PM_TO_ACK | PCIE_APP_IRN_MSG_LTR | \
>> +			PCIE_APP_IRN_BW_MGT | PCIE_APP_IRN_LINK_AUTO_BW_STAT | \
>> +			(PCIE_APP_INTX_OFST + PCI_INTERRUPT_INTA) | \
>> +			(PCIE_APP_INTX_OFST + PCI_INTERRUPT_INTB) | \
>> +			(PCIE_APP_INTX_OFST + PCI_INTERRUPT_INTC) | \
>> +			(PCIE_APP_INTX_OFST + PCI_INTERRUPT_INTD))
>> +
>> +#define BUS_IATU_OFFS		SZ_256M
>> +#define RST_INTRVL_DFT_MS	100
>> +enum {
>> +	PCIE_LINK_SPEED_AUTO = 0,
>> +	PCIE_LINK_SPEED_GEN1,
>> +	PCIE_LINK_SPEED_GEN2,
>> +	PCIE_LINK_SPEED_GEN3,
>> +	PCIE_LINK_SPEED_GEN4,
>> +};
>> +
>> +struct intel_pcie_soc {
>> +	unsigned int pcie_ver;
>> +	unsigned int pcie_atu_offset;
>> +	u32 num_viewport;
>> +};
>> +
>> +struct intel_pcie_port {
>> +	struct dw_pcie		pci;
>> +	unsigned int		id; /* Physical RC Index */
>> +	void __iomem		*app_base;
>> +	struct gpio_desc	*reset_gpio;
>> +	u32			rst_interval;
>> +	u32			max_speed;
>> +	u32			link_gen;
>> +	u32			max_width;
>> +	u32			lanes;
>> +	struct clk		*core_clk;
>> +	struct reset_control	*core_rst;
>> +	struct phy		*phy;
>> +};
>> +
>> +static void pcie_update_bits(void __iomem *base, u32 mask, u32 val, u32 ofs)
>> +{
>> +	u32 orig, tmp;
>> +
>> +	orig = readl(base + ofs);
>> +
>> +	tmp = (orig & ~mask) | (val & mask);
>> +
>> +	if (tmp != orig)
>> +		writel(tmp, base + ofs);
>> +}
>> +
>> +static inline u32 pcie_app_rd(struct intel_pcie_port *lpp, u32 ofs)
>> +{
>> +	return readl(lpp->app_base + ofs);
>> +}
>> +
>> +static inline void pcie_app_wr(struct intel_pcie_port *lpp, u32 val, u32 ofs)
>> +{
>> +	writel(val, lpp->app_base + ofs);
>> +}
>> +
>> +static void pcie_app_wr_mask(struct intel_pcie_port *lpp,
>> +			     u32 mask, u32 val, u32 ofs)
>> +{
>> +	pcie_update_bits(lpp->app_base, mask, val, ofs);
>> +}
>> +
>> +static inline u32 pcie_rc_cfg_rd(struct intel_pcie_port *lpp, u32 ofs)
>> +{
>> +	return dw_pcie_readl_dbi(&lpp->pci, ofs);
>> +}
>> +
>> +static inline void pcie_rc_cfg_wr(struct intel_pcie_port *lpp, u32 val, u32 ofs)
>> +{
>> +	dw_pcie_writel_dbi(&lpp->pci, ofs, val);
>> +}
>> +
>> +static void pcie_rc_cfg_wr_mask(struct intel_pcie_port *lpp,
>> +				u32 mask, u32 val, u32 ofs)
>> +{
>> +	pcie_update_bits(lpp->pci.dbi_base, mask, val, ofs);
>> +}
>> +
>> +static void intel_pcie_mem_iatu(struct intel_pcie_port *lpp)
>> +{
>> +	struct pcie_port *pp = &lpp->pci.pp;
>> +	phys_addr_t cpu_addr = pp->mem_base;
>> +
>> +	dw_pcie_prog_outbound_atu(&lpp->pci, PCIE_ATU_REGION_INDEX0,
>> +				  PCIE_ATU_TYPE_MEM, cpu_addr,
>> +				  pp->mem_base, pp->mem_size);
>> +}
>> +
>> +static void intel_pcie_ltssm_enable(struct intel_pcie_port *lpp)
>> +{
>> +	pcie_app_wr_mask(lpp, PCIE_APP_CCR_LTSSM_ENABLE,
>> +			 PCIE_APP_CCR_LTSSM_ENABLE, PCIE_APP_CCR);
>> +}
>> +
>> +static void intel_pcie_ltssm_disable(struct intel_pcie_port *lpp)
>> +{
>> +	pcie_app_wr_mask(lpp, PCIE_APP_CCR_LTSSM_ENABLE, 0, PCIE_APP_CCR);
>> +}
>> +
>> +static const char *pcie_link_gen_to_str(int gen)
>> +{
>> +	switch (gen) {
>> +	case PCIE_LINK_SPEED_GEN1:
>> +		return "2.5";
>> +	case PCIE_LINK_SPEED_GEN2:
>> +		return "5.0";
>> +	case PCIE_LINK_SPEED_GEN3:
>> +		return "8.0";
>> +	case PCIE_LINK_SPEED_GEN4:
>> +		return "16.0";
>> +	default:
>> +		return "???";
>> +	}
>> +}
>> +
>> +static void intel_pcie_link_setup(struct intel_pcie_port *lpp)
>> +{
>> +	u32 val;
>> +
>> +	val = pcie_rc_cfg_rd(lpp, PCIE_LCAP);
>> +	lpp->max_speed = FIELD_GET(PCIE_LCAP_MAX_LINK_SPEED, val);
>> +	lpp->max_width = FIELD_GET(PCIE_LCAP_MAX_LENGTH_WIDTH, val);
>> +
>> +	val = pcie_rc_cfg_rd(lpp, PCIE_LCTLSTS);
>> +
>> +	val &= ~(PCIE_LCTLSTS_LINK_DISABLE | PCIE_LCTLSTS_ASPM_ENABLE);
>> +	val |= (PCIE_LCTLSTS_SLOT_CLK_CFG | PCIE_LCTLSTS_COM_CLK_CFG |
>> +		PCIE_LCTLSTS_RCB128);
>> +	pcie_rc_cfg_wr(lpp, val, PCIE_LCTLSTS);
>> +}
>> +
>> +static void intel_pcie_max_speed_setup(struct intel_pcie_port *lpp)
>> +{
>> +	u32 reg, val;
>> +
>> +	reg = pcie_rc_cfg_rd(lpp, PCIE_LCTLSTS2);
>> +	switch (lpp->link_gen) {
>> +	case PCIE_LINK_SPEED_GEN1:
>> +		reg &= ~PCIE_LCTLSTS2_TGT_LINK_SPEED;
>> +		reg |= PCIE_LCTLSTS2_HW_AUTO_DIS |
>> +			PCIE_LCTLSTS2_TGT_LINK_SPEED_25GT;
>> +		break;
>> +	case PCIE_LINK_SPEED_GEN2:
>> +		reg &= ~PCIE_LCTLSTS2_TGT_LINK_SPEED;
>> +		reg |= PCIE_LCTLSTS2_HW_AUTO_DIS |
>> +			PCIE_LCTLSTS2_TGT_LINK_SPEED_5GT;
>> +		break;
>> +	case PCIE_LINK_SPEED_GEN3:
>> +		reg &= ~PCIE_LCTLSTS2_TGT_LINK_SPEED;
>> +		reg |= PCIE_LCTLSTS2_HW_AUTO_DIS |
>> +			PCIE_LCTLSTS2_TGT_LINK_SPEED_8GT;
>> +		break;
>> +	case PCIE_LINK_SPEED_GEN4:
>> +		reg &= ~PCIE_LCTLSTS2_TGT_LINK_SPEED;
>> +		reg |= PCIE_LCTLSTS2_HW_AUTO_DIS |
>> +			PCIE_LCTLSTS2_TGT_LINK_SPEED_16GT;
>> +		break;
>> +	default:
>> +		/* Use hardware capability */
>> +		val = pcie_rc_cfg_rd(lpp, PCIE_LCAP);
>> +		val = FIELD_GET(PCIE_LCAP_MAX_LINK_SPEED, val);
>> +		reg &= ~PCIE_LCTLSTS2_HW_AUTO_DIS;
>> +		reg |= val;
>> +		break;
>> +	}
>> +	pcie_rc_cfg_wr(lpp, reg, PCIE_LCTLSTS2);
>> +}
>> +
>> +static void intel_pcie_speed_change_enable(struct intel_pcie_port *lpp)
>> +{
>> +	u32 mask, val;
>> +
>> +	mask = PORT_LOGIC_SPEED_CHANGE | PCIE_PORT_LOGIC_FTS;
>> +	val = PORT_LOGIC_SPEED_CHANGE | PCIE_PORT_LOGIC_DFT_FTS_NUM;
>> +
>> +	pcie_rc_cfg_wr_mask(lpp, mask, val, PCIE_LINK_WIDTH_SPEED_CONTROL);
>> +}
>> +
>> +static void intel_pcie_speed_change_disable(struct intel_pcie_port *lpp)
>> +{
>> +	pcie_rc_cfg_wr_mask(lpp, PORT_LOGIC_SPEED_CHANGE, 0,
>> +			    PCIE_LINK_WIDTH_SPEED_CONTROL);
>> +}
>> +
>> +static void intel_pcie_max_link_width_setup(struct intel_pcie_port *lpp)
>> +{
>> +	u32 mask, val;
>> +
>> +	/* HW auto bandwidth negotiation must be enabled */
>> +	pcie_rc_cfg_wr_mask(lpp, PCIE_LCTLSTS_HW_AW_DIS, 0, PCIE_LCTLSTS);
>> +
>> +	mask = PCIE_DIRECT_LINK_WIDTH_CHANGE | PCIE_TARGET_LINK_WIDTH;
>> +	val = PCIE_DIRECT_LINK_WIDTH_CHANGE | lpp->lanes;
>> +	pcie_rc_cfg_wr_mask(lpp, mask, val, PCIE_MULTI_LANE_CTRL);
> Is this identical functionality to the writing of PCIE_PORT_LINK_CONTROL
> in dw_pcie_setup?
>
> I ask because if the user sets num-lanes in the DT, will it have the
> desired effect?

The intel_pcie_max_link_width_setup() function is called from the sysfs
attribute handler pcie_width_store() to change the link width on the fly.

>
>> +}
>> +
>> +static void intel_pcie_port_logic_setup(struct intel_pcie_port *lpp)
>> +{
>> +	u32 val, mask, fts;
>> +
>> +	switch (lpp->max_speed) {
>> +	case PCIE_LINK_SPEED_GEN1:
>> +	case PCIE_LINK_SPEED_GEN2:
>> +		fts = PCIE_AFR_GEN12_FTS_NUM_DFT;
>> +		break;
>> +	case PCIE_LINK_SPEED_GEN3:
>> +		fts = PCIE_AFR_GEN3_FTS_NUM_DFT;
>> +		break;
>> +	case PCIE_LINK_SPEED_GEN4:
>> +		fts = PCIE_AFR_GEN4_FTS_NUM_DFT;
>> +		break;
>> +	default:
>> +		fts = PCIE_AFR_GEN12_FTS_NUM_DFT;
>> +		break;
>> +	}
>> +	mask = PCIE_AFR_FTS_NUM | PCIE_AFR_COM_FTS_NUM;
>> +	val = FIELD_PREP(PCIE_AFR_FTS_NUM, fts) |
>> +	       FIELD_PREP(PCIE_AFR_COM_FTS_NUM, fts);
>> +	pcie_rc_cfg_wr_mask(lpp, mask, val, PCIE_AFR);
>> +
>> +	/* Port Link Control Register */
>> +	pcie_rc_cfg_wr_mask(lpp, PCIE_PLCR_DLL_LINK_EN,
>> +			    PCIE_PLCR_DLL_LINK_EN, PCIE_PORT_LINK_CONTROL);
>> +}
>> +
>> +static void intel_pcie_upconfig_setup(struct intel_pcie_port *lpp)
>> +{
>> +	pcie_rc_cfg_wr_mask(lpp, PCIE_UPCONFIG_SUPPORT,
>> +			    PCIE_UPCONFIG_SUPPORT, PCIE_MULTI_LANE_CTRL);
>> +}
>> +
>> +static void intel_pcie_rc_setup(struct intel_pcie_port *lpp)
>> +{
>> +	intel_pcie_ltssm_disable(lpp);
>> +	intel_pcie_link_setup(lpp);
>> +	dw_pcie_setup_rc(&lpp->pci.pp);
>> +	intel_pcie_upconfig_setup(lpp);
>> +
>> +	intel_pcie_max_speed_setup(lpp);
>> +	intel_pcie_speed_change_enable(lpp);
>> +	intel_pcie_port_logic_setup(lpp);
>> +	intel_pcie_mem_iatu(lpp);
> Doesn't dw_pcie_setup_rc do the same as intel_pcie_mem_iatu?
Thanks for pointing it out. dw_pcie_setup_rc() does the same, so
intel_pcie_mem_iatu() can be removed.
>
>> +}
>> +
>> +static int intel_pcie_ep_rst_init(struct intel_pcie_port *lpp)
>> +{
>> +	struct device *dev = lpp->pci.dev;
>> +	int ret;
>> +
>> +	lpp->reset_gpio = devm_gpiod_get(dev, "reset", GPIOD_OUT_LOW);
>> +	if (IS_ERR(lpp->reset_gpio)) {
>> +		ret = PTR_ERR(lpp->reset_gpio);
>> +		if (ret != -EPROBE_DEFER)
>> +			dev_err(dev, "failed to request PCIe GPIO: %d\n", ret);
>> +		return ret;
>> +	}
>> +	/* Make initial reset last for 100us */
>> +	usleep_range(100, 200);
>> +
>> +	return 0;
>> +}
>> +
>> +static void intel_pcie_core_rst_assert(struct intel_pcie_port *lpp)
>> +{
>> +	reset_control_assert(lpp->core_rst);
>> +}
>> +
>> +static void intel_pcie_core_rst_deassert(struct intel_pcie_port *lpp)
>> +{
>> +	/*
>> +	 * One micro-second delay to make sure the reset pulse
>> +	 * wide enough so that core reset is clean.
>> +	 */
>> +	udelay(1);
>> +	reset_control_deassert(lpp->core_rst);
>> +
>> +	/*
>> +	 * Some SoC core reset also reset PHY, more delay needed
>> +	 * to make sure the reset process is done.
>> +	 */
>> +	usleep_range(1000, 2000);
>> +}
>> +
>> +static void intel_pcie_device_rst_assert(struct intel_pcie_port *lpp)
>> +{
>> +	gpiod_set_value_cansleep(lpp->reset_gpio, 1);
>> +}
>> +
>> +static void intel_pcie_device_rst_deassert(struct intel_pcie_port *lpp)
>> +{
>> +	msleep(lpp->rst_interval);
>> +	gpiod_set_value_cansleep(lpp->reset_gpio, 0);
>> +}
>> +
>> +static int intel_pcie_app_logic_setup(struct intel_pcie_port *lpp)
>> +{
>> +	intel_pcie_device_rst_deassert(lpp);
>> +	intel_pcie_ltssm_enable(lpp);
>> +
>> +	return dw_pcie_wait_for_link(&lpp->pci);
>> +}
>> +
>> +static irqreturn_t intel_pcie_core_isr(int irq, void *arg)
>> +{
>> +	struct intel_pcie_port *lpp = arg;
>> +	u32 val, reg;
>> +
>> +	reg = pcie_app_rd(lpp, PCIE_APP_IRNCR);
>> +	val = reg & PCIE_APP_IRN_INT;
>> +
>> +	pcie_app_wr(lpp, val, PCIE_APP_IRNCR);
>> +
>> +	trace_printk("PCIe misc interrupt status 0x%x\n", reg);
>> +	return IRQ_HANDLED;
>> +}
> Why do we bother handling this interrupt?
This helps during debugging.
>
>> +
>> +static int intel_pcie_setup_irq(struct intel_pcie_port *lpp)
>> +{
>> +	struct device *dev = lpp->pci.dev;
>> +	struct platform_device *pdev;
>> +	char *irq_name;
>> +	int irq, ret;
>> +
>> +	pdev = to_platform_device(dev);
>> +	irq = platform_get_irq(pdev, 0);
>> +	if (irq < 0) {
>> +		dev_err(dev, "missing sys integrated irq resource\n");
>> +		return irq;
>> +	}
>> +
>> +	irq_name = devm_kasprintf(dev, GFP_KERNEL, "pcie_misc%d", lpp->id);
>> +	if (!irq_name) {
>> +		dev_err(dev, "failed to alloc irq name\n");
>> +		return -ENOMEM;
>> +	}
>> +
>> +	ret = devm_request_irq(dev, irq, intel_pcie_core_isr,
>> +			       IRQF_SHARED, irq_name, lpp);
>> +	if (ret) {
>> +		dev_err(dev, "request irq %d failed\n", irq);
>> +		return ret;
>> +	}
>> +	/* Enable integrated interrupts */
>> +	pcie_app_wr_mask(lpp, PCIE_APP_IRN_INT,
>> +			 PCIE_APP_IRN_INT, PCIE_APP_IRNEN);
>> +
>> +	return ret;
>> +}
>> +
>> +static void intel_pcie_core_irq_disable(struct intel_pcie_port *lpp)
>> +{
>> +	pcie_app_wr(lpp, 0, PCIE_APP_IRNEN);
>> +	pcie_app_wr(lpp, PCIE_APP_IRN_INT,  PCIE_APP_IRNCR);
>> +}
>> +
>> +static int intel_pcie_get_resources(struct platform_device *pdev)
>> +{
>> +	struct intel_pcie_port *lpp;
>> +	struct resource *res;
>> +	struct dw_pcie *pci;
>> +	struct device *dev;
>> +	int ret;
>> +
>> +	lpp = platform_get_drvdata(pdev);
>> +	pci = &lpp->pci;
>> +	dev = pci->dev;
>> +
>> +	ret = device_property_read_u32(dev, "linux,pci-domain", &lpp->id);
>> +	if (ret) {
>> +		dev_err(dev, "failed to get domain id, errno %d\n", ret);
>> +		return ret;
>> +	}
>> +
>> +	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi");
>> +	if (!res)
>> +		return -ENXIO;
>> +
>> +	pci->dbi_base = devm_ioremap_resource(dev, res);
>> +	if (IS_ERR(pci->dbi_base))
>> +		return PTR_ERR(pci->dbi_base);
>> +
>> +	lpp->core_clk = devm_clk_get(dev, NULL);
>> +	if (IS_ERR(lpp->core_clk)) {
>> +		ret = PTR_ERR(lpp->core_clk);
>> +		if (ret != -EPROBE_DEFER)
>> +			dev_err(dev, "failed to get clks: %d\n", ret);
>> +		return ret;
>> +	}
>> +
>> +	lpp->core_rst = devm_reset_control_get(dev, NULL);
>> +	if (IS_ERR(lpp->core_rst)) {
>> +		ret = PTR_ERR(lpp->core_rst);
>> +		if (ret != -EPROBE_DEFER)
>> +			dev_err(dev, "failed to get resets: %d\n", ret);
>> +		return ret;
>> +	}
>> +
>> +	ret = device_property_match_string(dev, "device_type", "pci");
>> +	if (ret) {
>> +		dev_err(dev, "failed to find pci device type: %d\n", ret);
>> +		return ret;
>> +	}
>> +
>> +	if (device_property_read_u32(dev, "reset-assert-ms",
>> +				     &lpp->rst_interval))
>> +		lpp->rst_interval = RST_INTRVL_DFT_MS;
>> +
>> +	if (device_property_read_u32(dev, "max-link-speed", &lpp->link_gen))
>> +		lpp->link_gen = 0; /* Fallback to auto */
> Is it possible to use of_pci_get_max_link_speed here instead?
Thanks for pointing it out. of_pci_get_max_link_speed() can be used here;
I will update it in the next patch revision.
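The helper reads the "max-link-speed" DT property and returns a negative value when the property is absent or out of range, so the fallback-to-auto logic becomes a simple sign check. A hedged userspace sketch, with the kernel helper stubbed (a plain int stands in for the parsed DT property):

```c
#include <errno.h>

/* Stand-in for of_pci_get_max_link_speed(): the real helper parses the
 * "max-link-speed" DT property and returns -EINVAL when the property is
 * missing or not a valid generation (1..4). */
static int of_pci_get_max_link_speed_stub(int prop)
{
	if (prop < 1 || prop > 4)
		return -EINVAL;
	return prop;
}

/* Driver-side fallback: a negative return selects auto (link_gen = 0). */
static int pick_link_gen(int prop)
{
	int ret = of_pci_get_max_link_speed_stub(prop);

	return ret < 0 ? 0 : ret;
}
```

This replaces the open-coded device_property_read_u32() plus manual fallback in intel_pcie_get_resources() with the common helper, and also gains the range validation for free.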
>
>> +
>> +	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "app");
>> +	if (!res)
>> +		return -ENXIO;
>> +
>> +	lpp->app_base = devm_ioremap_resource(dev, res);
>> +	if (IS_ERR(lpp->app_base))
>> +		return PTR_ERR(lpp->app_base);
>> +
>> +	ret = intel_pcie_ep_rst_init(lpp);
>> +	if (ret)
>> +		return ret;
> Given that this is called from a function '..._get_resources' I don't think
> we should be resetting anything here.
Agree. I will move it out of get_resources().
>
>> +
>> +	lpp->phy = devm_phy_get(dev, "pciephy");
>> +	if (IS_ERR(lpp->phy)) {
>> +		ret = PTR_ERR(lpp->phy);
>> +		if (ret != -EPROBE_DEFER)
>> +			dev_err(dev, "couldn't get pcie-phy: %d\n", ret);
>> +		return ret;
>> +	}
>> +	return 0;
>> +}
>> +
>> +static void intel_pcie_deinit_phy(struct intel_pcie_port *lpp)
>> +{
>> +	phy_exit(lpp->phy);
>> +}
>> +
>> +static int intel_pcie_wait_l2(struct intel_pcie_port *lpp)
>> +{
>> +	u32 value;
>> +	int ret;
>> +
>> +	if (lpp->max_speed < PCIE_LINK_SPEED_GEN3)
>> +		return 0;
>> +
>> +	/* Send PME_TURN_OFF message */
>> +	pcie_app_wr_mask(lpp, PCIE_APP_MSG_XMT_PM_TURNOFF,
>> +			 PCIE_APP_MSG_XMT_PM_TURNOFF, PCIE_APP_MSG_CR);
>> +
>> +	/* Read PMC status and wait for falling into L2 link state */
>> +	ret = readl_poll_timeout(lpp->app_base + PCIE_APP_PMC, value,
>> +				 (value & PCIE_APP_PMC_IN_L2), 20,
>> +				 jiffies_to_usecs(5 * HZ));
>> +	if (ret)
>> +		dev_err(lpp->pci.dev, "PCIe link enter L2 timeout!\n");
>> +
>> +	return ret;
>> +}
>> +
>> +static void intel_pcie_turn_off(struct intel_pcie_port *lpp)
>> +{
>> +	if (dw_pcie_link_up(&lpp->pci))
>> +		intel_pcie_wait_l2(lpp);
>> +
>> +	/* Put EP in reset state */
> EP?
Endpoint device. I will update the comment.
>
>> +	intel_pcie_device_rst_assert(lpp);
>> +	pcie_rc_cfg_wr_mask(lpp, PCI_COMMAND_MEMORY, 0, PCI_COMMAND);
>> +}
>> +
>> +static int intel_pcie_host_setup(struct intel_pcie_port *lpp)
>> +{
>> +	int ret;
>> +
>> +	intel_pcie_core_rst_assert(lpp);
>> +	intel_pcie_device_rst_assert(lpp);
>> +
>> +	ret = phy_init(lpp->phy);
>> +	if (ret)
>> +		return ret;
>> +
>> +	intel_pcie_core_rst_deassert(lpp);
>> +
>> +	ret = clk_prepare_enable(lpp->core_clk);
>> +	if (ret) {
>> +		dev_err(lpp->pci.dev, "Core clock enable failed: %d\n", ret);
>> +		goto clk_err;
>> +	}
>> +
>> +	intel_pcie_rc_setup(lpp);
>> +	ret = intel_pcie_app_logic_setup(lpp);
>> +	if (ret)
>> +		goto app_init_err;
>> +
>> +	ret = intel_pcie_setup_irq(lpp);
>> +	if (!ret)
>> +		return ret;
>> +
>> +	intel_pcie_turn_off(lpp);
>> +app_init_err:
>> +	clk_disable_unprepare(lpp->core_clk);
>> +clk_err:
>> +	intel_pcie_core_rst_assert(lpp);
>> +	intel_pcie_deinit_phy(lpp);
>> +	return ret;
>> +}
>> +
>> +static ssize_t
>> +pcie_link_status_show(struct device *dev, struct device_attribute *attr,
>> +		      char *buf)
>> +{
>> +	u32 reg, width, gen;
>> +	struct intel_pcie_port *lpp;
>> +
>> +	lpp = dev_get_drvdata(dev);
>> +
>> +	reg = pcie_rc_cfg_rd(lpp, PCIE_LCTLSTS);
>> +	width = FIELD_GET(PCIE_LCTLSTS_NEGOTIATED_LINK_WIDTH, reg);
>> +	gen = FIELD_GET(PCIE_LCTLSTS_LINK_SPEED, reg);
>> +	if (gen > lpp->max_speed)
>> +		return -EINVAL;
>> +
>> +	return sprintf(buf, "Port %2u Width x%u Speed %s GT/s\n", lpp->id,
>> +		       width, pcie_link_gen_to_str(gen));
>> +}
>> +static DEVICE_ATTR_RO(pcie_link_status);
>> +
>> +static ssize_t pcie_speed_store(struct device *dev,
>> +				struct device_attribute *attr,
>> +				const char *buf, size_t len)
>> +{
>> +	struct intel_pcie_port *lpp;
>> +	unsigned long val;
>> +	int ret;
>> +
>> +	lpp = dev_get_drvdata(dev);
>> +
>> +	ret = kstrtoul(buf, 10, &val);
>> +	if (ret)
>> +		return ret;
>> +
>> +	if (val > lpp->max_speed)
>> +		return -EINVAL;
>> +
>> +	lpp->link_gen = val;
>> +	intel_pcie_max_speed_setup(lpp);
>> +	intel_pcie_speed_change_disable(lpp);
>> +	intel_pcie_speed_change_enable(lpp);
>> +
>> +	return len;
>> +}
>> +static DEVICE_ATTR_WO(pcie_speed);
>> +
>> +/*
>> + * Link width change on the fly is not always successful.
>> + * It also depends on the partner.
>> + */
>> +static ssize_t pcie_width_store(struct device *dev,
>> +				struct device_attribute *attr,
>> +				const char *buf, size_t len)
>> +{
>> +	struct intel_pcie_port *lpp;
>> +	unsigned long val;
>> +
>> +	lpp = dev_get_drvdata(dev);
>> +
>> +	if (kstrtoul(buf, 10, &val))
>> +		return -EINVAL;
>> +
>> +	if (val > lpp->max_width)
>> +		return -EINVAL;
>> +
>> +	lpp->lanes = val;
>> +	intel_pcie_max_link_width_setup(lpp);
>> +
>> +	return len;
>> +}
>> +static DEVICE_ATTR_WO(pcie_width);
> You mentioned that a use-case for changing width/speed on the fly was to
> control power consumption (and this also helps debugging issues). As I
> understand there is no current support for this in the kernel - yet it is
> something that would provide value.
>
> I haven't looked in much detail, however as I understand the PCI spec
> allows an upstream partner to change the link speed and retrain. (I'm not
> sure about link width). Given that we already have
> [current,max]_link_[speed,width] is sysfs for each PCI device, it would
> seem natural to extend this to allow for writing a max width or speed.
>
> So ideally this type of thing would be moved to the core or at least in
> the dwc driver. This way the benefits can be applied to more devices on
> larger PCI fabrics - Though perhaps others here will have a different view
> and I'm keen to hear them.
>
> I'm keen to limit the differences between the DWC controller drivers and
> unify common code - thus it would be helpful to have a justification as to
> why this is only relevant for this controller.
>
> For user-space only control, it is possible to achieve what you have here
> with userspace utilities (something like setpci) (assuming the standard
> looking registers you currently access are exposed in the normal config
> space way - though PCIe capabilities).
>
> My suggestion would be to drop these changes and later add something that
> can benefit more devices. In any case if these changes stay within this
> driver then it would be helpful to move them to a separate patch.
Sure, I will submit these in a separate patch.
>
>> +
>> +static struct attribute *pcie_cfg_attrs[] = {
>> +	&dev_attr_pcie_link_status.attr,
>> +	&dev_attr_pcie_speed.attr,
>> +	&dev_attr_pcie_width.attr,
>> +	NULL,
>> +};
>> +ATTRIBUTE_GROUPS(pcie_cfg);
>> +
>> +static int intel_pcie_sysfs_init(struct intel_pcie_port *lpp)
>> +{
>> +	return devm_device_add_groups(lpp->pci.dev, pcie_cfg_groups);
>> +}
>> +
>> +static void __intel_pcie_remove(struct intel_pcie_port *lpp)
>> +{
>> +	pcie_rc_cfg_wr_mask(lpp, PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER,
>> +			    0, PCI_COMMAND);
>> +	intel_pcie_core_irq_disable(lpp);
>> +	intel_pcie_turn_off(lpp);
>> +	clk_disable_unprepare(lpp->core_clk);
>> +	intel_pcie_core_rst_assert(lpp);
>> +	intel_pcie_deinit_phy(lpp);
>> +}
>> +
>> +static int intel_pcie_remove(struct platform_device *pdev)
>> +{
>> +	struct intel_pcie_port *lpp = platform_get_drvdata(pdev);
>> +	struct pcie_port *pp = &lpp->pci.pp;
>> +
>> +	dw_pcie_host_deinit(pp);
>> +	__intel_pcie_remove(lpp);
>> +
>> +	return 0;
>> +}
>> +
>> +static int __maybe_unused intel_pcie_suspend_noirq(struct device *dev)
>> +{
>> +	struct intel_pcie_port *lpp = dev_get_drvdata(dev);
>> +	int ret;
>> +
>> +	intel_pcie_core_irq_disable(lpp);
>> +	ret = intel_pcie_wait_l2(lpp);
>> +	if (ret)
>> +		return ret;
>> +
>> +	intel_pcie_deinit_phy(lpp);
>> +	clk_disable_unprepare(lpp->core_clk);
>> +	return ret;
>> +}
>> +
>> +static int __maybe_unused intel_pcie_resume_noirq(struct device *dev)
>> +{
>> +	struct intel_pcie_port *lpp = dev_get_drvdata(dev);
>> +
>> +	return intel_pcie_host_setup(lpp);
>> +}
>> +
>> +static int intel_pcie_rc_init(struct pcie_port *pp)
>> +{
>> +	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
>> +	struct intel_pcie_port *lpp = dev_get_drvdata(pci->dev);
>> +	int ret;
>> +
>> +	/* RC/host initialization */
>> +	ret = intel_pcie_host_setup(lpp);
>> +	if (ret)
>> +		return ret;
> Insert new line here.
Ok.
>
>> +	ret = intel_pcie_sysfs_init(lpp);
>> +	if (ret)
>> +		__intel_pcie_remove(lpp);
>> +	return ret;
>> +}
>> +
>> +int intel_pcie_msi_init(struct pcie_port *pp)
>> +{
>> +	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
>> +
>> +	dev_dbg(pci->dev, "PCIe MSI/MSIx is handled by MSI in x86 processor\n");
> What about other processors? (Noting that this driver doesn't depend on
> any specific ARCH in the KConfig).
Agree. I will mark the dependency in Kconfig.
>
>> +	return 0;
>> +}
>> +
>> +u64 intel_pcie_cpu_addr(struct dw_pcie *pcie, u64 cpu_addr)
>> +{
>> +	return cpu_addr + BUS_IATU_OFFS;
>> +}
>> +
>> +static const struct dw_pcie_ops intel_pcie_ops = {
>> +	.cpu_addr_fixup = intel_pcie_cpu_addr,
>> +};
>> +
>> +static const struct dw_pcie_host_ops intel_pcie_dw_ops = {
>> +	.host_init =		intel_pcie_rc_init,
>> +	.msi_host_init =	intel_pcie_msi_init,
>> +};
>> +
>> +static const struct intel_pcie_soc pcie_data = {
>> +	.pcie_ver =		0x520A,
>> +	.pcie_atu_offset =	0xC0000,
>> +	.num_viewport =		3,
>> +};
>> +
>> +static int intel_pcie_probe(struct platform_device *pdev)
>> +{
>> +	const struct intel_pcie_soc *data;
>> +	struct device *dev = &pdev->dev;
>> +	struct intel_pcie_port *lpp;
>> +	struct pcie_port *pp;
>> +	struct dw_pcie *pci;
>> +	int ret;
>> +
>> +	lpp = devm_kzalloc(dev, sizeof(*lpp), GFP_KERNEL);
>> +	if (!lpp)
>> +		return -ENOMEM;
>> +
>> +	platform_set_drvdata(pdev, lpp);
>> +	pci = &lpp->pci;
>> +	pci->dev = dev;
>> +	pp = &pci->pp;
>> +
>> +	ret = intel_pcie_get_resources(pdev);
>> +	if (ret)
>> +		return ret;
>> +
>> +	data = device_get_match_data(dev);
>> +	pci->ops = &intel_pcie_ops;
>> +	pci->version = data->pcie_ver;
>> +	pci->atu_base = pci->dbi_base + data->pcie_atu_offset;
>> +	pp->ops = &intel_pcie_dw_ops;
>> +
>> +	ret = dw_pcie_host_init(pp);
>> +	if (ret) {
>> +		dev_err(dev, "cannot initialize host\n");
>> +		return ret;
>> +	}
> Add a new line after the closing brace.
Ok
>
>> +	/* Intel PCIe doesn't configure an IO region, so configure the
>> +	 * viewport not to access the IO region during register
>> +	 * read/write operations.
>> +	 */
>> +	pci->num_viewport = data->num_viewport;
>> +	dev_info(dev,
>> +		 "Intel AXI PCIe Root Complex Port %d Init Done\n", lpp->id);
>> +	return ret;
>> +}
>> +
>> +static const struct dev_pm_ops intel_pcie_pm_ops = {
>> +	SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(intel_pcie_suspend_noirq,
>> +				      intel_pcie_resume_noirq)
>> +};
>> +
>> +static const struct of_device_id of_intel_pcie_match[] = {
>> +	{ .compatible = "intel,lgm-pcie", .data = &pcie_data },
>> +	{}
>> +};
>> +
>> +static struct platform_driver intel_pcie_driver = {
>> +	.probe = intel_pcie_probe,
>> +	.remove = intel_pcie_remove,
>> +	.driver = {
>> +		.name = "intel-lgm-pcie",
> Is there a reason why the we use intel-lgm-pcie here and pcie-intel-axi
> elsewhere? What does lgm mean?
LGM is the name of the Intel SoC. I will rename it to pcie-intel-axi to be
generic.
>
> Thanks,
>
> Andrew Murray
>
>> +		.of_match_table = of_intel_pcie_match,
>> +		.pm = &intel_pcie_pm_ops,
>> +	},
>> +};
>> +builtin_platform_driver(intel_pcie_driver);
>> -- 
>> 2.11.0
>>

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v3 2/2] dwc: PCI: intel: Intel PCIe RC controller driver
  2019-09-06 10:58     ` Dilip Kota
@ 2019-09-06 11:20       ` Andrew Murray
  2019-09-09  6:51         ` Dilip Kota
  0 siblings, 1 reply; 37+ messages in thread
From: Andrew Murray @ 2019-09-06 11:20 UTC (permalink / raw)
  To: Dilip Kota
  Cc: jingoohan1, gustavo.pimentel, lorenzo.pieralisi, robh,
	martin.blumenstingl, linux-pci, hch, devicetree, linux-kernel,
	andriy.shevchenko, cheol.yong.kim, chuanhua.lei, qi-ming.wu

On Fri, Sep 06, 2019 at 06:58:11PM +0800, Dilip Kota wrote:
> Hi Andrew Murray,
> 
> Thanks for the review. Please find my response inline.
> 
> On 9/5/2019 6:45 PM, Andrew Murray wrote:
> > On Wed, Sep 04, 2019 at 06:10:31PM +0800, Dilip Kota wrote:
> > > Add support for the PCIe RC controller on the Intel Universal
> > > Gateway SoC. The PCIe controller is based on the Synopsys
> > > DesignWare PCIe core.
> > > 
> > > Signed-off-by: Dilip Kota <eswara.kota@linux.intel.com>
> > > ---
> > Hi Dilip,
> > 
> > Thanks for the patch, initial feedback below:
> > 
> > > changes on v3:
> > > 	Rename PCIe app logic registers with PCIE_APP prefix.
> > > 	PCIE_IOP_CTRL configuration is not required. Remove respective code.
> > > 	Remove wrapper functions for clk enable/disable APIs.
> > > 	Use platform_get_resource_byname() instead of
> > > 	  devm_platform_ioremap_resource() to be similar with DWC framework.
> > > 	Rename phy name to "pciephy".
> > > 	Modify debug message in msi_init() callback to be more specific.
> > > 	Remove map_irq() callback.
> > > 	Enable the INTx interrupts at the time of PCIe initialization.
> > > 	Reduce memory fragmentation by using variable "struct dw_pcie pci"
> > > 	  instead of allocating memory.
> > > 	Reduce the delay to 100us during endpoint initialization
> > > 	  intel_pcie_ep_rst_init().
> > > 	Call  dw_pcie_host_deinit() during remove() instead of directly
> > > 	  calling PCIe core APIs.
> > > 	Rename "intel,rst-interval" to "reset-assert-ms".
> > > 	Remove unused APP logic Interrupt bit macro definitions.
> > >   	Use dwc framework's dw_pcie_setup_rc() for PCIe host specific
> > > 	 configuration instead of redefining the same functionality in
> > > 	 the driver.
> > > 	Move the whole DT parsing specific code to intel_pcie_get_resources()
> > > 
> > >   drivers/pci/controller/dwc/Kconfig          |  13 +
> > >   drivers/pci/controller/dwc/Makefile         |   1 +
> > >   drivers/pci/controller/dwc/pcie-intel-axi.c | 840 ++++++++++++++++++++++++++++
> > >   3 files changed, 854 insertions(+)
> > >   create mode 100644 drivers/pci/controller/dwc/pcie-intel-axi.c
> > > 
> > > diff --git a/drivers/pci/controller/dwc/Kconfig b/drivers/pci/controller/dwc/Kconfig
> > > index 6ea778ae4877..e44b9b6a6390 100644
> > > --- a/drivers/pci/controller/dwc/Kconfig
> > > +++ b/drivers/pci/controller/dwc/Kconfig
> > > @@ -82,6 +82,19 @@ config PCIE_DW_PLAT_EP
> > >   	  order to enable device-specific features PCI_DW_PLAT_EP must be
> > >   	  selected.
> > > +config PCIE_INTEL_AXI
> > > +        bool "Intel AHB/AXI PCIe host controller support"
> > > +        depends on PCI_MSI
> > > +        depends on PCI
> > I don't think the PCI dependency is required here.
> > 
> > I'm also not sure why PCI_MSI is required, we select PCIE_DW_HOST which
> > depends on PCI_MSI_IRQ_DOMAIN which depends on PCI_MSI.
> Agree, dependency on PCI and PCI_MSI can be removed. I will remove it in
> next patch revision.
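For what it's worth, with those dependencies dropped the entry could look
roughly like this (just a sketch of the shape, not the final wording — and
note it also trims the implementation detail from the help text, per the
comment below):

```kconfig
config PCIE_INTEL_AXI
	bool "Intel AHB/AXI PCIe host controller support"
	depends on OF
	select PCIE_DW_HOST
	help
	  Say 'Y' here to enable support for the Intel AHB/AXI PCIe
	  host controller, based on the Synopsys DesignWare PCIe core.
```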
> > 
> > > +        depends on OF
> > > +        select PCIE_DW_HOST
> > > +        help
> > > +          Say 'Y' here to enable support for Intel AHB/AXI PCIe Host
> > > +	  controller driver.
> > > +	  The Intel PCIe controller is based on the Synopsys Designware
> > > +	  pcie core and therefore uses the Designware core functions to
> > > +	  implement the driver.
> > I can see this description is similar to others in the same Kconfig,
> > however I'm not sure what value a user gains by knowing implementation
> > details - it's helpful to know that PCIE_INTEL_AXI is based on the
> > Designware core, but is it helpful to know that the Designware core
> > functions are used?
> > 
> > > +
> > >   config PCI_EXYNOS
> > >   	bool "Samsung Exynos PCIe controller"
> > >   	depends on SOC_EXYNOS5440 || COMPILE_TEST
> > > diff --git a/drivers/pci/controller/dwc/Makefile b/drivers/pci/controller/dwc/Makefile
> > > index b085dfd4fab7..46e656ebdf90 100644
> > > --- a/drivers/pci/controller/dwc/Makefile
> > > +++ b/drivers/pci/controller/dwc/Makefile
> > > @@ -3,6 +3,7 @@ obj-$(CONFIG_PCIE_DW) += pcie-designware.o
> > >   obj-$(CONFIG_PCIE_DW_HOST) += pcie-designware-host.o
> > >   obj-$(CONFIG_PCIE_DW_EP) += pcie-designware-ep.o
> > >   obj-$(CONFIG_PCIE_DW_PLAT) += pcie-designware-plat.o
> > > +obj-$(CONFIG_PCIE_INTEL_AXI) += pcie-intel-axi.o
> > >   obj-$(CONFIG_PCI_DRA7XX) += pci-dra7xx.o
> > >   obj-$(CONFIG_PCI_EXYNOS) += pci-exynos.o
> > >   obj-$(CONFIG_PCI_IMX6) += pci-imx6.o
> > > diff --git a/drivers/pci/controller/dwc/pcie-intel-axi.c b/drivers/pci/controller/dwc/pcie-intel-axi.c
> > > new file mode 100644
> > > index 000000000000..75607ce03ebf
> > > --- /dev/null
> > > +++ b/drivers/pci/controller/dwc/pcie-intel-axi.c
> > > @@ -0,0 +1,840 @@
> > > +// SPDX-License-Identifier: GPL-2.0
> > > +/*
> > > + * PCIe host controller driver for Intel AXI PCIe Bridge
> > > + *
> > > + * Copyright (c) 2019 Intel Corporation.
> > > + */
> > > +
> > > +#include <linux/bitfield.h>
> > > +#include <linux/clk.h>
> > > +#include <linux/gpio/consumer.h>
> > > +#include <linux/interrupt.h>
> > > +#include <linux/iopoll.h>
> > > +#include <linux/of_irq.h>
> > > +#include <linux/of_pci.h>
> > > +#include <linux/of_platform.h>
> > > +#include <linux/phy/phy.h>
> > > +#include <linux/platform_device.h>
> > > +#include <linux/reset.h>
> > > +
> > > +#include "../../pci.h"
> > Please remove this - it isn't needed.
> Ok, will remove it in next patch revision.
> > 
> > > +#include "pcie-designware.h"
> > > +
> > > +#define PCIE_CCRID				0x8
> > > +
> > > +#define PCIE_LCAP				0x7C
> > > +#define PCIE_LCAP_MAX_LINK_SPEED		GENMASK(3, 0)
> > > +#define PCIE_LCAP_MAX_LENGTH_WIDTH		GENMASK(9, 4)
> > These look like the standard PCI Link Capabilities Register,
> > can you use the standard ones defined in pci_regs.h? (see
> > PCI_EXP_LNKCAP_SLS_*).
> > 
> > > +
> > > +/* Link Control and Status Register */
> > > +#define PCIE_LCTLSTS				0x80
> > > +#define PCIE_LCTLSTS_ASPM_ENABLE		GENMASK(1, 0)
> > > +#define PCIE_LCTLSTS_RCB128			BIT(3)
> > > +#define PCIE_LCTLSTS_LINK_DISABLE		BIT(4)
> > > +#define PCIE_LCTLSTS_COM_CLK_CFG		BIT(6)
> > > +#define PCIE_LCTLSTS_HW_AW_DIS			BIT(9)
> > > +#define PCIE_LCTLSTS_LINK_SPEED			GENMASK(19, 16)
> > > +#define PCIE_LCTLSTS_NEGOTIATED_LINK_WIDTH	GENMASK(25, 20)
> > > +#define PCIE_LCTLSTS_SLOT_CLK_CFG		BIT(28)
> > These look like the standard Link Control and Link Status register, can
> > you use the standard ones defined in pci_regs.h? (see PCI_EXP_LNKCTL_*
> > and PCI_EXP_LNKSTA_*).
> > 
> > > +
> > > +#define PCIE_LCTLSTS2				0xA0
> > > +#define PCIE_LCTLSTS2_TGT_LINK_SPEED		GENMASK(3, 0)
> > > +#define PCIE_LCTLSTS2_TGT_LINK_SPEED_25GT	0x1
> > > +#define PCIE_LCTLSTS2_TGT_LINK_SPEED_5GT	0x2
> > > +#define PCIE_LCTLSTS2_TGT_LINK_SPEED_8GT	0x3
> > > +#define PCIE_LCTLSTS2_TGT_LINK_SPEED_16GT	0x4
> > And these look like the standard Link Control Register 2 (see
> > PCI_EXP_LNKCTL2_*).
> > 
> > Most of the #defines above and below look just like standard PCI defines
> > (for example those found in pci_regs.h) - both in terms of their name and
> > what they do. Ideally where the functionality is the same or very similar,
> > then we should use the standard PCI defines in (pci_regs.h). This helps
> > readability/understanding, helps to identify duplicate code and reduces
> > the size of your patch.
> > 
> > Also the capability offsets (e.g. PCIE_LCTLSTS2) are also standard. The
> > offsets you define are offset by an additional 0x70. E.g.i
> > PCIE_LCTLSTS2 = 0xA0, and PCI_EXP_LNKCTL2 = 0x30. Perhaps abstracting
> > this offset will allow you to use the standard pci defines?
> > 
> > I haven't looked in much detail at the remainder of the defines, but
> > the same rationale should apply.
> Agree, that's a good point. I will define the offset macro and use the
> macros defined in pci_regs.h.
> > 
> > > +#define PCIE_LCTLSTS2_HW_AUTO_DIS		BIT(5)
> > > +
> > > +/* Ack Frequency Register */
> > > +#define PCIE_AFR				0x70C
> > > +#define PCIE_AFR_FTS_NUM			GENMASK(15, 8)
> > > +#define PCIE_AFR_COM_FTS_NUM			GENMASK(23, 16)
> > > +#define PCIE_AFR_GEN12_FTS_NUM_DFT		(SZ_128 - 1)
> > > +#define PCIE_AFR_GEN3_FTS_NUM_DFT		180
> > > +#define PCIE_AFR_GEN4_FTS_NUM_DFT		196
> > > +
> > > +#define PCIE_PLCR_DLL_LINK_EN			BIT(5)
> > > +#define PCIE_PORT_LOGIC_FTS			GENMASK(7, 0)
> > > +#define PCIE_PORT_LOGIC_DFT_FTS_NUM		(SZ_128 - 1)
> > > +
> > > +#define PCIE_MISC_CTRL				0x8BC
> > > +#define PCIE_MISC_CTRL_DBI_RO_WR_EN		BIT(0)
> > > +
> > > +#define PCIE_MULTI_LANE_CTRL			0x8C0
> > > +#define PCIE_UPCONFIG_SUPPORT			BIT(7)
> > > +#define PCIE_DIRECT_LINK_WIDTH_CHANGE		BIT(6)
> > > +#define PCIE_TARGET_LINK_WIDTH			GENMASK(5, 0)
> > > +
> > > +/* APP RC Core Control Register */
> > > +#define PCIE_APP_CCR				0x10
> > > +#define PCIE_APP_CCR_LTSSM_ENABLE		BIT(0)
> > > +
> > > +/* PCIe Message Control */
> > > +#define PCIE_APP_MSG_CR				0x30
> > > +#define PCIE_APP_MSG_XMT_PM_TURNOFF		BIT(0)
> > > +
> > > +/* PCIe Power Management Control */
> > > +#define PCIE_APP_PMC				0x44
> > > +#define PCIE_APP_PMC_IN_L2			BIT(20)
> > > +
> > > +/* Interrupt Enable Register */
> > > +#define PCIE_APP_IRNEN				0xF4
> > > +#define PCIE_APP_IRNCR				0xF8
> > > +#define PCIE_APP_IRN_AER_REPORT			BIT(0)
> > > +#define PCIE_APP_IRN_PME			BIT(2)
> > > +#define PCIE_APP_IRN_RX_VDM_MSG			BIT(4)
> > > +#define PCIE_APP_IRN_PM_TO_ACK			BIT(9)
> > > +#define PCIE_APP_IRN_LINK_AUTO_BW_STAT		BIT(11)
> > > +#define PCIE_APP_IRN_BW_MGT			BIT(12)
> > > +#define PCIE_APP_IRN_MSG_LTR			BIT(18)
> > > +#define PCIE_APP_IRN_SYS_ERR_RC			BIT(29)
> > > +
> > > +#define PCIE_APP_INTX_OFST	12
> > > +#define PCIE_APP_IRN_INT	(PCIE_APP_IRN_AER_REPORT | PCIE_APP_IRN_PME | \
> > > +			PCIE_APP_IRN_RX_VDM_MSG | PCIE_APP_IRN_SYS_ERR_RC | \
> > > +			PCIE_APP_IRN_PM_TO_ACK | PCIE_APP_IRN_MSG_LTR | \
> > > +			PCIE_APP_IRN_BW_MGT | PCIE_APP_IRN_LINK_AUTO_BW_STAT | \
> > > +			(PCIE_APP_INTX_OFST + PCI_INTERRUPT_INTA) | \
> > > +			(PCIE_APP_INTX_OFST + PCI_INTERRUPT_INTB) | \
> > > +			(PCIE_APP_INTX_OFST + PCI_INTERRUPT_INTC) | \
> > > +			(PCIE_APP_INTX_OFST + PCI_INTERRUPT_INTD))
> > > +
> > > +#define BUS_IATU_OFFS		SZ_256M
> > > +#define RST_INTRVL_DFT_MS	100
> > > +enum {
> > > +	PCIE_LINK_SPEED_AUTO = 0,
> > > +	PCIE_LINK_SPEED_GEN1,
> > > +	PCIE_LINK_SPEED_GEN2,
> > > +	PCIE_LINK_SPEED_GEN3,
> > > +	PCIE_LINK_SPEED_GEN4,
> > > +};
> > > +
> > > +struct intel_pcie_soc {
> > > +	unsigned int pcie_ver;
> > > +	unsigned int pcie_atu_offset;
> > > +	u32 num_viewport;
> > > +};
> > > +
> > > +struct intel_pcie_port {
> > > +	struct dw_pcie		pci;
> > > +	unsigned int		id; /* Physical RC Index */
> > > +	void __iomem		*app_base;
> > > +	struct gpio_desc	*reset_gpio;
> > > +	u32			rst_interval;
> > > +	u32			max_speed;
> > > +	u32			link_gen;
> > > +	u32			max_width;
> > > +	u32			lanes;
> > > +	struct clk		*core_clk;
> > > +	struct reset_control	*core_rst;
> > > +	struct phy		*phy;
> > > +};
> > > +
> > > +static void pcie_update_bits(void __iomem *base, u32 mask, u32 val, u32 ofs)
> > > +{
> > > +	u32 orig, tmp;
> > > +
> > > +	orig = readl(base + ofs);
> > > +
> > > +	tmp = (orig & ~mask) | (val & mask);
> > > +
> > > +	if (tmp != orig)
> > > +		writel(tmp, base + ofs);
> > > +}
> > > +
> > > +static inline u32 pcie_app_rd(struct intel_pcie_port *lpp, u32 ofs)
> > > +{
> > > +	return readl(lpp->app_base + ofs);
> > > +}
> > > +
> > > +static inline void pcie_app_wr(struct intel_pcie_port *lpp, u32 val, u32 ofs)
> > > +{
> > > +	writel(val, lpp->app_base + ofs);
> > > +}
> > > +
> > > +static void pcie_app_wr_mask(struct intel_pcie_port *lpp,
> > > +			     u32 mask, u32 val, u32 ofs)
> > > +{
> > > +	pcie_update_bits(lpp->app_base, mask, val, ofs);
> > > +}
> > > +
> > > +static inline u32 pcie_rc_cfg_rd(struct intel_pcie_port *lpp, u32 ofs)
> > > +{
> > > +	return dw_pcie_readl_dbi(&lpp->pci, ofs);
> > > +}
> > > +
> > > +static inline void pcie_rc_cfg_wr(struct intel_pcie_port *lpp, u32 val, u32 ofs)
> > > +{
> > > +	dw_pcie_writel_dbi(&lpp->pci, ofs, val);
> > > +}
> > > +
> > > +static void pcie_rc_cfg_wr_mask(struct intel_pcie_port *lpp,
> > > +				u32 mask, u32 val, u32 ofs)
> > > +{
> > > +	pcie_update_bits(lpp->pci.dbi_base, mask, val, ofs);
> > > +}
> > > +
> > > +static void intel_pcie_mem_iatu(struct intel_pcie_port *lpp)
> > > +{
> > > +	struct pcie_port *pp = &lpp->pci.pp;
> > > +	phys_addr_t cpu_addr = pp->mem_base;
> > > +
> > > +	dw_pcie_prog_outbound_atu(&lpp->pci, PCIE_ATU_REGION_INDEX0,
> > > +				  PCIE_ATU_TYPE_MEM, cpu_addr,
> > > +				  pp->mem_base, pp->mem_size);
> > > +}
> > > +
> > > +static void intel_pcie_ltssm_enable(struct intel_pcie_port *lpp)
> > > +{
> > > +	pcie_app_wr_mask(lpp, PCIE_APP_CCR_LTSSM_ENABLE,
> > > +			 PCIE_APP_CCR_LTSSM_ENABLE, PCIE_APP_CCR);
> > > +}
> > > +
> > > +static void intel_pcie_ltssm_disable(struct intel_pcie_port *lpp)
> > > +{
> > > +	pcie_app_wr_mask(lpp, PCIE_APP_CCR_LTSSM_ENABLE, 0, PCIE_APP_CCR);
> > > +}
> > > +
> > > +static const char *pcie_link_gen_to_str(int gen)
> > > +{
> > > +	switch (gen) {
> > > +	case PCIE_LINK_SPEED_GEN1:
> > > +		return "2.5";
> > > +	case PCIE_LINK_SPEED_GEN2:
> > > +		return "5.0";
> > > +	case PCIE_LINK_SPEED_GEN3:
> > > +		return "8.0";
> > > +	case PCIE_LINK_SPEED_GEN4:
> > > +		return "16.0";
> > > +	default:
> > > +		return "???";
> > > +	}
> > > +}
> > > +
> > > +static void intel_pcie_link_setup(struct intel_pcie_port *lpp)
> > > +{
> > > +	u32 val;
> > > +
> > > +	val = pcie_rc_cfg_rd(lpp, PCIE_LCAP);
> > > +	lpp->max_speed = FIELD_GET(PCIE_LCAP_MAX_LINK_SPEED, val);
> > > +	lpp->max_width = FIELD_GET(PCIE_LCAP_MAX_LENGTH_WIDTH, val);
> > > +
> > > +	val = pcie_rc_cfg_rd(lpp, PCIE_LCTLSTS);
> > > +
> > > +	val &= ~(PCIE_LCTLSTS_LINK_DISABLE | PCIE_LCTLSTS_ASPM_ENABLE);
> > > +	val |= (PCIE_LCTLSTS_SLOT_CLK_CFG | PCIE_LCTLSTS_COM_CLK_CFG |
> > > +		PCIE_LCTLSTS_RCB128);
> > > +	pcie_rc_cfg_wr(lpp, val, PCIE_LCTLSTS);
> > > +}
> > > +
> > > +static void intel_pcie_max_speed_setup(struct intel_pcie_port *lpp)
> > > +{
> > > +	u32 reg, val;
> > > +
> > > +	reg = pcie_rc_cfg_rd(lpp, PCIE_LCTLSTS2);
> > > +	switch (lpp->link_gen) {
> > > +	case PCIE_LINK_SPEED_GEN1:
> > > +		reg &= ~PCIE_LCTLSTS2_TGT_LINK_SPEED;
> > > +		reg |= PCIE_LCTLSTS2_HW_AUTO_DIS |
> > > +			PCIE_LCTLSTS2_TGT_LINK_SPEED_25GT;
> > > +		break;
> > > +	case PCIE_LINK_SPEED_GEN2:
> > > +		reg &= ~PCIE_LCTLSTS2_TGT_LINK_SPEED;
> > > +		reg |= PCIE_LCTLSTS2_HW_AUTO_DIS |
> > > +			PCIE_LCTLSTS2_TGT_LINK_SPEED_5GT;
> > > +		break;
> > > +	case PCIE_LINK_SPEED_GEN3:
> > > +		reg &= ~PCIE_LCTLSTS2_TGT_LINK_SPEED;
> > > +		reg |= PCIE_LCTLSTS2_HW_AUTO_DIS |
> > > +			PCIE_LCTLSTS2_TGT_LINK_SPEED_8GT;
> > > +		break;
> > > +	case PCIE_LINK_SPEED_GEN4:
> > > +		reg &= ~PCIE_LCTLSTS2_TGT_LINK_SPEED;
> > > +		reg |= PCIE_LCTLSTS2_HW_AUTO_DIS |
> > > +			PCIE_LCTLSTS2_TGT_LINK_SPEED_16GT;
> > > +		break;
> > > +	default:
> > > +		/* Use hardware capability */
> > > +		val = pcie_rc_cfg_rd(lpp, PCIE_LCAP);
> > > +		val = FIELD_GET(PCIE_LCAP_MAX_LINK_SPEED, val);
> > > +		reg &= ~PCIE_LCTLSTS2_HW_AUTO_DIS;
> > > +		reg |= val;
> > > +		break;
> > > +	}
> > > +	pcie_rc_cfg_wr(lpp, reg, PCIE_LCTLSTS2);
> > > +}
> > > +
> > > +static void intel_pcie_speed_change_enable(struct intel_pcie_port *lpp)
> > > +{
> > > +	u32 mask, val;
> > > +
> > > +	mask = PORT_LOGIC_SPEED_CHANGE | PCIE_PORT_LOGIC_FTS;
> > > +	val = PORT_LOGIC_SPEED_CHANGE | PCIE_PORT_LOGIC_DFT_FTS_NUM;
> > > +
> > > +	pcie_rc_cfg_wr_mask(lpp, mask, val, PCIE_LINK_WIDTH_SPEED_CONTROL);
> > > +}
> > > +
> > > +static void intel_pcie_speed_change_disable(struct intel_pcie_port *lpp)
> > > +{
> > > +	pcie_rc_cfg_wr_mask(lpp, PORT_LOGIC_SPEED_CHANGE, 0,
> > > +			    PCIE_LINK_WIDTH_SPEED_CONTROL);
> > > +}
> > > +
> > > +static void intel_pcie_max_link_width_setup(struct intel_pcie_port *lpp)
> > > +{
> > > +	u32 mask, val;
> > > +
> > > +	/* HW auto bandwidth negotiation must be enabled */
> > > +	pcie_rc_cfg_wr_mask(lpp, PCIE_LCTLSTS_HW_AW_DIS, 0, PCIE_LCTLSTS);
> > > +
> > > +	mask = PCIE_DIRECT_LINK_WIDTH_CHANGE | PCIE_TARGET_LINK_WIDTH;
> > > +	val = PCIE_DIRECT_LINK_WIDTH_CHANGE | lpp->lanes;
> > > +	pcie_rc_cfg_wr_mask(lpp, mask, val, PCIE_MULTI_LANE_CTRL);
> > Is this identical functionality to the writing of PCIE_PORT_LINK_CONTROL
> > in dw_pcie_setup?
> > 
> > I ask because if the user sets num-lanes in the DT, will it have the
> > desired effect?
> 
> intel_pcie_max_link_width_setup() is called by the sysfs attribute pcie_width_store() to change the link width on the fly.

Indeed, but a user may also set num-lanes in the device tree. I'm wondering
if, when set in device-tree, it will have the desired effect. Because I don't
see anything similar to PCIE_LCTLSTS_HW_AW_DIS in dw_pcie_setup which is what
your function does here.

I guess I'm trying to avoid the situation where num-lanes doesn't have the
desired effect and the only way to set num-lanes is through the sysfs
control.

> 
> > 
> > > +}
> > > +
> > > +static void intel_pcie_port_logic_setup(struct intel_pcie_port *lpp)
> > > +{
> > > +	u32 val, mask, fts;
> > > +
> > > +	switch (lpp->max_speed) {
> > > +	case PCIE_LINK_SPEED_GEN1:
> > > +	case PCIE_LINK_SPEED_GEN2:
> > > +		fts = PCIE_AFR_GEN12_FTS_NUM_DFT;
> > > +		break;
> > > +	case PCIE_LINK_SPEED_GEN3:
> > > +		fts = PCIE_AFR_GEN3_FTS_NUM_DFT;
> > > +		break;
> > > +	case PCIE_LINK_SPEED_GEN4:
> > > +		fts = PCIE_AFR_GEN4_FTS_NUM_DFT;
> > > +		break;
> > > +	default:
> > > +		fts = PCIE_AFR_GEN12_FTS_NUM_DFT;
> > > +		break;
> > > +	}
> > > +	mask = PCIE_AFR_FTS_NUM | PCIE_AFR_COM_FTS_NUM;
> > > +	val = FIELD_PREP(PCIE_AFR_FTS_NUM, fts) |
> > > +	       FIELD_PREP(PCIE_AFR_COM_FTS_NUM, fts);
> > > +	pcie_rc_cfg_wr_mask(lpp, mask, val, PCIE_AFR);
> > > +
> > > +	/* Port Link Control Register */
> > > +	pcie_rc_cfg_wr_mask(lpp, PCIE_PLCR_DLL_LINK_EN,
> > > +			    PCIE_PLCR_DLL_LINK_EN, PCIE_PORT_LINK_CONTROL);
> > > +}
> > > +
> > > +static void intel_pcie_upconfig_setup(struct intel_pcie_port *lpp)
> > > +{
> > > +	pcie_rc_cfg_wr_mask(lpp, PCIE_UPCONFIG_SUPPORT,
> > > +			    PCIE_UPCONFIG_SUPPORT, PCIE_MULTI_LANE_CTRL);
> > > +}
> > > +
> > > +static void intel_pcie_rc_setup(struct intel_pcie_port *lpp)
> > > +{
> > > +	intel_pcie_ltssm_disable(lpp);
> > > +	intel_pcie_link_setup(lpp);
> > > +	dw_pcie_setup_rc(&lpp->pci.pp);
> > > +	intel_pcie_upconfig_setup(lpp);
> > > +
> > > +	intel_pcie_max_speed_setup(lpp);
> > > +	intel_pcie_speed_change_enable(lpp);
> > > +	intel_pcie_port_logic_setup(lpp);
> > > +	intel_pcie_mem_iatu(lpp);
> > Doesn't dw_pcie_setup_rc do the same as intel_pcie_mem_iatu?
> Thanks for pointing it out. dw_pcie_setup_rc() does the same, so
> intel_pcie_mem_iatu() can be removed.

Excellent.

> > 
> > > +}
> > > +
> > > +static int intel_pcie_ep_rst_init(struct intel_pcie_port *lpp)
> > > +{
> > > +	struct device *dev = lpp->pci.dev;
> > > +	int ret;
> > > +
> > > +	lpp->reset_gpio = devm_gpiod_get(dev, "reset", GPIOD_OUT_LOW);
> > > +	if (IS_ERR(lpp->reset_gpio)) {
> > > +		ret = PTR_ERR(lpp->reset_gpio);
> > > +		if (ret != -EPROBE_DEFER)
> > > +			dev_err(dev, "failed to request PCIe GPIO: %d\n", ret);
> > > +		return ret;
> > > +	}
> > > +	/* Make initial reset last for 100us */
> > > +	usleep_range(100, 200);
> > > +
> > > +	return 0;
> > > +}
> > > +
> > > +static void intel_pcie_core_rst_assert(struct intel_pcie_port *lpp)
> > > +{
> > > +	reset_control_assert(lpp->core_rst);
> > > +}
> > > +
> > > +static void intel_pcie_core_rst_deassert(struct intel_pcie_port *lpp)
> > > +{
> > > +	/*
> > > +	 * One micro-second delay to make sure the reset pulse
> > > +	 * wide enough so that core reset is clean.
> > > +	 */
> > > +	udelay(1);
> > > +	reset_control_deassert(lpp->core_rst);
> > > +
> > > +	/*
> > > +	 * Some SoC core reset also reset PHY, more delay needed
> > > +	 * to make sure the reset process is done.
> > > +	 */
> > > +	usleep_range(1000, 2000);
> > > +}
> > > +
> > > +static void intel_pcie_device_rst_assert(struct intel_pcie_port *lpp)
> > > +{
> > > +	gpiod_set_value_cansleep(lpp->reset_gpio, 1);
> > > +}
> > > +
> > > +static void intel_pcie_device_rst_deassert(struct intel_pcie_port *lpp)
> > > +{
> > > +	msleep(lpp->rst_interval);
> > > +	gpiod_set_value_cansleep(lpp->reset_gpio, 0);
> > > +}
> > > +
> > > +static int intel_pcie_app_logic_setup(struct intel_pcie_port *lpp)
> > > +{
> > > +	intel_pcie_device_rst_deassert(lpp);
> > > +	intel_pcie_ltssm_enable(lpp);
> > > +
> > > +	return dw_pcie_wait_for_link(&lpp->pci);
> > > +}
> > > +
> > > +static irqreturn_t intel_pcie_core_isr(int irq, void *arg)
> > > +{
> > > +	struct intel_pcie_port *lpp = arg;
> > > +	u32 val, reg;
> > > +
> > > +	reg = pcie_app_rd(lpp, PCIE_APP_IRNCR);
> > > +	val = reg & PCIE_APP_IRN_INT;
> > > +
> > > +	pcie_app_wr(lpp, val, PCIE_APP_IRNCR);
> > > +
> > > +	trace_printk("PCIe misc interrupt status 0x%x\n", reg);
> > > +	return IRQ_HANDLED;
> > > +}
> > Why do we bother handling this interrupt?
> This helps during debugging.

I think it should be removed. It adds very little value to most users.

Most users won't have access to the datasheets to debug this properly, and
in any case if they could, then they would be competent to add an interrupt
handler themselves.

> > 
> > > +
> > > +static int intel_pcie_setup_irq(struct intel_pcie_port *lpp)
> > > +{
> > > +	struct device *dev = lpp->pci.dev;
> > > +	struct platform_device *pdev;
> > > +	char *irq_name;
> > > +	int irq, ret;
> > > +
> > > +	pdev = to_platform_device(dev);
> > > +	irq = platform_get_irq(pdev, 0);
> > > +	if (irq < 0) {
> > > +		dev_err(dev, "missing sys integrated irq resource\n");
> > > +		return irq;
> > > +	}
> > > +
> > > +	irq_name = devm_kasprintf(dev, GFP_KERNEL, "pcie_misc%d", lpp->id);
> > > +	if (!irq_name) {
> > > +		dev_err(dev, "failed to alloc irq name\n");
> > > +		return -ENOMEM;
> > > +	}
> > > +
> > > +	ret = devm_request_irq(dev, irq, intel_pcie_core_isr,
> > > +			       IRQF_SHARED, irq_name, lpp);
> > > +	if (ret) {
> > > +		dev_err(dev, "request irq %d failed\n", irq);
> > > +		return ret;
> > > +	}
> > > +	/* Enable integrated interrupts */
> > > +	pcie_app_wr_mask(lpp, PCIE_APP_IRN_INT,
> > > +			 PCIE_APP_IRN_INT, PCIE_APP_IRNEN);
> > > +
> > > +	return ret;
> > > +}
> > > +
> > > +static void intel_pcie_core_irq_disable(struct intel_pcie_port *lpp)
> > > +{
> > > +	pcie_app_wr(lpp, 0, PCIE_APP_IRNEN);
> > > +	pcie_app_wr(lpp, PCIE_APP_IRN_INT,  PCIE_APP_IRNCR);
> > > +}
> > > +
> > > +static int intel_pcie_get_resources(struct platform_device *pdev)
> > > +{
> > > +	struct intel_pcie_port *lpp;
> > > +	struct resource *res;
> > > +	struct dw_pcie *pci;
> > > +	struct device *dev;
> > > +	int ret;
> > > +
> > > +	lpp = platform_get_drvdata(pdev);
> > > +	pci = &lpp->pci;
> > > +	dev = pci->dev;
> > > +
> > > +	ret = device_property_read_u32(dev, "linux,pci-domain", &lpp->id);
> > > +	if (ret) {
> > > +		dev_err(dev, "failed to get domain id, errno %d\n", ret);
> > > +		return ret;
> > > +	}
> > > +
> > > +	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi");
> > > +	if (!res)
> > > +		return -ENXIO;
> > > +
> > > +	pci->dbi_base = devm_ioremap_resource(dev, res);
> > > +	if (IS_ERR(pci->dbi_base))
> > > +		return PTR_ERR(pci->dbi_base);
> > > +
> > > +	lpp->core_clk = devm_clk_get(dev, NULL);
> > > +	if (IS_ERR(lpp->core_clk)) {
> > > +		ret = PTR_ERR(lpp->core_clk);
> > > +		if (ret != -EPROBE_DEFER)
> > > +			dev_err(dev, "failed to get clks: %d\n", ret);
> > > +		return ret;
> > > +	}
> > > +
> > > +	lpp->core_rst = devm_reset_control_get(dev, NULL);
> > > +	if (IS_ERR(lpp->core_rst)) {
> > > +		ret = PTR_ERR(lpp->core_rst);
> > > +		if (ret != -EPROBE_DEFER)
> > > +			dev_err(dev, "failed to get resets: %d\n", ret);
> > > +		return ret;
> > > +	}
> > > +
> > > +	ret = device_property_match_string(dev, "device_type", "pci");
> > > +	if (ret) {
> > > +		dev_err(dev, "failed to find pci device type: %d\n", ret);
> > > +		return ret;
> > > +	}
> > > +
> > > +	if (device_property_read_u32(dev, "reset-assert-ms",
> > > +				     &lpp->rst_interval))
> > > +		lpp->rst_interval = RST_INTRVL_DFT_MS;
> > > +
> > > +	if (device_property_read_u32(dev, "max-link-speed", &lpp->link_gen))
> > > +		lpp->link_gen = 0; /* Fallback to auto */
> > Is it possible to use of_pci_get_max_link_speed here instead?
> Thanks for pointing it out. of_pci_get_max_link_speed() can be used here. I will
> update it in the next patch revision.
> > 
> > > +
> > > +	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "app");
> > > +	if (!res)
> > > +		return -ENXIO;
> > > +
> > > +	lpp->app_base = devm_ioremap_resource(dev, res);
> > > +	if (IS_ERR(lpp->app_base))
> > > +		return PTR_ERR(lpp->app_base);
> > > +
> > > +	ret = intel_pcie_ep_rst_init(lpp);
> > > +	if (ret)
> > > +		return ret;
> > Given that this is called from a function '..._get_resources' I don't think
> > we should be resetting anything here.
> Agree. I will move it out of get_resources().
> > 
> > > +
> > > +	lpp->phy = devm_phy_get(dev, "pciephy");
> > > +	if (IS_ERR(lpp->phy)) {
> > > +		ret = PTR_ERR(lpp->phy);
> > > +		if (ret != -EPROBE_DEFER)
> > > +			dev_err(dev, "couldn't get pcie-phy: %d\n", ret);
> > > +		return ret;
> > > +	}
> > > +	return 0;
> > > +}
> > > +
> > > +static void intel_pcie_deinit_phy(struct intel_pcie_port *lpp)
> > > +{
> > > +	phy_exit(lpp->phy);
> > > +}
> > > +
> > > +static int intel_pcie_wait_l2(struct intel_pcie_port *lpp)
> > > +{
> > > +	u32 value;
> > > +	int ret;
> > > +
> > > +	if (lpp->max_speed < PCIE_LINK_SPEED_GEN3)
> > > +		return 0;
> > > +
> > > +	/* Send PME_TURN_OFF message */
> > > +	pcie_app_wr_mask(lpp, PCIE_APP_MSG_XMT_PM_TURNOFF,
> > > +			 PCIE_APP_MSG_XMT_PM_TURNOFF, PCIE_APP_MSG_CR);
> > > +
> > > +	/* Read PMC status and wait for falling into L2 link state */
> > > +	ret = readl_poll_timeout(lpp->app_base + PCIE_APP_PMC, value,
> > > +				 (value & PCIE_APP_PMC_IN_L2), 20,
> > > +				 jiffies_to_usecs(5 * HZ));
> > > +	if (ret)
> > > +		dev_err(lpp->pci.dev, "PCIe link enter L2 timeout!\n");
> > > +
> > > +	return ret;
> > > +}
> > > +
> > > +static void intel_pcie_turn_off(struct intel_pcie_port *lpp)
> > > +{
> > > +	if (dw_pcie_link_up(&lpp->pci))
> > > +		intel_pcie_wait_l2(lpp);
> > > +
> > > +	/* Put EP in reset state */
> > EP?
> Endpoint device. I will update the comment.

Is this not a host bridge controller?

> > 
> > > +	intel_pcie_device_rst_assert(lpp);
> > > +	pcie_rc_cfg_wr_mask(lpp, PCI_COMMAND_MEMORY, 0, PCI_COMMAND);
> > > +}
> > > +
> > > +static int intel_pcie_host_setup(struct intel_pcie_port *lpp)
> > > +{
> > > +	int ret;
> > > +
> > > +	intel_pcie_core_rst_assert(lpp);
> > > +	intel_pcie_device_rst_assert(lpp);
> > > +
> > > +	ret = phy_init(lpp->phy);
> > > +	if (ret)
> > > +		return ret;
> > > +
> > > +	intel_pcie_core_rst_deassert(lpp);
> > > +
> > > +	ret = clk_prepare_enable(lpp->core_clk);
> > > +	if (ret) {
> > > +		dev_err(lpp->pci.dev, "Core clock enable failed: %d\n", ret);
> > > +		goto clk_err;
> > > +	}
> > > +
> > > +	intel_pcie_rc_setup(lpp);
> > > +	ret = intel_pcie_app_logic_setup(lpp);
> > > +	if (ret)
> > > +		goto app_init_err;
> > > +
> > > +	ret = intel_pcie_setup_irq(lpp);
> > > +	if (!ret)
> > > +		return ret;
> > > +
> > > +	intel_pcie_turn_off(lpp);
> > > +app_init_err:
> > > +	clk_disable_unprepare(lpp->core_clk);
> > > +clk_err:
> > > +	intel_pcie_core_rst_assert(lpp);
> > > +	intel_pcie_deinit_phy(lpp);
> > > +	return ret;
> > > +}
> > > +
> > > +static ssize_t
> > > +pcie_link_status_show(struct device *dev, struct device_attribute *attr,
> > > +		      char *buf)
> > > +{
> > > +	u32 reg, width, gen;
> > > +	struct intel_pcie_port *lpp;
> > > +
> > > +	lpp = dev_get_drvdata(dev);
> > > +
> > > +	reg = pcie_rc_cfg_rd(lpp, PCIE_LCTLSTS);
> > > +	width = FIELD_GET(PCIE_LCTLSTS_NEGOTIATED_LINK_WIDTH, reg);
> > > +	gen = FIELD_GET(PCIE_LCTLSTS_LINK_SPEED, reg);
> > > +	if (gen > lpp->max_speed)
> > > +		return -EINVAL;
> > > +
> > > +	return sprintf(buf, "Port %2u Width x%u Speed %s GT/s\n", lpp->id,
> > > +		       width, pcie_link_gen_to_str(gen));
> > > +}
> > > +static DEVICE_ATTR_RO(pcie_link_status);
> > > +
> > > +static ssize_t pcie_speed_store(struct device *dev,
> > > +				struct device_attribute *attr,
> > > +				const char *buf, size_t len)
> > > +{
> > > +	struct intel_pcie_port *lpp;
> > > +	unsigned long val;
> > > +	int ret;
> > > +
> > > +	lpp = dev_get_drvdata(dev);
> > > +
> > > +	ret = kstrtoul(buf, 10, &val);
> > > +	if (ret)
> > > +		return ret;
> > > +
> > > +	if (val > lpp->max_speed)
> > > +		return -EINVAL;
> > > +
> > > +	lpp->link_gen = val;
> > > +	intel_pcie_max_speed_setup(lpp);
> > > +	intel_pcie_speed_change_disable(lpp);
> > > +	intel_pcie_speed_change_enable(lpp);
> > > +
> > > +	return len;
> > > +}
> > > +static DEVICE_ATTR_WO(pcie_speed);
> > > +
> > > +/*
> > > + * Link width change on the fly is not always successful.
> > > + * It also depends on the partner.
> > > + */
> > > +static ssize_t pcie_width_store(struct device *dev,
> > > +				struct device_attribute *attr,
> > > +				const char *buf, size_t len)
> > > +{
> > > +	struct intel_pcie_port *lpp;
> > > +	unsigned long val;
> > > +
> > > +	lpp = dev_get_drvdata(dev);
> > > +
> > > +	if (kstrtoul(buf, 10, &val))
> > > +		return -EINVAL;
> > > +
> > > +	if (val > lpp->max_width)
> > > +		return -EINVAL;
> > > +
> > > +	lpp->lanes = val;
> > > +	intel_pcie_max_link_width_setup(lpp);
> > > +
> > > +	return len;
> > > +}
> > > +static DEVICE_ATTR_WO(pcie_width);
> > You mentioned that a use-case for changing width/speed on the fly was to
> > control power consumption (and this also helps debugging issues). As I
> > understand there is no current support for this in the kernel - yet it is
> > something that would provide value.
> > 
> > I haven't looked in much detail, however as I understand the PCI spec
> > allows an upstream partner to change the link speed and retrain. (I'm not
> > sure about link width). Given that we already have
> > [current,max]_link_[speed,width] is sysfs for each PCI device, it would
> > seem natural to extend this to allow for writing a max width or speed.
> > 
> > So ideally this type of thing would be moved to the core or at least in
> > the dwc driver. This way the benefits can be applied to more devices on
> > larger PCI fabrics - Though perhaps others here will have a different view
> > and I'm keen to hear them.
> > 
> > I'm keen to limit the differences between the DWC controller drivers and
> > unify common code - thus it would be helpful to have a justification as to
> > why this is only relevant for this controller.
> > 
> > For user-space only control, it is possible to achieve what you have here
> > with userspace utilities (something like setpci) (assuming the standard
> > looking registers you currently access are exposed in the normal config
> > space way - though PCIe capabilities).
> > 
> > My suggestion would be to drop these changes and later add something that
> > can benefit more devices. In any case if these changes stay within this
> > driver then it would be helpful to move them to a separate patch.
> Sure, I will submit these entities in a separate patch.

Please ensure that all supporting macros, functions and defines go with that
other patch as well.

> > 
> > > +
> > > +static struct attribute *pcie_cfg_attrs[] = {
> > > +	&dev_attr_pcie_link_status.attr,
> > > +	&dev_attr_pcie_speed.attr,
> > > +	&dev_attr_pcie_width.attr,
> > > +	NULL,
> > > +};
> > > +ATTRIBUTE_GROUPS(pcie_cfg);
> > > +
> > > +static int intel_pcie_sysfs_init(struct intel_pcie_port *lpp)
> > > +{
> > > +	return devm_device_add_groups(lpp->pci.dev, pcie_cfg_groups);
> > > +}
> > > +
> > > +static void __intel_pcie_remove(struct intel_pcie_port *lpp)
> > > +{
> > > +	pcie_rc_cfg_wr_mask(lpp, PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER,
> > > +			    0, PCI_COMMAND);
> > > +	intel_pcie_core_irq_disable(lpp);
> > > +	intel_pcie_turn_off(lpp);
> > > +	clk_disable_unprepare(lpp->core_clk);
> > > +	intel_pcie_core_rst_assert(lpp);
> > > +	intel_pcie_deinit_phy(lpp);
> > > +}
> > > +
> > > +static int intel_pcie_remove(struct platform_device *pdev)
> > > +{
> > > +	struct intel_pcie_port *lpp = platform_get_drvdata(pdev);
> > > +	struct pcie_port *pp = &lpp->pci.pp;
> > > +
> > > +	dw_pcie_host_deinit(pp);
> > > +	__intel_pcie_remove(lpp);
> > > +
> > > +	return 0;
> > > +}
> > > +
> > > +static int __maybe_unused intel_pcie_suspend_noirq(struct device *dev)
> > > +{
> > > +	struct intel_pcie_port *lpp = dev_get_drvdata(dev);
> > > +	int ret;
> > > +
> > > +	intel_pcie_core_irq_disable(lpp);
> > > +	ret = intel_pcie_wait_l2(lpp);
> > > +	if (ret)
> > > +		return ret;
> > > +
> > > +	intel_pcie_deinit_phy(lpp);
> > > +	clk_disable_unprepare(lpp->core_clk);
> > > +	return ret;
> > > +}
> > > +
> > > +static int __maybe_unused intel_pcie_resume_noirq(struct device *dev)
> > > +{
> > > +	struct intel_pcie_port *lpp = dev_get_drvdata(dev);
> > > +
> > > +	return intel_pcie_host_setup(lpp);
> > > +}
> > > +
> > > +static int intel_pcie_rc_init(struct pcie_port *pp)
> > > +{
> > > +	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
> > > +	struct intel_pcie_port *lpp = dev_get_drvdata(pci->dev);
> > > +	int ret;
> > > +
> > > +	/* RC/host initialization */
> > > +	ret = intel_pcie_host_setup(lpp);
> > > +	if (ret)
> > > +		return ret;
> > Insert new line here.
> Ok.
> > 
> > > +	ret = intel_pcie_sysfs_init(lpp);
> > > +	if (ret)
> > > +		__intel_pcie_remove(lpp);
> > > +	return ret;
> > > +}
> > > +
> > > +int intel_pcie_msi_init(struct pcie_port *pp)
> > > +{
> > > +	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
> > > +
> > > +	dev_dbg(pci->dev, "PCIe MSI/MSIx is handled by MSI in x86 processor\n");
> > What about other processors? (Noting that this driver doesn't depend on
> > any specific ARCH in the KConfig).
> Agree. I will mark the dependency in Kconfig.

OK, please also see how other drivers use the COMPILE_TEST Kconfig option.
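For reference, the Kconfig pattern being suggested looks roughly like this (a
sketch only - the final symbol name and dependency list are Dilip's call, and
the X86 dependency is the one agreed above, not something already in the patch):

```
config PCIE_INTEL_AXI
	bool "Intel AHB/AXI PCIe host controller support"
	depends on X86 || COMPILE_TEST
	depends on OF
	select PCIE_DW_HOST
```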

I'd suggest that the dev_dbg just becomes a code comment. 

> > 
> > > +	return 0;
> > > +}
> > > +
> > > +u64 intel_pcie_cpu_addr(struct dw_pcie *pcie, u64 cpu_addr)
> > > +{
> > > +	return cpu_addr + BUS_IATU_OFFS;
> > > +}
> > > +
> > > +static const struct dw_pcie_ops intel_pcie_ops = {
> > > +	.cpu_addr_fixup = intel_pcie_cpu_addr,
> > > +};
> > > +
> > > +static const struct dw_pcie_host_ops intel_pcie_dw_ops = {
> > > +	.host_init =		intel_pcie_rc_init,
> > > +	.msi_host_init =	intel_pcie_msi_init,
> > > +};
> > > +
> > > +static const struct intel_pcie_soc pcie_data = {
> > > +	.pcie_ver =		0x520A,
> > > +	.pcie_atu_offset =	0xC0000,
> > > +	.num_viewport =		3,
> > > +};
> > > +
> > > +static int intel_pcie_probe(struct platform_device *pdev)
> > > +{
> > > +	const struct intel_pcie_soc *data;
> > > +	struct device *dev = &pdev->dev;
> > > +	struct intel_pcie_port *lpp;
> > > +	struct pcie_port *pp;
> > > +	struct dw_pcie *pci;
> > > +	int ret;
> > > +
> > > +	lpp = devm_kzalloc(dev, sizeof(*lpp), GFP_KERNEL);
> > > +	if (!lpp)
> > > +		return -ENOMEM;
> > > +
> > > +	platform_set_drvdata(pdev, lpp);
> > > +	pci = &lpp->pci;
> > > +	pci->dev = dev;
> > > +	pp = &pci->pp;
> > > +
> > > +	ret = intel_pcie_get_resources(pdev);
> > > +	if (ret)
> > > +		return ret;
> > > +
> > > +	data = device_get_match_data(dev);
> > > +	pci->ops = &intel_pcie_ops;
> > > +	pci->version = data->pcie_ver;
> > > +	pci->atu_base = pci->dbi_base + data->pcie_atu_offset;
> > > +	pp->ops = &intel_pcie_dw_ops;
> > > +
> > > +	ret = dw_pcie_host_init(pp);
> > > +	if (ret) {
> > > +		dev_err(dev, "cannot initialize host\n");
> > > +		return ret;
> > > +	}
> > Add a new line after the closing brace.
> Ok
> > 
> > > +	/* Intel PCIe doesn't configure the IO region, so configure
> > > +	 * the viewport to not access the IO region during register
> > > +	 * read/write operations.
> > > +	 */
> > > +	pci->num_viewport = data->num_viewport;
> > > +	dev_info(dev,
> > > +		 "Intel AXI PCIe Root Complex Port %d Init Done\n", lpp->id);
> > > +	return ret;
> > > +}
> > > +
> > > +static const struct dev_pm_ops intel_pcie_pm_ops = {
> > > +	SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(intel_pcie_suspend_noirq,
> > > +				      intel_pcie_resume_noirq)
> > > +};
> > > +
> > > +static const struct of_device_id of_intel_pcie_match[] = {
> > > +	{ .compatible = "intel,lgm-pcie", .data = &pcie_data },
> > > +	{}
> > > +};
> > > +
> > > +static struct platform_driver intel_pcie_driver = {
> > > +	.probe = intel_pcie_probe,
> > > +	.remove = intel_pcie_remove,
> > > +	.driver = {
> > > +		.name = "intel-lgm-pcie",
> > Is there a reason why the we use intel-lgm-pcie here and pcie-intel-axi
> > elsewhere? What does lgm mean?
> LGM is the name of the Intel SoC. I will rename it to pcie-intel-axi to be
> more generic.

I'm keen to ensure that it is consistently named. I've seen other comments
regarding what the name should be - I don't have a strong opinion though
I do think that *-axi may be too generic.

Thanks,

Andrew Murray

> > 
> > Thanks,
> > 
> > Andrew Murray
> > 
> > > +		.of_match_table = of_intel_pcie_match,
> > > +		.pm = &intel_pcie_pm_ops,
> > > +	},
> > > +};
> > > +builtin_platform_driver(intel_pcie_driver);
> > > -- 
> > > 2.11.0
> > > 

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v3 1/2] dt-bindings: PCI: intel: Add YAML schemas for the PCIe RC controller
  2019-09-06  3:22     ` Chuan Hua, Lei
@ 2019-09-06 17:17       ` Martin Blumenstingl
  2019-09-06 17:48         ` Andy Shevchenko
  0 siblings, 1 reply; 37+ messages in thread
From: Martin Blumenstingl @ 2019-09-06 17:17 UTC (permalink / raw)
  To: Chuan Hua, Lei
  Cc: Dilip Kota, jingoohan1, gustavo.pimentel, lorenzo.pieralisi,
	robh, linux-pci, hch, devicetree, linux-kernel,
	andriy.shevchenko, cheol.yong.kim, qi-ming.wu

On Fri, Sep 6, 2019 at 5:22 AM Chuan Hua, Lei
<chuanhua.lei@linux.intel.com> wrote:
[...]
> >> +examples:
> >> +  - |
> >> +    pcie10:pcie@d0e00000 {
> >> +      compatible = "intel,lgm-pcie";
> >> +      device_type = "pci";
> >> +      #address-cells = <3>;
> >> +      #size-cells = <2>;
> >> +      reg = <
> >> +            0xd0e00000 0x1000
> >> +            0xd2000000 0x800000
> >> +            0xd0a41000 0x1000
> >> +            >;
> >> +      reg-names = "dbi", "config", "app";
> >> +      linux,pci-domain = <0>;
> >> +      max-link-speed = <4>;
> >> +      bus-range = <0x00 0x08>;
> >> +      interrupt-parent = <&ioapic1>;
> >> +      interrupts = <67 1>;
> >> +      #interrupt-cells = <1>;
> >> +      interrupt-map-mask = <0 0 0 0x7>;
> >> +      interrupt-map = <0 0 0 1 &ioapic1 27 1>,
> >> +                      <0 0 0 2 &ioapic1 28 1>,
> >> +                      <0 0 0 3 &ioapic1 29 1>,
> >> +                      <0 0 0 4 &ioapic1 30 1>;
> > is the "1" in the interrupts and interrupt-map properties IRQ_TYPE_EDGE_RISING?
> > you can use these macros in this example as well, see
> > Documentation/devicetree/bindings/iio/accel/adi,adxl372.yaml for
> > example
>
> No. 1 here is an index into of_ioapic_type[] in arch/x86/kernel/devicetree.c
>
> static struct of_ioapic_type of_ioapic_type[] =
> {
>      {
>          .out_type    = IRQ_TYPE_EDGE_RISING,
>          .trigger    = IOAPIC_EDGE,
>          .polarity    = 1,
>      },
>      {
>          .out_type    = IRQ_TYPE_LEVEL_LOW,
>          .trigger    = IOAPIC_LEVEL,
>          .polarity    = 0,
>      },
>      {
>          .out_type    = IRQ_TYPE_LEVEL_HIGH,
>          .trigger    = IOAPIC_LEVEL,
>          .polarity    = 1,
>      },
>      {
>          .out_type    = IRQ_TYPE_EDGE_FALLING,
>          .trigger    = IOAPIC_EDGE,
>          .polarity    = 0,
>      },
> };
>
> static int dt_irqdomain_alloc(struct irq_domain *domain, unsigned int virq,
>                    unsigned int nr_irqs, void *arg)
> {
>      struct irq_fwspec *fwspec = (struct irq_fwspec *)arg;
>      struct of_ioapic_type *it;
>      struct irq_alloc_info tmp;
>      int type_index;
>
>      if (WARN_ON(fwspec->param_count < 2))
>          return -EINVAL;
>
>      type_index = fwspec->param[1]; // index.
>      if (type_index >= ARRAY_SIZE(of_ioapic_type))
>          return -EINVAL;
>
> I would not say this definition is user-friendly. But it is how x86
> handles it at the moment.
Thank you for explaining this - I had no idea x86 is different from
all other platforms I know.
The only upstream x86 .dts I could find
(arch/x86/platform/ce4100/falconfalls.dts) also uses the magic x86
numbers, so I'm fine with this until someone else knows a better solution.


Martin

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v3 1/2] dt-bindings: PCI: intel: Add YAML schemas for the PCIe RC controller
  2019-09-06 17:17       ` Martin Blumenstingl
@ 2019-09-06 17:48         ` Andy Shevchenko
  2019-09-07  1:48           ` Ivan Gorinov
  0 siblings, 1 reply; 37+ messages in thread
From: Andy Shevchenko @ 2019-09-06 17:48 UTC (permalink / raw)
  To: Martin Blumenstingl, Ivan Gorinov
  Cc: Chuan Hua, Lei, Dilip Kota, jingoohan1, gustavo.pimentel,
	lorenzo.pieralisi, robh, linux-pci, hch, devicetree,
	linux-kernel, cheol.yong.kim, qi-ming.wu

On Fri, Sep 06, 2019 at 07:17:11PM +0200, Martin Blumenstingl wrote:
> On Fri, Sep 6, 2019 at 5:22 AM Chuan Hua, Lei
> <chuanhua.lei@linux.intel.com> wrote:

> >      type_index = fwspec->param[1]; // index.
> >      if (type_index >= ARRAY_SIZE(of_ioapic_type))
> >          return -EINVAL;
> >
> > I would not see this definition is user-friendly. But it is how x86
> > handles at the moment.
> thank you for explaining this - I had no idea x86 is different from
> all other platforms I know
> the only upstream x86 .dts I could find
> (arch/x86/platform/ce4100/falconfalls.dts) also uses the magic x86
> numbers
> so I'm fine with this until someone else knows a better solution

Ivan, Cc'ed, has done a few amendments to x86 DT support. Perhaps he can add
something to the discussion.

-- 
With Best Regards,
Andy Shevchenko



^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v3 1/2] dt-bindings: PCI: intel: Add YAML schemas for the PCIe RC controller
  2019-09-06 17:48         ` Andy Shevchenko
@ 2019-09-07  1:48           ` Ivan Gorinov
  0 siblings, 0 replies; 37+ messages in thread
From: Ivan Gorinov @ 2019-09-07  1:48 UTC (permalink / raw)
  To: Andy Shevchenko
  Cc: Martin Blumenstingl, Chuan Hua, Lei, Dilip Kota, jingoohan1,
	gustavo.pimentel, lorenzo.pieralisi, robh, linux-pci, hch,
	devicetree, linux-kernel, cheol.yong.kim, qi-ming.wu

On Fri, Sep 06, 2019 at 08:48:15PM +0300, Andy Shevchenko wrote:
> On Fri, Sep 06, 2019 at 07:17:11PM +0200, Martin Blumenstingl wrote:
> > On Fri, Sep 6, 2019 at 5:22 AM Chuan Hua, Lei
> > <chuanhua.lei@linux.intel.com> wrote:
> 
> > >      type_index = fwspec->param[1]; // index.
> > >      if (type_index >= ARRAY_SIZE(of_ioapic_type))
> > >          return -EINVAL;
> > >
> > > I would not see this definition is user-friendly. But it is how x86
> > > handles at the moment.
> > thank you for explaining this - I had no idea x86 is different from
> > all other platforms I know
> > the only upstream x86 .dts I could find
> > (arch/x86/platform/ce4100/falconfalls.dts) also uses the magic x86
> > numbers
> > so I'm fine with this until someone else knows a better solution
> 
> Ivan, Cc'ed, had done few amendments to x86 DT support. Perhaps he may add
> something to the discussion.

I just fixed broken interrupt support in the x86-specific DT implementation.

In CE4100, PCI devices are directly connected to I/O APIC input lines.
Conventional PCI devices other than bridges don't need to be described in
the Device Tree if they use standard PCI interrupt routing.
Mapping the INTA .. INTD pins to inputs of the bridge's interrupt parent
depends on the device number on the bus. In the Device Tree, this mapping is
described by the "interrupt-map-mask" and "interrupt-map" properties of the
bridge device node.

Possible interrupt types described by the Open Firmware Recommended Practice:

    0 - Rising Edge
    1 - Level triggered, active low


^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v3 2/2] dwc: PCI: intel: Intel PCIe RC controller driver
  2019-09-06 11:20       ` Andrew Murray
@ 2019-09-09  6:51         ` Dilip Kota
  2019-09-09  8:31           ` Andrew Murray
  0 siblings, 1 reply; 37+ messages in thread
From: Dilip Kota @ 2019-09-09  6:51 UTC (permalink / raw)
  To: Andrew Murray
  Cc: jingoohan1, gustavo.pimentel, lorenzo.pieralisi, robh,
	martin.blumenstingl, linux-pci, hch, devicetree, linux-kernel,
	andriy.shevchenko, cheol.yong.kim, chuanhua.lei, qi-ming.wu


On 9/6/2019 7:20 PM, Andrew Murray wrote:
> On Fri, Sep 06, 2019 at 06:58:11PM +0800, Dilip Kota wrote:
>> Hi Andrew Murray,
>>
>> Thanks for the review. Please find my response inline.
>>
>> On 9/5/2019 6:45 PM, Andrew Murray wrote:
>>> On Wed, Sep 04, 2019 at 06:10:31PM +0800, Dilip Kota wrote:
>>>> Add support for the PCIe RC controller on the Intel Universal
>>>> Gateway SoC. The PCIe controller is based on the Synopsys
>>>> DesignWare PCIe core.
>>>>
>>>> Signed-off-by: Dilip Kota <eswara.kota@linux.intel.com>
>>>> ---
>>> Hi Dilip,
>>>
>>> Thanks for the patch, initial feedback below:
>>>
>>>> changes on v3:
>>>> 	Rename PCIe app logic registers with PCIE_APP prefix.
>>>> 	PCIE_IOP_CTRL configuration is not required. Remove respective code.
>>>> 	Remove wrapper functions for clk enable/disable APIs.
>>>> 	Use platform_get_resource_byname() instead of
>>>> 	  devm_platform_ioremap_resource() to be similar with DWC framework.
>>>> 	Rename phy name to "pciephy".
>>>> 	Modify debug message in msi_init() callback to be more specific.
>>>> 	Remove map_irq() callback.
>>>> 	Enable the INTx interrupts at the time of PCIe initialization.
>>>> 	Reduce memory fragmentation by using variable "struct dw_pcie pci"
>>>> 	  instead of allocating memory.
>>>> 	Reduce the delay to 100us during endpoint initialization
>>>> 	  intel_pcie_ep_rst_init().
>>>> 	Call  dw_pcie_host_deinit() during remove() instead of directly
>>>> 	  calling PCIe core APIs.
>>>> 	Rename "intel,rst-interval" to "reset-assert-ms".
>>>> 	Remove unused APP logic Interrupt bit macro definitions.
>>>>    	Use dwc framework's dw_pcie_setup_rc() for PCIe host specific
>>>> 	 configuration instead of redefining the same functionality in
>>>> 	 the driver.
>>>> 	Move the whole DT parsing specific code to intel_pcie_get_resources()
>>>>
>>>>    drivers/pci/controller/dwc/Kconfig          |  13 +
>>>>    drivers/pci/controller/dwc/Makefile         |   1 +
>>>>    drivers/pci/controller/dwc/pcie-intel-axi.c | 840 ++++++++++++++++++++++++++++
>>>>    3 files changed, 854 insertions(+)
>>>>    create mode 100644 drivers/pci/controller/dwc/pcie-intel-axi.c
>>>>
>>>> diff --git a/drivers/pci/controller/dwc/Kconfig b/drivers/pci/controller/dwc/Kconfig
>>>> index 6ea778ae4877..e44b9b6a6390 100644
>>>> --- a/drivers/pci/controller/dwc/Kconfig
>>>> +++ b/drivers/pci/controller/dwc/Kconfig
>>>> @@ -82,6 +82,19 @@ config PCIE_DW_PLAT_EP
>>>>    	  order to enable device-specific features PCI_DW_PLAT_EP must be
>>>>    	  selected.
>>>> +config PCIE_INTEL_AXI
>>>> +        bool "Intel AHB/AXI PCIe host controller support"
>>>> +        depends on PCI_MSI
>>>> +        depends on PCI
>>> I don't think the PCI dependency is required here.
>>>
>>> I'm also not sure why PCI_MSI is required, we select PCIE_DW_HOST which
>>> depends on PCI_MSI_IRQ_DOMAIN which depends on PCI_MSI.
>> Agree, dependency on PCI and PCI_MSI can be removed. I will remove it in
>> next patch revision.
>>>> +        depends on OF
>>>> +        select PCIE_DW_HOST
>>>> +        help
>>>> +          Say 'Y' here to enable support for Intel AHB/AXI PCIe Host
>>>> +	  controller driver.
>>>> +	  The Intel PCIe controller is based on the Synopsys Designware
>>>> +	  pcie core and therefore uses the Designware core functions to
>>>> +	  implement the driver.
>>> I can see this description is similar to others in the same Kconfig,
>>> however I'm not sure what value a user gains by knowing implementation
>>> details - it's helpful to know that PCIE_INTEL_AXI is based on the
>>> Designware core, but is it helpful to know that the Designware core
>>> functions are used?
>>>
>>>> +
>>>>    config PCI_EXYNOS
>>>>    	bool "Samsung Exynos PCIe controller"
>>>>    	depends on SOC_EXYNOS5440 || COMPILE_TEST
>>>> diff --git a/drivers/pci/controller/dwc/Makefile b/drivers/pci/controller/dwc/Makefile
>>>> index b085dfd4fab7..46e656ebdf90 100644
>>>> --- a/drivers/pci/controller/dwc/Makefile
>>>> +++ b/drivers/pci/controller/dwc/Makefile
>>>> @@ -3,6 +3,7 @@ obj-$(CONFIG_PCIE_DW) += pcie-designware.o
>>>>    obj-$(CONFIG_PCIE_DW_HOST) += pcie-designware-host.o
>>>>    obj-$(CONFIG_PCIE_DW_EP) += pcie-designware-ep.o
>>>>    obj-$(CONFIG_PCIE_DW_PLAT) += pcie-designware-plat.o
>>>> +obj-$(CONFIG_PCIE_INTEL_AXI) += pcie-intel-axi.o
>>>>    obj-$(CONFIG_PCI_DRA7XX) += pci-dra7xx.o
>>>>    obj-$(CONFIG_PCI_EXYNOS) += pci-exynos.o
>>>>    obj-$(CONFIG_PCI_IMX6) += pci-imx6.o
>>>> diff --git a/drivers/pci/controller/dwc/pcie-intel-axi.c b/drivers/pci/controller/dwc/pcie-intel-axi.c
>>>> new file mode 100644
>>>> index 000000000000..75607ce03ebf
>>>> --- /dev/null
>>>> +++ b/drivers/pci/controller/dwc/pcie-intel-axi.c
>>>> @@ -0,0 +1,840 @@
>>>> +// SPDX-License-Identifier: GPL-2.0
>>>> +/*
>>>> + * PCIe host controller driver for Intel AXI PCIe Bridge
>>>> + *
>>>> + * Copyright (c) 2019 Intel Corporation.
>>>> + */
>>>> +
>>>> +#include <linux/bitfield.h>
>>>> +#include <linux/clk.h>
>>>> +#include <linux/gpio/consumer.h>
>>>> +#include <linux/interrupt.h>
>>>> +#include <linux/iopoll.h>
>>>> +#include <linux/of_irq.h>
>>>> +#include <linux/of_pci.h>
>>>> +#include <linux/of_platform.h>
>>>> +#include <linux/phy/phy.h>
>>>> +#include <linux/platform_device.h>
>>>> +#include <linux/reset.h>
>>>> +
>>>> +#include "../../pci.h"
>>> Please remove this - it isn't needed.
>> Ok, will remove it in next patch revision.
>>>> +#include "pcie-designware.h"
>>>> +
>>>> +#define PCIE_CCRID				0x8
>>>> +
>>>> +#define PCIE_LCAP				0x7C
>>>> +#define PCIE_LCAP_MAX_LINK_SPEED		GENMASK(3, 0)
>>>> +#define PCIE_LCAP_MAX_LENGTH_WIDTH		GENMASK(9, 4)
>>> These look like the standard PCI Link Capabilities Register,
>>> can you use the standard ones defined in pci_regs.h? (see
>>> PCI_EXP_LNKCAP_SLS_*).
>>>
>>>> +
>>>> +/* Link Control and Status Register */
>>>> +#define PCIE_LCTLSTS				0x80
>>>> +#define PCIE_LCTLSTS_ASPM_ENABLE		GENMASK(1, 0)
>>>> +#define PCIE_LCTLSTS_RCB128			BIT(3)
>>>> +#define PCIE_LCTLSTS_LINK_DISABLE		BIT(4)
>>>> +#define PCIE_LCTLSTS_COM_CLK_CFG		BIT(6)
>>>> +#define PCIE_LCTLSTS_HW_AW_DIS			BIT(9)
>>>> +#define PCIE_LCTLSTS_LINK_SPEED			GENMASK(19, 16)
>>>> +#define PCIE_LCTLSTS_NEGOTIATED_LINK_WIDTH	GENMASK(25, 20)
>>>> +#define PCIE_LCTLSTS_SLOT_CLK_CFG		BIT(28)
>>> These look like the standard Link Control and Link Status register, can
>>> you use the standard ones defined in pci_regs.h? (see PCI_EXP_LNKCTL_*
>>> and PCI_EXP_LNKSTA_*).
>>>
>>>> +
>>>> +#define PCIE_LCTLSTS2				0xA0
>>>> +#define PCIE_LCTLSTS2_TGT_LINK_SPEED		GENMASK(3, 0)
>>>> +#define PCIE_LCTLSTS2_TGT_LINK_SPEED_25GT	0x1
>>>> +#define PCIE_LCTLSTS2_TGT_LINK_SPEED_5GT	0x2
>>>> +#define PCIE_LCTLSTS2_TGT_LINK_SPEED_8GT	0x3
>>>> +#define PCIE_LCTLSTS2_TGT_LINK_SPEED_16GT	0x4
>>> And these look like the standard Link Control Register 2 (see
>>> PCI_EXP_LNKCTL2_*).
>>>
>>> Most of the #defines above and below look just like standard PCI defines
>>> (for example those found in pci_regs.h) - both in terms of their name and
>>> what they do. Ideally where the functionality is the same or very similar,
>>> then we should use the standard PCI defines in (pci_regs.h). This helps
>>> readability/understanding, helps to identify duplicate code and reduces
>>> the size of your patch.
>>>
>>> Also the capability offsets (e.g. PCIE_LCTLSTS2) are also standard. The
>>> offsets you define are offset by an additional 0x70. E.g.i
>>> PCIE_LCTLSTS2 = 0xA0, and PCI_EXP_LNKCTL2 = 0x30. Perhaps abstracting
>>> this offset will allow you to use the standard pci defines?
>>>
>>> I haven't looked in much detail at the remainder of the defines, but
>>> the same rationale should apply.
>> Agree, that's a good point. I will define the offset macro and use the
>> macros defined in pci_regs.h.
>>>> +#define PCIE_LCTLSTS2_HW_AUTO_DIS		BIT(5)
>>>> +
>>>> +/* Ack Frequency Register */
>>>> +#define PCIE_AFR				0x70C
>>>> +#define PCIE_AFR_FTS_NUM			GENMASK(15, 8)
>>>> +#define PCIE_AFR_COM_FTS_NUM			GENMASK(23, 16)
>>>> +#define PCIE_AFR_GEN12_FTS_NUM_DFT		(SZ_128 - 1)
>>>> +#define PCIE_AFR_GEN3_FTS_NUM_DFT		180
>>>> +#define PCIE_AFR_GEN4_FTS_NUM_DFT		196
>>>> +
>>>> +#define PCIE_PLCR_DLL_LINK_EN			BIT(5)
>>>> +#define PCIE_PORT_LOGIC_FTS			GENMASK(7, 0)
>>>> +#define PCIE_PORT_LOGIC_DFT_FTS_NUM		(SZ_128 - 1)
>>>> +
>>>> +#define PCIE_MISC_CTRL				0x8BC
>>>> +#define PCIE_MISC_CTRL_DBI_RO_WR_EN		BIT(0)
>>>> +
>>>> +#define PCIE_MULTI_LANE_CTRL			0x8C0
>>>> +#define PCIE_UPCONFIG_SUPPORT			BIT(7)
>>>> +#define PCIE_DIRECT_LINK_WIDTH_CHANGE		BIT(6)
>>>> +#define PCIE_TARGET_LINK_WIDTH			GENMASK(5, 0)
>>>> +
>>>> +/* APP RC Core Control Register */
>>>> +#define PCIE_APP_CCR				0x10
>>>> +#define PCIE_APP_CCR_LTSSM_ENABLE		BIT(0)
>>>> +
>>>> +/* PCIe Message Control */
>>>> +#define PCIE_APP_MSG_CR				0x30
>>>> +#define PCIE_APP_MSG_XMT_PM_TURNOFF		BIT(0)
>>>> +
>>>> +/* PCIe Power Management Control */
>>>> +#define PCIE_APP_PMC				0x44
>>>> +#define PCIE_APP_PMC_IN_L2			BIT(20)
>>>> +
>>>> +/* Interrupt Enable Register */
>>>> +#define PCIE_APP_IRNEN				0xF4
>>>> +#define PCIE_APP_IRNCR				0xF8
>>>> +#define PCIE_APP_IRN_AER_REPORT			BIT(0)
>>>> +#define PCIE_APP_IRN_PME			BIT(2)
>>>> +#define PCIE_APP_IRN_RX_VDM_MSG			BIT(4)
>>>> +#define PCIE_APP_IRN_PM_TO_ACK			BIT(9)
>>>> +#define PCIE_APP_IRN_LINK_AUTO_BW_STAT		BIT(11)
>>>> +#define PCIE_APP_IRN_BW_MGT			BIT(12)
>>>> +#define PCIE_APP_IRN_MSG_LTR			BIT(18)
>>>> +#define PCIE_APP_IRN_SYS_ERR_RC			BIT(29)
>>>> +
>>>> +#define PCIE_APP_INTX_OFST	12
>>>> +#define PCIE_APP_IRN_INT	(PCIE_APP_IRN_AER_REPORT | PCIE_APP_IRN_PME | \
>>>> +			PCIE_APP_IRN_RX_VDM_MSG | PCIE_APP_IRN_SYS_ERR_RC | \
>>>> +			PCIE_APP_IRN_PM_TO_ACK | PCIE_APP_IRN_MSG_LTR | \
>>>> +			PCIE_APP_IRN_BW_MGT | PCIE_APP_IRN_LINK_AUTO_BW_STAT | \
>>>> +			(PCIE_APP_INTX_OFST + PCI_INTERRUPT_INTA) | \
>>>> +			(PCIE_APP_INTX_OFST + PCI_INTERRUPT_INTB) | \
>>>> +			(PCIE_APP_INTX_OFST + PCI_INTERRUPT_INTC) | \
>>>> +			(PCIE_APP_INTX_OFST + PCI_INTERRUPT_INTD))
>>>> +
>>>> +#define BUS_IATU_OFFS		SZ_256M
>>>> +#define RST_INTRVL_DFT_MS	100
>>>> +enum {
>>>> +	PCIE_LINK_SPEED_AUTO = 0,
>>>> +	PCIE_LINK_SPEED_GEN1,
>>>> +	PCIE_LINK_SPEED_GEN2,
>>>> +	PCIE_LINK_SPEED_GEN3,
>>>> +	PCIE_LINK_SPEED_GEN4,
>>>> +};
>>>> +
>>>> +struct intel_pcie_soc {
>>>> +	unsigned int pcie_ver;
>>>> +	unsigned int pcie_atu_offset;
>>>> +	u32 num_viewport;
>>>> +};
>>>> +
>>>> +struct intel_pcie_port {
>>>> +	struct dw_pcie		pci;
>>>> +	unsigned int		id; /* Physical RC Index */
>>>> +	void __iomem		*app_base;
>>>> +	struct gpio_desc	*reset_gpio;
>>>> +	u32			rst_interval;
>>>> +	u32			max_speed;
>>>> +	u32			link_gen;
>>>> +	u32			max_width;
>>>> +	u32			lanes;
>>>> +	struct clk		*core_clk;
>>>> +	struct reset_control	*core_rst;
>>>> +	struct phy		*phy;
>>>> +};
>>>> +
>>>> +static void pcie_update_bits(void __iomem *base, u32 mask, u32 val, u32 ofs)
>>>> +{
>>>> +	u32 orig, tmp;
>>>> +
>>>> +	orig = readl(base + ofs);
>>>> +
>>>> +	tmp = (orig & ~mask) | (val & mask);
>>>> +
>>>> +	if (tmp != orig)
>>>> +		writel(tmp, base + ofs);
>>>> +}
>>>> +
>>>> +static inline u32 pcie_app_rd(struct intel_pcie_port *lpp, u32 ofs)
>>>> +{
>>>> +	return readl(lpp->app_base + ofs);
>>>> +}
>>>> +
>>>> +static inline void pcie_app_wr(struct intel_pcie_port *lpp, u32 val, u32 ofs)
>>>> +{
>>>> +	writel(val, lpp->app_base + ofs);
>>>> +}
>>>> +
>>>> +static void pcie_app_wr_mask(struct intel_pcie_port *lpp,
>>>> +			     u32 mask, u32 val, u32 ofs)
>>>> +{
>>>> +	pcie_update_bits(lpp->app_base, mask, val, ofs);
>>>> +}
>>>> +
>>>> +static inline u32 pcie_rc_cfg_rd(struct intel_pcie_port *lpp, u32 ofs)
>>>> +{
>>>> +	return dw_pcie_readl_dbi(&lpp->pci, ofs);
>>>> +}
>>>> +
>>>> +static inline void pcie_rc_cfg_wr(struct intel_pcie_port *lpp, u32 val, u32 ofs)
>>>> +{
>>>> +	dw_pcie_writel_dbi(&lpp->pci, ofs, val);
>>>> +}
>>>> +
>>>> +static void pcie_rc_cfg_wr_mask(struct intel_pcie_port *lpp,
>>>> +				u32 mask, u32 val, u32 ofs)
>>>> +{
>>>> +	pcie_update_bits(lpp->pci.dbi_base, mask, val, ofs);
>>>> +}
>>>> +
>>>> +static void intel_pcie_mem_iatu(struct intel_pcie_port *lpp)
>>>> +{
>>>> +	struct pcie_port *pp = &lpp->pci.pp;
>>>> +	phys_addr_t cpu_addr = pp->mem_base;
>>>> +
>>>> +	dw_pcie_prog_outbound_atu(&lpp->pci, PCIE_ATU_REGION_INDEX0,
>>>> +				  PCIE_ATU_TYPE_MEM, cpu_addr,
>>>> +				  pp->mem_base, pp->mem_size);
>>>> +}
>>>> +
>>>> +static void intel_pcie_ltssm_enable(struct intel_pcie_port *lpp)
>>>> +{
>>>> +	pcie_app_wr_mask(lpp, PCIE_APP_CCR_LTSSM_ENABLE,
>>>> +			 PCIE_APP_CCR_LTSSM_ENABLE, PCIE_APP_CCR);
>>>> +}
>>>> +
>>>> +static void intel_pcie_ltssm_disable(struct intel_pcie_port *lpp)
>>>> +{
>>>> +	pcie_app_wr_mask(lpp, PCIE_APP_CCR_LTSSM_ENABLE, 0, PCIE_APP_CCR);
>>>> +}
>>>> +
>>>> +static const char *pcie_link_gen_to_str(int gen)
>>>> +{
>>>> +	switch (gen) {
>>>> +	case PCIE_LINK_SPEED_GEN1:
>>>> +		return "2.5";
>>>> +	case PCIE_LINK_SPEED_GEN2:
>>>> +		return "5.0";
>>>> +	case PCIE_LINK_SPEED_GEN3:
>>>> +		return "8.0";
>>>> +	case PCIE_LINK_SPEED_GEN4:
>>>> +		return "16.0";
>>>> +	default:
>>>> +		return "???";
>>>> +	}
>>>> +}
>>>> +
>>>> +static void intel_pcie_link_setup(struct intel_pcie_port *lpp)
>>>> +{
>>>> +	u32 val;
>>>> +
>>>> +	val = pcie_rc_cfg_rd(lpp, PCIE_LCAP);
>>>> +	lpp->max_speed = FIELD_GET(PCIE_LCAP_MAX_LINK_SPEED, val);
>>>> +	lpp->max_width = FIELD_GET(PCIE_LCAP_MAX_LENGTH_WIDTH, val);
>>>> +
>>>> +	val = pcie_rc_cfg_rd(lpp, PCIE_LCTLSTS);
>>>> +
>>>> +	val &= ~(PCIE_LCTLSTS_LINK_DISABLE | PCIE_LCTLSTS_ASPM_ENABLE);
>>>> +	val |= (PCIE_LCTLSTS_SLOT_CLK_CFG | PCIE_LCTLSTS_COM_CLK_CFG |
>>>> +		PCIE_LCTLSTS_RCB128);
>>>> +	pcie_rc_cfg_wr(lpp, val, PCIE_LCTLSTS);
>>>> +}
>>>> +
>>>> +static void intel_pcie_max_speed_setup(struct intel_pcie_port *lpp)
>>>> +{
>>>> +	u32 reg, val;
>>>> +
>>>> +	reg = pcie_rc_cfg_rd(lpp, PCIE_LCTLSTS2);
>>>> +	switch (lpp->link_gen) {
>>>> +	case PCIE_LINK_SPEED_GEN1:
>>>> +		reg &= ~PCIE_LCTLSTS2_TGT_LINK_SPEED;
>>>> +		reg |= PCIE_LCTLSTS2_HW_AUTO_DIS |
>>>> +			PCIE_LCTLSTS2_TGT_LINK_SPEED_25GT;
>>>> +		break;
>>>> +	case PCIE_LINK_SPEED_GEN2:
>>>> +		reg &= ~PCIE_LCTLSTS2_TGT_LINK_SPEED;
>>>> +		reg |= PCIE_LCTLSTS2_HW_AUTO_DIS |
>>>> +			PCIE_LCTLSTS2_TGT_LINK_SPEED_5GT;
>>>> +		break;
>>>> +	case PCIE_LINK_SPEED_GEN3:
>>>> +		reg &= ~PCIE_LCTLSTS2_TGT_LINK_SPEED;
>>>> +		reg |= PCIE_LCTLSTS2_HW_AUTO_DIS |
>>>> +			PCIE_LCTLSTS2_TGT_LINK_SPEED_8GT;
>>>> +		break;
>>>> +	case PCIE_LINK_SPEED_GEN4:
>>>> +		reg &= ~PCIE_LCTLSTS2_TGT_LINK_SPEED;
>>>> +		reg |= PCIE_LCTLSTS2_HW_AUTO_DIS |
>>>> +			PCIE_LCTLSTS2_TGT_LINK_SPEED_16GT;
>>>> +		break;
>>>> +	default:
>>>> +		/* Use hardware capability */
>>>> +		val = pcie_rc_cfg_rd(lpp, PCIE_LCAP);
>>>> +		val = FIELD_GET(PCIE_LCAP_MAX_LINK_SPEED, val);
>>>> +		reg &= ~PCIE_LCTLSTS2_HW_AUTO_DIS;
>>>> +		reg |= val;
>>>> +		break;
>>>> +	}
>>>> +	pcie_rc_cfg_wr(lpp, reg, PCIE_LCTLSTS2);
>>>> +}
>>>> +
>>>> +static void intel_pcie_speed_change_enable(struct intel_pcie_port *lpp)
>>>> +{
>>>> +	u32 mask, val;
>>>> +
>>>> +	mask = PORT_LOGIC_SPEED_CHANGE | PCIE_PORT_LOGIC_FTS;
>>>> +	val = PORT_LOGIC_SPEED_CHANGE | PCIE_PORT_LOGIC_DFT_FTS_NUM;
>>>> +
>>>> +	pcie_rc_cfg_wr_mask(lpp, mask, val, PCIE_LINK_WIDTH_SPEED_CONTROL);
>>>> +}
>>>> +
>>>> +static void intel_pcie_speed_change_disable(struct intel_pcie_port *lpp)
>>>> +{
>>>> +	pcie_rc_cfg_wr_mask(lpp, PORT_LOGIC_SPEED_CHANGE, 0,
>>>> +			    PCIE_LINK_WIDTH_SPEED_CONTROL);
>>>> +}
>>>> +
>>>> +static void intel_pcie_max_link_width_setup(struct intel_pcie_port *lpp)
>>>> +{
>>>> +	u32 mask, val;
>>>> +
>>>> +	/* HW auto bandwidth negotiation must be enabled */
>>>> +	pcie_rc_cfg_wr_mask(lpp, PCIE_LCTLSTS_HW_AW_DIS, 0, PCIE_LCTLSTS);
>>>> +
>>>> +	mask = PCIE_DIRECT_LINK_WIDTH_CHANGE | PCIE_TARGET_LINK_WIDTH;
>>>> +	val = PCIE_DIRECT_LINK_WIDTH_CHANGE | lpp->lanes;
>>>> +	pcie_rc_cfg_wr_mask(lpp, mask, val, PCIE_MULTI_LANE_CTRL);
>>> Is this identical functionality to the writing of PCIE_PORT_LINK_CONTROL
>>> in dw_pcie_setup?
>>>
>>> I ask because if the user sets num-lanes in the DT, will it have the
>>> desired effect?
>> The intel_pcie_max_link_width_setup() function will be called by the sysfs
>> attribute handler pcie_width_store() to change the link width on the fly.
> Indeed, but a user may also set num-lanes in the device tree. I'm wondering
> if, when set in device-tree, it will have the desired effect. Because I don't
> see anything similar to PCIE_LCTLSTS_HW_AW_DIS in dw_pcie_setup which is what
> your function does here.
>
> I guess I'm trying to avoid the situation where num-lanes doesn't have the
> desired effect and the only way to set the num-lanes is throught the sysfs
> control.
I will check this and get back to you.
>
>>>> +}
>>>> +
>>>> +static void intel_pcie_port_logic_setup(struct intel_pcie_port *lpp)
>>>> +{
>>>> +	u32 val, mask, fts;
>>>> +
>>>> +	switch (lpp->max_speed) {
>>>> +	case PCIE_LINK_SPEED_GEN1:
>>>> +	case PCIE_LINK_SPEED_GEN2:
>>>> +		fts = PCIE_AFR_GEN12_FTS_NUM_DFT;
>>>> +		break;
>>>> +	case PCIE_LINK_SPEED_GEN3:
>>>> +		fts = PCIE_AFR_GEN3_FTS_NUM_DFT;
>>>> +		break;
>>>> +	case PCIE_LINK_SPEED_GEN4:
>>>> +		fts = PCIE_AFR_GEN4_FTS_NUM_DFT;
>>>> +		break;
>>>> +	default:
>>>> +		fts = PCIE_AFR_GEN12_FTS_NUM_DFT;
>>>> +		break;
>>>> +	}
>>>> +	mask = PCIE_AFR_FTS_NUM | PCIE_AFR_COM_FTS_NUM;
>>>> +	val = FIELD_PREP(PCIE_AFR_FTS_NUM, fts) |
>>>> +	       FIELD_PREP(PCIE_AFR_COM_FTS_NUM, fts);
>>>> +	pcie_rc_cfg_wr_mask(lpp, mask, val, PCIE_AFR);
>>>> +
>>>> +	/* Port Link Control Register */
>>>> +	pcie_rc_cfg_wr_mask(lpp, PCIE_PLCR_DLL_LINK_EN,
>>>> +			    PCIE_PLCR_DLL_LINK_EN, PCIE_PORT_LINK_CONTROL);
>>>> +}
>>>> +
>>>> +static void intel_pcie_upconfig_setup(struct intel_pcie_port *lpp)
>>>> +{
>>>> +	pcie_rc_cfg_wr_mask(lpp, PCIE_UPCONFIG_SUPPORT,
>>>> +			    PCIE_UPCONFIG_SUPPORT, PCIE_MULTI_LANE_CTRL);
>>>> +}
>>>> +
>>>> +static void intel_pcie_rc_setup(struct intel_pcie_port *lpp)
>>>> +{
>>>> +	intel_pcie_ltssm_disable(lpp);
>>>> +	intel_pcie_link_setup(lpp);
>>>> +	dw_pcie_setup_rc(&lpp->pci.pp);
>>>> +	intel_pcie_upconfig_setup(lpp);
>>>> +
>>>> +	intel_pcie_max_speed_setup(lpp);
>>>> +	intel_pcie_speed_change_enable(lpp);
>>>> +	intel_pcie_port_logic_setup(lpp);
>>>> +	intel_pcie_mem_iatu(lpp);
>>> Doesn't dw_pcie_setup_rc do the same as intel_pcie_mem_iatu?
>> Thanks for pointing it. dw_pcie_setup_rc() does.
>> intel_pcie_mem_iatu can be removed.
> Excellent.
>
>>>> +}
>>>> +
>>>> +static int intel_pcie_ep_rst_init(struct intel_pcie_port *lpp)
>>>> +{
>>>> +	struct device *dev = lpp->pci.dev;
>>>> +	int ret;
>>>> +
>>>> +	lpp->reset_gpio = devm_gpiod_get(dev, "reset", GPIOD_OUT_LOW);
>>>> +	if (IS_ERR(lpp->reset_gpio)) {
>>>> +		ret = PTR_ERR(lpp->reset_gpio);
>>>> +		if (ret != -EPROBE_DEFER)
>>>> +			dev_err(dev, "failed to request PCIe GPIO: %d\n", ret);
>>>> +		return ret;
>>>> +	}
>>>> +	/* Make initial reset last for 100us */
>>>> +	usleep_range(100, 200);
>>>> +
>>>> +	return 0;
>>>> +}
>>>> +
>>>> +static void intel_pcie_core_rst_assert(struct intel_pcie_port *lpp)
>>>> +{
>>>> +	reset_control_assert(lpp->core_rst);
>>>> +}
>>>> +
>>>> +static void intel_pcie_core_rst_deassert(struct intel_pcie_port *lpp)
>>>> +{
>>>> +	/*
>>>> +	 * One micro-second delay to make sure the reset pulse
>>>> +	 * wide enough so that core reset is clean.
>>>> +	 */
>>>> +	udelay(1);
>>>> +	reset_control_deassert(lpp->core_rst);
>>>> +
>>>> +	/*
>>>> +	 * Some SoC core reset also reset PHY, more delay needed
>>>> +	 * to make sure the reset process is done.
>>>> +	 */
>>>> +	usleep_range(1000, 2000);
>>>> +}
>>>> +
>>>> +static void intel_pcie_device_rst_assert(struct intel_pcie_port *lpp)
>>>> +{
>>>> +	gpiod_set_value_cansleep(lpp->reset_gpio, 1);
>>>> +}
>>>> +
>>>> +static void intel_pcie_device_rst_deassert(struct intel_pcie_port *lpp)
>>>> +{
>>>> +	msleep(lpp->rst_interval);
>>>> +	gpiod_set_value_cansleep(lpp->reset_gpio, 0);
>>>> +}
>>>> +
>>>> +static int intel_pcie_app_logic_setup(struct intel_pcie_port *lpp)
>>>> +{
>>>> +	intel_pcie_device_rst_deassert(lpp);
>>>> +	intel_pcie_ltssm_enable(lpp);
>>>> +
>>>> +	return dw_pcie_wait_for_link(&lpp->pci);
>>>> +}
>>>> +
>>>> +static irqreturn_t intel_pcie_core_isr(int irq, void *arg)
>>>> +{
>>>> +	struct intel_pcie_port *lpp = arg;
>>>> +	u32 val, reg;
>>>> +
>>>> +	reg = pcie_app_rd(lpp, PCIE_APP_IRNCR);
>>>> +	val = reg & PCIE_APP_IRN_INT;
>>>> +
>>>> +	pcie_app_wr(lpp, val, PCIE_APP_IRNCR);
>>>> +
>>>> +	trace_printk("PCIe misc interrupt status 0x%x\n", reg);
>>>> +	return IRQ_HANDLED;
>>>> +}
>>> Why do we bother handling this interrupt?
>> This helps during debugging.
> I think it should be removed. It adds very little value to most users.
>
> Most users won't have access to the datasheets to debug this properly, and
> in any case if they could, then they would be competent to add an interrupt
> handler themselves.
IMO, having this helps to get the basic hardware interrupt status
during debugging.
And, the user can also enhance the handler as per their needs.
I think keeping it is more beneficial than removing it.
Please let me know your view.
>
>>>> +
>>>> +static int intel_pcie_setup_irq(struct intel_pcie_port *lpp)
>>>> +{
>>>> +	struct device *dev = lpp->pci.dev;
>>>> +	struct platform_device *pdev;
>>>> +	char *irq_name;
>>>> +	int irq, ret;
>>>> +
>>>> +	pdev = to_platform_device(dev);
>>>> +	irq = platform_get_irq(pdev, 0);
>>>> +	if (irq < 0) {
>>>> +		dev_err(dev, "missing sys integrated irq resource\n");
>>>> +		return irq;
>>>> +	}
>>>> +
>>>> +	irq_name = devm_kasprintf(dev, GFP_KERNEL, "pcie_misc%d", lpp->id);
>>>> +	if (!irq_name) {
>>>> +		dev_err(dev, "failed to alloc irq name\n");
>>>> +		return -ENOMEM;
>>>> +	}
>>>> +
>>>> +	ret = devm_request_irq(dev, irq, intel_pcie_core_isr,
>>>> +			       IRQF_SHARED, irq_name, lpp);
>>>> +	if (ret) {
>>>> +		dev_err(dev, "request irq %d failed\n", irq);
>>>> +		return ret;
>>>> +	}
>>>> +	/* Enable integrated interrupts */
>>>> +	pcie_app_wr_mask(lpp, PCIE_APP_IRN_INT,
>>>> +			 PCIE_APP_IRN_INT, PCIE_APP_IRNEN);
>>>> +
>>>> +	return ret;
>>>> +}
>>>> +
>>>> +static void intel_pcie_core_irq_disable(struct intel_pcie_port *lpp)
>>>> +{
>>>> +	pcie_app_wr(lpp, 0, PCIE_APP_IRNEN);
>>>> +	pcie_app_wr(lpp, PCIE_APP_IRN_INT,  PCIE_APP_IRNCR);
>>>> +}
>>>> +
>>>> +static int intel_pcie_get_resources(struct platform_device *pdev)
>>>> +{
>>>> +	struct intel_pcie_port *lpp;
>>>> +	struct resource *res;
>>>> +	struct dw_pcie *pci;
>>>> +	struct device *dev;
>>>> +	int ret;
>>>> +
>>>> +	lpp = platform_get_drvdata(pdev);
>>>> +	pci = &lpp->pci;
>>>> +	dev = pci->dev;
>>>> +
>>>> +	ret = device_property_read_u32(dev, "linux,pci-domain", &lpp->id);
>>>> +	if (ret) {
>>>> +		dev_err(dev, "failed to get domain id, errno %d\n", ret);
>>>> +		return ret;
>>>> +	}
>>>> +
>>>> +	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi");
>>>> +	if (!res)
>>>> +		return -ENXIO;
>>>> +
>>>> +	pci->dbi_base = devm_ioremap_resource(dev, res);
>>>> +	if (IS_ERR(pci->dbi_base))
>>>> +		return PTR_ERR(pci->dbi_base);
>>>> +
>>>> +	lpp->core_clk = devm_clk_get(dev, NULL);
>>>> +	if (IS_ERR(lpp->core_clk)) {
>>>> +		ret = PTR_ERR(lpp->core_clk);
>>>> +		if (ret != -EPROBE_DEFER)
>>>> +			dev_err(dev, "failed to get clks: %d\n", ret);
>>>> +		return ret;
>>>> +	}
>>>> +
>>>> +	lpp->core_rst = devm_reset_control_get(dev, NULL);
>>>> +	if (IS_ERR(lpp->core_rst)) {
>>>> +		ret = PTR_ERR(lpp->core_rst);
>>>> +		if (ret != -EPROBE_DEFER)
>>>> +			dev_err(dev, "failed to get resets: %d\n", ret);
>>>> +		return ret;
>>>> +	}
>>>> +
>>>> +	ret = device_property_match_string(dev, "device_type", "pci");
>>>> +	if (ret) {
>>>> +		dev_err(dev, "failed to find pci device type: %d\n", ret);
>>>> +		return ret;
>>>> +	}
>>>> +
>>>> +	if (device_property_read_u32(dev, "reset-assert-ms",
>>>> +				     &lpp->rst_interval))
>>>> +		lpp->rst_interval = RST_INTRVL_DFT_MS;
>>>> +
>>>> +	if (device_property_read_u32(dev, "max-link-speed", &lpp->link_gen))
>>>> +		lpp->link_gen = 0; /* Fallback to auto */
>>> Is it possible to use of_pci_get_max_link_speed here instead?
>> Thanks for pointing it. of_pci_get_max_link_speed() can be used here. I will
>> update it in the next patch revision.
>>>> +
>>>> +	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "app");
>>>> +	if (!res)
>>>> +		return -ENXIO;
>>>> +
>>>> +	lpp->app_base = devm_ioremap_resource(dev, res);
>>>> +	if (IS_ERR(lpp->app_base))
>>>> +		return PTR_ERR(lpp->app_base);
>>>> +
>>>> +	ret = intel_pcie_ep_rst_init(lpp);
>>>> +	if (ret)
>>>> +		return ret;
>>> Given that this is called from a function '..._get_resources' I don't think
>>> we should be resetting anything here.
>> Agree. I will move it out of get_resources().
>>>> +
>>>> +	lpp->phy = devm_phy_get(dev, "pciephy");
>>>> +	if (IS_ERR(lpp->phy)) {
>>>> +		ret = PTR_ERR(lpp->phy);
>>>> +		if (ret != -EPROBE_DEFER)
>>>> +			dev_err(dev, "couldn't get pcie-phy: %d\n", ret);
>>>> +		return ret;
>>>> +	}
>>>> +	return 0;
>>>> +}
>>>> +
>>>> +static void intel_pcie_deinit_phy(struct intel_pcie_port *lpp)
>>>> +{
>>>> +	phy_exit(lpp->phy);
>>>> +}
>>>> +
>>>> +static int intel_pcie_wait_l2(struct intel_pcie_port *lpp)
>>>> +{
>>>> +	u32 value;
>>>> +	int ret;
>>>> +
>>>> +	if (lpp->max_speed < PCIE_LINK_SPEED_GEN3)
>>>> +		return 0;
>>>> +
>>>> +	/* Send PME_TURN_OFF message */
>>>> +	pcie_app_wr_mask(lpp, PCIE_APP_MSG_XMT_PM_TURNOFF,
>>>> +			 PCIE_APP_MSG_XMT_PM_TURNOFF, PCIE_APP_MSG_CR);
>>>> +
>>>> +	/* Read PMC status and wait for falling into L2 link state */
>>>> +	ret = readl_poll_timeout(lpp->app_base + PCIE_APP_PMC, value,
>>>> +				 (value & PCIE_APP_PMC_IN_L2), 20,
>>>> +				 jiffies_to_usecs(5 * HZ));
>>>> +	if (ret)
>>>> +		dev_err(lpp->pci.dev, "PCIe link enter L2 timeout!\n");
>>>> +
>>>> +	return ret;
>>>> +}
>>>> +
>>>> +static void intel_pcie_turn_off(struct intel_pcie_port *lpp)
>>>> +{
>>>> +	if (dw_pcie_link_up(&lpp->pci))
>>>> +		intel_pcie_wait_l2(lpp);
>>>> +
>>>> +	/* Put EP in reset state */
>>> EP?
>> End point device. I will update it.
> Is this not a host bridge controller?
It is PERST#, which signals a hardware reset to the endpoint:
         /* Put EP in reset state */
         intel_pcie_device_rst_assert(lpp);
>
>>>> +	intel_pcie_device_rst_assert(lpp);
>>>> +	pcie_rc_cfg_wr_mask(lpp, PCI_COMMAND_MEMORY, 0, PCI_COMMAND);
>>>> +}
>>>> +
>>>> +static int intel_pcie_host_setup(struct intel_pcie_port *lpp)
>>>> +{
>>>> +	int ret;
>>>> +
>>>> +	intel_pcie_core_rst_assert(lpp);
>>>> +	intel_pcie_device_rst_assert(lpp);
>>>> +
>>>> +	ret = phy_init(lpp->phy);
>>>> +	if (ret)
>>>> +		return ret;
>>>> +
>>>> +	intel_pcie_core_rst_deassert(lpp);
>>>> +
>>>> +	ret = clk_prepare_enable(lpp->core_clk);
>>>> +	if (ret) {
>>>> +		dev_err(lpp->pci.dev, "Core clock enable failed: %d\n", ret);
>>>> +		goto clk_err;
>>>> +	}
>>>> +
>>>> +	intel_pcie_rc_setup(lpp);
>>>> +	ret = intel_pcie_app_logic_setup(lpp);
>>>> +	if (ret)
>>>> +		goto app_init_err;
>>>> +
>>>> +	ret = intel_pcie_setup_irq(lpp);
>>>> +	if (!ret)
>>>> +		return ret;
>>>> +
>>>> +	intel_pcie_turn_off(lpp);
>>>> +app_init_err:
>>>> +	clk_disable_unprepare(lpp->core_clk);
>>>> +clk_err:
>>>> +	intel_pcie_core_rst_assert(lpp);
>>>> +	intel_pcie_deinit_phy(lpp);
>>>> +	return ret;
>>>> +}
>>>> +
>>>> +static ssize_t
>>>> +pcie_link_status_show(struct device *dev, struct device_attribute *attr,
>>>> +		      char *buf)
>>>> +{
>>>> +	u32 reg, width, gen;
>>>> +	struct intel_pcie_port *lpp;
>>>> +
>>>> +	lpp = dev_get_drvdata(dev);
>>>> +
>>>> +	reg = pcie_rc_cfg_rd(lpp, PCIE_LCTLSTS);
>>>> +	width = FIELD_GET(PCIE_LCTLSTS_NEGOTIATED_LINK_WIDTH, reg);
>>>> +	gen = FIELD_GET(PCIE_LCTLSTS_LINK_SPEED, reg);
>>>> +	if (gen > lpp->max_speed)
>>>> +		return -EINVAL;
>>>> +
>>>> +	return sprintf(buf, "Port %2u Width x%u Speed %s GT/s\n", lpp->id,
>>>> +		       width, pcie_link_gen_to_str(gen));
>>>> +}
>>>> +static DEVICE_ATTR_RO(pcie_link_status);
>>>> +
>>>> +static ssize_t pcie_speed_store(struct device *dev,
>>>> +				struct device_attribute *attr,
>>>> +				const char *buf, size_t len)
>>>> +{
>>>> +	struct intel_pcie_port *lpp;
>>>> +	unsigned long val;
>>>> +	int ret;
>>>> +
>>>> +	lpp = dev_get_drvdata(dev);
>>>> +
>>>> +	ret = kstrtoul(buf, 10, &val);
>>>> +	if (ret)
>>>> +		return ret;
>>>> +
>>>> +	if (val > lpp->max_speed)
>>>> +		return -EINVAL;
>>>> +
>>>> +	lpp->link_gen = val;
>>>> +	intel_pcie_max_speed_setup(lpp);
>>>> +	intel_pcie_speed_change_disable(lpp);
>>>> +	intel_pcie_speed_change_enable(lpp);
>>>> +
>>>> +	return len;
>>>> +}
>>>> +static DEVICE_ATTR_WO(pcie_speed);
>>>> +
>>>> +/*
>>>> + * Link width change on the fly is not always successful.
>>>> + * It also depends on the partner.
>>>> + */
>>>> +static ssize_t pcie_width_store(struct device *dev,
>>>> +				struct device_attribute *attr,
>>>> +				const char *buf, size_t len)
>>>> +{
>>>> +	struct intel_pcie_port *lpp;
>>>> +	unsigned long val;
>>>> +
>>>> +	lpp = dev_get_drvdata(dev);
>>>> +
>>>> +	if (kstrtoul(buf, 10, &val))
>>>> +		return -EINVAL;
>>>> +
>>>> +	if (val > lpp->max_width)
>>>> +		return -EINVAL;
>>>> +
>>>> +	lpp->lanes = val;
>>>> +	intel_pcie_max_link_width_setup(lpp);
>>>> +
>>>> +	return len;
>>>> +}
>>>> +static DEVICE_ATTR_WO(pcie_width);
>>> You mentioned that a use-case for changing width/speed on the fly was to
>>> control power consumption (and this also helps debugging issues). As I
>>> understand there is no current support for this in the kernel - yet it is
>>> something that would provide value.
>>>
>>> I haven't looked in much detail, however as I understand the PCI spec
>>> allows an upstream partner to change the link speed and retrain. (I'm not
>>> sure about link width). Given that we already have
>>> [current,max]_link_[speed,width] is sysfs for each PCI device, it would
>>> seem natural to extend this to allow for writing a max width or speed.
>>>
>>> So ideally this type of thing would be moved to the core or at least in
>>> the dwc driver. This way the benefits can be applied to more devices on
>>> larger PCI fabrics - Though perhaps others here will have a different view
>>> and I'm keen to hear them.
>>>
>>> I'm keen to limit the differences between the DWC controller drivers and
>>> unify common code - thus it would be helpful to have a justification as to
>>> why this is only relevant for this controller.
>>>
>>> For user-space only control, it is possible to achieve what you have here
>>> with userspace utilities (something like setpci) (assuming the standard
>>> looking registers you currently access are exposed in the normal config
>>> space way - though PCIe capabilities).
>>>
>>> My suggestion would be to drop these changes and later add something that
>>> can benefit more devices. In any case if these changes stay within this
>>> driver then it would be helpful to move them to a separate patch.
>> Sure, I will submit these entities in a separate patch.
> Please ensure that all supporting macros, functions and defines go with that
> other patch as well please.
Sure.
>
>>>> +
>>>> +static struct attribute *pcie_cfg_attrs[] = {
>>>> +	&dev_attr_pcie_link_status.attr,
>>>> +	&dev_attr_pcie_speed.attr,
>>>> +	&dev_attr_pcie_width.attr,
>>>> +	NULL,
>>>> +};
>>>> +ATTRIBUTE_GROUPS(pcie_cfg);
>>>> +
>>>> +static int intel_pcie_sysfs_init(struct intel_pcie_port *lpp)
>>>> +{
>>>> +	return devm_device_add_groups(lpp->pci.dev, pcie_cfg_groups);
>>>> +}
>>>> +
>>>> +static void __intel_pcie_remove(struct intel_pcie_port *lpp)
>>>> +{
>>>> +	pcie_rc_cfg_wr_mask(lpp, PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER,
>>>> +			    0, PCI_COMMAND);
>>>> +	intel_pcie_core_irq_disable(lpp);
>>>> +	intel_pcie_turn_off(lpp);
>>>> +	clk_disable_unprepare(lpp->core_clk);
>>>> +	intel_pcie_core_rst_assert(lpp);
>>>> +	intel_pcie_deinit_phy(lpp);
>>>> +}
>>>> +
>>>> +static int intel_pcie_remove(struct platform_device *pdev)
>>>> +{
>>>> +	struct intel_pcie_port *lpp = platform_get_drvdata(pdev);
>>>> +	struct pcie_port *pp = &lpp->pci.pp;
>>>> +
>>>> +	dw_pcie_host_deinit(pp);
>>>> +	__intel_pcie_remove(lpp);
>>>> +
>>>> +	return 0;
>>>> +}
>>>> +
>>>> +static int __maybe_unused intel_pcie_suspend_noirq(struct device *dev)
>>>> +{
>>>> +	struct intel_pcie_port *lpp = dev_get_drvdata(dev);
>>>> +	int ret;
>>>> +
>>>> +	intel_pcie_core_irq_disable(lpp);
>>>> +	ret = intel_pcie_wait_l2(lpp);
>>>> +	if (ret)
>>>> +		return ret;
>>>> +
>>>> +	intel_pcie_deinit_phy(lpp);
>>>> +	clk_disable_unprepare(lpp->core_clk);
>>>> +	return ret;
>>>> +}
>>>> +
>>>> +static int __maybe_unused intel_pcie_resume_noirq(struct device *dev)
>>>> +{
>>>> +	struct intel_pcie_port *lpp = dev_get_drvdata(dev);
>>>> +
>>>> +	return intel_pcie_host_setup(lpp);
>>>> +}
>>>> +
>>>> +static int intel_pcie_rc_init(struct pcie_port *pp)
>>>> +{
>>>> +	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
>>>> +	struct intel_pcie_port *lpp = dev_get_drvdata(pci->dev);
>>>> +	int ret;
>>>> +
>>>> +	/* RC/host initialization */
>>>> +	ret = intel_pcie_host_setup(lpp);
>>>> +	if (ret)
>>>> +		return ret;
>>> Insert new line here.
>> Ok.
>>>> +	ret = intel_pcie_sysfs_init(lpp);
>>>> +	if (ret)
>>>> +		__intel_pcie_remove(lpp);
>>>> +	return ret;
>>>> +}
>>>> +
>>>> +int intel_pcie_msi_init(struct pcie_port *pp)
>>>> +{
>>>> +	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
>>>> +
>>>> +	dev_dbg(pci->dev, "PCIe MSI/MSIx is handled by MSI in x86 processor\n");
>>> What about other processors? (Noting that this driver doesn't depend on
>>> any specific ARCH in the KConfig).
>> Agree. I will mark the dependency in Kconfig.
> OK, please also see how other drivers use the COMPILE_TEST Kconfig option.
Ok sure.
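For reference, the usual shape of such a Kconfig dependency is sketched below.
The arch symbol is an assumption for illustration; the actual gating symbol for
the LGM SoC would be chosen in the next revision:

```kconfig
config PCIE_INTEL_AXI
	bool "Intel AHB/AXI PCIe host controller support"
	depends on X86 || COMPILE_TEST
	depends on OF
	select PCIE_DW_HOST
```

With COMPILE_TEST in the expression, build bots can still compile the driver on
other architectures without it being selectable on real hardware there.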
>
> I'd suggest that the dev_dbg just becomes a code comment.
>
>>>> +	return 0;
>>>> +}
>>>> +
>>>> +u64 intel_pcie_cpu_addr(struct dw_pcie *pcie, u64 cpu_addr)
>>>> +{
>>>> +	return cpu_addr + BUS_IATU_OFFS;
>>>> +}
>>>> +
>>>> +static const struct dw_pcie_ops intel_pcie_ops = {
>>>> +	.cpu_addr_fixup = intel_pcie_cpu_addr,
>>>> +};
>>>> +
>>>> +static const struct dw_pcie_host_ops intel_pcie_dw_ops = {
>>>> +	.host_init =		intel_pcie_rc_init,
>>>> +	.msi_host_init =	intel_pcie_msi_init,
>>>> +};
>>>> +
>>>> +static const struct intel_pcie_soc pcie_data = {
>>>> +	.pcie_ver =		0x520A,
>>>> +	.pcie_atu_offset =	0xC0000,
>>>> +	.num_viewport =		3,
>>>> +};
>>>> +
>>>> +static int intel_pcie_probe(struct platform_device *pdev)
>>>> +{
>>>> +	const struct intel_pcie_soc *data;
>>>> +	struct device *dev = &pdev->dev;
>>>> +	struct intel_pcie_port *lpp;
>>>> +	struct pcie_port *pp;
>>>> +	struct dw_pcie *pci;
>>>> +	int ret;
>>>> +
>>>> +	lpp = devm_kzalloc(dev, sizeof(*lpp), GFP_KERNEL);
>>>> +	if (!lpp)
>>>> +		return -ENOMEM;
>>>> +
>>>> +	platform_set_drvdata(pdev, lpp);
>>>> +	pci = &lpp->pci;
>>>> +	pci->dev = dev;
>>>> +	pp = &pci->pp;
>>>> +
>>>> +	ret = intel_pcie_get_resources(pdev);
>>>> +	if (ret)
>>>> +		return ret;
>>>> +
>>>> +	data = device_get_match_data(dev);
>>>> +	pci->ops = &intel_pcie_ops;
>>>> +	pci->version = data->pcie_ver;
>>>> +	pci->atu_base = pci->dbi_base + data->pcie_atu_offset;
>>>> +	pp->ops = &intel_pcie_dw_ops;
>>>> +
>>>> +	ret = dw_pcie_host_init(pp);
>>>> +	if (ret) {
>>>> +		dev_err(dev, "cannot initialize host\n");
>>>> +		return ret;
>>>> +	}
>>> Add a new line after the closing brace.
>> Ok
>>>> +	/* Intel PCIe doesn't configure IO region, so configure
>>>> +	 * viewport to not to access IO region during register
>>>> +	 * read write operations.
>>>> +	 */
>>>> +	pci->num_viewport = data->num_viewport;
>>>> +	dev_info(dev,
>>>> +		 "Intel AXI PCIe Root Complex Port %d Init Done\n", lpp->id);
>>>> +	return ret;
>>>> +}
>>>> +
>>>> +static const struct dev_pm_ops intel_pcie_pm_ops = {
>>>> +	SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(intel_pcie_suspend_noirq,
>>>> +				      intel_pcie_resume_noirq)
>>>> +};
>>>> +
>>>> +static const struct of_device_id of_intel_pcie_match[] = {
>>>> +	{ .compatible = "intel,lgm-pcie", .data = &pcie_data },
>>>> +	{}
>>>> +};
>>>> +
>>>> +static struct platform_driver intel_pcie_driver = {
>>>> +	.probe = intel_pcie_probe,
>>>> +	.remove = intel_pcie_remove,
>>>> +	.driver = {
>>>> +		.name = "intel-lgm-pcie",
>>> Is there a reason why the we use intel-lgm-pcie here and pcie-intel-axi
>>> elsewhere? What does lgm mean?
>> lgm is the name of intel SoC.  I will name it to pcie-intel-axi to be
>> generic.
> I'm keen to ensure that it is consistently named. I've seen other comments
> regarding what the name should be - I don't have a strong opinion though
> I do think that *-axi may be too generic.

Ok, I will check and get back to you on this.

Regards,

Dilip

>
> Thanks,
>
> Andrew Murray
>
>>> Thanks,
>>>
>>> Andrew Murray
>>>
>>>> +		.of_match_table = of_intel_pcie_match,
>>>> +		.pm = &intel_pcie_pm_ops,
>>>> +	},
>>>> +};
>>>> +builtin_platform_driver(intel_pcie_driver);
>>>> -- 
>>>> 2.11.0
>>>>

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v3 1/2] dt-bindings: PCI: intel: Add YAML schemas for the PCIe RC controller
  2019-09-06  9:19     ` Andy Shevchenko
@ 2019-09-09  6:52       ` Dilip Kota
  2019-09-17 18:33       ` Rob Herring
  1 sibling, 0 replies; 37+ messages in thread
From: Dilip Kota @ 2019-09-09  6:52 UTC (permalink / raw)
  To: Andy Shevchenko, Martin Blumenstingl
  Cc: jingoohan1, gustavo.pimentel, lorenzo.pieralisi, robh, linux-pci,
	hch, devicetree, linux-kernel, cheol.yong.kim, chuanhua.lei,
	qi-ming.wu


On 9/6/2019 5:19 PM, Andy Shevchenko wrote:
> On Thu, Sep 05, 2019 at 10:31:29PM +0200, Martin Blumenstingl wrote:
>> On Wed, Sep 4, 2019 at 12:11 PM Dilip Kota <eswara.kota@linux.intel.com> wrote:
>>> +  phy-names:
>>> +    const: pciephy
>> the most popular choice in Documentation/devicetree/bindings/pci/ is "pcie-phy"
>> if Rob is happy with "pciephy" (which is already part of two other
>> bindings) then I'm happy with "pciephy" as well
> I'm not Rob, though I consider more practical to have a hyphenated variant.

I am okay with "pcie-phy". I will update it in the next patch version.
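For reference, the resulting binding usage would look roughly like the
fragment below. The node name, unit address and phy label are illustrative and
not taken from the patch; only the compatible string and "pcie-phy" name come
from this thread:

```dts
pcie@d0e00000 {
	compatible = "intel,lgm-pcie";
	/* other required properties omitted */
	phys = <&pcie_cb_phy>;
	phy-names = "pcie-phy";
};
```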

Regards,
Dilip

>

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v3 2/2] dwc: PCI: intel: Intel PCIe RC controller driver
  2019-09-09  6:51         ` Dilip Kota
@ 2019-09-09  8:31           ` Andrew Murray
  2019-09-10  8:08             ` Dilip Kota
       [not found]             ` <22857835-1f98-b251-c94b-16b4b0a6dba2@linux.intel.com>
  0 siblings, 2 replies; 37+ messages in thread
From: Andrew Murray @ 2019-09-09  8:31 UTC (permalink / raw)
  To: Dilip Kota
  Cc: jingoohan1, gustavo.pimentel, lorenzo.pieralisi, robh,
	martin.blumenstingl, linux-pci, hch, devicetree, linux-kernel,
	andriy.shevchenko, cheol.yong.kim, chuanhua.lei, qi-ming.wu

On Mon, Sep 09, 2019 at 02:51:03PM +0800, Dilip Kota wrote:
> 
> On 9/6/2019 7:20 PM, Andrew Murray wrote:
> > On Fri, Sep 06, 2019 at 06:58:11PM +0800, Dilip Kota wrote:
> > > Hi Andrew Murray,
> > > 
> > > Thanks for the review. Please find my response inline.
> > > 
> > > On 9/5/2019 6:45 PM, Andrew Murray wrote:
> > > > On Wed, Sep 04, 2019 at 06:10:31PM +0800, Dilip Kota wrote:
> > > > > Add support to PCIe RC controller on Intel Universal
> > > > > Gateway SoC. PCIe controller is based of Synopsys
> > > > > Designware pci core.
> > > > > 
> > > > > Signed-off-by: Dilip Kota <eswara.kota@linux.intel.com>
> > > > > ---
> > > > Hi Dilip,
> > > > 
> > > > Thanks for the patch, initial feedback below:
> > > > 
> > > > > changes on v3:
> > > > > 	Rename PCIe app logic registers with PCIE_APP prefix.
> > > > > 	PCIE_IOP_CTRL configuration is not required. Remove respective code.
> > > > > 	Remove wrapper functions for clk enable/disable APIs.
> > > > > 	Use platform_get_resource_byname() instead of
> > > > > 	  devm_platform_ioremap_resource() to be similar with DWC framework.
> > > > > 	Rename phy name to "pciephy".
> > > > > 	Modify debug message in msi_init() callback to be more specific.
> > > > > 	Remove map_irq() callback.
> > > > > 	Enable the INTx interrupts at the time of PCIe initialization.
> > > > > 	Reduce memory fragmentation by using variable "struct dw_pcie pci"
> > > > > 	  instead of allocating memory.
> > > > > 	Reduce the delay to 100us during enpoint initialization
> > > > > 	  intel_pcie_ep_rst_init().
> > > > > 	Call  dw_pcie_host_deinit() during remove() instead of directly
> > > > > 	  calling PCIe core APIs.
> > > > > 	Rename "intel,rst-interval" to "reset-assert-ms".
> > > > > 	Remove unused APP logic Interrupt bit macro definitions.
> > > > >    	Use dwc framework's dw_pcie_setup_rc() for PCIe host specific
> > > > > 	 configuration instead of redefining the same functionality in
> > > > > 	 the driver.
> > > > > 	Move the whole DT parsing specific code to intel_pcie_get_resources()
> > > > > 
> > > > >    drivers/pci/controller/dwc/Kconfig          |  13 +
> > > > >    drivers/pci/controller/dwc/Makefile         |   1 +
> > > > >    drivers/pci/controller/dwc/pcie-intel-axi.c | 840 ++++++++++++++++++++++++++++
> > > > >    3 files changed, 854 insertions(+)
> > > > >    create mode 100644 drivers/pci/controller/dwc/pcie-intel-axi.c
> > > > > 
> > > > > diff --git a/drivers/pci/controller/dwc/Kconfig b/drivers/pci/controller/dwc/Kconfig
> > > > > index 6ea778ae4877..e44b9b6a6390 100644
> > > > > --- a/drivers/pci/controller/dwc/Kconfig
> > > > > +++ b/drivers/pci/controller/dwc/Kconfig
> > > > > @@ -82,6 +82,19 @@ config PCIE_DW_PLAT_EP
> > > > >    	  order to enable device-specific features PCI_DW_PLAT_EP must be
> > > > >    	  selected.
> > > > > +config PCIE_INTEL_AXI
> > > > > +        bool "Intel AHB/AXI PCIe host controller support"
> > > > > +        depends on PCI_MSI
> > > > > +        depends on PCI
> > > > I don't think the PCI dependency is required here.
> > > > 
> > > > I'm also not sure why PCI_MSI is required, we select PCIE_DW_HOST which
> > > > depends on PCI_MSI_IRQ_DOMAIN which depends on PCI_MSI.
> > > Agree, dependency on PCI and PCI_MSI can be removed. I will remove it in
> > > next patch revision.
> > > > > +        depends on OF
> > > > > +        select PCIE_DW_HOST
> > > > > +        help
> > > > > +          Say 'Y' here to enable support for Intel AHB/AXI PCIe Host
> > > > > +	  controller driver.
> > > > > +	  The Intel PCIe controller is based on the Synopsys Designware
> > > > > +	  pcie core and therefore uses the Designware core functions to
> > > > > +	  implement the driver.
> > > > I can see this description is similar to others in the same Kconfig,
> > > > however I'm not sure what value a user gains by knowing implementation
> > > > details - it's helpful to know that PCIE_INTEL_AXI is based on the
> > > > Designware core, but is it helpful to know that the Designware core
> > > > functions are used?
> > > > 
> > > > > +
> > > > >    config PCI_EXYNOS
> > > > >    	bool "Samsung Exynos PCIe controller"
> > > > >    	depends on SOC_EXYNOS5440 || COMPILE_TEST
> > > > > diff --git a/drivers/pci/controller/dwc/Makefile b/drivers/pci/controller/dwc/Makefile
> > > > > index b085dfd4fab7..46e656ebdf90 100644
> > > > > --- a/drivers/pci/controller/dwc/Makefile
> > > > > +++ b/drivers/pci/controller/dwc/Makefile
> > > > > @@ -3,6 +3,7 @@ obj-$(CONFIG_PCIE_DW) += pcie-designware.o
> > > > >    obj-$(CONFIG_PCIE_DW_HOST) += pcie-designware-host.o
> > > > >    obj-$(CONFIG_PCIE_DW_EP) += pcie-designware-ep.o
> > > > >    obj-$(CONFIG_PCIE_DW_PLAT) += pcie-designware-plat.o
> > > > > +obj-$(CONFIG_PCIE_INTEL_AXI) += pcie-intel-axi.o
> > > > >    obj-$(CONFIG_PCI_DRA7XX) += pci-dra7xx.o
> > > > >    obj-$(CONFIG_PCI_EXYNOS) += pci-exynos.o
> > > > >    obj-$(CONFIG_PCI_IMX6) += pci-imx6.o
> > > > > diff --git a/drivers/pci/controller/dwc/pcie-intel-axi.c b/drivers/pci/controller/dwc/pcie-intel-axi.c
> > > > > new file mode 100644
> > > > > index 000000000000..75607ce03ebf
> > > > > --- /dev/null
> > > > > +++ b/drivers/pci/controller/dwc/pcie-intel-axi.c
> > > > > @@ -0,0 +1,840 @@
> > > > > +// SPDX-License-Identifier: GPL-2.0
> > > > > +/*
> > > > > + * PCIe host controller driver for Intel AXI PCIe Bridge
> > > > > + *
> > > > > + * Copyright (c) 2019 Intel Corporation.
> > > > > + */
> > > > > +
> > > > > +#include <linux/bitfield.h>
> > > > > +#include <linux/clk.h>
> > > > > +#include <linux/gpio/consumer.h>
> > > > > +#include <linux/interrupt.h>
> > > > > +#include <linux/iopoll.h>
> > > > > +#include <linux/of_irq.h>
> > > > > +#include <linux/of_pci.h>
> > > > > +#include <linux/of_platform.h>
> > > > > +#include <linux/phy/phy.h>
> > > > > +#include <linux/platform_device.h>
> > > > > +#include <linux/reset.h>
> > > > > +
> > > > > +#include "../../pci.h"
> > > > Please remove this - it isn't needed.
> > > Ok, will remove it in next patch revision.
> > > > > +#include "pcie-designware.h"
> > > > > +
> > > > > +#define PCIE_CCRID				0x8
> > > > > +
> > > > > +#define PCIE_LCAP				0x7C
> > > > > +#define PCIE_LCAP_MAX_LINK_SPEED		GENMASK(3, 0)
> > > > > +#define PCIE_LCAP_MAX_LENGTH_WIDTH		GENMASK(9, 4)
> > > > These look like the standard PCI Link Capabilities Register,
> > > > can you use the standard ones defined in pci_regs.h? (see
> > > > PCI_EXP_LNKCAP_SLS_*).
> > > > 
> > > > > +
> > > > > +/* Link Control and Status Register */
> > > > > +#define PCIE_LCTLSTS				0x80
> > > > > +#define PCIE_LCTLSTS_ASPM_ENABLE		GENMASK(1, 0)
> > > > > +#define PCIE_LCTLSTS_RCB128			BIT(3)
> > > > > +#define PCIE_LCTLSTS_LINK_DISABLE		BIT(4)
> > > > > +#define PCIE_LCTLSTS_COM_CLK_CFG		BIT(6)
> > > > > +#define PCIE_LCTLSTS_HW_AW_DIS			BIT(9)
> > > > > +#define PCIE_LCTLSTS_LINK_SPEED			GENMASK(19, 16)
> > > > > +#define PCIE_LCTLSTS_NEGOTIATED_LINK_WIDTH	GENMASK(25, 20)
> > > > > +#define PCIE_LCTLSTS_SLOT_CLK_CFG		BIT(28)
> > > > These look like the standard Link Control and Link Status register, can
> > > > you use the standard ones defined in pci_regs.h? (see PCI_EXP_LNKCTL_*
> > > > and PCI_EXP_LNKSTA_*).
> > > > 
> > > > > +
> > > > > +#define PCIE_LCTLSTS2				0xA0
> > > > > +#define PCIE_LCTLSTS2_TGT_LINK_SPEED		GENMASK(3, 0)
> > > > > +#define PCIE_LCTLSTS2_TGT_LINK_SPEED_25GT	0x1
> > > > > +#define PCIE_LCTLSTS2_TGT_LINK_SPEED_5GT	0x2
> > > > > +#define PCIE_LCTLSTS2_TGT_LINK_SPEED_8GT	0x3
> > > > > +#define PCIE_LCTLSTS2_TGT_LINK_SPEED_16GT	0x4
> > > > And these look like the standard Link Control Register 2 (see
> > > > PCI_EXP_LNKCTL2_*).
> > > > 
> > > > Most of the #defines above and below look just like standard PCI defines
> > > > (for example those found in pci_regs.h) - both in terms of their name and
> > > > what they do. Ideally where the functionality is the same or very similar,
> > > > then we should use the standard PCI defines in (pci_regs.h). This helps
> > > > readability/understanding, helps to identify duplicate code and reduces
> > > > the size of your patch.
> > > > 
> > > > Also the capability offsets (e.g. PCIE_LCTLSTS2) are also standard. The
> > > > offsets you define are offset by an additional 0x70. E.g.i
> > > > PCIE_LCTLSTS2 = 0xA0, and PCI_EXP_LNKCTL2 = 0x30. Perhaps abstracting
> > > > this offset will allow you to use the standard pci defines?
> > > > 
> > > > I haven't looked in much detail at the remainder of the defines, but
> > > > the same rationale should apply.
> > > Agree, that's a good point. I will define the offset macro and use the
> > > macros defined in pci_regs.h.
> > > > > +#define PCIE_LCTLSTS2_HW_AUTO_DIS		BIT(5)
> > > > > +
> > > > > +/* Ack Frequency Register */
> > > > > +#define PCIE_AFR				0x70C
> > > > > +#define PCIE_AFR_FTS_NUM			GENMASK(15, 8)
> > > > > +#define PCIE_AFR_COM_FTS_NUM			GENMASK(23, 16)
> > > > > +#define PCIE_AFR_GEN12_FTS_NUM_DFT		(SZ_128 - 1)
> > > > > +#define PCIE_AFR_GEN3_FTS_NUM_DFT		180
> > > > > +#define PCIE_AFR_GEN4_FTS_NUM_DFT		196
> > > > > +
> > > > > +#define PCIE_PLCR_DLL_LINK_EN			BIT(5)
> > > > > +#define PCIE_PORT_LOGIC_FTS			GENMASK(7, 0)
> > > > > +#define PCIE_PORT_LOGIC_DFT_FTS_NUM		(SZ_128 - 1)
> > > > > +
> > > > > +#define PCIE_MISC_CTRL				0x8BC
> > > > > +#define PCIE_MISC_CTRL_DBI_RO_WR_EN		BIT(0)
> > > > > +
> > > > > +#define PCIE_MULTI_LANE_CTRL			0x8C0
> > > > > +#define PCIE_UPCONFIG_SUPPORT			BIT(7)
> > > > > +#define PCIE_DIRECT_LINK_WIDTH_CHANGE		BIT(6)
> > > > > +#define PCIE_TARGET_LINK_WIDTH			GENMASK(5, 0)
> > > > > +
> > > > > +/* APP RC Core Control Register */
> > > > > +#define PCIE_APP_CCR				0x10
> > > > > +#define PCIE_APP_CCR_LTSSM_ENABLE		BIT(0)
> > > > > +
> > > > > +/* PCIe Message Control */
> > > > > +#define PCIE_APP_MSG_CR				0x30
> > > > > +#define PCIE_APP_MSG_XMT_PM_TURNOFF		BIT(0)
> > > > > +
> > > > > +/* PCIe Power Management Control */
> > > > > +#define PCIE_APP_PMC				0x44
> > > > > +#define PCIE_APP_PMC_IN_L2			BIT(20)
> > > > > +
> > > > > +/* Interrupt Enable Register */
> > > > > +#define PCIE_APP_IRNEN				0xF4
> > > > > +#define PCIE_APP_IRNCR				0xF8
> > > > > +#define PCIE_APP_IRN_AER_REPORT			BIT(0)
> > > > > +#define PCIE_APP_IRN_PME			BIT(2)
> > > > > +#define PCIE_APP_IRN_RX_VDM_MSG			BIT(4)
> > > > > +#define PCIE_APP_IRN_PM_TO_ACK			BIT(9)
> > > > > +#define PCIE_APP_IRN_LINK_AUTO_BW_STAT		BIT(11)
> > > > > +#define PCIE_APP_IRN_BW_MGT			BIT(12)
> > > > > +#define PCIE_APP_IRN_MSG_LTR			BIT(18)
> > > > > +#define PCIE_APP_IRN_SYS_ERR_RC			BIT(29)
> > > > > +
> > > > > +#define PCIE_APP_INTX_OFST	12
> > > > > +#define PCIE_APP_IRN_INT	(PCIE_APP_IRN_AER_REPORT | PCIE_APP_IRN_PME | \
> > > > > +			PCIE_APP_IRN_RX_VDM_MSG | PCIE_APP_IRN_SYS_ERR_RC | \
> > > > > +			PCIE_APP_IRN_PM_TO_ACK | PCIE_APP_IRN_MSG_LTR | \
> > > > > +			PCIE_APP_IRN_BW_MGT | PCIE_APP_IRN_LINK_AUTO_BW_STAT | \
> > > > > +			(PCIE_APP_INTX_OFST + PCI_INTERRUPT_INTA) | \
> > > > > +			(PCIE_APP_INTX_OFST + PCI_INTERRUPT_INTB) | \
> > > > > +			(PCIE_APP_INTX_OFST + PCI_INTERRUPT_INTC) | \
> > > > > +			(PCIE_APP_INTX_OFST + PCI_INTERRUPT_INTD))
> > > > > +
> > > > > +#define BUS_IATU_OFFS		SZ_256M
> > > > > +#define RST_INTRVL_DFT_MS	100
> > > > > +enum {
> > > > > +	PCIE_LINK_SPEED_AUTO = 0,
> > > > > +	PCIE_LINK_SPEED_GEN1,
> > > > > +	PCIE_LINK_SPEED_GEN2,
> > > > > +	PCIE_LINK_SPEED_GEN3,
> > > > > +	PCIE_LINK_SPEED_GEN4,
> > > > > +};
> > > > > +
> > > > > +struct intel_pcie_soc {
> > > > > +	unsigned int pcie_ver;
> > > > > +	unsigned int pcie_atu_offset;
> > > > > +	u32 num_viewport;
> > > > > +};
> > > > > +
> > > > > +struct intel_pcie_port {
> > > > > +	struct dw_pcie		pci;
> > > > > +	unsigned int		id; /* Physical RC Index */
> > > > > +	void __iomem		*app_base;
> > > > > +	struct gpio_desc	*reset_gpio;
> > > > > +	u32			rst_interval;
> > > > > +	u32			max_speed;
> > > > > +	u32			link_gen;
> > > > > +	u32			max_width;
> > > > > +	u32			lanes;
> > > > > +	struct clk		*core_clk;
> > > > > +	struct reset_control	*core_rst;
> > > > > +	struct phy		*phy;
> > > > > +};
> > > > > +
> > > > > +static void pcie_update_bits(void __iomem *base, u32 mask, u32 val, u32 ofs)
> > > > > +{
> > > > > +	u32 orig, tmp;
> > > > > +
> > > > > +	orig = readl(base + ofs);
> > > > > +
> > > > > +	tmp = (orig & ~mask) | (val & mask);
> > > > > +
> > > > > +	if (tmp != orig)
> > > > > +		writel(tmp, base + ofs);
> > > > > +}
> > > > > +
> > > > > +static inline u32 pcie_app_rd(struct intel_pcie_port *lpp, u32 ofs)
> > > > > +{
> > > > > +	return readl(lpp->app_base + ofs);
> > > > > +}
> > > > > +
> > > > > +static inline void pcie_app_wr(struct intel_pcie_port *lpp, u32 val, u32 ofs)
> > > > > +{
> > > > > +	writel(val, lpp->app_base + ofs);
> > > > > +}
> > > > > +
> > > > > +static void pcie_app_wr_mask(struct intel_pcie_port *lpp,
> > > > > +			     u32 mask, u32 val, u32 ofs)
> > > > > +{
> > > > > +	pcie_update_bits(lpp->app_base, mask, val, ofs);
> > > > > +}
> > > > > +
> > > > > +static inline u32 pcie_rc_cfg_rd(struct intel_pcie_port *lpp, u32 ofs)
> > > > > +{
> > > > > +	return dw_pcie_readl_dbi(&lpp->pci, ofs);
> > > > > +}
> > > > > +
> > > > > +static inline void pcie_rc_cfg_wr(struct intel_pcie_port *lpp, u32 val, u32 ofs)
> > > > > +{
> > > > > +	dw_pcie_writel_dbi(&lpp->pci, ofs, val);
> > > > > +}
> > > > > +
> > > > > +static void pcie_rc_cfg_wr_mask(struct intel_pcie_port *lpp,
> > > > > +				u32 mask, u32 val, u32 ofs)
> > > > > +{
> > > > > +	pcie_update_bits(lpp->pci.dbi_base, mask, val, ofs);
> > > > > +}
> > > > > +
> > > > > +static void intel_pcie_mem_iatu(struct intel_pcie_port *lpp)
> > > > > +{
> > > > > +	struct pcie_port *pp = &lpp->pci.pp;
> > > > > +	phys_addr_t cpu_addr = pp->mem_base;
> > > > > +
> > > > > +	dw_pcie_prog_outbound_atu(&lpp->pci, PCIE_ATU_REGION_INDEX0,
> > > > > +				  PCIE_ATU_TYPE_MEM, cpu_addr,
> > > > > +				  pp->mem_base, pp->mem_size);
> > > > > +}
> > > > > +
> > > > > +static void intel_pcie_ltssm_enable(struct intel_pcie_port *lpp)
> > > > > +{
> > > > > +	pcie_app_wr_mask(lpp, PCIE_APP_CCR_LTSSM_ENABLE,
> > > > > +			 PCIE_APP_CCR_LTSSM_ENABLE, PCIE_APP_CCR);
> > > > > +}
> > > > > +
> > > > > +static void intel_pcie_ltssm_disable(struct intel_pcie_port *lpp)
> > > > > +{
> > > > > +	pcie_app_wr_mask(lpp, PCIE_APP_CCR_LTSSM_ENABLE, 0, PCIE_APP_CCR);
> > > > > +}
> > > > > +
> > > > > +static const char *pcie_link_gen_to_str(int gen)
> > > > > +{
> > > > > +	switch (gen) {
> > > > > +	case PCIE_LINK_SPEED_GEN1:
> > > > > +		return "2.5";
> > > > > +	case PCIE_LINK_SPEED_GEN2:
> > > > > +		return "5.0";
> > > > > +	case PCIE_LINK_SPEED_GEN3:
> > > > > +		return "8.0";
> > > > > +	case PCIE_LINK_SPEED_GEN4:
> > > > > +		return "16.0";
> > > > > +	default:
> > > > > +		return "???";
> > > > > +	}
> > > > > +}
> > > > > +
> > > > > +static void intel_pcie_link_setup(struct intel_pcie_port *lpp)
> > > > > +{
> > > > > +	u32 val;
> > > > > +
> > > > > +	val = pcie_rc_cfg_rd(lpp, PCIE_LCAP);
> > > > > +	lpp->max_speed = FIELD_GET(PCIE_LCAP_MAX_LINK_SPEED, val);
> > > > > +	lpp->max_width = FIELD_GET(PCIE_LCAP_MAX_LENGTH_WIDTH, val);
> > > > > +
> > > > > +	val = pcie_rc_cfg_rd(lpp, PCIE_LCTLSTS);
> > > > > +
> > > > > +	val &= ~(PCIE_LCTLSTS_LINK_DISABLE | PCIE_LCTLSTS_ASPM_ENABLE);
> > > > > +	val |= (PCIE_LCTLSTS_SLOT_CLK_CFG | PCIE_LCTLSTS_COM_CLK_CFG |
> > > > > +		PCIE_LCTLSTS_RCB128);
> > > > > +	pcie_rc_cfg_wr(lpp, val, PCIE_LCTLSTS);
> > > > > +}
> > > > > +
> > > > > +static void intel_pcie_max_speed_setup(struct intel_pcie_port *lpp)
> > > > > +{
> > > > > +	u32 reg, val;
> > > > > +
> > > > > +	reg = pcie_rc_cfg_rd(lpp, PCIE_LCTLSTS2);
> > > > > +	switch (lpp->link_gen) {
> > > > > +	case PCIE_LINK_SPEED_GEN1:
> > > > > +		reg &= ~PCIE_LCTLSTS2_TGT_LINK_SPEED;
> > > > > +		reg |= PCIE_LCTLSTS2_HW_AUTO_DIS |
> > > > > +			PCIE_LCTLSTS2_TGT_LINK_SPEED_25GT;
> > > > > +		break;
> > > > > +	case PCIE_LINK_SPEED_GEN2:
> > > > > +		reg &= ~PCIE_LCTLSTS2_TGT_LINK_SPEED;
> > > > > +		reg |= PCIE_LCTLSTS2_HW_AUTO_DIS |
> > > > > +			PCIE_LCTLSTS2_TGT_LINK_SPEED_5GT;
> > > > > +		break;
> > > > > +	case PCIE_LINK_SPEED_GEN3:
> > > > > +		reg &= ~PCIE_LCTLSTS2_TGT_LINK_SPEED;
> > > > > +		reg |= PCIE_LCTLSTS2_HW_AUTO_DIS |
> > > > > +			PCIE_LCTLSTS2_TGT_LINK_SPEED_8GT;
> > > > > +		break;
> > > > > +	case PCIE_LINK_SPEED_GEN4:
> > > > > +		reg &= ~PCIE_LCTLSTS2_TGT_LINK_SPEED;
> > > > > +		reg |= PCIE_LCTLSTS2_HW_AUTO_DIS |
> > > > > +			PCIE_LCTLSTS2_TGT_LINK_SPEED_16GT;
> > > > > +		break;
> > > > > +	default:
> > > > > +		/* Use hardware capability */
> > > > > +		val = pcie_rc_cfg_rd(lpp, PCIE_LCAP);
> > > > > +		val = FIELD_GET(PCIE_LCAP_MAX_LINK_SPEED, val);
> > > > > +		reg &= ~PCIE_LCTLSTS2_HW_AUTO_DIS;
> > > > > +		reg |= val;
> > > > > +		break;
> > > > > +	}
> > > > > +	pcie_rc_cfg_wr(lpp, reg, PCIE_LCTLSTS2);
> > > > > +}
> > > > > +
> > > > > +static void intel_pcie_speed_change_enable(struct intel_pcie_port *lpp)
> > > > > +{
> > > > > +	u32 mask, val;
> > > > > +
> > > > > +	mask = PORT_LOGIC_SPEED_CHANGE | PCIE_PORT_LOGIC_FTS;
> > > > > +	val = PORT_LOGIC_SPEED_CHANGE | PCIE_PORT_LOGIC_DFT_FTS_NUM;
> > > > > +
> > > > > +	pcie_rc_cfg_wr_mask(lpp, mask, val, PCIE_LINK_WIDTH_SPEED_CONTROL);
> > > > > +}
> > > > > +
> > > > > +static void intel_pcie_speed_change_disable(struct intel_pcie_port *lpp)
> > > > > +{
> > > > > +	pcie_rc_cfg_wr_mask(lpp, PORT_LOGIC_SPEED_CHANGE, 0,
> > > > > +			    PCIE_LINK_WIDTH_SPEED_CONTROL);
> > > > > +}
> > > > > +
> > > > > +static void intel_pcie_max_link_width_setup(struct intel_pcie_port *lpp)
> > > > > +{
> > > > > +	u32 mask, val;
> > > > > +
> > > > > +	/* HW auto bandwidth negotiation must be enabled */
> > > > > +	pcie_rc_cfg_wr_mask(lpp, PCIE_LCTLSTS_HW_AW_DIS, 0, PCIE_LCTLSTS);
> > > > > +
> > > > > +	mask = PCIE_DIRECT_LINK_WIDTH_CHANGE | PCIE_TARGET_LINK_WIDTH;
> > > > > +	val = PCIE_DIRECT_LINK_WIDTH_CHANGE | lpp->lanes;
> > > > > +	pcie_rc_cfg_wr_mask(lpp, mask, val, PCIE_MULTI_LANE_CTRL);
> > > > Is this identical functionality to the writing of PCIE_PORT_LINK_CONTROL
> > > > in dw_pcie_setup?
> > > > 
> > > > I ask because if the user sets num-lanes in the DT, will it have the
> > > > desired effect?
> > > The intel_pcie_max_link_width_setup() function is called by the sysfs attribute handler pcie_width_store() to change the link width on the fly.
> > Indeed, but a user may also set num-lanes in the device tree. I'm wondering
> > if, when set in device-tree, it will have the desired effect. Because I don't
> > see anything similar to PCIE_LCTLSTS_HW_AW_DIS in dw_pcie_setup which is what
> > your function does here.
> > 
> > I guess I'm trying to avoid the situation where num-lanes doesn't have the
> > desired effect and the only way to set num-lanes is through the sysfs
> > control.
> I will check this and get back to you.
> > 
> > > > > +}
> > > > > +
> > > > > +static void intel_pcie_port_logic_setup(struct intel_pcie_port *lpp)
> > > > > +{
> > > > > +	u32 val, mask, fts;
> > > > > +
> > > > > +	switch (lpp->max_speed) {
> > > > > +	case PCIE_LINK_SPEED_GEN1:
> > > > > +	case PCIE_LINK_SPEED_GEN2:
> > > > > +		fts = PCIE_AFR_GEN12_FTS_NUM_DFT;
> > > > > +		break;
> > > > > +	case PCIE_LINK_SPEED_GEN3:
> > > > > +		fts = PCIE_AFR_GEN3_FTS_NUM_DFT;
> > > > > +		break;
> > > > > +	case PCIE_LINK_SPEED_GEN4:
> > > > > +		fts = PCIE_AFR_GEN4_FTS_NUM_DFT;
> > > > > +		break;
> > > > > +	default:
> > > > > +		fts = PCIE_AFR_GEN12_FTS_NUM_DFT;
> > > > > +		break;
> > > > > +	}
> > > > > +	mask = PCIE_AFR_FTS_NUM | PCIE_AFR_COM_FTS_NUM;
> > > > > +	val = FIELD_PREP(PCIE_AFR_FTS_NUM, fts) |
> > > > > +	       FIELD_PREP(PCIE_AFR_COM_FTS_NUM, fts);
> > > > > +	pcie_rc_cfg_wr_mask(lpp, mask, val, PCIE_AFR);
> > > > > +
> > > > > +	/* Port Link Control Register */
> > > > > +	pcie_rc_cfg_wr_mask(lpp, PCIE_PLCR_DLL_LINK_EN,
> > > > > +			    PCIE_PLCR_DLL_LINK_EN, PCIE_PORT_LINK_CONTROL);
> > > > > +}
> > > > > +
> > > > > +static void intel_pcie_upconfig_setup(struct intel_pcie_port *lpp)
> > > > > +{
> > > > > +	pcie_rc_cfg_wr_mask(lpp, PCIE_UPCONFIG_SUPPORT,
> > > > > +			    PCIE_UPCONFIG_SUPPORT, PCIE_MULTI_LANE_CTRL);
> > > > > +}
> > > > > +
> > > > > +static void intel_pcie_rc_setup(struct intel_pcie_port *lpp)
> > > > > +{
> > > > > +	intel_pcie_ltssm_disable(lpp);
> > > > > +	intel_pcie_link_setup(lpp);
> > > > > +	dw_pcie_setup_rc(&lpp->pci.pp);
> > > > > +	intel_pcie_upconfig_setup(lpp);
> > > > > +
> > > > > +	intel_pcie_max_speed_setup(lpp);
> > > > > +	intel_pcie_speed_change_enable(lpp);
> > > > > +	intel_pcie_port_logic_setup(lpp);
> > > > > +	intel_pcie_mem_iatu(lpp);
> > > > Doesn't dw_pcie_setup_rc do the same as intel_pcie_mem_iatu?
> > > Thanks for pointing it out. dw_pcie_setup_rc() does the same thing.
> > > intel_pcie_mem_iatu() can be removed.
> > Excellent.
> > 
> > > > > +}
> > > > > +
> > > > > +static int intel_pcie_ep_rst_init(struct intel_pcie_port *lpp)
> > > > > +{
> > > > > +	struct device *dev = lpp->pci.dev;
> > > > > +	int ret;
> > > > > +
> > > > > +	lpp->reset_gpio = devm_gpiod_get(dev, "reset", GPIOD_OUT_LOW);
> > > > > +	if (IS_ERR(lpp->reset_gpio)) {
> > > > > +		ret = PTR_ERR(lpp->reset_gpio);
> > > > > +		if (ret != -EPROBE_DEFER)
> > > > > +			dev_err(dev, "failed to request PCIe GPIO: %d\n", ret);
> > > > > +		return ret;
> > > > > +	}
> > > > > +	/* Make initial reset last for 100us */
> > > > > +	usleep_range(100, 200);
> > > > > +
> > > > > +	return 0;
> > > > > +}
> > > > > +
> > > > > +static void intel_pcie_core_rst_assert(struct intel_pcie_port *lpp)
> > > > > +{
> > > > > +	reset_control_assert(lpp->core_rst);
> > > > > +}
> > > > > +
> > > > > +static void intel_pcie_core_rst_deassert(struct intel_pcie_port *lpp)
> > > > > +{
> > > > > +	/*
> > > > > +	 * One micro-second delay to make sure the reset pulse
> > > > > +	 * wide enough so that core reset is clean.
> > > > > +	 */
> > > > > +	udelay(1);
> > > > > +	reset_control_deassert(lpp->core_rst);
> > > > > +
> > > > > +	/*
> > > > > +	 * Some SoC core reset also reset PHY, more delay needed
> > > > > +	 * to make sure the reset process is done.
> > > > > +	 */
> > > > > +	usleep_range(1000, 2000);
> > > > > +}
> > > > > +
> > > > > +static void intel_pcie_device_rst_assert(struct intel_pcie_port *lpp)
> > > > > +{
> > > > > +	gpiod_set_value_cansleep(lpp->reset_gpio, 1);
> > > > > +}
> > > > > +
> > > > > +static void intel_pcie_device_rst_deassert(struct intel_pcie_port *lpp)
> > > > > +{
> > > > > +	msleep(lpp->rst_interval);
> > > > > +	gpiod_set_value_cansleep(lpp->reset_gpio, 0);
> > > > > +}
> > > > > +
> > > > > +static int intel_pcie_app_logic_setup(struct intel_pcie_port *lpp)
> > > > > +{
> > > > > +	intel_pcie_device_rst_deassert(lpp);
> > > > > +	intel_pcie_ltssm_enable(lpp);
> > > > > +
> > > > > +	return dw_pcie_wait_for_link(&lpp->pci);
> > > > > +}
> > > > > +
> > > > > +static irqreturn_t intel_pcie_core_isr(int irq, void *arg)
> > > > > +{
> > > > > +	struct intel_pcie_port *lpp = arg;
> > > > > +	u32 val, reg;
> > > > > +
> > > > > +	reg = pcie_app_rd(lpp, PCIE_APP_IRNCR);
> > > > > +	val = reg & PCIE_APP_IRN_INT;
> > > > > +
> > > > > +	pcie_app_wr(lpp, val, PCIE_APP_IRNCR);
> > > > > +
> > > > > +	trace_printk("PCIe misc interrupt status 0x%x\n", reg);
> > > > > +	return IRQ_HANDLED;
> > > > > +}
> > > > Why do we bother handling this interrupt?
> > > This helps during debugging.
> > I think it should be removed. It adds very little value to most users.
> > 
> > Most users won't have access to the datasheets to debug this properly, and
> > in any case if they could, then they would be competent to add an interrupt
> > handler themselves.
> IMO, having this helps to get the basic hardware interrupt status during
> debugging.
> And users can also enhance the handler as needed.
> I think keeping it is more beneficial than removing it.
> Please let me know your view.

I'd much prefer to remove this.

> > 
> > > > > +
> > > > > +static int intel_pcie_setup_irq(struct intel_pcie_port *lpp)
> > > > > +{
> > > > > +	struct device *dev = lpp->pci.dev;
> > > > > +	struct platform_device *pdev;
> > > > > +	char *irq_name;
> > > > > +	int irq, ret;
> > > > > +
> > > > > +	pdev = to_platform_device(dev);
> > > > > +	irq = platform_get_irq(pdev, 0);
> > > > > +	if (irq < 0) {
> > > > > +		dev_err(dev, "missing sys integrated irq resource\n");
> > > > > +		return irq;
> > > > > +	}
> > > > > +
> > > > > +	irq_name = devm_kasprintf(dev, GFP_KERNEL, "pcie_misc%d", lpp->id);
> > > > > +	if (!irq_name) {
> > > > > +		dev_err(dev, "failed to alloc irq name\n");
> > > > > +		return -ENOMEM;
> > > > > +	}
> > > > > +
> > > > > +	ret = devm_request_irq(dev, irq, intel_pcie_core_isr,
> > > > > +			       IRQF_SHARED, irq_name, lpp);
> > > > > +	if (ret) {
> > > > > +		dev_err(dev, "request irq %d failed\n", irq);
> > > > > +		return ret;
> > > > > +	}
> > > > > +	/* Enable integrated interrupts */
> > > > > +	pcie_app_wr_mask(lpp, PCIE_APP_IRN_INT,
> > > > > +			 PCIE_APP_IRN_INT, PCIE_APP_IRNEN);
> > > > > +
> > > > > +	return ret;
> > > > > +}
> > > > > +
> > > > > +static void intel_pcie_core_irq_disable(struct intel_pcie_port *lpp)
> > > > > +{
> > > > > +	pcie_app_wr(lpp, 0, PCIE_APP_IRNEN);
> > > > > +	pcie_app_wr(lpp, PCIE_APP_IRN_INT,  PCIE_APP_IRNCR);
> > > > > +}
> > > > > +
> > > > > +static int intel_pcie_get_resources(struct platform_device *pdev)
> > > > > +{
> > > > > +	struct intel_pcie_port *lpp;
> > > > > +	struct resource *res;
> > > > > +	struct dw_pcie *pci;
> > > > > +	struct device *dev;
> > > > > +	int ret;
> > > > > +
> > > > > +	lpp = platform_get_drvdata(pdev);
> > > > > +	pci = &lpp->pci;
> > > > > +	dev = pci->dev;
> > > > > +
> > > > > +	ret = device_property_read_u32(dev, "linux,pci-domain", &lpp->id);
> > > > > +	if (ret) {
> > > > > +		dev_err(dev, "failed to get domain id, errno %d\n", ret);
> > > > > +		return ret;
> > > > > +	}
> > > > > +
> > > > > +	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi");
> > > > > +	if (!res)
> > > > > +		return -ENXIO;
> > > > > +
> > > > > +	pci->dbi_base = devm_ioremap_resource(dev, res);
> > > > > +	if (IS_ERR(pci->dbi_base))
> > > > > +		return PTR_ERR(pci->dbi_base);
> > > > > +
> > > > > +	lpp->core_clk = devm_clk_get(dev, NULL);
> > > > > +	if (IS_ERR(lpp->core_clk)) {
> > > > > +		ret = PTR_ERR(lpp->core_clk);
> > > > > +		if (ret != -EPROBE_DEFER)
> > > > > +			dev_err(dev, "failed to get clks: %d\n", ret);
> > > > > +		return ret;
> > > > > +	}
> > > > > +
> > > > > +	lpp->core_rst = devm_reset_control_get(dev, NULL);
> > > > > +	if (IS_ERR(lpp->core_rst)) {
> > > > > +		ret = PTR_ERR(lpp->core_rst);
> > > > > +		if (ret != -EPROBE_DEFER)
> > > > > +			dev_err(dev, "failed to get resets: %d\n", ret);
> > > > > +		return ret;
> > > > > +	}
> > > > > +
> > > > > +	ret = device_property_match_string(dev, "device_type", "pci");
> > > > > +	if (ret) {
> > > > > +		dev_err(dev, "failed to find pci device type: %d\n", ret);
> > > > > +		return ret;
> > > > > +	}
> > > > > +
> > > > > +	if (device_property_read_u32(dev, "reset-assert-ms",
> > > > > +				     &lpp->rst_interval))
> > > > > +		lpp->rst_interval = RST_INTRVL_DFT_MS;
> > > > > +
> > > > > +	if (device_property_read_u32(dev, "max-link-speed", &lpp->link_gen))
> > > > > +		lpp->link_gen = 0; /* Fallback to auto */
> > > > Is it possible to use of_pci_get_max_link_speed here instead?
> > > Thanks for pointing it out. of_pci_get_max_link_speed() can be used here. I will
> > > update it in the next patch revision.
> > > > > +
> > > > > +	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "app");
> > > > > +	if (!res)
> > > > > +		return -ENXIO;
> > > > > +
> > > > > +	lpp->app_base = devm_ioremap_resource(dev, res);
> > > > > +	if (IS_ERR(lpp->app_base))
> > > > > +		return PTR_ERR(lpp->app_base);
> > > > > +
> > > > > +	ret = intel_pcie_ep_rst_init(lpp);
> > > > > +	if (ret)
> > > > > +		return ret;
> > > > Given that this is called from a function '..._get_resources' I don't think
> > > > we should be resetting anything here.
> > > Agree. I will move it out of get_resources().
> > > > > +
> > > > > +	lpp->phy = devm_phy_get(dev, "pciephy");
> > > > > +	if (IS_ERR(lpp->phy)) {
> > > > > +		ret = PTR_ERR(lpp->phy);
> > > > > +		if (ret != -EPROBE_DEFER)
> > > > > +			dev_err(dev, "couldn't get pcie-phy: %d\n", ret);
> > > > > +		return ret;
> > > > > +	}
> > > > > +	return 0;
> > > > > +}
> > > > > +
> > > > > +static void intel_pcie_deinit_phy(struct intel_pcie_port *lpp)
> > > > > +{
> > > > > +	phy_exit(lpp->phy);
> > > > > +}
> > > > > +
> > > > > +static int intel_pcie_wait_l2(struct intel_pcie_port *lpp)
> > > > > +{
> > > > > +	u32 value;
> > > > > +	int ret;
> > > > > +
> > > > > +	if (lpp->max_speed < PCIE_LINK_SPEED_GEN3)
> > > > > +		return 0;
> > > > > +
> > > > > +	/* Send PME_TURN_OFF message */
> > > > > +	pcie_app_wr_mask(lpp, PCIE_APP_MSG_XMT_PM_TURNOFF,
> > > > > +			 PCIE_APP_MSG_XMT_PM_TURNOFF, PCIE_APP_MSG_CR);
> > > > > +
> > > > > +	/* Read PMC status and wait for falling into L2 link state */
> > > > > +	ret = readl_poll_timeout(lpp->app_base + PCIE_APP_PMC, value,
> > > > > +				 (value & PCIE_APP_PMC_IN_L2), 20,
> > > > > +				 jiffies_to_usecs(5 * HZ));
> > > > > +	if (ret)
> > > > > +		dev_err(lpp->pci.dev, "PCIe link enter L2 timeout!\n");
> > > > > +
> > > > > +	return ret;
> > > > > +}
> > > > > +
> > > > > +static void intel_pcie_turn_off(struct intel_pcie_port *lpp)
> > > > > +{
> > > > > +	if (dw_pcie_link_up(&lpp->pci))
> > > > > +		intel_pcie_wait_l2(lpp);
> > > > > +
> > > > > +	/* Put EP in reset state */
> > > > EP?
> > > Endpoint device. I will update it.
> > Is this not a host bridge controller?
> It is PERST#, which signals a hardware reset to the endpoint.
>         /* Put EP in reset state */
>         intel_pcie_device_rst_assert(lpp);

OK.

> > 
> > > > > +	intel_pcie_device_rst_assert(lpp);
> > > > > +	pcie_rc_cfg_wr_mask(lpp, PCI_COMMAND_MEMORY, 0, PCI_COMMAND);
> > > > > +}
> > > > > +
> > > > > +static int intel_pcie_host_setup(struct intel_pcie_port *lpp)
> > > > > +{
> > > > > +	int ret;
> > > > > +
> > > > > +	intel_pcie_core_rst_assert(lpp);
> > > > > +	intel_pcie_device_rst_assert(lpp);
> > > > > +
> > > > > +	ret = phy_init(lpp->phy);
> > > > > +	if (ret)
> > > > > +		return ret;
> > > > > +
> > > > > +	intel_pcie_core_rst_deassert(lpp);
> > > > > +
> > > > > +	ret = clk_prepare_enable(lpp->core_clk);
> > > > > +	if (ret) {
> > > > > +		dev_err(lpp->pci.dev, "Core clock enable failed: %d\n", ret);
> > > > > +		goto clk_err;
> > > > > +	}
> > > > > +
> > > > > +	intel_pcie_rc_setup(lpp);
> > > > > +	ret = intel_pcie_app_logic_setup(lpp);
> > > > > +	if (ret)
> > > > > +		goto app_init_err;
> > > > > +
> > > > > +	ret = intel_pcie_setup_irq(lpp);
> > > > > +	if (!ret)
> > > > > +		return ret;
> > > > > +
> > > > > +	intel_pcie_turn_off(lpp);
> > > > > +app_init_err:
> > > > > +	clk_disable_unprepare(lpp->core_clk);
> > > > > +clk_err:
> > > > > +	intel_pcie_core_rst_assert(lpp);
> > > > > +	intel_pcie_deinit_phy(lpp);
> > > > > +	return ret;
> > > > > +}
> > > > > +
> > > > > +static ssize_t
> > > > > +pcie_link_status_show(struct device *dev, struct device_attribute *attr,
> > > > > +		      char *buf)
> > > > > +{
> > > > > +	u32 reg, width, gen;
> > > > > +	struct intel_pcie_port *lpp;
> > > > > +
> > > > > +	lpp = dev_get_drvdata(dev);
> > > > > +
> > > > > +	reg = pcie_rc_cfg_rd(lpp, PCIE_LCTLSTS);
> > > > > +	width = FIELD_GET(PCIE_LCTLSTS_NEGOTIATED_LINK_WIDTH, reg);
> > > > > +	gen = FIELD_GET(PCIE_LCTLSTS_LINK_SPEED, reg);
> > > > > +	if (gen > lpp->max_speed)
> > > > > +		return -EINVAL;
> > > > > +
> > > > > +	return sprintf(buf, "Port %2u Width x%u Speed %s GT/s\n", lpp->id,
> > > > > +		       width, pcie_link_gen_to_str(gen));
> > > > > +}
> > > > > +static DEVICE_ATTR_RO(pcie_link_status);
> > > > > +
> > > > > +static ssize_t pcie_speed_store(struct device *dev,
> > > > > +				struct device_attribute *attr,
> > > > > +				const char *buf, size_t len)
> > > > > +{
> > > > > +	struct intel_pcie_port *lpp;
> > > > > +	unsigned long val;
> > > > > +	int ret;
> > > > > +
> > > > > +	lpp = dev_get_drvdata(dev);
> > > > > +
> > > > > +	ret = kstrtoul(buf, 10, &val);
> > > > > +	if (ret)
> > > > > +		return ret;
> > > > > +
> > > > > +	if (val > lpp->max_speed)
> > > > > +		return -EINVAL;
> > > > > +
> > > > > +	lpp->link_gen = val;
> > > > > +	intel_pcie_max_speed_setup(lpp);
> > > > > +	intel_pcie_speed_change_disable(lpp);
> > > > > +	intel_pcie_speed_change_enable(lpp);
> > > > > +
> > > > > +	return len;
> > > > > +}
> > > > > +static DEVICE_ATTR_WO(pcie_speed);
> > > > > +
> > > > > +/*
> > > > > + * Link width change on the fly is not always successful.
> > > > > + * It also depends on the partner.
> > > > > + */
> > > > > +static ssize_t pcie_width_store(struct device *dev,
> > > > > +				struct device_attribute *attr,
> > > > > +				const char *buf, size_t len)
> > > > > +{
> > > > > +	struct intel_pcie_port *lpp;
> > > > > +	unsigned long val;
> > > > > +
> > > > > +	lpp = dev_get_drvdata(dev);
> > > > > +
> > > > > +	if (kstrtoul(buf, 10, &val))
> > > > > +		return -EINVAL;
> > > > > +
> > > > > +	if (val > lpp->max_width)
> > > > > +		return -EINVAL;
> > > > > +
> > > > > +	lpp->lanes = val;
> > > > > +	intel_pcie_max_link_width_setup(lpp);
> > > > > +
> > > > > +	return len;
> > > > > +}
> > > > > +static DEVICE_ATTR_WO(pcie_width);
> > > > You mentioned that a use-case for changing width/speed on the fly was to
> > > > control power consumption (and this also helps debugging issues). As I
> > > > understand there is no current support for this in the kernel - yet it is
> > > > something that would provide value.
> > > > 
> > > > I haven't looked in much detail, however as I understand the PCI spec
> > > > allows an upstream partner to change the link speed and retrain. (I'm not
> > > > sure about link width). Given that we already have
> > > > [current,max]_link_[speed,width] in sysfs for each PCI device, it would
> > > > seem natural to extend this to allow for writing a max width or speed.
> > > > 
> > > > So ideally this type of thing would be moved to the core or at least in
> > > > the dwc driver. This way the benefits can be applied to more devices on
> > > > larger PCI fabrics - Though perhaps others here will have a different view
> > > > and I'm keen to hear them.
> > > > 
> > > > I'm keen to limit the differences between the DWC controller drivers and
> > > > unify common code - thus it would be helpful to have a justification as to
> > > > why this is only relevant for this controller.
> > > > 
> > > > For user-space only control, it is possible to achieve what you have here
> > > > with userspace utilities (something like setpci) (assuming the standard
> > > > looking registers you currently access are exposed in the normal config
> > > > space way - through PCIe capabilities).
> > > > 
> > > > My suggestion would be to drop these changes and later add something that
> > > > can benefit more devices. In any case if these changes stay within this
> > > > driver then it would be helpful to move them to a separate patch.
> > > Sure, I will submit these entities in a separate patch.
> > Please ensure that all supporting macros, functions and defines go with that
> > other patch as well please.
> Sure.
> > 
> > > > > +
> > > > > +static struct attribute *pcie_cfg_attrs[] = {
> > > > > +	&dev_attr_pcie_link_status.attr,
> > > > > +	&dev_attr_pcie_speed.attr,
> > > > > +	&dev_attr_pcie_width.attr,
> > > > > +	NULL,
> > > > > +};
> > > > > +ATTRIBUTE_GROUPS(pcie_cfg);
> > > > > +
> > > > > +static int intel_pcie_sysfs_init(struct intel_pcie_port *lpp)
> > > > > +{
> > > > > +	return devm_device_add_groups(lpp->pci.dev, pcie_cfg_groups);
> > > > > +}
> > > > > +
> > > > > +static void __intel_pcie_remove(struct intel_pcie_port *lpp)
> > > > > +{
> > > > > +	pcie_rc_cfg_wr_mask(lpp, PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER,
> > > > > +			    0, PCI_COMMAND);
> > > > > +	intel_pcie_core_irq_disable(lpp);
> > > > > +	intel_pcie_turn_off(lpp);
> > > > > +	clk_disable_unprepare(lpp->core_clk);
> > > > > +	intel_pcie_core_rst_assert(lpp);
> > > > > +	intel_pcie_deinit_phy(lpp);
> > > > > +}
> > > > > +
> > > > > +static int intel_pcie_remove(struct platform_device *pdev)
> > > > > +{
> > > > > +	struct intel_pcie_port *lpp = platform_get_drvdata(pdev);
> > > > > +	struct pcie_port *pp = &lpp->pci.pp;
> > > > > +
> > > > > +	dw_pcie_host_deinit(pp);
> > > > > +	__intel_pcie_remove(lpp);
> > > > > +
> > > > > +	return 0;
> > > > > +}
> > > > > +
> > > > > +static int __maybe_unused intel_pcie_suspend_noirq(struct device *dev)
> > > > > +{
> > > > > +	struct intel_pcie_port *lpp = dev_get_drvdata(dev);
> > > > > +	int ret;
> > > > > +
> > > > > +	intel_pcie_core_irq_disable(lpp);
> > > > > +	ret = intel_pcie_wait_l2(lpp);
> > > > > +	if (ret)
> > > > > +		return ret;
> > > > > +
> > > > > +	intel_pcie_deinit_phy(lpp);
> > > > > +	clk_disable_unprepare(lpp->core_clk);
> > > > > +	return ret;
> > > > > +}
> > > > > +
> > > > > +static int __maybe_unused intel_pcie_resume_noirq(struct device *dev)
> > > > > +{
> > > > > +	struct intel_pcie_port *lpp = dev_get_drvdata(dev);
> > > > > +
> > > > > +	return intel_pcie_host_setup(lpp);
> > > > > +}
> > > > > +
> > > > > +static int intel_pcie_rc_init(struct pcie_port *pp)
> > > > > +{
> > > > > +	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
> > > > > +	struct intel_pcie_port *lpp = dev_get_drvdata(pci->dev);
> > > > > +	int ret;
> > > > > +
> > > > > +	/* RC/host initialization */
> > > > > +	ret = intel_pcie_host_setup(lpp);
> > > > > +	if (ret)
> > > > > +		return ret;
> > > > Insert new line here.
> > > Ok.
> > > > > +	ret = intel_pcie_sysfs_init(lpp);
> > > > > +	if (ret)
> > > > > +		__intel_pcie_remove(lpp);
> > > > > +	return ret;
> > > > > +}
> > > > > +
> > > > > +int intel_pcie_msi_init(struct pcie_port *pp)
> > > > > +{
> > > > > +	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
> > > > > +
> > > > > +	dev_dbg(pci->dev, "PCIe MSI/MSIx is handled by MSI in x86 processor\n");
> > > > What about other processors? (Noting that this driver doesn't depend on
> > > > any specific ARCH in the KConfig).
> > > Agree. I will mark the dependency in Kconfig.
> > OK, please also see how other drivers use the COMPILE_TEST Kconfig option.
> OK, sure.
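A sketch of the Kconfig shape being discussed (illustrative only — the symbol name and dependency list are assumptions following the pattern of the neighbouring DWC controller entries):

```kconfig
config PCIE_INTEL_AXI
	bool "Intel AXI PCIe host controller support"
	depends on PCI && PCI_MSI
	depends on X86 || COMPILE_TEST
	select PCIE_DW_HOST
```

The `|| COMPILE_TEST` arm is what lets build bots compile the driver on other architectures without it being selectable for real use there.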
> > 
> > I'd suggest that the dev_dbg just becomes a code comment.
> > 
> > > > > +	return 0;
> > > > > +}
> > > > > +
> > > > > +u64 intel_pcie_cpu_addr(struct dw_pcie *pcie, u64 cpu_addr)
> > > > > +{
> > > > > +	return cpu_addr + BUS_IATU_OFFS;
> > > > > +}
> > > > > +
> > > > > +static const struct dw_pcie_ops intel_pcie_ops = {
> > > > > +	.cpu_addr_fixup = intel_pcie_cpu_addr,
> > > > > +};
> > > > > +
> > > > > +static const struct dw_pcie_host_ops intel_pcie_dw_ops = {
> > > > > +	.host_init =		intel_pcie_rc_init,
> > > > > +	.msi_host_init =	intel_pcie_msi_init,
> > > > > +};
> > > > > +
> > > > > +static const struct intel_pcie_soc pcie_data = {
> > > > > +	.pcie_ver =		0x520A,
> > > > > +	.pcie_atu_offset =	0xC0000,
> > > > > +	.num_viewport =		3,
> > > > > +};
> > > > > +
> > > > > +static int intel_pcie_probe(struct platform_device *pdev)
> > > > > +{
> > > > > +	const struct intel_pcie_soc *data;
> > > > > +	struct device *dev = &pdev->dev;
> > > > > +	struct intel_pcie_port *lpp;
> > > > > +	struct pcie_port *pp;
> > > > > +	struct dw_pcie *pci;
> > > > > +	int ret;
> > > > > +
> > > > > +	lpp = devm_kzalloc(dev, sizeof(*lpp), GFP_KERNEL);
> > > > > +	if (!lpp)
> > > > > +		return -ENOMEM;
> > > > > +
> > > > > +	platform_set_drvdata(pdev, lpp);
> > > > > +	pci = &lpp->pci;
> > > > > +	pci->dev = dev;
> > > > > +	pp = &pci->pp;
> > > > > +
> > > > > +	ret = intel_pcie_get_resources(pdev);
> > > > > +	if (ret)
> > > > > +		return ret;
> > > > > +
> > > > > +	data = device_get_match_data(dev);
> > > > > +	pci->ops = &intel_pcie_ops;
> > > > > +	pci->version = data->pcie_ver;
> > > > > +	pci->atu_base = pci->dbi_base + data->pcie_atu_offset;
> > > > > +	pp->ops = &intel_pcie_dw_ops;
> > > > > +
> > > > > +	ret = dw_pcie_host_init(pp);
> > > > > +	if (ret) {
> > > > > +		dev_err(dev, "cannot initialize host\n");
> > > > > +		return ret;
> > > > > +	}
> > > > Add a new line after the closing brace.
> > > Ok
> > > > > +	/* Intel PCIe doesn't configure IO region, so configure
> > > > > +	 * viewport to not to access IO region during register
> > > > > +	 * read write operations.
> > > > > +	 */
> > > > > +	pci->num_viewport = data->num_viewport;
> > > > > +	dev_info(dev,
> > > > > +		 "Intel AXI PCIe Root Complex Port %d Init Done\n", lpp->id);
> > > > > +	return ret;
> > > > > +}
> > > > > +
> > > > > +static const struct dev_pm_ops intel_pcie_pm_ops = {
> > > > > +	SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(intel_pcie_suspend_noirq,
> > > > > +				      intel_pcie_resume_noirq)
> > > > > +};
> > > > > +
> > > > > +static const struct of_device_id of_intel_pcie_match[] = {
> > > > > +	{ .compatible = "intel,lgm-pcie", .data = &pcie_data },
> > > > > +	{}
> > > > > +};
> > > > > +
> > > > > +static struct platform_driver intel_pcie_driver = {
> > > > > +	.probe = intel_pcie_probe,
> > > > > +	.remove = intel_pcie_remove,
> > > > > +	.driver = {
> > > > > +		.name = "intel-lgm-pcie",
> > > > Is there a reason why we use intel-lgm-pcie here and pcie-intel-axi
> > > > elsewhere? What does lgm mean?
> > > LGM is the name of the Intel SoC. I will rename it to pcie-intel-axi to be
> > > generic.
> > I'm keen to ensure that it is consistently named. I've seen other comments
> > regarding what the name should be - I don't have a strong opinion though
> > I do think that *-axi may be too generic.
> 
> OK, I will check and get back to you on this.
> 
> Regards,

Thanks,

Andrew Murray

> 
> Dilip
> 
> > 
> > Thanks,
> > 
> > Andrew Murray
> > 
> > > > Thanks,
> > > > 
> > > > Andrew Murray
> > > > 
> > > > > +		.of_match_table = of_intel_pcie_match,
> > > > > +		.pm = &intel_pcie_pm_ops,
> > > > > +	},
> > > > > +};
> > > > > +builtin_platform_driver(intel_pcie_driver);
> > > > > -- 
> > > > > 2.11.0
> > > > > 

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v3 2/2] dwc: PCI: intel: Intel PCIe RC controller driver
  2019-09-09  8:31           ` Andrew Murray
@ 2019-09-10  8:08             ` Dilip Kota
       [not found]             ` <22857835-1f98-b251-c94b-16b4b0a6dba2@linux.intel.com>
  1 sibling, 0 replies; 37+ messages in thread
From: Dilip Kota @ 2019-09-10  8:08 UTC (permalink / raw)
  To: Andrew Murray
  Cc: jingoohan1, gustavo.pimentel, lorenzo.pieralisi, robh,
	martin.blumenstingl, linux-pci, hch, devicetree, linux-kernel,
	andriy.shevchenko, cheol.yong.kim, chuanhua.lei, qi-ming.wu

[Got a delivery failure mail, so re-sending it]

Hi Andrew Murray,

Please find my response inline.

On 9/9/2019 4:31 PM, Andrew Murray wrote:
> On Mon, Sep 09, 2019 at 02:51:03PM +0800, Dilip Kota wrote:
>> On 9/6/2019 7:20 PM, Andrew Murray wrote:
>>> On Fri, Sep 06, 2019 at 06:58:11PM +0800, Dilip Kota wrote:
>>>> Hi Andrew Murray,
>>>>
>>>> Thanks for the review. Please find my response inline.
>>>>
>>>> On 9/5/2019 6:45 PM, Andrew Murray wrote:
>>>>> On Wed, Sep 04, 2019 at 06:10:31PM +0800, Dilip Kota wrote:
>>>>>> Add support for the PCIe RC controller on the Intel Universal
>>>>>> Gateway SoC. The PCIe controller is based on the Synopsys
>>>>>> Designware PCIe core.
>>>>>>
>>>>>> Signed-off-by: Dilip Kota <eswara.kota@linux.intel.com>
>>>>>> ---
>>>>> Hi Dilip,
>>>>>
>>>>> Thanks for the patch, initial feedback below:
>>>>>
>>>>>> changes on v3:
>>>>>> 	Rename PCIe app logic registers with PCIE_APP prefix.
>>>>>> 	PCIE_IOP_CTRL configuration is not required. Remove respective code.
>>>>>> 	Remove wrapper functions for clk enable/disable APIs.
>>>>>> 	Use platform_get_resource_byname() instead of
>>>>>> 	  devm_platform_ioremap_resource() to be similar with DWC framework.
>>>>>> 	Rename phy name to "pciephy".
>>>>>> 	Modify debug message in msi_init() callback to be more specific.
>>>>>> 	Remove map_irq() callback.
>>>>>> 	Enable the INTx interrupts at the time of PCIe initialization.
>>>>>> 	Reduce memory fragmentation by using variable "struct dw_pcie pci"
>>>>>> 	  instead of allocating memory.
>>>>>> 	Reduce the delay to 100us during endpoint initialization
>>>>>> 	  intel_pcie_ep_rst_init().
>>>>>> 	Call  dw_pcie_host_deinit() during remove() instead of directly
>>>>>> 	  calling PCIe core APIs.
>>>>>> 	Rename "intel,rst-interval" to "reset-assert-ms".
>>>>>> 	Remove unused APP logic Interrupt bit macro definitions.
>>>>>>     	Use dwc framework's dw_pcie_setup_rc() for PCIe host specific
>>>>>> 	 configuration instead of redefining the same functionality in
>>>>>> 	 the driver.
>>>>>> 	Move the whole DT parsing specific code to intel_pcie_get_resources()
>>>>>>
>>>>>>     drivers/pci/controller/dwc/Kconfig          |  13 +
>>>>>>     drivers/pci/controller/dwc/Makefile         |   1 +
>>>>>>     drivers/pci/controller/dwc/pcie-intel-axi.c | 840 ++++++++++++++++++++++++++++
>>>>>>     3 files changed, 854 insertions(+)
>>>>>>     create mode 100644 drivers/pci/controller/dwc/pcie-intel-axi.c
>>>>>>
>>>>>> diff --git a/drivers/pci/controller/dwc/Kconfig b/drivers/pci/controller/dwc/Kconfig
>>>>>> index 6ea778ae4877..e44b9b6a6390 100644
>>>>>> --- a/drivers/pci/controller/dwc/Kconfig
>>>>>> +++ b/drivers/pci/controller/dwc/Kconfig
>>>>>> @@ -82,6 +82,19 @@ config PCIE_DW_PLAT_EP
>>>>>>     	  order to enable device-specific features PCI_DW_PLAT_EP must be
>>>>>>     	  selected.
>>>>>> +config PCIE_INTEL_AXI
>>>>>> +        bool "Intel AHB/AXI PCIe host controller support"
>>>>>> +        depends on PCI_MSI
>>>>>> +        depends on PCI
>>>>> I don't think the PCI dependency is required here.
>>>>>
>>>>> I'm also not sure why PCI_MSI is required, we select PCIE_DW_HOST which
>>>>> depends on PCI_MSI_IRQ_DOMAIN which depends on PCI_MSI.
>>>> Agree, dependency on PCI and PCI_MSI can be removed. I will remove it in
>>>> next patch revision.
>>>>>> +        depends on OF
>>>>>> +        select PCIE_DW_HOST
>>>>>> +        help
>>>>>> +          Say 'Y' here to enable support for Intel AHB/AXI PCIe Host
>>>>>> +	  controller driver.
>>>>>> +	  The Intel PCIe controller is based on the Synopsys Designware
>>>>>> +	  pcie core and therefore uses the Designware core functions to
>>>>>> +	  implement the driver.
>>>>> I can see this description is similar to others in the same Kconfig,
>>>>> however I'm not sure what value a user gains by knowing implementation
>>>>> details - it's helpful to know that PCIE_INTEL_AXI is based on the
>>>>> Designware core, but is it helpful to know that the Designware core
>>>>> functions are used?
>>>>>
>>>>>> +
>>>>>>     config PCI_EXYNOS
>>>>>>     	bool "Samsung Exynos PCIe controller"
>>>>>>     	depends on SOC_EXYNOS5440 || COMPILE_TEST
>>>>>> diff --git a/drivers/pci/controller/dwc/Makefile b/drivers/pci/controller/dwc/Makefile
>>>>>> index b085dfd4fab7..46e656ebdf90 100644
>>>>>> --- a/drivers/pci/controller/dwc/Makefile
>>>>>> +++ b/drivers/pci/controller/dwc/Makefile
>>>>>> @@ -3,6 +3,7 @@ obj-$(CONFIG_PCIE_DW) += pcie-designware.o
>>>>>>     obj-$(CONFIG_PCIE_DW_HOST) += pcie-designware-host.o
>>>>>>     obj-$(CONFIG_PCIE_DW_EP) += pcie-designware-ep.o
>>>>>>     obj-$(CONFIG_PCIE_DW_PLAT) += pcie-designware-plat.o
>>>>>> +obj-$(CONFIG_PCIE_INTEL_AXI) += pcie-intel-axi.o
>>>>>>     obj-$(CONFIG_PCI_DRA7XX) += pci-dra7xx.o
>>>>>>     obj-$(CONFIG_PCI_EXYNOS) += pci-exynos.o
>>>>>>     obj-$(CONFIG_PCI_IMX6) += pci-imx6.o
>>>>>> diff --git a/drivers/pci/controller/dwc/pcie-intel-axi.c b/drivers/pci/controller/dwc/pcie-intel-axi.c
>>>>>> new file mode 100644
>>>>>> index 000000000000..75607ce03ebf
>>>>>> --- /dev/null
>>>>>> +++ b/drivers/pci/controller/dwc/pcie-intel-axi.c
>>>>>> @@ -0,0 +1,840 @@
>>>>>> +// SPDX-License-Identifier: GPL-2.0
>>>>>> +/*
>>>>>> + * PCIe host controller driver for Intel AXI PCIe Bridge
>>>>>> + *
>>>>>> + * Copyright (c) 2019 Intel Corporation.
>>>>>> + */
>>>>>> +
>>>>>> +#include <linux/bitfield.h>
>>>>>> +#include <linux/clk.h>
>>>>>> +#include <linux/gpio/consumer.h>
>>>>>> +#include <linux/interrupt.h>
>>>>>> +#include <linux/iopoll.h>
>>>>>> +#include <linux/of_irq.h>
>>>>>> +#include <linux/of_pci.h>
>>>>>> +#include <linux/of_platform.h>
>>>>>> +#include <linux/phy/phy.h>
>>>>>> +#include <linux/platform_device.h>
>>>>>> +#include <linux/reset.h>
>>>>>> +
>>>>>> +#include "../../pci.h"
>>>>> Please remove this - it isn't needed.
>>>> Ok, will remove it in next patch revision.
>>>>>> +#include "pcie-designware.h"
>>>>>> +
>>>>>> +#define PCIE_CCRID				0x8
>>>>>> +
>>>>>> +#define PCIE_LCAP				0x7C
>>>>>> +#define PCIE_LCAP_MAX_LINK_SPEED		GENMASK(3, 0)
>>>>>> +#define PCIE_LCAP_MAX_LENGTH_WIDTH		GENMASK(9, 4)
>>>>> These look like the standard PCI Link Capabilities Register,
>>>>> can you use the standard ones defined in pci_regs.h? (see
>>>>> PCI_EXP_LNKCAP_SLS_*).
>>>>>
>>>>>> +
>>>>>> +/* Link Control and Status Register */
>>>>>> +#define PCIE_LCTLSTS				0x80
>>>>>> +#define PCIE_LCTLSTS_ASPM_ENABLE		GENMASK(1, 0)
>>>>>> +#define PCIE_LCTLSTS_RCB128			BIT(3)
>>>>>> +#define PCIE_LCTLSTS_LINK_DISABLE		BIT(4)
>>>>>> +#define PCIE_LCTLSTS_COM_CLK_CFG		BIT(6)
>>>>>> +#define PCIE_LCTLSTS_HW_AW_DIS			BIT(9)
>>>>>> +#define PCIE_LCTLSTS_LINK_SPEED			GENMASK(19, 16)
>>>>>> +#define PCIE_LCTLSTS_NEGOTIATED_LINK_WIDTH	GENMASK(25, 20)
>>>>>> +#define PCIE_LCTLSTS_SLOT_CLK_CFG		BIT(28)
>>>>> These look like the standard Link Control and Link Status register, can
>>>>> you use the standard ones defined in pci_regs.h? (see PCI_EXP_LNKCTL_*
>>>>> and PCI_EXP_LNKSTA_*).
>>>>>
>>>>>> +
>>>>>> +#define PCIE_LCTLSTS2				0xA0
>>>>>> +#define PCIE_LCTLSTS2_TGT_LINK_SPEED		GENMASK(3, 0)
>>>>>> +#define PCIE_LCTLSTS2_TGT_LINK_SPEED_25GT	0x1
>>>>>> +#define PCIE_LCTLSTS2_TGT_LINK_SPEED_5GT	0x2
>>>>>> +#define PCIE_LCTLSTS2_TGT_LINK_SPEED_8GT	0x3
>>>>>> +#define PCIE_LCTLSTS2_TGT_LINK_SPEED_16GT	0x4
>>>>> And these look like the standard Link Control Register 2 (see
>>>>> PCI_EXP_LNKCTL2_*).
>>>>>
>>>>> Most of the #defines above and below look just like standard PCI defines
>>>>> (for example those found in pci_regs.h) - both in terms of their name and
>>>>> what they do. Ideally where the functionality is the same or very similar,
>>>>> then we should use the standard PCI defines in (pci_regs.h). This helps
>>>>> readability/understanding, helps to identify duplicate code and reduces
>>>>> the size of your patch.
>>>>>
>>>>> Also the capability offsets (e.g. PCIE_LCTLSTS2) are also standard. The
>>>>> offsets you define are offset by an additional 0x70. E.g.i
>>>>> PCIE_LCTLSTS2 = 0xA0, and PCI_EXP_LNKCTL2 = 0x30. Perhaps abstracting
>>>>> this offset will allow you to use the standard pci defines?
>>>>>
>>>>> I haven't looked in much detail at the remainder of the defines, but
>>>>> the same rationale should apply.
>>>> Agree, that's a good point. I will define the offset macro and use the
>>>> macros defined in pci_regs.h.
>>>>>> +#define PCIE_LCTLSTS2_HW_AUTO_DIS		BIT(5)
>>>>>> +
>>>>>> +/* Ack Frequency Register */
>>>>>> +#define PCIE_AFR				0x70C
>>>>>> +#define PCIE_AFR_FTS_NUM			GENMASK(15, 8)
>>>>>> +#define PCIE_AFR_COM_FTS_NUM			GENMASK(23, 16)
>>>>>> +#define PCIE_AFR_GEN12_FTS_NUM_DFT		(SZ_128 - 1)
>>>>>> +#define PCIE_AFR_GEN3_FTS_NUM_DFT		180
>>>>>> +#define PCIE_AFR_GEN4_FTS_NUM_DFT		196
>>>>>> +
>>>>>> +#define PCIE_PLCR_DLL_LINK_EN			BIT(5)
>>>>>> +#define PCIE_PORT_LOGIC_FTS			GENMASK(7, 0)
>>>>>> +#define PCIE_PORT_LOGIC_DFT_FTS_NUM		(SZ_128 - 1)
>>>>>> +
>>>>>> +#define PCIE_MISC_CTRL				0x8BC
>>>>>> +#define PCIE_MISC_CTRL_DBI_RO_WR_EN		BIT(0)
>>>>>> +
>>>>>> +#define PCIE_MULTI_LANE_CTRL			0x8C0
>>>>>> +#define PCIE_UPCONFIG_SUPPORT			BIT(7)
>>>>>> +#define PCIE_DIRECT_LINK_WIDTH_CHANGE		BIT(6)
>>>>>> +#define PCIE_TARGET_LINK_WIDTH			GENMASK(5, 0)
>>>>>> +
>>>>>> +/* APP RC Core Control Register */
>>>>>> +#define PCIE_APP_CCR				0x10
>>>>>> +#define PCIE_APP_CCR_LTSSM_ENABLE		BIT(0)
>>>>>> +
>>>>>> +/* PCIe Message Control */
>>>>>> +#define PCIE_APP_MSG_CR				0x30
>>>>>> +#define PCIE_APP_MSG_XMT_PM_TURNOFF		BIT(0)
>>>>>> +
>>>>>> +/* PCIe Power Management Control */
>>>>>> +#define PCIE_APP_PMC				0x44
>>>>>> +#define PCIE_APP_PMC_IN_L2			BIT(20)
>>>>>> +
>>>>>> +/* Interrupt Enable Register */
>>>>>> +#define PCIE_APP_IRNEN				0xF4
>>>>>> +#define PCIE_APP_IRNCR				0xF8
>>>>>> +#define PCIE_APP_IRN_AER_REPORT			BIT(0)
>>>>>> +#define PCIE_APP_IRN_PME			BIT(2)
>>>>>> +#define PCIE_APP_IRN_RX_VDM_MSG			BIT(4)
>>>>>> +#define PCIE_APP_IRN_PM_TO_ACK			BIT(9)
>>>>>> +#define PCIE_APP_IRN_LINK_AUTO_BW_STAT		BIT(11)
>>>>>> +#define PCIE_APP_IRN_BW_MGT			BIT(12)
>>>>>> +#define PCIE_APP_IRN_MSG_LTR			BIT(18)
>>>>>> +#define PCIE_APP_IRN_SYS_ERR_RC			BIT(29)
>>>>>> +
>>>>>> +#define PCIE_APP_INTX_OFST	12
>>>>>> +#define PCIE_APP_IRN_INT	(PCIE_APP_IRN_AER_REPORT | PCIE_APP_IRN_PME | \
>>>>>> +			PCIE_APP_IRN_RX_VDM_MSG | PCIE_APP_IRN_SYS_ERR_RC | \
>>>>>> +			PCIE_APP_IRN_PM_TO_ACK | PCIE_APP_IRN_MSG_LTR | \
>>>>>> +			PCIE_APP_IRN_BW_MGT | PCIE_APP_IRN_LINK_AUTO_BW_STAT | \
>>>>>> +			(PCIE_APP_INTX_OFST + PCI_INTERRUPT_INTA) | \
>>>>>> +			(PCIE_APP_INTX_OFST + PCI_INTERRUPT_INTB) | \
>>>>>> +			(PCIE_APP_INTX_OFST + PCI_INTERRUPT_INTC) | \
>>>>>> +			(PCIE_APP_INTX_OFST + PCI_INTERRUPT_INTD))
>>>>>> +
>>>>>> +#define BUS_IATU_OFFS		SZ_256M
>>>>>> +#define RST_INTRVL_DFT_MS	100
>>>>>> +enum {
>>>>>> +	PCIE_LINK_SPEED_AUTO = 0,
>>>>>> +	PCIE_LINK_SPEED_GEN1,
>>>>>> +	PCIE_LINK_SPEED_GEN2,
>>>>>> +	PCIE_LINK_SPEED_GEN3,
>>>>>> +	PCIE_LINK_SPEED_GEN4,
>>>>>> +};
>>>>>> +
>>>>>> +struct intel_pcie_soc {
>>>>>> +	unsigned int pcie_ver;
>>>>>> +	unsigned int pcie_atu_offset;
>>>>>> +	u32 num_viewport;
>>>>>> +};
>>>>>> +
>>>>>> +struct intel_pcie_port {
>>>>>> +	struct dw_pcie		pci;
>>>>>> +	unsigned int		id; /* Physical RC Index */
>>>>>> +	void __iomem		*app_base;
>>>>>> +	struct gpio_desc	*reset_gpio;
>>>>>> +	u32			rst_interval;
>>>>>> +	u32			max_speed;
>>>>>> +	u32			link_gen;
>>>>>> +	u32			max_width;
>>>>>> +	u32			lanes;
>>>>>> +	struct clk		*core_clk;
>>>>>> +	struct reset_control	*core_rst;
>>>>>> +	struct phy		*phy;
>>>>>> +};
>>>>>> +
>>>>>> +static void pcie_update_bits(void __iomem *base, u32 mask, u32 val, u32 ofs)
>>>>>> +{
>>>>>> +	u32 orig, tmp;
>>>>>> +
>>>>>> +	orig = readl(base + ofs);
>>>>>> +
>>>>>> +	tmp = (orig & ~mask) | (val & mask);
>>>>>> +
>>>>>> +	if (tmp != orig)
>>>>>> +		writel(tmp, base + ofs);
>>>>>> +}
>>>>>> +
>>>>>> +static inline u32 pcie_app_rd(struct intel_pcie_port *lpp, u32 ofs)
>>>>>> +{
>>>>>> +	return readl(lpp->app_base + ofs);
>>>>>> +}
>>>>>> +
>>>>>> +static inline void pcie_app_wr(struct intel_pcie_port *lpp, u32 val, u32 ofs)
>>>>>> +{
>>>>>> +	writel(val, lpp->app_base + ofs);
>>>>>> +}
>>>>>> +
>>>>>> +static void pcie_app_wr_mask(struct intel_pcie_port *lpp,
>>>>>> +			     u32 mask, u32 val, u32 ofs)
>>>>>> +{
>>>>>> +	pcie_update_bits(lpp->app_base, mask, val, ofs);
>>>>>> +}
>>>>>> +
>>>>>> +static inline u32 pcie_rc_cfg_rd(struct intel_pcie_port *lpp, u32 ofs)
>>>>>> +{
>>>>>> +	return dw_pcie_readl_dbi(&lpp->pci, ofs);
>>>>>> +}
>>>>>> +
>>>>>> +static inline void pcie_rc_cfg_wr(struct intel_pcie_port *lpp, u32 val, u32 ofs)
>>>>>> +{
>>>>>> +	dw_pcie_writel_dbi(&lpp->pci, ofs, val);
>>>>>> +}
>>>>>> +
>>>>>> +static void pcie_rc_cfg_wr_mask(struct intel_pcie_port *lpp,
>>>>>> +				u32 mask, u32 val, u32 ofs)
>>>>>> +{
>>>>>> +	pcie_update_bits(lpp->pci.dbi_base, mask, val, ofs);
>>>>>> +}
>>>>>> +
>>>>>> +static void intel_pcie_mem_iatu(struct intel_pcie_port *lpp)
>>>>>> +{
>>>>>> +	struct pcie_port *pp = &lpp->pci.pp;
>>>>>> +	phys_addr_t cpu_addr = pp->mem_base;
>>>>>> +
>>>>>> +	dw_pcie_prog_outbound_atu(&lpp->pci, PCIE_ATU_REGION_INDEX0,
>>>>>> +				  PCIE_ATU_TYPE_MEM, cpu_addr,
>>>>>> +				  pp->mem_base, pp->mem_size);
>>>>>> +}
>>>>>> +
>>>>>> +static void intel_pcie_ltssm_enable(struct intel_pcie_port *lpp)
>>>>>> +{
>>>>>> +	pcie_app_wr_mask(lpp, PCIE_APP_CCR_LTSSM_ENABLE,
>>>>>> +			 PCIE_APP_CCR_LTSSM_ENABLE, PCIE_APP_CCR);
>>>>>> +}
>>>>>> +
>>>>>> +static void intel_pcie_ltssm_disable(struct intel_pcie_port *lpp)
>>>>>> +{
>>>>>> +	pcie_app_wr_mask(lpp, PCIE_APP_CCR_LTSSM_ENABLE, 0, PCIE_APP_CCR);
>>>>>> +}
>>>>>> +
>>>>>> +static const char *pcie_link_gen_to_str(int gen)
>>>>>> +{
>>>>>> +	switch (gen) {
>>>>>> +	case PCIE_LINK_SPEED_GEN1:
>>>>>> +		return "2.5";
>>>>>> +	case PCIE_LINK_SPEED_GEN2:
>>>>>> +		return "5.0";
>>>>>> +	case PCIE_LINK_SPEED_GEN3:
>>>>>> +		return "8.0";
>>>>>> +	case PCIE_LINK_SPEED_GEN4:
>>>>>> +		return "16.0";
>>>>>> +	default:
>>>>>> +		return "???";
>>>>>> +	}
>>>>>> +}
>>>>>> +
>>>>>> +static void intel_pcie_link_setup(struct intel_pcie_port *lpp)
>>>>>> +{
>>>>>> +	u32 val;
>>>>>> +
>>>>>> +	val = pcie_rc_cfg_rd(lpp, PCIE_LCAP);
>>>>>> +	lpp->max_speed = FIELD_GET(PCIE_LCAP_MAX_LINK_SPEED, val);
>>>>>> +	lpp->max_width = FIELD_GET(PCIE_LCAP_MAX_LENGTH_WIDTH, val);
>>>>>> +
>>>>>> +	val = pcie_rc_cfg_rd(lpp, PCIE_LCTLSTS);
>>>>>> +
>>>>>> +	val &= ~(PCIE_LCTLSTS_LINK_DISABLE | PCIE_LCTLSTS_ASPM_ENABLE);
>>>>>> +	val |= (PCIE_LCTLSTS_SLOT_CLK_CFG | PCIE_LCTLSTS_COM_CLK_CFG |
>>>>>> +		PCIE_LCTLSTS_RCB128);
>>>>>> +	pcie_rc_cfg_wr(lpp, val, PCIE_LCTLSTS);
>>>>>> +}
>>>>>> +
>>>>>> +static void intel_pcie_max_speed_setup(struct intel_pcie_port *lpp)
>>>>>> +{
>>>>>> +	u32 reg, val;
>>>>>> +
>>>>>> +	reg = pcie_rc_cfg_rd(lpp, PCIE_LCTLSTS2);
>>>>>> +	switch (lpp->link_gen) {
>>>>>> +	case PCIE_LINK_SPEED_GEN1:
>>>>>> +		reg &= ~PCIE_LCTLSTS2_TGT_LINK_SPEED;
>>>>>> +		reg |= PCIE_LCTLSTS2_HW_AUTO_DIS |
>>>>>> +			PCIE_LCTLSTS2_TGT_LINK_SPEED_25GT;
>>>>>> +		break;
>>>>>> +	case PCIE_LINK_SPEED_GEN2:
>>>>>> +		reg &= ~PCIE_LCTLSTS2_TGT_LINK_SPEED;
>>>>>> +		reg |= PCIE_LCTLSTS2_HW_AUTO_DIS |
>>>>>> +			PCIE_LCTLSTS2_TGT_LINK_SPEED_5GT;
>>>>>> +		break;
>>>>>> +	case PCIE_LINK_SPEED_GEN3:
>>>>>> +		reg &= ~PCIE_LCTLSTS2_TGT_LINK_SPEED;
>>>>>> +		reg |= PCIE_LCTLSTS2_HW_AUTO_DIS |
>>>>>> +			PCIE_LCTLSTS2_TGT_LINK_SPEED_8GT;
>>>>>> +		break;
>>>>>> +	case PCIE_LINK_SPEED_GEN4:
>>>>>> +		reg &= ~PCIE_LCTLSTS2_TGT_LINK_SPEED;
>>>>>> +		reg |= PCIE_LCTLSTS2_HW_AUTO_DIS |
>>>>>> +			PCIE_LCTLSTS2_TGT_LINK_SPEED_16GT;
>>>>>> +		break;
>>>>>> +	default:
>>>>>> +		/* Use hardware capability */
>>>>>> +		val = pcie_rc_cfg_rd(lpp, PCIE_LCAP);
>>>>>> +		val = FIELD_GET(PCIE_LCAP_MAX_LINK_SPEED, val);
>>>>>> +		reg &= ~PCIE_LCTLSTS2_HW_AUTO_DIS;
>>>>>> +		reg |= val;
>>>>>> +		break;
>>>>>> +	}
>>>>>> +	pcie_rc_cfg_wr(lpp, reg, PCIE_LCTLSTS2);
>>>>>> +}
>>>>>> +
>>>>>> +static void intel_pcie_speed_change_enable(struct intel_pcie_port *lpp)
>>>>>> +{
>>>>>> +	u32 mask, val;
>>>>>> +
>>>>>> +	mask = PORT_LOGIC_SPEED_CHANGE | PCIE_PORT_LOGIC_FTS;
>>>>>> +	val = PORT_LOGIC_SPEED_CHANGE | PCIE_PORT_LOGIC_DFT_FTS_NUM;
>>>>>> +
>>>>>> +	pcie_rc_cfg_wr_mask(lpp, mask, val, PCIE_LINK_WIDTH_SPEED_CONTROL);
>>>>>> +}
>>>>>> +
>>>>>> +static void intel_pcie_speed_change_disable(struct intel_pcie_port *lpp)
>>>>>> +{
>>>>>> +	pcie_rc_cfg_wr_mask(lpp, PORT_LOGIC_SPEED_CHANGE, 0,
>>>>>> +			    PCIE_LINK_WIDTH_SPEED_CONTROL);
>>>>>> +}
>>>>>> +
>>>>>> +static void intel_pcie_max_link_width_setup(struct intel_pcie_port *lpp)
>>>>>> +{
>>>>>> +	u32 mask, val;
>>>>>> +
>>>>>> +	/* HW auto bandwidth negotiation must be enabled */
>>>>>> +	pcie_rc_cfg_wr_mask(lpp, PCIE_LCTLSTS_HW_AW_DIS, 0, PCIE_LCTLSTS);
>>>>>> +
>>>>>> +	mask = PCIE_DIRECT_LINK_WIDTH_CHANGE | PCIE_TARGET_LINK_WIDTH;
>>>>>> +	val = PCIE_DIRECT_LINK_WIDTH_CHANGE | lpp->lanes;
>>>>>> +	pcie_rc_cfg_wr_mask(lpp, mask, val, PCIE_MULTI_LANE_CTRL);
>>>>> Is this identical functionality to the writing of PCIE_PORT_LINK_CONTROL
>>>>> in dw_pcie_setup?
>>>>>
>>>>> I ask because if the user sets num-lanes in the DT, will it have the
>>>>> desired effect?
>> intel_pcie_max_link_width_setup() will be called by the sysfs attribute pcie_width_store() to change the link width on the fly.
>>> Indeed, but a user may also set num-lanes in the device tree. I'm wondering
>>> if, when set in device-tree, it will have the desired effect. Because I don't
>>> see anything similar to PCIE_LCTLSTS_HW_AW_DIS in dw_pcie_setup which is what
>>> your function does here.
>>>
>>> I guess I'm trying to avoid the suitation where num-lanes doesn't have the
>>> desired effect and the only way to set the num-lanes is throught the sysfs
>>> control.
>> I will check this and get back to you.
intel_pcie_max_link_width_setup() performs lane resizing, which is 
different from the link up/establishment done during probe. Also, the 
PCIE_LCTLSTS_HW_AW_DIS default value is 0, so it is not configured 
during probe or in dw_pcie_setup().

intel_pcie_max_link_width_setup() follows the Designware databook 
instructions for lane resizing; below is the snippet from the book.

Program the TARGET_LINK_WIDTH[5:0] field of the MULTI_LANE_CONTROL_OFF 
register.
Program the DIRECT_LINK_WIDTH_CHANGE field of MULTI_LANE_CONTROL_OFF 
register.
It is assumed that the PCIE_CAP_HW_AUTO_WIDTH_DISABLE field in the 
LINK_CONTROL_LINK_STATUS_REG register is 0.
>>>>>> +}
>>>>>> +
>>>>>> +static void intel_pcie_port_logic_setup(struct intel_pcie_port *lpp)
>>>>>> +{
>>>>>> +	u32 val, mask, fts;
>>>>>> +
>>>>>> +	switch (lpp->max_speed) {
>>>>>> +	case PCIE_LINK_SPEED_GEN1:
>>>>>> +	case PCIE_LINK_SPEED_GEN2:
>>>>>> +		fts = PCIE_AFR_GEN12_FTS_NUM_DFT;
>>>>>> +		break;
>>>>>> +	case PCIE_LINK_SPEED_GEN3:
>>>>>> +		fts = PCIE_AFR_GEN3_FTS_NUM_DFT;
>>>>>> +		break;
>>>>>> +	case PCIE_LINK_SPEED_GEN4:
>>>>>> +		fts = PCIE_AFR_GEN4_FTS_NUM_DFT;
>>>>>> +		break;
>>>>>> +	default:
>>>>>> +		fts = PCIE_AFR_GEN12_FTS_NUM_DFT;
>>>>>> +		break;
>>>>>> +	}
>>>>>> +	mask = PCIE_AFR_FTS_NUM | PCIE_AFR_COM_FTS_NUM;
>>>>>> +	val = FIELD_PREP(PCIE_AFR_FTS_NUM, fts) |
>>>>>> +	       FIELD_PREP(PCIE_AFR_COM_FTS_NUM, fts);
>>>>>> +	pcie_rc_cfg_wr_mask(lpp, mask, val, PCIE_AFR);
>>>>>> +
>>>>>> +	/* Port Link Control Register */
>>>>>> +	pcie_rc_cfg_wr_mask(lpp, PCIE_PLCR_DLL_LINK_EN,
>>>>>> +			    PCIE_PLCR_DLL_LINK_EN, PCIE_PORT_LINK_CONTROL);
>>>>>> +}
>>>>>> +
>>>>>> +static void intel_pcie_upconfig_setup(struct intel_pcie_port *lpp)
>>>>>> +{
>>>>>> +	pcie_rc_cfg_wr_mask(lpp, PCIE_UPCONFIG_SUPPORT,
>>>>>> +			    PCIE_UPCONFIG_SUPPORT, PCIE_MULTI_LANE_CTRL);
>>>>>> +}
>>>>>> +
>>>>>> +static void intel_pcie_rc_setup(struct intel_pcie_port *lpp)
>>>>>> +{
>>>>>> +	intel_pcie_ltssm_disable(lpp);
>>>>>> +	intel_pcie_link_setup(lpp);
>>>>>> +	dw_pcie_setup_rc(&lpp->pci.pp);
>>>>>> +	intel_pcie_upconfig_setup(lpp);
>>>>>> +
>>>>>> +	intel_pcie_max_speed_setup(lpp);
>>>>>> +	intel_pcie_speed_change_enable(lpp);
>>>>>> +	intel_pcie_port_logic_setup(lpp);
>>>>>> +	intel_pcie_mem_iatu(lpp);
>>>>> Doesn't dw_pcie_setup_rc do the same as intel_pcie_mem_iatu?
>>>> Thanks for pointing it. dw_pcie_setup_rc() does.
>>>> intel_pcie_mem_iatu can be removed.
>>> Excellent.
>>>
>>>>>> +}
>>>>>> +
>>>>>> +static int intel_pcie_ep_rst_init(struct intel_pcie_port *lpp)
>>>>>> +{
>>>>>> +	struct device *dev = lpp->pci.dev;
>>>>>> +	int ret;
>>>>>> +
>>>>>> +	lpp->reset_gpio = devm_gpiod_get(dev, "reset", GPIOD_OUT_LOW);
>>>>>> +	if (IS_ERR(lpp->reset_gpio)) {
>>>>>> +		ret = PTR_ERR(lpp->reset_gpio);
>>>>>> +		if (ret != -EPROBE_DEFER)
>>>>>> +			dev_err(dev, "failed to request PCIe GPIO: %d\n", ret);
>>>>>> +		return ret;
>>>>>> +	}
>>>>>> +	/* Make initial reset last for 100us */
>>>>>> +	usleep_range(100, 200);
>>>>>> +
>>>>>> +	return 0;
>>>>>> +}
>>>>>> +
>>>>>> +static void intel_pcie_core_rst_assert(struct intel_pcie_port *lpp)
>>>>>> +{
>>>>>> +	reset_control_assert(lpp->core_rst);
>>>>>> +}
>>>>>> +
>>>>>> +static void intel_pcie_core_rst_deassert(struct intel_pcie_port *lpp)
>>>>>> +{
>>>>>> +	/*
>>>>>> +	 * One micro-second delay to make sure the reset pulse
>>>>>> +	 * wide enough so that core reset is clean.
>>>>>> +	 */
>>>>>> +	udelay(1);
>>>>>> +	reset_control_deassert(lpp->core_rst);
>>>>>> +
>>>>>> +	/*
>>>>>> +	 * Some SoC core reset also reset PHY, more delay needed
>>>>>> +	 * to make sure the reset process is done.
>>>>>> +	 */
>>>>>> +	usleep_range(1000, 2000);
>>>>>> +}
>>>>>> +
>>>>>> +static void intel_pcie_device_rst_assert(struct intel_pcie_port *lpp)
>>>>>> +{
>>>>>> +	gpiod_set_value_cansleep(lpp->reset_gpio, 1);
>>>>>> +}
>>>>>> +
>>>>>> +static void intel_pcie_device_rst_deassert(struct intel_pcie_port *lpp)
>>>>>> +{
>>>>>> +	msleep(lpp->rst_interval);
>>>>>> +	gpiod_set_value_cansleep(lpp->reset_gpio, 0);
>>>>>> +}
>>>>>> +
>>>>>> +static int intel_pcie_app_logic_setup(struct intel_pcie_port *lpp)
>>>>>> +{
>>>>>> +	intel_pcie_device_rst_deassert(lpp);
>>>>>> +	intel_pcie_ltssm_enable(lpp);
>>>>>> +
>>>>>> +	return dw_pcie_wait_for_link(&lpp->pci);
>>>>>> +}
>>>>>> +
>>>>>> +static irqreturn_t intel_pcie_core_isr(int irq, void *arg)
>>>>>> +{
>>>>>> +	struct intel_pcie_port *lpp = arg;
>>>>>> +	u32 val, reg;
>>>>>> +
>>>>>> +	reg = pcie_app_rd(lpp, PCIE_APP_IRNCR);
>>>>>> +	val = reg & PCIE_APP_IRN_INT;
>>>>>> +
>>>>>> +	pcie_app_wr(lpp, val, PCIE_APP_IRNCR);
>>>>>> +
>>>>>> +	trace_printk("PCIe misc interrupt status 0x%x\n", reg);
>>>>>> +	return IRQ_HANDLED;
>>>>>> +}
>>>>> Why do we bother handling this interrupt?
>>>> This helps during debugging.
>>> I think it should be removed. It adds very little value to most users.
>>>
>>> Most users won't have access to the datasheets to debug this properly, and
>>> in any case if they could, then they would be competent to add an interrupt
>>> handler themselves.
>> IMO, having this will help to get the basic hardware interrupt status during
>> debugging.
>> And the user can also enhance the handler as per the need.
>> I think keeping it is more beneficial than removing it.
>> Please let me know your view.
> I'd much prefer to remove this.
Sure, I am OK to remove it.
>
>>>>>> +
>>>>>> +static int intel_pcie_setup_irq(struct intel_pcie_port *lpp)
>>>>>> +{
>>>>>> +	struct device *dev = lpp->pci.dev;
>>>>>> +	struct platform_device *pdev;
>>>>>> +	char *irq_name;
>>>>>> +	int irq, ret;
>>>>>> +
>>>>>> +	pdev = to_platform_device(dev);
>>>>>> +	irq = platform_get_irq(pdev, 0);
>>>>>> +	if (irq < 0) {
>>>>>> +		dev_err(dev, "missing sys integrated irq resource\n");
>>>>>> +		return irq;
>>>>>> +	}
>>>>>> +
>>>>>> +	irq_name = devm_kasprintf(dev, GFP_KERNEL, "pcie_misc%d", lpp->id);
>>>>>> +	if (!irq_name) {
>>>>>> +		dev_err(dev, "failed to alloc irq name\n");
>>>>>> +		return -ENOMEM;
>>>>>> +	}
>>>>>> +
>>>>>> +	ret = devm_request_irq(dev, irq, intel_pcie_core_isr,
>>>>>> +			       IRQF_SHARED, irq_name, lpp);
>>>>>> +	if (ret) {
>>>>>> +		dev_err(dev, "request irq %d failed\n", irq);
>>>>>> +		return ret;
>>>>>> +	}
>>>>>> +	/* Enable integrated interrupts */
>>>>>> +	pcie_app_wr_mask(lpp, PCIE_APP_IRN_INT,
>>>>>> +			 PCIE_APP_IRN_INT, PCIE_APP_IRNEN);
>>>>>> +
>>>>>> +	return ret;
>>>>>> +}
>>>>>> +
>>>>>> +static void intel_pcie_core_irq_disable(struct intel_pcie_port *lpp)
>>>>>> +{
>>>>>> +	pcie_app_wr(lpp, 0, PCIE_APP_IRNEN);
>>>>>> +	pcie_app_wr(lpp, PCIE_APP_IRN_INT,  PCIE_APP_IRNCR);
>>>>>> +}
>>>>>> +
>>>>>> +static int intel_pcie_get_resources(struct platform_device *pdev)
>>>>>> +{
>>>>>> +	struct intel_pcie_port *lpp;
>>>>>> +	struct resource *res;
>>>>>> +	struct dw_pcie *pci;
>>>>>> +	struct device *dev;
>>>>>> +	int ret;
>>>>>> +
>>>>>> +	lpp = platform_get_drvdata(pdev);
>>>>>> +	pci = &lpp->pci;
>>>>>> +	dev = pci->dev;
>>>>>> +
>>>>>> +	ret = device_property_read_u32(dev, "linux,pci-domain", &lpp->id);
>>>>>> +	if (ret) {
>>>>>> +		dev_err(dev, "failed to get domain id, errno %d\n", ret);
>>>>>> +		return ret;
>>>>>> +	}
>>>>>> +
>>>>>> +	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi");
>>>>>> +	if (!res)
>>>>>> +		return -ENXIO;
>>>>>> +
>>>>>> +	pci->dbi_base = devm_ioremap_resource(dev, res);
>>>>>> +	if (IS_ERR(pci->dbi_base))
>>>>>> +		return PTR_ERR(pci->dbi_base);
>>>>>> +
>>>>>> +	lpp->core_clk = devm_clk_get(dev, NULL);
>>>>>> +	if (IS_ERR(lpp->core_clk)) {
>>>>>> +		ret = PTR_ERR(lpp->core_clk);
>>>>>> +		if (ret != -EPROBE_DEFER)
>>>>>> +			dev_err(dev, "failed to get clks: %d\n", ret);
>>>>>> +		return ret;
>>>>>> +	}
>>>>>> +
>>>>>> +	lpp->core_rst = devm_reset_control_get(dev, NULL);
>>>>>> +	if (IS_ERR(lpp->core_rst)) {
>>>>>> +		ret = PTR_ERR(lpp->core_rst);
>>>>>> +		if (ret != -EPROBE_DEFER)
>>>>>> +			dev_err(dev, "failed to get resets: %d\n", ret);
>>>>>> +		return ret;
>>>>>> +	}
>>>>>> +
>>>>>> +	ret = device_property_match_string(dev, "device_type", "pci");
>>>>>> +	if (ret) {
>>>>>> +		dev_err(dev, "failed to find pci device type: %d\n", ret);
>>>>>> +		return ret;
>>>>>> +	}
>>>>>> +
>>>>>> +	if (device_property_read_u32(dev, "reset-assert-ms",
>>>>>> +				     &lpp->rst_interval))
>>>>>> +		lpp->rst_interval = RST_INTRVL_DFT_MS;
>>>>>> +
>>>>>> +	if (device_property_read_u32(dev, "max-link-speed", &lpp->link_gen))
>>>>>> +		lpp->link_gen = 0; /* Fallback to auto */
>>>>> Is it possible to use of_pci_get_max_link_speed here instead?
>>>> Thanks for pointing it. of_pci_get_max_link_speed() can be used here. I will
>>>> update it in the next patch revision.
I just remembered: earlier we were using of_pci_get_max_link_speed() 
itself. As per reviewer comments, I changed it to 
device_property_read_u32() to maintain symmetry in parsing device tree 
properties (from the device node).
Let me know your view.
>>>>>> +
>>>>>> +	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "app");
>>>>>> +	if (!res)
>>>>>> +		return -ENXIO;
>>>>>> +
>>>>>> +	lpp->app_base = devm_ioremap_resource(dev, res);
>>>>>> +	if (IS_ERR(lpp->app_base))
>>>>>> +		return PTR_ERR(lpp->app_base);
>>>>>> +
>>>>>> +	ret = intel_pcie_ep_rst_init(lpp);
>>>>>> +	if (ret)
>>>>>> +		return ret;
>>>>> Given that this is called from a function '..._get_resources' I don't think
>>>>> we should be resetting anything here.
>>>> Agree. I will move it out of get_resources().
>>>>>> +
>>>>>> +	lpp->phy = devm_phy_get(dev, "pciephy");
>>>>>> +	if (IS_ERR(lpp->phy)) {
>>>>>> +		ret = PTR_ERR(lpp->phy);
>>>>>> +		if (ret != -EPROBE_DEFER)
>>>>>> +			dev_err(dev, "couldn't get pcie-phy: %d\n", ret);
>>>>>> +		return ret;
>>>>>> +	}
>>>>>> +	return 0;
>>>>>> +}
>>>>>> +
>>>>>> +static void intel_pcie_deinit_phy(struct intel_pcie_port *lpp)
>>>>>> +{
>>>>>> +	phy_exit(lpp->phy);
>>>>>> +}
>>>>>> +
>>>>>> +static int intel_pcie_wait_l2(struct intel_pcie_port *lpp)
>>>>>> +{
>>>>>> +	u32 value;
>>>>>> +	int ret;
>>>>>> +
>>>>>> +	if (lpp->max_speed < PCIE_LINK_SPEED_GEN3)
>>>>>> +		return 0;
>>>>>> +
>>>>>> +	/* Send PME_TURN_OFF message */
>>>>>> +	pcie_app_wr_mask(lpp, PCIE_APP_MSG_XMT_PM_TURNOFF,
>>>>>> +			 PCIE_APP_MSG_XMT_PM_TURNOFF, PCIE_APP_MSG_CR);
>>>>>> +
>>>>>> +	/* Read PMC status and wait for falling into L2 link state */
>>>>>> +	ret = readl_poll_timeout(lpp->app_base + PCIE_APP_PMC, value,
>>>>>> +				 (value & PCIE_APP_PMC_IN_L2), 20,
>>>>>> +				 jiffies_to_usecs(5 * HZ));
>>>>>> +	if (ret)
>>>>>> +		dev_err(lpp->pci.dev, "PCIe link enter L2 timeout!\n");
>>>>>> +
>>>>>> +	return ret;
>>>>>> +}
>>>>>> +
>>>>>> +static void intel_pcie_turn_off(struct intel_pcie_port *lpp)
>>>>>> +{
>>>>>> +	if (dw_pcie_link_up(&lpp->pci))
>>>>>> +		intel_pcie_wait_l2(lpp);
>>>>>> +
>>>>>> +	/* Put EP in reset state */
>>>>> EP?
>>>> End point device. I will update it.
>>> Is this not a host bridge controller?
>> It is PERST#, signals hardware reset to the End point .
>>          /* Put EP in reset state */
>>          intel_pcie_device_rst_assert(lpp);
> OK.
>
>>>>>> +	intel_pcie_device_rst_assert(lpp);
>>>>>> +	pcie_rc_cfg_wr_mask(lpp, PCI_COMMAND_MEMORY, 0, PCI_COMMAND);
>>>>>> +}
>>>>>> +
>>>>>> +static int intel_pcie_host_setup(struct intel_pcie_port *lpp)
>>>>>> +{
>>>>>> +	int ret;
>>>>>> +
>>>>>> +	intel_pcie_core_rst_assert(lpp);
>>>>>> +	intel_pcie_device_rst_assert(lpp);
>>>>>> +
>>>>>> +	ret = phy_init(lpp->phy);
>>>>>> +	if (ret)
>>>>>> +		return ret;
>>>>>> +
>>>>>> +	intel_pcie_core_rst_deassert(lpp);
>>>>>> +
>>>>>> +	ret = clk_prepare_enable(lpp->core_clk);
>>>>>> +	if (ret) {
>>>>>> +		dev_err(lpp->pci.dev, "Core clock enable failed: %d\n", ret);
>>>>>> +		goto clk_err;
>>>>>> +	}
>>>>>> +
>>>>>> +	intel_pcie_rc_setup(lpp);
>>>>>> +	ret = intel_pcie_app_logic_setup(lpp);
>>>>>> +	if (ret)
>>>>>> +		goto app_init_err;
>>>>>> +
>>>>>> +	ret = intel_pcie_setup_irq(lpp);
>>>>>> +	if (!ret)
>>>>>> +		return ret;
>>>>>> +
>>>>>> +	intel_pcie_turn_off(lpp);
>>>>>> +app_init_err:
>>>>>> +	clk_disable_unprepare(lpp->core_clk);
>>>>>> +clk_err:
>>>>>> +	intel_pcie_core_rst_assert(lpp);
>>>>>> +	intel_pcie_deinit_phy(lpp);
>>>>>> +	return ret;
>>>>>> +}
>>>>>> +
>>>>>> +static ssize_t
>>>>>> +pcie_link_status_show(struct device *dev, struct device_attribute *attr,
>>>>>> +		      char *buf)
>>>>>> +{
>>>>>> +	u32 reg, width, gen;
>>>>>> +	struct intel_pcie_port *lpp;
>>>>>> +
>>>>>> +	lpp = dev_get_drvdata(dev);
>>>>>> +
>>>>>> +	reg = pcie_rc_cfg_rd(lpp, PCIE_LCTLSTS);
>>>>>> +	width = FIELD_GET(PCIE_LCTLSTS_NEGOTIATED_LINK_WIDTH, reg);
>>>>>> +	gen = FIELD_GET(PCIE_LCTLSTS_LINK_SPEED, reg);
>>>>>> +	if (gen > lpp->max_speed)
>>>>>> +		return -EINVAL;
>>>>>> +
>>>>>> +	return sprintf(buf, "Port %2u Width x%u Speed %s GT/s\n", lpp->id,
>>>>>> +		       width, pcie_link_gen_to_str(gen));
>>>>>> +}
>>>>>> +static DEVICE_ATTR_RO(pcie_link_status);
>>>>>> +
>>>>>> +static ssize_t pcie_speed_store(struct device *dev,
>>>>>> +				struct device_attribute *attr,
>>>>>> +				const char *buf, size_t len)
>>>>>> +{
>>>>>> +	struct intel_pcie_port *lpp;
>>>>>> +	unsigned long val;
>>>>>> +	int ret;
>>>>>> +
>>>>>> +	lpp = dev_get_drvdata(dev);
>>>>>> +
>>>>>> +	ret = kstrtoul(buf, 10, &val);
>>>>>> +	if (ret)
>>>>>> +		return ret;
>>>>>> +
>>>>>> +	if (val > lpp->max_speed)
>>>>>> +		return -EINVAL;
>>>>>> +
>>>>>> +	lpp->link_gen = val;
>>>>>> +	intel_pcie_max_speed_setup(lpp);
>>>>>> +	intel_pcie_speed_change_disable(lpp);
>>>>>> +	intel_pcie_speed_change_enable(lpp);
>>>>>> +
>>>>>> +	return len;
>>>>>> +}
>>>>>> +static DEVICE_ATTR_WO(pcie_speed);
>>>>>> +
>>>>>> +/*
>>>>>> + * Link width change on the fly is not always successful.
>>>>>> + * It also depends on the partner.
>>>>>> + */
>>>>>> +static ssize_t pcie_width_store(struct device *dev,
>>>>>> +				struct device_attribute *attr,
>>>>>> +				const char *buf, size_t len)
>>>>>> +{
>>>>>> +	struct intel_pcie_port *lpp;
>>>>>> +	unsigned long val;
>>>>>> +
>>>>>> +	lpp = dev_get_drvdata(dev);
>>>>>> +
>>>>>> +	if (kstrtoul(buf, 10, &val))
>>>>>> +		return -EINVAL;
>>>>>> +
>>>>>> +	if (val > lpp->max_width)
>>>>>> +		return -EINVAL;
>>>>>> +
>>>>>> +	lpp->lanes = val;
>>>>>> +	intel_pcie_max_link_width_setup(lpp);
>>>>>> +
>>>>>> +	return len;
>>>>>> +}
>>>>>> +static DEVICE_ATTR_WO(pcie_width);
>>>>> You mentioned that a use-case for changing width/speed on the fly was to
>>>>> control power consumption (and this also helps debugging issues). As I
>>>>> understand there is no current support for this in the kernel - yet it is
>>>>> something that would provide value.
>>>>>
>>>>> I haven't looked in much detail, however as I understand the PCI spec
>>>>> allows an upstream partner to change the link speed and retrain. (I'm not
>>>>> sure about link width). Given that we already have
>>>>> [current,max]_link_[speed,width] in sysfs for each PCI device, it would
>>>>> seem natural to extend this to allow for writing a max width or speed.
>>>>>
>>>>> So ideally this type of thing would be moved to the core or at least in
>>>>> the dwc driver. This way the benefits can be applied to more devices on
>>>>> larger PCI fabrics - Though perhaps others here will have a different view
>>>>> and I'm keen to hear them.
>>>>>
>>>>> I'm keen to limit the differences between the DWC controller drivers and
>>>>> unify common code - thus it would be helpful to have a justification as to
>>>>> why this is only relevant for this controller.
>>>>>
>>>>> For user-space only control, it is possible to achieve what you have here
>>>>> with userspace utilities (something like setpci) (assuming the standard
>>>>> looking registers you currently access are exposed in the normal config
>>>>> space way - through PCIe capabilities).
>>>>>
>>>>> My suggestion would be to drop these changes and later add something that
>>>>> can benefit more devices. In any case if these changes stay within this
>>>>> driver then it would be helpful to move them to a separate patch.
>>>> Sure, I will submit these in a separate patch.
>>> Please ensure that all supporting macros, functions and defines go with that
>>> other patch as well.
>> Sure.
>>>>>> +
>>>>>> +static struct attribute *pcie_cfg_attrs[] = {
>>>>>> +	&dev_attr_pcie_link_status.attr,
>>>>>> +	&dev_attr_pcie_speed.attr,
>>>>>> +	&dev_attr_pcie_width.attr,
>>>>>> +	NULL,
>>>>>> +};
>>>>>> +ATTRIBUTE_GROUPS(pcie_cfg);
>>>>>> +
>>>>>> +static int intel_pcie_sysfs_init(struct intel_pcie_port *lpp)
>>>>>> +{
>>>>>> +	return devm_device_add_groups(lpp->pci.dev, pcie_cfg_groups);
>>>>>> +}
>>>>>> +
>>>>>> +static void __intel_pcie_remove(struct intel_pcie_port *lpp)
>>>>>> +{
>>>>>> +	pcie_rc_cfg_wr_mask(lpp, PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER,
>>>>>> +			    0, PCI_COMMAND);
>>>>>> +	intel_pcie_core_irq_disable(lpp);
>>>>>> +	intel_pcie_turn_off(lpp);
>>>>>> +	clk_disable_unprepare(lpp->core_clk);
>>>>>> +	intel_pcie_core_rst_assert(lpp);
>>>>>> +	intel_pcie_deinit_phy(lpp);
>>>>>> +}
>>>>>> +
>>>>>> +static int intel_pcie_remove(struct platform_device *pdev)
>>>>>> +{
>>>>>> +	struct intel_pcie_port *lpp = platform_get_drvdata(pdev);
>>>>>> +	struct pcie_port *pp = &lpp->pci.pp;
>>>>>> +
>>>>>> +	dw_pcie_host_deinit(pp);
>>>>>> +	__intel_pcie_remove(lpp);
>>>>>> +
>>>>>> +	return 0;
>>>>>> +}
>>>>>> +
>>>>>> +static int __maybe_unused intel_pcie_suspend_noirq(struct device *dev)
>>>>>> +{
>>>>>> +	struct intel_pcie_port *lpp = dev_get_drvdata(dev);
>>>>>> +	int ret;
>>>>>> +
>>>>>> +	intel_pcie_core_irq_disable(lpp);
>>>>>> +	ret = intel_pcie_wait_l2(lpp);
>>>>>> +	if (ret)
>>>>>> +		return ret;
>>>>>> +
>>>>>> +	intel_pcie_deinit_phy(lpp);
>>>>>> +	clk_disable_unprepare(lpp->core_clk);
>>>>>> +	return ret;
>>>>>> +}
>>>>>> +
>>>>>> +static int __maybe_unused intel_pcie_resume_noirq(struct device *dev)
>>>>>> +{
>>>>>> +	struct intel_pcie_port *lpp = dev_get_drvdata(dev);
>>>>>> +
>>>>>> +	return intel_pcie_host_setup(lpp);
>>>>>> +}
>>>>>> +
>>>>>> +static int intel_pcie_rc_init(struct pcie_port *pp)
>>>>>> +{
>>>>>> +	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
>>>>>> +	struct intel_pcie_port *lpp = dev_get_drvdata(pci->dev);
>>>>>> +	int ret;
>>>>>> +
>>>>>> +	/* RC/host initialization */
>>>>>> +	ret = intel_pcie_host_setup(lpp);
>>>>>> +	if (ret)
>>>>>> +		return ret;
>>>>> Insert new line here.
>>>> Ok.
>>>>>> +	ret = intel_pcie_sysfs_init(lpp);
>>>>>> +	if (ret)
>>>>>> +		__intel_pcie_remove(lpp);
>>>>>> +	return ret;
>>>>>> +}
>>>>>> +
>>>>>> +int intel_pcie_msi_init(struct pcie_port *pp)
>>>>>> +{
>>>>>> +	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
>>>>>> +
>>>>>> +	dev_dbg(pci->dev, "PCIe MSI/MSIx is handled by MSI in x86 processor\n");
>>>>> What about other processors? (Noting that this driver doesn't depend on
>>>>> any specific ARCH in the KConfig).
>>>> Agreed. I will mark the dependency in Kconfig.
>>> OK, please also see how other drivers use the COMPILE_TEST Kconfig option.
>> Ok sure.
>>> I'd suggest that the dev_dbg just becomes a code comment.
Ok
>>>
>>>>>> +	return 0;
>>>>>> +}
>>>>>> +
>>>>>> +u64 intel_pcie_cpu_addr(struct dw_pcie *pcie, u64 cpu_addr)
>>>>>> +{
>>>>>> +	return cpu_addr + BUS_IATU_OFFS;
>>>>>> +}
>>>>>> +
>>>>>> +static const struct dw_pcie_ops intel_pcie_ops = {
>>>>>> +	.cpu_addr_fixup = intel_pcie_cpu_addr,
>>>>>> +};
>>>>>> +
>>>>>> +static const struct dw_pcie_host_ops intel_pcie_dw_ops = {
>>>>>> +	.host_init =		intel_pcie_rc_init,
>>>>>> +	.msi_host_init =	intel_pcie_msi_init,
>>>>>> +};
>>>>>> +
>>>>>> +static const struct intel_pcie_soc pcie_data = {
>>>>>> +	.pcie_ver =		0x520A,
>>>>>> +	.pcie_atu_offset =	0xC0000,
>>>>>> +	.num_viewport =		3,
>>>>>> +};
>>>>>> +
>>>>>> +static int intel_pcie_probe(struct platform_device *pdev)
>>>>>> +{
>>>>>> +	const struct intel_pcie_soc *data;
>>>>>> +	struct device *dev = &pdev->dev;
>>>>>> +	struct intel_pcie_port *lpp;
>>>>>> +	struct pcie_port *pp;
>>>>>> +	struct dw_pcie *pci;
>>>>>> +	int ret;
>>>>>> +
>>>>>> +	lpp = devm_kzalloc(dev, sizeof(*lpp), GFP_KERNEL);
>>>>>> +	if (!lpp)
>>>>>> +		return -ENOMEM;
>>>>>> +
>>>>>> +	platform_set_drvdata(pdev, lpp);
>>>>>> +	pci = &lpp->pci;
>>>>>> +	pci->dev = dev;
>>>>>> +	pp = &pci->pp;
>>>>>> +
>>>>>> +	ret = intel_pcie_get_resources(pdev);
>>>>>> +	if (ret)
>>>>>> +		return ret;
>>>>>> +
>>>>>> +	data = device_get_match_data(dev);
>>>>>> +	pci->ops = &intel_pcie_ops;
>>>>>> +	pci->version = data->pcie_ver;
>>>>>> +	pci->atu_base = pci->dbi_base + data->pcie_atu_offset;
>>>>>> +	pp->ops = &intel_pcie_dw_ops;
>>>>>> +
>>>>>> +	ret = dw_pcie_host_init(pp);
>>>>>> +	if (ret) {
>>>>>> +		dev_err(dev, "cannot initialize host\n");
>>>>>> +		return ret;
>>>>>> +	}
>>>>> Add a new line after the closing brace.
>>>> Ok
>>>>>> +	/* Intel PCIe doesn't configure the IO region, so configure
>>>>>> +	 * the viewport not to access the IO region during register
>>>>>> +	 * read/write operations.
>>>>>> +	 */
>>>>>> +	pci->num_viewport = data->num_viewport;
>>>>>> +	dev_info(dev,
>>>>>> +		 "Intel AXI PCIe Root Complex Port %d Init Done\n", lpp->id);
>>>>>> +	return ret;
>>>>>> +}
>>>>>> +
>>>>>> +static const struct dev_pm_ops intel_pcie_pm_ops = {
>>>>>> +	SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(intel_pcie_suspend_noirq,
>>>>>> +				      intel_pcie_resume_noirq)
>>>>>> +};
>>>>>> +
>>>>>> +static const struct of_device_id of_intel_pcie_match[] = {
>>>>>> +	{ .compatible = "intel,lgm-pcie", .data = &pcie_data },
>>>>>> +	{}
>>>>>> +};
>>>>>> +
>>>>>> +static struct platform_driver intel_pcie_driver = {
>>>>>> +	.probe = intel_pcie_probe,
>>>>>> +	.remove = intel_pcie_remove,
>>>>>> +	.driver = {
>>>>>> +		.name = "intel-lgm-pcie",
>>>>> Is there a reason why we use intel-lgm-pcie here and pcie-intel-axi
>>>>> elsewhere? What does lgm mean?
>>>> lgm is the name of the Intel SoC. I will rename it to pcie-intel-axi to be
>>>> generic.
>>> I'm keen to ensure that it is consistently named. I've seen other comments
>>> regarding what the name should be - I don't have a strong opinion though
>>> I do think that *-axi may be too generic.
>> Ok, I will check and get back to you on this.

This driver is for the PCIe RC controller on Intel Gateway SoCs. So how
about naming it "pcie-intel-gw": pcie-intel-gw.c, and PCIE_INTEL_GW in
Kconfig?
Let me know your view.

Regards,
Dilip

>>
>> Regards,
> Thanks,
>
> Andrew Murray
>
>> Dilip
>>
>>> Thanks,
>>>
>>> Andrew Murray
>>>
>>>>> Thanks,
>>>>>
>>>>> Andrew Murray
>>>>>
>>>>>> +		.of_match_table = of_intel_pcie_match,
>>>>>> +		.pm = &intel_pcie_pm_ops,
>>>>>> +	},
>>>>>> +};
>>>>>> +builtin_platform_driver(intel_pcie_driver);
>>>>>> -- 
>>>>>> 2.11.0
>>>>>>

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v3 2/2] dwc: PCI: intel: Intel PCIe RC controller driver
       [not found]             ` <22857835-1f98-b251-c94b-16b4b0a6dba2@linux.intel.com>
@ 2019-09-11 10:30               ` Andrew Murray
  2019-09-12  6:58                 ` Dilip Kota
  0 siblings, 1 reply; 37+ messages in thread
From: Andrew Murray @ 2019-09-11 10:30 UTC (permalink / raw)
  To: Dilip Kota
  Cc: jingoohan1, gustavo.pimentel, lorenzo.pieralisi, robh,
	martin.blumenstingl, linux-pci, hch, devicetree, linux-kernel,
	andriy.shevchenko, cheol.yong.kim, chuanhua.lei, qi-ming.wu

On Tue, Sep 10, 2019 at 03:46:17PM +0800, Dilip Kota wrote:
> Hi Andrew Murray,
> 
> Please find my response inline.
> 
> On 9/9/2019 4:31 PM, Andrew Murray wrote:
> > On Mon, Sep 09, 2019 at 02:51:03PM +0800, Dilip Kota wrote:
> > > On 9/6/2019 7:20 PM, Andrew Murray wrote:
> > > > On Fri, Sep 06, 2019 at 06:58:11PM +0800, Dilip Kota wrote:
> > > > > Hi Andrew Murray,
> > > > > 
> > > > > Thanks for the review. Please find my response inline.
> > > > > 
> > > > > On 9/5/2019 6:45 PM, Andrew Murray wrote:
> > > > > > On Wed, Sep 04, 2019 at 06:10:31PM +0800, Dilip Kota wrote:
> > > > > > > Add support for the PCIe RC controller on Intel Universal
> > > > > > > Gateway SoC. The PCIe controller is based on the Synopsys
> > > > > > > DesignWare PCIe core.
> > > > > > > 
> > > > > > > Signed-off-by: Dilip Kota <eswara.kota@linux.intel.com>
> > > > > > > ---
> > > > > > Hi Dilip,
> > > > > > 
> > > > > > Thanks for the patch, initial feedback below:
> > > > > > 
> > > > > > > changes on v3:
> > > > > > > 	Rename PCIe app logic registers with PCIE_APP prefix.
> > > > > > > 	PCIE_IOP_CTRL configuration is not required. Remove respective code.
> > > > > > > 	Remove wrapper functions for clk enable/disable APIs.
> > > > > > > 	Use platform_get_resource_byname() instead of
> > > > > > > 	  devm_platform_ioremap_resource() to be similar with DWC framework.
> > > > > > > 	Rename phy name to "pciephy".
> > > > > > > 	Modify debug message in msi_init() callback to be more specific.
> > > > > > > 	Remove map_irq() callback.
> > > > > > > 	Enable the INTx interrupts at the time of PCIe initialization.
> > > > > > > 	Reduce memory fragmentation by using variable "struct dw_pcie pci"
> > > > > > > 	  instead of allocating memory.
> > > > > > > 	Reduce the delay to 100us during endpoint initialization
> > > > > > > 	  intel_pcie_ep_rst_init().
> > > > > > > 	Call  dw_pcie_host_deinit() during remove() instead of directly
> > > > > > > 	  calling PCIe core APIs.
> > > > > > > 	Rename "intel,rst-interval" to "reset-assert-ms".
> > > > > > > 	Remove unused APP logic Interrupt bit macro definitions.
> > > > > > >     	Use dwc framework's dw_pcie_setup_rc() for PCIe host specific
> > > > > > > 	 configuration instead of redefining the same functionality in
> > > > > > > 	 the driver.
> > > > > > > 	Move the whole DT parsing specific code to intel_pcie_get_resources()
> > > > > > > 
> > > > > > >     drivers/pci/controller/dwc/Kconfig          |  13 +
> > > > > > >     drivers/pci/controller/dwc/Makefile         |   1 +
> > > > > > >     drivers/pci/controller/dwc/pcie-intel-axi.c | 840 ++++++++++++++++++++++++++++
> > > > > > >     3 files changed, 854 insertions(+)
> > > > > > >     create mode 100644 drivers/pci/controller/dwc/pcie-intel-axi.c
> > > > > > > 
> > > > > > > diff --git a/drivers/pci/controller/dwc/Kconfig b/drivers/pci/controller/dwc/Kconfig
> > > > > > > index 6ea778ae4877..e44b9b6a6390 100644
> > > > > > > --- a/drivers/pci/controller/dwc/Kconfig
> > > > > > > +++ b/drivers/pci/controller/dwc/Kconfig
> > > > > > > @@ -82,6 +82,19 @@ config PCIE_DW_PLAT_EP
> > > > > > >     	  order to enable device-specific features PCI_DW_PLAT_EP must be
> > > > > > >     	  selected.
> > > > > > > +config PCIE_INTEL_AXI
> > > > > > > +        bool "Intel AHB/AXI PCIe host controller support"
> > > > > > > +        depends on PCI_MSI
> > > > > > > +        depends on PCI
> > > > > > I don't think the PCI dependency is required here.
> > > > > > 
> > > > > > I'm also not sure why PCI_MSI is required, we select PCIE_DW_HOST which
> > > > > > depends on PCI_MSI_IRQ_DOMAIN which depends on PCI_MSI.
> > > > > Agreed, the dependency on PCI and PCI_MSI can be removed. I will remove
> > > > > it in the next patch revision.
> > > > > > > +        depends on OF
> > > > > > > +        select PCIE_DW_HOST
> > > > > > > +        help
> > > > > > > +          Say 'Y' here to enable support for Intel AHB/AXI PCIe Host
> > > > > > > +	  controller driver.
> > > > > > > +	  The Intel PCIe controller is based on the Synopsys Designware
> > > > > > > +	  pcie core and therefore uses the Designware core functions to
> > > > > > > +	  implement the driver.
> > > > > > I can see this description is similar to others in the same Kconfig,
> > > > > > however I'm not sure what value a user gains by knowing implementation
> > > > > > details - it's helpful to know that PCIE_INTEL_AXI is based on the
> > > > > > Designware core, but is it helpful to know that the Designware core
> > > > > > functions are used?
> > > > > > 
> > > > > > > +
> > > > > > >     config PCI_EXYNOS
> > > > > > >     	bool "Samsung Exynos PCIe controller"
> > > > > > >     	depends on SOC_EXYNOS5440 || COMPILE_TEST
> > > > > > > diff --git a/drivers/pci/controller/dwc/Makefile b/drivers/pci/controller/dwc/Makefile
> > > > > > > index b085dfd4fab7..46e656ebdf90 100644
> > > > > > > --- a/drivers/pci/controller/dwc/Makefile
> > > > > > > +++ b/drivers/pci/controller/dwc/Makefile
> > > > > > > @@ -3,6 +3,7 @@ obj-$(CONFIG_PCIE_DW) += pcie-designware.o
> > > > > > >     obj-$(CONFIG_PCIE_DW_HOST) += pcie-designware-host.o
> > > > > > >     obj-$(CONFIG_PCIE_DW_EP) += pcie-designware-ep.o
> > > > > > >     obj-$(CONFIG_PCIE_DW_PLAT) += pcie-designware-plat.o
> > > > > > > +obj-$(CONFIG_PCIE_INTEL_AXI) += pcie-intel-axi.o
> > > > > > >     obj-$(CONFIG_PCI_DRA7XX) += pci-dra7xx.o
> > > > > > >     obj-$(CONFIG_PCI_EXYNOS) += pci-exynos.o
> > > > > > >     obj-$(CONFIG_PCI_IMX6) += pci-imx6.o
> > > > > > > diff --git a/drivers/pci/controller/dwc/pcie-intel-axi.c b/drivers/pci/controller/dwc/pcie-intel-axi.c
> > > > > > > new file mode 100644
> > > > > > > index 000000000000..75607ce03ebf
> > > > > > > --- /dev/null
> > > > > > > +++ b/drivers/pci/controller/dwc/pcie-intel-axi.c
> > > > > > > @@ -0,0 +1,840 @@
> > > > > > > +// SPDX-License-Identifier: GPL-2.0
> > > > > > > +/*
> > > > > > > + * PCIe host controller driver for Intel AXI PCIe Bridge
> > > > > > > + *
> > > > > > > + * Copyright (c) 2019 Intel Corporation.
> > > > > > > + */
> > > > > > > +
> > > > > > > +#include <linux/bitfield.h>
> > > > > > > +#include <linux/clk.h>
> > > > > > > +#include <linux/gpio/consumer.h>
> > > > > > > +#include <linux/interrupt.h>
> > > > > > > +#include <linux/iopoll.h>
> > > > > > > +#include <linux/of_irq.h>
> > > > > > > +#include <linux/of_pci.h>
> > > > > > > +#include <linux/of_platform.h>
> > > > > > > +#include <linux/phy/phy.h>
> > > > > > > +#include <linux/platform_device.h>
> > > > > > > +#include <linux/reset.h>
> > > > > > > +
> > > > > > > +#include "../../pci.h"
> > > > > > Please remove this - it isn't needed.
> > > > > Ok, I will remove it in the next patch revision.
> > > > > > > +#include "pcie-designware.h"
> > > > > > > +
> > > > > > > +#define PCIE_CCRID				0x8
> > > > > > > +
> > > > > > > +#define PCIE_LCAP				0x7C
> > > > > > > +#define PCIE_LCAP_MAX_LINK_SPEED		GENMASK(3, 0)
> > > > > > > +#define PCIE_LCAP_MAX_LENGTH_WIDTH		GENMASK(9, 4)
> > > > > > These look like the standard PCI Link Capabilities Register,
> > > > > > can you use the standard ones defined in pci_regs.h? (see
> > > > > > PCI_EXP_LNKCAP_SLS_*).
> > > > > > 
> > > > > > > +
> > > > > > > +/* Link Control and Status Register */
> > > > > > > +#define PCIE_LCTLSTS				0x80
> > > > > > > +#define PCIE_LCTLSTS_ASPM_ENABLE		GENMASK(1, 0)
> > > > > > > +#define PCIE_LCTLSTS_RCB128			BIT(3)
> > > > > > > +#define PCIE_LCTLSTS_LINK_DISABLE		BIT(4)
> > > > > > > +#define PCIE_LCTLSTS_COM_CLK_CFG		BIT(6)
> > > > > > > +#define PCIE_LCTLSTS_HW_AW_DIS			BIT(9)
> > > > > > > +#define PCIE_LCTLSTS_LINK_SPEED			GENMASK(19, 16)
> > > > > > > +#define PCIE_LCTLSTS_NEGOTIATED_LINK_WIDTH	GENMASK(25, 20)
> > > > > > > +#define PCIE_LCTLSTS_SLOT_CLK_CFG		BIT(28)
> > > > > > These look like the standard Link Control and Link Status register, can
> > > > > > you use the standard ones defined in pci_regs.h? (see PCI_EXP_LNKCTL_*
> > > > > > and PCI_EXP_LNKSTA_*).
> > > > > > 
> > > > > > > +
> > > > > > > +#define PCIE_LCTLSTS2				0xA0
> > > > > > > +#define PCIE_LCTLSTS2_TGT_LINK_SPEED		GENMASK(3, 0)
> > > > > > > +#define PCIE_LCTLSTS2_TGT_LINK_SPEED_25GT	0x1
> > > > > > > +#define PCIE_LCTLSTS2_TGT_LINK_SPEED_5GT	0x2
> > > > > > > +#define PCIE_LCTLSTS2_TGT_LINK_SPEED_8GT	0x3
> > > > > > > +#define PCIE_LCTLSTS2_TGT_LINK_SPEED_16GT	0x4
> > > > > > And these look like the standard Link Control Register 2 (see
> > > > > > PCI_EXP_LNKCTL2_*).
> > > > > > 
> > > > > > Most of the #defines above and below look just like standard PCI defines
> > > > > > (for example those found in pci_regs.h) - both in terms of their name and
> > > > > > what they do. Ideally where the functionality is the same or very similar,
> > > > > > then we should use the standard PCI defines in (pci_regs.h). This helps
> > > > > > readability/understanding, helps to identify duplicate code and reduces
> > > > > > the size of your patch.
> > > > > > 
> > > > > > Also the capability offsets (e.g. PCIE_LCTLSTS2) are also standard. The
> > > > > > offsets you define are offset by an additional 0x70. E.g.i
> > > > > > PCIE_LCTLSTS2 = 0xA0, and PCI_EXP_LNKCTL2 = 0x30. Perhaps abstracting
> > > > > > this offset will allow you to use the standard pci defines?
> > > > > > 
> > > > > > I haven't looked in much detail at the remainder of the defines, but
> > > > > > the same rationale should apply.
> > > > > Agreed, that's a good point. I will define the offset macro and use the
> > > > > macros defined in pci_regs.h.
> > > > > > > +#define PCIE_LCTLSTS2_HW_AUTO_DIS		BIT(5)
> > > > > > > +
> > > > > > > +/* Ack Frequency Register */
> > > > > > > +#define PCIE_AFR				0x70C
> > > > > > > +#define PCIE_AFR_FTS_NUM			GENMASK(15, 8)
> > > > > > > +#define PCIE_AFR_COM_FTS_NUM			GENMASK(23, 16)
> > > > > > > +#define PCIE_AFR_GEN12_FTS_NUM_DFT		(SZ_128 - 1)
> > > > > > > +#define PCIE_AFR_GEN3_FTS_NUM_DFT		180
> > > > > > > +#define PCIE_AFR_GEN4_FTS_NUM_DFT		196
> > > > > > > +
> > > > > > > +#define PCIE_PLCR_DLL_LINK_EN			BIT(5)
> > > > > > > +#define PCIE_PORT_LOGIC_FTS			GENMASK(7, 0)
> > > > > > > +#define PCIE_PORT_LOGIC_DFT_FTS_NUM		(SZ_128 - 1)
> > > > > > > +
> > > > > > > +#define PCIE_MISC_CTRL				0x8BC
> > > > > > > +#define PCIE_MISC_CTRL_DBI_RO_WR_EN		BIT(0)
> > > > > > > +
> > > > > > > +#define PCIE_MULTI_LANE_CTRL			0x8C0
> > > > > > > +#define PCIE_UPCONFIG_SUPPORT			BIT(7)
> > > > > > > +#define PCIE_DIRECT_LINK_WIDTH_CHANGE		BIT(6)
> > > > > > > +#define PCIE_TARGET_LINK_WIDTH			GENMASK(5, 0)
> > > > > > > +
> > > > > > > +/* APP RC Core Control Register */
> > > > > > > +#define PCIE_APP_CCR				0x10
> > > > > > > +#define PCIE_APP_CCR_LTSSM_ENABLE		BIT(0)
> > > > > > > +
> > > > > > > +/* PCIe Message Control */
> > > > > > > +#define PCIE_APP_MSG_CR				0x30
> > > > > > > +#define PCIE_APP_MSG_XMT_PM_TURNOFF		BIT(0)
> > > > > > > +
> > > > > > > +/* PCIe Power Management Control */
> > > > > > > +#define PCIE_APP_PMC				0x44
> > > > > > > +#define PCIE_APP_PMC_IN_L2			BIT(20)
> > > > > > > +
> > > > > > > +/* Interrupt Enable Register */
> > > > > > > +#define PCIE_APP_IRNEN				0xF4
> > > > > > > +#define PCIE_APP_IRNCR				0xF8
> > > > > > > +#define PCIE_APP_IRN_AER_REPORT			BIT(0)
> > > > > > > +#define PCIE_APP_IRN_PME			BIT(2)
> > > > > > > +#define PCIE_APP_IRN_RX_VDM_MSG			BIT(4)
> > > > > > > +#define PCIE_APP_IRN_PM_TO_ACK			BIT(9)
> > > > > > > +#define PCIE_APP_IRN_LINK_AUTO_BW_STAT		BIT(11)
> > > > > > > +#define PCIE_APP_IRN_BW_MGT			BIT(12)
> > > > > > > +#define PCIE_APP_IRN_MSG_LTR			BIT(18)
> > > > > > > +#define PCIE_APP_IRN_SYS_ERR_RC			BIT(29)
> > > > > > > +
> > > > > > > +#define PCIE_APP_INTX_OFST	12
> > > > > > > +#define PCIE_APP_IRN_INT	(PCIE_APP_IRN_AER_REPORT | PCIE_APP_IRN_PME | \
> > > > > > > +			PCIE_APP_IRN_RX_VDM_MSG | PCIE_APP_IRN_SYS_ERR_RC | \
> > > > > > > +			PCIE_APP_IRN_PM_TO_ACK | PCIE_APP_IRN_MSG_LTR | \
> > > > > > > +			PCIE_APP_IRN_BW_MGT | PCIE_APP_IRN_LINK_AUTO_BW_STAT | \
> > > > > > > +			(PCIE_APP_INTX_OFST + PCI_INTERRUPT_INTA) | \
> > > > > > > +			(PCIE_APP_INTX_OFST + PCI_INTERRUPT_INTB) | \
> > > > > > > +			(PCIE_APP_INTX_OFST + PCI_INTERRUPT_INTC) | \
> > > > > > > +			(PCIE_APP_INTX_OFST + PCI_INTERRUPT_INTD))
> > > > > > > +
> > > > > > > +#define BUS_IATU_OFFS		SZ_256M
> > > > > > > +#define RST_INTRVL_DFT_MS	100
> > > > > > > +enum {
> > > > > > > +	PCIE_LINK_SPEED_AUTO = 0,
> > > > > > > +	PCIE_LINK_SPEED_GEN1,
> > > > > > > +	PCIE_LINK_SPEED_GEN2,
> > > > > > > +	PCIE_LINK_SPEED_GEN3,
> > > > > > > +	PCIE_LINK_SPEED_GEN4,
> > > > > > > +};
> > > > > > > +
> > > > > > > +struct intel_pcie_soc {
> > > > > > > +	unsigned int pcie_ver;
> > > > > > > +	unsigned int pcie_atu_offset;
> > > > > > > +	u32 num_viewport;
> > > > > > > +};
> > > > > > > +
> > > > > > > +struct intel_pcie_port {
> > > > > > > +	struct dw_pcie		pci;
> > > > > > > +	unsigned int		id; /* Physical RC Index */
> > > > > > > +	void __iomem		*app_base;
> > > > > > > +	struct gpio_desc	*reset_gpio;
> > > > > > > +	u32			rst_interval;
> > > > > > > +	u32			max_speed;
> > > > > > > +	u32			link_gen;
> > > > > > > +	u32			max_width;
> > > > > > > +	u32			lanes;
> > > > > > > +	struct clk		*core_clk;
> > > > > > > +	struct reset_control	*core_rst;
> > > > > > > +	struct phy		*phy;
> > > > > > > +};
> > > > > > > +
> > > > > > > +static void pcie_update_bits(void __iomem *base, u32 mask, u32 val, u32 ofs)
> > > > > > > +{
> > > > > > > +	u32 orig, tmp;
> > > > > > > +
> > > > > > > +	orig = readl(base + ofs);
> > > > > > > +
> > > > > > > +	tmp = (orig & ~mask) | (val & mask);
> > > > > > > +
> > > > > > > +	if (tmp != orig)
> > > > > > > +		writel(tmp, base + ofs);
> > > > > > > +}
> > > > > > > +
> > > > > > > +static inline u32 pcie_app_rd(struct intel_pcie_port *lpp, u32 ofs)
> > > > > > > +{
> > > > > > > +	return readl(lpp->app_base + ofs);
> > > > > > > +}
> > > > > > > +
> > > > > > > +static inline void pcie_app_wr(struct intel_pcie_port *lpp, u32 val, u32 ofs)
> > > > > > > +{
> > > > > > > +	writel(val, lpp->app_base + ofs);
> > > > > > > +}
> > > > > > > +
> > > > > > > +static void pcie_app_wr_mask(struct intel_pcie_port *lpp,
> > > > > > > +			     u32 mask, u32 val, u32 ofs)
> > > > > > > +{
> > > > > > > +	pcie_update_bits(lpp->app_base, mask, val, ofs);
> > > > > > > +}
> > > > > > > +
> > > > > > > +static inline u32 pcie_rc_cfg_rd(struct intel_pcie_port *lpp, u32 ofs)
> > > > > > > +{
> > > > > > > +	return dw_pcie_readl_dbi(&lpp->pci, ofs);
> > > > > > > +}
> > > > > > > +
> > > > > > > +static inline void pcie_rc_cfg_wr(struct intel_pcie_port *lpp, u32 val, u32 ofs)
> > > > > > > +{
> > > > > > > +	dw_pcie_writel_dbi(&lpp->pci, ofs, val);
> > > > > > > +}
> > > > > > > +
> > > > > > > +static void pcie_rc_cfg_wr_mask(struct intel_pcie_port *lpp,
> > > > > > > +				u32 mask, u32 val, u32 ofs)
> > > > > > > +{
> > > > > > > +	pcie_update_bits(lpp->pci.dbi_base, mask, val, ofs);
> > > > > > > +}
> > > > > > > +
> > > > > > > +static void intel_pcie_mem_iatu(struct intel_pcie_port *lpp)
> > > > > > > +{
> > > > > > > +	struct pcie_port *pp = &lpp->pci.pp;
> > > > > > > +	phys_addr_t cpu_addr = pp->mem_base;
> > > > > > > +
> > > > > > > +	dw_pcie_prog_outbound_atu(&lpp->pci, PCIE_ATU_REGION_INDEX0,
> > > > > > > +				  PCIE_ATU_TYPE_MEM, cpu_addr,
> > > > > > > +				  pp->mem_base, pp->mem_size);
> > > > > > > +}
> > > > > > > +
> > > > > > > +static void intel_pcie_ltssm_enable(struct intel_pcie_port *lpp)
> > > > > > > +{
> > > > > > > +	pcie_app_wr_mask(lpp, PCIE_APP_CCR_LTSSM_ENABLE,
> > > > > > > +			 PCIE_APP_CCR_LTSSM_ENABLE, PCIE_APP_CCR);
> > > > > > > +}
> > > > > > > +
> > > > > > > +static void intel_pcie_ltssm_disable(struct intel_pcie_port *lpp)
> > > > > > > +{
> > > > > > > +	pcie_app_wr_mask(lpp, PCIE_APP_CCR_LTSSM_ENABLE, 0, PCIE_APP_CCR);
> > > > > > > +}
> > > > > > > +
> > > > > > > +static const char *pcie_link_gen_to_str(int gen)
> > > > > > > +{
> > > > > > > +	switch (gen) {
> > > > > > > +	case PCIE_LINK_SPEED_GEN1:
> > > > > > > +		return "2.5";
> > > > > > > +	case PCIE_LINK_SPEED_GEN2:
> > > > > > > +		return "5.0";
> > > > > > > +	case PCIE_LINK_SPEED_GEN3:
> > > > > > > +		return "8.0";
> > > > > > > +	case PCIE_LINK_SPEED_GEN4:
> > > > > > > +		return "16.0";
> > > > > > > +	default:
> > > > > > > +		return "???";
> > > > > > > +	}
> > > > > > > +}
> > > > > > > +
> > > > > > > +static void intel_pcie_link_setup(struct intel_pcie_port *lpp)
> > > > > > > +{
> > > > > > > +	u32 val;
> > > > > > > +
> > > > > > > +	val = pcie_rc_cfg_rd(lpp, PCIE_LCAP);
> > > > > > > +	lpp->max_speed = FIELD_GET(PCIE_LCAP_MAX_LINK_SPEED, val);
> > > > > > > +	lpp->max_width = FIELD_GET(PCIE_LCAP_MAX_LENGTH_WIDTH, val);
> > > > > > > +
> > > > > > > +	val = pcie_rc_cfg_rd(lpp, PCIE_LCTLSTS);
> > > > > > > +
> > > > > > > +	val &= ~(PCIE_LCTLSTS_LINK_DISABLE | PCIE_LCTLSTS_ASPM_ENABLE);
> > > > > > > +	val |= (PCIE_LCTLSTS_SLOT_CLK_CFG | PCIE_LCTLSTS_COM_CLK_CFG |
> > > > > > > +		PCIE_LCTLSTS_RCB128);
> > > > > > > +	pcie_rc_cfg_wr(lpp, val, PCIE_LCTLSTS);
> > > > > > > +}
> > > > > > > +
> > > > > > > +static void intel_pcie_max_speed_setup(struct intel_pcie_port *lpp)
> > > > > > > +{
> > > > > > > +	u32 reg, val;
> > > > > > > +
> > > > > > > +	reg = pcie_rc_cfg_rd(lpp, PCIE_LCTLSTS2);
> > > > > > > +	switch (lpp->link_gen) {
> > > > > > > +	case PCIE_LINK_SPEED_GEN1:
> > > > > > > +		reg &= ~PCIE_LCTLSTS2_TGT_LINK_SPEED;
> > > > > > > +		reg |= PCIE_LCTLSTS2_HW_AUTO_DIS |
> > > > > > > +			PCIE_LCTLSTS2_TGT_LINK_SPEED_25GT;
> > > > > > > +		break;
> > > > > > > +	case PCIE_LINK_SPEED_GEN2:
> > > > > > > +		reg &= ~PCIE_LCTLSTS2_TGT_LINK_SPEED;
> > > > > > > +		reg |= PCIE_LCTLSTS2_HW_AUTO_DIS |
> > > > > > > +			PCIE_LCTLSTS2_TGT_LINK_SPEED_5GT;
> > > > > > > +		break;
> > > > > > > +	case PCIE_LINK_SPEED_GEN3:
> > > > > > > +		reg &= ~PCIE_LCTLSTS2_TGT_LINK_SPEED;
> > > > > > > +		reg |= PCIE_LCTLSTS2_HW_AUTO_DIS |
> > > > > > > +			PCIE_LCTLSTS2_TGT_LINK_SPEED_8GT;
> > > > > > > +		break;
> > > > > > > +	case PCIE_LINK_SPEED_GEN4:
> > > > > > > +		reg &= ~PCIE_LCTLSTS2_TGT_LINK_SPEED;
> > > > > > > +		reg |= PCIE_LCTLSTS2_HW_AUTO_DIS |
> > > > > > > +			PCIE_LCTLSTS2_TGT_LINK_SPEED_16GT;
> > > > > > > +		break;
> > > > > > > +	default:
> > > > > > > +		/* Use hardware capability */
> > > > > > > +		val = pcie_rc_cfg_rd(lpp, PCIE_LCAP);
> > > > > > > +		val = FIELD_GET(PCIE_LCAP_MAX_LINK_SPEED, val);
> > > > > > > +		reg &= ~PCIE_LCTLSTS2_HW_AUTO_DIS;
> > > > > > > +		reg |= val;
> > > > > > > +		break;
> > > > > > > +	}
> > > > > > > +	pcie_rc_cfg_wr(lpp, reg, PCIE_LCTLSTS2);
> > > > > > > +}
> > > > > > > +
> > > > > > > +static void intel_pcie_speed_change_enable(struct intel_pcie_port *lpp)
> > > > > > > +{
> > > > > > > +	u32 mask, val;
> > > > > > > +
> > > > > > > +	mask = PORT_LOGIC_SPEED_CHANGE | PCIE_PORT_LOGIC_FTS;
> > > > > > > +	val = PORT_LOGIC_SPEED_CHANGE | PCIE_PORT_LOGIC_DFT_FTS_NUM;
> > > > > > > +
> > > > > > > +	pcie_rc_cfg_wr_mask(lpp, mask, val, PCIE_LINK_WIDTH_SPEED_CONTROL);
> > > > > > > +}
> > > > > > > +
> > > > > > > +static void intel_pcie_speed_change_disable(struct intel_pcie_port *lpp)
> > > > > > > +{
> > > > > > > +	pcie_rc_cfg_wr_mask(lpp, PORT_LOGIC_SPEED_CHANGE, 0,
> > > > > > > +			    PCIE_LINK_WIDTH_SPEED_CONTROL);
> > > > > > > +}
> > > > > > > +
> > > > > > > +static void intel_pcie_max_link_width_setup(struct intel_pcie_port *lpp)
> > > > > > > +{
> > > > > > > +	u32 mask, val;
> > > > > > > +
> > > > > > > +	/* HW auto bandwidth negotiation must be enabled */
> > > > > > > +	pcie_rc_cfg_wr_mask(lpp, PCIE_LCTLSTS_HW_AW_DIS, 0, PCIE_LCTLSTS);
> > > > > > > +
> > > > > > > +	mask = PCIE_DIRECT_LINK_WIDTH_CHANGE | PCIE_TARGET_LINK_WIDTH;
> > > > > > > +	val = PCIE_DIRECT_LINK_WIDTH_CHANGE | lpp->lanes;
> > > > > > > +	pcie_rc_cfg_wr_mask(lpp, mask, val, PCIE_MULTI_LANE_CTRL);
> > > > > > Is this identical functionality to the writing of PCIE_PORT_LINK_CONTROL
> > > > > > in dw_pcie_setup?
> > > > > > 
> > > > > > I ask because if the user sets num-lanes in the DT, will it have the
> > > > > > desired effect?
> > > > > The intel_pcie_max_link_width_setup() function will be called by the sysfs
> > > > > attribute handler pcie_width_store() to change the link width on the fly.
> > > > Indeed, but a user may also set num-lanes in the device tree. I'm wondering
> > > > if, when set in device-tree, it will have the desired effect. Because I don't
> > > > see anything similar to PCIE_LCTLSTS_HW_AW_DIS in dw_pcie_setup which is what
> > > > your function does here.
> > > > 
> > > > I guess I'm trying to avoid the situation where num-lanes doesn't have the
> > > > desired effect and the only way to set the num-lanes is through the sysfs
> > > > control.
> > > I will check this and get back to you.
> intel_pcie_max_link_width_setup() does the lane resizing, which is
> different from the link establishment happening during probe. Also, the
> PCIE_LCTLSTS_HW_AW_DIS default value is 0, so it is not set during probe or
> dw_pcie_setup().
> 
> intel_pcie_max_link_width_setup() programs the registers as per the
> DesignWare PCIe controller databook instructions for lane resizing.
> 
> Below lines are from Designware PCIe databook for lane resizing.
> 
> - Program the TARGET_LINK_WIDTH[5:0] field of the MULTI_LANE_CONTROL_OFF
> register.
> - Program the DIRECT_LINK_WIDTH_CHANGE2 field of the MULTI_LANE_CONTROL_OFF
> register.
> It is assumed that the PCIE_CAP_HW_AUTO_WIDTH_DISABLE field in the
> LINK_CONTROL_LINK_STATUS_REG register is 0.

OK, so there is a difference between initial training and then resizing
of the link. Given that this is not Intel specific, should this function
exist within the designware driver for the benefit of others?

> 
> 
> > > > > > > +}
> > > > > > > +
> > > > > > > +static void intel_pcie_port_logic_setup(struct intel_pcie_port *lpp)
> > > > > > > +{
> > > > > > > +	u32 val, mask, fts;
> > > > > > > +
> > > > > > > +	switch (lpp->max_speed) {
> > > > > > > +	case PCIE_LINK_SPEED_GEN1:
> > > > > > > +	case PCIE_LINK_SPEED_GEN2:
> > > > > > > +		fts = PCIE_AFR_GEN12_FTS_NUM_DFT;
> > > > > > > +		break;
> > > > > > > +	case PCIE_LINK_SPEED_GEN3:
> > > > > > > +		fts = PCIE_AFR_GEN3_FTS_NUM_DFT;
> > > > > > > +		break;
> > > > > > > +	case PCIE_LINK_SPEED_GEN4:
> > > > > > > +		fts = PCIE_AFR_GEN4_FTS_NUM_DFT;
> > > > > > > +		break;
> > > > > > > +	default:
> > > > > > > +		fts = PCIE_AFR_GEN12_FTS_NUM_DFT;
> > > > > > > +		break;
> > > > > > > +	}
> > > > > > > +	mask = PCIE_AFR_FTS_NUM | PCIE_AFR_COM_FTS_NUM;
> > > > > > > +	val = FIELD_PREP(PCIE_AFR_FTS_NUM, fts) |
> > > > > > > +	       FIELD_PREP(PCIE_AFR_COM_FTS_NUM, fts);
> > > > > > > +	pcie_rc_cfg_wr_mask(lpp, mask, val, PCIE_AFR);
> > > > > > > +
> > > > > > > +	/* Port Link Control Register */
> > > > > > > +	pcie_rc_cfg_wr_mask(lpp, PCIE_PLCR_DLL_LINK_EN,
> > > > > > > +			    PCIE_PLCR_DLL_LINK_EN, PCIE_PORT_LINK_CONTROL);
> > > > > > > +}
> > > > > > > +
> > > > > > > +static void intel_pcie_upconfig_setup(struct intel_pcie_port *lpp)
> > > > > > > +{
> > > > > > > +	pcie_rc_cfg_wr_mask(lpp, PCIE_UPCONFIG_SUPPORT,
> > > > > > > +			    PCIE_UPCONFIG_SUPPORT, PCIE_MULTI_LANE_CTRL);
> > > > > > > +}
> > > > > > > +
> > > > > > > +static void intel_pcie_rc_setup(struct intel_pcie_port *lpp)
> > > > > > > +{
> > > > > > > +	intel_pcie_ltssm_disable(lpp);
> > > > > > > +	intel_pcie_link_setup(lpp);
> > > > > > > +	dw_pcie_setup_rc(&lpp->pci.pp);
> > > > > > > +	intel_pcie_upconfig_setup(lpp);
> > > > > > > +
> > > > > > > +	intel_pcie_max_speed_setup(lpp);
> > > > > > > +	intel_pcie_speed_change_enable(lpp);
> > > > > > > +	intel_pcie_port_logic_setup(lpp);
> > > > > > > +	intel_pcie_mem_iatu(lpp);
> > > > > > Doesn't dw_pcie_setup_rc do the same as intel_pcie_mem_iatu?
> > > > > Thanks for pointing that out. dw_pcie_setup_rc() does.
> > > > > intel_pcie_mem_iatu can be removed.
> > > > Excellent.
> > > > 
> > > > > > > +}
> > > > > > > +
> > > > > > > +static int intel_pcie_ep_rst_init(struct intel_pcie_port *lpp)
> > > > > > > +{
> > > > > > > +	struct device *dev = lpp->pci.dev;
> > > > > > > +	int ret;
> > > > > > > +
> > > > > > > +	lpp->reset_gpio = devm_gpiod_get(dev, "reset", GPIOD_OUT_LOW);
> > > > > > > +	if (IS_ERR(lpp->reset_gpio)) {
> > > > > > > +		ret = PTR_ERR(lpp->reset_gpio);
> > > > > > > +		if (ret != -EPROBE_DEFER)
> > > > > > > +			dev_err(dev, "failed to request PCIe GPIO: %d\n", ret);
> > > > > > > +		return ret;
> > > > > > > +	}
> > > > > > > +	/* Make initial reset last for 100us */
> > > > > > > +	usleep_range(100, 200);
> > > > > > > +
> > > > > > > +	return 0;
> > > > > > > +}
> > > > > > > +
> > > > > > > +static void intel_pcie_core_rst_assert(struct intel_pcie_port *lpp)
> > > > > > > +{
> > > > > > > +	reset_control_assert(lpp->core_rst);
> > > > > > > +}
> > > > > > > +
> > > > > > > +static void intel_pcie_core_rst_deassert(struct intel_pcie_port *lpp)
> > > > > > > +{
> > > > > > > +	/*
> > > > > > > +	 * One micro-second delay to make sure the reset pulse is
> > > > > > > +	 * wide enough so that the core reset is clean.
> > > > > > > +	 */
> > > > > > > +	udelay(1);
> > > > > > > +	reset_control_deassert(lpp->core_rst);
> > > > > > > +
> > > > > > > +	/*
> > > > > > > +	 * On some SoCs the core reset also resets the PHY, so more
> > > > > > > +	 * delay is needed to make sure the reset process is done.
> > > > > > > +	 */
> > > > > > > +	usleep_range(1000, 2000);
> > > > > > > +}
> > > > > > > +
> > > > > > > +static void intel_pcie_device_rst_assert(struct intel_pcie_port *lpp)
> > > > > > > +{
> > > > > > > +	gpiod_set_value_cansleep(lpp->reset_gpio, 1);
> > > > > > > +}
> > > > > > > +
> > > > > > > +static void intel_pcie_device_rst_deassert(struct intel_pcie_port *lpp)
> > > > > > > +{
> > > > > > > +	msleep(lpp->rst_interval);
> > > > > > > +	gpiod_set_value_cansleep(lpp->reset_gpio, 0);
> > > > > > > +}
> > > > > > > +
> > > > > > > +static int intel_pcie_app_logic_setup(struct intel_pcie_port *lpp)
> > > > > > > +{
> > > > > > > +	intel_pcie_device_rst_deassert(lpp);
> > > > > > > +	intel_pcie_ltssm_enable(lpp);
> > > > > > > +
> > > > > > > +	return dw_pcie_wait_for_link(&lpp->pci);
> > > > > > > +}
> > > > > > > +
> > > > > > > +static irqreturn_t intel_pcie_core_isr(int irq, void *arg)
> > > > > > > +{
> > > > > > > +	struct intel_pcie_port *lpp = arg;
> > > > > > > +	u32 val, reg;
> > > > > > > +
> > > > > > > +	reg = pcie_app_rd(lpp, PCIE_APP_IRNCR);
> > > > > > > +	val = reg & PCIE_APP_IRN_INT;
> > > > > > > +
> > > > > > > +	pcie_app_wr(lpp, val, PCIE_APP_IRNCR);
> > > > > > > +
> > > > > > > +	trace_printk("PCIe misc interrupt status 0x%x\n", reg);
> > > > > > > +	return IRQ_HANDLED;
> > > > > > > +}
> > > > > > Why do we bother handling this interrupt?
> > > > > This helps during debugging.
> > > > I think it should be removed. It adds very little value to most users.
> > > > 
> > > > Most users won't have access to the datasheets to debug this properly, and
> > > > in any case if they could, then they would be competent to add an interrupt
> > > > handler themselves.
> > > IMO, having this will help to get the basic hardware interrupt status during
> > > debugging. And the user can also enhance the handler as needed.
> > > I think keeping it is more beneficial than removing it.
> > > Please let me know your view.
> > I'd much prefer to remove this.
> Sure, I am OK to remove it.
> > 
> > > > > > > +
> > > > > > > +static int intel_pcie_setup_irq(struct intel_pcie_port *lpp)
> > > > > > > +{
> > > > > > > +	struct device *dev = lpp->pci.dev;
> > > > > > > +	struct platform_device *pdev;
> > > > > > > +	char *irq_name;
> > > > > > > +	int irq, ret;
> > > > > > > +
> > > > > > > +	pdev = to_platform_device(dev);
> > > > > > > +	irq = platform_get_irq(pdev, 0);
> > > > > > > +	if (irq < 0) {
> > > > > > > +		dev_err(dev, "missing sys integrated irq resource\n");
> > > > > > > +		return irq;
> > > > > > > +	}
> > > > > > > +
> > > > > > > +	irq_name = devm_kasprintf(dev, GFP_KERNEL, "pcie_misc%d", lpp->id);
> > > > > > > +	if (!irq_name) {
> > > > > > > +		dev_err(dev, "failed to alloc irq name\n");
> > > > > > > +		return -ENOMEM;
> > > > > > > +	}
> > > > > > > +
> > > > > > > +	ret = devm_request_irq(dev, irq, intel_pcie_core_isr,
> > > > > > > +			       IRQF_SHARED, irq_name, lpp);
> > > > > > > +	if (ret) {
> > > > > > > +		dev_err(dev, "request irq %d failed\n", irq);
> > > > > > > +		return ret;
> > > > > > > +	}
> > > > > > > +	/* Enable integrated interrupts */
> > > > > > > +	pcie_app_wr_mask(lpp, PCIE_APP_IRN_INT,
> > > > > > > +			 PCIE_APP_IRN_INT, PCIE_APP_IRNEN);
> > > > > > > +
> > > > > > > +	return ret;
> > > > > > > +}
> > > > > > > +
> > > > > > > +static void intel_pcie_core_irq_disable(struct intel_pcie_port *lpp)
> > > > > > > +{
> > > > > > > +	pcie_app_wr(lpp, 0, PCIE_APP_IRNEN);
> > > > > > > +	pcie_app_wr(lpp, PCIE_APP_IRN_INT,  PCIE_APP_IRNCR);
> > > > > > > +}
> > > > > > > +
> > > > > > > +static int intel_pcie_get_resources(struct platform_device *pdev)
> > > > > > > +{
> > > > > > > +	struct intel_pcie_port *lpp;
> > > > > > > +	struct resource *res;
> > > > > > > +	struct dw_pcie *pci;
> > > > > > > +	struct device *dev;
> > > > > > > +	int ret;
> > > > > > > +
> > > > > > > +	lpp = platform_get_drvdata(pdev);
> > > > > > > +	pci = &lpp->pci;
> > > > > > > +	dev = pci->dev;
> > > > > > > +
> > > > > > > +	ret = device_property_read_u32(dev, "linux,pci-domain", &lpp->id);
> > > > > > > +	if (ret) {
> > > > > > > +		dev_err(dev, "failed to get domain id, errno %d\n", ret);
> > > > > > > +		return ret;
> > > > > > > +	}
> > > > > > > +
> > > > > > > +	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi");
> > > > > > > +	if (!res)
> > > > > > > +		return -ENXIO;
> > > > > > > +
> > > > > > > +	pci->dbi_base = devm_ioremap_resource(dev, res);
> > > > > > > +	if (IS_ERR(pci->dbi_base))
> > > > > > > +		return PTR_ERR(pci->dbi_base);
> > > > > > > +
> > > > > > > +	lpp->core_clk = devm_clk_get(dev, NULL);
> > > > > > > +	if (IS_ERR(lpp->core_clk)) {
> > > > > > > +		ret = PTR_ERR(lpp->core_clk);
> > > > > > > +		if (ret != -EPROBE_DEFER)
> > > > > > > +			dev_err(dev, "failed to get clks: %d\n", ret);
> > > > > > > +		return ret;
> > > > > > > +	}
> > > > > > > +
> > > > > > > +	lpp->core_rst = devm_reset_control_get(dev, NULL);
> > > > > > > +	if (IS_ERR(lpp->core_rst)) {
> > > > > > > +		ret = PTR_ERR(lpp->core_rst);
> > > > > > > +		if (ret != -EPROBE_DEFER)
> > > > > > > +			dev_err(dev, "failed to get resets: %d\n", ret);
> > > > > > > +		return ret;
> > > > > > > +	}
> > > > > > > +
> > > > > > > +	ret = device_property_match_string(dev, "device_type", "pci");
> > > > > > > +	if (ret) {
> > > > > > > +		dev_err(dev, "failed to find pci device type: %d\n", ret);
> > > > > > > +		return ret;
> > > > > > > +	}
> > > > > > > +
> > > > > > > +	if (device_property_read_u32(dev, "reset-assert-ms",
> > > > > > > +				     &lpp->rst_interval))
> > > > > > > +		lpp->rst_interval = RST_INTRVL_DFT_MS;
> > > > > > > +
> > > > > > > +	if (device_property_read_u32(dev, "max-link-speed", &lpp->link_gen))
> > > > > > > +		lpp->link_gen = 0; /* Fallback to auto */
> > > > > > Is it possible to use of_pci_get_max_link_speed here instead?
> > > > > Thanks for pointing that out. of_pci_get_max_link_speed() can be used
> > > > > here. I will update it in the next patch revision.
> I just remembered that earlier we were using of_pci_get_max_link_speed()
> itself. As per reviewer comments, I changed it to device_property_read_u32()
> to maintain symmetry in parsing device tree properties from the device node.
> Let me know your view.

I couldn't find an earlier version of this series that used
of_pci_get_max_link_speed. Have I missed it somewhere?

> > > > > > > +
> > > > > > > +	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "app");
> > > > > > > +	if (!res)
> > > > > > > +		return -ENXIO;
> > > > > > > +
> > > > > > > +	lpp->app_base = devm_ioremap_resource(dev, res);
> > > > > > > +	if (IS_ERR(lpp->app_base))
> > > > > > > +		return PTR_ERR(lpp->app_base);
> > > > > > > +
> > > > > > > +	ret = intel_pcie_ep_rst_init(lpp);
> > > > > > > +	if (ret)
> > > > > > > +		return ret;
> > > > > > Given that this is called from a function '..._get_resources' I don't think
> > > > > > we should be resetting anything here.
> > > > > Agree. I will move it out of get_resources().
> > > > > > > +
> > > > > > > +	lpp->phy = devm_phy_get(dev, "pciephy");
> > > > > > > +	if (IS_ERR(lpp->phy)) {
> > > > > > > +		ret = PTR_ERR(lpp->phy);
> > > > > > > +		if (ret != -EPROBE_DEFER)
> > > > > > > +			dev_err(dev, "couldn't get pcie-phy: %d\n", ret);
> > > > > > > +		return ret;
> > > > > > > +	}
> > > > > > > +	return 0;
> > > > > > > +}
> > > > > > > +
> > > > > > > +static void intel_pcie_deinit_phy(struct intel_pcie_port *lpp)
> > > > > > > +{
> > > > > > > +	phy_exit(lpp->phy);
> > > > > > > +}
> > > > > > > +
> > > > > > > +static int intel_pcie_wait_l2(struct intel_pcie_port *lpp)
> > > > > > > +{
> > > > > > > +	u32 value;
> > > > > > > +	int ret;
> > > > > > > +
> > > > > > > +	if (lpp->max_speed < PCIE_LINK_SPEED_GEN3)
> > > > > > > +		return 0;
> > > > > > > +
> > > > > > > +	/* Send PME_TURN_OFF message */
> > > > > > > +	pcie_app_wr_mask(lpp, PCIE_APP_MSG_XMT_PM_TURNOFF,
> > > > > > > +			 PCIE_APP_MSG_XMT_PM_TURNOFF, PCIE_APP_MSG_CR);
> > > > > > > +
> > > > > > > +	/* Read PMC status and wait for falling into L2 link state */
> > > > > > > +	ret = readl_poll_timeout(lpp->app_base + PCIE_APP_PMC, value,
> > > > > > > +				 (value & PCIE_APP_PMC_IN_L2), 20,
> > > > > > > +				 jiffies_to_usecs(5 * HZ));
> > > > > > > +	if (ret)
> > > > > > > +		dev_err(lpp->pci.dev, "PCIe link enter L2 timeout!\n");
> > > > > > > +
> > > > > > > +	return ret;
> > > > > > > +}
> > > > > > > +
> > > > > > > +static void intel_pcie_turn_off(struct intel_pcie_port *lpp)
> > > > > > > +{
> > > > > > > +	if (dw_pcie_link_up(&lpp->pci))
> > > > > > > +		intel_pcie_wait_l2(lpp);
> > > > > > > +
> > > > > > > +	/* Put EP in reset state */
> > > > > > EP?
> > > > > Endpoint device. I will update it.
> > > > Is this not a host bridge controller?
> > > It is PERST#, which signals a hardware reset to the endpoint:
> > >          /* Put EP in reset state */
> > >          intel_pcie_device_rst_assert(lpp);
> > OK.
> > 
> > > > > > > +	intel_pcie_device_rst_assert(lpp);
> > > > > > > +	pcie_rc_cfg_wr_mask(lpp, PCI_COMMAND_MEMORY, 0, PCI_COMMAND);
> > > > > > > +}
> > > > > > > +
> > > > > > > +static int intel_pcie_host_setup(struct intel_pcie_port *lpp)
> > > > > > > +{
> > > > > > > +	int ret;
> > > > > > > +
> > > > > > > +	intel_pcie_core_rst_assert(lpp);
> > > > > > > +	intel_pcie_device_rst_assert(lpp);
> > > > > > > +
> > > > > > > +	ret = phy_init(lpp->phy);
> > > > > > > +	if (ret)
> > > > > > > +		return ret;
> > > > > > > +
> > > > > > > +	intel_pcie_core_rst_deassert(lpp);
> > > > > > > +
> > > > > > > +	ret = clk_prepare_enable(lpp->core_clk);
> > > > > > > +	if (ret) {
> > > > > > > +		dev_err(lpp->pci.dev, "Core clock enable failed: %d\n", ret);
> > > > > > > +		goto clk_err;
> > > > > > > +	}
> > > > > > > +
> > > > > > > +	intel_pcie_rc_setup(lpp);
> > > > > > > +	ret = intel_pcie_app_logic_setup(lpp);
> > > > > > > +	if (ret)
> > > > > > > +		goto app_init_err;
> > > > > > > +
> > > > > > > +	ret = intel_pcie_setup_irq(lpp);
> > > > > > > +	if (!ret)
> > > > > > > +		return ret;
> > > > > > > +
> > > > > > > +	intel_pcie_turn_off(lpp);
> > > > > > > +app_init_err:
> > > > > > > +	clk_disable_unprepare(lpp->core_clk);
> > > > > > > +clk_err:
> > > > > > > +	intel_pcie_core_rst_assert(lpp);
> > > > > > > +	intel_pcie_deinit_phy(lpp);
> > > > > > > +	return ret;
> > > > > > > +}
> > > > > > > +
> > > > > > > +static ssize_t
> > > > > > > +pcie_link_status_show(struct device *dev, struct device_attribute *attr,
> > > > > > > +		      char *buf)
> > > > > > > +{
> > > > > > > +	u32 reg, width, gen;
> > > > > > > +	struct intel_pcie_port *lpp;
> > > > > > > +
> > > > > > > +	lpp = dev_get_drvdata(dev);
> > > > > > > +
> > > > > > > +	reg = pcie_rc_cfg_rd(lpp, PCIE_LCTLSTS);
> > > > > > > +	width = FIELD_GET(PCIE_LCTLSTS_NEGOTIATED_LINK_WIDTH, reg);
> > > > > > > +	gen = FIELD_GET(PCIE_LCTLSTS_LINK_SPEED, reg);
> > > > > > > +	if (gen > lpp->max_speed)
> > > > > > > +		return -EINVAL;
> > > > > > > +
> > > > > > > +	return sprintf(buf, "Port %2u Width x%u Speed %s GT/s\n", lpp->id,
> > > > > > > +		       width, pcie_link_gen_to_str(gen));
> > > > > > > +}
> > > > > > > +static DEVICE_ATTR_RO(pcie_link_status);
> > > > > > > +
> > > > > > > +static ssize_t pcie_speed_store(struct device *dev,
> > > > > > > +				struct device_attribute *attr,
> > > > > > > +				const char *buf, size_t len)
> > > > > > > +{
> > > > > > > +	struct intel_pcie_port *lpp;
> > > > > > > +	unsigned long val;
> > > > > > > +	int ret;
> > > > > > > +
> > > > > > > +	lpp = dev_get_drvdata(dev);
> > > > > > > +
> > > > > > > +	ret = kstrtoul(buf, 10, &val);
> > > > > > > +	if (ret)
> > > > > > > +		return ret;
> > > > > > > +
> > > > > > > +	if (val > lpp->max_speed)
> > > > > > > +		return -EINVAL;
> > > > > > > +
> > > > > > > +	lpp->link_gen = val;
> > > > > > > +	intel_pcie_max_speed_setup(lpp);
> > > > > > > +	intel_pcie_speed_change_disable(lpp);
> > > > > > > +	intel_pcie_speed_change_enable(lpp);
> > > > > > > +
> > > > > > > +	return len;
> > > > > > > +}
> > > > > > > +static DEVICE_ATTR_WO(pcie_speed);
> > > > > > > +
> > > > > > > +/*
> > > > > > > + * Link width change on the fly is not always successful.
> > > > > > > + * It also depends on the partner.
> > > > > > > + */
> > > > > > > +static ssize_t pcie_width_store(struct device *dev,
> > > > > > > +				struct device_attribute *attr,
> > > > > > > +				const char *buf, size_t len)
> > > > > > > +{
> > > > > > > +	struct intel_pcie_port *lpp;
> > > > > > > +	unsigned long val;
> > > > > > > +
> > > > > > > +	lpp = dev_get_drvdata(dev);
> > > > > > > +
> > > > > > > +	if (kstrtoul(buf, 10, &val))
> > > > > > > +		return -EINVAL;
> > > > > > > +
> > > > > > > +	if (val > lpp->max_width)
> > > > > > > +		return -EINVAL;
> > > > > > > +
> > > > > > > +	lpp->lanes = val;
> > > > > > > +	intel_pcie_max_link_width_setup(lpp);
> > > > > > > +
> > > > > > > +	return len;
> > > > > > > +}
> > > > > > > +static DEVICE_ATTR_WO(pcie_width);
> > > > > > You mentioned that a use-case for changing width/speed on the fly was to
> > > > > > control power consumption (and this also helps debugging issues). As I
> > > > > > understand there is no current support for this in the kernel - yet it is
> > > > > > something that would provide value.
> > > > > > 
> > > > > > I haven't looked in much detail, however as I understand the PCI spec
> > > > > > allows an upstream partner to change the link speed and retrain. (I'm not
> > > > > > sure about link width). Given that we already have
> > > > > > [current,max]_link_[speed,width] is sysfs for each PCI device, it would
> > > > > > seem natural to extend this to allow for writing a max width or speed.
> > > > > > 
> > > > > > So ideally this type of thing would be moved to the core or at least in
> > > > > > the dwc driver. This way the benefits can be applied to more devices on
> > > > > > larger PCI fabrics - Though perhaps others here will have a different view
> > > > > > and I'm keen to hear them.
> > > > > > 
> > > > > > I'm keen to limit the differences between the DWC controller drivers and
> > > > > > unify common code - thus it would be helpful to have a justification as to
> > > > > > why this is only relevant for this controller.
> > > > > > 
> > > > > > For user-space only control, it is possible to achieve what you have here
> > > > > > with userspace utilities (something like setpci) (assuming the standard
> > > > > > looking registers you currently access are exposed in the normal config
> > > > > > space way - though PCIe capabilities).
> > > > > > 
> > > > > > My suggestion would be to drop these changes and later add something that
> > > > > > can benefit more devices. In any case if these changes stay within this
> > > > > > driver then it would be helpful to move them to a separate patch.
> > > > > Sure, I will submit these in a separate patch.
> > > > Please ensure that all supporting macros, functions and defines go with
> > > > that other patch as well.
> > > Sure.
> > > > > > > +
> > > > > > > +static struct attribute *pcie_cfg_attrs[] = {
> > > > > > > +	&dev_attr_pcie_link_status.attr,
> > > > > > > +	&dev_attr_pcie_speed.attr,
> > > > > > > +	&dev_attr_pcie_width.attr,
> > > > > > > +	NULL,
> > > > > > > +};
> > > > > > > +ATTRIBUTE_GROUPS(pcie_cfg);
> > > > > > > +
> > > > > > > +static int intel_pcie_sysfs_init(struct intel_pcie_port *lpp)
> > > > > > > +{
> > > > > > > +	return devm_device_add_groups(lpp->pci.dev, pcie_cfg_groups);
> > > > > > > +}
> > > > > > > +
> > > > > > > +static void __intel_pcie_remove(struct intel_pcie_port *lpp)
> > > > > > > +{
> > > > > > > +	pcie_rc_cfg_wr_mask(lpp, PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER,
> > > > > > > +			    0, PCI_COMMAND);
> > > > > > > +	intel_pcie_core_irq_disable(lpp);
> > > > > > > +	intel_pcie_turn_off(lpp);
> > > > > > > +	clk_disable_unprepare(lpp->core_clk);
> > > > > > > +	intel_pcie_core_rst_assert(lpp);
> > > > > > > +	intel_pcie_deinit_phy(lpp);
> > > > > > > +}
> > > > > > > +
> > > > > > > +static int intel_pcie_remove(struct platform_device *pdev)
> > > > > > > +{
> > > > > > > +	struct intel_pcie_port *lpp = platform_get_drvdata(pdev);
> > > > > > > +	struct pcie_port *pp = &lpp->pci.pp;
> > > > > > > +
> > > > > > > +	dw_pcie_host_deinit(pp);
> > > > > > > +	__intel_pcie_remove(lpp);
> > > > > > > +
> > > > > > > +	return 0;
> > > > > > > +}
> > > > > > > +
> > > > > > > +static int __maybe_unused intel_pcie_suspend_noirq(struct device *dev)
> > > > > > > +{
> > > > > > > +	struct intel_pcie_port *lpp = dev_get_drvdata(dev);
> > > > > > > +	int ret;
> > > > > > > +
> > > > > > > +	intel_pcie_core_irq_disable(lpp);
> > > > > > > +	ret = intel_pcie_wait_l2(lpp);
> > > > > > > +	if (ret)
> > > > > > > +		return ret;
> > > > > > > +
> > > > > > > +	intel_pcie_deinit_phy(lpp);
> > > > > > > +	clk_disable_unprepare(lpp->core_clk);
> > > > > > > +	return ret;
> > > > > > > +}
> > > > > > > +
> > > > > > > +static int __maybe_unused intel_pcie_resume_noirq(struct device *dev)
> > > > > > > +{
> > > > > > > +	struct intel_pcie_port *lpp = dev_get_drvdata(dev);
> > > > > > > +
> > > > > > > +	return intel_pcie_host_setup(lpp);
> > > > > > > +}
> > > > > > > +
> > > > > > > +static int intel_pcie_rc_init(struct pcie_port *pp)
> > > > > > > +{
> > > > > > > +	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
> > > > > > > +	struct intel_pcie_port *lpp = dev_get_drvdata(pci->dev);
> > > > > > > +	int ret;
> > > > > > > +
> > > > > > > +	/* RC/host initialization */
> > > > > > > +	ret = intel_pcie_host_setup(lpp);
> > > > > > > +	if (ret)
> > > > > > > +		return ret;
> > > > > > Insert new line here.
> > > > > Ok.
> > > > > > > +	ret = intel_pcie_sysfs_init(lpp);
> > > > > > > +	if (ret)
> > > > > > > +		__intel_pcie_remove(lpp);
> > > > > > > +	return ret;
> > > > > > > +}
> > > > > > > +
> > > > > > > +int intel_pcie_msi_init(struct pcie_port *pp)
> > > > > > > +{
> > > > > > > +	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
> > > > > > > +
> > > > > > > +	dev_dbg(pci->dev, "PCIe MSI/MSIx is handled by MSI in x86 processor\n");
> > > > > > What about other processors? (Noting that this driver doesn't depend on
> > > > > > any specific ARCH in the KConfig).
> > > > > Agree. I will mark the dependency in Kconfig.
> > > > OK, please also see how other drivers use the COMPILE_TEST Kconfig option.
> > > Ok sure.
> > > > I'd suggest that the dev_dbg just becomes a code comment.
> Ok
> > > > 
> > > > > > > +	return 0;
> > > > > > > +}
> > > > > > > +
> > > > > > > +u64 intel_pcie_cpu_addr(struct dw_pcie *pcie, u64 cpu_addr)
> > > > > > > +{
> > > > > > > +	return cpu_addr + BUS_IATU_OFFS;
> > > > > > > +}
> > > > > > > +
> > > > > > > +static const struct dw_pcie_ops intel_pcie_ops = {
> > > > > > > +	.cpu_addr_fixup = intel_pcie_cpu_addr,
> > > > > > > +};
> > > > > > > +
> > > > > > > +static const struct dw_pcie_host_ops intel_pcie_dw_ops = {
> > > > > > > +	.host_init =		intel_pcie_rc_init,
> > > > > > > +	.msi_host_init =	intel_pcie_msi_init,
> > > > > > > +};
> > > > > > > +
> > > > > > > +static const struct intel_pcie_soc pcie_data = {
> > > > > > > +	.pcie_ver =		0x520A,
> > > > > > > +	.pcie_atu_offset =	0xC0000,
> > > > > > > +	.num_viewport =		3,
> > > > > > > +};
> > > > > > > +
> > > > > > > +static int intel_pcie_probe(struct platform_device *pdev)
> > > > > > > +{
> > > > > > > +	const struct intel_pcie_soc *data;
> > > > > > > +	struct device *dev = &pdev->dev;
> > > > > > > +	struct intel_pcie_port *lpp;
> > > > > > > +	struct pcie_port *pp;
> > > > > > > +	struct dw_pcie *pci;
> > > > > > > +	int ret;
> > > > > > > +
> > > > > > > +	lpp = devm_kzalloc(dev, sizeof(*lpp), GFP_KERNEL);
> > > > > > > +	if (!lpp)
> > > > > > > +		return -ENOMEM;
> > > > > > > +
> > > > > > > +	platform_set_drvdata(pdev, lpp);
> > > > > > > +	pci = &lpp->pci;
> > > > > > > +	pci->dev = dev;
> > > > > > > +	pp = &pci->pp;
> > > > > > > +
> > > > > > > +	ret = intel_pcie_get_resources(pdev);
> > > > > > > +	if (ret)
> > > > > > > +		return ret;
> > > > > > > +
> > > > > > > +	data = device_get_match_data(dev);
> > > > > > > +	pci->ops = &intel_pcie_ops;
> > > > > > > +	pci->version = data->pcie_ver;
> > > > > > > +	pci->atu_base = pci->dbi_base + data->pcie_atu_offset;
> > > > > > > +	pp->ops = &intel_pcie_dw_ops;
> > > > > > > +
> > > > > > > +	ret = dw_pcie_host_init(pp);
> > > > > > > +	if (ret) {
> > > > > > > +		dev_err(dev, "cannot initialize host\n");
> > > > > > > +		return ret;
> > > > > > > +	}
> > > > > > Add a new line after the closing brace.
> > > > > Ok
> > > > > > > +	/* Intel PCIe doesn't configure the IO region, so configure
> > > > > > > +	 * the viewport not to access the IO region during register
> > > > > > > +	 * read/write operations.
> > > > > > > +	 */
> > > > > > > +	pci->num_viewport = data->num_viewport;
> > > > > > > +	dev_info(dev,
> > > > > > > +		 "Intel AXI PCIe Root Complex Port %d Init Done\n", lpp->id);
> > > > > > > +	return ret;
> > > > > > > +}
> > > > > > > +
> > > > > > > +static const struct dev_pm_ops intel_pcie_pm_ops = {
> > > > > > > +	SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(intel_pcie_suspend_noirq,
> > > > > > > +				      intel_pcie_resume_noirq)
> > > > > > > +};
> > > > > > > +
> > > > > > > +static const struct of_device_id of_intel_pcie_match[] = {
> > > > > > > +	{ .compatible = "intel,lgm-pcie", .data = &pcie_data },
> > > > > > > +	{}
> > > > > > > +};
> > > > > > > +
> > > > > > > +static struct platform_driver intel_pcie_driver = {
> > > > > > > +	.probe = intel_pcie_probe,
> > > > > > > +	.remove = intel_pcie_remove,
> > > > > > > +	.driver = {
> > > > > > > +		.name = "intel-lgm-pcie",
> > > > > > Is there a reason why the we use intel-lgm-pcie here and pcie-intel-axi
> > > > > > elsewhere? What does lgm mean?
> > > > > LGM is the name of the Intel SoC. I will rename it to pcie-intel-axi
> > > > > to be more generic.
> > > > I'm keen to ensure that it is consistently named. I've seen other comments
> > > > regarding what the name should be - I don't have a strong opinion though
> > > > I do think that *-axi may be too generic.
> 
> This PCIe driver is for the Intel Gateway SoCs. So how about naming it
> "pcie-intel-gw": pcie-intel-gw.c, with the Kconfig symbol PCIE_INTEL_GW?

I don't have a problem with this, though others may have differing views.

Thanks,

Andrew Murray

> 
> Regards,
> Dilip
> 
> > > OK, I will check and get back to you on this.
> 
> > > 
> > > Regards,
> > Thanks,
> > 
> > Andrew Murray
> > 
> > > Dilip
> > > 
> > > > Thanks,
> > > > 
> > > > Andrew Murray
> > > > 
> > > > > > Thanks,
> > > > > > 
> > > > > > Andrew Murray
> > > > > > 
> > > > > > > +		.of_match_table = of_intel_pcie_match,
> > > > > > > +		.pm = &intel_pcie_pm_ops,
> > > > > > > +	},
> > > > > > > +};
> > > > > > > +builtin_platform_driver(intel_pcie_driver);
> > > > > > > -- 
> > > > > > > 2.11.0
> > > > > > > 


* Re: [PATCH v3 2/2] dwc: PCI: intel: Intel PCIe RC controller driver
  2019-09-11 10:30               ` Andrew Murray
@ 2019-09-12  6:58                 ` Dilip Kota
  2019-09-12  8:25                   ` Andrew Murray
  0 siblings, 1 reply; 37+ messages in thread
From: Dilip Kota @ 2019-09-12  6:58 UTC (permalink / raw)
  To: Andrew Murray
  Cc: jingoohan1, gustavo.pimentel, lorenzo.pieralisi, robh,
	martin.blumenstingl, linux-pci, hch, devicetree, linux-kernel,
	andriy.shevchenko, cheol.yong.kim, chuanhua.lei, qi-ming.wu

Hi Andrew Murray,

On 9/11/2019 6:30 PM, Andrew Murray wrote:
> On Tue, Sep 10, 2019 at 03:46:17PM +0800, Dilip Kota wrote:
>> Hi Andrew Murray,
>>
>> Please find my response inline.
>>
>> On 9/9/2019 4:31 PM, Andrew Murray wrote:
>>> On Mon, Sep 09, 2019 at 02:51:03PM +0800, Dilip Kota wrote:
>>>> On 9/6/2019 7:20 PM, Andrew Murray wrote:
>>>>> On Fri, Sep 06, 2019 at 06:58:11PM +0800, Dilip Kota wrote:
>>>>>> Hi Andrew Murray,
>>>>>>
>>>>>> Thanks for the review. Please find my response inline.
>>>>>>
>>>>>> On 9/5/2019 6:45 PM, Andrew Murray wrote:
>>>>>>> On Wed, Sep 04, 2019 at 06:10:31PM +0800, Dilip Kota wrote:
>>>>>>>> Add support to PCIe RC controller on Intel Universal
>>>>>>>> Gateway SoC. The PCIe controller is based on the Synopsys
>>>>>>>> DesignWare PCIe core.
>>>>>>>>
>>>>>>>> Signed-off-by: Dilip Kota <eswara.kota@linux.intel.com>
>>>>>>>> ---
>>>>>>> Hi Dilip,
>>>>>>>
>>>>>>> Thanks for the patch, initial feedback below:
>>>>>>>
>>>>>>>> changes on v3:
>>>>>>>> 	Rename PCIe app logic registers with PCIE_APP prefix.
>>>>>>>> 	PCIE_IOP_CTRL configuration is not required. Remove respective code.
>>>>>>>> 	Remove wrapper functions for clk enable/disable APIs.
>>>>>>>> 	Use platform_get_resource_byname() instead of
>>>>>>>> 	  devm_platform_ioremap_resource() to be similar with DWC framework.
>>>>>>>> 	Rename phy name to "pciephy".
>>>>>>>> 	Modify debug message in msi_init() callback to be more specific.
>>>>>>>> 	Remove map_irq() callback.
>>>>>>>> 	Enable the INTx interrupts at the time of PCIe initialization.
>>>>>>>> 	Reduce memory fragmentation by using variable "struct dw_pcie pci"
>>>>>>>> 	  instead of allocating memory.
>>>>>>>> 	Reduce the delay to 100us during endpoint initialization
>>>>>>>> 	  intel_pcie_ep_rst_init().
>>>>>>>> 	Call  dw_pcie_host_deinit() during remove() instead of directly
>>>>>>>> 	  calling PCIe core APIs.
>>>>>>>> 	Rename "intel,rst-interval" to "reset-assert-ms".
>>>>>>>> 	Remove unused APP logic Interrupt bit macro definitions.
>>>>>>>>      	Use dwc framework's dw_pcie_setup_rc() for PCIe host specific
>>>>>>>> 	 configuration instead of redefining the same functionality in
>>>>>>>> 	 the driver.
>>>>>>>> 	Move the whole DT parsing specific code to intel_pcie_get_resources()
>>>>>>>>
>>>>>>>>      drivers/pci/controller/dwc/Kconfig          |  13 +
>>>>>>>>      drivers/pci/controller/dwc/Makefile         |   1 +
>>>>>>>>      drivers/pci/controller/dwc/pcie-intel-axi.c | 840 ++++++++++++++++++++++++++++
>>>>>>>>      3 files changed, 854 insertions(+)
>>>>>>>>      create mode 100644 drivers/pci/controller/dwc/pcie-intel-axi.c
>>>>>>>>
>>>>>>>> diff --git a/drivers/pci/controller/dwc/Kconfig b/drivers/pci/controller/dwc/Kconfig
>>>>>>>> index 6ea778ae4877..e44b9b6a6390 100644
>>>>>>>> --- a/drivers/pci/controller/dwc/Kconfig
>>>>>>>> +++ b/drivers/pci/controller/dwc/Kconfig
>>>>>>>> @@ -82,6 +82,19 @@ config PCIE_DW_PLAT_EP
>>>>>>>>      	  order to enable device-specific features PCI_DW_PLAT_EP must be
>>>>>>>>      	  selected.
>>>>>>>> +config PCIE_INTEL_AXI
>>>>>>>> +        bool "Intel AHB/AXI PCIe host controller support"
>>>>>>>> +        depends on PCI_MSI
>>>>>>>> +        depends on PCI
>>>>>>> I don't think the PCI dependency is required here.
>>>>>>>
>>>>>>> I'm also not sure why PCI_MSI is required, we select PCIE_DW_HOST which
>>>>>>> depends on PCI_MSI_IRQ_DOMAIN which depends on PCI_MSI.
>>>>>> Agree, the dependencies on PCI and PCI_MSI can be removed. I will remove
>>>>>> them in the next patch revision.
>>>>>>>> +        depends on OF
>>>>>>>> +        select PCIE_DW_HOST
>>>>>>>> +        help
>>>>>>>> +          Say 'Y' here to enable support for Intel AHB/AXI PCIe Host
>>>>>>>> +	  controller driver.
>>>>>>>> +	  The Intel PCIe controller is based on the Synopsys Designware
>>>>>>>> +	  pcie core and therefore uses the Designware core functions to
>>>>>>>> +	  implement the driver.
>>>>>>> I can see this description is similar to others in the same Kconfig,
>>>>>>> however I'm not sure what value a user gains by knowing implementation
>>>>>>> details - it's helpful to know that PCIE_INTEL_AXI is based on the
>>>>>>> Designware core, but is it helpful to know that the Designware core
>>>>>>> functions are used?
>>>>>>>
>>>>>>>> +
>>>>>>>>      config PCI_EXYNOS
>>>>>>>>      	bool "Samsung Exynos PCIe controller"
>>>>>>>>      	depends on SOC_EXYNOS5440 || COMPILE_TEST
>>>>>>>> diff --git a/drivers/pci/controller/dwc/Makefile b/drivers/pci/controller/dwc/Makefile
>>>>>>>> index b085dfd4fab7..46e656ebdf90 100644
>>>>>>>> --- a/drivers/pci/controller/dwc/Makefile
>>>>>>>> +++ b/drivers/pci/controller/dwc/Makefile
>>>>>>>> @@ -3,6 +3,7 @@ obj-$(CONFIG_PCIE_DW) += pcie-designware.o
>>>>>>>>      obj-$(CONFIG_PCIE_DW_HOST) += pcie-designware-host.o
>>>>>>>>      obj-$(CONFIG_PCIE_DW_EP) += pcie-designware-ep.o
>>>>>>>>      obj-$(CONFIG_PCIE_DW_PLAT) += pcie-designware-plat.o
>>>>>>>> +obj-$(CONFIG_PCIE_INTEL_AXI) += pcie-intel-axi.o
>>>>>>>>      obj-$(CONFIG_PCI_DRA7XX) += pci-dra7xx.o
>>>>>>>>      obj-$(CONFIG_PCI_EXYNOS) += pci-exynos.o
>>>>>>>>      obj-$(CONFIG_PCI_IMX6) += pci-imx6.o
>>>>>>>> diff --git a/drivers/pci/controller/dwc/pcie-intel-axi.c b/drivers/pci/controller/dwc/pcie-intel-axi.c
>>>>>>>> new file mode 100644
>>>>>>>> index 000000000000..75607ce03ebf
>>>>>>>> --- /dev/null
>>>>>>>> +++ b/drivers/pci/controller/dwc/pcie-intel-axi.c
>>>>>>>> @@ -0,0 +1,840 @@
>>>>>>>> +// SPDX-License-Identifier: GPL-2.0
>>>>>>>> +/*
>>>>>>>> + * PCIe host controller driver for Intel AXI PCIe Bridge
>>>>>>>> + *
>>>>>>>> + * Copyright (c) 2019 Intel Corporation.
>>>>>>>> + */
>>>>>>>> +
>>>>>>>> +#include <linux/bitfield.h>
>>>>>>>> +#include <linux/clk.h>
>>>>>>>> +#include <linux/gpio/consumer.h>
>>>>>>>> +#include <linux/interrupt.h>
>>>>>>>> +#include <linux/iopoll.h>
>>>>>>>> +#include <linux/of_irq.h>
>>>>>>>> +#include <linux/of_pci.h>
>>>>>>>> +#include <linux/of_platform.h>
>>>>>>>> +#include <linux/phy/phy.h>
>>>>>>>> +#include <linux/platform_device.h>
>>>>>>>> +#include <linux/reset.h>
>>>>>>>> +
>>>>>>>> +#include "../../pci.h"
>>>>>>> Please remove this - it isn't needed.
>>>>>> Ok, I will remove it in the next patch revision.
>>>>>>>> +#include "pcie-designware.h"
>>>>>>>> +
>>>>>>>> +#define PCIE_CCRID				0x8
>>>>>>>> +
>>>>>>>> +#define PCIE_LCAP				0x7C
>>>>>>>> +#define PCIE_LCAP_MAX_LINK_SPEED		GENMASK(3, 0)
>>>>>>>> +#define PCIE_LCAP_MAX_LENGTH_WIDTH		GENMASK(9, 4)
>>>>>>> These look like the standard PCI Link Capabilities Register,
>>>>>>> can you use the standard ones defined in pci_regs.h? (see
>>>>>>> PCI_EXP_LNKCAP_SLS_*).
>>>>>>>
>>>>>>>> +
>>>>>>>> +/* Link Control and Status Register */
>>>>>>>> +#define PCIE_LCTLSTS				0x80
>>>>>>>> +#define PCIE_LCTLSTS_ASPM_ENABLE		GENMASK(1, 0)
>>>>>>>> +#define PCIE_LCTLSTS_RCB128			BIT(3)
>>>>>>>> +#define PCIE_LCTLSTS_LINK_DISABLE		BIT(4)
>>>>>>>> +#define PCIE_LCTLSTS_COM_CLK_CFG		BIT(6)
>>>>>>>> +#define PCIE_LCTLSTS_HW_AW_DIS			BIT(9)
>>>>>>>> +#define PCIE_LCTLSTS_LINK_SPEED			GENMASK(19, 16)
>>>>>>>> +#define PCIE_LCTLSTS_NEGOTIATED_LINK_WIDTH	GENMASK(25, 20)
>>>>>>>> +#define PCIE_LCTLSTS_SLOT_CLK_CFG		BIT(28)
>>>>>>> These look like the standard Link Control and Link Status register, can
>>>>>>> you use the standard ones defined in pci_regs.h? (see PCI_EXP_LNKCTL_*
>>>>>>> and PCI_EXP_LNKSTA_*).
>>>>>>>
>>>>>>>> +
>>>>>>>> +#define PCIE_LCTLSTS2				0xA0
>>>>>>>> +#define PCIE_LCTLSTS2_TGT_LINK_SPEED		GENMASK(3, 0)
>>>>>>>> +#define PCIE_LCTLSTS2_TGT_LINK_SPEED_25GT	0x1
>>>>>>>> +#define PCIE_LCTLSTS2_TGT_LINK_SPEED_5GT	0x2
>>>>>>>> +#define PCIE_LCTLSTS2_TGT_LINK_SPEED_8GT	0x3
>>>>>>>> +#define PCIE_LCTLSTS2_TGT_LINK_SPEED_16GT	0x4
>>>>>>> And these look like the standard Link Control Register 2 (see
>>>>>>> PCI_EXP_LNKCTL2_*).
>>>>>>>
>>>>>>> Most of the #defines above and below look just like standard PCI defines
>>>>>>> (for example those found in pci_regs.h) - both in terms of their name and
>>>>>>> what they do. Ideally where the functionality is the same or very similar,
>>>>>>> then we should use the standard PCI defines in (pci_regs.h). This helps
>>>>>>> readability/understanding, helps to identify duplicate code and reduces
>>>>>>> the size of your patch.
>>>>>>>
>>>>>>> The capability offsets (e.g. PCIE_LCTLSTS2) are also standard. The
>>>>>>> offsets you define are offset by an additional 0x70. E.g.
>>>>>>> PCIE_LCTLSTS2 = 0xA0, and PCI_EXP_LNKCTL2 = 0x30. Perhaps abstracting
>>>>>>> this offset will allow you to use the standard pci defines?
>>>>>>>
>>>>>>> I haven't looked in much detail at the remainder of the defines, but
>>>>>>> the same rationale should apply.
>>>>>> Agree, that's a good point. I will define the offset macro and use the
>>>>>> macros defined in pci_regs.h.
>>>>>>>> +#define PCIE_LCTLSTS2_HW_AUTO_DIS		BIT(5)
>>>>>>>> +
>>>>>>>> +/* Ack Frequency Register */
>>>>>>>> +#define PCIE_AFR				0x70C
>>>>>>>> +#define PCIE_AFR_FTS_NUM			GENMASK(15, 8)
>>>>>>>> +#define PCIE_AFR_COM_FTS_NUM			GENMASK(23, 16)
>>>>>>>> +#define PCIE_AFR_GEN12_FTS_NUM_DFT		(SZ_128 - 1)
>>>>>>>> +#define PCIE_AFR_GEN3_FTS_NUM_DFT		180
>>>>>>>> +#define PCIE_AFR_GEN4_FTS_NUM_DFT		196
>>>>>>>> +
>>>>>>>> +#define PCIE_PLCR_DLL_LINK_EN			BIT(5)
>>>>>>>> +#define PCIE_PORT_LOGIC_FTS			GENMASK(7, 0)
>>>>>>>> +#define PCIE_PORT_LOGIC_DFT_FTS_NUM		(SZ_128 - 1)
>>>>>>>> +
>>>>>>>> +#define PCIE_MISC_CTRL				0x8BC
>>>>>>>> +#define PCIE_MISC_CTRL_DBI_RO_WR_EN		BIT(0)
>>>>>>>> +
>>>>>>>> +#define PCIE_MULTI_LANE_CTRL			0x8C0
>>>>>>>> +#define PCIE_UPCONFIG_SUPPORT			BIT(7)
>>>>>>>> +#define PCIE_DIRECT_LINK_WIDTH_CHANGE		BIT(6)
>>>>>>>> +#define PCIE_TARGET_LINK_WIDTH			GENMASK(5, 0)
>>>>>>>> +
>>>>>>>> +/* APP RC Core Control Register */
>>>>>>>> +#define PCIE_APP_CCR				0x10
>>>>>>>> +#define PCIE_APP_CCR_LTSSM_ENABLE		BIT(0)
>>>>>>>> +
>>>>>>>> +/* PCIe Message Control */
>>>>>>>> +#define PCIE_APP_MSG_CR				0x30
>>>>>>>> +#define PCIE_APP_MSG_XMT_PM_TURNOFF		BIT(0)
>>>>>>>> +
>>>>>>>> +/* PCIe Power Management Control */
>>>>>>>> +#define PCIE_APP_PMC				0x44
>>>>>>>> +#define PCIE_APP_PMC_IN_L2			BIT(20)
>>>>>>>> +
>>>>>>>> +/* Interrupt Enable Register */
>>>>>>>> +#define PCIE_APP_IRNEN				0xF4
>>>>>>>> +#define PCIE_APP_IRNCR				0xF8
>>>>>>>> +#define PCIE_APP_IRN_AER_REPORT			BIT(0)
>>>>>>>> +#define PCIE_APP_IRN_PME			BIT(2)
>>>>>>>> +#define PCIE_APP_IRN_RX_VDM_MSG			BIT(4)
>>>>>>>> +#define PCIE_APP_IRN_PM_TO_ACK			BIT(9)
>>>>>>>> +#define PCIE_APP_IRN_LINK_AUTO_BW_STAT		BIT(11)
>>>>>>>> +#define PCIE_APP_IRN_BW_MGT			BIT(12)
>>>>>>>> +#define PCIE_APP_IRN_MSG_LTR			BIT(18)
>>>>>>>> +#define PCIE_APP_IRN_SYS_ERR_RC			BIT(29)
>>>>>>>> +
>>>>>>>> +#define PCIE_APP_INTX_OFST	12
>>>>>>>> +#define PCIE_APP_IRN_INT	(PCIE_APP_IRN_AER_REPORT | PCIE_APP_IRN_PME | \
>>>>>>>> +			PCIE_APP_IRN_RX_VDM_MSG | PCIE_APP_IRN_SYS_ERR_RC | \
>>>>>>>> +			PCIE_APP_IRN_PM_TO_ACK | PCIE_APP_IRN_MSG_LTR | \
>>>>>>>> +			PCIE_APP_IRN_BW_MGT | PCIE_APP_IRN_LINK_AUTO_BW_STAT | \
>>>>>>>> +			(PCIE_APP_INTX_OFST + PCI_INTERRUPT_INTA) | \
>>>>>>>> +			(PCIE_APP_INTX_OFST + PCI_INTERRUPT_INTB) | \
>>>>>>>> +			(PCIE_APP_INTX_OFST + PCI_INTERRUPT_INTC) | \
>>>>>>>> +			(PCIE_APP_INTX_OFST + PCI_INTERRUPT_INTD))
>>>>>>>> +
>>>>>>>> +#define BUS_IATU_OFFS		SZ_256M
>>>>>>>> +#define RST_INTRVL_DFT_MS	100
>>>>>>>> +enum {
>>>>>>>> +	PCIE_LINK_SPEED_AUTO = 0,
>>>>>>>> +	PCIE_LINK_SPEED_GEN1,
>>>>>>>> +	PCIE_LINK_SPEED_GEN2,
>>>>>>>> +	PCIE_LINK_SPEED_GEN3,
>>>>>>>> +	PCIE_LINK_SPEED_GEN4,
>>>>>>>> +};
>>>>>>>> +
>>>>>>>> +struct intel_pcie_soc {
>>>>>>>> +	unsigned int pcie_ver;
>>>>>>>> +	unsigned int pcie_atu_offset;
>>>>>>>> +	u32 num_viewport;
>>>>>>>> +};
>>>>>>>> +
>>>>>>>> +struct intel_pcie_port {
>>>>>>>> +	struct dw_pcie		pci;
>>>>>>>> +	unsigned int		id; /* Physical RC Index */
>>>>>>>> +	void __iomem		*app_base;
>>>>>>>> +	struct gpio_desc	*reset_gpio;
>>>>>>>> +	u32			rst_interval;
>>>>>>>> +	u32			max_speed;
>>>>>>>> +	u32			link_gen;
>>>>>>>> +	u32			max_width;
>>>>>>>> +	u32			lanes;
>>>>>>>> +	struct clk		*core_clk;
>>>>>>>> +	struct reset_control	*core_rst;
>>>>>>>> +	struct phy		*phy;
>>>>>>>> +};
>>>>>>>> +
>>>>>>>> +static void pcie_update_bits(void __iomem *base, u32 mask, u32 val, u32 ofs)
>>>>>>>> +{
>>>>>>>> +	u32 orig, tmp;
>>>>>>>> +
>>>>>>>> +	orig = readl(base + ofs);
>>>>>>>> +
>>>>>>>> +	tmp = (orig & ~mask) | (val & mask);
>>>>>>>> +
>>>>>>>> +	if (tmp != orig)
>>>>>>>> +		writel(tmp, base + ofs);
>>>>>>>> +}
>>>>>>>> +
>>>>>>>> +static inline u32 pcie_app_rd(struct intel_pcie_port *lpp, u32 ofs)
>>>>>>>> +{
>>>>>>>> +	return readl(lpp->app_base + ofs);
>>>>>>>> +}
>>>>>>>> +
>>>>>>>> +static inline void pcie_app_wr(struct intel_pcie_port *lpp, u32 val, u32 ofs)
>>>>>>>> +{
>>>>>>>> +	writel(val, lpp->app_base + ofs);
>>>>>>>> +}
>>>>>>>> +
>>>>>>>> +static void pcie_app_wr_mask(struct intel_pcie_port *lpp,
>>>>>>>> +			     u32 mask, u32 val, u32 ofs)
>>>>>>>> +{
>>>>>>>> +	pcie_update_bits(lpp->app_base, mask, val, ofs);
>>>>>>>> +}
>>>>>>>> +
>>>>>>>> +static inline u32 pcie_rc_cfg_rd(struct intel_pcie_port *lpp, u32 ofs)
>>>>>>>> +{
>>>>>>>> +	return dw_pcie_readl_dbi(&lpp->pci, ofs);
>>>>>>>> +}
>>>>>>>> +
>>>>>>>> +static inline void pcie_rc_cfg_wr(struct intel_pcie_port *lpp, u32 val, u32 ofs)
>>>>>>>> +{
>>>>>>>> +	dw_pcie_writel_dbi(&lpp->pci, ofs, val);
>>>>>>>> +}
>>>>>>>> +
>>>>>>>> +static void pcie_rc_cfg_wr_mask(struct intel_pcie_port *lpp,
>>>>>>>> +				u32 mask, u32 val, u32 ofs)
>>>>>>>> +{
>>>>>>>> +	pcie_update_bits(lpp->pci.dbi_base, mask, val, ofs);
>>>>>>>> +}
>>>>>>>> +
>>>>>>>> +static void intel_pcie_mem_iatu(struct intel_pcie_port *lpp)
>>>>>>>> +{
>>>>>>>> +	struct pcie_port *pp = &lpp->pci.pp;
>>>>>>>> +	phys_addr_t cpu_addr = pp->mem_base;
>>>>>>>> +
>>>>>>>> +	dw_pcie_prog_outbound_atu(&lpp->pci, PCIE_ATU_REGION_INDEX0,
>>>>>>>> +				  PCIE_ATU_TYPE_MEM, cpu_addr,
>>>>>>>> +				  pp->mem_base, pp->mem_size);
>>>>>>>> +}
>>>>>>>> +
>>>>>>>> +static void intel_pcie_ltssm_enable(struct intel_pcie_port *lpp)
>>>>>>>> +{
>>>>>>>> +	pcie_app_wr_mask(lpp, PCIE_APP_CCR_LTSSM_ENABLE,
>>>>>>>> +			 PCIE_APP_CCR_LTSSM_ENABLE, PCIE_APP_CCR);
>>>>>>>> +}
>>>>>>>> +
>>>>>>>> +static void intel_pcie_ltssm_disable(struct intel_pcie_port *lpp)
>>>>>>>> +{
>>>>>>>> +	pcie_app_wr_mask(lpp, PCIE_APP_CCR_LTSSM_ENABLE, 0, PCIE_APP_CCR);
>>>>>>>> +}
>>>>>>>> +
>>>>>>>> +static const char *pcie_link_gen_to_str(int gen)
>>>>>>>> +{
>>>>>>>> +	switch (gen) {
>>>>>>>> +	case PCIE_LINK_SPEED_GEN1:
>>>>>>>> +		return "2.5";
>>>>>>>> +	case PCIE_LINK_SPEED_GEN2:
>>>>>>>> +		return "5.0";
>>>>>>>> +	case PCIE_LINK_SPEED_GEN3:
>>>>>>>> +		return "8.0";
>>>>>>>> +	case PCIE_LINK_SPEED_GEN4:
>>>>>>>> +		return "16.0";
>>>>>>>> +	default:
>>>>>>>> +		return "???";
>>>>>>>> +	}
>>>>>>>> +}
>>>>>>>> +
>>>>>>>> +static void intel_pcie_link_setup(struct intel_pcie_port *lpp)
>>>>>>>> +{
>>>>>>>> +	u32 val;
>>>>>>>> +
>>>>>>>> +	val = pcie_rc_cfg_rd(lpp, PCIE_LCAP);
>>>>>>>> +	lpp->max_speed = FIELD_GET(PCIE_LCAP_MAX_LINK_SPEED, val);
>>>>>>>> +	lpp->max_width = FIELD_GET(PCIE_LCAP_MAX_LENGTH_WIDTH, val);
>>>>>>>> +
>>>>>>>> +	val = pcie_rc_cfg_rd(lpp, PCIE_LCTLSTS);
>>>>>>>> +
>>>>>>>> +	val &= ~(PCIE_LCTLSTS_LINK_DISABLE | PCIE_LCTLSTS_ASPM_ENABLE);
>>>>>>>> +	val |= (PCIE_LCTLSTS_SLOT_CLK_CFG | PCIE_LCTLSTS_COM_CLK_CFG |
>>>>>>>> +		PCIE_LCTLSTS_RCB128);
>>>>>>>> +	pcie_rc_cfg_wr(lpp, val, PCIE_LCTLSTS);
>>>>>>>> +}
>>>>>>>> +
>>>>>>>> +static void intel_pcie_max_speed_setup(struct intel_pcie_port *lpp)
>>>>>>>> +{
>>>>>>>> +	u32 reg, val;
>>>>>>>> +
>>>>>>>> +	reg = pcie_rc_cfg_rd(lpp, PCIE_LCTLSTS2);
>>>>>>>> +	switch (lpp->link_gen) {
>>>>>>>> +	case PCIE_LINK_SPEED_GEN1:
>>>>>>>> +		reg &= ~PCIE_LCTLSTS2_TGT_LINK_SPEED;
>>>>>>>> +		reg |= PCIE_LCTLSTS2_HW_AUTO_DIS |
>>>>>>>> +			PCIE_LCTLSTS2_TGT_LINK_SPEED_25GT;
>>>>>>>> +		break;
>>>>>>>> +	case PCIE_LINK_SPEED_GEN2:
>>>>>>>> +		reg &= ~PCIE_LCTLSTS2_TGT_LINK_SPEED;
>>>>>>>> +		reg |= PCIE_LCTLSTS2_HW_AUTO_DIS |
>>>>>>>> +			PCIE_LCTLSTS2_TGT_LINK_SPEED_5GT;
>>>>>>>> +		break;
>>>>>>>> +	case PCIE_LINK_SPEED_GEN3:
>>>>>>>> +		reg &= ~PCIE_LCTLSTS2_TGT_LINK_SPEED;
>>>>>>>> +		reg |= PCIE_LCTLSTS2_HW_AUTO_DIS |
>>>>>>>> +			PCIE_LCTLSTS2_TGT_LINK_SPEED_8GT;
>>>>>>>> +		break;
>>>>>>>> +	case PCIE_LINK_SPEED_GEN4:
>>>>>>>> +		reg &= ~PCIE_LCTLSTS2_TGT_LINK_SPEED;
>>>>>>>> +		reg |= PCIE_LCTLSTS2_HW_AUTO_DIS |
>>>>>>>> +			PCIE_LCTLSTS2_TGT_LINK_SPEED_16GT;
>>>>>>>> +		break;
>>>>>>>> +	default:
>>>>>>>> +		/* Use hardware capability */
>>>>>>>> +		val = pcie_rc_cfg_rd(lpp, PCIE_LCAP);
>>>>>>>> +		val = FIELD_GET(PCIE_LCAP_MAX_LINK_SPEED, val);
>>>>>>>> +		reg &= ~PCIE_LCTLSTS2_HW_AUTO_DIS;
>>>>>>>> +		reg |= val;
>>>>>>>> +		break;
>>>>>>>> +	}
>>>>>>>> +	pcie_rc_cfg_wr(lpp, reg, PCIE_LCTLSTS2);
>>>>>>>> +}
>>>>>>>> +
>>>>>>>> +static void intel_pcie_speed_change_enable(struct intel_pcie_port *lpp)
>>>>>>>> +{
>>>>>>>> +	u32 mask, val;
>>>>>>>> +
>>>>>>>> +	mask = PORT_LOGIC_SPEED_CHANGE | PCIE_PORT_LOGIC_FTS;
>>>>>>>> +	val = PORT_LOGIC_SPEED_CHANGE | PCIE_PORT_LOGIC_DFT_FTS_NUM;
>>>>>>>> +
>>>>>>>> +	pcie_rc_cfg_wr_mask(lpp, mask, val, PCIE_LINK_WIDTH_SPEED_CONTROL);
>>>>>>>> +}
>>>>>>>> +
>>>>>>>> +static void intel_pcie_speed_change_disable(struct intel_pcie_port *lpp)
>>>>>>>> +{
>>>>>>>> +	pcie_rc_cfg_wr_mask(lpp, PORT_LOGIC_SPEED_CHANGE, 0,
>>>>>>>> +			    PCIE_LINK_WIDTH_SPEED_CONTROL);
>>>>>>>> +}
>>>>>>>> +
>>>>>>>> +static void intel_pcie_max_link_width_setup(struct intel_pcie_port *lpp)
>>>>>>>> +{
>>>>>>>> +	u32 mask, val;
>>>>>>>> +
>>>>>>>> +	/* HW auto bandwidth negotiation must be enabled */
>>>>>>>> +	pcie_rc_cfg_wr_mask(lpp, PCIE_LCTLSTS_HW_AW_DIS, 0, PCIE_LCTLSTS);
>>>>>>>> +
>>>>>>>> +	mask = PCIE_DIRECT_LINK_WIDTH_CHANGE | PCIE_TARGET_LINK_WIDTH;
>>>>>>>> +	val = PCIE_DIRECT_LINK_WIDTH_CHANGE | lpp->lanes;
>>>>>>>> +	pcie_rc_cfg_wr_mask(lpp, mask, val, PCIE_MULTI_LANE_CTRL);
>>>>>>> Is this identical functionality to the writing of PCIE_PORT_LINK_CONTROL
>>>>>>> in dw_pcie_setup?
>>>>>>>
>>>>>>> I ask because if the user sets num-lanes in the DT, will it have the
>>>>>>> desired effect?
>>>>>> intel_pcie_max_link_width_setup() is called from the sysfs attribute
>>>>>> handler pcie_width_store() to change the link width on the fly.
>>>>> Indeed, but a user may also set num-lanes in the device tree. I'm wondering
>>>>> if, when set in device-tree, it will have the desired effect. Because I don't
>>>>> see anything similar to PCIE_LCTLSTS_HW_AW_DIS in dw_pcie_setup which is what
>>>>> your function does here.
>>>>>
>>>>> I guess I'm trying to avoid the situation where num-lanes doesn't have the
>>>>> desired effect and the only way to set the num-lanes is through the sysfs
>>>>> control.
>>>> I will check this and get back to you.
>> intel_pcie_max_link_width_setup() does the lane resizing, which is
>> different from the link up/establishment happening during probe. Also,
>> the PCIE_LCTLSTS_HW_AW_DIS default value is 0, so it is not set during
>> probe or dw_pcie_setup().
>>
>> intel_pcie_max_link_width_setup() programs the registers per the DesignWare
>> PCIe controller databook instructions for lane resizing.
>>
>> Below lines are from Designware PCIe databook for lane resizing.
>>
>>   Program the TARGET_LINK_WIDTH[5:0] field of the MULTI_LANE_CONTROL_OFF
>> register.
>>   Program the DIRECT_LINK_WIDTH_CHANGE2 field of the MULTI_LANE_CONTROL_OFF
>> register.
>> It is assumed that the PCIE_CAP_HW_AUTO_WIDTH_DISABLE field in the
>> LINK_CONTROL_LINK_STATUS_REG register is 0.
> OK, so there is a difference between initial training and then resizing
> of the link. Given that this is not Intel specific, should this function
> exist within the designware driver for the benefit of others?
I am OK to add it if the maintainer agrees.
>
>>
>>>>>>>> +}
>>>>>>>> +
>>>>>>>> +static void intel_pcie_port_logic_setup(struct intel_pcie_port *lpp)
>>>>>>>> +{
>>>>>>>> +	u32 val, mask, fts;
>>>>>>>> +
>>>>>>>> +	switch (lpp->max_speed) {
>>>>>>>> +	case PCIE_LINK_SPEED_GEN1:
>>>>>>>> +	case PCIE_LINK_SPEED_GEN2:
>>>>>>>> +		fts = PCIE_AFR_GEN12_FTS_NUM_DFT;
>>>>>>>> +		break;
>>>>>>>> +	case PCIE_LINK_SPEED_GEN3:
>>>>>>>> +		fts = PCIE_AFR_GEN3_FTS_NUM_DFT;
>>>>>>>> +		break;
>>>>>>>> +	case PCIE_LINK_SPEED_GEN4:
>>>>>>>> +		fts = PCIE_AFR_GEN4_FTS_NUM_DFT;
>>>>>>>> +		break;
>>>>>>>> +	default:
>>>>>>>> +		fts = PCIE_AFR_GEN12_FTS_NUM_DFT;
>>>>>>>> +		break;
>>>>>>>> +	}
>>>>>>>> +	mask = PCIE_AFR_FTS_NUM | PCIE_AFR_COM_FTS_NUM;
>>>>>>>> +	val = FIELD_PREP(PCIE_AFR_FTS_NUM, fts) |
>>>>>>>> +	       FIELD_PREP(PCIE_AFR_COM_FTS_NUM, fts);
>>>>>>>> +	pcie_rc_cfg_wr_mask(lpp, mask, val, PCIE_AFR);
>>>>>>>> +
>>>>>>>> +	/* Port Link Control Register */
>>>>>>>> +	pcie_rc_cfg_wr_mask(lpp, PCIE_PLCR_DLL_LINK_EN,
>>>>>>>> +			    PCIE_PLCR_DLL_LINK_EN, PCIE_PORT_LINK_CONTROL);
>>>>>>>> +}
>>>>>>>> +
>>>>>>>> +static void intel_pcie_upconfig_setup(struct intel_pcie_port *lpp)
>>>>>>>> +{
>>>>>>>> +	pcie_rc_cfg_wr_mask(lpp, PCIE_UPCONFIG_SUPPORT,
>>>>>>>> +			    PCIE_UPCONFIG_SUPPORT, PCIE_MULTI_LANE_CTRL);
>>>>>>>> +}
>>>>>>>> +
>>>>>>>> +static void intel_pcie_rc_setup(struct intel_pcie_port *lpp)
>>>>>>>> +{
>>>>>>>> +	intel_pcie_ltssm_disable(lpp);
>>>>>>>> +	intel_pcie_link_setup(lpp);
>>>>>>>> +	dw_pcie_setup_rc(&lpp->pci.pp);
>>>>>>>> +	intel_pcie_upconfig_setup(lpp);
>>>>>>>> +
>>>>>>>> +	intel_pcie_max_speed_setup(lpp);
>>>>>>>> +	intel_pcie_speed_change_enable(lpp);
>>>>>>>> +	intel_pcie_port_logic_setup(lpp);
>>>>>>>> +	intel_pcie_mem_iatu(lpp);
>>>>>>> Doesn't dw_pcie_setup_rc do the same as intel_pcie_mem_iatu?
>>>>>> Thanks for pointing it out. dw_pcie_setup_rc() does, so
>>>>>> intel_pcie_mem_iatu() can be removed.
>>>>> Excellent.
>>>>>
>>>>>>>> +}
>>>>>>>> +
>>>>>>>> +static int intel_pcie_ep_rst_init(struct intel_pcie_port *lpp)
>>>>>>>> +{
>>>>>>>> +	struct device *dev = lpp->pci.dev;
>>>>>>>> +	int ret;
>>>>>>>> +
>>>>>>>> +	lpp->reset_gpio = devm_gpiod_get(dev, "reset", GPIOD_OUT_LOW);
>>>>>>>> +	if (IS_ERR(lpp->reset_gpio)) {
>>>>>>>> +		ret = PTR_ERR(lpp->reset_gpio);
>>>>>>>> +		if (ret != -EPROBE_DEFER)
>>>>>>>> +			dev_err(dev, "failed to request PCIe GPIO: %d\n", ret);
>>>>>>>> +		return ret;
>>>>>>>> +	}
>>>>>>>> +	/* Make initial reset last for 100us */
>>>>>>>> +	usleep_range(100, 200);
>>>>>>>> +
>>>>>>>> +	return 0;
>>>>>>>> +}
>>>>>>>> +
>>>>>>>> +static void intel_pcie_core_rst_assert(struct intel_pcie_port *lpp)
>>>>>>>> +{
>>>>>>>> +	reset_control_assert(lpp->core_rst);
>>>>>>>> +}
>>>>>>>> +
>>>>>>>> +static void intel_pcie_core_rst_deassert(struct intel_pcie_port *lpp)
>>>>>>>> +{
>>>>>>>> +	/*
>>>>>>>> +	 * One micro-second delay to make sure the reset pulse
>>>>>>>> +	 * wide enough so that core reset is clean.
>>>>>>>> +	 */
>>>>>>>> +	udelay(1);
>>>>>>>> +	reset_control_deassert(lpp->core_rst);
>>>>>>>> +
>>>>>>>> +	/*
>>>>>>>> +	 * Some SoC core reset also reset PHY, more delay needed
>>>>>>>> +	 * to make sure the reset process is done.
>>>>>>>> +	 */
>>>>>>>> +	usleep_range(1000, 2000);
>>>>>>>> +}
>>>>>>>> +
>>>>>>>> +static void intel_pcie_device_rst_assert(struct intel_pcie_port *lpp)
>>>>>>>> +{
>>>>>>>> +	gpiod_set_value_cansleep(lpp->reset_gpio, 1);
>>>>>>>> +}
>>>>>>>> +
>>>>>>>> +static void intel_pcie_device_rst_deassert(struct intel_pcie_port *lpp)
>>>>>>>> +{
>>>>>>>> +	msleep(lpp->rst_interval);
>>>>>>>> +	gpiod_set_value_cansleep(lpp->reset_gpio, 0);
>>>>>>>> +}
>>>>>>>> +
>>>>>>>> +static int intel_pcie_app_logic_setup(struct intel_pcie_port *lpp)
>>>>>>>> +{
>>>>>>>> +	intel_pcie_device_rst_deassert(lpp);
>>>>>>>> +	intel_pcie_ltssm_enable(lpp);
>>>>>>>> +
>>>>>>>> +	return dw_pcie_wait_for_link(&lpp->pci);
>>>>>>>> +}
>>>>>>>> +
>>>>>>>> +static irqreturn_t intel_pcie_core_isr(int irq, void *arg)
>>>>>>>> +{
>>>>>>>> +	struct intel_pcie_port *lpp = arg;
>>>>>>>> +	u32 val, reg;
>>>>>>>> +
>>>>>>>> +	reg = pcie_app_rd(lpp, PCIE_APP_IRNCR);
>>>>>>>> +	val = reg & PCIE_APP_IRN_INT;
>>>>>>>> +
>>>>>>>> +	pcie_app_wr(lpp, val, PCIE_APP_IRNCR);
>>>>>>>> +
>>>>>>>> +	trace_printk("PCIe misc interrupt status 0x%x\n", reg);
>>>>>>>> +	return IRQ_HANDLED;
>>>>>>>> +}
>>>>>>> Why do we bother handling this interrupt?
>>>>>> This helps during debugging.
>>>>> I think it should be removed. It adds very little value to most users.
>>>>>
>>>>> Most users won't have access to the datasheets to debug this properly, and
>>>>> in any case if they could, then they would be competent to add an interrupt
>>>>> handler themselves.
>>>> IMO, having this helps to get the basic hardware interrupt status
>>>> during debugging.
>>>> A user can also extend the handler as needed.
>>>> I think keeping it is more beneficial than removing it.
>>>> Please let me know your view.
>>> I'd much prefer to remove this.
>> Sure, I am OK to remove it.
>>>>>>>> +
>>>>>>>> +static int intel_pcie_setup_irq(struct intel_pcie_port *lpp)
>>>>>>>> +{
>>>>>>>> +	struct device *dev = lpp->pci.dev;
>>>>>>>> +	struct platform_device *pdev;
>>>>>>>> +	char *irq_name;
>>>>>>>> +	int irq, ret;
>>>>>>>> +
>>>>>>>> +	pdev = to_platform_device(dev);
>>>>>>>> +	irq = platform_get_irq(pdev, 0);
>>>>>>>> +	if (irq < 0) {
>>>>>>>> +		dev_err(dev, "missing sys integrated irq resource\n");
>>>>>>>> +		return irq;
>>>>>>>> +	}
>>>>>>>> +
>>>>>>>> +	irq_name = devm_kasprintf(dev, GFP_KERNEL, "pcie_misc%d", lpp->id);
>>>>>>>> +	if (!irq_name) {
>>>>>>>> +		dev_err(dev, "failed to alloc irq name\n");
>>>>>>>> +		return -ENOMEM;
>>>>>>>> +	}
>>>>>>>> +
>>>>>>>> +	ret = devm_request_irq(dev, irq, intel_pcie_core_isr,
>>>>>>>> +			       IRQF_SHARED, irq_name, lpp);
>>>>>>>> +	if (ret) {
>>>>>>>> +		dev_err(dev, "request irq %d failed\n", irq);
>>>>>>>> +		return ret;
>>>>>>>> +	}
>>>>>>>> +	/* Enable integrated interrupts */
>>>>>>>> +	pcie_app_wr_mask(lpp, PCIE_APP_IRN_INT,
>>>>>>>> +			 PCIE_APP_IRN_INT, PCIE_APP_IRNEN);
>>>>>>>> +
>>>>>>>> +	return ret;
>>>>>>>> +}
>>>>>>>> +
>>>>>>>> +static void intel_pcie_core_irq_disable(struct intel_pcie_port *lpp)
>>>>>>>> +{
>>>>>>>> +	pcie_app_wr(lpp, 0, PCIE_APP_IRNEN);
>>>>>>>> +	pcie_app_wr(lpp, PCIE_APP_IRN_INT,  PCIE_APP_IRNCR);
>>>>>>>> +}
>>>>>>>> +
>>>>>>>> +static int intel_pcie_get_resources(struct platform_device *pdev)
>>>>>>>> +{
>>>>>>>> +	struct intel_pcie_port *lpp;
>>>>>>>> +	struct resource *res;
>>>>>>>> +	struct dw_pcie *pci;
>>>>>>>> +	struct device *dev;
>>>>>>>> +	int ret;
>>>>>>>> +
>>>>>>>> +	lpp = platform_get_drvdata(pdev);
>>>>>>>> +	pci = &lpp->pci;
>>>>>>>> +	dev = pci->dev;
>>>>>>>> +
>>>>>>>> +	ret = device_property_read_u32(dev, "linux,pci-domain", &lpp->id);
>>>>>>>> +	if (ret) {
>>>>>>>> +		dev_err(dev, "failed to get domain id, errno %d\n", ret);
>>>>>>>> +		return ret;
>>>>>>>> +	}
>>>>>>>> +
>>>>>>>> +	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi");
>>>>>>>> +	if (!res)
>>>>>>>> +		return -ENXIO;
>>>>>>>> +
>>>>>>>> +	pci->dbi_base = devm_ioremap_resource(dev, res);
>>>>>>>> +	if (IS_ERR(pci->dbi_base))
>>>>>>>> +		return PTR_ERR(pci->dbi_base);
>>>>>>>> +
>>>>>>>> +	lpp->core_clk = devm_clk_get(dev, NULL);
>>>>>>>> +	if (IS_ERR(lpp->core_clk)) {
>>>>>>>> +		ret = PTR_ERR(lpp->core_clk);
>>>>>>>> +		if (ret != -EPROBE_DEFER)
>>>>>>>> +			dev_err(dev, "failed to get clks: %d\n", ret);
>>>>>>>> +		return ret;
>>>>>>>> +	}
>>>>>>>> +
>>>>>>>> +	lpp->core_rst = devm_reset_control_get(dev, NULL);
>>>>>>>> +	if (IS_ERR(lpp->core_rst)) {
>>>>>>>> +		ret = PTR_ERR(lpp->core_rst);
>>>>>>>> +		if (ret != -EPROBE_DEFER)
>>>>>>>> +			dev_err(dev, "failed to get resets: %d\n", ret);
>>>>>>>> +		return ret;
>>>>>>>> +	}
>>>>>>>> +
>>>>>>>> +	ret = device_property_match_string(dev, "device_type", "pci");
>>>>>>>> +	if (ret) {
>>>>>>>> +		dev_err(dev, "failed to find pci device type: %d\n", ret);
>>>>>>>> +		return ret;
>>>>>>>> +	}
>>>>>>>> +
>>>>>>>> +	if (device_property_read_u32(dev, "reset-assert-ms",
>>>>>>>> +				     &lpp->rst_interval))
>>>>>>>> +		lpp->rst_interval = RST_INTRVL_DFT_MS;
>>>>>>>> +
>>>>>>>> +	if (device_property_read_u32(dev, "max-link-speed", &lpp->link_gen))
>>>>>>>> +		lpp->link_gen = 0; /* Fallback to auto */
>>>>>>> Is it possible to use of_pci_get_max_link_speed here instead?
>>>>>> Thanks for pointing it out. of_pci_get_max_link_speed() can be used
>>>>>> here. I will update it in the next patch revision.
>> I just remembered: earlier we were using of_pci_get_max_link_speed() itself.
>> As per reviewer comments it was changed to device_property_read_u32() to
>> maintain symmetry in parsing device tree properties from the device node.
>> Let me know your view.
> I couldn't find an earlier version of this series that used of_pci_get_max_link_speed,
> have I missed it somewhere?
It happened in our internal review.
What's your suggestion, please: go with symmetry in parsing via
"device_property_read_u32()", or with the PCI helper function
"of_pci_get_max_link_speed()"?
>
>>>>>>>> +
>>>>>>>> +	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "app");
>>>>>>>> +	if (!res)
>>>>>>>> +		return -ENXIO;
>>>>>>>> +
>>>>>>>> +	lpp->app_base = devm_ioremap_resource(dev, res);
>>>>>>>> +	if (IS_ERR(lpp->app_base))
>>>>>>>> +		return PTR_ERR(lpp->app_base);
>>>>>>>> +
>>>>>>>> +	ret = intel_pcie_ep_rst_init(lpp);
>>>>>>>> +	if (ret)
>>>>>>>> +		return ret;
>>>>>>> Given that this is called from a function '..._get_resources' I don't think
>>>>>>> we should be resetting anything here.
>>>>>> Agree. I will move it out of get_resources().
>>>>>>>> +
>>>>>>>> +	lpp->phy = devm_phy_get(dev, "pciephy");
>>>>>>>> +	if (IS_ERR(lpp->phy)) {
>>>>>>>> +		ret = PTR_ERR(lpp->phy);
>>>>>>>> +		if (ret != -EPROBE_DEFER)
>>>>>>>> +			dev_err(dev, "couldn't get pcie-phy: %d\n", ret);
>>>>>>>> +		return ret;
>>>>>>>> +	}
>>>>>>>> +	return 0;
>>>>>>>> +}
>>>>>>>> +
>>>>>>>> +static void intel_pcie_deinit_phy(struct intel_pcie_port *lpp)
>>>>>>>> +{
>>>>>>>> +	phy_exit(lpp->phy);
>>>>>>>> +}
>>>>>>>> +
>>>>>>>> +static int intel_pcie_wait_l2(struct intel_pcie_port *lpp)
>>>>>>>> +{
>>>>>>>> +	u32 value;
>>>>>>>> +	int ret;
>>>>>>>> +
>>>>>>>> +	if (lpp->max_speed < PCIE_LINK_SPEED_GEN3)
>>>>>>>> +		return 0;
>>>>>>>> +
>>>>>>>> +	/* Send PME_TURN_OFF message */
>>>>>>>> +	pcie_app_wr_mask(lpp, PCIE_APP_MSG_XMT_PM_TURNOFF,
>>>>>>>> +			 PCIE_APP_MSG_XMT_PM_TURNOFF, PCIE_APP_MSG_CR);
>>>>>>>> +
>>>>>>>> +	/* Read PMC status and wait for falling into L2 link state */
>>>>>>>> +	ret = readl_poll_timeout(lpp->app_base + PCIE_APP_PMC, value,
>>>>>>>> +				 (value & PCIE_APP_PMC_IN_L2), 20,
>>>>>>>> +				 jiffies_to_usecs(5 * HZ));
>>>>>>>> +	if (ret)
>>>>>>>> +		dev_err(lpp->pci.dev, "PCIe link enter L2 timeout!\n");
>>>>>>>> +
>>>>>>>> +	return ret;
>>>>>>>> +}
>>>>>>>> +
>>>>>>>> +static void intel_pcie_turn_off(struct intel_pcie_port *lpp)
>>>>>>>> +{
>>>>>>>> +	if (dw_pcie_link_up(&lpp->pci))
>>>>>>>> +		intel_pcie_wait_l2(lpp);
>>>>>>>> +
>>>>>>>> +	/* Put EP in reset state */
>>>>>>> EP?
>>>>>> The endpoint device. I will update it.
>>>>> Is this not a host bridge controller?
>>>> It is PERST#, which signals a hardware reset to the endpoint.
>>>>           /* Put EP in reset state */
>>>>           intel_pcie_device_rst_assert(lpp);
>>> OK.
>>>
>>>>>>>> +	intel_pcie_device_rst_assert(lpp);
>>>>>>>> +	pcie_rc_cfg_wr_mask(lpp, PCI_COMMAND_MEMORY, 0, PCI_COMMAND);
>>>>>>>> +}
>>>>>>>> +
>>>>>>>> +static int intel_pcie_host_setup(struct intel_pcie_port *lpp)
>>>>>>>> +{
>>>>>>>> +	int ret;
>>>>>>>> +
>>>>>>>> +	intel_pcie_core_rst_assert(lpp);
>>>>>>>> +	intel_pcie_device_rst_assert(lpp);
>>>>>>>> +
>>>>>>>> +	ret = phy_init(lpp->phy);
>>>>>>>> +	if (ret)
>>>>>>>> +		return ret;
>>>>>>>> +
>>>>>>>> +	intel_pcie_core_rst_deassert(lpp);
>>>>>>>> +
>>>>>>>> +	ret = clk_prepare_enable(lpp->core_clk);
>>>>>>>> +	if (ret) {
>>>>>>>> +		dev_err(lpp->pci.dev, "Core clock enable failed: %d\n", ret);
>>>>>>>> +		goto clk_err;
>>>>>>>> +	}
>>>>>>>> +
>>>>>>>> +	intel_pcie_rc_setup(lpp);
>>>>>>>> +	ret = intel_pcie_app_logic_setup(lpp);
>>>>>>>> +	if (ret)
>>>>>>>> +		goto app_init_err;
>>>>>>>> +
>>>>>>>> +	ret = intel_pcie_setup_irq(lpp);
>>>>>>>> +	if (!ret)
>>>>>>>> +		return ret;
>>>>>>>> +
>>>>>>>> +	intel_pcie_turn_off(lpp);
>>>>>>>> +app_init_err:
>>>>>>>> +	clk_disable_unprepare(lpp->core_clk);
>>>>>>>> +clk_err:
>>>>>>>> +	intel_pcie_core_rst_assert(lpp);
>>>>>>>> +	intel_pcie_deinit_phy(lpp);
>>>>>>>> +	return ret;
>>>>>>>> +}
>>>>>>>> +
>>>>>>>> +static ssize_t
>>>>>>>> +pcie_link_status_show(struct device *dev, struct device_attribute *attr,
>>>>>>>> +		      char *buf)
>>>>>>>> +{
>>>>>>>> +	u32 reg, width, gen;
>>>>>>>> +	struct intel_pcie_port *lpp;
>>>>>>>> +
>>>>>>>> +	lpp = dev_get_drvdata(dev);
>>>>>>>> +
>>>>>>>> +	reg = pcie_rc_cfg_rd(lpp, PCIE_LCTLSTS);
>>>>>>>> +	width = FIELD_GET(PCIE_LCTLSTS_NEGOTIATED_LINK_WIDTH, reg);
>>>>>>>> +	gen = FIELD_GET(PCIE_LCTLSTS_LINK_SPEED, reg);
>>>>>>>> +	if (gen > lpp->max_speed)
>>>>>>>> +		return -EINVAL;
>>>>>>>> +
>>>>>>>> +	return sprintf(buf, "Port %2u Width x%u Speed %s GT/s\n", lpp->id,
>>>>>>>> +		       width, pcie_link_gen_to_str(gen));
>>>>>>>> +}
>>>>>>>> +static DEVICE_ATTR_RO(pcie_link_status);
>>>>>>>> +
>>>>>>>> +static ssize_t pcie_speed_store(struct device *dev,
>>>>>>>> +				struct device_attribute *attr,
>>>>>>>> +				const char *buf, size_t len)
>>>>>>>> +{
>>>>>>>> +	struct intel_pcie_port *lpp;
>>>>>>>> +	unsigned long val;
>>>>>>>> +	int ret;
>>>>>>>> +
>>>>>>>> +	lpp = dev_get_drvdata(dev);
>>>>>>>> +
>>>>>>>> +	ret = kstrtoul(buf, 10, &val);
>>>>>>>> +	if (ret)
>>>>>>>> +		return ret;
>>>>>>>> +
>>>>>>>> +	if (val > lpp->max_speed)
>>>>>>>> +		return -EINVAL;
>>>>>>>> +
>>>>>>>> +	lpp->link_gen = val;
>>>>>>>> +	intel_pcie_max_speed_setup(lpp);
>>>>>>>> +	intel_pcie_speed_change_disable(lpp);
>>>>>>>> +	intel_pcie_speed_change_enable(lpp);
>>>>>>>> +
>>>>>>>> +	return len;
>>>>>>>> +}
>>>>>>>> +static DEVICE_ATTR_WO(pcie_speed);
>>>>>>>> +
>>>>>>>> +/*
>>>>>>>> + * Link width change on the fly is not always successful.
>>>>>>>> + * It also depends on the partner.
>>>>>>>> + */
>>>>>>>> +static ssize_t pcie_width_store(struct device *dev,
>>>>>>>> +				struct device_attribute *attr,
>>>>>>>> +				const char *buf, size_t len)
>>>>>>>> +{
>>>>>>>> +	struct intel_pcie_port *lpp;
>>>>>>>> +	unsigned long val;
>>>>>>>> +
>>>>>>>> +	lpp = dev_get_drvdata(dev);
>>>>>>>> +
>>>>>>>> +	if (kstrtoul(buf, 10, &val))
>>>>>>>> +		return -EINVAL;
>>>>>>>> +
>>>>>>>> +	if (val > lpp->max_width)
>>>>>>>> +		return -EINVAL;
>>>>>>>> +
>>>>>>>> +	lpp->lanes = val;
>>>>>>>> +	intel_pcie_max_link_width_setup(lpp);
>>>>>>>> +
>>>>>>>> +	return len;
>>>>>>>> +}
>>>>>>>> +static DEVICE_ATTR_WO(pcie_width);
>>>>>>> You mentioned that a use-case for changing width/speed on the fly was to
>>>>>>> control power consumption (and this also helps debugging issues). As I
>>>>>>> understand there is no current support for this in the kernel - yet it is
>>>>>>> something that would provide value.
>>>>>>>
>>>>>>> I haven't looked in much detail, however as I understand the PCI spec
>>>>>>> allows an upstream partner to change the link speed and retrain. (I'm not
>>>>>>> sure about link width). Given that we already have
>>>>>>> [current,max]_link_[speed,width] is sysfs for each PCI device, it would
>>>>>>> seem natural to extend this to allow for writing a max width or speed.
>>>>>>>
>>>>>>> So ideally this type of thing would be moved to the core or at least in
>>>>>>> the dwc driver. This way the benefits can be applied to more devices on
>>>>>>> larger PCI fabrics - Though perhaps others here will have a different view
>>>>>>> and I'm keen to hear them.
>>>>>>>
>>>>>>> I'm keen to limit the differences between the DWC controller drivers and
>>>>>>> unify common code - thus it would be helpful to have a justification as to
>>>>>>> why this is only relevant for this controller.
>>>>>>>
>>>>>>> For user-space only control, it is possible to achieve what you have here
>>>>>>> with userspace utilities (something like setpci) (assuming the standard
>>>>>>> looking registers you currently access are exposed in the normal config
>>>>>>> space way - though PCIe capabilities).
>>>>>>>
>>>>>>> My suggestion would be to drop these changes and later add something that
>>>>>>> can benefit more devices. In any case if these changes stay within this
>>>>>>> driver then it would be helpful to move them to a separate patch.
>>>>>> Sure, I will submit these entities in a separate patch.
>>>>> Please ensure that all supporting macros, functions and defines go with that
>>>>> other patch as well please.
>>>> Sure.
>>>>>>>> +
>>>>>>>> +static struct attribute *pcie_cfg_attrs[] = {
>>>>>>>> +	&dev_attr_pcie_link_status.attr,
>>>>>>>> +	&dev_attr_pcie_speed.attr,
>>>>>>>> +	&dev_attr_pcie_width.attr,
>>>>>>>> +	NULL,
>>>>>>>> +};
>>>>>>>> +ATTRIBUTE_GROUPS(pcie_cfg);
>>>>>>>> +
>>>>>>>> +static int intel_pcie_sysfs_init(struct intel_pcie_port *lpp)
>>>>>>>> +{
>>>>>>>> +	return devm_device_add_groups(lpp->pci.dev, pcie_cfg_groups);
>>>>>>>> +}
>>>>>>>> +
>>>>>>>> +static void __intel_pcie_remove(struct intel_pcie_port *lpp)
>>>>>>>> +{
>>>>>>>> +	pcie_rc_cfg_wr_mask(lpp, PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER,
>>>>>>>> +			    0, PCI_COMMAND);
>>>>>>>> +	intel_pcie_core_irq_disable(lpp);
>>>>>>>> +	intel_pcie_turn_off(lpp);
>>>>>>>> +	clk_disable_unprepare(lpp->core_clk);
>>>>>>>> +	intel_pcie_core_rst_assert(lpp);
>>>>>>>> +	intel_pcie_deinit_phy(lpp);
>>>>>>>> +}
>>>>>>>> +
>>>>>>>> +static int intel_pcie_remove(struct platform_device *pdev)
>>>>>>>> +{
>>>>>>>> +	struct intel_pcie_port *lpp = platform_get_drvdata(pdev);
>>>>>>>> +	struct pcie_port *pp = &lpp->pci.pp;
>>>>>>>> +
>>>>>>>> +	dw_pcie_host_deinit(pp);
>>>>>>>> +	__intel_pcie_remove(lpp);
>>>>>>>> +
>>>>>>>> +	return 0;
>>>>>>>> +}
>>>>>>>> +
>>>>>>>> +static int __maybe_unused intel_pcie_suspend_noirq(struct device *dev)
>>>>>>>> +{
>>>>>>>> +	struct intel_pcie_port *lpp = dev_get_drvdata(dev);
>>>>>>>> +	int ret;
>>>>>>>> +
>>>>>>>> +	intel_pcie_core_irq_disable(lpp);
>>>>>>>> +	ret = intel_pcie_wait_l2(lpp);
>>>>>>>> +	if (ret)
>>>>>>>> +		return ret;
>>>>>>>> +
>>>>>>>> +	intel_pcie_deinit_phy(lpp);
>>>>>>>> +	clk_disable_unprepare(lpp->core_clk);
>>>>>>>> +	return ret;
>>>>>>>> +}
>>>>>>>> +
>>>>>>>> +static int __maybe_unused intel_pcie_resume_noirq(struct device *dev)
>>>>>>>> +{
>>>>>>>> +	struct intel_pcie_port *lpp = dev_get_drvdata(dev);
>>>>>>>> +
>>>>>>>> +	return intel_pcie_host_setup(lpp);
>>>>>>>> +}
>>>>>>>> +
>>>>>>>> +static int intel_pcie_rc_init(struct pcie_port *pp)
>>>>>>>> +{
>>>>>>>> +	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
>>>>>>>> +	struct intel_pcie_port *lpp = dev_get_drvdata(pci->dev);
>>>>>>>> +	int ret;
>>>>>>>> +
>>>>>>>> +	/* RC/host initialization */
>>>>>>>> +	ret = intel_pcie_host_setup(lpp);
>>>>>>>> +	if (ret)
>>>>>>>> +		return ret;
>>>>>>> Insert new line here.
>>>>>> Ok.
>>>>>>>> +	ret = intel_pcie_sysfs_init(lpp);
>>>>>>>> +	if (ret)
>>>>>>>> +		__intel_pcie_remove(lpp);
>>>>>>>> +	return ret;
>>>>>>>> +}
>>>>>>>> +
>>>>>>>> +int intel_pcie_msi_init(struct pcie_port *pp)
>>>>>>>> +{
>>>>>>>> +	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
>>>>>>>> +
>>>>>>>> +	dev_dbg(pci->dev, "PCIe MSI/MSIx is handled by MSI in x86 processor\n");
>>>>>>> What about other processors? (Noting that this driver doesn't depend on
>>>>>>> any specific ARCH in the Kconfig.)
>>>>>> Agree, I will mark the dependency in Kconfig.
>>>>> OK, please also see how other drivers use the COMPILE_TEST Kconfig option.
>>>> Ok sure.
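For illustration, a dependency line in the style other DWC drivers use might look like the fragment below (the architecture gate shown is an assumption based on this exchange, not the final form):

```kconfig
config PCIE_INTEL_AXI
	bool "Intel AHB/AXI PCIe host controller support"
	# Gate on X86 but keep build coverage on other arches, as
	# suggested in the review above.
	depends on (X86 || COMPILE_TEST) && OF
	select PCIE_DW_HOST
```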
>>>>> I'd suggest that the dev_dbg just becomes a code comment.
>> Ok
>>>>>>>> +	return 0;
>>>>>>>> +}
>>>>>>>> +
>>>>>>>> +u64 intel_pcie_cpu_addr(struct dw_pcie *pcie, u64 cpu_addr)
>>>>>>>> +{
>>>>>>>> +	return cpu_addr + BUS_IATU_OFFS;
>>>>>>>> +}
>>>>>>>> +
>>>>>>>> +static const struct dw_pcie_ops intel_pcie_ops = {
>>>>>>>> +	.cpu_addr_fixup = intel_pcie_cpu_addr,
>>>>>>>> +};
>>>>>>>> +
>>>>>>>> +static const struct dw_pcie_host_ops intel_pcie_dw_ops = {
>>>>>>>> +	.host_init =		intel_pcie_rc_init,
>>>>>>>> +	.msi_host_init =	intel_pcie_msi_init,
>>>>>>>> +};
>>>>>>>> +
>>>>>>>> +static const struct intel_pcie_soc pcie_data = {
>>>>>>>> +	.pcie_ver =		0x520A,
>>>>>>>> +	.pcie_atu_offset =	0xC0000,
>>>>>>>> +	.num_viewport =		3,
>>>>>>>> +};
>>>>>>>> +
>>>>>>>> +static int intel_pcie_probe(struct platform_device *pdev)
>>>>>>>> +{
>>>>>>>> +	const struct intel_pcie_soc *data;
>>>>>>>> +	struct device *dev = &pdev->dev;
>>>>>>>> +	struct intel_pcie_port *lpp;
>>>>>>>> +	struct pcie_port *pp;
>>>>>>>> +	struct dw_pcie *pci;
>>>>>>>> +	int ret;
>>>>>>>> +
>>>>>>>> +	lpp = devm_kzalloc(dev, sizeof(*lpp), GFP_KERNEL);
>>>>>>>> +	if (!lpp)
>>>>>>>> +		return -ENOMEM;
>>>>>>>> +
>>>>>>>> +	platform_set_drvdata(pdev, lpp);
>>>>>>>> +	pci = &lpp->pci;
>>>>>>>> +	pci->dev = dev;
>>>>>>>> +	pp = &pci->pp;
>>>>>>>> +
>>>>>>>> +	ret = intel_pcie_get_resources(pdev);
>>>>>>>> +	if (ret)
>>>>>>>> +		return ret;
>>>>>>>> +
>>>>>>>> +	data = device_get_match_data(dev);
>>>>>>>> +	pci->ops = &intel_pcie_ops;
>>>>>>>> +	pci->version = data->pcie_ver;
>>>>>>>> +	pci->atu_base = pci->dbi_base + data->pcie_atu_offset;
>>>>>>>> +	pp->ops = &intel_pcie_dw_ops;
>>>>>>>> +
>>>>>>>> +	ret = dw_pcie_host_init(pp);
>>>>>>>> +	if (ret) {
>>>>>>>> +		dev_err(dev, "cannot initialize host\n");
>>>>>>>> +		return ret;
>>>>>>>> +	}
>>>>>>> Add a new line after the closing brace.
>>>>>> Ok
>>>>>>>> +	/* Intel PCIe doesn't configure IO region, so configure
>>>>>>>> +	 * viewport to not to access IO region during register
>>>>>>>> +	 * read write operations.
>>>>>>>> +	 */
>>>>>>>> +	pci->num_viewport = data->num_viewport;
>>>>>>>> +	dev_info(dev,
>>>>>>>> +		 "Intel AXI PCIe Root Complex Port %d Init Done\n", lpp->id);
>>>>>>>> +	return ret;
>>>>>>>> +}
>>>>>>>> +
>>>>>>>> +static const struct dev_pm_ops intel_pcie_pm_ops = {
>>>>>>>> +	SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(intel_pcie_suspend_noirq,
>>>>>>>> +				      intel_pcie_resume_noirq)
>>>>>>>> +};
>>>>>>>> +
>>>>>>>> +static const struct of_device_id of_intel_pcie_match[] = {
>>>>>>>> +	{ .compatible = "intel,lgm-pcie", .data = &pcie_data },
>>>>>>>> +	{}
>>>>>>>> +};
>>>>>>>> +
>>>>>>>> +static struct platform_driver intel_pcie_driver = {
>>>>>>>> +	.probe = intel_pcie_probe,
>>>>>>>> +	.remove = intel_pcie_remove,
>>>>>>>> +	.driver = {
>>>>>>>> +		.name = "intel-lgm-pcie",
>>>>>>> Is there a reason why we use intel-lgm-pcie here and pcie-intel-axi
>>>>>>> elsewhere? What does lgm mean?
>>>>>> lgm is the name of the Intel SoC.  I will rename it to pcie-intel-axi to
>>>>>> be more generic.
>>>>> I'm keen to ensure that it is consistently named. I've seen other comments
>>>>> regarding what the name should be - I don't have a strong opinion though
>>>>> I do think that *-axi may be too generic.
>> This PCIe driver is for the Intel Gateway SoCs. So how about naming it as
>> "pcie-intel-gw"; pcie-intel-gw.c and Kconfig as PCIE_INTEL_GW.
> I don't have a problem with this, though others may have differing views.
Sure, thank you.
>
> Thanks,
>
> Andrew Murray
>
>> Regards,
>> Dilip
>>
>>>> Ok, I will check and get back to you on this.
>>>> Regards,
>>> Thanks,
>>>
>>> Andrew Murray
>>>
>>>> Dilip
>>>>
>>>>> Thanks,
>>>>>
>>>>> Andrew Murray
>>>>>
>>>>>>> Thanks,
>>>>>>>
>>>>>>> Andrew Murray
>>>>>>>
>>>>>>>> +		.of_match_table = of_intel_pcie_match,
>>>>>>>> +		.pm = &intel_pcie_pm_ops,
>>>>>>>> +	},
>>>>>>>> +};
>>>>>>>> +builtin_platform_driver(intel_pcie_driver);
>>>>>>>> -- 
>>>>>>>> 2.11.0
>>>>>>>>

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v3 2/2] dwc: PCI: intel: Intel PCIe RC controller driver
  2019-09-05 11:40     ` Andy Shevchenko
@ 2019-09-12  7:01       ` Dilip Kota
  0 siblings, 0 replies; 37+ messages in thread
From: Dilip Kota @ 2019-09-12  7:01 UTC (permalink / raw)
  To: Andy Shevchenko, Andrew Murray
  Cc: jingoohan1, gustavo.pimentel, lorenzo.pieralisi, robh,
	martin.blumenstingl, linux-pci, hch, devicetree, linux-kernel,
	cheol.yong.kim, chuanhua.lei, qi-ming.wu

Hi Andy,

On 9/5/2019 7:40 PM, Andy Shevchenko wrote:
> On Thu, Sep 05, 2019 at 11:45:18AM +0100, Andrew Murray wrote:
>> On Wed, Sep 04, 2019 at 06:10:31PM +0800, Dilip Kota wrote:
>>> Add support for the PCIe RC controller on the Intel Universal
>>> Gateway SoC. The PCIe controller is based on the Synopsys
>>> Designware PCIe core.
>>> +config PCIE_INTEL_AXI
> I think that name here is too generic. Classical x86 does not seem to use this.

This PCIe driver is for the Intel Gateway SoCs. So how about naming it as
"pcie-intel-gw"; pcie-intel-gw.c and Kconfig as PCIE_INTEL_GW.

Andrew Murray is OK with this naming; please let me know your view.

>
>>> +        bool "Intel AHB/AXI PCIe host controller support"
>>> +        depends on PCI_MSI
>>> +        depends on PCI
>>> +        depends on OF
>>> +        select PCIE_DW_HOST
>>> +        help
>>> +          Say 'Y' here to enable support for Intel AHB/AXI PCIe Host
>>> +	  controller driver.
>>> +	  The Intel PCIe controller is based on the Synopsys Designware
>>> +	  pcie core and therefore uses the Designware core functions to
>>> +	  implement the driver.

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v3 2/2] dwc: PCI: intel: Intel PCIe RC controller driver
  2019-09-12  6:58                 ` Dilip Kota
@ 2019-09-12  8:25                   ` Andrew Murray
  2019-09-12  9:23                     ` Dilip Kota
  0 siblings, 1 reply; 37+ messages in thread
From: Andrew Murray @ 2019-09-12  8:25 UTC (permalink / raw)
  To: Dilip Kota
  Cc: jingoohan1, gustavo.pimentel, lorenzo.pieralisi, robh,
	martin.blumenstingl, linux-pci, hch, devicetree, linux-kernel,
	andriy.shevchenko, cheol.yong.kim, chuanhua.lei, qi-ming.wu

On Thu, Sep 12, 2019 at 02:58:45PM +0800, Dilip Kota wrote:
> Hi Andrew Murray,
> 
> On 9/11/2019 6:30 PM, Andrew Murray wrote:
> > On Tue, Sep 10, 2019 at 03:46:17PM +0800, Dilip Kota wrote:
> > > Hi Andrew Murray,
> > > 
> > > Please find my response inline.
> > > 
> > > On 9/9/2019 4:31 PM, Andrew Murray wrote:
> > > > On Mon, Sep 09, 2019 at 02:51:03PM +0800, Dilip Kota wrote:
> > > > > On 9/6/2019 7:20 PM, Andrew Murray wrote:
> > > > > > On Fri, Sep 06, 2019 at 06:58:11PM +0800, Dilip Kota wrote:
> > > > > > > Hi Andrew Murray,
> > > > > > > 
> > > > > > > Thanks for the review. Please find my response inline.
> > > > > > > 
> > > > > > > On 9/5/2019 6:45 PM, Andrew Murray wrote:
> > > > > > > > On Wed, Sep 04, 2019 at 06:10:31PM +0800, Dilip Kota wrote:
> > > > > > > > > Add support for the PCIe RC controller on the Intel Universal
> > > > > > > > > Gateway SoC. The PCIe controller is based on the Synopsys
> > > > > > > > > Designware PCIe core.
> > > > > > > > > 
> > > > > > > > > Signed-off-by: Dilip Kota <eswara.kota@linux.intel.com>
> > > > > > > > > ---
> > > > > > > > Hi Dilip,
> > > > > > > > 
> > > > > > > > Thanks for the patch, initial feedback below:
> > > > > > > > 
> > > > > > > > > changes on v3:
> > > > > > > > > 	Rename PCIe app logic registers with PCIE_APP prefix.
> > > > > > > > > 	PCIE_IOP_CTRL configuration is not required. Remove respective code.
> > > > > > > > > 	Remove wrapper functions for clk enable/disable APIs.
> > > > > > > > > 	Use platform_get_resource_byname() instead of
> > > > > > > > > 	  devm_platform_ioremap_resource() to be similar with DWC framework.
> > > > > > > > > 	Rename phy name to "pciephy".
> > > > > > > > > 	Modify debug message in msi_init() callback to be more specific.
> > > > > > > > > 	Remove map_irq() callback.
> > > > > > > > > 	Enable the INTx interrupts at the time of PCIe initialization.
> > > > > > > > > 	Reduce memory fragmentation by using variable "struct dw_pcie pci"
> > > > > > > > > 	  instead of allocating memory.
> > > > > > > > > 	Reduce the delay to 100us during endpoint initialization
> > > > > > > > > 	  intel_pcie_ep_rst_init().
> > > > > > > > > 	Call  dw_pcie_host_deinit() during remove() instead of directly
> > > > > > > > > 	  calling PCIe core APIs.
> > > > > > > > > 	Rename "intel,rst-interval" to "reset-assert-ms".
> > > > > > > > > 	Remove unused APP logic Interrupt bit macro definitions.
> > > > > > > > >      	Use dwc framework's dw_pcie_setup_rc() for PCIe host specific
> > > > > > > > > 	 configuration instead of redefining the same functionality in
> > > > > > > > > 	 the driver.
> > > > > > > > > 	Move the whole DT parsing specific code to intel_pcie_get_resources()
> > > > > > > > > 
> > > > > > > > >      drivers/pci/controller/dwc/Kconfig          |  13 +
> > > > > > > > >      drivers/pci/controller/dwc/Makefile         |   1 +
> > > > > > > > >      drivers/pci/controller/dwc/pcie-intel-axi.c | 840 ++++++++++++++++++++++++++++
> > > > > > > > >      3 files changed, 854 insertions(+)
> > > > > > > > >      create mode 100644 drivers/pci/controller/dwc/pcie-intel-axi.c
> > > > > > > > > 
> > > > > > > > > diff --git a/drivers/pci/controller/dwc/Kconfig b/drivers/pci/controller/dwc/Kconfig
> > > > > > > > > index 6ea778ae4877..e44b9b6a6390 100644
> > > > > > > > > --- a/drivers/pci/controller/dwc/Kconfig
> > > > > > > > > +++ b/drivers/pci/controller/dwc/Kconfig
> > > > > > > > > @@ -82,6 +82,19 @@ config PCIE_DW_PLAT_EP
> > > > > > > > >      	  order to enable device-specific features PCI_DW_PLAT_EP must be
> > > > > > > > >      	  selected.
> > > > > > > > > +config PCIE_INTEL_AXI
> > > > > > > > > +        bool "Intel AHB/AXI PCIe host controller support"
> > > > > > > > > +        depends on PCI_MSI
> > > > > > > > > +        depends on PCI
> > > > > > > > I don't think the PCI dependency is required here.
> > > > > > > > 
> > > > > > > > I'm also not sure why PCI_MSI is required, we select PCIE_DW_HOST which
> > > > > > > > depends on PCI_MSI_IRQ_DOMAIN which depends on PCI_MSI.
> > > > > > > Agree, dependency on PCI and PCI_MSI can be removed. I will remove it in
> > > > > > > next patch revision.
> > > > > > > > > +        depends on OF
> > > > > > > > > +        select PCIE_DW_HOST
> > > > > > > > > +        help
> > > > > > > > > +          Say 'Y' here to enable support for Intel AHB/AXI PCIe Host
> > > > > > > > > +	  controller driver.
> > > > > > > > > +	  The Intel PCIe controller is based on the Synopsys Designware
> > > > > > > > > +	  pcie core and therefore uses the Designware core functions to
> > > > > > > > > +	  implement the driver.
> > > > > > > > I can see this description is similar to others in the same Kconfig,
> > > > > > > > however I'm not sure what value a user gains by knowing implementation
> > > > > > > > details - it's helpful to know that PCIE_INTEL_AXI is based on the
> > > > > > > > Designware core, but is it helpful to know that the Designware core
> > > > > > > > functions are used?
> > > > > > > > 
> > > > > > > > > +
> > > > > > > > >      config PCI_EXYNOS
> > > > > > > > >      	bool "Samsung Exynos PCIe controller"
> > > > > > > > >      	depends on SOC_EXYNOS5440 || COMPILE_TEST
> > > > > > > > > diff --git a/drivers/pci/controller/dwc/Makefile b/drivers/pci/controller/dwc/Makefile
> > > > > > > > > index b085dfd4fab7..46e656ebdf90 100644
> > > > > > > > > --- a/drivers/pci/controller/dwc/Makefile
> > > > > > > > > +++ b/drivers/pci/controller/dwc/Makefile
> > > > > > > > > @@ -3,6 +3,7 @@ obj-$(CONFIG_PCIE_DW) += pcie-designware.o
> > > > > > > > >      obj-$(CONFIG_PCIE_DW_HOST) += pcie-designware-host.o
> > > > > > > > >      obj-$(CONFIG_PCIE_DW_EP) += pcie-designware-ep.o
> > > > > > > > >      obj-$(CONFIG_PCIE_DW_PLAT) += pcie-designware-plat.o
> > > > > > > > > +obj-$(CONFIG_PCIE_INTEL_AXI) += pcie-intel-axi.o
> > > > > > > > >      obj-$(CONFIG_PCI_DRA7XX) += pci-dra7xx.o
> > > > > > > > >      obj-$(CONFIG_PCI_EXYNOS) += pci-exynos.o
> > > > > > > > >      obj-$(CONFIG_PCI_IMX6) += pci-imx6.o
> > > > > > > > > diff --git a/drivers/pci/controller/dwc/pcie-intel-axi.c b/drivers/pci/controller/dwc/pcie-intel-axi.c
> > > > > > > > > new file mode 100644
> > > > > > > > > index 000000000000..75607ce03ebf
> > > > > > > > > --- /dev/null
> > > > > > > > > +++ b/drivers/pci/controller/dwc/pcie-intel-axi.c
> > > > > > > > > @@ -0,0 +1,840 @@
> > > > > > > > > +// SPDX-License-Identifier: GPL-2.0
> > > > > > > > > +/*
> > > > > > > > > + * PCIe host controller driver for Intel AXI PCIe Bridge
> > > > > > > > > + *
> > > > > > > > > + * Copyright (c) 2019 Intel Corporation.
> > > > > > > > > + */
> > > > > > > > > +
> > > > > > > > > +#include <linux/bitfield.h>
> > > > > > > > > +#include <linux/clk.h>
> > > > > > > > > +#include <linux/gpio/consumer.h>
> > > > > > > > > +#include <linux/interrupt.h>
> > > > > > > > > +#include <linux/iopoll.h>
> > > > > > > > > +#include <linux/of_irq.h>
> > > > > > > > > +#include <linux/of_pci.h>
> > > > > > > > > +#include <linux/of_platform.h>
> > > > > > > > > +#include <linux/phy/phy.h>
> > > > > > > > > +#include <linux/platform_device.h>
> > > > > > > > > +#include <linux/reset.h>
> > > > > > > > > +
> > > > > > > > > +#include "../../pci.h"
> > > > > > > > Please remove this - it isn't needed.
> > > > > > > Ok, will remove it in next patch revision.
> > > > > > > > > +#include "pcie-designware.h"
> > > > > > > > > +
> > > > > > > > > +#define PCIE_CCRID				0x8
> > > > > > > > > +
> > > > > > > > > +#define PCIE_LCAP				0x7C
> > > > > > > > > +#define PCIE_LCAP_MAX_LINK_SPEED		GENMASK(3, 0)
> > > > > > > > > +#define PCIE_LCAP_MAX_LENGTH_WIDTH		GENMASK(9, 4)
> > > > > > > > These look like the standard PCI Link Capabilities Register,
> > > > > > > > can you use the standard ones defined in pci_regs.h? (see
> > > > > > > > PCI_EXP_LNKCAP_SLS_*).
> > > > > > > > 
> > > > > > > > > +
> > > > > > > > > +/* Link Control and Status Register */
> > > > > > > > > +#define PCIE_LCTLSTS				0x80
> > > > > > > > > +#define PCIE_LCTLSTS_ASPM_ENABLE		GENMASK(1, 0)
> > > > > > > > > +#define PCIE_LCTLSTS_RCB128			BIT(3)
> > > > > > > > > +#define PCIE_LCTLSTS_LINK_DISABLE		BIT(4)
> > > > > > > > > +#define PCIE_LCTLSTS_COM_CLK_CFG		BIT(6)
> > > > > > > > > +#define PCIE_LCTLSTS_HW_AW_DIS			BIT(9)
> > > > > > > > > +#define PCIE_LCTLSTS_LINK_SPEED			GENMASK(19, 16)
> > > > > > > > > +#define PCIE_LCTLSTS_NEGOTIATED_LINK_WIDTH	GENMASK(25, 20)
> > > > > > > > > +#define PCIE_LCTLSTS_SLOT_CLK_CFG		BIT(28)
> > > > > > > > These look like the standard Link Control and Link Status register, can
> > > > > > > > you use the standard ones defined in pci_regs.h? (see PCI_EXP_LNKCTL_*
> > > > > > > > and PCI_EXP_LNKSTA_*).
> > > > > > > > 
> > > > > > > > > +
> > > > > > > > > +#define PCIE_LCTLSTS2				0xA0
> > > > > > > > > +#define PCIE_LCTLSTS2_TGT_LINK_SPEED		GENMASK(3, 0)
> > > > > > > > > +#define PCIE_LCTLSTS2_TGT_LINK_SPEED_25GT	0x1
> > > > > > > > > +#define PCIE_LCTLSTS2_TGT_LINK_SPEED_5GT	0x2
> > > > > > > > > +#define PCIE_LCTLSTS2_TGT_LINK_SPEED_8GT	0x3
> > > > > > > > > +#define PCIE_LCTLSTS2_TGT_LINK_SPEED_16GT	0x4
> > > > > > > > And these look like the standard Link Control Register 2 (see
> > > > > > > > PCI_EXP_LNKCTL2_*).
> > > > > > > > 
> > > > > > > > Most of the #defines above and below look just like standard PCI defines
> > > > > > > > (for example those found in pci_regs.h) - both in terms of their name and
> > > > > > > > what they do. Ideally where the functionality is the same or very similar,
> > > > > > > > then we should use the standard PCI defines in (pci_regs.h). This helps
> > > > > > > > readability/understanding, helps to identify duplicate code and reduces
> > > > > > > > the size of your patch.
> > > > > > > > 
> > > > > > > > Also the capability offsets (e.g. PCIE_LCTLSTS2) are also standard. The
> > > > > > > > offsets you define are offset by an additional 0x70. E.g.i
> > > > > > > > PCIE_LCTLSTS2 = 0xA0, and PCI_EXP_LNKCTL2 = 0x30. Perhaps abstracting
> > > > > > > > this offset will allow you to use the standard pci defines?
> > > > > > > > 
> > > > > > > > I haven't looked in much detail at the remainder of the defines, but
> > > > > > > > the same rationale should apply.
> > > > > > > Agree, that's a good point. I will define the offset macro and use the
> > > > > > > macros defined in pci_regs.h.
> > > > > > > > > +#define PCIE_LCTLSTS2_HW_AUTO_DIS		BIT(5)
> > > > > > > > > +
> > > > > > > > > +/* Ack Frequency Register */
> > > > > > > > > +#define PCIE_AFR				0x70C
> > > > > > > > > +#define PCIE_AFR_FTS_NUM			GENMASK(15, 8)
> > > > > > > > > +#define PCIE_AFR_COM_FTS_NUM			GENMASK(23, 16)
> > > > > > > > > +#define PCIE_AFR_GEN12_FTS_NUM_DFT		(SZ_128 - 1)
> > > > > > > > > +#define PCIE_AFR_GEN3_FTS_NUM_DFT		180
> > > > > > > > > +#define PCIE_AFR_GEN4_FTS_NUM_DFT		196
> > > > > > > > > +
> > > > > > > > > +#define PCIE_PLCR_DLL_LINK_EN			BIT(5)
> > > > > > > > > +#define PCIE_PORT_LOGIC_FTS			GENMASK(7, 0)
> > > > > > > > > +#define PCIE_PORT_LOGIC_DFT_FTS_NUM		(SZ_128 - 1)
> > > > > > > > > +
> > > > > > > > > +#define PCIE_MISC_CTRL				0x8BC
> > > > > > > > > +#define PCIE_MISC_CTRL_DBI_RO_WR_EN		BIT(0)
> > > > > > > > > +
> > > > > > > > > +#define PCIE_MULTI_LANE_CTRL			0x8C0
> > > > > > > > > +#define PCIE_UPCONFIG_SUPPORT			BIT(7)
> > > > > > > > > +#define PCIE_DIRECT_LINK_WIDTH_CHANGE		BIT(6)
> > > > > > > > > +#define PCIE_TARGET_LINK_WIDTH			GENMASK(5, 0)
> > > > > > > > > +
> > > > > > > > > +/* APP RC Core Control Register */
> > > > > > > > > +#define PCIE_APP_CCR				0x10
> > > > > > > > > +#define PCIE_APP_CCR_LTSSM_ENABLE		BIT(0)
> > > > > > > > > +
> > > > > > > > > +/* PCIe Message Control */
> > > > > > > > > +#define PCIE_APP_MSG_CR				0x30
> > > > > > > > > +#define PCIE_APP_MSG_XMT_PM_TURNOFF		BIT(0)
> > > > > > > > > +
> > > > > > > > > +/* PCIe Power Management Control */
> > > > > > > > > +#define PCIE_APP_PMC				0x44
> > > > > > > > > +#define PCIE_APP_PMC_IN_L2			BIT(20)
> > > > > > > > > +
> > > > > > > > > +/* Interrupt Enable Register */
> > > > > > > > > +#define PCIE_APP_IRNEN				0xF4
> > > > > > > > > +#define PCIE_APP_IRNCR				0xF8
> > > > > > > > > +#define PCIE_APP_IRN_AER_REPORT			BIT(0)
> > > > > > > > > +#define PCIE_APP_IRN_PME			BIT(2)
> > > > > > > > > +#define PCIE_APP_IRN_RX_VDM_MSG			BIT(4)
> > > > > > > > > +#define PCIE_APP_IRN_PM_TO_ACK			BIT(9)
> > > > > > > > > +#define PCIE_APP_IRN_LINK_AUTO_BW_STAT		BIT(11)
> > > > > > > > > +#define PCIE_APP_IRN_BW_MGT			BIT(12)
> > > > > > > > > +#define PCIE_APP_IRN_MSG_LTR			BIT(18)
> > > > > > > > > +#define PCIE_APP_IRN_SYS_ERR_RC			BIT(29)
> > > > > > > > > +
> > > > > > > > > +#define PCIE_APP_INTX_OFST	12
> > > > > > > > > +#define PCIE_APP_IRN_INT	(PCIE_APP_IRN_AER_REPORT | PCIE_APP_IRN_PME | \
> > > > > > > > > +			PCIE_APP_IRN_RX_VDM_MSG | PCIE_APP_IRN_SYS_ERR_RC | \
> > > > > > > > > +			PCIE_APP_IRN_PM_TO_ACK | PCIE_APP_IRN_MSG_LTR | \
> > > > > > > > > +			PCIE_APP_IRN_BW_MGT | PCIE_APP_IRN_LINK_AUTO_BW_STAT | \
> > > > > > > > > +			(PCIE_APP_INTX_OFST + PCI_INTERRUPT_INTA) | \
> > > > > > > > > +			(PCIE_APP_INTX_OFST + PCI_INTERRUPT_INTB) | \
> > > > > > > > > +			(PCIE_APP_INTX_OFST + PCI_INTERRUPT_INTC) | \
> > > > > > > > > +			(PCIE_APP_INTX_OFST + PCI_INTERRUPT_INTD))
> > > > > > > > > +
> > > > > > > > > +#define BUS_IATU_OFFS		SZ_256M
> > > > > > > > > +#define RST_INTRVL_DFT_MS	100
> > > > > > > > > +enum {
> > > > > > > > > +	PCIE_LINK_SPEED_AUTO = 0,
> > > > > > > > > +	PCIE_LINK_SPEED_GEN1,
> > > > > > > > > +	PCIE_LINK_SPEED_GEN2,
> > > > > > > > > +	PCIE_LINK_SPEED_GEN3,
> > > > > > > > > +	PCIE_LINK_SPEED_GEN4,
> > > > > > > > > +};
> > > > > > > > > +
> > > > > > > > > +struct intel_pcie_soc {
> > > > > > > > > +	unsigned int pcie_ver;
> > > > > > > > > +	unsigned int pcie_atu_offset;
> > > > > > > > > +	u32 num_viewport;
> > > > > > > > > +};
> > > > > > > > > +
> > > > > > > > > +struct intel_pcie_port {
> > > > > > > > > +	struct dw_pcie		pci;
> > > > > > > > > +	unsigned int		id; /* Physical RC Index */
> > > > > > > > > +	void __iomem		*app_base;
> > > > > > > > > +	struct gpio_desc	*reset_gpio;
> > > > > > > > > +	u32			rst_interval;
> > > > > > > > > +	u32			max_speed;
> > > > > > > > > +	u32			link_gen;
> > > > > > > > > +	u32			max_width;
> > > > > > > > > +	u32			lanes;
> > > > > > > > > +	struct clk		*core_clk;
> > > > > > > > > +	struct reset_control	*core_rst;
> > > > > > > > > +	struct phy		*phy;
> > > > > > > > > +};
> > > > > > > > > +
> > > > > > > > > +static void pcie_update_bits(void __iomem *base, u32 mask, u32 val, u32 ofs)
> > > > > > > > > +{
> > > > > > > > > +	u32 orig, tmp;
> > > > > > > > > +
> > > > > > > > > +	orig = readl(base + ofs);
> > > > > > > > > +
> > > > > > > > > +	tmp = (orig & ~mask) | (val & mask);
> > > > > > > > > +
> > > > > > > > > +	if (tmp != orig)
> > > > > > > > > +		writel(tmp, base + ofs);
> > > > > > > > > +}
> > > > > > > > > +
> > > > > > > > > +static inline u32 pcie_app_rd(struct intel_pcie_port *lpp, u32 ofs)
> > > > > > > > > +{
> > > > > > > > > +	return readl(lpp->app_base + ofs);
> > > > > > > > > +}
> > > > > > > > > +
> > > > > > > > > +static inline void pcie_app_wr(struct intel_pcie_port *lpp, u32 val, u32 ofs)
> > > > > > > > > +{
> > > > > > > > > +	writel(val, lpp->app_base + ofs);
> > > > > > > > > +}
> > > > > > > > > +
> > > > > > > > > +static void pcie_app_wr_mask(struct intel_pcie_port *lpp,
> > > > > > > > > +			     u32 mask, u32 val, u32 ofs)
> > > > > > > > > +{
> > > > > > > > > +	pcie_update_bits(lpp->app_base, mask, val, ofs);
> > > > > > > > > +}
> > > > > > > > > +
> > > > > > > > > +static inline u32 pcie_rc_cfg_rd(struct intel_pcie_port *lpp, u32 ofs)
> > > > > > > > > +{
> > > > > > > > > +	return dw_pcie_readl_dbi(&lpp->pci, ofs);
> > > > > > > > > +}
> > > > > > > > > +
> > > > > > > > > +static inline void pcie_rc_cfg_wr(struct intel_pcie_port *lpp, u32 val, u32 ofs)
> > > > > > > > > +{
> > > > > > > > > +	dw_pcie_writel_dbi(&lpp->pci, ofs, val);
> > > > > > > > > +}
> > > > > > > > > +
> > > > > > > > > +static void pcie_rc_cfg_wr_mask(struct intel_pcie_port *lpp,
> > > > > > > > > +				u32 mask, u32 val, u32 ofs)
> > > > > > > > > +{
> > > > > > > > > +	pcie_update_bits(lpp->pci.dbi_base, mask, val, ofs);
> > > > > > > > > +}
> > > > > > > > > +
> > > > > > > > > +static void intel_pcie_mem_iatu(struct intel_pcie_port *lpp)
> > > > > > > > > +{
> > > > > > > > > +	struct pcie_port *pp = &lpp->pci.pp;
> > > > > > > > > +	phys_addr_t cpu_addr = pp->mem_base;
> > > > > > > > > +
> > > > > > > > > +	dw_pcie_prog_outbound_atu(&lpp->pci, PCIE_ATU_REGION_INDEX0,
> > > > > > > > > +				  PCIE_ATU_TYPE_MEM, cpu_addr,
> > > > > > > > > +				  pp->mem_base, pp->mem_size);
> > > > > > > > > +}
> > > > > > > > > +
> > > > > > > > > +static void intel_pcie_ltssm_enable(struct intel_pcie_port *lpp)
> > > > > > > > > +{
> > > > > > > > > +	pcie_app_wr_mask(lpp, PCIE_APP_CCR_LTSSM_ENABLE,
> > > > > > > > > +			 PCIE_APP_CCR_LTSSM_ENABLE, PCIE_APP_CCR);
> > > > > > > > > +}
> > > > > > > > > +
> > > > > > > > > +static void intel_pcie_ltssm_disable(struct intel_pcie_port *lpp)
> > > > > > > > > +{
> > > > > > > > > +	pcie_app_wr_mask(lpp, PCIE_APP_CCR_LTSSM_ENABLE, 0, PCIE_APP_CCR);
> > > > > > > > > +}
> > > > > > > > > +
> > > > > > > > > +static const char *pcie_link_gen_to_str(int gen)
> > > > > > > > > +{
> > > > > > > > > +	switch (gen) {
> > > > > > > > > +	case PCIE_LINK_SPEED_GEN1:
> > > > > > > > > +		return "2.5";
> > > > > > > > > +	case PCIE_LINK_SPEED_GEN2:
> > > > > > > > > +		return "5.0";
> > > > > > > > > +	case PCIE_LINK_SPEED_GEN3:
> > > > > > > > > +		return "8.0";
> > > > > > > > > +	case PCIE_LINK_SPEED_GEN4:
> > > > > > > > > +		return "16.0";
> > > > > > > > > +	default:
> > > > > > > > > +		return "???";
> > > > > > > > > +	}
> > > > > > > > > +}
> > > > > > > > > +
> > > > > > > > > +static void intel_pcie_link_setup(struct intel_pcie_port *lpp)
> > > > > > > > > +{
> > > > > > > > > +	u32 val;
> > > > > > > > > +
> > > > > > > > > +	val = pcie_rc_cfg_rd(lpp, PCIE_LCAP);
> > > > > > > > > +	lpp->max_speed = FIELD_GET(PCIE_LCAP_MAX_LINK_SPEED, val);
> > > > > > > > > +	lpp->max_width = FIELD_GET(PCIE_LCAP_MAX_LENGTH_WIDTH, val);
> > > > > > > > > +
> > > > > > > > > +	val = pcie_rc_cfg_rd(lpp, PCIE_LCTLSTS);
> > > > > > > > > +
> > > > > > > > > +	val &= ~(PCIE_LCTLSTS_LINK_DISABLE | PCIE_LCTLSTS_ASPM_ENABLE);
> > > > > > > > > +	val |= (PCIE_LCTLSTS_SLOT_CLK_CFG | PCIE_LCTLSTS_COM_CLK_CFG |
> > > > > > > > > +		PCIE_LCTLSTS_RCB128);
> > > > > > > > > +	pcie_rc_cfg_wr(lpp, val, PCIE_LCTLSTS);
> > > > > > > > > +}
> > > > > > > > > +
> > > > > > > > > +static void intel_pcie_max_speed_setup(struct intel_pcie_port *lpp)
> > > > > > > > > +{
> > > > > > > > > +	u32 reg, val;
> > > > > > > > > +
> > > > > > > > > +	reg = pcie_rc_cfg_rd(lpp, PCIE_LCTLSTS2);
> > > > > > > > > +	switch (lpp->link_gen) {
> > > > > > > > > +	case PCIE_LINK_SPEED_GEN1:
> > > > > > > > > +		reg &= ~PCIE_LCTLSTS2_TGT_LINK_SPEED;
> > > > > > > > > +		reg |= PCIE_LCTLSTS2_HW_AUTO_DIS |
> > > > > > > > > +			PCIE_LCTLSTS2_TGT_LINK_SPEED_25GT;
> > > > > > > > > +		break;
> > > > > > > > > +	case PCIE_LINK_SPEED_GEN2:
> > > > > > > > > +		reg &= ~PCIE_LCTLSTS2_TGT_LINK_SPEED;
> > > > > > > > > +		reg |= PCIE_LCTLSTS2_HW_AUTO_DIS |
> > > > > > > > > +			PCIE_LCTLSTS2_TGT_LINK_SPEED_5GT;
> > > > > > > > > +		break;
> > > > > > > > > +	case PCIE_LINK_SPEED_GEN3:
> > > > > > > > > +		reg &= ~PCIE_LCTLSTS2_TGT_LINK_SPEED;
> > > > > > > > > +		reg |= PCIE_LCTLSTS2_HW_AUTO_DIS |
> > > > > > > > > +			PCIE_LCTLSTS2_TGT_LINK_SPEED_8GT;
> > > > > > > > > +		break;
> > > > > > > > > +	case PCIE_LINK_SPEED_GEN4:
> > > > > > > > > +		reg &= ~PCIE_LCTLSTS2_TGT_LINK_SPEED;
> > > > > > > > > +		reg |= PCIE_LCTLSTS2_HW_AUTO_DIS |
> > > > > > > > > +			PCIE_LCTLSTS2_TGT_LINK_SPEED_16GT;
> > > > > > > > > +		break;
> > > > > > > > > +	default:
> > > > > > > > > +		/* Use hardware capability */
> > > > > > > > > +		val = pcie_rc_cfg_rd(lpp, PCIE_LCAP);
> > > > > > > > > +		val = FIELD_GET(PCIE_LCAP_MAX_LINK_SPEED, val);
> > > > > > > > > +		reg &= ~PCIE_LCTLSTS2_HW_AUTO_DIS;
> > > > > > > > > +		reg |= val;
> > > > > > > > > +		break;
> > > > > > > > > +	}
> > > > > > > > > +	pcie_rc_cfg_wr(lpp, reg, PCIE_LCTLSTS2);
> > > > > > > > > +}
> > > > > > > > > +
> > > > > > > > > +static void intel_pcie_speed_change_enable(struct intel_pcie_port *lpp)
> > > > > > > > > +{
> > > > > > > > > +	u32 mask, val;
> > > > > > > > > +
> > > > > > > > > +	mask = PORT_LOGIC_SPEED_CHANGE | PCIE_PORT_LOGIC_FTS;
> > > > > > > > > +	val = PORT_LOGIC_SPEED_CHANGE | PCIE_PORT_LOGIC_DFT_FTS_NUM;
> > > > > > > > > +
> > > > > > > > > +	pcie_rc_cfg_wr_mask(lpp, mask, val, PCIE_LINK_WIDTH_SPEED_CONTROL);
> > > > > > > > > +}
> > > > > > > > > +
> > > > > > > > > +static void intel_pcie_speed_change_disable(struct intel_pcie_port *lpp)
> > > > > > > > > +{
> > > > > > > > > +	pcie_rc_cfg_wr_mask(lpp, PORT_LOGIC_SPEED_CHANGE, 0,
> > > > > > > > > +			    PCIE_LINK_WIDTH_SPEED_CONTROL);
> > > > > > > > > +}
> > > > > > > > > +
> > > > > > > > > +static void intel_pcie_max_link_width_setup(struct intel_pcie_port *lpp)
> > > > > > > > > +{
> > > > > > > > > +	u32 mask, val;
> > > > > > > > > +
> > > > > > > > > +	/* HW auto bandwidth negotiation must be enabled */
> > > > > > > > > +	pcie_rc_cfg_wr_mask(lpp, PCIE_LCTLSTS_HW_AW_DIS, 0, PCIE_LCTLSTS);
> > > > > > > > > +
> > > > > > > > > +	mask = PCIE_DIRECT_LINK_WIDTH_CHANGE | PCIE_TARGET_LINK_WIDTH;
> > > > > > > > > +	val = PCIE_DIRECT_LINK_WIDTH_CHANGE | lpp->lanes;
> > > > > > > > > +	pcie_rc_cfg_wr_mask(lpp, mask, val, PCIE_MULTI_LANE_CTRL);
> > > > > > > > Is this identical functionality to the writing of PCIE_PORT_LINK_CONTROL
> > > > > > > > in dw_pcie_setup?
> > > > > > > > 
> > > > > > > > I ask because if the user sets num-lanes in the DT, will it have the
> > > > > > > > desired effect?
> > > > > > > The intel_pcie_max_link_width_setup() function is called by the
> > > > > > > sysfs attribute handler pcie_width_store() to change the link
> > > > > > > width on the fly.
> > > > > > Indeed, but a user may also set num-lanes in the device tree. I'm wondering
> > > > > > if, when set in device-tree, it will have the desired effect. Because I don't
> > > > > > see anything similar to PCIE_LCTLSTS_HW_AW_DIS in dw_pcie_setup which is what
> > > > > > your function does here.
> > > > > > 
> > > > > > I guess I'm trying to avoid the situation where num-lanes doesn't have the
> > > > > > desired effect and the only way to set the num-lanes is through the sysfs
> > > > > > control.
> > > > > I will check this and get back to you.
> > > intel_pcie_max_link_width_setup() performs the lane resizing, which is
> > > different from the link-up/establishment that happens during probe. Also,
> > > the default value of PCIE_LCTLSTS_HW_AW_DIS is 0, so it is not set during
> > > probe or dw_pcie_setup().
> > > 
> > > intel_pcie_max_link_width_setup() is programming as per the designware PCIe
> > > controller databook instructions for lane resizing.
> > > 
> > > Below lines are from Designware PCIe databook for lane resizing.
> > > 
> > > - Program the TARGET_LINK_WIDTH[5:0] field of the MULTI_LANE_CONTROL_OFF
> > >   register.
> > > - Program the DIRECT_LINK_WIDTH_CHANGE2 field of the MULTI_LANE_CONTROL_OFF
> > >   register.
> > > - It is assumed that the PCIE_CAP_HW_AUTO_WIDTH_DISABLE field in the
> > >   LINK_CONTROL_LINK_STATUS_REG register is 0.
> > OK, so there is a difference between initial training and then resizing
> > of the link. Given that this is not Intel specific, should this function
> > exist within the designware driver for the benefit of others?
> I am ok to add if maintainer agrees with it.
> > 
> > > 
> > > > > > > > > +}
> > > > > > > > > +
> > > > > > > > > +static void intel_pcie_port_logic_setup(struct intel_pcie_port *lpp)
> > > > > > > > > +{
> > > > > > > > > +	u32 val, mask, fts;
> > > > > > > > > +
> > > > > > > > > +	switch (lpp->max_speed) {
> > > > > > > > > +	case PCIE_LINK_SPEED_GEN1:
> > > > > > > > > +	case PCIE_LINK_SPEED_GEN2:
> > > > > > > > > +		fts = PCIE_AFR_GEN12_FTS_NUM_DFT;
> > > > > > > > > +		break;
> > > > > > > > > +	case PCIE_LINK_SPEED_GEN3:
> > > > > > > > > +		fts = PCIE_AFR_GEN3_FTS_NUM_DFT;
> > > > > > > > > +		break;
> > > > > > > > > +	case PCIE_LINK_SPEED_GEN4:
> > > > > > > > > +		fts = PCIE_AFR_GEN4_FTS_NUM_DFT;
> > > > > > > > > +		break;
> > > > > > > > > +	default:
> > > > > > > > > +		fts = PCIE_AFR_GEN12_FTS_NUM_DFT;
> > > > > > > > > +		break;
> > > > > > > > > +	}
> > > > > > > > > +	mask = PCIE_AFR_FTS_NUM | PCIE_AFR_COM_FTS_NUM;
> > > > > > > > > +	val = FIELD_PREP(PCIE_AFR_FTS_NUM, fts) |
> > > > > > > > > +	       FIELD_PREP(PCIE_AFR_COM_FTS_NUM, fts);
> > > > > > > > > +	pcie_rc_cfg_wr_mask(lpp, mask, val, PCIE_AFR);
> > > > > > > > > +
> > > > > > > > > +	/* Port Link Control Register */
> > > > > > > > > +	pcie_rc_cfg_wr_mask(lpp, PCIE_PLCR_DLL_LINK_EN,
> > > > > > > > > +			    PCIE_PLCR_DLL_LINK_EN, PCIE_PORT_LINK_CONTROL);
> > > > > > > > > +}
> > > > > > > > > +
> > > > > > > > > +static void intel_pcie_upconfig_setup(struct intel_pcie_port *lpp)
> > > > > > > > > +{
> > > > > > > > > +	pcie_rc_cfg_wr_mask(lpp, PCIE_UPCONFIG_SUPPORT,
> > > > > > > > > +			    PCIE_UPCONFIG_SUPPORT, PCIE_MULTI_LANE_CTRL);
> > > > > > > > > +}
> > > > > > > > > +
> > > > > > > > > +static void intel_pcie_rc_setup(struct intel_pcie_port *lpp)
> > > > > > > > > +{
> > > > > > > > > +	intel_pcie_ltssm_disable(lpp);
> > > > > > > > > +	intel_pcie_link_setup(lpp);
> > > > > > > > > +	dw_pcie_setup_rc(&lpp->pci.pp);
> > > > > > > > > +	intel_pcie_upconfig_setup(lpp);
> > > > > > > > > +
> > > > > > > > > +	intel_pcie_max_speed_setup(lpp);
> > > > > > > > > +	intel_pcie_speed_change_enable(lpp);
> > > > > > > > > +	intel_pcie_port_logic_setup(lpp);
> > > > > > > > > +	intel_pcie_mem_iatu(lpp);
> > > > > > > > Doesn't dw_pcie_setup_rc do the same as intel_pcie_mem_iatu?
> > > > > > > Thanks for pointing it out. dw_pcie_setup_rc() does the same;
> > > > > > > intel_pcie_mem_iatu() can be removed.
> > > > > > Excellent.
> > > > > > 
> > > > > > > > > +}
> > > > > > > > > +
> > > > > > > > > +static int intel_pcie_ep_rst_init(struct intel_pcie_port *lpp)
> > > > > > > > > +{
> > > > > > > > > +	struct device *dev = lpp->pci.dev;
> > > > > > > > > +	int ret;
> > > > > > > > > +
> > > > > > > > > +	lpp->reset_gpio = devm_gpiod_get(dev, "reset", GPIOD_OUT_LOW);
> > > > > > > > > +	if (IS_ERR(lpp->reset_gpio)) {
> > > > > > > > > +		ret = PTR_ERR(lpp->reset_gpio);
> > > > > > > > > +		if (ret != -EPROBE_DEFER)
> > > > > > > > > +			dev_err(dev, "failed to request PCIe GPIO: %d\n", ret);
> > > > > > > > > +		return ret;
> > > > > > > > > +	}
> > > > > > > > > +	/* Make initial reset last for 100us */
> > > > > > > > > +	usleep_range(100, 200);
> > > > > > > > > +
> > > > > > > > > +	return 0;
> > > > > > > > > +}
> > > > > > > > > +
> > > > > > > > > +static void intel_pcie_core_rst_assert(struct intel_pcie_port *lpp)
> > > > > > > > > +{
> > > > > > > > > +	reset_control_assert(lpp->core_rst);
> > > > > > > > > +}
> > > > > > > > > +
> > > > > > > > > +static void intel_pcie_core_rst_deassert(struct intel_pcie_port *lpp)
> > > > > > > > > +{
> > > > > > > > > +	/*
> > > > > > > > > +	 * One micro-second delay to make sure the reset pulse
> > > > > > > > > +	 * wide enough so that core reset is clean.
> > > > > > > > > +	 */
> > > > > > > > > +	udelay(1);
> > > > > > > > > +	reset_control_deassert(lpp->core_rst);
> > > > > > > > > +
> > > > > > > > > +	/*
> > > > > > > > > +	 * Some SoC core reset also reset PHY, more delay needed
> > > > > > > > > +	 * to make sure the reset process is done.
> > > > > > > > > +	 */
> > > > > > > > > +	usleep_range(1000, 2000);
> > > > > > > > > +}
> > > > > > > > > +
> > > > > > > > > +static void intel_pcie_device_rst_assert(struct intel_pcie_port *lpp)
> > > > > > > > > +{
> > > > > > > > > +	gpiod_set_value_cansleep(lpp->reset_gpio, 1);
> > > > > > > > > +}
> > > > > > > > > +
> > > > > > > > > +static void intel_pcie_device_rst_deassert(struct intel_pcie_port *lpp)
> > > > > > > > > +{
> > > > > > > > > +	msleep(lpp->rst_interval);
> > > > > > > > > +	gpiod_set_value_cansleep(lpp->reset_gpio, 0);
> > > > > > > > > +}
> > > > > > > > > +
> > > > > > > > > +static int intel_pcie_app_logic_setup(struct intel_pcie_port *lpp)
> > > > > > > > > +{
> > > > > > > > > +	intel_pcie_device_rst_deassert(lpp);
> > > > > > > > > +	intel_pcie_ltssm_enable(lpp);
> > > > > > > > > +
> > > > > > > > > +	return dw_pcie_wait_for_link(&lpp->pci);
> > > > > > > > > +}
> > > > > > > > > +
> > > > > > > > > +static irqreturn_t intel_pcie_core_isr(int irq, void *arg)
> > > > > > > > > +{
> > > > > > > > > +	struct intel_pcie_port *lpp = arg;
> > > > > > > > > +	u32 val, reg;
> > > > > > > > > +
> > > > > > > > > +	reg = pcie_app_rd(lpp, PCIE_APP_IRNCR);
> > > > > > > > > +	val = reg & PCIE_APP_IRN_INT;
> > > > > > > > > +
> > > > > > > > > +	pcie_app_wr(lpp, val, PCIE_APP_IRNCR);
> > > > > > > > > +
> > > > > > > > > +	trace_printk("PCIe misc interrupt status 0x%x\n", reg);
> > > > > > > > > +	return IRQ_HANDLED;
> > > > > > > > > +}
> > > > > > > > Why do we bother handling this interrupt?
> > > > > > > This helps during debugging.
> > > > > > I think it should be removed. It adds very little value to most users.
> > > > > > 
> > > > > > Most users won't have access to the datasheets to debug this properly, and
> > > > > > in any case if they could, then they would be competent to add an interrupt
> > > > > > handler themselves.
> > > > > IMO, having this will help to get the basic hardware interrupt status during
> > > > > debugging.
> > > > > And users can also extend the handler as needed.
> > > > > I think keeping it is more beneficial than removing it.
> > > > > Please let me know your view.
> > > > I'd much prefer to remove this.
> > > Sure, I am OK to remove it.
> > > > > > > > > +
> > > > > > > > > +static int intel_pcie_setup_irq(struct intel_pcie_port *lpp)
> > > > > > > > > +{
> > > > > > > > > +	struct device *dev = lpp->pci.dev;
> > > > > > > > > +	struct platform_device *pdev;
> > > > > > > > > +	char *irq_name;
> > > > > > > > > +	int irq, ret;
> > > > > > > > > +
> > > > > > > > > +	pdev = to_platform_device(dev);
> > > > > > > > > +	irq = platform_get_irq(pdev, 0);
> > > > > > > > > +	if (irq < 0) {
> > > > > > > > > +		dev_err(dev, "missing sys integrated irq resource\n");
> > > > > > > > > +		return irq;
> > > > > > > > > +	}
> > > > > > > > > +
> > > > > > > > > +	irq_name = devm_kasprintf(dev, GFP_KERNEL, "pcie_misc%d", lpp->id);
> > > > > > > > > +	if (!irq_name) {
> > > > > > > > > +		dev_err(dev, "failed to alloc irq name\n");
> > > > > > > > > +		return -ENOMEM;
> > > > > > > > > +	}
> > > > > > > > > +
> > > > > > > > > +	ret = devm_request_irq(dev, irq, intel_pcie_core_isr,
> > > > > > > > > +			       IRQF_SHARED, irq_name, lpp);
> > > > > > > > > +	if (ret) {
> > > > > > > > > +		dev_err(dev, "request irq %d failed\n", irq);
> > > > > > > > > +		return ret;
> > > > > > > > > +	}
> > > > > > > > > +	/* Enable integrated interrupts */
> > > > > > > > > +	pcie_app_wr_mask(lpp, PCIE_APP_IRN_INT,
> > > > > > > > > +			 PCIE_APP_IRN_INT, PCIE_APP_IRNEN);
> > > > > > > > > +
> > > > > > > > > +	return ret;
> > > > > > > > > +}
> > > > > > > > > +
> > > > > > > > > +static void intel_pcie_core_irq_disable(struct intel_pcie_port *lpp)
> > > > > > > > > +{
> > > > > > > > > +	pcie_app_wr(lpp, 0, PCIE_APP_IRNEN);
> > > > > > > > > +	pcie_app_wr(lpp, PCIE_APP_IRN_INT,  PCIE_APP_IRNCR);
> > > > > > > > > +}
> > > > > > > > > +
> > > > > > > > > +static int intel_pcie_get_resources(struct platform_device *pdev)
> > > > > > > > > +{
> > > > > > > > > +	struct intel_pcie_port *lpp;
> > > > > > > > > +	struct resource *res;
> > > > > > > > > +	struct dw_pcie *pci;
> > > > > > > > > +	struct device *dev;
> > > > > > > > > +	int ret;
> > > > > > > > > +
> > > > > > > > > +	lpp = platform_get_drvdata(pdev);
> > > > > > > > > +	pci = &lpp->pci;
> > > > > > > > > +	dev = pci->dev;
> > > > > > > > > +
> > > > > > > > > +	ret = device_property_read_u32(dev, "linux,pci-domain", &lpp->id);
> > > > > > > > > +	if (ret) {
> > > > > > > > > +		dev_err(dev, "failed to get domain id, errno %d\n", ret);
> > > > > > > > > +		return ret;
> > > > > > > > > +	}
> > > > > > > > > +
> > > > > > > > > +	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi");
> > > > > > > > > +	if (!res)
> > > > > > > > > +		return -ENXIO;
> > > > > > > > > +
> > > > > > > > > +	pci->dbi_base = devm_ioremap_resource(dev, res);
> > > > > > > > > +	if (IS_ERR(pci->dbi_base))
> > > > > > > > > +		return PTR_ERR(pci->dbi_base);
> > > > > > > > > +
> > > > > > > > > +	lpp->core_clk = devm_clk_get(dev, NULL);
> > > > > > > > > +	if (IS_ERR(lpp->core_clk)) {
> > > > > > > > > +		ret = PTR_ERR(lpp->core_clk);
> > > > > > > > > +		if (ret != -EPROBE_DEFER)
> > > > > > > > > +			dev_err(dev, "failed to get clks: %d\n", ret);
> > > > > > > > > +		return ret;
> > > > > > > > > +	}
> > > > > > > > > +
> > > > > > > > > +	lpp->core_rst = devm_reset_control_get(dev, NULL);
> > > > > > > > > +	if (IS_ERR(lpp->core_rst)) {
> > > > > > > > > +		ret = PTR_ERR(lpp->core_rst);
> > > > > > > > > +		if (ret != -EPROBE_DEFER)
> > > > > > > > > +			dev_err(dev, "failed to get resets: %d\n", ret);
> > > > > > > > > +		return ret;
> > > > > > > > > +	}
> > > > > > > > > +
> > > > > > > > > +	ret = device_property_match_string(dev, "device_type", "pci");
> > > > > > > > > +	if (ret) {
> > > > > > > > > +		dev_err(dev, "failed to find pci device type: %d\n", ret);
> > > > > > > > > +		return ret;
> > > > > > > > > +	}
> > > > > > > > > +
> > > > > > > > > +	if (device_property_read_u32(dev, "reset-assert-ms",
> > > > > > > > > +				     &lpp->rst_interval))
> > > > > > > > > +		lpp->rst_interval = RST_INTRVL_DFT_MS;
> > > > > > > > > +
> > > > > > > > > +	if (device_property_read_u32(dev, "max-link-speed", &lpp->link_gen))
> > > > > > > > > +		lpp->link_gen = 0; /* Fallback to auto */
> > > > > > > > Is it possible to use of_pci_get_max_link_speed here instead?
> > > > > > > Thanks for pointing it out. of_pci_get_max_link_speed() can be used here. I will
> > > > > > > update it in the next patch revision.
> > > I just remembered: earlier we were using of_pci_get_max_link_speed() itself.
> > > Per reviewer comments it was changed to device_property_read_u32() to keep
> > > the parsing of device tree properties from the device node consistent.
> > > Let me know your view.
> > I couldn't find an earlier version of this series that used of_pci_get_max_link_speed,
> > have I missed it somewhere?
> It happened in our internal review.
> What is your suggestion, please: go with consistent parsing via
> "device_property_read_u32()", or with the PCI helper function
> "of_pci_get_max_link_speed()"?

I'd prefer the helper, the added benefit of this is additional error checking.
It also means users can be confident that max-link-speed will behave in the
same way as other host controllers that use this field.

Thanks,

Andrew Murray

> > 
> > > > > > > > > +
> > > > > > > > > +	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "app");
> > > > > > > > > +	if (!res)
> > > > > > > > > +		return -ENXIO;
> > > > > > > > > +
> > > > > > > > > +	lpp->app_base = devm_ioremap_resource(dev, res);
> > > > > > > > > +	if (IS_ERR(lpp->app_base))
> > > > > > > > > +		return PTR_ERR(lpp->app_base);
> > > > > > > > > +
> > > > > > > > > +	ret = intel_pcie_ep_rst_init(lpp);
> > > > > > > > > +	if (ret)
> > > > > > > > > +		return ret;
> > > > > > > > Given that this is called from a function '..._get_resources' I don't think
> > > > > > > > we should be resetting anything here.
> > > > > > > Agree. I will move it out of get_resources().
> > > > > > > > > +
> > > > > > > > > +	lpp->phy = devm_phy_get(dev, "pciephy");
> > > > > > > > > +	if (IS_ERR(lpp->phy)) {
> > > > > > > > > +		ret = PTR_ERR(lpp->phy);
> > > > > > > > > +		if (ret != -EPROBE_DEFER)
> > > > > > > > > +			dev_err(dev, "couldn't get pcie-phy: %d\n", ret);
> > > > > > > > > +		return ret;
> > > > > > > > > +	}
> > > > > > > > > +	return 0;
> > > > > > > > > +}
> > > > > > > > > +
> > > > > > > > > +static void intel_pcie_deinit_phy(struct intel_pcie_port *lpp)
> > > > > > > > > +{
> > > > > > > > > +	phy_exit(lpp->phy);
> > > > > > > > > +}
> > > > > > > > > +
> > > > > > > > > +static int intel_pcie_wait_l2(struct intel_pcie_port *lpp)
> > > > > > > > > +{
> > > > > > > > > +	u32 value;
> > > > > > > > > +	int ret;
> > > > > > > > > +
> > > > > > > > > +	if (lpp->max_speed < PCIE_LINK_SPEED_GEN3)
> > > > > > > > > +		return 0;
> > > > > > > > > +
> > > > > > > > > +	/* Send PME_TURN_OFF message */
> > > > > > > > > +	pcie_app_wr_mask(lpp, PCIE_APP_MSG_XMT_PM_TURNOFF,
> > > > > > > > > +			 PCIE_APP_MSG_XMT_PM_TURNOFF, PCIE_APP_MSG_CR);
> > > > > > > > > +
> > > > > > > > > +	/* Read PMC status and wait for falling into L2 link state */
> > > > > > > > > +	ret = readl_poll_timeout(lpp->app_base + PCIE_APP_PMC, value,
> > > > > > > > > +				 (value & PCIE_APP_PMC_IN_L2), 20,
> > > > > > > > > +				 jiffies_to_usecs(5 * HZ));
> > > > > > > > > +	if (ret)
> > > > > > > > > +		dev_err(lpp->pci.dev, "PCIe link enter L2 timeout!\n");
> > > > > > > > > +
> > > > > > > > > +	return ret;
> > > > > > > > > +}
> > > > > > > > > +
> > > > > > > > > +static void intel_pcie_turn_off(struct intel_pcie_port *lpp)
> > > > > > > > > +{
> > > > > > > > > +	if (dw_pcie_link_up(&lpp->pci))
> > > > > > > > > +		intel_pcie_wait_l2(lpp);
> > > > > > > > > +
> > > > > > > > > +	/* Put EP in reset state */
> > > > > > > > EP?
> > > > > > > Endpoint device. I will update it.
> > > > > > Is this not a host bridge controller?
> > > > > It is PERST#, which signals a hardware reset to the endpoint:
> > > > >           /* Put EP in reset state */
> > > > >           intel_pcie_device_rst_assert(lpp);
> > > > OK.
> > > > 
> > > > > > > > > +	intel_pcie_device_rst_assert(lpp);
> > > > > > > > > +	pcie_rc_cfg_wr_mask(lpp, PCI_COMMAND_MEMORY, 0, PCI_COMMAND);
> > > > > > > > > +}
> > > > > > > > > +
> > > > > > > > > +static int intel_pcie_host_setup(struct intel_pcie_port *lpp)
> > > > > > > > > +{
> > > > > > > > > +	int ret;
> > > > > > > > > +
> > > > > > > > > +	intel_pcie_core_rst_assert(lpp);
> > > > > > > > > +	intel_pcie_device_rst_assert(lpp);
> > > > > > > > > +
> > > > > > > > > +	ret = phy_init(lpp->phy);
> > > > > > > > > +	if (ret)
> > > > > > > > > +		return ret;
> > > > > > > > > +
> > > > > > > > > +	intel_pcie_core_rst_deassert(lpp);
> > > > > > > > > +
> > > > > > > > > +	ret = clk_prepare_enable(lpp->core_clk);
> > > > > > > > > +	if (ret) {
> > > > > > > > > +		dev_err(lpp->pci.dev, "Core clock enable failed: %d\n", ret);
> > > > > > > > > +		goto clk_err;
> > > > > > > > > +	}
> > > > > > > > > +
> > > > > > > > > +	intel_pcie_rc_setup(lpp);
> > > > > > > > > +	ret = intel_pcie_app_logic_setup(lpp);
> > > > > > > > > +	if (ret)
> > > > > > > > > +		goto app_init_err;
> > > > > > > > > +
> > > > > > > > > +	ret = intel_pcie_setup_irq(lpp);
> > > > > > > > > +	if (!ret)
> > > > > > > > > +		return ret;
> > > > > > > > > +
> > > > > > > > > +	intel_pcie_turn_off(lpp);
> > > > > > > > > +app_init_err:
> > > > > > > > > +	clk_disable_unprepare(lpp->core_clk);
> > > > > > > > > +clk_err:
> > > > > > > > > +	intel_pcie_core_rst_assert(lpp);
> > > > > > > > > +	intel_pcie_deinit_phy(lpp);
> > > > > > > > > +	return ret;
> > > > > > > > > +}
> > > > > > > > > +
> > > > > > > > > +static ssize_t
> > > > > > > > > +pcie_link_status_show(struct device *dev, struct device_attribute *attr,
> > > > > > > > > +		      char *buf)
> > > > > > > > > +{
> > > > > > > > > +	u32 reg, width, gen;
> > > > > > > > > +	struct intel_pcie_port *lpp;
> > > > > > > > > +
> > > > > > > > > +	lpp = dev_get_drvdata(dev);
> > > > > > > > > +
> > > > > > > > > +	reg = pcie_rc_cfg_rd(lpp, PCIE_LCTLSTS);
> > > > > > > > > +	width = FIELD_GET(PCIE_LCTLSTS_NEGOTIATED_LINK_WIDTH, reg);
> > > > > > > > > +	gen = FIELD_GET(PCIE_LCTLSTS_LINK_SPEED, reg);
> > > > > > > > > +	if (gen > lpp->max_speed)
> > > > > > > > > +		return -EINVAL;
> > > > > > > > > +
> > > > > > > > > +	return sprintf(buf, "Port %2u Width x%u Speed %s GT/s\n", lpp->id,
> > > > > > > > > +		       width, pcie_link_gen_to_str(gen));
> > > > > > > > > +}
> > > > > > > > > +static DEVICE_ATTR_RO(pcie_link_status);
> > > > > > > > > +
> > > > > > > > > +static ssize_t pcie_speed_store(struct device *dev,
> > > > > > > > > +				struct device_attribute *attr,
> > > > > > > > > +				const char *buf, size_t len)
> > > > > > > > > +{
> > > > > > > > > +	struct intel_pcie_port *lpp;
> > > > > > > > > +	unsigned long val;
> > > > > > > > > +	int ret;
> > > > > > > > > +
> > > > > > > > > +	lpp = dev_get_drvdata(dev);
> > > > > > > > > +
> > > > > > > > > +	ret = kstrtoul(buf, 10, &val);
> > > > > > > > > +	if (ret)
> > > > > > > > > +		return ret;
> > > > > > > > > +
> > > > > > > > > +	if (val > lpp->max_speed)
> > > > > > > > > +		return -EINVAL;
> > > > > > > > > +
> > > > > > > > > +	lpp->link_gen = val;
> > > > > > > > > +	intel_pcie_max_speed_setup(lpp);
> > > > > > > > > +	intel_pcie_speed_change_disable(lpp);
> > > > > > > > > +	intel_pcie_speed_change_enable(lpp);
> > > > > > > > > +
> > > > > > > > > +	return len;
> > > > > > > > > +}
> > > > > > > > > +static DEVICE_ATTR_WO(pcie_speed);
> > > > > > > > > +
> > > > > > > > > +/*
> > > > > > > > > + * Link width change on the fly is not always successful.
> > > > > > > > > + * It also depends on the partner.
> > > > > > > > > + */
> > > > > > > > > +static ssize_t pcie_width_store(struct device *dev,
> > > > > > > > > +				struct device_attribute *attr,
> > > > > > > > > +				const char *buf, size_t len)
> > > > > > > > > +{
> > > > > > > > > +	struct intel_pcie_port *lpp;
> > > > > > > > > +	unsigned long val;
> > > > > > > > > +
> > > > > > > > > +	lpp = dev_get_drvdata(dev);
> > > > > > > > > +
> > > > > > > > > +	if (kstrtoul(buf, 10, &val))
> > > > > > > > > +		return -EINVAL;
> > > > > > > > > +
> > > > > > > > > +	if (val > lpp->max_width)
> > > > > > > > > +		return -EINVAL;
> > > > > > > > > +
> > > > > > > > > +	lpp->lanes = val;
> > > > > > > > > +	intel_pcie_max_link_width_setup(lpp);
> > > > > > > > > +
> > > > > > > > > +	return len;
> > > > > > > > > +}
> > > > > > > > > +static DEVICE_ATTR_WO(pcie_width);
> > > > > > > > You mentioned that a use-case for changing width/speed on the fly was to
> > > > > > > > control power consumption (and this also helps debugging issues). As I
> > > > > > > > understand there is no current support for this in the kernel - yet it is
> > > > > > > > something that would provide value.
> > > > > > > > 
> > > > > > > > I haven't looked in much detail, however as I understand the PCI spec
> > > > > > > > allows an upstream partner to change the link speed and retrain. (I'm not
> > > > > > > > sure about link width). Given that we already have
> > > > > > > > [current,max]_link_[speed,width] is sysfs for each PCI device, it would
> > > > > > > > seem natural to extend this to allow for writing a max width or speed.
> > > > > > > > 
> > > > > > > > So ideally this type of thing would be moved to the core or at least in
> > > > > > > > the dwc driver. This way the benefits can be applied to more devices on
> > > > > > > > larger PCI fabrics - Though perhaps others here will have a different view
> > > > > > > > and I'm keen to hear them.
> > > > > > > > 
> > > > > > > > I'm keen to limit the differences between the DWC controller drivers and
> > > > > > > > unify common code - thus it would be helpful to have a justification as to
> > > > > > > > why this is only relevant for this controller.
> > > > > > > > 
> > > > > > > > For user-space only control, it is possible to achieve what you have here
> > > > > > > > with userspace utilities (something like setpci) (assuming the standard
> > > > > > > > looking registers you currently access are exposed in the normal config
> > > > > > > > space way - though PCIe capabilities).
> > > > > > > > 
> > > > > > > > My suggestion would be to drop these changes and later add something that
> > > > > > > > can benefit more devices. In any case if these changes stay within this
> > > > > > > > driver then it would be helpful to move them to a separate patch.
> > > > > > > Sure, I will submit these in a separate patch.
> > > > > > Please ensure that all supporting macros, functions and defines go with that
> > > > > > other patch as well please.
> > > > > Sure.
> > > > > > > > > +
> > > > > > > > > +static struct attribute *pcie_cfg_attrs[] = {
> > > > > > > > > +	&dev_attr_pcie_link_status.attr,
> > > > > > > > > +	&dev_attr_pcie_speed.attr,
> > > > > > > > > +	&dev_attr_pcie_width.attr,
> > > > > > > > > +	NULL,
> > > > > > > > > +};
> > > > > > > > > +ATTRIBUTE_GROUPS(pcie_cfg);
> > > > > > > > > +
> > > > > > > > > +static int intel_pcie_sysfs_init(struct intel_pcie_port *lpp)
> > > > > > > > > +{
> > > > > > > > > +	return devm_device_add_groups(lpp->pci.dev, pcie_cfg_groups);
> > > > > > > > > +}
> > > > > > > > > +
> > > > > > > > > +static void __intel_pcie_remove(struct intel_pcie_port *lpp)
> > > > > > > > > +{
> > > > > > > > > +	pcie_rc_cfg_wr_mask(lpp, PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER,
> > > > > > > > > +			    0, PCI_COMMAND);
> > > > > > > > > +	intel_pcie_core_irq_disable(lpp);
> > > > > > > > > +	intel_pcie_turn_off(lpp);
> > > > > > > > > +	clk_disable_unprepare(lpp->core_clk);
> > > > > > > > > +	intel_pcie_core_rst_assert(lpp);
> > > > > > > > > +	intel_pcie_deinit_phy(lpp);
> > > > > > > > > +}
> > > > > > > > > +
> > > > > > > > > +static int intel_pcie_remove(struct platform_device *pdev)
> > > > > > > > > +{
> > > > > > > > > +	struct intel_pcie_port *lpp = platform_get_drvdata(pdev);
> > > > > > > > > +	struct pcie_port *pp = &lpp->pci.pp;
> > > > > > > > > +
> > > > > > > > > +	dw_pcie_host_deinit(pp);
> > > > > > > > > +	__intel_pcie_remove(lpp);
> > > > > > > > > +
> > > > > > > > > +	return 0;
> > > > > > > > > +}
> > > > > > > > > +
> > > > > > > > > +static int __maybe_unused intel_pcie_suspend_noirq(struct device *dev)
> > > > > > > > > +{
> > > > > > > > > +	struct intel_pcie_port *lpp = dev_get_drvdata(dev);
> > > > > > > > > +	int ret;
> > > > > > > > > +
> > > > > > > > > +	intel_pcie_core_irq_disable(lpp);
> > > > > > > > > +	ret = intel_pcie_wait_l2(lpp);
> > > > > > > > > +	if (ret)
> > > > > > > > > +		return ret;
> > > > > > > > > +
> > > > > > > > > +	intel_pcie_deinit_phy(lpp);
> > > > > > > > > +	clk_disable_unprepare(lpp->core_clk);
> > > > > > > > > +	return ret;
> > > > > > > > > +}
> > > > > > > > > +
> > > > > > > > > +static int __maybe_unused intel_pcie_resume_noirq(struct device *dev)
> > > > > > > > > +{
> > > > > > > > > +	struct intel_pcie_port *lpp = dev_get_drvdata(dev);
> > > > > > > > > +
> > > > > > > > > +	return intel_pcie_host_setup(lpp);
> > > > > > > > > +}
> > > > > > > > > +
> > > > > > > > > +static int intel_pcie_rc_init(struct pcie_port *pp)
> > > > > > > > > +{
> > > > > > > > > +	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
> > > > > > > > > +	struct intel_pcie_port *lpp = dev_get_drvdata(pci->dev);
> > > > > > > > > +	int ret;
> > > > > > > > > +
> > > > > > > > > +	/* RC/host initialization */
> > > > > > > > > +	ret = intel_pcie_host_setup(lpp);
> > > > > > > > > +	if (ret)
> > > > > > > > > +		return ret;
> > > > > > > > Insert new line here.
> > > > > > > Ok.
> > > > > > > > > +	ret = intel_pcie_sysfs_init(lpp);
> > > > > > > > > +	if (ret)
> > > > > > > > > +		__intel_pcie_remove(lpp);
> > > > > > > > > +	return ret;
> > > > > > > > > +}
> > > > > > > > > +
> > > > > > > > > +int intel_pcie_msi_init(struct pcie_port *pp)
> > > > > > > > > +{
> > > > > > > > > +	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
> > > > > > > > > +
> > > > > > > > > +	dev_dbg(pci->dev, "PCIe MSI/MSIx is handled by MSI in x86 processor\n");
> > > > > > > > What about other processors? (Noting that this driver doesn't depend on
> > > > > > > > any specific ARCH in the KConfig).
> > > > > > > Agreed. I will mark the dependency in Kconfig.
> > > > > > OK, please also see how other drivers use the COMPILE_TEST Kconfig option.
> > > > > Ok sure.
> > > > > > I'd suggest that the dev_dbg just becomes a code comment.
> > > Ok
> > > > > > > > > +	return 0;
> > > > > > > > > +}
> > > > > > > > > +
> > > > > > > > > +u64 intel_pcie_cpu_addr(struct dw_pcie *pcie, u64 cpu_addr)
> > > > > > > > > +{
> > > > > > > > > +	return cpu_addr + BUS_IATU_OFFS;
> > > > > > > > > +}
> > > > > > > > > +
> > > > > > > > > +static const struct dw_pcie_ops intel_pcie_ops = {
> > > > > > > > > +	.cpu_addr_fixup = intel_pcie_cpu_addr,
> > > > > > > > > +};
> > > > > > > > > +
> > > > > > > > > +static const struct dw_pcie_host_ops intel_pcie_dw_ops = {
> > > > > > > > > +	.host_init =		intel_pcie_rc_init,
> > > > > > > > > +	.msi_host_init =	intel_pcie_msi_init,
> > > > > > > > > +};
> > > > > > > > > +
> > > > > > > > > +static const struct intel_pcie_soc pcie_data = {
> > > > > > > > > +	.pcie_ver =		0x520A,
> > > > > > > > > +	.pcie_atu_offset =	0xC0000,
> > > > > > > > > +	.num_viewport =		3,
> > > > > > > > > +};
> > > > > > > > > +
> > > > > > > > > +static int intel_pcie_probe(struct platform_device *pdev)
> > > > > > > > > +{
> > > > > > > > > +	const struct intel_pcie_soc *data;
> > > > > > > > > +	struct device *dev = &pdev->dev;
> > > > > > > > > +	struct intel_pcie_port *lpp;
> > > > > > > > > +	struct pcie_port *pp;
> > > > > > > > > +	struct dw_pcie *pci;
> > > > > > > > > +	int ret;
> > > > > > > > > +
> > > > > > > > > +	lpp = devm_kzalloc(dev, sizeof(*lpp), GFP_KERNEL);
> > > > > > > > > +	if (!lpp)
> > > > > > > > > +		return -ENOMEM;
> > > > > > > > > +
> > > > > > > > > +	platform_set_drvdata(pdev, lpp);
> > > > > > > > > +	pci = &lpp->pci;
> > > > > > > > > +	pci->dev = dev;
> > > > > > > > > +	pp = &pci->pp;
> > > > > > > > > +
> > > > > > > > > +	ret = intel_pcie_get_resources(pdev);
> > > > > > > > > +	if (ret)
> > > > > > > > > +		return ret;
> > > > > > > > > +
> > > > > > > > > +	data = device_get_match_data(dev);
> > > > > > > > > +	pci->ops = &intel_pcie_ops;
> > > > > > > > > +	pci->version = data->pcie_ver;
> > > > > > > > > +	pci->atu_base = pci->dbi_base + data->pcie_atu_offset;
> > > > > > > > > +	pp->ops = &intel_pcie_dw_ops;
> > > > > > > > > +
> > > > > > > > > +	ret = dw_pcie_host_init(pp);
> > > > > > > > > +	if (ret) {
> > > > > > > > > +		dev_err(dev, "cannot initialize host\n");
> > > > > > > > > +		return ret;
> > > > > > > > > +	}
> > > > > > > > Add a new line after the closing brace.
> > > > > > > Ok
> > > > > > > > > +	/* Intel PCIe doesn't configure IO region, so configure
> > > > > > > > > +	 * viewport to not to access IO region during register
> > > > > > > > > +	 * read write operations.
> > > > > > > > > +	 */
> > > > > > > > > +	pci->num_viewport = data->num_viewport;
> > > > > > > > > +	dev_info(dev,
> > > > > > > > > +		 "Intel AXI PCIe Root Complex Port %d Init Done\n", lpp->id);
> > > > > > > > > +	return ret;
> > > > > > > > > +}
> > > > > > > > > +
> > > > > > > > > +static const struct dev_pm_ops intel_pcie_pm_ops = {
> > > > > > > > > +	SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(intel_pcie_suspend_noirq,
> > > > > > > > > +				      intel_pcie_resume_noirq)
> > > > > > > > > +};
> > > > > > > > > +
> > > > > > > > > +static const struct of_device_id of_intel_pcie_match[] = {
> > > > > > > > > +	{ .compatible = "intel,lgm-pcie", .data = &pcie_data },
> > > > > > > > > +	{}
> > > > > > > > > +};
> > > > > > > > > +
> > > > > > > > > +static struct platform_driver intel_pcie_driver = {
> > > > > > > > > +	.probe = intel_pcie_probe,
> > > > > > > > > +	.remove = intel_pcie_remove,
> > > > > > > > > +	.driver = {
> > > > > > > > > +		.name = "intel-lgm-pcie",
> > > > > > > > Is there a reason why the we use intel-lgm-pcie here and pcie-intel-axi
> > > > > > > > elsewhere? What does lgm mean?
> > > > > > > LGM is the name of the Intel SoC. I will rename it to pcie-intel-axi to be
> > > > > > > generic.
> > > > > > I'm keen to ensure that it is consistently named. I've seen other comments
> > > > > > regarding what the name should be - I don't have a strong opinion though
> > > > > > I do think that *-axi may be too generic.
> > > This PCIe driver is for the Intel Gateway SoCs. So how about naming it
> > > "pcie-intel-gw": pcie-intel-gw.c, and the Kconfig symbol PCIE_INTEL_GW?
> > I don't have a problem with this, though others may have differing views.
> Sure, thank you.
> > 
> > Thanks,
> > 
> > Andrew Murray
> > 
> > > Regards,
> > > Dilip
> > > 
> > > > > OK, I will check and get back to you on this.
> > > > > Regards,
> > > > Thanks,
> > > > 
> > > > Andrew Murray
> > > > 
> > > > > Dilip
> > > > > 
> > > > > > Thanks,
> > > > > > 
> > > > > > Andrew Murray
> > > > > > 
> > > > > > > > Thanks,
> > > > > > > > 
> > > > > > > > Andrew Murray
> > > > > > > > 
> > > > > > > > > +		.of_match_table = of_intel_pcie_match,
> > > > > > > > > +		.pm = &intel_pcie_pm_ops,
> > > > > > > > > +	},
> > > > > > > > > +};
> > > > > > > > > +builtin_platform_driver(intel_pcie_driver);
> > > > > > > > > -- 
> > > > > > > > > 2.11.0
> > > > > > > > > 

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v3 2/2] dwc: PCI: intel: Intel PCIe RC controller driver
  2019-09-12  8:25                   ` Andrew Murray
@ 2019-09-12  9:23                     ` Dilip Kota
  2019-09-12 10:49                       ` Gustavo Pimentel
  0 siblings, 1 reply; 37+ messages in thread
From: Dilip Kota @ 2019-09-12  9:23 UTC (permalink / raw)
  To: Andrew Murray, gustavo.pimentel
  Cc: jingoohan1, lorenzo.pieralisi, robh, martin.blumenstingl,
	linux-pci, hch, devicetree, linux-kernel, andriy.shevchenko,
	cheol.yong.kim, chuanhua.lei, qi-ming.wu

Quoting Andrew Murray:
Quoting Gustavo Pimentel:

On 9/12/2019 4:25 PM, Andrew Murray wrote:
> [...]
>>>>>>>>>> +static void intel_pcie_max_link_width_setup(struct intel_pcie_port *lpp)
>>>>>>>>>> +{
>>>>>>>>>> +	u32 mask, val;
>>>>>>>>>> +
>>>>>>>>>> +	/* HW auto bandwidth negotiation must be enabled */
>>>>>>>>>> +	pcie_rc_cfg_wr_mask(lpp, PCIE_LCTLSTS_HW_AW_DIS, 0, PCIE_LCTLSTS);
>>>>>>>>>> +
>>>>>>>>>> +	mask = PCIE_DIRECT_LINK_WIDTH_CHANGE | PCIE_TARGET_LINK_WIDTH;
>>>>>>>>>> +	val = PCIE_DIRECT_LINK_WIDTH_CHANGE | lpp->lanes;
>>>>>>>>>> +	pcie_rc_cfg_wr_mask(lpp, mask, val, PCIE_MULTI_LANE_CTRL);
>>>>>>>>> Is this identical functionality to the writing of PCIE_PORT_LINK_CONTROL
>>>>>>>>> in dw_pcie_setup?
>>>>>>>>>
>>>>>>>>> I ask because if the user sets num-lanes in the DT, will it have the
>>>>>>>>> desired effect?
>>>>>>>> intel_pcie_max_link_width_setup() function will be called by sysfs attribute pcie_width_store() to change on the fly.
>>>>>>> Indeed, but a user may also set num-lanes in the device tree. I'm wondering
>>>>>>> if, when set in device-tree, it will have the desired effect. Because I don't
>>>>>>> see anything similar to PCIE_LCTLSTS_HW_AW_DIS in dw_pcie_setup which is what
>>>>>>> your function does here.
>>>>>>>
>>>>>>> I guess I'm trying to avoid the situation where num-lanes doesn't have the
>>>>>>> desired effect and the only way to set the num-lanes is through the sysfs
>>>>>>> control.
>>>>>> I will check this and get back to you.
>>>> intel_pcie_max_link_width_setup() does the lane resizing, which is
>>>> different from the link up/establishment happening during probe. Also,
>>>> the PCIE_LCTLSTS_HW_AW_DIS default value is 0, so it is not set during
>>>> probe or dw_pcie_setup().
>>>>
>>>> intel_pcie_max_link_width_setup() programs the registers as per the
>>>> Designware PCIe controller databook instructions for lane resizing.
>>>>
>>>> Below lines are from Designware PCIe databook for lane resizing.
>>>>
>>>>    Program the TARGET_LINK_WIDTH[5:0] field of the MULTI_LANE_CONTROL_OFF
>>>> register.
>>>>    Program the DIRECT_LINK_WIDTH_CHANGE2 field of the MULTI_LANE_CONTROL_OFF
>>>> register.
>>>> It is assumed that the PCIE_CAP_HW_AUTO_WIDTH_DISABLE field in the
>>>> LINK_CONTROL_LINK_STATUS_REG register is 0.
>>> OK, so there is a difference between initial training and then resizing
>>> of the link. Given that this is not Intel specific, should this function
>>> exist within the designware driver for the benefit of others?
>> I am OK to add it if the maintainer agrees with it.

Gustavo Pimentel,

Could you please let us know your opinion on this?

[...]

>>>>>>>>>> +}
>>>>>>>>>> +
>>>>>>>>>> +static void intel_pcie_port_logic_setup(struct intel_pcie_port *lpp)
>>>>>>>>>> +{
>>>>>>>>>> +	u32 val, mask, fts;
>>>>>>>>>> +
>>>>>>>>>> +	switch (lpp->max_speed) {
>>>>>>>>>> +	case PCIE_LINK_SPEED_GEN1:
>>>>>>>>>> +	case PCIE_LINK_SPEED_GEN2:
>>>>>>>>>> +		fts = PCIE_AFR_GEN12_FTS_NUM_DFT;
>>>>>>>>>> +		break;
[...]
>>>>>>>>>> +
>>>>>>>>>> +	if (device_property_read_u32(dev, "max-link-speed", &lpp->link_gen))
>>>>>>>>>> +		lpp->link_gen = 0; /* Fallback to auto */
>>>>>>>>> Is it possible to use of_pci_get_max_link_speed here instead?
>>>>>>>> Thanks for pointing it. of_pci_get_max_link_speed() can be used here. I will
>>>>>>>> update it in the next patch revision.
>>>> I just remembered, earlier we were using of_pci_get_max_link_speed() itself.
>>>> As per reviewer comments we changed to device_property_read_u32() to maintain
>>>> symmetry in parsing device tree properties from the device node.
>>>> Let me know your view.
>>> I couldn't find an earlier version of this series that used of_pci_get_max_link_speed,
>>> have I missed it somewhere?
>> It happened in our internal review.
>> What's your suggestion, please: go with symmetry in parsing using
>> "device_property_read_u32()", or with the PCI helper function
>> "of_pci_get_max_link_speed()"?
> I'd prefer the helper, the added benefit of this is additional error checking.
> It also means users can be confident that max-link-speed will behave in the
> same way as other host controllers that use this field.
Ok, I will update it in the next patch version.


Regards,

Dilip

>
> Thanks,
>
> Andrew Murray
>
>>>>>>>>>> +
>>>>>>>>>> +	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "app");
>>>>>>>>>> +	if (!res)
>>>>>>>>>> +		return -ENXIO;
>>>>>>>>>> +
>>>>>>>>>> +	lpp->app_base = devm_ioremap_resource(dev, res);
>>>>>>>>>> +	if (IS_ERR(lpp->app_base))
>>>>>>>>>> +		return PTR_ERR(lpp->app_base);
>>>>>>>>>> +
>>>>>>>>>> +	ret = intel_pcie_ep_rst_init(lpp);
>>>>>>>>>> +	if (ret)
>>>>>>>>>> +		return ret;
>>>>>>>>> Given that this is called from a function '..._get_resources' I don't think
>>>>>>>>> we should be resetting anything here.
>>>>>>>> Agree. I will move it out of get_resources().
>>>>>>>>>> +
>>>>>>>>>> +	lpp->phy = devm_phy_get(dev, "pciephy");
>>>>>>>>>> +	if (IS_ERR(lpp->phy)) {
>>>>>>>>>> +		ret = PTR_ERR(lpp->phy);
>>>>>>>>>> +		if (ret != -EPROBE_DEFER)
>>>>>>>>>> +			dev_err(dev, "couldn't get pcie-phy: %d\n", ret);
>>>>>>>>>> +		return ret;
>>>>>>>>>> +	}
>>>>>>>>>> +	return 0;
>>>>>>>>>> +}
>>>>>>>>>> +
>>>>>>>>>> +static void intel_pcie_deinit_phy(struct intel_pcie_port *lpp)
>>>>>>>>>> +{
>>>>>>>>>> +	phy_exit(lpp->phy);
>>>>>>>>>> +}
>>>>>>>>>> +
>>>>>>>>>> +static int intel_pcie_wait_l2(struct intel_pcie_port *lpp)
>>>>>>>>>> +{
>>>>>>>>>> +	u32 value;
>>>>>>>>>> +	int ret;
>>>>>>>>>> +
>>>>>>>>>> +	if (lpp->max_speed < PCIE_LINK_SPEED_GEN3)
>>>>>>>>>> +		return 0;
>>>>>>>>>> +
>>>>>>>>>> +	/* Send PME_TURN_OFF message */
>>>>>>>>>> +	pcie_app_wr_mask(lpp, PCIE_APP_MSG_XMT_PM_TURNOFF,
>>>>>>>>>> +			 PCIE_APP_MSG_XMT_PM_TURNOFF, PCIE_APP_MSG_CR);
>>>>>>>>>> +
>>>>>>>>>> +	/* Read PMC status and wait for falling into L2 link state */
>>>>>>>>>> +	ret = readl_poll_timeout(lpp->app_base + PCIE_APP_PMC, value,
>>>>>>>>>> +				 (value & PCIE_APP_PMC_IN_L2), 20,
>>>>>>>>>> +				 jiffies_to_usecs(5 * HZ));
>>>>>>>>>> +	if (ret)
>>>>>>>>>> +		dev_err(lpp->pci.dev, "PCIe link enter L2 timeout!\n");
>>>>>>>>>> +
>>>>>>>>>> +	return ret;
>>>>>>>>>> +}
>>>>>>>>>> +
>>>>>>>>>> +static void intel_pcie_turn_off(struct intel_pcie_port *lpp)
>>>>>>>>>> +{
>>>>>>>>>> +	if (dw_pcie_link_up(&lpp->pci))
>>>>>>>>>> +		intel_pcie_wait_l2(lpp);
>>>>>>>>>> +
>>>>>>>>>> +	/* Put EP in reset state */
>>>>>>>>> EP?
>>>>>>>> Endpoint device. I will update it.
>>>>>>> Is this not a host bridge controller?
>>>>>> It is PERST#, which signals a hardware reset to the endpoint:
>>>>>>            /* Put EP in reset state */
>>>>>>            intel_pcie_device_rst_assert(lpp);
>>>>> OK.
>>>>>
>>>>>>>>>> +	intel_pcie_device_rst_assert(lpp);
>>>>>>>>>> +	pcie_rc_cfg_wr_mask(lpp, PCI_COMMAND_MEMORY, 0, PCI_COMMAND);
>>>>>>>>>> +}
>>>>>>>>>> +
>>>>>>>>>> +static int intel_pcie_host_setup(struct intel_pcie_port *lpp)
>>>>>>>>>> +{
>>>>>>>>>> +	int ret;
>>>>>>>>>> +
>>>>>>>>>> +	intel_pcie_core_rst_assert(lpp);
>>>>>>>>>> +	intel_pcie_device_rst_assert(lpp);
>>>>>>>>>> +
>>>>>>>>>> +	ret = phy_init(lpp->phy);
>>>>>>>>>> +	if (ret)
>>>>>>>>>> +		return ret;
>>>>>>>>>> +
>>>>>>>>>> +	intel_pcie_core_rst_deassert(lpp);
>>>>>>>>>> +
>>>>>>>>>> +	ret = clk_prepare_enable(lpp->core_clk);
>>>>>>>>>> +	if (ret) {
>>>>>>>>>> +		dev_err(lpp->pci.dev, "Core clock enable failed: %d\n", ret);
>>>>>>>>>> +		goto clk_err;
>>>>>>>>>> +	}
>>>>>>>>>> +
>>>>>>>>>> +	intel_pcie_rc_setup(lpp);
>>>>>>>>>> +	ret = intel_pcie_app_logic_setup(lpp);
>>>>>>>>>> +	if (ret)
>>>>>>>>>> +		goto app_init_err;
>>>>>>>>>> +
>>>>>>>>>> +	ret = intel_pcie_setup_irq(lpp);
>>>>>>>>>> +	if (!ret)
>>>>>>>>>> +		return ret;
>>>>>>>>>> +
>>>>>>>>>> +	intel_pcie_turn_off(lpp);
>>>>>>>>>> +app_init_err:
>>>>>>>>>> +	clk_disable_unprepare(lpp->core_clk);
>>>>>>>>>> +clk_err:
>>>>>>>>>> +	intel_pcie_core_rst_assert(lpp);
>>>>>>>>>> +	intel_pcie_deinit_phy(lpp);
>>>>>>>>>> +	return ret;
>>>>>>>>>> +}
>>>>>>>>>> +
>>>>>>>>>> +static ssize_t
>>>>>>>>>> +pcie_link_status_show(struct device *dev, struct device_attribute *attr,
>>>>>>>>>> +		      char *buf)
>>>>>>>>>> +{
>>>>>>>>>> +	u32 reg, width, gen;
>>>>>>>>>> +	struct intel_pcie_port *lpp;
>>>>>>>>>> +
>>>>>>>>>> +	lpp = dev_get_drvdata(dev);
>>>>>>>>>> +
>>>>>>>>>> +	reg = pcie_rc_cfg_rd(lpp, PCIE_LCTLSTS);
>>>>>>>>>> +	width = FIELD_GET(PCIE_LCTLSTS_NEGOTIATED_LINK_WIDTH, reg);
>>>>>>>>>> +	gen = FIELD_GET(PCIE_LCTLSTS_LINK_SPEED, reg);
>>>>>>>>>> +	if (gen > lpp->max_speed)
>>>>>>>>>> +		return -EINVAL;
>>>>>>>>>> +
>>>>>>>>>> +	return sprintf(buf, "Port %2u Width x%u Speed %s GT/s\n", lpp->id,
>>>>>>>>>> +		       width, pcie_link_gen_to_str(gen));
>>>>>>>>>> +}
>>>>>>>>>> +static DEVICE_ATTR_RO(pcie_link_status);
>>>>>>>>>> +
>>>>>>>>>> +static ssize_t pcie_speed_store(struct device *dev,
>>>>>>>>>> +				struct device_attribute *attr,
>>>>>>>>>> +				const char *buf, size_t len)
>>>>>>>>>> +{
>>>>>>>>>> +	struct intel_pcie_port *lpp;
>>>>>>>>>> +	unsigned long val;
>>>>>>>>>> +	int ret;
>>>>>>>>>> +
>>>>>>>>>> +	lpp = dev_get_drvdata(dev);
>>>>>>>>>> +
>>>>>>>>>> +	ret = kstrtoul(buf, 10, &val);
>>>>>>>>>> +	if (ret)
>>>>>>>>>> +		return ret;
>>>>>>>>>> +
>>>>>>>>>> +	if (val > lpp->max_speed)
>>>>>>>>>> +		return -EINVAL;
>>>>>>>>>> +
>>>>>>>>>> +	lpp->link_gen = val;
>>>>>>>>>> +	intel_pcie_max_speed_setup(lpp);
>>>>>>>>>> +	intel_pcie_speed_change_disable(lpp);
>>>>>>>>>> +	intel_pcie_speed_change_enable(lpp);
>>>>>>>>>> +
>>>>>>>>>> +	return len;
>>>>>>>>>> +}
>>>>>>>>>> +static DEVICE_ATTR_WO(pcie_speed);
>>>>>>>>>> +
>>>>>>>>>> +/*
>>>>>>>>>> + * Link width change on the fly is not always successful.
>>>>>>>>>> + * It also depends on the partner.
>>>>>>>>>> + */
>>>>>>>>>> +static ssize_t pcie_width_store(struct device *dev,
>>>>>>>>>> +				struct device_attribute *attr,
>>>>>>>>>> +				const char *buf, size_t len)
>>>>>>>>>> +{
>>>>>>>>>> +	struct intel_pcie_port *lpp;
>>>>>>>>>> +	unsigned long val;
>>>>>>>>>> +
>>>>>>>>>> +	lpp = dev_get_drvdata(dev);
>>>>>>>>>> +
>>>>>>>>>> +	if (kstrtoul(buf, 10, &val))
>>>>>>>>>> +		return -EINVAL;
>>>>>>>>>> +
>>>>>>>>>> +	if (val > lpp->max_width)
>>>>>>>>>> +		return -EINVAL;
>>>>>>>>>> +
>>>>>>>>>> +	lpp->lanes = val;
>>>>>>>>>> +	intel_pcie_max_link_width_setup(lpp);
>>>>>>>>>> +
>>>>>>>>>> +	return len;
>>>>>>>>>> +}
>>>>>>>>>> +static DEVICE_ATTR_WO(pcie_width);
>>>>>>>>> You mentioned that a use-case for changing width/speed on the fly was to
>>>>>>>>> control power consumption (and this also helps debugging issues). As I
>>>>>>>>> understand there is no current support for this in the kernel - yet it is
>>>>>>>>> something that would provide value.
>>>>>>>>>
>>>>>>>>> I haven't looked in much detail, however as I understand the PCI spec
>>>>>>>>> allows an upstream partner to change the link speed and retrain. (I'm not
>>>>>>>>> sure about link width). Given that we already have
>>>>>>>>> [current,max]_link_[speed,width] in sysfs for each PCI device, it would
>>>>>>>>> seem natural to extend this to allow for writing a max width or speed.
>>>>>>>>>
>>>>>>>>> So ideally this type of thing would be moved to the core or at least in
>>>>>>>>> the dwc driver. This way the benefits can be applied to more devices on
>>>>>>>>> larger PCI fabrics - Though perhaps others here will have a different view
>>>>>>>>> and I'm keen to hear them.
>>>>>>>>>
>>>>>>>>> I'm keen to limit the differences between the DWC controller drivers and
>>>>>>>>> unify common code - thus it would be helpful to have a justification as to
>>>>>>>>> why this is only relevant for this controller.
>>>>>>>>>
>>>>>>>>> For user-space only control, it is possible to achieve what you have here
>>>>>>>>> with userspace utilities (something like setpci) (assuming the standard
>>>>>>>>> looking registers you currently access are exposed in the normal config
>>>>>>>>> space way - through PCIe capabilities).
>>>>>>>>>
>>>>>>>>> My suggestion would be to drop these changes and later add something that
>>>>>>>>> can benefit more devices. In any case if these changes stay within this
>>>>>>>>> driver then it would be helpful to move them to a separate patch.
>>>>>>>> Sure, I will submit these entities in a separate patch.
>>>>>>> Please ensure that all supporting macros, functions and defines go with
>>>>>>> that other patch as well.
>>>>>> Sure.
>>>>>>>>>> +
>>>>>>>>>> +static struct attribute *pcie_cfg_attrs[] = {
>>>>>>>>>> +	&dev_attr_pcie_link_status.attr,
>>>>>>>>>> +	&dev_attr_pcie_speed.attr,
>>>>>>>>>> +	&dev_attr_pcie_width.attr,
>>>>>>>>>> +	NULL,
>>>>>>>>>> +};
>>>>>>>>>> +ATTRIBUTE_GROUPS(pcie_cfg);
>>>>>>>>>> +
>>>>>>>>>> +static int intel_pcie_sysfs_init(struct intel_pcie_port *lpp)
>>>>>>>>>> +{
>>>>>>>>>> +	return devm_device_add_groups(lpp->pci.dev, pcie_cfg_groups);
>>>>>>>>>> +}
>>>>>>>>>> +
>>>>>>>>>> +static void __intel_pcie_remove(struct intel_pcie_port *lpp)
>>>>>>>>>> +{
>>>>>>>>>> +	pcie_rc_cfg_wr_mask(lpp, PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER,
>>>>>>>>>> +			    0, PCI_COMMAND);
>>>>>>>>>> +	intel_pcie_core_irq_disable(lpp);
>>>>>>>>>> +	intel_pcie_turn_off(lpp);
>>>>>>>>>> +	clk_disable_unprepare(lpp->core_clk);
>>>>>>>>>> +	intel_pcie_core_rst_assert(lpp);
>>>>>>>>>> +	intel_pcie_deinit_phy(lpp);
>>>>>>>>>> +}
>>>>>>>>>> +
>>>>>>>>>> +static int intel_pcie_remove(struct platform_device *pdev)
>>>>>>>>>> +{
>>>>>>>>>> +	struct intel_pcie_port *lpp = platform_get_drvdata(pdev);
>>>>>>>>>> +	struct pcie_port *pp = &lpp->pci.pp;
>>>>>>>>>> +
>>>>>>>>>> +	dw_pcie_host_deinit(pp);
>>>>>>>>>> +	__intel_pcie_remove(lpp);
>>>>>>>>>> +
>>>>>>>>>> +	return 0;
>>>>>>>>>> +}
>>>>>>>>>> +
>>>>>>>>>> +static int __maybe_unused intel_pcie_suspend_noirq(struct device *dev)
>>>>>>>>>> +{
>>>>>>>>>> +	struct intel_pcie_port *lpp = dev_get_drvdata(dev);
>>>>>>>>>> +	int ret;
>>>>>>>>>> +
>>>>>>>>>> +	intel_pcie_core_irq_disable(lpp);
>>>>>>>>>> +	ret = intel_pcie_wait_l2(lpp);
>>>>>>>>>> +	if (ret)
>>>>>>>>>> +		return ret;
>>>>>>>>>> +
>>>>>>>>>> +	intel_pcie_deinit_phy(lpp);
>>>>>>>>>> +	clk_disable_unprepare(lpp->core_clk);
>>>>>>>>>> +	return ret;
>>>>>>>>>> +}
>>>>>>>>>> +
>>>>>>>>>> +static int __maybe_unused intel_pcie_resume_noirq(struct device *dev)
>>>>>>>>>> +{
>>>>>>>>>> +	struct intel_pcie_port *lpp = dev_get_drvdata(dev);
>>>>>>>>>> +
>>>>>>>>>> +	return intel_pcie_host_setup(lpp);
>>>>>>>>>> +}
>>>>>>>>>> +
>>>>>>>>>> +static int intel_pcie_rc_init(struct pcie_port *pp)
>>>>>>>>>> +{
>>>>>>>>>> +	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
>>>>>>>>>> +	struct intel_pcie_port *lpp = dev_get_drvdata(pci->dev);
>>>>>>>>>> +	int ret;
>>>>>>>>>> +
>>>>>>>>>> +	/* RC/host initialization */
>>>>>>>>>> +	ret = intel_pcie_host_setup(lpp);
>>>>>>>>>> +	if (ret)
>>>>>>>>>> +		return ret;
>>>>>>>>> Insert new line here.
>>>>>>>> Ok.
>>>>>>>>>> +	ret = intel_pcie_sysfs_init(lpp);
>>>>>>>>>> +	if (ret)
>>>>>>>>>> +		__intel_pcie_remove(lpp);
>>>>>>>>>> +	return ret;
>>>>>>>>>> +}
>>>>>>>>>> +
>>>>>>>>>> +int intel_pcie_msi_init(struct pcie_port *pp)
>>>>>>>>>> +{
>>>>>>>>>> +	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
>>>>>>>>>> +
>>>>>>>>>> +	dev_dbg(pci->dev, "PCIe MSI/MSIx is handled by MSI in x86 processor\n");
>>>>>>>>> What about other processors? (Noting that this driver doesn't depend on
>>>>>>>>> any specific ARCH in the KConfig).
>>>>>>>> Agree. I will mark the dependency in Kconfig.
>>>>>>> OK, please also see how other drivers use the COMPILE_TEST Kconfig option.
>>>>>> Ok sure.
>>>>>>> I'd suggest that the dev_dbg just becomes a code comment.
>>>> Ok
>>>>>>>>>> +	return 0;
>>>>>>>>>> +}
>>>>>>>>>> +
>>>>>>>>>> +u64 intel_pcie_cpu_addr(struct dw_pcie *pcie, u64 cpu_addr)
>>>>>>>>>> +{
>>>>>>>>>> +	return cpu_addr + BUS_IATU_OFFS;
>>>>>>>>>> +}
>>>>>>>>>> +
>>>>>>>>>> +static const struct dw_pcie_ops intel_pcie_ops = {
>>>>>>>>>> +	.cpu_addr_fixup = intel_pcie_cpu_addr,
>>>>>>>>>> +};
>>>>>>>>>> +
>>>>>>>>>> +static const struct dw_pcie_host_ops intel_pcie_dw_ops = {
>>>>>>>>>> +	.host_init =		intel_pcie_rc_init,
>>>>>>>>>> +	.msi_host_init =	intel_pcie_msi_init,
>>>>>>>>>> +};
>>>>>>>>>> +
>>>>>>>>>> +static const struct intel_pcie_soc pcie_data = {
>>>>>>>>>> +	.pcie_ver =		0x520A,
>>>>>>>>>> +	.pcie_atu_offset =	0xC0000,
>>>>>>>>>> +	.num_viewport =		3,
>>>>>>>>>> +};
>>>>>>>>>> +
>>>>>>>>>> +static int intel_pcie_probe(struct platform_device *pdev)
>>>>>>>>>> +{
>>>>>>>>>> +	const struct intel_pcie_soc *data;
>>>>>>>>>> +	struct device *dev = &pdev->dev;
>>>>>>>>>> +	struct intel_pcie_port *lpp;
>>>>>>>>>> +	struct pcie_port *pp;
>>>>>>>>>> +	struct dw_pcie *pci;
>>>>>>>>>> +	int ret;
>>>>>>>>>> +
>>>>>>>>>> +	lpp = devm_kzalloc(dev, sizeof(*lpp), GFP_KERNEL);
>>>>>>>>>> +	if (!lpp)
>>>>>>>>>> +		return -ENOMEM;
>>>>>>>>>> +
>>>>>>>>>> +	platform_set_drvdata(pdev, lpp);
>>>>>>>>>> +	pci = &lpp->pci;
>>>>>>>>>> +	pci->dev = dev;
>>>>>>>>>> +	pp = &pci->pp;
>>>>>>>>>> +
>>>>>>>>>> +	ret = intel_pcie_get_resources(pdev);
>>>>>>>>>> +	if (ret)
>>>>>>>>>> +		return ret;
>>>>>>>>>> +
>>>>>>>>>> +	data = device_get_match_data(dev);
>>>>>>>>>> +	pci->ops = &intel_pcie_ops;
>>>>>>>>>> +	pci->version = data->pcie_ver;
>>>>>>>>>> +	pci->atu_base = pci->dbi_base + data->pcie_atu_offset;
>>>>>>>>>> +	pp->ops = &intel_pcie_dw_ops;
>>>>>>>>>> +
>>>>>>>>>> +	ret = dw_pcie_host_init(pp);
>>>>>>>>>> +	if (ret) {
>>>>>>>>>> +		dev_err(dev, "cannot initialize host\n");
>>>>>>>>>> +		return ret;
>>>>>>>>>> +	}
>>>>>>>>> Add a new line after the closing brace.
>>>>>>>> Ok
>>>>>>>>>> +	/* Intel PCIe doesn't configure IO region, so configure
>>>>>>>>>> +	 * viewport to not to access IO region during register
>>>>>>>>>> +	 * read write operations.
>>>>>>>>>> +	 */
>>>>>>>>>> +	pci->num_viewport = data->num_viewport;
>>>>>>>>>> +	dev_info(dev,
>>>>>>>>>> +		 "Intel AXI PCIe Root Complex Port %d Init Done\n", lpp->id);
>>>>>>>>>> +	return ret;
>>>>>>>>>> +}
>>>>>>>>>> +
>>>>>>>>>> +static const struct dev_pm_ops intel_pcie_pm_ops = {
>>>>>>>>>> +	SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(intel_pcie_suspend_noirq,
>>>>>>>>>> +				      intel_pcie_resume_noirq)
>>>>>>>>>> +};
>>>>>>>>>> +
>>>>>>>>>> +static const struct of_device_id of_intel_pcie_match[] = {
>>>>>>>>>> +	{ .compatible = "intel,lgm-pcie", .data = &pcie_data },
>>>>>>>>>> +	{}
>>>>>>>>>> +};
>>>>>>>>>> +
>>>>>>>>>> +static struct platform_driver intel_pcie_driver = {
>>>>>>>>>> +	.probe = intel_pcie_probe,
>>>>>>>>>> +	.remove = intel_pcie_remove,
>>>>>>>>>> +	.driver = {
>>>>>>>>>> +		.name = "intel-lgm-pcie",
>>>>>>>>> Is there a reason why we use intel-lgm-pcie here and pcie-intel-axi
>>>>>>>>> elsewhere? What does lgm mean?
>>>>>>>> lgm is the name of the Intel SoC. I will name it pcie-intel-axi to be
>>>>>>>> generic.
>>>>>>> I'm keen to ensure that it is consistently named. I've seen other comments
>>>>>>> regarding what the name should be - I don't have a strong opinion though
>>>>>>> I do think that *-axi may be too generic.
>>>> This PCIe driver is for the Intel Gateway SoCs. So how about naming it
>>>> "pcie-intel-gw": pcie-intel-gw.c, and the Kconfig symbol PCIE_INTEL_GW?
>>> I don't have a problem with this, though others may have differing views.
>> Sure, thank you.
>>> Thanks,
>>>
>>> Andrew Murray
>>>
>>>> Regards,
>>>> Dilip
>>>>
>>>>>> Ok, I will check and get back to you on this.
>>>>>> Regards,
>>>>> Thanks,
>>>>>
>>>>> Andrew Murray
>>>>>
>>>>>> Dilip
>>>>>>
>>>>>>> Thanks,
>>>>>>>
>>>>>>> Andrew Murray
>>>>>>>
>>>>>>>>> Thanks,
>>>>>>>>>
>>>>>>>>> Andrew Murray
>>>>>>>>>
>>>>>>>>>> +		.of_match_table = of_intel_pcie_match,
>>>>>>>>>> +		.pm = &intel_pcie_pm_ops,
>>>>>>>>>> +	},
>>>>>>>>>> +};
>>>>>>>>>> +builtin_platform_driver(intel_pcie_driver);
>>>>>>>>>> -- 
>>>>>>>>>> 2.11.0
>>>>>>>>>>

^ permalink raw reply	[flat|nested] 37+ messages in thread

* RE: [PATCH v3 2/2] dwc: PCI: intel: Intel PCIe RC controller driver
  2019-09-12  9:23                     ` Dilip Kota
@ 2019-09-12 10:49                       ` Gustavo Pimentel
  2019-09-13  9:20                         ` Dilip Kota
  0 siblings, 1 reply; 37+ messages in thread
From: Gustavo Pimentel @ 2019-09-12 10:49 UTC (permalink / raw)
  To: Dilip Kota, Andrew Murray, gustavo.pimentel
  Cc: jingoohan1, lorenzo.pieralisi, robh, martin.blumenstingl,
	linux-pci, hch, devicetree, linux-kernel, andriy.shevchenko,
	cheol.yong.kim, chuanhua.lei, qi-ming.wu

On Thu, Sep 12, 2019 at 10:23:31, Dilip Kota 
<eswara.kota@linux.intel.com> wrote:

> Quoting Andrew Murray:
> Quoting Gustavo Pimentel:
> 
> On 9/12/2019 4:25 PM, Andrew Murray wrote:
> > [...]
> >>>>>>>>>> +static void intel_pcie_max_link_width_setup(struct intel_pcie_port *lpp)
> >>>>>>>>>> +{
> >>>>>>>>>> +	u32 mask, val;
> >>>>>>>>>> +
> >>>>>>>>>> +	/* HW auto bandwidth negotiation must be enabled */
> >>>>>>>>>> +	pcie_rc_cfg_wr_mask(lpp, PCIE_LCTLSTS_HW_AW_DIS, 0, PCIE_LCTLSTS);
> >>>>>>>>>> +
> >>>>>>>>>> +	mask = PCIE_DIRECT_LINK_WIDTH_CHANGE | PCIE_TARGET_LINK_WIDTH;
> >>>>>>>>>> +	val = PCIE_DIRECT_LINK_WIDTH_CHANGE | lpp->lanes;
> >>>>>>>>>> +	pcie_rc_cfg_wr_mask(lpp, mask, val, PCIE_MULTI_LANE_CTRL);
> >>>>>>>>> Is this identical functionality to the writing of PCIE_PORT_LINK_CONTROL
> >>>>>>>>> in dw_pcie_setup?
> >>>>>>>>>
> >>>>>>>>> I ask because if the user sets num-lanes in the DT, will it have the
> >>>>>>>>> desired effect?
> >>>>>>>> The intel_pcie_max_link_width_setup() function will be called by the sysfs attribute handler pcie_width_store() to change the link width on the fly.
> >>>>>>> Indeed, but a user may also set num-lanes in the device tree. I'm wondering
> >>>>>>> if, when set in device-tree, it will have the desired effect. Because I don't
> >>>>>>> see anything similar to PCIE_LCTLSTS_HW_AW_DIS in dw_pcie_setup which is what
> >>>>>>> your function does here.
> >>>>>>>
> >>>>>>> I guess I'm trying to avoid the situation where num-lanes doesn't have the
> >>>>>>> desired effect and the only way to set num-lanes is through the sysfs
> >>>>>>> control.
> >>>>>> I will check this and get back to you.
> >>>> intel_pcie_max_link_width_setup() is doing the lane resizing, which is
> >>>> different from the link up/establishment happening during probe. Also,
> >>>> the PCIE_LCTLSTS_HW_AW_DIS default value is 0, so it is not set during
> >>>> probe or dw_pcie_setup().
> >>>>
> >>>> intel_pcie_max_link_width_setup() programs the registers as per the
> >>>> DesignWare PCIe controller databook instructions for lane resizing.
> >>>>
> >>>> The lines below are from the DesignWare PCIe databook for lane resizing:
> >>>>
> >>>>    Program the TARGET_LINK_WIDTH[5:0] field of the MULTI_LANE_CONTROL_OFF
> >>>> register.
> >>>>    Program the DIRECT_LINK_WIDTH_CHANGE2 field of the MULTI_LANE_CONTROL_OFF
> >>>> register.
> >>>> It is assumed that the PCIE_CAP_HW_AUTO_WIDTH_DISABLE field in the
> >>>> LINK_CONTROL_LINK_STATUS_REG register is 0.
> >>> OK, so there is a difference between initial training and then resizing
> >>> of the link. Given that this is not Intel specific, should this function
> >>> exist within the designware driver for the benefit of others?
> >> I am OK to add it if the maintainer agrees with it.
> 
> Gustavo Pimentel,
> 
> Could you please let us know your opinion on this?

Hi, I just returned from parental leave, therefore I am still trying to get
up to speed with the mailing list discussion.

However, your suggestion looks good; I agree that it can go into the
DesignWare driver to be available to all.

Just a small request, please do in general:
s/designware/DesignWare

Regards,
Gustavo

> 
> [...]
> 
> >>>>>>>>>> +}
> >>>>>>>>>> +
> >>>>>>>>>> +static void intel_pcie_port_logic_setup(struct intel_pcie_port *lpp)
> >>>>>>>>>> +{
> >>>>>>>>>> +	u32 val, mask, fts;
> >>>>>>>>>> +
> >>>>>>>>>> +	switch (lpp->max_speed) {
> >>>>>>>>>> +	case PCIE_LINK_SPEED_GEN1:
> >>>>>>>>>> +	case PCIE_LINK_SPEED_GEN2:
> >>>>>>>>>> +		fts = PCIE_AFR_GEN12_FTS_NUM_DFT;
> >>>>>>>>>> +		break;
> [...]
> >>>>>>>>>> +
> >>>>>>>>>> +	if (device_property_read_u32(dev, "max-link-speed", &lpp->link_gen))
> >>>>>>>>>> +		lpp->link_gen = 0; /* Fallback to auto */
> >>>>>>>>> Is it possible to use of_pci_get_max_link_speed here instead?
> >>>>>>>> Thanks for pointing it. of_pci_get_max_link_speed() can be used here. I will
> >>>>>>>> update it in the next patch revision.
> >>>> I just remembered, earlier we were using of_pci_get_max_link_speed() itself.
> >>>> As per reviewer comments we changed to device_property_read_u32() to maintain
> >>>> symmetry in parsing device tree properties from the device node.
> >>>> Let me know your view.
> >>> I couldn't find an earlier version of this series that used of_pci_get_max_link_speed,
> >>> have I missed it somewhere?
> >> It happened in our internal review.
> >> What's your suggestion, please: go with symmetry in parsing using
> >> "device_property_read_u32()", or with the PCI helper function
> >> "of_pci_get_max_link_speed()"?
> > I'd prefer the helper, the added benefit of this is additional error checking.
> > It also means users can be confident that max-link-speed will behave in the
> > same way as other host controllers that use this field.
> Ok, I will update it in the next patch version.
> 
> 
> Regards,
> 
> Dilip
> 
> >
> > Thanks,
> >
> > Andrew Murray
> >
> >>>>>>>>>> +
> >>>>>>>>>> +	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "app");
> >>>>>>>>>> +	if (!res)
> >>>>>>>>>> +		return -ENXIO;
> >>>>>>>>>> +
> >>>>>>>>>> +	lpp->app_base = devm_ioremap_resource(dev, res);
> >>>>>>>>>> +	if (IS_ERR(lpp->app_base))
> >>>>>>>>>> +		return PTR_ERR(lpp->app_base);
> >>>>>>>>>> +
> >>>>>>>>>> +	ret = intel_pcie_ep_rst_init(lpp);
> >>>>>>>>>> +	if (ret)
> >>>>>>>>>> +		return ret;
> >>>>>>>>> Given that this is called from a function '..._get_resources' I don't think
> >>>>>>>>> we should be resetting anything here.
> >>>>>>>> Agree. I will move it out of get_resources().
> >>>>>>>>>> +
> >>>>>>>>>> +	lpp->phy = devm_phy_get(dev, "pciephy");
> >>>>>>>>>> +	if (IS_ERR(lpp->phy)) {
> >>>>>>>>>> +		ret = PTR_ERR(lpp->phy);
> >>>>>>>>>> +		if (ret != -EPROBE_DEFER)
> >>>>>>>>>> +			dev_err(dev, "couldn't get pcie-phy: %d\n", ret);
> >>>>>>>>>> +		return ret;
> >>>>>>>>>> +	}
> >>>>>>>>>> +	return 0;
> >>>>>>>>>> +}
> >>>>>>>>>> +
> >>>>>>>>>> +static void intel_pcie_deinit_phy(struct intel_pcie_port *lpp)
> >>>>>>>>>> +{
> >>>>>>>>>> +	phy_exit(lpp->phy);
> >>>>>>>>>> +}
> >>>>>>>>>> +
> >>>>>>>>>> +static int intel_pcie_wait_l2(struct intel_pcie_port *lpp)
> >>>>>>>>>> +{
> >>>>>>>>>> +	u32 value;
> >>>>>>>>>> +	int ret;
> >>>>>>>>>> +
> >>>>>>>>>> +	if (lpp->max_speed < PCIE_LINK_SPEED_GEN3)
> >>>>>>>>>> +		return 0;
> >>>>>>>>>> +
> >>>>>>>>>> +	/* Send PME_TURN_OFF message */
> >>>>>>>>>> +	pcie_app_wr_mask(lpp, PCIE_APP_MSG_XMT_PM_TURNOFF,
> >>>>>>>>>> +			 PCIE_APP_MSG_XMT_PM_TURNOFF, PCIE_APP_MSG_CR);
> >>>>>>>>>> +
> >>>>>>>>>> +	/* Read PMC status and wait for falling into L2 link state */
> >>>>>>>>>> +	ret = readl_poll_timeout(lpp->app_base + PCIE_APP_PMC, value,
> >>>>>>>>>> +				 (value & PCIE_APP_PMC_IN_L2), 20,
> >>>>>>>>>> +				 jiffies_to_usecs(5 * HZ));
> >>>>>>>>>> +	if (ret)
> >>>>>>>>>> +		dev_err(lpp->pci.dev, "PCIe link enter L2 timeout!\n");
> >>>>>>>>>> +
> >>>>>>>>>> +	return ret;
> >>>>>>>>>> +}
> >>>>>>>>>> +
> >>>>>>>>>> +static void intel_pcie_turn_off(struct intel_pcie_port *lpp)
> >>>>>>>>>> +{
> >>>>>>>>>> +	if (dw_pcie_link_up(&lpp->pci))
> >>>>>>>>>> +		intel_pcie_wait_l2(lpp);
> >>>>>>>>>> +
> >>>>>>>>>> +	/* Put EP in reset state */
> >>>>>>>>> EP?
> >>>>>>>> Endpoint device. I will update it.
> >>>>>>> Is this not a host bridge controller?
> >>>>>> It is PERST#, which signals a hardware reset to the endpoint:
> >>>>>>            /* Put EP in reset state */
> >>>>>>            intel_pcie_device_rst_assert(lpp);
> >>>>> OK.
> >>>>>
> >>>>>>>>>> +	intel_pcie_device_rst_assert(lpp);
> >>>>>>>>>> +	pcie_rc_cfg_wr_mask(lpp, PCI_COMMAND_MEMORY, 0, PCI_COMMAND);
> >>>>>>>>>> +}
> >>>>>>>>>> +
> >>>>>>>>>> +static int intel_pcie_host_setup(struct intel_pcie_port *lpp)
> >>>>>>>>>> +{
> >>>>>>>>>> +	int ret;
> >>>>>>>>>> +
> >>>>>>>>>> +	intel_pcie_core_rst_assert(lpp);
> >>>>>>>>>> +	intel_pcie_device_rst_assert(lpp);
> >>>>>>>>>> +
> >>>>>>>>>> +	ret = phy_init(lpp->phy);
> >>>>>>>>>> +	if (ret)
> >>>>>>>>>> +		return ret;
> >>>>>>>>>> +
> >>>>>>>>>> +	intel_pcie_core_rst_deassert(lpp);
> >>>>>>>>>> +
> >>>>>>>>>> +	ret = clk_prepare_enable(lpp->core_clk);
> >>>>>>>>>> +	if (ret) {
> >>>>>>>>>> +		dev_err(lpp->pci.dev, "Core clock enable failed: %d\n", ret);
> >>>>>>>>>> +		goto clk_err;
> >>>>>>>>>> +	}
> >>>>>>>>>> +
> >>>>>>>>>> +	intel_pcie_rc_setup(lpp);
> >>>>>>>>>> +	ret = intel_pcie_app_logic_setup(lpp);
> >>>>>>>>>> +	if (ret)
> >>>>>>>>>> +		goto app_init_err;
> >>>>>>>>>> +
> >>>>>>>>>> +	ret = intel_pcie_setup_irq(lpp);
> >>>>>>>>>> +	if (!ret)
> >>>>>>>>>> +		return ret;
> >>>>>>>>>> +
> >>>>>>>>>> +	intel_pcie_turn_off(lpp);
> >>>>>>>>>> +app_init_err:
> >>>>>>>>>> +	clk_disable_unprepare(lpp->core_clk);
> >>>>>>>>>> +clk_err:
> >>>>>>>>>> +	intel_pcie_core_rst_assert(lpp);
> >>>>>>>>>> +	intel_pcie_deinit_phy(lpp);
> >>>>>>>>>> +	return ret;
> >>>>>>>>>> +}
> >>>>>>>>>> +
> >>>>>>>>>> +static ssize_t
> >>>>>>>>>> +pcie_link_status_show(struct device *dev, struct device_attribute *attr,
> >>>>>>>>>> +		      char *buf)
> >>>>>>>>>> +{
> >>>>>>>>>> +	u32 reg, width, gen;
> >>>>>>>>>> +	struct intel_pcie_port *lpp;
> >>>>>>>>>> +
> >>>>>>>>>> +	lpp = dev_get_drvdata(dev);
> >>>>>>>>>> +
> >>>>>>>>>> +	reg = pcie_rc_cfg_rd(lpp, PCIE_LCTLSTS);
> >>>>>>>>>> +	width = FIELD_GET(PCIE_LCTLSTS_NEGOTIATED_LINK_WIDTH, reg);
> >>>>>>>>>> +	gen = FIELD_GET(PCIE_LCTLSTS_LINK_SPEED, reg);
> >>>>>>>>>> +	if (gen > lpp->max_speed)
> >>>>>>>>>> +		return -EINVAL;
> >>>>>>>>>> +
> >>>>>>>>>> +	return sprintf(buf, "Port %2u Width x%u Speed %s GT/s\n", lpp->id,
> >>>>>>>>>> +		       width, pcie_link_gen_to_str(gen));
> >>>>>>>>>> +}
> >>>>>>>>>> +static DEVICE_ATTR_RO(pcie_link_status);
> >>>>>>>>>> +
> >>>>>>>>>> +static ssize_t pcie_speed_store(struct device *dev,
> >>>>>>>>>> +				struct device_attribute *attr,
> >>>>>>>>>> +				const char *buf, size_t len)
> >>>>>>>>>> +{
> >>>>>>>>>> +	struct intel_pcie_port *lpp;
> >>>>>>>>>> +	unsigned long val;
> >>>>>>>>>> +	int ret;
> >>>>>>>>>> +
> >>>>>>>>>> +	lpp = dev_get_drvdata(dev);
> >>>>>>>>>> +
> >>>>>>>>>> +	ret = kstrtoul(buf, 10, &val);
> >>>>>>>>>> +	if (ret)
> >>>>>>>>>> +		return ret;
> >>>>>>>>>> +
> >>>>>>>>>> +	if (val > lpp->max_speed)
> >>>>>>>>>> +		return -EINVAL;
> >>>>>>>>>> +
> >>>>>>>>>> +	lpp->link_gen = val;
> >>>>>>>>>> +	intel_pcie_max_speed_setup(lpp);
> >>>>>>>>>> +	intel_pcie_speed_change_disable(lpp);
> >>>>>>>>>> +	intel_pcie_speed_change_enable(lpp);
> >>>>>>>>>> +
> >>>>>>>>>> +	return len;
> >>>>>>>>>> +}
> >>>>>>>>>> +static DEVICE_ATTR_WO(pcie_speed);
> >>>>>>>>>> +
> >>>>>>>>>> +/*
> >>>>>>>>>> + * Link width change on the fly is not always successful.
> >>>>>>>>>> + * It also depends on the partner.
> >>>>>>>>>> + */
> >>>>>>>>>> +static ssize_t pcie_width_store(struct device *dev,
> >>>>>>>>>> +				struct device_attribute *attr,
> >>>>>>>>>> +				const char *buf, size_t len)
> >>>>>>>>>> +{
> >>>>>>>>>> +	struct intel_pcie_port *lpp;
> >>>>>>>>>> +	unsigned long val;
> >>>>>>>>>> +
> >>>>>>>>>> +	lpp = dev_get_drvdata(dev);
> >>>>>>>>>> +
> >>>>>>>>>> +	if (kstrtoul(buf, 10, &val))
> >>>>>>>>>> +		return -EINVAL;
> >>>>>>>>>> +
> >>>>>>>>>> +	if (val > lpp->max_width)
> >>>>>>>>>> +		return -EINVAL;
> >>>>>>>>>> +
> >>>>>>>>>> +	lpp->lanes = val;
> >>>>>>>>>> +	intel_pcie_max_link_width_setup(lpp);
> >>>>>>>>>> +
> >>>>>>>>>> +	return len;
> >>>>>>>>>> +}
> >>>>>>>>>> +static DEVICE_ATTR_WO(pcie_width);
> >>>>>>>>> You mentioned that a use-case for changing width/speed on the fly was to
> >>>>>>>>> control power consumption (and this also helps debugging issues). As I
> >>>>>>>>> understand there is no current support for this in the kernel - yet it is
> >>>>>>>>> something that would provide value.
> >>>>>>>>>
> >>>>>>>>> I haven't looked in much detail, however as I understand the PCI spec
> >>>>>>>>> allows an upstream partner to change the link speed and retrain. (I'm not
> >>>>>>>>> sure about link width). Given that we already have
> >>>>>>>>> [current,max]_link_[speed,width] in sysfs for each PCI device, it would
> >>>>>>>>> seem natural to extend this to allow for writing a max width or speed.
> >>>>>>>>>
> >>>>>>>>> So ideally this type of thing would be moved to the core or at least in
> >>>>>>>>> the dwc driver. This way the benefits can be applied to more devices on
> >>>>>>>>> larger PCI fabrics - Though perhaps others here will have a different view
> >>>>>>>>> and I'm keen to hear them.
> >>>>>>>>>
> >>>>>>>>> I'm keen to limit the differences between the DWC controller drivers and
> >>>>>>>>> unify common code - thus it would be helpful to have a justification as to
> >>>>>>>>> why this is only relevant for this controller.
> >>>>>>>>>
> >>>>>>>>> For user-space only control, it is possible to achieve what you have here
> >>>>>>>>> with userspace utilities (something like setpci) (assuming the standard
> >>>>>>>>> looking registers you currently access are exposed in the normal config
> >>>>>>>>> space way - through PCIe capabilities).
> >>>>>>>>>
> >>>>>>>>> My suggestion would be to drop these changes and later add something that
> >>>>>>>>> can benefit more devices. In any case if these changes stay within this
> >>>>>>>>> driver then it would be helpful to move them to a separate patch.
> >>>>>>>> Sure, I will submit these entities in a separate patch.
> >>>>>>> Please ensure that all supporting macros, functions and defines go with
> >>>>>>> that other patch as well.
> >>>>>> Sure.
> >>>>>>>>>> +
> >>>>>>>>>> +static struct attribute *pcie_cfg_attrs[] = {
> >>>>>>>>>> +	&dev_attr_pcie_link_status.attr,
> >>>>>>>>>> +	&dev_attr_pcie_speed.attr,
> >>>>>>>>>> +	&dev_attr_pcie_width.attr,
> >>>>>>>>>> +	NULL,
> >>>>>>>>>> +};
> >>>>>>>>>> +ATTRIBUTE_GROUPS(pcie_cfg);
> >>>>>>>>>> +
> >>>>>>>>>> +static int intel_pcie_sysfs_init(struct intel_pcie_port *lpp)
> >>>>>>>>>> +{
> >>>>>>>>>> +	return devm_device_add_groups(lpp->pci.dev, pcie_cfg_groups);
> >>>>>>>>>> +}
> >>>>>>>>>> +
> >>>>>>>>>> +static void __intel_pcie_remove(struct intel_pcie_port *lpp)
> >>>>>>>>>> +{
> >>>>>>>>>> +	pcie_rc_cfg_wr_mask(lpp, PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER,
> >>>>>>>>>> +			    0, PCI_COMMAND);
> >>>>>>>>>> +	intel_pcie_core_irq_disable(lpp);
> >>>>>>>>>> +	intel_pcie_turn_off(lpp);
> >>>>>>>>>> +	clk_disable_unprepare(lpp->core_clk);
> >>>>>>>>>> +	intel_pcie_core_rst_assert(lpp);
> >>>>>>>>>> +	intel_pcie_deinit_phy(lpp);
> >>>>>>>>>> +}
> >>>>>>>>>> +
> >>>>>>>>>> +static int intel_pcie_remove(struct platform_device *pdev)
> >>>>>>>>>> +{
> >>>>>>>>>> +	struct intel_pcie_port *lpp = platform_get_drvdata(pdev);
> >>>>>>>>>> +	struct pcie_port *pp = &lpp->pci.pp;
> >>>>>>>>>> +
> >>>>>>>>>> +	dw_pcie_host_deinit(pp);
> >>>>>>>>>> +	__intel_pcie_remove(lpp);
> >>>>>>>>>> +
> >>>>>>>>>> +	return 0;
> >>>>>>>>>> +}
> >>>>>>>>>> +
> >>>>>>>>>> +static int __maybe_unused intel_pcie_suspend_noirq(struct device *dev)
> >>>>>>>>>> +{
> >>>>>>>>>> +	struct intel_pcie_port *lpp = dev_get_drvdata(dev);
> >>>>>>>>>> +	int ret;
> >>>>>>>>>> +
> >>>>>>>>>> +	intel_pcie_core_irq_disable(lpp);
> >>>>>>>>>> +	ret = intel_pcie_wait_l2(lpp);
> >>>>>>>>>> +	if (ret)
> >>>>>>>>>> +		return ret;
> >>>>>>>>>> +
> >>>>>>>>>> +	intel_pcie_deinit_phy(lpp);
> >>>>>>>>>> +	clk_disable_unprepare(lpp->core_clk);
> >>>>>>>>>> +	return ret;
> >>>>>>>>>> +}
> >>>>>>>>>> +
> >>>>>>>>>> +static int __maybe_unused intel_pcie_resume_noirq(struct device *dev)
> >>>>>>>>>> +{
> >>>>>>>>>> +	struct intel_pcie_port *lpp = dev_get_drvdata(dev);
> >>>>>>>>>> +
> >>>>>>>>>> +	return intel_pcie_host_setup(lpp);
> >>>>>>>>>> +}
> >>>>>>>>>> +
> >>>>>>>>>> +static int intel_pcie_rc_init(struct pcie_port *pp)
> >>>>>>>>>> +{
> >>>>>>>>>> +	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
> >>>>>>>>>> +	struct intel_pcie_port *lpp = dev_get_drvdata(pci->dev);
> >>>>>>>>>> +	int ret;
> >>>>>>>>>> +
> >>>>>>>>>> +	/* RC/host initialization */
> >>>>>>>>>> +	ret = intel_pcie_host_setup(lpp);
> >>>>>>>>>> +	if (ret)
> >>>>>>>>>> +		return ret;
> >>>>>>>>> Insert new line here.
> >>>>>>>> Ok.
> >>>>>>>>>> +	ret = intel_pcie_sysfs_init(lpp);
> >>>>>>>>>> +	if (ret)
> >>>>>>>>>> +		__intel_pcie_remove(lpp);
> >>>>>>>>>> +	return ret;
> >>>>>>>>>> +}
> >>>>>>>>>> +
> >>>>>>>>>> +int intel_pcie_msi_init(struct pcie_port *pp)
> >>>>>>>>>> +{
> >>>>>>>>>> +	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
> >>>>>>>>>> +
> >>>>>>>>>> +	dev_dbg(pci->dev, "PCIe MSI/MSIx is handled by MSI in x86 processor\n");
> >>>>>>>>> What about other processors? (Noting that this driver doesn't depend on
> >>>>>>>>> any specific ARCH in the KConfig).
> >>>>>>>> Agree. I will mark the dependency in Kconfig.
> >>>>>>> OK, please also see how other drivers use the COMPILE_TEST Kconfig option.
> >>>>>> Ok sure.
> >>>>>>> I'd suggest that the dev_dbg just becomes a code comment.
> >>>> Ok
> >>>>>>>>>> +	return 0;
> >>>>>>>>>> +}
> >>>>>>>>>> +
> >>>>>>>>>> +u64 intel_pcie_cpu_addr(struct dw_pcie *pcie, u64 cpu_addr)
> >>>>>>>>>> +{
> >>>>>>>>>> +	return cpu_addr + BUS_IATU_OFFS;
> >>>>>>>>>> +}
> >>>>>>>>>> +
> >>>>>>>>>> +static const struct dw_pcie_ops intel_pcie_ops = {
> >>>>>>>>>> +	.cpu_addr_fixup = intel_pcie_cpu_addr,
> >>>>>>>>>> +};
> >>>>>>>>>> +
> >>>>>>>>>> +static const struct dw_pcie_host_ops intel_pcie_dw_ops = {
> >>>>>>>>>> +	.host_init =		intel_pcie_rc_init,
> >>>>>>>>>> +	.msi_host_init =	intel_pcie_msi_init,
> >>>>>>>>>> +};
> >>>>>>>>>> +
> >>>>>>>>>> +static const struct intel_pcie_soc pcie_data = {
> >>>>>>>>>> +	.pcie_ver =		0x520A,
> >>>>>>>>>> +	.pcie_atu_offset =	0xC0000,
> >>>>>>>>>> +	.num_viewport =		3,
> >>>>>>>>>> +};
> >>>>>>>>>> +
> >>>>>>>>>> +static int intel_pcie_probe(struct platform_device *pdev)
> >>>>>>>>>> +{
> >>>>>>>>>> +	const struct intel_pcie_soc *data;
> >>>>>>>>>> +	struct device *dev = &pdev->dev;
> >>>>>>>>>> +	struct intel_pcie_port *lpp;
> >>>>>>>>>> +	struct pcie_port *pp;
> >>>>>>>>>> +	struct dw_pcie *pci;
> >>>>>>>>>> +	int ret;
> >>>>>>>>>> +
> >>>>>>>>>> +	lpp = devm_kzalloc(dev, sizeof(*lpp), GFP_KERNEL);
> >>>>>>>>>> +	if (!lpp)
> >>>>>>>>>> +		return -ENOMEM;
> >>>>>>>>>> +
> >>>>>>>>>> +	platform_set_drvdata(pdev, lpp);
> >>>>>>>>>> +	pci = &lpp->pci;
> >>>>>>>>>> +	pci->dev = dev;
> >>>>>>>>>> +	pp = &pci->pp;
> >>>>>>>>>> +
> >>>>>>>>>> +	ret = intel_pcie_get_resources(pdev);
> >>>>>>>>>> +	if (ret)
> >>>>>>>>>> +		return ret;
> >>>>>>>>>> +
> >>>>>>>>>> +	data = device_get_match_data(dev);
> >>>>>>>>>> +	pci->ops = &intel_pcie_ops;
> >>>>>>>>>> +	pci->version = data->pcie_ver;
> >>>>>>>>>> +	pci->atu_base = pci->dbi_base + data->pcie_atu_offset;
> >>>>>>>>>> +	pp->ops = &intel_pcie_dw_ops;
> >>>>>>>>>> +
> >>>>>>>>>> +	ret = dw_pcie_host_init(pp);
> >>>>>>>>>> +	if (ret) {
> >>>>>>>>>> +		dev_err(dev, "cannot initialize host\n");
> >>>>>>>>>> +		return ret;
> >>>>>>>>>> +	}
> >>>>>>>>> Add a new line after the closing brace.
> >>>>>>>> Ok
> >>>>>>>>>> +	/* Intel PCIe doesn't configure IO region, so configure
> >>>>>>>>>> +	 * viewport to not to access IO region during register
> >>>>>>>>>> +	 * read write operations.
> >>>>>>>>>> +	 */
> >>>>>>>>>> +	pci->num_viewport = data->num_viewport;
> >>>>>>>>>> +	dev_info(dev,
> >>>>>>>>>> +		 "Intel AXI PCIe Root Complex Port %d Init Done\n", lpp->id);
> >>>>>>>>>> +	return ret;
> >>>>>>>>>> +}
> >>>>>>>>>> +
> >>>>>>>>>> +static const struct dev_pm_ops intel_pcie_pm_ops = {
> >>>>>>>>>> +	SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(intel_pcie_suspend_noirq,
> >>>>>>>>>> +				      intel_pcie_resume_noirq)
> >>>>>>>>>> +};
> >>>>>>>>>> +
> >>>>>>>>>> +static const struct of_device_id of_intel_pcie_match[] = {
> >>>>>>>>>> +	{ .compatible = "intel,lgm-pcie", .data = &pcie_data },
> >>>>>>>>>> +	{}
> >>>>>>>>>> +};
> >>>>>>>>>> +
> >>>>>>>>>> +static struct platform_driver intel_pcie_driver = {
> >>>>>>>>>> +	.probe = intel_pcie_probe,
> >>>>>>>>>> +	.remove = intel_pcie_remove,
> >>>>>>>>>> +	.driver = {
> >>>>>>>>>> +		.name = "intel-lgm-pcie",
> >>>>>>>>> Is there a reason why the we use intel-lgm-pcie here and pcie-intel-axi
> >>>>>>>>> elsewhere? What does lgm mean?
> >>>>>>>> lgm is the name of the Intel SoC. I will rename it to pcie-intel-axi to be
> >>>>>>>> generic.
> >>>>>>> I'm keen to ensure that it is consistently named. I've seen other comments
> >>>>>>> regarding what the name should be - I don't have a strong opinion though
> >>>>>>> I do think that *-axi may be too generic.
> >>>> This PCIe driver is for the Intel Gateway SoCs. So how about naming it as
> >>>> "pcie-intel-gw"; pcie-intel-gw.c and Kconfig as PCIE_INTEL_GW.
> >>> I don't have a problem with this, though others may have differing views.
> >> Sure. thank you.
> >>> Thanks,
> >>>
> >>> Andrew Murray
> >>>
> >>>> Regards,
> >>>> Dilip
> >>>>
> >>>>>> Ok, I will check and get back to you on this.
> >>>>>> Regards,
> >>>>> Thanks,
> >>>>>
> >>>>> Andrew Murray
> >>>>>
> >>>>>> Dilip
> >>>>>>
> >>>>>>> Thanks,
> >>>>>>>
> >>>>>>> Andrew Murray
> >>>>>>>
> >>>>>>>>> Thanks,
> >>>>>>>>>
> >>>>>>>>> Andrew Murray
> >>>>>>>>>
> >>>>>>>>>> +		.of_match_table = of_intel_pcie_match,
> >>>>>>>>>> +		.pm = &intel_pcie_pm_ops,
> >>>>>>>>>> +	},
> >>>>>>>>>> +};
> >>>>>>>>>> +builtin_platform_driver(intel_pcie_driver);
> >>>>>>>>>> -- 
> >>>>>>>>>> 2.11.0
> >>>>>>>>>>



^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v3 2/2] dwc: PCI: intel: Intel PCIe RC controller driver
  2019-09-12 10:49                       ` Gustavo Pimentel
@ 2019-09-13  9:20                         ` Dilip Kota
  2019-09-13 10:12                           ` andriy.shevchenko
  0 siblings, 1 reply; 37+ messages in thread
From: Dilip Kota @ 2019-09-13  9:20 UTC (permalink / raw)
  To: Gustavo Pimentel, Andrew Murray
  Cc: jingoohan1, lorenzo.pieralisi, robh, martin.blumenstingl,
	linux-pci, hch, devicetree, linux-kernel, andriy.shevchenko,
	cheol.yong.kim, chuanhua.lei, qi-ming.wu


On 9/12/2019 6:49 PM, Gustavo Pimentel wrote:
> On Thu, Sep 12, 2019 at 10:23:31, Dilip Kota
> <eswara.kota@linux.intel.com> wrote:
>
>> Quoting Andrew Murray:
>> Quoting Gustavo Pimentel:
>>
>> On 9/12/2019 4:25 PM, Andrew Murray wrote:
>>> [...]
>>>>>>>>>>>> +static void intel_pcie_max_link_width_setup(struct intel_pcie_port *lpp)
>>>>>>>>>>>> +{
>>>>>>>>>>>> +	u32 mask, val;
>>>>>>>>>>>> +
>>>>>>>>>>>> +	/* HW auto bandwidth negotiation must be enabled */
>>>>>>>>>>>> +	pcie_rc_cfg_wr_mask(lpp, PCIE_LCTLSTS_HW_AW_DIS, 0, PCIE_LCTLSTS);
>>>>>>>>>>>> +
>>>>>>>>>>>> +	mask = PCIE_DIRECT_LINK_WIDTH_CHANGE | PCIE_TARGET_LINK_WIDTH;
>>>>>>>>>>>> +	val = PCIE_DIRECT_LINK_WIDTH_CHANGE | lpp->lanes;
>>>>>>>>>>>> +	pcie_rc_cfg_wr_mask(lpp, mask, val, PCIE_MULTI_LANE_CTRL);
>>>>>>>>>>> Is this identical functionality to the writing of PCIE_PORT_LINK_CONTROL
>>>>>>>>>>> in dw_pcie_setup?
>>>>>>>>>>>
>>>>>>>>>>> I ask because if the user sets num-lanes in the DT, will it have the
>>>>>>>>>>> desired effect?
>>>>>>>>>> intel_pcie_max_link_width_setup() function will be called by sysfs attribute pcie_width_store() to change on the fly.
>>>>>>>>> Indeed, but a user may also set num-lanes in the device tree. I'm wondering
>>>>>>>>> if, when set in device-tree, it will have the desired effect. Because I don't
>>>>>>>>> see anything similar to PCIE_LCTLSTS_HW_AW_DIS in dw_pcie_setup which is what
>>>>>>>>> your function does here.
>>>>>>>>>
>>>>>>>>> I guess I'm trying to avoid the suitation where num-lanes doesn't have the
>>>>>>>>> desired effect and the only way to set the num-lanes is throught the sysfs
>>>>>>>>> control.
>>>>>>>> I will check this and get back to you.
>>>>>> intel_pcie_max_link_width_setup() does the lane resizing, which is
>>>>>> different from the link bring-up/establishment that happens during probe.
>>>>>> Also, the PCIE_LCTLSTS_HW_AW_DIS default value is 0, so it is not set
>>>>>> during probe or dw_pcie_setup().
>>>>>>
>>>>>> intel_pcie_max_link_width_setup() is programming as per the designware PCIe
>>>>>> controller databook instructions for lane resizing.
>>>>>>
>>>>>> Below lines are from Designware PCIe databook for lane resizing.
>>>>>>
>>>>>>     Program the TARGET_LINK_WIDTH[5:0] field of the MULTI_LANE_CONTROL_OFF
>>>>>> register.
>>>>>>     Program the DIRECT_LINK_WIDTH_CHANGE2 field of the MULTI_LANE_CONTROL_OFF
>>>>>> register.
>>>>>> It is assumed that the PCIE_CAP_HW_AUTO_WIDTH_DISABLE field in the
>>>>>> LINK_CONTROL_LINK_STATUS_REG register is 0.
>>>>> OK, so there is a difference between initial training and then resizing
>>>>> of the link. Given that this is not Intel specific, should this function
>>>>> exist within the designware driver for the benefit of others?
>>>> I am ok to add if maintainer agrees with it.
>> Gustavo Pimentel,
>>
>> Could you please let us know your opinion on this.
> > Hi, I just returned from parental leave, so I am still trying to get back
> > up to pace with the mailing list discussion.
>
> However your suggestion looks good, I agree that can go into DesignWare
> driver to be available to all.
Thanks, Gustavo, for the confirmation; I will add it in the next patch version.
>
> Just a small request, please do in general:
> s/designware/DesignWare

Sorry, I didn't understand this.

Regards,
Dilip

>
> Regards,
> Gustavo
>
>> [...]
>>
>>>>>>>>>>>> +}
>>>>>>>>>>>> +
>>>>>>>>>>>> +static void intel_pcie_port_logic_setup(struct intel_pcie_port *lpp)
>>>>>>>>>>>> +{
>>>>>>>>>>>> +	u32 val, mask, fts;
>>>>>>>>>>>> +
>>>>>>>>>>>> +	switch (lpp->max_speed) {
>>>>>>>>>>>> +	case PCIE_LINK_SPEED_GEN1:
>>>>>>>>>>>> +	case PCIE_LINK_SPEED_GEN2:
>>>>>>>>>>>> +		fts = PCIE_AFR_GEN12_FTS_NUM_DFT;
>>>>>>>>>>>> +		break;
>> [...]
>>>>>>>>>>>> +
>>>>>>>>>>>> +	if (device_property_read_u32(dev, "max-link-speed", &lpp->link_gen))
>>>>>>>>>>>> +		lpp->link_gen = 0; /* Fallback to auto */
>>>>>>>>>>> Is it possible to use of_pci_get_max_link_speed here instead?
>>>>>>>>>> Thanks for pointing it. of_pci_get_max_link_speed() can be used here. I will
>>>>>>>>>> update it in the next patch revision.
>>>>>> I just remembered: earlier we were using of_pci_get_max_link_speed() itself.
>>>>>> As per reviewer comments, we changed to device_property_read_u32() to maintain
>>>>>> symmetry in parsing device tree properties from the device node.
>>>>>> Let me know your view.
>>>>> I couldn't find an earlier version of this series that used of_pci_get_max_link_speed,
>>>>> have I missed it somewhere?
>>>> It happened in our internal review.
>>>> What's your suggestion please, either to go with symmetry in parsing
>>>> "device_property_read_u32()" or with pci helper function
>>>> "of_pci_get_max_link_speed"?
>>> I'd prefer the helper, the added benefit of this is additional error checking.
>>> It also means users can be confident that max-link-speed will behave in the
>>> same way as other host controllers that use this field.
>> Ok, I will update it in the next patch version.
>>
>>
>> Regards,
>>
>> Dilip
>>
>>> Thanks,
>>>
>>> Andrew Murray
>>>
>>>>>>>>>>>> +
>>>>>>>>>>>> +	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "app");
>>>>>>>>>>>> +	if (!res)
>>>>>>>>>>>> +		return -ENXIO;
>>>>>>>>>>>> +
>>>>>>>>>>>> +	lpp->app_base = devm_ioremap_resource(dev, res);
>>>>>>>>>>>> +	if (IS_ERR(lpp->app_base))
>>>>>>>>>>>> +		return PTR_ERR(lpp->app_base);
>>>>>>>>>>>> +
>>>>>>>>>>>> +	ret = intel_pcie_ep_rst_init(lpp);
>>>>>>>>>>>> +	if (ret)
>>>>>>>>>>>> +		return ret;
>>>>>>>>>>> Given that this is called from a function '..._get_resources' I don't think
>>>>>>>>>>> we should be resetting anything here.
>>>>>>>>>> Agree. I will move it out of get_resources().
>>>>>>>>>>>> +
>>>>>>>>>>>> +	lpp->phy = devm_phy_get(dev, "pciephy");
>>>>>>>>>>>> +	if (IS_ERR(lpp->phy)) {
>>>>>>>>>>>> +		ret = PTR_ERR(lpp->phy);
>>>>>>>>>>>> +		if (ret != -EPROBE_DEFER)
>>>>>>>>>>>> +			dev_err(dev, "couldn't get pcie-phy: %d\n", ret);
>>>>>>>>>>>> +		return ret;
>>>>>>>>>>>> +	}
>>>>>>>>>>>> +	return 0;
>>>>>>>>>>>> +}
>>>>>>>>>>>> +
>>>>>>>>>>>> +static void intel_pcie_deinit_phy(struct intel_pcie_port *lpp)
>>>>>>>>>>>> +{
>>>>>>>>>>>> +	phy_exit(lpp->phy);
>>>>>>>>>>>> +}
>>>>>>>>>>>> +
>>>>>>>>>>>> +static int intel_pcie_wait_l2(struct intel_pcie_port *lpp)
>>>>>>>>>>>> +{
>>>>>>>>>>>> +	u32 value;
>>>>>>>>>>>> +	int ret;
>>>>>>>>>>>> +
>>>>>>>>>>>> +	if (lpp->max_speed < PCIE_LINK_SPEED_GEN3)
>>>>>>>>>>>> +		return 0;
>>>>>>>>>>>> +
>>>>>>>>>>>> +	/* Send PME_TURN_OFF message */
>>>>>>>>>>>> +	pcie_app_wr_mask(lpp, PCIE_APP_MSG_XMT_PM_TURNOFF,
>>>>>>>>>>>> +			 PCIE_APP_MSG_XMT_PM_TURNOFF, PCIE_APP_MSG_CR);
>>>>>>>>>>>> +
>>>>>>>>>>>> +	/* Read PMC status and wait for falling into L2 link state */
>>>>>>>>>>>> +	ret = readl_poll_timeout(lpp->app_base + PCIE_APP_PMC, value,
>>>>>>>>>>>> +				 (value & PCIE_APP_PMC_IN_L2), 20,
>>>>>>>>>>>> +				 jiffies_to_usecs(5 * HZ));
>>>>>>>>>>>> +	if (ret)
>>>>>>>>>>>> +		dev_err(lpp->pci.dev, "PCIe link enter L2 timeout!\n");
>>>>>>>>>>>> +
>>>>>>>>>>>> +	return ret;
>>>>>>>>>>>> +}
>>>>>>>>>>>> +
>>>>>>>>>>>> +static void intel_pcie_turn_off(struct intel_pcie_port *lpp)
>>>>>>>>>>>> +{
>>>>>>>>>>>> +	if (dw_pcie_link_up(&lpp->pci))
>>>>>>>>>>>> +		intel_pcie_wait_l2(lpp);
>>>>>>>>>>>> +
>>>>>>>>>>>> +	/* Put EP in reset state */
>>>>>>>>>>> EP?
>>>>>>>>>> End point device. I will update it.
>>>>>>>>> Is this not a host bridge controller?
>>>>>>>> It is PERST#, which signals a hardware reset to the endpoint.
>>>>>>>>             /* Put EP in reset state */
>>>>>>>>             intel_pcie_device_rst_assert(lpp);
>>>>>>> OK.
>>>>>>>
>>>>>>>>>>>> +	intel_pcie_device_rst_assert(lpp);
>>>>>>>>>>>> +	pcie_rc_cfg_wr_mask(lpp, PCI_COMMAND_MEMORY, 0, PCI_COMMAND);
>>>>>>>>>>>> +}
>>>>>>>>>>>> +
>>>>>>>>>>>> +static int intel_pcie_host_setup(struct intel_pcie_port *lpp)
>>>>>>>>>>>> +{
>>>>>>>>>>>> +	int ret;
>>>>>>>>>>>> +
>>>>>>>>>>>> +	intel_pcie_core_rst_assert(lpp);
>>>>>>>>>>>> +	intel_pcie_device_rst_assert(lpp);
>>>>>>>>>>>> +
>>>>>>>>>>>> +	ret = phy_init(lpp->phy);
>>>>>>>>>>>> +	if (ret)
>>>>>>>>>>>> +		return ret;
>>>>>>>>>>>> +
>>>>>>>>>>>> +	intel_pcie_core_rst_deassert(lpp);
>>>>>>>>>>>> +
>>>>>>>>>>>> +	ret = clk_prepare_enable(lpp->core_clk);
>>>>>>>>>>>> +	if (ret) {
>>>>>>>>>>>> +		dev_err(lpp->pci.dev, "Core clock enable failed: %d\n", ret);
>>>>>>>>>>>> +		goto clk_err;
>>>>>>>>>>>> +	}
>>>>>>>>>>>> +
>>>>>>>>>>>> +	intel_pcie_rc_setup(lpp);
>>>>>>>>>>>> +	ret = intel_pcie_app_logic_setup(lpp);
>>>>>>>>>>>> +	if (ret)
>>>>>>>>>>>> +		goto app_init_err;
>>>>>>>>>>>> +
>>>>>>>>>>>> +	ret = intel_pcie_setup_irq(lpp);
>>>>>>>>>>>> +	if (!ret)
>>>>>>>>>>>> +		return ret;
>>>>>>>>>>>> +
>>>>>>>>>>>> +	intel_pcie_turn_off(lpp);
>>>>>>>>>>>> +app_init_err:
>>>>>>>>>>>> +	clk_disable_unprepare(lpp->core_clk);
>>>>>>>>>>>> +clk_err:
>>>>>>>>>>>> +	intel_pcie_core_rst_assert(lpp);
>>>>>>>>>>>> +	intel_pcie_deinit_phy(lpp);
>>>>>>>>>>>> +	return ret;
>>>>>>>>>>>> +}
>>>>>>>>>>>> +
>>>>>>>>>>>> +static ssize_t
>>>>>>>>>>>> +pcie_link_status_show(struct device *dev, struct device_attribute *attr,
>>>>>>>>>>>> +		      char *buf)
>>>>>>>>>>>> +{
>>>>>>>>>>>> +	u32 reg, width, gen;
>>>>>>>>>>>> +	struct intel_pcie_port *lpp;
>>>>>>>>>>>> +
>>>>>>>>>>>> +	lpp = dev_get_drvdata(dev);
>>>>>>>>>>>> +
>>>>>>>>>>>> +	reg = pcie_rc_cfg_rd(lpp, PCIE_LCTLSTS);
>>>>>>>>>>>> +	width = FIELD_GET(PCIE_LCTLSTS_NEGOTIATED_LINK_WIDTH, reg);
>>>>>>>>>>>> +	gen = FIELD_GET(PCIE_LCTLSTS_LINK_SPEED, reg);
>>>>>>>>>>>> +	if (gen > lpp->max_speed)
>>>>>>>>>>>> +		return -EINVAL;
>>>>>>>>>>>> +
>>>>>>>>>>>> +	return sprintf(buf, "Port %2u Width x%u Speed %s GT/s\n", lpp->id,
>>>>>>>>>>>> +		       width, pcie_link_gen_to_str(gen));
>>>>>>>>>>>> +}
>>>>>>>>>>>> +static DEVICE_ATTR_RO(pcie_link_status);
>>>>>>>>>>>> +
>>>>>>>>>>>> +static ssize_t pcie_speed_store(struct device *dev,
>>>>>>>>>>>> +				struct device_attribute *attr,
>>>>>>>>>>>> +				const char *buf, size_t len)
>>>>>>>>>>>> +{
>>>>>>>>>>>> +	struct intel_pcie_port *lpp;
>>>>>>>>>>>> +	unsigned long val;
>>>>>>>>>>>> +	int ret;
>>>>>>>>>>>> +
>>>>>>>>>>>> +	lpp = dev_get_drvdata(dev);
>>>>>>>>>>>> +
>>>>>>>>>>>> +	ret = kstrtoul(buf, 10, &val);
>>>>>>>>>>>> +	if (ret)
>>>>>>>>>>>> +		return ret;
>>>>>>>>>>>> +
>>>>>>>>>>>> +	if (val > lpp->max_speed)
>>>>>>>>>>>> +		return -EINVAL;
>>>>>>>>>>>> +
>>>>>>>>>>>> +	lpp->link_gen = val;
>>>>>>>>>>>> +	intel_pcie_max_speed_setup(lpp);
>>>>>>>>>>>> +	intel_pcie_speed_change_disable(lpp);
>>>>>>>>>>>> +	intel_pcie_speed_change_enable(lpp);
>>>>>>>>>>>> +
>>>>>>>>>>>> +	return len;
>>>>>>>>>>>> +}
>>>>>>>>>>>> +static DEVICE_ATTR_WO(pcie_speed);
>>>>>>>>>>>> +
>>>>>>>>>>>> +/*
>>>>>>>>>>>> + * Link width change on the fly is not always successful.
>>>>>>>>>>>> + * It also depends on the partner.
>>>>>>>>>>>> + */
>>>>>>>>>>>> +static ssize_t pcie_width_store(struct device *dev,
>>>>>>>>>>>> +				struct device_attribute *attr,
>>>>>>>>>>>> +				const char *buf, size_t len)
>>>>>>>>>>>> +{
>>>>>>>>>>>> +	struct intel_pcie_port *lpp;
>>>>>>>>>>>> +	unsigned long val;
>>>>>>>>>>>> +
>>>>>>>>>>>> +	lpp = dev_get_drvdata(dev);
>>>>>>>>>>>> +
>>>>>>>>>>>> +	if (kstrtoul(buf, 10, &val))
>>>>>>>>>>>> +		return -EINVAL;
>>>>>>>>>>>> +
>>>>>>>>>>>> +	if (val > lpp->max_width)
>>>>>>>>>>>> +		return -EINVAL;
>>>>>>>>>>>> +
>>>>>>>>>>>> +	lpp->lanes = val;
>>>>>>>>>>>> +	intel_pcie_max_link_width_setup(lpp);
>>>>>>>>>>>> +
>>>>>>>>>>>> +	return len;
>>>>>>>>>>>> +}
>>>>>>>>>>>> +static DEVICE_ATTR_WO(pcie_width);
>>>>>>>>>>> You mentioned that a use-case for changing width/speed on the fly was to
>>>>>>>>>>> control power consumption (and this also helps debugging issues). As I
>>>>>>>>>>> understand there is no current support for this in the kernel - yet it is
>>>>>>>>>>> something that would provide value.
>>>>>>>>>>>
>>>>>>>>>>> I haven't looked in much detail, however as I understand the PCI spec
>>>>>>>>>>> allows an upstream partner to change the link speed and retrain. (I'm not
>>>>>>>>>>> sure about link width). Given that we already have
>>>>>>>>>>> [current,max]_link_[speed,width] is sysfs for each PCI device, it would
>>>>>>>>>>> seem natural to extend this to allow for writing a max width or speed.
>>>>>>>>>>>
>>>>>>>>>>> So ideally this type of thing would be moved to the core or at least in
>>>>>>>>>>> the dwc driver. This way the benefits can be applied to more devices on
>>>>>>>>>>> larger PCI fabrics - Though perhaps others here will have a different view
>>>>>>>>>>> and I'm keen to hear them.
>>>>>>>>>>>
>>>>>>>>>>> I'm keen to limit the differences between the DWC controller drivers and
>>>>>>>>>>> unify common code - thus it would be helpful to have a justification as to
>>>>>>>>>>> why this is only relevant for this controller.
>>>>>>>>>>>
>>>>>>>>>>> For user-space only control, it is possible to achieve what you have here
>>>>>>>>>>> with userspace utilities (something like setpci) (assuming the standard
>>>>>>>>>>> looking registers you currently access are exposed in the normal config
>>>>>>>>>>> space way - though PCIe capabilities).
>>>>>>>>>>>
>>>>>>>>>>> My suggestion would be to drop these changes and later add something that
>>>>>>>>>>> can benefit more devices. In any case if these changes stay within this
>>>>>>>>>>> driver then it would be helpful to move them to a separate patch.
>>>>>>>>>> Sure, I will submit these entities in a separate patch.
>>>>>>>>> Please ensure that all supporting macros, functions and defines go with that
>>>>>>>>> other patch as well please.
>>>>>>>> Sure.
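As a concrete illustration of the setpci route suggested above: the target
link speed sits in bits [3:0] of the Link Control 2 register of the PCIe
capability, and a retrain is kicked off via the Retrain Link bit (bit 5) of
Link Control. A hedged sketch follows; the device address 01:00.0 and the
Gen1 value are examples only, and this assumes the controller exposes these
registers in the standard config-space way as the reviewer notes.

```shell
# Cap the speed: write Target Link Speed (Link Control 2, CAP_EXP+0x30);
# 0x1 = 2.5 GT/s (Gen1). value:mask syntax limits the write to bits [3:0].
setpci -s 01:00.0 CAP_EXP+0x30.w=0x0001:0x000f

# Trigger a retrain: set Retrain Link (Link Control, CAP_EXP+0x10, bit 5).
setpci -s 01:00.0 CAP_EXP+0x10.w=0x0020:0x0020

# Observe the negotiated result in Link Status (CAP_EXP+0x12).
setpci -s 01:00.0 CAP_EXP+0x12.w
```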
>>>>>>>>>>>> +
>>>>>>>>>>>> +static struct attribute *pcie_cfg_attrs[] = {
>>>>>>>>>>>> +	&dev_attr_pcie_link_status.attr,
>>>>>>>>>>>> +	&dev_attr_pcie_speed.attr,
>>>>>>>>>>>> +	&dev_attr_pcie_width.attr,
>>>>>>>>>>>> +	NULL,
>>>>>>>>>>>> +};
>>>>>>>>>>>> +ATTRIBUTE_GROUPS(pcie_cfg);
>>>>>>>>>>>> +
>>>>>>>>>>>> +static int intel_pcie_sysfs_init(struct intel_pcie_port *lpp)
>>>>>>>>>>>> +{
>>>>>>>>>>>> +	return devm_device_add_groups(lpp->pci.dev, pcie_cfg_groups);
>>>>>>>>>>>> +}
>>>>>>>>>>>> +
>>>>>>>>>>>> +static void __intel_pcie_remove(struct intel_pcie_port *lpp)
>>>>>>>>>>>> +{
>>>>>>>>>>>> +	pcie_rc_cfg_wr_mask(lpp, PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER,
>>>>>>>>>>>> +			    0, PCI_COMMAND);
>>>>>>>>>>>> +	intel_pcie_core_irq_disable(lpp);
>>>>>>>>>>>> +	intel_pcie_turn_off(lpp);
>>>>>>>>>>>> +	clk_disable_unprepare(lpp->core_clk);
>>>>>>>>>>>> +	intel_pcie_core_rst_assert(lpp);
>>>>>>>>>>>> +	intel_pcie_deinit_phy(lpp);
>>>>>>>>>>>> +}
>>>>>>>>>>>> +
>>>>>>>>>>>> +static int intel_pcie_remove(struct platform_device *pdev)
>>>>>>>>>>>> +{
>>>>>>>>>>>> +	struct intel_pcie_port *lpp = platform_get_drvdata(pdev);
>>>>>>>>>>>> +	struct pcie_port *pp = &lpp->pci.pp;
>>>>>>>>>>>> +
>>>>>>>>>>>> +	dw_pcie_host_deinit(pp);
>>>>>>>>>>>> +	__intel_pcie_remove(lpp);
>>>>>>>>>>>> +
>>>>>>>>>>>> +	return 0;
>>>>>>>>>>>> +}
>>>>>>>>>>>> +
>>>>>>>>>>>> +static int __maybe_unused intel_pcie_suspend_noirq(struct device *dev)
>>>>>>>>>>>> +{
>>>>>>>>>>>> +	struct intel_pcie_port *lpp = dev_get_drvdata(dev);
>>>>>>>>>>>> +	int ret;
>>>>>>>>>>>> +
>>>>>>>>>>>> +	intel_pcie_core_irq_disable(lpp);
>>>>>>>>>>>> +	ret = intel_pcie_wait_l2(lpp);
>>>>>>>>>>>> +	if (ret)
>>>>>>>>>>>> +		return ret;
>>>>>>>>>>>> +
>>>>>>>>>>>> +	intel_pcie_deinit_phy(lpp);
>>>>>>>>>>>> +	clk_disable_unprepare(lpp->core_clk);
>>>>>>>>>>>> +	return ret;
>>>>>>>>>>>> +}
>>>>>>>>>>>> +
>>>>>>>>>>>> +static int __maybe_unused intel_pcie_resume_noirq(struct device *dev)
>>>>>>>>>>>> +{
>>>>>>>>>>>> +	struct intel_pcie_port *lpp = dev_get_drvdata(dev);
>>>>>>>>>>>> +
>>>>>>>>>>>> +	return intel_pcie_host_setup(lpp);
>>>>>>>>>>>> +}
>>>>>>>>>>>> +
>>>>>>>>>>>> +static int intel_pcie_rc_init(struct pcie_port *pp)
>>>>>>>>>>>> +{
>>>>>>>>>>>> +	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
>>>>>>>>>>>> +	struct intel_pcie_port *lpp = dev_get_drvdata(pci->dev);
>>>>>>>>>>>> +	int ret;
>>>>>>>>>>>> +
>>>>>>>>>>>> +	/* RC/host initialization */
>>>>>>>>>>>> +	ret = intel_pcie_host_setup(lpp);
>>>>>>>>>>>> +	if (ret)
>>>>>>>>>>>> +		return ret;
>>>>>>>>>>> Insert new line here.
>>>>>>>>>> Ok.
>>>>>>>>>>>> +	ret = intel_pcie_sysfs_init(lpp);
>>>>>>>>>>>> +	if (ret)
>>>>>>>>>>>> +		__intel_pcie_remove(lpp);
>>>>>>>>>>>> +	return ret;
>>>>>>>>>>>> +}
>>>>>>>>>>>> +
>>>>>>>>>>>> +int intel_pcie_msi_init(struct pcie_port *pp)
>>>>>>>>>>>> +{
>>>>>>>>>>>> +	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
>>>>>>>>>>>> +
>>>>>>>>>>>> +	dev_dbg(pci->dev, "PCIe MSI/MSIx is handled by MSI in x86 processor\n");
>>>>>>>>>>> What about other processors? (Noting that this driver doesn't depend on
>>>>>>>>>>> any specific ARCH in the KConfig).
>>>>>>>>>> Agree. I will mark the dependency in Kconfig.
>>>>>>>>> OK, please also see how other drivers use the COMPILE_TEST Kconfig option.
>>>>>>>> Ok sure.
>>>>>>>>> I'd suggest that the dev_dbg just becomes a code comment.
>>>>>> Ok
>>>>>>>>>>>> +	return 0;
>>>>>>>>>>>> +}
>>>>>>>>>>>> +
>>>>>>>>>>>> +u64 intel_pcie_cpu_addr(struct dw_pcie *pcie, u64 cpu_addr)
>>>>>>>>>>>> +{
>>>>>>>>>>>> +	return cpu_addr + BUS_IATU_OFFS;
>>>>>>>>>>>> +}
>>>>>>>>>>>> +
>>>>>>>>>>>> +static const struct dw_pcie_ops intel_pcie_ops = {
>>>>>>>>>>>> +	.cpu_addr_fixup = intel_pcie_cpu_addr,
>>>>>>>>>>>> +};
>>>>>>>>>>>> +
>>>>>>>>>>>> +static const struct dw_pcie_host_ops intel_pcie_dw_ops = {
>>>>>>>>>>>> +	.host_init =		intel_pcie_rc_init,
>>>>>>>>>>>> +	.msi_host_init =	intel_pcie_msi_init,
>>>>>>>>>>>> +};
>>>>>>>>>>>> +
>>>>>>>>>>>> +static const struct intel_pcie_soc pcie_data = {
>>>>>>>>>>>> +	.pcie_ver =		0x520A,
>>>>>>>>>>>> +	.pcie_atu_offset =	0xC0000,
>>>>>>>>>>>> +	.num_viewport =		3,
>>>>>>>>>>>> +};
>>>>>>>>>>>> +
>>>>>>>>>>>> +static int intel_pcie_probe(struct platform_device *pdev)
>>>>>>>>>>>> +{
>>>>>>>>>>>> +	const struct intel_pcie_soc *data;
>>>>>>>>>>>> +	struct device *dev = &pdev->dev;
>>>>>>>>>>>> +	struct intel_pcie_port *lpp;
>>>>>>>>>>>> +	struct pcie_port *pp;
>>>>>>>>>>>> +	struct dw_pcie *pci;
>>>>>>>>>>>> +	int ret;
>>>>>>>>>>>> +
>>>>>>>>>>>> +	lpp = devm_kzalloc(dev, sizeof(*lpp), GFP_KERNEL);
>>>>>>>>>>>> +	if (!lpp)
>>>>>>>>>>>> +		return -ENOMEM;
>>>>>>>>>>>> +
>>>>>>>>>>>> +	platform_set_drvdata(pdev, lpp);
>>>>>>>>>>>> +	pci = &lpp->pci;
>>>>>>>>>>>> +	pci->dev = dev;
>>>>>>>>>>>> +	pp = &pci->pp;
>>>>>>>>>>>> +
>>>>>>>>>>>> +	ret = intel_pcie_get_resources(pdev);
>>>>>>>>>>>> +	if (ret)
>>>>>>>>>>>> +		return ret;
>>>>>>>>>>>> +
>>>>>>>>>>>> +	data = device_get_match_data(dev);
>>>>>>>>>>>> +	pci->ops = &intel_pcie_ops;
>>>>>>>>>>>> +	pci->version = data->pcie_ver;
>>>>>>>>>>>> +	pci->atu_base = pci->dbi_base + data->pcie_atu_offset;
>>>>>>>>>>>> +	pp->ops = &intel_pcie_dw_ops;
>>>>>>>>>>>> +
>>>>>>>>>>>> +	ret = dw_pcie_host_init(pp);
>>>>>>>>>>>> +	if (ret) {
>>>>>>>>>>>> +		dev_err(dev, "cannot initialize host\n");
>>>>>>>>>>>> +		return ret;
>>>>>>>>>>>> +	}
>>>>>>>>>>> Add a new line after the closing brace.
>>>>>>>>>> Ok
>>>>>>>>>>>> +	/* Intel PCIe doesn't configure IO region, so configure
>>>>>>>>>>>> +	 * viewport to not to access IO region during register
>>>>>>>>>>>> +	 * read write operations.
>>>>>>>>>>>> +	 */
>>>>>>>>>>>> +	pci->num_viewport = data->num_viewport;
>>>>>>>>>>>> +	dev_info(dev,
>>>>>>>>>>>> +		 "Intel AXI PCIe Root Complex Port %d Init Done\n", lpp->id);
>>>>>>>>>>>> +	return ret;
>>>>>>>>>>>> +}
>>>>>>>>>>>> +
>>>>>>>>>>>> +static const struct dev_pm_ops intel_pcie_pm_ops = {
>>>>>>>>>>>> +	SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(intel_pcie_suspend_noirq,
>>>>>>>>>>>> +				      intel_pcie_resume_noirq)
>>>>>>>>>>>> +};
>>>>>>>>>>>> +
>>>>>>>>>>>> +static const struct of_device_id of_intel_pcie_match[] = {
>>>>>>>>>>>> +	{ .compatible = "intel,lgm-pcie", .data = &pcie_data },
>>>>>>>>>>>> +	{}
>>>>>>>>>>>> +};
>>>>>>>>>>>> +
>>>>>>>>>>>> +static struct platform_driver intel_pcie_driver = {
>>>>>>>>>>>> +	.probe = intel_pcie_probe,
>>>>>>>>>>>> +	.remove = intel_pcie_remove,
>>>>>>>>>>>> +	.driver = {
>>>>>>>>>>>> +		.name = "intel-lgm-pcie",
>>>>>>>>>>> Is there a reason why the we use intel-lgm-pcie here and pcie-intel-axi
>>>>>>>>>>> elsewhere? What does lgm mean?
>>>>>>>>>> lgm is the name of the Intel SoC. I will rename it to pcie-intel-axi to be
>>>>>>>>>> generic.
>>>>>>>>> I'm keen to ensure that it is consistently named. I've seen other comments
>>>>>>>>> regarding what the name should be - I don't have a strong opinion though
>>>>>>>>> I do think that *-axi may be too generic.
>>>>>> This PCIe driver is for the Intel Gateway SoCs. So how about naming it as
>>>>>> "pcie-intel-gw"; pcie-intel-gw.c and Kconfig as PCIE_INTEL_GW.
>>>>> I don't have a problem with this, though others may have differing views.
>>>> Sure. thank you.
>>>>> Thanks,
>>>>>
>>>>> Andrew Murray
>>>>>
>>>>>> Regards,
>>>>>> Dilip
>>>>>>
>>>>>>>> Ok, I will check and get back to you on this.
>>>>>>>> Regards,
>>>>>>> Thanks,
>>>>>>>
>>>>>>> Andrew Murray
>>>>>>>
>>>>>>>> Dilip
>>>>>>>>
>>>>>>>>> Thanks,
>>>>>>>>>
>>>>>>>>> Andrew Murray
>>>>>>>>>
>>>>>>>>>>> Thanks,
>>>>>>>>>>>
>>>>>>>>>>> Andrew Murray
>>>>>>>>>>>
>>>>>>>>>>>> +		.of_match_table = of_intel_pcie_match,
>>>>>>>>>>>> +		.pm = &intel_pcie_pm_ops,
>>>>>>>>>>>> +	},
>>>>>>>>>>>> +};
>>>>>>>>>>>> +builtin_platform_driver(intel_pcie_driver);
>>>>>>>>>>>> -- 
>>>>>>>>>>>> 2.11.0
>>>>>>>>>>>>
>


* Re: [PATCH v3 2/2] dwc: PCI: intel: Intel PCIe RC controller driver
  2019-09-13  9:20                         ` Dilip Kota
@ 2019-09-13 10:12                           ` andriy.shevchenko
  2019-09-16  3:03                             ` Dilip Kota
  0 siblings, 1 reply; 37+ messages in thread
From: andriy.shevchenko @ 2019-09-13 10:12 UTC (permalink / raw)
  To: Dilip Kota
  Cc: Gustavo Pimentel, Andrew Murray, jingoohan1, lorenzo.pieralisi,
	robh, martin.blumenstingl, linux-pci, hch, devicetree,
	linux-kernel, cheol.yong.kim, chuanhua.lei, qi-ming.wu

On Fri, Sep 13, 2019 at 05:20:26PM +0800, Dilip Kota wrote:
> On 9/12/2019 6:49 PM, Gustavo Pimentel wrote:
> > On Thu, Sep 12, 2019 at 10:23:31, Dilip Kota
> > <eswara.kota@linux.intel.com> wrote:

> > Hi, I just returned from parental leave, so I am still trying to get back
> > up to pace with the mailing list discussion.
> > 
> > However your suggestion looks good, I agree that can go into DesignWare
> > driver to be available to all.
> Thanks, Gustavo, for the confirmation; I will add it in the next patch version.
> > 
> > Just a small request, please do in general:
> > s/designware/DesignWare
> 
> Sorry, I didn't understand this.

It means the reviewer is asking you to write the name in that form;
in other words: designware -> DesignWare.

`man 1 sed` gives you more about it :-)

-- 
With Best Regards,
Andy Shevchenko
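Spelled out, the substitution Andy's `s/designware/DesignWare/` notation
alludes to is just a sed one-liner (run against the driver sources; shown
here on a pipe for illustration):

```shell
# Apply the reviewer's requested rename to a stream of text.
printf 'synopsys designware pcie core\n' | sed 's/designware/DesignWare/g'
# prints: synopsys DesignWare pcie core
```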




* Re: [PATCH v3 2/2] dwc: PCI: intel: Intel PCIe RC controller driver
  2019-09-13 10:12                           ` andriy.shevchenko
@ 2019-09-16  3:03                             ` Dilip Kota
  0 siblings, 0 replies; 37+ messages in thread
From: Dilip Kota @ 2019-09-16  3:03 UTC (permalink / raw)
  To: andriy.shevchenko
  Cc: Gustavo Pimentel, Andrew Murray, jingoohan1, lorenzo.pieralisi,
	robh, martin.blumenstingl, linux-pci, hch, devicetree,
	linux-kernel, cheol.yong.kim, chuanhua.lei, qi-ming.wu


On 9/13/2019 6:12 PM, andriy.shevchenko@intel.com wrote:
> On Fri, Sep 13, 2019 at 05:20:26PM +0800, Dilip Kota wrote:
>> On 9/12/2019 6:49 PM, Gustavo Pimentel wrote:
>>> On Thu, Sep 12, 2019 at 10:23:31, Dilip Kota
>>> <eswara.kota@linux.intel.com> wrote:
>>> Hi, I just returned from parental leave, so I am still trying to get back
>>> up to pace with the mailing list discussion.
>>>
>>> However your suggestion looks good, I agree that can go into DesignWare
>>> driver to be available to all.
>> Thanks, Gustavo, for the confirmation; I will add it in the next patch version.
>>> Just a small request, please do in general:
>>> s/designware/DesignWare
>> Sorry, I didn't understand this.
> It means the reviewer asks you to name DesignWare in this form,
> i.o.w. designware -> DesignWare.
>
> `man 1 sed` gives you more about it :-)
Thanks, Andy, for clarifying it.
Could you please also let me know your opinion on the driver
naming? Below is the mail snippet.

======
 >>> Add support to PCIe RC controller on Intel Universal
 >>> Gateway SoC. PCIe controller is based on Synopsys
 >>> Designware pci core.
 >>> +config PCIE_INTEL_AXI
 > [Andy]: I think that name here is too generic. Classical x86 seems 
not to use this.

[Dilip Kota]:
This PCIe driver is for the Intel Gateway SoCs. So how about naming it as
"pcie-intel-gw"; pcie-intel-gw.c and Kconfig as PCIE_INTEL_GW.

Andrew Murray is ok with this naming, please let me know your view.
=======
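A sketch of what the proposed renaming could look like in the DesignWare Kconfig; the entry contents below (dependencies, selects, help text) are illustrative assumptions, not the submitted text:

```kconfig
config PCIE_INTEL_GW
	bool "Intel Gateway PCIe host controller support"
	depends on OF && (X86 || COMPILE_TEST)
	select PCIE_DW_HOST
	help
	  Say 'Y' here to enable PCIe host controller support on Intel
	  Gateway SoCs, based on the Synopsys DesignWare PCIe core.
```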

Regards,
Dilip

>


* Re: [PATCH v3 1/2] dt-bindings: PCI: intel: Add YAML schemas for the PCIe RC controller
  2019-09-06  9:19     ` Andy Shevchenko
  2019-09-09  6:52       ` Dilip Kota
@ 2019-09-17 18:33       ` Rob Herring
  2019-09-18  6:48         ` Dilip Kota
  1 sibling, 1 reply; 37+ messages in thread
From: Rob Herring @ 2019-09-17 18:33 UTC (permalink / raw)
  To: Andy Shevchenko
  Cc: Martin Blumenstingl, Dilip Kota, jingoohan1, gustavo.pimentel,
	lorenzo.pieralisi, linux-pci, hch, devicetree, linux-kernel,
	cheol.yong.kim, chuanhua.lei, qi-ming.wu

On Fri, Sep 06, 2019 at 12:19:50PM +0300, Andy Shevchenko wrote:
> On Thu, Sep 05, 2019 at 10:31:29PM +0200, Martin Blumenstingl wrote:
> > On Wed, Sep 4, 2019 at 12:11 PM Dilip Kota <eswara.kota@linux.intel.com> wrote:
> 
> > > +  phy-names:
> > > +    const: pciephy
> > the most popular choice in Documentation/devicetree/bindings/pci/ is "pcie-phy"
> > if Rob is happy with "pciephy" (which is already part of two other
> > bindings) then I'm happy with "pciephy" as well
> 
> I'm not Rob, though I consider it more practical to have a hyphenated variant.

*-names is kind of pointless when there is only one entry and 'foo' in 
the values of 'foo-names' is redundant.

Rob


* Re: [PATCH v3 1/2] dt-bindings: PCI: intel: Add YAML schemas for the PCIe RC controller
  2019-09-04 10:10 ` [PATCH v3 1/2] dt-bindings: PCI: intel: Add YAML schemas for the PCIe RC controller Dilip Kota
  2019-09-05  2:23   ` Chuan Hua, Lei
  2019-09-05 20:31   ` Martin Blumenstingl
@ 2019-09-17 18:40   ` Rob Herring
  2019-09-18  6:56     ` Dilip Kota
  2 siblings, 1 reply; 37+ messages in thread
From: Rob Herring @ 2019-09-17 18:40 UTC (permalink / raw)
  To: Dilip Kota
  Cc: jingoohan1, gustavo.pimentel, lorenzo.pieralisi,
	martin.blumenstingl, linux-pci, hch, devicetree, linux-kernel,
	andriy.shevchenko, cheol.yong.kim, chuanhua.lei, qi-ming.wu

On Wed, Sep 04, 2019 at 06:10:30PM +0800, Dilip Kota wrote:
> The Intel PCIe RC controller is Synopsys Designware
> based PCIe core. Add YAML schemas for PCIe in RC mode
> present in Intel Universal Gateway soc.
> 
> Signed-off-by: Dilip Kota <eswara.kota@linux.intel.com>
> ---
> changes on v3:
> 	Add the appropriate License-Identifier
> 	Rename intel,rst-interval to 'reset-assert-us'
> 	Add additionalProperties: false
> 	Rename phy-names to 'pciephy'
> 	Remove the dtsi node split of SoC and board in the example
> 	Add #interrupt-cells = <1>; or else interrupt parsing will fail
> 	Name yaml file with compatible name
> 
>  .../devicetree/bindings/pci/intel,lgm-pcie.yaml    | 137 +++++++++++++++++++++
>  1 file changed, 137 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/pci/intel,lgm-pcie.yaml
> 
> diff --git a/Documentation/devicetree/bindings/pci/intel,lgm-pcie.yaml b/Documentation/devicetree/bindings/pci/intel,lgm-pcie.yaml
> new file mode 100644
> index 000000000000..5e5cc7fd66cd
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/pci/intel,lgm-pcie.yaml
> @@ -0,0 +1,137 @@
> +# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
> +%YAML 1.2
> +---
> +$id: http://devicetree.org/schemas/pci/intel-pcie.yaml#
> +$schema: http://devicetree.org/meta-schemas/core.yaml#
> +
> +title: Intel AXI bus based PCI express root complex
> +
> +maintainers:
> +  - Dilip Kota <eswara.kota@linux.intel.com>
> +
> +properties:
> +  compatible:
> +    const: intel,lgm-pcie
> +

> +  device_type:
> +    const: pci
> +
> +  "#address-cells":
> +    const: 3
> +
> +  "#size-cells":
> +    const: 2

These all belong in a common schema.

> +
> +  reg:
> +    items:
> +      - description: Controller control and status registers.
> +      - description: PCIe configuration registers.
> +      - description: Controller application registers.
> +
> +  reg-names:
> +    items:
> +      - const: dbi
> +      - const: config
> +      - const: app
> +
> +  ranges:
> +    description: Ranges for the PCI memory and I/O regions.

And this.

> +
> +  resets:
> +    maxItems: 1
> +
> +  clocks:
> +    description: PCIe registers interface clock.
> +
> +  phys:
> +    maxItems: 1
> +
> +  phy-names:
> +    const: pciephy
> +
> +  reset-gpios:
> +    maxItems: 1
> +

> +  num-lanes:
> +    description: Number of lanes to use for this port.
> +
> +  linux,pci-domain:
> +    $ref: /schemas/types.yaml#/definitions/uint32
> +    description: PCI domain ID.

These 2 also should be common.

> +
> +  interrupts:
> +    description: PCIe core integrated miscellaneous interrupt.

How many? No need for description if there's only 1.

> +
> +  '#interrupt-cells':
> +    const: 1

> +
> +  interrupt-map-mask:
> +    description: Standard PCI IRQ mapping properties.
> +
> +  interrupt-map:
> +    description: Standard PCI IRQ mapping properties.
> +
> +  max-link-speed:
> +    description: Specify PCI Gen for link capability.
> +
> +  bus-range:
> +    description: Range of bus numbers associated with this controller.

All common.

> +
> +  reset-assert-ms:
> +    description: |
> +      Device reset interval in ms.
> > +      Some devices need an interval up to 500ms. By default it is 100ms.

This is a property of a device, so it belongs in a device node. How 
would you deal with this without DT?

> +
> +required:
> +  - compatible
> +  - device_type
> +  - reg
> +  - reg-names
> +  - ranges
> +  - resets
> +  - clocks
> +  - phys
> +  - phy-names
> +  - reset-gpios
> +  - num-lanes
> +  - linux,pci-domain
> +  - interrupts
> +  - interrupt-map
> +  - interrupt-map-mask
> +
> +additionalProperties: false
> +
> +examples:
> +  - |
> +    pcie10:pcie@d0e00000 {
> +      compatible = "intel,lgm-pcie";
> +      device_type = "pci";
> +      #address-cells = <3>;
> +      #size-cells = <2>;
> +      reg = <
> +            0xd0e00000 0x1000
> +            0xd2000000 0x800000
> +            0xd0a41000 0x1000
> +            >;
> +      reg-names = "dbi", "config", "app";
> +      linux,pci-domain = <0>;
> +      max-link-speed = <4>;
> +      bus-range = <0x00 0x08>;
> +      interrupt-parent = <&ioapic1>;
> +      interrupts = <67 1>;
> +      #interrupt-cells = <1>;
> +      interrupt-map-mask = <0 0 0 0x7>;
> +      interrupt-map = <0 0 0 1 &ioapic1 27 1>,
> +                      <0 0 0 2 &ioapic1 28 1>,
> +                      <0 0 0 3 &ioapic1 29 1>,
> +                      <0 0 0 4 &ioapic1 30 1>;
> +      ranges = <0x02000000 0 0xd4000000 0xd4000000 0 0x04000000>;
> +      resets = <&rcu0 0x50 0>;
> +      clocks = <&cgu0 LGM_GCLK_PCIE10>;
> +      phys = <&cb0phy0>;
> +      phy-names = "pciephy";
> +      status = "okay";
> +      reset-assert-ms = <500>;
> +      reset-gpios = <&gpio0 3 GPIO_ACTIVE_LOW>;
> +      num-lanes = <2>;
> +    };
> -- 
> 2.11.0
> 


* Re: [PATCH v3 1/2] dt-bindings: PCI: intel: Add YAML schemas for the PCIe RC controller
  2019-09-17 18:33       ` Rob Herring
@ 2019-09-18  6:48         ` Dilip Kota
  0 siblings, 0 replies; 37+ messages in thread
From: Dilip Kota @ 2019-09-18  6:48 UTC (permalink / raw)
  To: Rob Herring, Andy Shevchenko
  Cc: Martin Blumenstingl, jingoohan1, gustavo.pimentel,
	lorenzo.pieralisi, linux-pci, hch, devicetree, linux-kernel,
	cheol.yong.kim, chuanhua.lei, qi-ming.wu

Hi Rob,

On 9/18/2019 2:33 AM, Rob Herring wrote:
> On Fri, Sep 06, 2019 at 12:19:50PM +0300, Andy Shevchenko wrote:
>> On Thu, Sep 05, 2019 at 10:31:29PM +0200, Martin Blumenstingl wrote:
>>> On Wed, Sep 4, 2019 at 12:11 PM Dilip Kota <eswara.kota@linux.intel.com> wrote:
>>>> +  phy-names:
>>>> +    const: pciephy
>>> the most popular choice in Documentation/devicetree/bindings/pci/ is "pcie-phy"
>>> if Rob is happy with "pciephy" (which is already part of two other
>>> bindings) then I'm happy with "pciephy" as well
>> I'm not Rob, though I consider it more practical to have a hyphenated variant.
> *-names is kind of pointless when there is only one entry and 'foo' in
> the values of 'foo-names' is redundant.

Ok, I will change it to:

     phy-names:
         const: pcie
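Taken together with Rob's point that a `*-names` entry adds little when there is only one item, a sketch of the resulting binding fragment (final layout assumed, not from the posted patch):

```yaml
# Sketch of the revised fragment; with a single PHY the -names
# property is largely redundant per the review comment above.
phys:
  maxItems: 1

phy-names:
  const: pcie
```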

Regards,
Dilip

>
> Rob


* Re: [PATCH v3 1/2] dt-bindings: PCI: intel: Add YAML schemas for the PCIe RC controller
  2019-09-17 18:40   ` Rob Herring
@ 2019-09-18  6:56     ` Dilip Kota
       [not found]       ` <b7e549bb-b46c-c393-50ac-9ef3b198fd49@linux.intel.com>
  0 siblings, 1 reply; 37+ messages in thread
From: Dilip Kota @ 2019-09-18  6:56 UTC (permalink / raw)
  To: Rob Herring
  Cc: jingoohan1, gustavo.pimentel, lorenzo.pieralisi,
	martin.blumenstingl, linux-pci, hch, devicetree, linux-kernel,
	andriy.shevchenko, cheol.yong.kim, chuanhua.lei, qi-ming.wu


On 9/18/2019 2:40 AM, Rob Herring wrote:
> On Wed, Sep 04, 2019 at 06:10:30PM +0800, Dilip Kota wrote:
>> The Intel PCIe RC controller is Synopsys Designware
>> based PCIe core. Add YAML schemas for PCIe in RC mode
>> present in Intel Universal Gateway soc.
>>
>> Signed-off-by: Dilip Kota <eswara.kota@linux.intel.com>
>> ---
>> changes on v3:
>> 	Add the appropriate License-Identifier
>> 	Rename intel,rst-interval to 'reset-assert-us'
>> 	Add additionalProperties: false
>> 	Rename phy-names to 'pciephy'
>> 	Remove the dtsi node split of SoC and board in the example
>> 	Add #interrupt-cells = <1>; or else interrupt parsing will fail
>> 	Name yaml file with compatible name
>>
>>   .../devicetree/bindings/pci/intel,lgm-pcie.yaml    | 137 +++++++++++++++++++++
>>   1 file changed, 137 insertions(+)
>>   create mode 100644 Documentation/devicetree/bindings/pci/intel,lgm-pcie.yaml
>>
>> diff --git a/Documentation/devicetree/bindings/pci/intel,lgm-pcie.yaml b/Documentation/devicetree/bindings/pci/intel,lgm-pcie.yaml
>> new file mode 100644
>> index 000000000000..5e5cc7fd66cd
>> --- /dev/null
>> +++ b/Documentation/devicetree/bindings/pci/intel,lgm-pcie.yaml
>> @@ -0,0 +1,137 @@
>> +# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
>> +%YAML 1.2
>> +---
>> +$id: http://devicetree.org/schemas/pci/intel-pcie.yaml#
>> +$schema: http://devicetree.org/meta-schemas/core.yaml#
>> +
>> +title: Intel AXI bus based PCI express root complex
>> +
>> +maintainers:
>> +  - Dilip Kota <eswara.kota@linux.intel.com>
>> +
>> +properties:
>> +  compatible:
>> +    const: intel,lgm-pcie
>> +
>> +  device_type:
>> +    const: pci
>> +
>> +  "#address-cells":
>> +    const: 3
>> +
>> +  "#size-cells":
>> +    const: 2
> These all belong in a common schema.
>
>> +
>> +  reg:
>> +    items:
>> +      - description: Controller control and status registers.
>> +      - description: PCIe configuration registers.
>> +      - description: Controller application registers.
>> +
>> +  reg-names:
>> +    items:
>> +      - const: dbi
>> +      - const: config
>> +      - const: app
>> +
>> +  ranges:
>> +    description: Ranges for the PCI memory and I/O regions.
> And this.
>
>> +
>> +  resets:
>> +    maxItems: 1
>> +
>> +  clocks:
>> +    description: PCIe registers interface clock.
>> +
>> +  phys:
>> +    maxItems: 1
>> +
>> +  phy-names:
>> +    const: pciephy
>> +
>> +  reset-gpios:
>> +    maxItems: 1
>> +
>> +  num-lanes:
>> +    description: Number of lanes to use for this port.
>> +
>> +  linux,pci-domain:
>> +    $ref: /schemas/types.yaml#/definitions/uint32
>> +    description: PCI domain ID.
> These 2 also should be common.
>
>> +
>> +  interrupts:
>> +    description: PCIe core integrated miscellaneous interrupt.
> How many? No need for description if there's only 1.
>
>> +
>> +  '#interrupt-cells':
>> +    const: 1
>> +
>> +  interrupt-map-mask:
>> +    description: Standard PCI IRQ mapping properties.
>> +
>> +  interrupt-map:
>> +    description: Standard PCI IRQ mapping properties.
>> +
>> +  max-link-speed:
>> +    description: Specify PCI Gen for link capability.
>> +
>> +  bus-range:
>> +    description: Range of bus numbers associated with this controller.
> All common.
You mean to remove all the common schema entries' descriptions?
In most of the Documentation/devicetree/bindings/pci documents all the 
common entries are described, so I followed the same.
>
>> +
>> +  reset-assert-ms:
>> +    description: |
>> +      Device reset interval in ms.
>> +      Some devices need an interval up to 500ms. By default it is 100ms.
> This is a property of a device, so it belongs in a device node. How
> would you deal with this without DT?
This property is for the PCIe RC to keep a delay before notifying the 
reset to the device.
If this entry is not present, the PCIe driver will set a default value of 100ms.
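As a sketch, a board whose attached device needs the longer interval would set the property on the RC node; the node label and value below are illustrative:

```dts
/* Illustrative board snippet: hold the device reset for 500 ms.
 * Omitting reset-assert-ms falls back to the driver's 100 ms default. */
&pcie10 {
	reset-assert-ms = <500>;
};
```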
>
>> +
>> +required:
>> +  - compatible
>> +  - device_type
>> +  - reg
>> +  - reg-names
>> +  - ranges
>> +  - resets
>> +  - clocks
>> +  - phys
>> +  - phy-names
>> +  - reset-gpios
>> +  - num-lanes
>> +  - linux,pci-domain
>> +  - interrupts
>> +  - interrupt-map
>> +  - interrupt-map-mask
>> +
>> +additionalProperties: false
>> +
>> +examples:
>> +  - |
>> +    pcie10:pcie@d0e00000 {
>> +      compatible = "intel,lgm-pcie";
>> +      device_type = "pci";
>> +      #address-cells = <3>;
>> +      #size-cells = <2>;
>> +      reg = <
>> +            0xd0e00000 0x1000
>> +            0xd2000000 0x800000
>> +            0xd0a41000 0x1000
>> +            >;
>> +      reg-names = "dbi", "config", "app";
>> +      linux,pci-domain = <0>;
>> +      max-link-speed = <4>;
>> +      bus-range = <0x00 0x08>;
>> +      interrupt-parent = <&ioapic1>;
>> +      interrupts = <67 1>;
>> +      #interrupt-cells = <1>;
>> +      interrupt-map-mask = <0 0 0 0x7>;
>> +      interrupt-map = <0 0 0 1 &ioapic1 27 1>,
>> +                      <0 0 0 2 &ioapic1 28 1>,
>> +                      <0 0 0 3 &ioapic1 29 1>,
>> +                      <0 0 0 4 &ioapic1 30 1>;
>> +      ranges = <0x02000000 0 0xd4000000 0xd4000000 0 0x04000000>;
>> +      resets = <&rcu0 0x50 0>;
>> +      clocks = <&cgu0 LGM_GCLK_PCIE10>;
>> +      phys = <&cb0phy0>;
>> +      phy-names = "pciephy";
>> +      status = "okay";
>> +      reset-assert-ms = <500>;
>> +      reset-gpios = <&gpio0 3 GPIO_ACTIVE_LOW>;
>> +      num-lanes = <2>;
>> +    };
>> -- 
>> 2.11.0
>>


* Re: Fwd: Re: [PATCH v3 1/2] dt-bindings: PCI: intel: Add YAML schemas for the PCIe RC controller
       [not found]       ` <b7e549bb-b46c-c393-50ac-9ef3b198fd49@linux.intel.com>
@ 2019-10-03  6:35         ` " Dilip Kota
  0 siblings, 0 replies; 37+ messages in thread
From: Dilip Kota @ 2019-10-03  6:35 UTC (permalink / raw)
  To: Rob Herring
  Cc: jingoohan1, gustavo.pimentel, lorenzo.pieralisi,
	martin.blumenstingl, linux-pci, hch, devicetree, linux-kernel,
	andriy.shevchenko, cheol.yong.kim, chuanhua.lei, qi-ming.wu

Hi Rob,


On 18/9/2019 2:56 PM, Dilip Kota wrote:
> On 9/18/2019 2:40 AM, Rob Herring wrote:
>> On Wed, Sep 04, 2019 at 06:10:30PM +0800, Dilip Kota wrote:
>>> The Intel PCIe RC controller is Synopsys Designware
>>> based PCIe core. Add YAML schemas for PCIe in RC mode
>>> present in Intel Universal Gateway soc.
>>>
>>> Signed-off-by: Dilip Kota <eswara.kota@linux.intel.com>
>>> ---
>>> changes on v3:
>>> 	Add the appropriate License-Identifier
>>> 	Rename intel,rst-interval to 'reset-assert-us'
>>> 	Add additionalProperties: false
>>> 	Rename phy-names to 'pciephy'
>>> 	Remove the dtsi node split of SoC and board in the example
>>> 	Add #interrupt-cells = <1>; or else interrupt parsing will fail
>>> 	Name yaml file with compatible name
>>>
>>>  .../devicetree/bindings/pci/intel,lgm-pcie.yaml    | 137 +++++++++++++++++++++
>>>  1 file changed, 137 insertions(+)
>>>  create mode 100644 Documentation/devicetree/bindings/pci/intel,lgm-pcie.yaml
>>>
>>> diff --git a/Documentation/devicetree/bindings/pci/intel,lgm-pcie.yaml b/Documentation/devicetree/bindings/pci/intel,lgm-pcie.yaml
>>> new file mode 100644
>>> index 000000000000..5e5cc7fd66cd
>>> --- /dev/null
>>> +++ b/Documentation/devicetree/bindings/pci/intel,lgm-pcie.yaml
>>> @@ -0,0 +1,137 @@
>>> +# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
>>> +%YAML 1.2
>>> +---
>>> +$id: http://devicetree.org/schemas/pci/intel-pcie.yaml#
>>> +$schema: http://devicetree.org/meta-schemas/core.yaml#
>>> +
>>> +title: Intel AXI bus based PCI express root complex
>>> +
>>> +maintainers:
>>> +  - Dilip Kota <eswara.kota@linux.intel.com>
>>> +
>>> +properties:
>>> +  compatible:
>>> +    const: intel,lgm-pcie
>>> +
>>> +  device_type:
>>> +    const: pci
>>> +
>>> +  "#address-cells":
>>> +    const: 3
>>> +
>>> +  "#size-cells":
>>> +    const: 2
>> These all belong in a common schema.
>>
>>> +
>>> +  reg:
>>> +    items:
>>> +      - description: Controller control and status registers.
>>> +      - description: PCIe configuration registers.
>>> +      - description: Controller application registers.
>>> +
>>> +  reg-names:
>>> +    items:
>>> +      - const: dbi
>>> +      - const: config
>>> +      - const: app
>>> +
>>> +  ranges:
>>> +    description: Ranges for the PCI memory and I/O regions.
>> And this.
>>
>>> +
>>> +  resets:
>>> +    maxItems: 1
>>> +
>>> +  clocks:
>>> +    description: PCIe registers interface clock.
>>> +
>>> +  phys:
>>> +    maxItems: 1
>>> +
>>> +  phy-names:
>>> +    const: pciephy
>>> +
>>> +  reset-gpios:
>>> +    maxItems: 1
>>> +
>>> +  num-lanes:
>>> +    description: Number of lanes to use for this port.
>>> +
>>> +  linux,pci-domain:
>>> +    $ref: /schemas/types.yaml#/definitions/uint32
>>> +    description: PCI domain ID.
>> These 2 also should be common.
>>
>>> +
>>> +  interrupts:
>>> +    description: PCIe core integrated miscellaneous interrupt.
>> How many? No need for description if there's only 1.
>>
>>> +
>>> +  '#interrupt-cells':
>>> +    const: 1
>>> +
>>> +  interrupt-map-mask:
>>> +    description: Standard PCI IRQ mapping properties.
>>> +
>>> +  interrupt-map:
>>> +    description: Standard PCI IRQ mapping properties.
>>> +
>>> +  max-link-speed:
>>> +    description: Specify PCI Gen for link capability.
>>> +
>>> +  bus-range:
>>> +    description: Range of bus numbers associated with this controller.
>> All common.
> You mean to remove all the common schema entries' descriptions?
> In most of the Documentation/devicetree/bindings/pci documents all the
> common entries are described, so I followed the same.

If the common schemas are removed, the following warning is reported 
during dt_binding_check:

Documentation/devicetree/bindings/pci/intel,lgm-pcie.example.dt.yaml: 
pcie@d0e00000: '#address-cells', '#interrupt-cells', '#size-cells', 
'bus-range', 'device_type', 'interrupt-map', 'interrupt-map-mask', 
'interrupt-parent', 'linux,pci-domain', 'ranges', 'reset-gpios' do not 
match any of the regexes: 'pinctrl-[0-9]+'

Regards,
Dilip

>>
>>> +
>>> +  reset-assert-ms:
>>> +    description: |
>>> +      Device reset interval in ms.
>>> +      Some devices need an interval up to 500ms. By default it is 100ms.
>> This is a property of a device, so it belongs in a device node. How
>> would you deal with this without DT?
> This property is for the PCIe RC to keep a delay before notifying the
> reset to the device.
> If this entry is not present, the PCIe driver will set a default value
> of 100ms.
>>
>>> +
>>> +required:
>>> +  - compatible
>>> +  - device_type
>>> +  - reg
>>> +  - reg-names
>>> +  - ranges
>>> +  - resets
>>> +  - clocks
>>> +  - phys
>>> +  - phy-names
>>> +  - reset-gpios
>>> +  - num-lanes
>>> +  - linux,pci-domain
>>> +  - interrupts
>>> +  - interrupt-map
>>> +  - interrupt-map-mask
>>> +
>>> +additionalProperties: false
>>> +
>>> +examples:
>>> +  - |
>>> +    pcie10:pcie@d0e00000 {
>>> +      compatible = "intel,lgm-pcie";
>>> +      device_type = "pci";
>>> +      #address-cells = <3>;
>>> +      #size-cells = <2>;
>>> +      reg = <
>>> +            0xd0e00000 0x1000
>>> +            0xd2000000 0x800000
>>> +            0xd0a41000 0x1000
>>> +            >;
>>> +      reg-names = "dbi", "config", "app";
>>> +      linux,pci-domain = <0>;
>>> +      max-link-speed = <4>;
>>> +      bus-range = <0x00 0x08>;
>>> +      interrupt-parent = <&ioapic1>;
>>> +      interrupts = <67 1>;
>>> +      #interrupt-cells = <1>;
>>> +      interrupt-map-mask = <0 0 0 0x7>;
>>> +      interrupt-map = <0 0 0 1 &ioapic1 27 1>,
>>> +                      <0 0 0 2 &ioapic1 28 1>,
>>> +                      <0 0 0 3 &ioapic1 29 1>,
>>> +                      <0 0 0 4 &ioapic1 30 1>;
>>> +      ranges = <0x02000000 0 0xd4000000 0xd4000000 0 0x04000000>;
>>> +      resets = <&rcu0 0x50 0>;
>>> +      clocks = <&cgu0 LGM_GCLK_PCIE10>;
>>> +      phys = <&cb0phy0>;
>>> +      phy-names = "pciephy";
>>> +      status = "okay";
>>> +      reset-assert-ms = <500>;
>>> +      reset-gpios = <&gpio0 3 GPIO_ACTIVE_LOW>;
>>> +      num-lanes = <2>;
>>> +    };
>>> --
>>> 2.11.0
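For reference, one common way to drop the per-binding descriptions without tripping such a warning is to reference a shared PCI bus schema instead of redefining each property; the sketch below assumes a `/schemas/pci/pci-bus.yaml#` path from the dt-schema package, which may differ from what was available at the time:

```yaml
# Sketch only: pull in the common PCI bus schema rather than spelling
# out device_type, ranges, bus-range, etc. in every binding. The $ref
# path is an assumption about the dt-schema package layout.
allOf:
  - $ref: /schemas/pci/pci-bus.yaml#

properties:
  compatible:
    const: intel,lgm-pcie
```

Properties covered by the referenced schema are then validated there rather than needing entries in the device-specific binding.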
>>>



Thread overview: 37+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-09-04 10:10 [PATCH v3 0/2] PCI: Add Intel PCIe Driver and respective dt-binding yaml file Dilip Kota
2019-09-04 10:10 ` [PATCH v3 1/2] dt-bindings: PCI: intel: Add YAML schemas for the PCIe RC controller Dilip Kota
2019-09-05  2:23   ` Chuan Hua, Lei
2019-09-06 10:39     ` Dilip Kota
2019-09-05 20:31   ` Martin Blumenstingl
2019-09-06  3:22     ` Chuan Hua, Lei
2019-09-06 17:17       ` Martin Blumenstingl
2019-09-06 17:48         ` Andy Shevchenko
2019-09-07  1:48           ` Ivan Gorinov
2019-09-06  9:19     ` Andy Shevchenko
2019-09-09  6:52       ` Dilip Kota
2019-09-17 18:33       ` Rob Herring
2019-09-18  6:48         ` Dilip Kota
2019-09-17 18:40   ` Rob Herring
2019-09-18  6:56     ` Dilip Kota
     [not found]       ` <b7e549bb-b46c-c393-50ac-9ef3b198fd49@linux.intel.com>
2019-10-03  6:35         ` Fwd: " Dilip Kota
2019-09-04 10:10 ` [PATCH v3 2/2] dwc: PCI: intel: Intel PCIe RC controller driver Dilip Kota
2019-09-04 13:05   ` Andy Shevchenko
2019-09-06 10:39     ` Dilip Kota
2019-09-05  2:30   ` Chuan Hua, Lei
2019-09-05 10:45   ` Andrew Murray
2019-09-05 11:26     ` Christoph Hellwig
2019-09-05 11:40     ` Andy Shevchenko
2019-09-12  7:01       ` Dilip Kota
2019-09-06 10:58     ` Dilip Kota
2019-09-06 11:20       ` Andrew Murray
2019-09-09  6:51         ` Dilip Kota
2019-09-09  8:31           ` Andrew Murray
2019-09-10  8:08             ` Dilip Kota
     [not found]             ` <22857835-1f98-b251-c94b-16b4b0a6dba2@linux.intel.com>
2019-09-11 10:30               ` Andrew Murray
2019-09-12  6:58                 ` Dilip Kota
2019-09-12  8:25                   ` Andrew Murray
2019-09-12  9:23                     ` Dilip Kota
2019-09-12 10:49                       ` Gustavo Pimentel
2019-09-13  9:20                         ` Dilip Kota
2019-09-13 10:12                           ` andriy.shevchenko
2019-09-16  3:03                             ` Dilip Kota
