linux-pci.vger.kernel.org archive mirror
* [PATCH v2 0/3] PCI: Add map irq callback in dwc framework and add Intel PCIe driver
@ 2019-08-20  9:39 Dilip Kota
  2019-08-20  9:39 ` [PATCH v2 1/3] PCI: dwc: Add map irq callback Dilip Kota
                   ` (2 more replies)
  0 siblings, 3 replies; 23+ messages in thread
From: Dilip Kota @ 2019-08-20  9:39 UTC (permalink / raw)
  To: jingoohan1, gustavo.pimentel, linux-pci, hch, devicetree
  Cc: linux-kernel, andriy.shevchenko, cheol.yong.kim, chuanhua.lei,
	qi-ming.wu, Dilip Kota

Intel PCIe is a Synopsys DesignWare-based controller; it uses the
DesignWare framework for host initialization and for Intel
application-specific register configuration.

The Intel PCIe driver also uses the IRQ map callback to perform
platform-specific register configuration while mapping IRQs, so this
series submits the Intel PCIe driver together with the DWC IRQ map
callback change.

Changes in v2:
	Submit the user (Intel PCIe driver) changes along with the core
	changes.

Dilip Kota (3):
  PCI: dwc: Add map irq callback
  dt-bindings: PCI: intel: Add YAML schemas for the PCIe RC controller
  dwc: PCI: intel: Intel PCIe RC controller driver

 .../devicetree/bindings/pci/intel-pcie.yaml        | 133 +++
 drivers/pci/controller/dwc/Kconfig                 |  13 +
 drivers/pci/controller/dwc/Makefile                |   1 +
 drivers/pci/controller/dwc/pcie-designware-host.c  |   6 +-
 drivers/pci/controller/dwc/pcie-designware.h       |   1 +
 drivers/pci/controller/dwc/pcie-intel-axi.c        | 900 +++++++++++++++++
 6 files changed, 1053 insertions(+), 1 deletion(-)
 create mode 100644 Documentation/devicetree/bindings/pci/intel-pcie.yaml
 create mode 100644 drivers/pci/controller/dwc/pcie-intel-axi.c

-- 
2.11.0



* [PATCH v2 1/3] PCI: dwc: Add map irq callback
  2019-08-20  9:39 [PATCH v2 0/3] PCI: Add map irq callback in dwc framework and add Intel PCIe driver Dilip Kota
@ 2019-08-20  9:39 ` Dilip Kota
  2019-08-20  9:39 ` [PATCH v2 2/3] dt-bindings: PCI: intel: Add YAML schemas for the PCIe RC controller Dilip Kota
  2019-08-20  9:39 ` [PATCH v2 3/3] dwc: PCI: intel: Intel PCIe RC controller driver Dilip Kota
  2 siblings, 0 replies; 23+ messages in thread
From: Dilip Kota @ 2019-08-20  9:39 UTC (permalink / raw)
  To: jingoohan1, gustavo.pimentel, linux-pci, hch, devicetree
  Cc: linux-kernel, andriy.shevchenko, cheol.yong.kim, chuanhua.lei,
	qi-ming.wu, Dilip Kota

Certain platforms, such as Intel's, need to configure registers to
enable interrupts. A map IRQ callback lets drivers perform
platform-specific configuration while assigning or enabling the
interrupts.
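
For example, a DWC-based platform driver can supply its own callback
before calling dw_pcie_host_init(). A minimal sketch (foo_pcie_map_irq
is a hypothetical name; the Intel driver later in this series does the
same with intel_pcie_bios_map_irq):

	static int foo_pcie_map_irq(const struct pci_dev *dev, u8 slot,
				    u8 pin)
	{
		/* platform-specific interrupt enable goes here */
		return of_irq_parse_and_map_pci(dev, slot, pin);
	}

	pp->map_irq = foo_pcie_map_irq;
	ret = dw_pcie_host_init(pp);

Drivers that leave pp->map_irq unset keep the default
of_irq_parse_and_map_pci() behavior.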

Signed-off-by: Dilip Kota <eswara.kota@linux.intel.com>
---
Changes in v2:
	Address review comments: use if/else instead of the ternary
	operator to improve code readability.

 drivers/pci/controller/dwc/pcie-designware-host.c | 6 +++++-
 drivers/pci/controller/dwc/pcie-designware.h      | 1 +
 2 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c
index f93252d0da5b..ca7b40ff5c94 100644
--- a/drivers/pci/controller/dwc/pcie-designware-host.c
+++ b/drivers/pci/controller/dwc/pcie-designware-host.c
@@ -470,9 +470,13 @@ int dw_pcie_host_init(struct pcie_port *pp)
 	bridge->sysdata = pp;
 	bridge->busnr = pp->root_bus_nr;
 	bridge->ops = &dw_pcie_ops;
-	bridge->map_irq = of_irq_parse_and_map_pci;
 	bridge->swizzle_irq = pci_common_swizzle;
 
+	if (pp->map_irq)
+		bridge->map_irq = pp->map_irq;
+	else
+		bridge->map_irq = of_irq_parse_and_map_pci;
+
 	ret = pci_scan_root_bus_bridge(bridge);
 	if (ret)
 		goto err_free_msi;
diff --git a/drivers/pci/controller/dwc/pcie-designware.h b/drivers/pci/controller/dwc/pcie-designware.h
index ffed084a0b4f..bc3657800dfa 100644
--- a/drivers/pci/controller/dwc/pcie-designware.h
+++ b/drivers/pci/controller/dwc/pcie-designware.h
@@ -173,6 +173,7 @@ struct pcie_port {
 	struct resource		*busn;
 	int			irq;
 	const struct dw_pcie_host_ops *ops;
+	int (*map_irq)(const struct pci_dev *dev, u8 slot, u8 pin);
 	int			msi_irq;
 	struct irq_domain	*irq_domain;
 	struct irq_domain	*msi_domain;
-- 
2.11.0



* [PATCH v2 2/3] dt-bindings: PCI: intel: Add YAML schemas for the PCIe RC controller
  2019-08-20  9:39 [PATCH v2 0/3] PCI: Add map irq callback in dwc framework and add Intel PCIe driver Dilip Kota
  2019-08-20  9:39 ` [PATCH v2 1/3] PCI: dwc: Add map irq callback Dilip Kota
@ 2019-08-20  9:39 ` Dilip Kota
  2019-08-20 13:42   ` Rob Herring
  2019-08-20  9:39 ` [PATCH v2 3/3] dwc: PCI: intel: Intel PCIe RC controller driver Dilip Kota
  2 siblings, 1 reply; 23+ messages in thread
From: Dilip Kota @ 2019-08-20  9:39 UTC (permalink / raw)
  To: jingoohan1, gustavo.pimentel, linux-pci, hch, devicetree
  Cc: linux-kernel, andriy.shevchenko, cheol.yong.kim, chuanhua.lei,
	qi-ming.wu, Dilip Kota

The Intel PCIe RC controller is a Synopsys DesignWare-based
PCIe core. Add YAML schemas for the PCIe controller in RC mode
present in the Intel Universal Gateway SoC.

Signed-off-by: Dilip Kota <eswara.kota@linux.intel.com>
---
 .../devicetree/bindings/pci/intel-pcie.yaml        | 133 +++++++++++++++++++++
 1 file changed, 133 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/pci/intel-pcie.yaml

diff --git a/Documentation/devicetree/bindings/pci/intel-pcie.yaml b/Documentation/devicetree/bindings/pci/intel-pcie.yaml
new file mode 100644
index 000000000000..80caaaba5e2c
--- /dev/null
+++ b/Documentation/devicetree/bindings/pci/intel-pcie.yaml
@@ -0,0 +1,133 @@
+# SPDX-License-Identifier: GPL-2.0
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/pci/intel-pcie.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Intel AXI bus based PCI express root complex
+
+maintainers:
+  - Dilip Kota <eswara.kota@linux.intel.com>
+
+properties:
+  compatible:
+    const: intel,lgm-pcie
+
+  device_type:
+    const: pci
+
+  "#address-cells":
+    const: 3
+
+  "#size-cells":
+    const: 2
+
+  reg:
+    items:
+      - description: Controller control and status registers.
+      - description: PCIe configuration registers.
+      - description: Controller application registers.
+
+  reg-names:
+    items:
+      - const: dbi
+      - const: config
+      - const: app
+
+  ranges:
+    description: Ranges for the PCI memory and I/O regions.
+
+  resets:
+    maxItems: 1
+
+  clocks:
+    description: PCIe registers interface clock.
+
+  phys:
+    maxItems: 1
+
+  phy-names:
+    const: phy
+
+  reset-gpios:
+    maxItems: 1
+
+  num-lanes:
+    description: Number of lanes to use for this port.
+
+  linux,pci-domain:
+    description: PCI domain ID.
+
+  interrupts:
+    description: PCIe core integrated miscellaneous interrupt.
+
+  interrupt-map-mask:
+    description: Standard PCI IRQ mapping properties.
+
+  interrupt-map:
+    description: Standard PCI IRQ mapping properties.
+
+  max-link-speed:
+    description: Specify PCI Gen for link capability.
+
+  bus-range:
+    description: Range of bus numbers associated with this controller.
+
+  intel,rst-interval:
+    description: |
+      Device reset interval in ms. Some devices need an interval of up
+      to 500ms. By default it is 100ms.
+
+required:
+  - compatible
+  - device_type
+  - reg
+  - reg-names
+  - ranges
+  - resets
+  - clocks
+  - phys
+  - phy-names
+  - reset-gpios
+  - num-lanes
+  - linux,pci-domain
+  - interrupts
+  - interrupt-map
+  - interrupt-map-mask
+
+examples:
+  - |
+    pcie10:pcie@d0e00000 {
+      compatible = "intel,lgm-pcie";
+      device_type = "pci";
+      #address-cells = <3>;
+      #size-cells = <2>;
+      reg = <
+            0xd0e00000 0x1000
+            0xd2000000 0x800000
+            0xd0a41000 0x1000
+            >;
+      reg-names = "dbi", "config", "app";
+      linux,pci-domain = <0>;
+      max-link-speed = <4>;
+      bus-range = <0x00 0x08>;
+      interrupt-parent = <&ioapic1>;
+      interrupts = <67 1>;
+      interrupt-map-mask = <0 0 0 0x7>;
+      interrupt-map = <0 0 0 1 &ioapic1 27 1>,
+                      <0 0 0 2 &ioapic1 28 1>,
+                      <0 0 0 3 &ioapic1 29 1>,
+                      <0 0 0 4 &ioapic1 30 1>;
+      ranges = <0x02000000 0 0xd4000000 0xd4000000 0 0x04000000>;
+      resets = <&rcu0 0x50 0>;
+      clocks = <&cgu0 LGM_GCLK_PCIE10>;
+      phys = <&cb0phy0>;
+      phy-names = "phy";
+    };
+
+    &pcie10 {
+      status = "okay";
+      intel,rst-interval = <100>;
+      reset-gpios = <&gpio0 3 GPIO_ACTIVE_LOW>;
+      num-lanes = <2>;
+    };
-- 
2.11.0



* [PATCH v2 3/3] dwc: PCI: intel: Intel PCIe RC controller driver
  2019-08-20  9:39 [PATCH v2 0/3] PCI: Add map irq callback in dwc framework and add Intel PCIe driver Dilip Kota
  2019-08-20  9:39 ` [PATCH v2 1/3] PCI: dwc: Add map irq callback Dilip Kota
  2019-08-20  9:39 ` [PATCH v2 2/3] dt-bindings: PCI: intel: Add YAML schemas for the PCIe RC controller Dilip Kota
@ 2019-08-20  9:39 ` Dilip Kota
  2019-08-24 21:03   ` Martin Blumenstingl
  2 siblings, 1 reply; 23+ messages in thread
From: Dilip Kota @ 2019-08-20  9:39 UTC (permalink / raw)
  To: jingoohan1, gustavo.pimentel, linux-pci, hch, devicetree
  Cc: linux-kernel, andriy.shevchenko, cheol.yong.kim, chuanhua.lei,
	qi-ming.wu, Dilip Kota

Add support for the PCIe RC controller on the Intel Universal
Gateway SoC. The PCIe controller is based on the Synopsys
DesignWare PCIe core.

Signed-off-by: Dilip Kota <eswara.kota@linux.intel.com>
---
 drivers/pci/controller/dwc/Kconfig          |  13 +
 drivers/pci/controller/dwc/Makefile         |   1 +
 drivers/pci/controller/dwc/pcie-intel-axi.c | 900 ++++++++++++++++++++++++++++
 3 files changed, 914 insertions(+)
 create mode 100644 drivers/pci/controller/dwc/pcie-intel-axi.c

diff --git a/drivers/pci/controller/dwc/Kconfig b/drivers/pci/controller/dwc/Kconfig
index 6ea778ae4877..e44b9b6a6390 100644
--- a/drivers/pci/controller/dwc/Kconfig
+++ b/drivers/pci/controller/dwc/Kconfig
@@ -82,6 +82,19 @@ config PCIE_DW_PLAT_EP
 	  order to enable device-specific features PCI_DW_PLAT_EP must be
 	  selected.
 
+config PCIE_INTEL_AXI
+	bool "Intel AHB/AXI PCIe host controller support"
+	depends on PCI_MSI
+	depends on PCI
+	depends on OF
+	select PCIE_DW_HOST
+	help
+	  Say 'Y' here to enable support for the Intel AHB/AXI PCIe
+	  host controller driver.
+	  The Intel PCIe controller is based on the Synopsys DesignWare
+	  PCIe core and therefore uses the DesignWare core functions to
+	  implement the driver.
+
 config PCI_EXYNOS
 	bool "Samsung Exynos PCIe controller"
 	depends on SOC_EXYNOS5440 || COMPILE_TEST
diff --git a/drivers/pci/controller/dwc/Makefile b/drivers/pci/controller/dwc/Makefile
index b085dfd4fab7..46e656ebdf90 100644
--- a/drivers/pci/controller/dwc/Makefile
+++ b/drivers/pci/controller/dwc/Makefile
@@ -3,6 +3,7 @@ obj-$(CONFIG_PCIE_DW) += pcie-designware.o
 obj-$(CONFIG_PCIE_DW_HOST) += pcie-designware-host.o
 obj-$(CONFIG_PCIE_DW_EP) += pcie-designware-ep.o
 obj-$(CONFIG_PCIE_DW_PLAT) += pcie-designware-plat.o
+obj-$(CONFIG_PCIE_INTEL_AXI) += pcie-intel-axi.o
 obj-$(CONFIG_PCI_DRA7XX) += pci-dra7xx.o
 obj-$(CONFIG_PCI_EXYNOS) += pci-exynos.o
 obj-$(CONFIG_PCI_IMX6) += pci-imx6.o
diff --git a/drivers/pci/controller/dwc/pcie-intel-axi.c b/drivers/pci/controller/dwc/pcie-intel-axi.c
new file mode 100644
index 000000000000..2085b580add3
--- /dev/null
+++ b/drivers/pci/controller/dwc/pcie-intel-axi.c
@@ -0,0 +1,900 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * PCIe host controller driver for Intel AXI PCIe Bridge
+ *
+ * Copyright (c) 2019 Intel Corporation.
+ *
+ */
+
+#include <linux/bitfield.h>
+#include <linux/clk.h>
+#include <linux/gpio/consumer.h>
+#include <linux/interrupt.h>
+#include <linux/iopoll.h>
+#include <linux/of_irq.h>
+#include <linux/of_pci.h>
+#include <linux/of_platform.h>
+#include <linux/phy/phy.h>
+#include <linux/platform_device.h>
+#include <linux/reset.h>
+
+#include "../../pci.h"
+#include "pcie-designware.h"
+
+#define PCIE_CCRID				0x8
+
+#define PCIE_LCAP				0x7C
+#define PCIE_LCAP_MAX_LINK_SPEED		GENMASK(3, 0)
+#define PCIE_LCAP_MAX_LENGTH_WIDTH		GENMASK(9, 4)
+
+/* Link Control and Status Register */
+#define PCIE_LCTLSTS				0x80
+#define PCIE_LCTLSTS_ASPM_ENABLE		GENMASK(1, 0)
+#define PCIE_LCTLSTS_RCB128			BIT(3)
+#define PCIE_LCTLSTS_LINK_DISABLE		BIT(4)
+#define PCIE_LCTLSTS_COM_CLK_CFG		BIT(6)
+#define PCIE_LCTLSTS_HW_AW_DIS			BIT(9)
+#define PCIE_LCTLSTS_LINK_SPEED			GENMASK(19, 16)
+#define PCIE_LCTLSTS_NEGOTIATED_LINK_WIDTH	GENMASK(25, 20)
+#define PCIE_LCTLSTS_SLOT_CLK_CFG		BIT(28)
+
+#define PCIE_LCTLSTS2				0xA0
+#define PCIE_LCTLSTS2_TGT_LINK_SPEED		GENMASK(3, 0)
+#define PCIE_LCTLSTS2_TGT_LINK_SPEED_25GT	0x1
+#define PCIE_LCTLSTS2_TGT_LINK_SPEED_5GT	0x2
+#define PCIE_LCTLSTS2_TGT_LINK_SPEED_8GT	0x3
+#define PCIE_LCTLSTS2_TGT_LINK_SPEED_16GT	0x4
+#define PCIE_LCTLSTS2_HW_AUTO_DIS		BIT(5)
+
+/* Ack Frequency Register */
+#define PCIE_AFR				0x70C
+#define PCIE_AFR_FTS_NUM			GENMASK(15, 8)
+#define PCIE_AFR_COM_FTS_NUM			GENMASK(23, 16)
+#define PCIE_AFR_GEN12_FTS_NUM_DFT		(SZ_128 - 1)
+#define PCIE_AFR_GEN3_FTS_NUM_DFT		180
+#define PCIE_AFR_GEN4_FTS_NUM_DFT		196
+
+#define PCIE_PLCR_DLL_LINK_EN			BIT(5)
+#define PCIE_PORT_LOGIC_FTS			GENMASK(7, 0)
+#define PCIE_PORT_LOGIC_DFT_FTS_NUM		(SZ_128 - 1)
+
+#define PCIE_MISC_CTRL				0x8BC
+#define PCIE_MISC_CTRL_DBI_RO_WR_EN		BIT(0)
+
+#define PCIE_MULTI_LANE_CTRL			0x8C0
+#define PCIE_UPCONFIG_SUPPORT			BIT(7)
+#define PCIE_DIRECT_LINK_WIDTH_CHANGE		BIT(6)
+#define PCIE_TARGET_LINK_WIDTH			GENMASK(5, 0)
+
+#define PCIE_IOP_CTRL				0x8C4
+#define PCIE_IOP_RX_STANDBY_CTRL		GENMASK(6, 0)
+
+/* APP RC Core Control Register */
+#define PCIE_RC_CCR				0x10
+#define PCIE_RC_CCR_LTSSM_ENABLE		BIT(0)
+#define PCIE_DEVICE_TYPE			GENMASK(7, 4)
+#define PCIE_RC_CCR_RC_MODE			BIT(2)
+
+/* PCIe Message Control */
+#define PCIE_MSG_CR				0x30
+#define PCIE_XMT_PM_TURNOFF			BIT(0)
+
+/* PCIe Power Management Control */
+#define PCIE_PMC				0x44
+#define PCIE_PM_IN_L2				BIT(20)
+
+/* Interrupt Enable Register */
+#define PCIE_IRNEN				0xF4
+#define PCIE_IRNCR				0xF8
+#define PCIE_IRN_AER_REPORT			BIT(0)
+#define PCIE_IRN_PME				BIT(2)
+#define PCIE_IRN_HOTPLUG			BIT(3)
+#define PCIE_IRN_RX_VDM_MSG			BIT(4)
+#define PCIE_IRN_PM_TO_ACK			BIT(9)
+#define PCIE_IRN_PM_TURNOFF_ACK			BIT(10)
+#define PCIE_IRN_LINK_AUTO_BW_STATUS		BIT(11)
+#define PCIE_IRN_BW_MGT				BIT(12)
+#define PCIE_IRN_WAKEUP				BIT(17)
+#define PCIE_IRN_MSG_LTR			BIT(18)
+#define PCIE_IRN_SYS_INT			BIT(28)
+#define PCIE_IRN_SYS_ERR_RC			BIT(29)
+
+#define PCIE_IRN_IR_INT	(PCIE_IRN_AER_REPORT | PCIE_IRN_PME | \
+			PCIE_IRN_RX_VDM_MSG | PCIE_IRN_SYS_ERR_RC | \
+			PCIE_IRN_PM_TO_ACK | PCIE_IRN_LINK_AUTO_BW_STATUS | \
+			PCIE_IRN_BW_MGT | PCIE_IRN_MSG_LTR)
+
+#define PCIE_INTX_OFFSET	12
+#define BUS_IATU_OFFS		SZ_256M
+#define RST_INTRVL_DFT_MS	100
+
+enum {
+	PCIE_LINK_SPEED_AUTO = 0,
+	PCIE_LINK_SPEED_GEN1,
+	PCIE_LINK_SPEED_GEN2,
+	PCIE_LINK_SPEED_GEN3,
+	PCIE_LINK_SPEED_GEN4,
+};
+
+struct intel_pcie_soc {
+	unsigned int pcie_ver;
+	unsigned int pcie_atu_offset;
+	u32 num_viewport;
+};
+
+struct intel_pcie_port {
+	struct dw_pcie		*pci;
+	unsigned int		id; /* Physical RC Index */
+	void __iomem		*app_base;
+	struct gpio_desc	*reset_gpio;
+	u32			rst_interval;
+	u32			max_speed;
+	u32			link_gen;
+	u32			max_width;
+	u32			lanes;
+	struct clk		*core_clk;
+	struct reset_control	*core_rst;
+	struct phy		*phy;
+};
+
+static void pcie_update_bits(void __iomem *base, u32 mask, u32 val, u32 ofs)
+{
+	u32 orig, tmp;
+
+	orig = readl(base + ofs);
+
+	tmp = (orig & ~mask) | (val & mask);
+
+	if (tmp != orig)
+		writel(tmp, base + ofs);
+}
+
+static inline u32 pcie_app_rd(struct intel_pcie_port *lpp, u32 ofs)
+{
+	return readl(lpp->app_base + ofs);
+}
+
+static inline void pcie_app_wr(struct intel_pcie_port *lpp, u32 val, u32 ofs)
+{
+	writel(val, lpp->app_base + ofs);
+}
+
+static void pcie_app_wr_mask(struct intel_pcie_port *lpp,
+			     u32 mask, u32 val, u32 ofs)
+{
+	pcie_update_bits(lpp->app_base, mask, val, ofs);
+}
+
+static inline u32 pcie_rc_cfg_rd(struct intel_pcie_port *lpp, u32 ofs)
+{
+	return dw_pcie_readl_dbi(lpp->pci, ofs);
+}
+
+static inline void pcie_rc_cfg_wr(struct intel_pcie_port *lpp, u32 val, u32 ofs)
+{
+	dw_pcie_writel_dbi(lpp->pci, ofs, val);
+}
+
+static void pcie_rc_cfg_wr_mask(struct intel_pcie_port *lpp,
+				u32 mask, u32 val, u32 ofs)
+{
+	pcie_update_bits(lpp->pci->dbi_base, mask, val, ofs);
+}
+
+static void intel_pcie_mem_iatu(struct intel_pcie_port *lpp)
+{
+	struct pcie_port *pp = &lpp->pci->pp;
+	phys_addr_t cpu_addr = pp->mem_base;
+
+	dw_pcie_prog_outbound_atu(lpp->pci, PCIE_ATU_REGION_INDEX0,
+				  PCIE_ATU_TYPE_MEM, cpu_addr,
+				  pp->mem_base, pp->mem_size);
+}
+
+static int intel_pcie_bios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
+{
+	struct pcie_port *pp = dev->bus->sysdata;
+	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
+	struct intel_pcie_port *lpp = dev_get_drvdata(pci->dev);
+	struct device *pdev = lpp->pci->dev;
+	u32 irq_bit;
+	int irq;
+
+	if (pin == PCI_INTERRUPT_UNKNOWN || pin > PCI_NUM_INTX) {
+		dev_warn(pdev, "WARNING: dev %s: invalid interrupt pin %d\n",
+			 pci_name(dev), pin);
+		return -1;
+	}
+	irq = of_irq_parse_and_map_pci(dev, slot, pin);
+	if (!irq) {
+		dev_err(pdev, "trying to map irq for unknown slot:%d pin:%d\n",
+			slot, pin);
+		return -1;
+	}
+	/* Pin to irq offset bit position */
+	irq_bit = BIT(pin + PCIE_INTX_OFFSET);
+
+	/* Clear possible pending interrupts first */
+	pcie_app_wr(lpp, irq_bit, PCIE_IRNCR);
+
+	pcie_app_wr_mask(lpp, irq_bit, irq_bit, PCIE_IRNEN);
+	return irq;
+}
+
+static void intel_pcie_bridge_class_code_setup(struct intel_pcie_port *lpp)
+{
+	pcie_rc_cfg_wr_mask(lpp, PCIE_MISC_CTRL_DBI_RO_WR_EN,
+			    PCIE_MISC_CTRL_DBI_RO_WR_EN, PCIE_MISC_CTRL);
+	pcie_rc_cfg_wr_mask(lpp, 0xffffff00, PCI_CLASS_BRIDGE_PCI << 16,
+			    PCIE_CCRID);
+	pcie_rc_cfg_wr_mask(lpp, PCIE_MISC_CTRL_DBI_RO_WR_EN, 0,
+			    PCIE_MISC_CTRL);
+}
+
+static void intel_pcie_ltssm_enable(struct intel_pcie_port *lpp)
+{
+	pcie_app_wr_mask(lpp, PCIE_RC_CCR_LTSSM_ENABLE,
+			 PCIE_RC_CCR_LTSSM_ENABLE, PCIE_RC_CCR);
+}
+
+static void intel_pcie_ltssm_disable(struct intel_pcie_port *lpp)
+{
+	pcie_app_wr_mask(lpp, PCIE_RC_CCR_LTSSM_ENABLE, 0, PCIE_RC_CCR);
+}
+
+static const char *pcie_link_gen_to_str(int gen)
+{
+	switch (gen) {
+	case PCIE_LINK_SPEED_GEN1:
+		return "2.5";
+	case PCIE_LINK_SPEED_GEN2:
+		return "5.0";
+	case PCIE_LINK_SPEED_GEN3:
+		return "8.0";
+	case PCIE_LINK_SPEED_GEN4:
+		return "16.0";
+	default:
+		return "???";
+	}
+}
+
+static void intel_pcie_link_setup(struct intel_pcie_port *lpp)
+{
+	u32 val;
+
+	val = pcie_rc_cfg_rd(lpp, PCIE_LCAP);
+	lpp->max_speed = FIELD_GET(PCIE_LCAP_MAX_LINK_SPEED, val);
+	lpp->max_width = FIELD_GET(PCIE_LCAP_MAX_LENGTH_WIDTH, val);
+
+	val = pcie_rc_cfg_rd(lpp, PCIE_LCTLSTS);
+
+	val &= ~(PCIE_LCTLSTS_LINK_DISABLE | PCIE_LCTLSTS_ASPM_ENABLE);
+	val |= (PCIE_LCTLSTS_SLOT_CLK_CFG | PCIE_LCTLSTS_COM_CLK_CFG |
+		PCIE_LCTLSTS_RCB128);
+	pcie_rc_cfg_wr(lpp, val, PCIE_LCTLSTS);
+}
+
+static void intel_pcie_max_speed_setup(struct intel_pcie_port *lpp)
+{
+	u32 reg, val;
+
+	reg = pcie_rc_cfg_rd(lpp, PCIE_LCTLSTS2);
+	switch (lpp->link_gen) {
+	case PCIE_LINK_SPEED_GEN1:
+		reg &= ~PCIE_LCTLSTS2_TGT_LINK_SPEED;
+		reg |= PCIE_LCTLSTS2_HW_AUTO_DIS |
+			PCIE_LCTLSTS2_TGT_LINK_SPEED_25GT;
+		break;
+	case PCIE_LINK_SPEED_GEN2:
+		reg &= ~PCIE_LCTLSTS2_TGT_LINK_SPEED;
+		reg |= PCIE_LCTLSTS2_HW_AUTO_DIS |
+			PCIE_LCTLSTS2_TGT_LINK_SPEED_5GT;
+		break;
+	case PCIE_LINK_SPEED_GEN3:
+		reg &= ~PCIE_LCTLSTS2_TGT_LINK_SPEED;
+		reg |= PCIE_LCTLSTS2_HW_AUTO_DIS |
+			PCIE_LCTLSTS2_TGT_LINK_SPEED_8GT;
+		break;
+	case PCIE_LINK_SPEED_GEN4:
+		reg &= ~PCIE_LCTLSTS2_TGT_LINK_SPEED;
+		reg |= PCIE_LCTLSTS2_HW_AUTO_DIS |
+			PCIE_LCTLSTS2_TGT_LINK_SPEED_16GT;
+		break;
+	default:
+		/* Use hardware capability */
+		val = pcie_rc_cfg_rd(lpp, PCIE_LCAP);
+		val = FIELD_GET(PCIE_LCAP_MAX_LINK_SPEED, val);
+		reg &= ~PCIE_LCTLSTS2_HW_AUTO_DIS;
+		reg |= val;
+		break;
+	}
+	pcie_rc_cfg_wr(lpp, reg, PCIE_LCTLSTS2);
+}
+
+static void intel_pcie_speed_change_enable(struct intel_pcie_port *lpp)
+{
+	u32 mask, val;
+
+	mask = PORT_LOGIC_SPEED_CHANGE | PCIE_PORT_LOGIC_FTS;
+	val = PORT_LOGIC_SPEED_CHANGE | PCIE_PORT_LOGIC_DFT_FTS_NUM;
+
+	pcie_rc_cfg_wr_mask(lpp, mask, val, PCIE_LINK_WIDTH_SPEED_CONTROL);
+}
+
+static void intel_pcie_speed_change_disable(struct intel_pcie_port *lpp)
+{
+	pcie_rc_cfg_wr_mask(lpp, PORT_LOGIC_SPEED_CHANGE, 0,
+			    PCIE_LINK_WIDTH_SPEED_CONTROL);
+}
+
+static void intel_pcie_max_link_width_setup(struct intel_pcie_port *lpp)
+{
+	u32 mask, val;
+
+	/* HW auto bandwidth negotiation must be enabled */
+	pcie_rc_cfg_wr_mask(lpp, PCIE_LCTLSTS_HW_AW_DIS, 0, PCIE_LCTLSTS);
+
+	mask = PCIE_DIRECT_LINK_WIDTH_CHANGE | PCIE_TARGET_LINK_WIDTH;
+	val = PCIE_DIRECT_LINK_WIDTH_CHANGE | lpp->lanes;
+	pcie_rc_cfg_wr_mask(lpp, mask, val, PCIE_MULTI_LANE_CTRL);
+}
+
+static void intel_pcie_port_logic_setup(struct intel_pcie_port *lpp)
+{
+	u32 val, mask, fts;
+
+	switch (lpp->max_speed) {
+	case PCIE_LINK_SPEED_GEN1:
+	case PCIE_LINK_SPEED_GEN2:
+		fts = PCIE_AFR_GEN12_FTS_NUM_DFT;
+		break;
+	case PCIE_LINK_SPEED_GEN3:
+		fts = PCIE_AFR_GEN3_FTS_NUM_DFT;
+		break;
+	case PCIE_LINK_SPEED_GEN4:
+		fts = PCIE_AFR_GEN4_FTS_NUM_DFT;
+		break;
+	default:
+		fts = PCIE_AFR_GEN12_FTS_NUM_DFT;
+		break;
+	}
+	mask = PCIE_AFR_FTS_NUM | PCIE_AFR_COM_FTS_NUM;
+	val = FIELD_PREP(PCIE_AFR_FTS_NUM, fts) |
+	       FIELD_PREP(PCIE_AFR_COM_FTS_NUM, fts);
+	pcie_rc_cfg_wr_mask(lpp, mask, val, PCIE_AFR);
+
+	/* Port Link Control Register */
+	pcie_rc_cfg_wr_mask(lpp, PCIE_PLCR_DLL_LINK_EN,
+			    PCIE_PLCR_DLL_LINK_EN, PCIE_PORT_LINK_CONTROL);
+}
+
+static void intel_pcie_upconfig_setup(struct intel_pcie_port *lpp)
+{
+	pcie_rc_cfg_wr_mask(lpp, PCIE_UPCONFIG_SUPPORT,
+			    PCIE_UPCONFIG_SUPPORT, PCIE_MULTI_LANE_CTRL);
+
+	pcie_rc_cfg_wr_mask(lpp, PCIE_IOP_RX_STANDBY_CTRL, 0, PCIE_IOP_CTRL);
+}
+
+static void intel_pcie_rc_setup(struct intel_pcie_port *lpp)
+{
+	u32 val;
+
+	val = PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER;
+	pcie_rc_cfg_wr_mask(lpp, val, val, PCI_COMMAND);
+
+	intel_pcie_ltssm_disable(lpp);
+	intel_pcie_link_setup(lpp);
+	dw_pcie_setup(lpp->pci);
+	intel_pcie_upconfig_setup(lpp);
+	intel_pcie_bridge_class_code_setup(lpp);
+	intel_pcie_max_speed_setup(lpp);
+	intel_pcie_speed_change_enable(lpp);
+	intel_pcie_port_logic_setup(lpp);
+	intel_pcie_mem_iatu(lpp);
+}
+
+static int intel_pcie_ep_rst_init(struct intel_pcie_port *lpp)
+{
+	struct device *dev = lpp->pci->dev;
+	int ret = 0;
+
+	lpp->reset_gpio = devm_gpiod_get(dev, "reset", GPIOD_OUT_LOW);
+	if (IS_ERR(lpp->reset_gpio)) {
+		ret = PTR_ERR(lpp->reset_gpio);
+		if (ret != -EPROBE_DEFER)
+			dev_err(dev, "failed to request PCIe GPIO: %d\n", ret);
+		return ret;
+	}
+	/* Make initial reset last for 100ms */
+	msleep(100);
+
+	return ret;
+}
+
+static void intel_pcie_core_rst_assert(struct intel_pcie_port *lpp)
+{
+	reset_control_assert(lpp->core_rst);
+}
+
+static void intel_pcie_core_rst_deassert(struct intel_pcie_port *lpp)
+{
+	/*
+	 * A one-microsecond delay makes sure the reset pulse is
+	 * wide enough so that the core reset is clean.
+	 */
+	udelay(1);
+	reset_control_deassert(lpp->core_rst);
+
+	/*
+	 * On some SoCs the core reset also resets the PHY, so more
+	 * delay is needed to make sure the reset process is done.
+	 */
+	usleep_range(1000, 2000);
+}
+
+static void intel_pcie_device_rst_assert(struct intel_pcie_port *lpp)
+{
+	gpiod_set_value_cansleep(lpp->reset_gpio, 1);
+}
+
+static void intel_pcie_device_rst_deassert(struct intel_pcie_port *lpp)
+{
+	msleep(lpp->rst_interval);
+	gpiod_set_value_cansleep(lpp->reset_gpio, 0);
+}
+
+static int intel_pcie_app_logic_setup(struct intel_pcie_port *lpp)
+{
+	intel_pcie_device_rst_deassert(lpp);
+	intel_pcie_ltssm_enable(lpp);
+
+	return dw_pcie_wait_for_link(lpp->pci);
+}
+
+static irqreturn_t intel_pcie_core_isr(int irq, void *arg)
+{
+	struct intel_pcie_port *lpp = arg;
+	u32 val, reg;
+
+	reg = pcie_app_rd(lpp, PCIE_IRNCR);
+	val = reg & PCIE_IRN_IR_INT;
+
+	pcie_app_wr(lpp, val, PCIE_IRNCR);
+
+	trace_printk("PCIe misc interrupt status 0x%x\n", reg);
+	return IRQ_HANDLED;
+}
+
+static int intel_pcie_setup_irq(struct intel_pcie_port *lpp)
+{
+	struct device *dev = lpp->pci->dev;
+	struct platform_device *pdev;
+	char *irq_name;
+	int irq, ret;
+
+	pdev = to_platform_device(dev);
+	irq = platform_get_irq(pdev, 0);
+	if (irq < 0) {
+		dev_err(dev, "missing sys integrated irq resource\n");
+		return irq;
+	}
+
+	irq_name = devm_kasprintf(dev, GFP_KERNEL, "pcie_misc%d", lpp->id);
+	if (!irq_name) {
+		dev_err(dev, "failed to alloc irq name\n");
+		return -ENOMEM;
+	}
+
+	ret = devm_request_irq(dev, irq, intel_pcie_core_isr,
+			       IRQF_SHARED, irq_name, lpp);
+	if (ret) {
+		dev_err(dev, "request irq %d failed\n", irq);
+		return ret;
+	}
+	/* Enable integrated interrupts */
+	pcie_app_wr_mask(lpp, PCIE_IRN_IR_INT, PCIE_IRN_IR_INT, PCIE_IRNEN);
+
+	return ret;
+}
+
+static void intel_pcie_core_irq_disable(struct intel_pcie_port *lpp)
+{
+	pcie_app_wr(lpp, 0, PCIE_IRNEN);
+	pcie_app_wr(lpp, PCIE_IRN_IR_INT, PCIE_IRNCR);
+}
+
+static void intel_pcie_disable_clks(struct intel_pcie_port *lpp)
+{
+	clk_disable_unprepare(lpp->core_clk);
+}
+
+static int intel_pcie_enable_clks(struct intel_pcie_port *lpp)
+{
+	int ret = clk_prepare_enable(lpp->core_clk);
+
+	if (ret)
+		dev_err(lpp->pci->dev, "Core clock enable failed: %d\n", ret);
+
+	return ret;
+}
+
+static int intel_pcie_get_resources(struct platform_device *pdev)
+{
+	struct intel_pcie_port *lpp;
+	struct device *dev;
+	int ret;
+
+	lpp = platform_get_drvdata(pdev);
+	dev = lpp->pci->dev;
+
+	lpp->core_clk = devm_clk_get(dev, NULL);
+	if (IS_ERR(lpp->core_clk)) {
+		ret = PTR_ERR(lpp->core_clk);
+		if (ret != -EPROBE_DEFER)
+			dev_err(dev, "failed to get clks: %d\n", ret);
+		return ret;
+	}
+
+	lpp->core_rst = devm_reset_control_get(dev, NULL);
+	if (IS_ERR(lpp->core_rst)) {
+		ret = PTR_ERR(lpp->core_rst);
+		if (ret != -EPROBE_DEFER)
+			dev_err(dev, "failed to get resets: %d\n", ret);
+		return ret;
+	}
+
+	ret = device_property_match_string(dev, "device_type", "pci");
+	if (ret) {
+		dev_err(dev, "failed to find pci device type: %d\n", ret);
+		return ret;
+	}
+
+	if (device_property_read_u32(dev, "intel,rst-interval",
+				     &lpp->rst_interval))
+		lpp->rst_interval = RST_INTRVL_DFT_MS;
+
+	if (device_property_read_u32(dev, "max-link-speed", &lpp->link_gen))
+		lpp->link_gen = 0; /* Fallback to auto */
+
+	lpp->app_base = devm_platform_ioremap_resource(pdev, 2);
+	if (IS_ERR(lpp->app_base))
+		return PTR_ERR(lpp->app_base);
+
+	ret = intel_pcie_ep_rst_init(lpp);
+	if (ret)
+		return ret;
+
+	lpp->phy = devm_phy_get(dev, "phy");
+	if (IS_ERR(lpp->phy)) {
+		ret = PTR_ERR(lpp->phy);
+		if (ret != -EPROBE_DEFER)
+			dev_err(dev, "couldn't get pcie-phy: %d\n", ret);
+		return ret;
+	}
+	return 0;
+}
+
+static void intel_pcie_deinit_phy(struct intel_pcie_port *lpp)
+{
+	phy_exit(lpp->phy);
+}
+
+static int intel_pcie_wait_l2(struct intel_pcie_port *lpp)
+{
+	u32 value;
+	int ret;
+
+	if (lpp->max_speed < PCIE_LINK_SPEED_GEN3)
+		return 0;
+
+	/* Send PME_TURN_OFF message */
+	pcie_app_wr_mask(lpp, PCIE_XMT_PM_TURNOFF,
+			 PCIE_XMT_PM_TURNOFF, PCIE_MSG_CR);
+
+	/* Read PMC status and wait for falling into L2 link state */
+	ret = readl_poll_timeout(lpp->app_base + PCIE_PMC, value,
+				 (value & PCIE_PM_IN_L2), 20,
+				 jiffies_to_usecs(5 * HZ));
+	if (ret)
+		dev_err(lpp->pci->dev, "PCIe link enter L2 timeout!\n");
+
+	return ret;
+}
+
+static void intel_pcie_turn_off(struct intel_pcie_port *lpp)
+{
+	if (dw_pcie_link_up(lpp->pci))
+		intel_pcie_wait_l2(lpp);
+
+	/* Put EP in reset state */
+	intel_pcie_device_rst_assert(lpp);
+	pcie_rc_cfg_wr_mask(lpp, PCI_COMMAND_MEMORY, 0, PCI_COMMAND);
+}
+
+static int intel_pcie_host_setup(struct intel_pcie_port *lpp)
+{
+	int ret;
+
+	intel_pcie_core_rst_assert(lpp);
+	intel_pcie_device_rst_assert(lpp);
+
+	ret = phy_init(lpp->phy);
+	if (ret)
+		return ret;
+
+	intel_pcie_core_rst_deassert(lpp);
+	ret = intel_pcie_enable_clks(lpp);
+	if (ret)
+		goto clk_err;
+
+	intel_pcie_rc_setup(lpp);
+	ret = intel_pcie_app_logic_setup(lpp);
+	if (ret)
+		goto app_init_err;
+
+	ret = intel_pcie_setup_irq(lpp);
+	if (!ret)
+		return ret;
+
+	intel_pcie_turn_off(lpp);
+app_init_err:
+	intel_pcie_disable_clks(lpp);
+clk_err:
+	intel_pcie_core_rst_assert(lpp);
+	intel_pcie_deinit_phy(lpp);
+	return ret;
+}
+
+static ssize_t
+pcie_link_status_show(struct device *dev, struct device_attribute *attr,
+		      char *buf)
+{
+	u32 reg, width, gen;
+	struct intel_pcie_port *lpp;
+
+	lpp = dev_get_drvdata(dev);
+
+	reg = pcie_rc_cfg_rd(lpp, PCIE_LCTLSTS);
+	width = FIELD_GET(PCIE_LCTLSTS_NEGOTIATED_LINK_WIDTH, reg);
+	gen = FIELD_GET(PCIE_LCTLSTS_LINK_SPEED, reg);
+	if (gen > lpp->max_speed)
+		return -EINVAL;
+
+	return sprintf(buf, "Port %2u Width x%u Speed %s GT/s\n", lpp->id,
+		       width, pcie_link_gen_to_str(gen));
+}
+static DEVICE_ATTR_RO(pcie_link_status);
+
+static ssize_t pcie_speed_store(struct device *dev,
+				struct device_attribute *attr,
+				const char *buf, size_t len)
+{
+	struct intel_pcie_port *lpp;
+	unsigned long val;
+	int ret;
+
+	lpp = dev_get_drvdata(dev);
+
+	ret = kstrtoul(buf, 10, &val);
+	if (ret)
+		return ret;
+
+	if (val > lpp->max_speed)
+		return -EINVAL;
+
+	lpp->link_gen = val;
+	intel_pcie_max_speed_setup(lpp);
+	intel_pcie_speed_change_disable(lpp);
+	intel_pcie_speed_change_enable(lpp);
+
+	return len;
+}
+static DEVICE_ATTR_WO(pcie_speed);
+
+/*
+ * Changing the link width on the fly is not always successful;
+ * it also depends on the link partner.
+ */
+static ssize_t pcie_width_store(struct device *dev,
+				struct device_attribute *attr,
+				const char *buf, size_t len)
+{
+	struct intel_pcie_port *lpp;
+	unsigned long val;
+
+	lpp = dev_get_drvdata(dev);
+
+	if (kstrtoul(buf, 10, &val))
+		return -EINVAL;
+
+	if (val > lpp->max_width)
+		return -EINVAL;
+
+	lpp->lanes = val;
+	intel_pcie_max_link_width_setup(lpp);
+
+	return len;
+}
+static DEVICE_ATTR_WO(pcie_width);
+
+static struct attribute *pcie_cfg_attrs[] = {
+	&dev_attr_pcie_link_status.attr,
+	&dev_attr_pcie_speed.attr,
+	&dev_attr_pcie_width.attr,
+	NULL,
+};
+ATTRIBUTE_GROUPS(pcie_cfg);
+
+static int intel_pcie_sysfs_init(struct intel_pcie_port *lpp)
+{
+	return devm_device_add_groups(lpp->pci->dev, pcie_cfg_groups);
+}
+
+static void __intel_pcie_remove(struct intel_pcie_port *lpp)
+{
+	pcie_rc_cfg_wr_mask(lpp, PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER,
+			    0, PCI_COMMAND);
+	intel_pcie_core_irq_disable(lpp);
+	intel_pcie_turn_off(lpp);
+	intel_pcie_disable_clks(lpp);
+	intel_pcie_core_rst_assert(lpp);
+	intel_pcie_deinit_phy(lpp);
+}
+
+static int intel_pcie_remove(struct platform_device *pdev)
+{
+	struct intel_pcie_port *lpp = platform_get_drvdata(pdev);
+	struct pcie_port *pp = &lpp->pci->pp;
+
+	pci_stop_root_bus(pp->root_bus);
+	pci_remove_root_bus(pp->root_bus);
+	__intel_pcie_remove(lpp);
+
+	return 0;
+}
+
+static int __maybe_unused intel_pcie_suspend_noirq(struct device *dev)
+{
+	struct intel_pcie_port *lpp = dev_get_drvdata(dev);
+	int ret;
+
+	intel_pcie_core_irq_disable(lpp);
+	ret = intel_pcie_wait_l2(lpp);
+	if (ret)
+		return ret;
+
+	intel_pcie_deinit_phy(lpp);
+	intel_pcie_disable_clks(lpp);
+	return ret;
+}
+
+static int __maybe_unused intel_pcie_resume_noirq(struct device *dev)
+{
+	struct intel_pcie_port *lpp = dev_get_drvdata(dev);
+
+	return intel_pcie_host_setup(lpp);
+}
+
+static int intel_pcie_rc_init(struct pcie_port *pp)
+{
+	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
+	struct intel_pcie_port *lpp = dev_get_drvdata(pci->dev);
+	int ret;
+
+	/* RC/host initialization */
+	ret = intel_pcie_host_setup(lpp);
+	if (ret)
+		return ret;
+	ret = intel_pcie_sysfs_init(lpp);
+	if (ret)
+		__intel_pcie_remove(lpp);
+	return ret;
+}
+
+static int intel_pcie_msi_init(struct pcie_port *pp)
+{
+	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
+
+	dev_dbg(pci->dev, "MSI is handled in x86 arch\n");
+	return 0;
+}
+
+static u64 intel_pcie_cpu_addr(struct dw_pcie *pcie, u64 cpu_addr)
+{
+	return cpu_addr + BUS_IATU_OFFS;
+}
+
+static const struct dw_pcie_ops intel_pcie_ops = {
+	.cpu_addr_fixup = intel_pcie_cpu_addr,
+};
+
+static const struct dw_pcie_host_ops intel_pcie_dw_ops = {
+	.host_init =		intel_pcie_rc_init,
+	.msi_host_init =	intel_pcie_msi_init,
+};
+
+static const struct intel_pcie_soc pcie_data = {
+	.pcie_ver =		0x520A,
+	.pcie_atu_offset =	0xC0000,
+	.num_viewport =		3,
+};
+
+static int intel_pcie_probe(struct platform_device *pdev)
+{
+	struct device *dev = &pdev->dev;
+	const struct intel_pcie_soc *data;
+	struct intel_pcie_port *lpp;
+	struct pcie_port *pp;
+	struct dw_pcie *pci;
+	int ret;
+
+	lpp = devm_kzalloc(dev, sizeof(*lpp), GFP_KERNEL);
+	if (!lpp)
+		return -ENOMEM;
+
+	pci = devm_kzalloc(dev, sizeof(*pci), GFP_KERNEL);
+	if (!pci)
+		return -ENOMEM;
+
+	platform_set_drvdata(pdev, lpp);
+	lpp->pci = pci;
+	pci->dev = dev;
+	pp = &pci->pp;
+
+	ret = device_property_read_u32(dev, "linux,pci-domain", &lpp->id);
+	if (ret) {
+		dev_err(dev, "failed to get domain id, errno %d\n", ret);
+		return ret;
+	}
+
+	pci->dbi_base = devm_platform_ioremap_resource(pdev, 0);
+	if (IS_ERR(pci->dbi_base))
+		return PTR_ERR(pci->dbi_base);
+
+	ret = intel_pcie_get_resources(pdev);
+	if (ret)
+		return ret;
+
+	data = device_get_match_data(dev);
+	pci->ops = &intel_pcie_ops;
+	pci->version = data->pcie_ver;
+	pci->atu_base = pci->dbi_base + data->pcie_atu_offset;
+	pp->ops = &intel_pcie_dw_ops;
+	pp->map_irq = intel_pcie_bios_map_irq;
+
+	ret = dw_pcie_host_init(pp);
+	if (ret) {
+		dev_err(dev, "cannot initialize host\n");
+		return ret;
+	}
+	/*
+	 * Intel PCIe doesn't configure the IO region, so configure the
+	 * viewport not to access the IO region during register
+	 * read/write operations.
+	 */
+	pci->num_viewport = data->num_viewport;
+	dev_info(dev,
+		 "Intel AXI PCIe Root Complex Port %d Init Done\n", lpp->id);
+	return ret;
+}
+
+static const struct dev_pm_ops intel_pcie_pm_ops = {
+	SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(intel_pcie_suspend_noirq,
+				      intel_pcie_resume_noirq)
+};
+
+static const struct of_device_id of_intel_pcie_match[] = {
+	{ .compatible = "intel,lgm-pcie", .data = &pcie_data },
+	{}
+};
+
+static struct platform_driver intel_pcie_driver = {
+	.probe = intel_pcie_probe,
+	.remove = intel_pcie_remove,
+	.driver = {
+		.name = "intel-lgm-pcie",
+		.of_match_table = of_intel_pcie_match,
+		.pm = &intel_pcie_pm_ops,
+	},
+};
+builtin_platform_driver(intel_pcie_driver);
-- 
2.11.0



* Re: [PATCH v2 2/3] dt-bindings: PCI: intel: Add YAML schemas for the PCIe RC controller
  2019-08-20  9:39 ` [PATCH v2 2/3] dt-bindings: PCI: intel: Add YAML schemas for the PCIe RC controller Dilip Kota
@ 2019-08-20 13:42   ` Rob Herring
  2019-08-21  9:56     ` Dilip Kota
  0 siblings, 1 reply; 23+ messages in thread
From: Rob Herring @ 2019-08-20 13:42 UTC (permalink / raw)
  To: Dilip Kota
  Cc: Jingoo Han, Gustavo Pimentel, linux-pci, Christoph Hellwig,
	devicetree, linux-kernel, Andy Shevchenko, cheol.yong.kim,
	chuanhua.lei, qi-ming.wu

On Tue, Aug 20, 2019 at 4:40 AM Dilip Kota <eswara.kota@linux.intel.com> wrote:
>
> The Intel PCIe RC controller is a Synopsys DesignWare-based
> PCIe core. Add YAML schemas for the PCIe controller in RC mode
> present in the Intel Universal Gateway SoC.

Run 'make dt_binding_check' and fix all the warnings.

>
> Signed-off-by: Dilip Kota <eswara.kota@linux.intel.com>
> ---
>  .../devicetree/bindings/pci/intel-pcie.yaml        | 133 +++++++++++++++++++++
>  1 file changed, 133 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/pci/intel-pcie.yaml
>
> diff --git a/Documentation/devicetree/bindings/pci/intel-pcie.yaml b/Documentation/devicetree/bindings/pci/intel-pcie.yaml
> new file mode 100644
> index 000000000000..80caaaba5e2c
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/pci/intel-pcie.yaml
> @@ -0,0 +1,133 @@
> +# SPDX-License-Identifier: GPL-2.0

(GPL-2.0-only OR BSD-2-Clause) is preferred for new bindings.

> +%YAML 1.2
> +---
> +$id: http://devicetree.org/schemas/pci/intel-pcie.yaml#
> +$schema: http://devicetree.org/meta-schemas/core.yaml#
> +
> +title: Intel AXI bus based PCI express root complex
> +
> +maintainers:
> +  - Dilip Kota <eswara.kota@linux.intel.com>
> +
> +properties:
> +  compatible:
> +    const: intel,lgm-pcie
> +
> +  device_type:
> +    const: pci
> +
> +  "#address-cells":
> +    const: 3
> +
> +  "#size-cells":
> +    const: 2
> +
> +  reg:
> +    items:
> +      - description: Controller control and status registers.
> +      - description: PCIe configuration registers.
> +      - description: Controller application registers.
> +
> +  reg-names:
> +    items:
> +      - const: dbi
> +      - const: config
> +      - const: app
> +
> +  ranges:
> +    description: Ranges for the PCI memory and I/O regions.
> +
> +  resets:
> +    maxItems: 1
> +
> +  clocks:
> +    description: PCIe registers interface clock.
> +
> +  phys:
> +    maxItems: 1
> +
> +  phy-names:
> +    const: phy
> +
> +  reset-gpios:
> +    maxItems: 1
> +
> +  num-lanes:
> +    description: Number of lanes to use for this port.
> +
> +  linux,pci-domain:
> +    description: PCI domain ID.
> +
> +  interrupts:
> +    description: PCIe core integrated miscellaneous interrupt.
> +
> +  interrupt-map-mask:
> +    description: Standard PCI IRQ mapping properties.
> +
> +  interrupt-map:
> +    description: Standard PCI IRQ mapping properties.
> +
> +  max-link-speed:
> +    description: Specify PCI Gen for link capability.
> +
> +  bus-range:
> +    description: Range of bus numbers associated with this controller.
> +
> +  intel,rst-interval:

Use 'reset-assert-us'

> +    description: |
> +      Device reset interval in ms. Some devices need an interval of up
> +      to 500ms. By default it is 100ms.
> +
> +required:
> +  - compatible
> +  - device_type
> +  - reg
> +  - reg-names
> +  - ranges
> +  - resets
> +  - clocks
> +  - phys
> +  - phy-names
> +  - reset-gpios
> +  - num-lanes
> +  - linux,pci-domain
> +  - interrupts
> +  - interrupt-map
> +  - interrupt-map-mask
> +
> +examples:
> +  - |
> +    pcie10:pcie@d0e00000 {
> +      compatible = "intel,lgm-pcie";
> +      device_type = "pci";
> +      #address-cells = <3>;
> +      #size-cells = <2>;
> +      reg = <
> +            0xd0e00000 0x1000
> +            0xd2000000 0x800000
> +            0xd0a41000 0x1000
> +            >;
> +      reg-names = "dbi", "config", "app";
> +      linux,pci-domain = <0>;
> +      max-link-speed = <4>;
> +      bus-range = <0x00 0x08>;
> +      interrupt-parent = <&ioapic1>;
> +      interrupts = <67 1>;
> +      interrupt-map-mask = <0 0 0 0x7>;
> +      interrupt-map = <0 0 0 1 &ioapic1 27 1>,
> +                      <0 0 0 2 &ioapic1 28 1>,
> +                      <0 0 0 3 &ioapic1 29 1>,
> +                      <0 0 0 4 &ioapic1 30 1>;
> +      ranges = <0x02000000 0 0xd4000000 0xd4000000 0 0x04000000>;
> +      resets = <&rcu0 0x50 0>;
> +      clocks = <&cgu0 LGM_GCLK_PCIE10>;
> +      phys = <&cb0phy0>;
> +      phy-names = "phy";
> +    };
> +
> +    &pcie10 {

Don't show this SoC/board split in examples. Just combine them into one node.

> +      status = "okay";
> +      intel,rst-interval = <100>;
> +      reset-gpios = <&gpio0 3 GPIO_ACTIVE_LOW>;
> +      num-lanes = <2>;
> +    };
> --
> 2.11.0
>


* Re: [PATCH v2 2/3] dt-bindings: PCI: intel: Add YAML schemas for the PCIe RC controller
  2019-08-20 13:42   ` Rob Herring
@ 2019-08-21  9:56     ` Dilip Kota
  0 siblings, 0 replies; 23+ messages in thread
From: Dilip Kota @ 2019-08-21  9:56 UTC (permalink / raw)
  To: Rob Herring
  Cc: Jingoo Han, Gustavo Pimentel, linux-pci, Christoph Hellwig,
	devicetree, linux-kernel, Andy Shevchenko, cheol.yong.kim,
	chuanhua.lei, qi-ming.wu


On 8/20/2019 9:42 PM, Rob Herring wrote:
> On Tue, Aug 20, 2019 at 4:40 AM Dilip Kota <eswara.kota@linux.intel.com> wrote:
>> The Intel PCIe RC controller is a Synopsys DesignWare-based
>> PCIe core. Add YAML schemas for the PCIe controller in RC mode
>> present in the Intel Universal Gateway SoC.
> Run 'make dt_binding_check' and fix all the warnings.
Sure, will run and fix them. Sorry for missing it.
>
>> Signed-off-by: Dilip Kota <eswara.kota@linux.intel.com>
>> ---
>>   .../devicetree/bindings/pci/intel-pcie.yaml        | 133 +++++++++++++++++++++
>>   1 file changed, 133 insertions(+)
>>   create mode 100644 Documentation/devicetree/bindings/pci/intel-pcie.yaml
>>
>> diff --git a/Documentation/devicetree/bindings/pci/intel-pcie.yaml b/Documentation/devicetree/bindings/pci/intel-pcie.yaml
>> new file mode 100644
>> index 000000000000..80caaaba5e2c
>> --- /dev/null
>> +++ b/Documentation/devicetree/bindings/pci/intel-pcie.yaml
>> @@ -0,0 +1,133 @@
>> +# SPDX-License-Identifier: GPL-2.0
> (GPL-2.0-only OR BSD-2-Clause) is preferred for new bindings.
Sure, will update it.
>
>> +%YAML 1.2
>> +---
>> +$id: http://devicetree.org/schemas/pci/intel-pcie.yaml#
>> +$schema: http://devicetree.org/meta-schemas/core.yaml#
>> +
>> +title: Intel AXI bus based PCI express root complex
>> +
>> +maintainers:
>> +  - Dilip Kota <eswara.kota@linux.intel.com>
>> +
>> +properties:
>> +  compatible:
>> +    const: intel,lgm-pcie
>> +
>> +  device_type:
>> +    const: pci
>> +
>> +  "#address-cells":
>> +    const: 3
>> +
>> +  "#size-cells":
>> +    const: 2
>> +
>> +  reg:
>> +    items:
>> +      - description: Controller control and status registers.
>> +      - description: PCIe configuration registers.
>> +      - description: Controller application registers.
>> +
>> +  reg-names:
>> +    items:
>> +      - const: dbi
>> +      - const: config
>> +      - const: app
>> +
>> +  ranges:
>> +    description: Ranges for the PCI memory and I/O regions.
>> +
>> +  resets:
>> +    maxItems: 1
>> +
>> +  clocks:
>> +    description: PCIe registers interface clock.
>> +
>> +  phys:
>> +    maxItems: 1
>> +
>> +  phy-names:
>> +    const: phy
>> +
>> +  reset-gpios:
>> +    maxItems: 1
>> +
>> +  num-lanes:
>> +    description: Number of lanes to use for this port.
>> +
>> +  linux,pci-domain:
>> +    description: PCI domain ID.
>> +
>> +  interrupts:
>> +    description: PCIe core integrated miscellaneous interrupt.
>> +
>> +  interrupt-map-mask:
>> +    description: Standard PCI IRQ mapping properties.
>> +
>> +  interrupt-map:
>> +    description: Standard PCI IRQ mapping properties.
>> +
>> +  max-link-speed:
>> +    description: Specify PCI Gen for link capability.
>> +
>> +  bus-range:
>> +    description: Range of bus numbers associated with this controller.
>> +
>> +  intel,rst-interval:
> Use 'reset-assert-us'

Sure.
I am thinking of 'reset-assert-ms' to keep the value in milliseconds.
Please let me know your view.

>
>> +    description: |
>> +      Device reset interval in ms. Some devices need an interval of up
>> +      to 500ms. By default it is 100ms.
>> +
>> +required:
>> +  - compatible
>> +  - device_type
>> +  - reg
>> +  - reg-names
>> +  - ranges
>> +  - resets
>> +  - clocks
>> +  - phys
>> +  - phy-names
>> +  - reset-gpios
>> +  - num-lanes
>> +  - linux,pci-domain
>> +  - interrupts
>> +  - interrupt-map
>> +  - interrupt-map-mask
>> +
>> +examples:
>> +  - |
>> +    pcie10:pcie@d0e00000 {
>> +      compatible = "intel,lgm-pcie";
>> +      device_type = "pci";
>> +      #address-cells = <3>;
>> +      #size-cells = <2>;
>> +      reg = <
>> +            0xd0e00000 0x1000
>> +            0xd2000000 0x800000
>> +            0xd0a41000 0x1000
>> +            >;
>> +      reg-names = "dbi", "config", "app";
>> +      linux,pci-domain = <0>;
>> +      max-link-speed = <4>;
>> +      bus-range = <0x00 0x08>;
>> +      interrupt-parent = <&ioapic1>;
>> +      interrupts = <67 1>;
>> +      interrupt-map-mask = <0 0 0 0x7>;
>> +      interrupt-map = <0 0 0 1 &ioapic1 27 1>,
>> +                      <0 0 0 2 &ioapic1 28 1>,
>> +                      <0 0 0 3 &ioapic1 29 1>,
>> +                      <0 0 0 4 &ioapic1 30 1>;
>> +      ranges = <0x02000000 0 0xd4000000 0xd4000000 0 0x04000000>;
>> +      resets = <&rcu0 0x50 0>;
>> +      clocks = <&cgu0 LGM_GCLK_PCIE10>;
>> +      phys = <&cb0phy0>;
>> +      phy-names = "phy";
>> +    };
>> +
>> +    &pcie10 {
> Don't show this SoC/board split in examples. Just combine them into one node.

Sure, will combine.

Thanks for the review comments.

Regards,
Dilip

>
>> +      status = "okay";
>> +      intel,rst-interval = <100>;
>> +      reset-gpios = <&gpio0 3 GPIO_ACTIVE_LOW>;
>> +      num-lanes = <2>;
>> +    };
>> --
>> 2.11.0
>>


* RE: [PATCH v2 3/3] dwc: PCI: intel: Intel PCIe RC controller driver
  2019-08-20  9:39 ` [PATCH v2 3/3] dwc: PCI: intel: Intel PCIe RC controller driver Dilip Kota
@ 2019-08-24 21:03   ` Martin Blumenstingl
  2019-08-26  3:30     ` Chuan Hua, Lei
  0 siblings, 1 reply; 23+ messages in thread
From: Martin Blumenstingl @ 2019-08-24 21:03 UTC (permalink / raw)
  To: eswara.kota
  Cc: andriy.shevchenko, cheol.yong.kim, chuanhua.lei, devicetree,
	gustavo.pimentel, hch, jingoohan1, linux-kernel, linux-pci,
	qi-ming.wu

Hi Dilip,

first of all: thank you for submitting this upstream!
I hope that we can use this driver to replace the out-of-tree PCIe
driver that's used in OpenWrt for the Lantiq VRX200 SoCs.

a small disclaimer: I don't have access to any Lantiq, Intel or
DesignWare datasheets. so everything I write below is based on my own
understanding of the Tegra public datasheet (which describes the
DesignWare PCIe registers) as well as the public Lantiq UGW code drops
with out-of-tree drivers for an older version of this PCIe IP.
thus some of my statements below may be wrong - in this case I would
appreciate an explanation of how the hardware really works.

please keep me CC'ed on further revisions of this series. I am highly
interested in making this driver work on the Lantiq VRX200 SoCs once
the initial version has landed in linux-next.

> +config PCIE_INTEL_AXI
> +        bool "Intel AHB/AXI PCIe host controller support"
I believe that this is mostly the same IP block as it's used in Lantiq
(xDSL) VRX200 SoCs (with MIPS cores) which was introduced in 2010
(before Intel acquired Lantiq).
This is why I would have personally called the driver PCIE_LANTIQ

[...]
> +#define PCIE_CCRID				0x8
> +
> +#define PCIE_LCAP				0x7C
> +#define PCIE_LCAP_MAX_LINK_SPEED		GENMASK(3, 0)
> +#define PCIE_LCAP_MAX_LENGTH_WIDTH		GENMASK(9, 4)
> +
> +/* Link Control and Status Register */
> +#define PCIE_LCTLSTS				0x80
> +#define PCIE_LCTLSTS_ASPM_ENABLE		GENMASK(1, 0)
> +#define PCIE_LCTLSTS_RCB128			BIT(3)
> +#define PCIE_LCTLSTS_LINK_DISABLE		BIT(4)
> +#define PCIE_LCTLSTS_COM_CLK_CFG		BIT(6)
> +#define PCIE_LCTLSTS_HW_AW_DIS			BIT(9)
> +#define PCIE_LCTLSTS_LINK_SPEED			GENMASK(19, 16)
> +#define PCIE_LCTLSTS_NEGOTIATED_LINK_WIDTH	GENMASK(25, 20)
> +#define PCIE_LCTLSTS_SLOT_CLK_CFG		BIT(28)
> +
> +#define PCIE_LCTLSTS2				0xA0
> +#define PCIE_LCTLSTS2_TGT_LINK_SPEED		GENMASK(3, 0)
> +#define PCIE_LCTLSTS2_TGT_LINK_SPEED_25GT	0x1
> +#define PCIE_LCTLSTS2_TGT_LINK_SPEED_5GT	0x2
> +#define PCIE_LCTLSTS2_TGT_LINK_SPEED_8GT	0x3
> +#define PCIE_LCTLSTS2_TGT_LINK_SPEED_16GT	0x4
> +#define PCIE_LCTLSTS2_HW_AUTO_DIS		BIT(5)
> +
> +/* Ack Frequency Register */
> +#define PCIE_AFR				0x70C
> +#define PCIE_AFR_FTS_NUM			GENMASK(15, 8)
> +#define PCIE_AFR_COM_FTS_NUM			GENMASK(23, 16)
> +#define PCIE_AFR_GEN12_FTS_NUM_DFT		(SZ_128 - 1)
> +#define PCIE_AFR_GEN3_FTS_NUM_DFT		180
> +#define PCIE_AFR_GEN4_FTS_NUM_DFT		196
> +
> +#define PCIE_PLCR_DLL_LINK_EN			BIT(5)
> +#define PCIE_PORT_LOGIC_FTS			GENMASK(7, 0)
> +#define PCIE_PORT_LOGIC_DFT_FTS_NUM		(SZ_128 - 1)
> +
> +#define PCIE_MISC_CTRL				0x8BC
> +#define PCIE_MISC_CTRL_DBI_RO_WR_EN		BIT(0)
> +
> +#define PCIE_MULTI_LANE_CTRL			0x8C0
> +#define PCIE_UPCONFIG_SUPPORT			BIT(7)
> +#define PCIE_DIRECT_LINK_WIDTH_CHANGE		BIT(6)
> +#define PCIE_TARGET_LINK_WIDTH			GENMASK(5, 0)
> +
> +#define PCIE_IOP_CTRL				0x8C4
> +#define PCIE_IOP_RX_STANDBY_CTRL		GENMASK(6, 0)
are you sure that you need any of the registers above?
as far as I can tell most (all?) of them are part of the DesignWare
register set. why doesn't pcie-designware-host initialize these?
I don't have any datasheet or register documentation for the DesignWare
PCIe core. In my own experiment from a month ago on the Lantiq VRX200
SoC I didn't need any of this: [0]

this also makes me wonder if various functions below like
intel_pcie_link_setup() and intel_pcie_max_speed_setup() (and probably
more) are really needed or if it's just cargo cult / copy paste from
an out-of-tree driver.

> +/* APP RC Core Control Register */
> +#define PCIE_RC_CCR				0x10
> +#define PCIE_RC_CCR_LTSSM_ENABLE		BIT(0)
> +#define PCIE_DEVICE_TYPE			GENMASK(7, 4)
> +#define PCIE_RC_CCR_RC_MODE			BIT(2)
> +
> +/* PCIe Message Control */
> +#define PCIE_MSG_CR				0x30
> +#define PCIE_XMT_PM_TURNOFF			BIT(0)
> +
> +/* PCIe Power Management Control */
> +#define PCIE_PMC				0x44
> +#define PCIE_PM_IN_L2				BIT(20)
> +
> +/* Interrupt Enable Register */
> +#define PCIE_IRNEN				0xF4
> +#define PCIE_IRNCR				0xF8
> +#define PCIE_IRN_AER_REPORT			BIT(0)
> +#define PCIE_IRN_PME				BIT(2)
> +#define PCIE_IRN_HOTPLUG			BIT(3)
> +#define PCIE_IRN_RX_VDM_MSG			BIT(4)
> +#define PCIE_IRN_PM_TO_ACK			BIT(9)
> +#define PCIE_IRN_PM_TURNOFF_ACK			BIT(10)
> +#define PCIE_IRN_LINK_AUTO_BW_STATUS		BIT(11)
> +#define PCIE_IRN_BW_MGT				BIT(12)
> +#define PCIE_IRN_WAKEUP				BIT(17)
> +#define PCIE_IRN_MSG_LTR			BIT(18)
> +#define PCIE_IRN_SYS_INT			BIT(28)
> +#define PCIE_IRN_SYS_ERR_RC			BIT(29)
> +
> +#define PCIE_IRN_IR_INT	(PCIE_IRN_AER_REPORT | PCIE_IRN_PME | \
> +			PCIE_IRN_RX_VDM_MSG | PCIE_IRN_SYS_ERR_RC | \
> +			PCIE_IRN_PM_TO_ACK | PCIE_IRN_LINK_AUTO_BW_STATUS | \
> +			PCIE_IRN_BW_MGT | PCIE_IRN_MSG_LTR)
I would rename all of the APP register #defines to match the pattern
PCIE_APP_*
That makes it easy to differentiate the PCIe (DBI) registers from the
APP registers.

[...]
> +static inline u32 pcie_app_rd(struct intel_pcie_port *lpp, u32 ofs)
> +{
> +	return readl(lpp->app_base + ofs);
> +}
> +
> +static inline void pcie_app_wr(struct intel_pcie_port *lpp, u32 val, u32 ofs)
> +{
> +	writel(val, lpp->app_base + ofs);
> +}
> +
> +static void pcie_app_wr_mask(struct intel_pcie_port *lpp,
> +			     u32 mask, u32 val, u32 ofs)
> +{
> +	pcie_update_bits(lpp->app_base, mask, val, ofs);
> +}
do you have plans to support the MIPS SoCs (VRX200, ARX300, XRX350,
XRX550)?
These will need register writes in big endian. in my own experiment [0]
I simply used the regmap interface which will default to little endian
register access but switch to big endian when the devicetree node is
marked with big-endian.
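
a rough sketch of that approach (untested against this driver; the
app_regmap member and config values are illustrative only, and it
needs <linux/regmap.h>):

	static const struct regmap_config intel_pcie_app_regmap_config = {
		.reg_bits = 32,
		.val_bits = 32,
		.reg_stride = 4,
	};

	lpp->app_regmap = devm_regmap_init_mmio(dev, lpp->app_base,
					&intel_pcie_app_regmap_config);
	if (IS_ERR(lpp->app_regmap))
		return PTR_ERR(lpp->app_regmap);

	/* then, e.g. instead of pcie_app_wr_mask(): */
	regmap_update_bits(lpp->app_regmap, PCIE_IRNEN, irq_bit, irq_bit);

with that, adding "big-endian;" to the devicetree node switches the
accessors automatically, without any driver change.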

[...]
> +static int intel_pcie_bios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
> +{
> +	struct pcie_port *pp = dev->bus->sysdata;
> +	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
> +	struct intel_pcie_port *lpp = dev_get_drvdata(pci->dev);
> +	struct device *pdev = lpp->pci->dev;
> +	u32 irq_bit;
> +	int irq;
> +
> +	if (pin == PCI_INTERRUPT_UNKNOWN || pin > PCI_NUM_INTX) {
> +		dev_warn(pdev, "WARNING: dev %s: invalid interrupt pin %d\n",
> +			 pci_name(dev), pin);
> +		return -1;
> +	}
> +	irq = of_irq_parse_and_map_pci(dev, slot, pin);
> +	if (!irq) {
> +		dev_err(pdev, "trying to map irq for unknown slot:%d pin:%d\n",
> +			slot, pin);
> +		return -1;
> +	}
> +	/* Pin to irq offset bit position */
> +	irq_bit = BIT(pin + PCIE_INTX_OFFSET);
> +
> +	/* Clear possible pending interrupts first */
> +	pcie_app_wr(lpp, irq_bit, PCIE_IRNCR);
> +
> +	pcie_app_wr_mask(lpp, irq_bit, irq_bit, PCIE_IRNEN);
> +	return irq;
> +}
my interpretation is that there's an interrupt controller embedded into
the APP registers. The integrated interrupt controller takes 5
interrupts and provides the legacy PCI_INTA/B/C/D interrupts as well as
a WAKEUP IRQ. Each of its own interrupts is tied to one of the parent
interrupts.

my solution (in the experiment on the VRX200 SoC [1]) was to implement an
interrupt controller and have it as a child devicetree node. then I used
interrupt-map to route the interrupts to the PCIe interrupt controller.
with that I didn't have to overwrite .map_irq.

(note that this comment is only related to the PCI_INTx and WAKE
interrupts but not the other interrupt configuration bits, because as
far as I understand the other ones are only related to the controller)
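
for illustration, the core of such a child INTx domain could look like
this (hypothetical names, adapted from my VRX200 experiment rather
than taken from this driver):

	/* .irq_unmask of a hypothetical intel_pcie_intx_chip; assumes
	 * chip_data was set to lpp in the domain's .map hook
	 */
	static void intel_pcie_intx_unmask(struct irq_data *d)
	{
		struct intel_pcie_port *lpp = irq_data_get_irq_chip_data(d);
		/* hwirq is 0-based, INTx pins are 1-based */
		u32 bit = BIT(d->hwirq + 1 + PCIE_INTX_OFFSET);

		pcie_app_wr_mask(lpp, bit, bit, PCIE_IRNEN);
	}

	lpp->intx_domain = irq_domain_add_linear(intc_np, PCI_NUM_INTX,
						 &intx_domain_ops, lpp);

the generic of_irq_parse_and_map_pci() then resolves INTA..INTD through
the standard interrupt-map into this domain, so .map_irq doesn't need
to be overridden at all.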

> +static void intel_pcie_bridge_class_code_setup(struct intel_pcie_port *lpp)
> +{
> +	pcie_rc_cfg_wr_mask(lpp, PCIE_MISC_CTRL_DBI_RO_WR_EN,
> +			    PCIE_MISC_CTRL_DBI_RO_WR_EN, PCIE_MISC_CTRL);
> +	pcie_rc_cfg_wr_mask(lpp, 0xffffff00, PCI_CLASS_BRIDGE_PCI << 16,
> +			    PCIE_CCRID);
> +	pcie_rc_cfg_wr_mask(lpp, PCIE_MISC_CTRL_DBI_RO_WR_EN, 0,
> +			    PCIE_MISC_CTRL);
> +}
in my own experiments I didn't need this - have you confirmed that it's
required? and if it is required: why is that?
if others are curious as well then maybe add the explanation as comment
to the driver

[...]
> +static const char *pcie_link_gen_to_str(int gen)
> +{
> +	switch (gen) {
> +	case PCIE_LINK_SPEED_GEN1:
> +		return "2.5";
> +	case PCIE_LINK_SPEED_GEN2:
> +		return "5.0";
> +	case PCIE_LINK_SPEED_GEN3:
> +		return "8.0";
> +	case PCIE_LINK_SPEED_GEN4:
> +		return "16.0";
> +	default:
> +		return "???";
> +	}
> +}
why duplicate PCIE_SPEED2STR from drivers/pci/pci.h?
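
i.e. something like this should work instead (untested;
pcie_link_speed[] and PCIE_SPEED2STR both come from drivers/pci/pci.h,
which this driver already includes as "../../pci.h"):

	return sprintf(buf, "Port %2u Width x%u Speed %s\n", lpp->id,
		       width, PCIE_SPEED2STR(pcie_link_speed[gen]));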

> +static int intel_pcie_ep_rst_init(struct intel_pcie_port *lpp)
> +{
> +	struct device *dev = lpp->pci->dev;
> +	int ret = 0;
> +
> +	lpp->reset_gpio = devm_gpiod_get(dev, "reset", GPIOD_OUT_LOW);
> +	if (IS_ERR(lpp->reset_gpio)) {
> +		ret = PTR_ERR(lpp->reset_gpio);
> +		if (ret != -EPROBE_DEFER)
> +			dev_err(dev, "failed to request PCIe GPIO: %d\n", ret);
> +		return ret;
> +	}
> +	/* Make initial reset last for 100ms */
> +	msleep(100);
why is there lpp->rst_interval when you hardcode 100ms here?
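
lpp->rst_interval is already populated by intel_pcie_get_resources()
before this function runs, so presumably this was meant instead:

	/* Make the initial reset last for the configured interval */
	msleep(lpp->rst_interval);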

[...]
> +static int intel_pcie_setup_irq(struct intel_pcie_port *lpp)
> +{
> +	struct device *dev = lpp->pci->dev;
> +	struct platform_device *pdev;
> +	char *irq_name;
> +	int irq, ret;
> +
> +	pdev = to_platform_device(dev);
> +	irq = platform_get_irq(pdev, 0);
> +	if (irq < 0) {
> +		dev_err(dev, "missing sys integrated irq resource\n");
> +		return irq;
> +	}
> +
> +	irq_name = devm_kasprintf(dev, GFP_KERNEL, "pcie_misc%d", lpp->id);
> +	if (!irq_name) {
> +		dev_err(dev, "failed to alloc irq name\n");
> +		return -ENOMEM;
you are only requesting one IRQ line for the whole driver. personally
I would drop the custom irq_name and pass NULL to devm_request_irq
because that will then fall-back to auto-generating an IRQ name based
on the devicetree node. I assume it's the same for ACPI but I haven't
tried that yet.
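
i.e. simply (untested):

	ret = devm_request_irq(dev, irq, intel_pcie_core_isr,
			       IRQF_SHARED, NULL, lpp);

devm_request_irq() falls back to dev_name(dev) when the name is NULL.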

> +static void intel_pcie_disable_clks(struct intel_pcie_port *lpp)
> +{
> +	clk_disable_unprepare(lpp->core_clk);
> +}
> +
> +static int intel_pcie_enable_clks(struct intel_pcie_port *lpp)
> +{
> +	int ret = clk_prepare_enable(lpp->core_clk);
> +
> +	if (ret)
> +		dev_err(lpp->pci->dev, "Core clock enable failed: %d\n", ret);
> +
> +	return ret;
> +}
you have some functions (using these two as an example) which are only
used once. they add some boilerplate and (in my opinion) make the code
harder to read.

> +static int intel_pcie_get_resources(struct platform_device *pdev)
> +{
> +	struct intel_pcie_port *lpp;
> +	struct device *dev;
> +	int ret;
> +
> +	lpp = platform_get_drvdata(pdev);
> +	dev = lpp->pci->dev;
> +
> +	lpp->core_clk = devm_clk_get(dev, NULL);
> +	if (IS_ERR(lpp->core_clk)) {
> +		ret = PTR_ERR(lpp->core_clk);
> +		if (ret != -EPROBE_DEFER)
> +			dev_err(dev, "failed to get clks: %d\n", ret);
> +		return ret;
> +	}
> +
> +	lpp->core_rst = devm_reset_control_get(dev, NULL);
> +	if (IS_ERR(lpp->core_rst)) {
> +		ret = PTR_ERR(lpp->core_rst);
> +		if (ret != -EPROBE_DEFER)
> +			dev_err(dev, "failed to get resets: %d\n", ret);
> +		return ret;
> +	}
> +
> +	ret = device_property_match_string(dev, "device_type", "pci");
> +	if (ret) {
> +		dev_err(dev, "failed to find pci device type: %d\n", ret);
> +		return ret;
> +	}
> +
> +	if (device_property_read_u32(dev, "intel,rst-interval",
> +				     &lpp->rst_interval))
> +		lpp->rst_interval = RST_INTRVL_DFT_MS;
> +
> +	if (device_property_read_u32(dev, "max-link-speed", &lpp->link_gen))
> +		lpp->link_gen = 0; /* Fallback to auto */
> +
> +	lpp->app_base = devm_platform_ioremap_resource(pdev, 2);
I suggest using platform_get_resource_byname(pdev, IORESOURCE_MEM,
"app") because pcie-designware uses named resources for the "config"
space already

that said, devm_platform_ioremap_resource_byname would be a great
addition in my opinion ;)
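
until such a helper exists, a rough open-coded sketch:

  struct resource *res;

  res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "app");
  lpp->app_base = devm_ioremap_resource(dev, res);
  if (IS_ERR(lpp->app_base))
    return PTR_ERR(lpp->app_base);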

> +	if (IS_ERR(lpp->app_base))
> +		return PTR_ERR(lpp->app_base);
> +
> +	ret = intel_pcie_ep_rst_init(lpp);
> +	if (ret)
> +		return ret;
> +
> +	lpp->phy = devm_phy_get(dev, "phy");
I suggest to use "pcie" as phy-name, otherwise the binding looks odd:
  phys = <&pcie0_phy>;
  phy-names = "phy";
versus:
  phys = <&pcie0_phy>;
  phy-names = "pcie";

> +static ssize_t
> +pcie_link_status_show(struct device *dev, struct device_attribute *attr,
> +		      char *buf)
> +{
> +	u32 reg, width, gen;
> +	struct intel_pcie_port *lpp;
> +
> +	lpp = dev_get_drvdata(dev);
> +
> +	reg = pcie_rc_cfg_rd(lpp, PCIE_LCTLSTS);
> +	width = FIELD_GET(PCIE_LCTLSTS_NEGOTIATED_LINK_WIDTH, reg);
> +	gen = FIELD_GET(PCIE_LCTLSTS_LINK_SPEED, reg);
> +	if (gen > lpp->max_speed)
> +		return -EINVAL;
> +
> +	return sprintf(buf, "Port %2u Width x%u Speed %s GT/s\n", lpp->id,
> +		       width, pcie_link_gen_to_str(gen));
> +}
> +static DEVICE_ATTR_RO(pcie_link_status);
"lspci -vv | grep LnkSta" already shows the link speed and width.
why do you need this?

> +static ssize_t pcie_speed_store(struct device *dev,
> +				struct device_attribute *attr,
> +				const char *buf, size_t len)
> +{
> +	struct intel_pcie_port *lpp;
> +	unsigned long val;
> +	int ret;
> +
> +	lpp = dev_get_drvdata(dev);
> +
> +	ret = kstrtoul(buf, 10, &val);
> +	if (ret)
> +		return ret;
> +
> +	if (val > lpp->max_speed)
> +		return -EINVAL;
> +
> +	lpp->link_gen = val;
> +	intel_pcie_max_speed_setup(lpp);
> +	intel_pcie_speed_change_disable(lpp);
> +	intel_pcie_speed_change_enable(lpp);
> +
> +	return len;
> +}
> +static DEVICE_ATTR_WO(pcie_speed);
this is already configurable via devicetree, why do you need this?

> +/*
> + * Link width change on the fly is not always successful.
> + * It also depends on the partner.
> + */
> +static ssize_t pcie_width_store(struct device *dev,
> +				struct device_attribute *attr,
> +				const char *buf, size_t len)
> +{
> +	struct intel_pcie_port *lpp;
> +	unsigned long val;
> +
> +	lpp = dev_get_drvdata(dev);
> +
> +	if (kstrtoul(buf, 10, &val))
> +		return -EINVAL;
> +
> +	if (val > lpp->max_width)
> +		return -EINVAL;
> +
> +	lpp->lanes = val;
> +	intel_pcie_max_link_width_setup(lpp);
> +
> +	return len;
> +}
> +static DEVICE_ATTR_WO(pcie_width);
is it needed because of a bug somewhere? who do you expect to use this
sysfs attribute and when (which condition) do you expect people to use
this?

[...]
> +static void __intel_pcie_remove(struct intel_pcie_port *lpp)
> +{
> +	pcie_rc_cfg_wr_mask(lpp, PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER,
> +			    0, PCI_COMMAND);
I expect logic like this to be part of the PCI subsystem in Linux.
why is this needed?

[...]
> +int intel_pcie_msi_init(struct pcie_port *pp)
> +{
> +	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
> +
> +	dev_dbg(pci->dev, "MSI is handled in x86 arch\n");
I would rephrase this to "MSI is handled by a separate MSI interrupt
controller"
on the VRX200 SoC there's also a MSI interrupt controller and it seems
that "x86" has this as well (even though it might be two different MSI
IRQ IP blocks).

[...]
> +static int intel_pcie_probe(struct platform_device *pdev)
> +{
> +	struct device *dev = &pdev->dev;
> +	const struct intel_pcie_soc *data;
> +	struct intel_pcie_port *lpp;
> +	struct pcie_port *pp;
> +	struct dw_pcie *pci;
> +	int ret;
> +
> +	lpp = devm_kzalloc(dev, sizeof(*lpp), GFP_KERNEL);
> +	if (!lpp)
> +		return -ENOMEM;
> +
> +	pci = devm_kzalloc(dev, sizeof(*pci), GFP_KERNEL);
> +	if (!pci)
> +		return -ENOMEM;
other drivers have a "struct dw_pcie	pci" struct member (I assume
that it's to prevent memory fragmentation). why not do the same here
and embed it into struct intel_pcie_port?

> +	platform_set_drvdata(pdev, lpp);
> +	lpp->pci = pci;
> +	pci->dev = dev;
> +	pp = &pci->pp;
> +
> +	ret = device_property_read_u32(dev, "linux,pci-domain", &lpp->id);
> +	if (ret) {
> +		dev_err(dev, "failed to get domain id, errno %d\n", ret);
> +		return ret;
> +	}
> +
> +	pci->dbi_base = devm_platform_ioremap_resource(pdev, 0);
> +	if (IS_ERR(pci->dbi_base))
> +		return PTR_ERR(pci->dbi_base);
> +
as stated above I would use the _byname variant


Martin


[0] https://github.com/xdarklight/linux/blob/lantiq-pcie-driver-next-20190727/drivers/pci/controller/dwc/pcie-lantiq.c
[1] https://github.com/xdarklight/linux/blob/lantiq-pcie-driver-next-20190727/Documentation/devicetree/bindings/pci/lantiq%2Cdw-pcie.yaml

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v2 3/3] dwc: PCI: intel: Intel PCIe RC controller driver
  2019-08-24 21:03   ` Martin Blumenstingl
@ 2019-08-26  3:30     ` Chuan Hua, Lei
  2019-08-26  6:48       ` Dilip Kota
                         ` (2 more replies)
  0 siblings, 3 replies; 23+ messages in thread
From: Chuan Hua, Lei @ 2019-08-26  3:30 UTC (permalink / raw)
  To: Martin Blumenstingl, eswara.kota
  Cc: andriy.shevchenko, cheol.yong.kim, devicetree, gustavo.pimentel,
	hch, jingoohan1, linux-kernel, linux-pci, qi-ming.wu

Hi Martin,

Thanks for your valuable comments. I reply to some of them below.

Regards,

Chuanhua

On 8/25/2019 5:03 AM, Martin Blumenstingl wrote:
> Hi Dilip,
>
> first of all: thank you for submitting this upstream!
> I hope that we can use this driver to replace the out-of-tree PCIe
> driver that's used in OpenWrt for the Lantiq VRX200 SoCs.
>
> a small disclaimer: I don't have access to any Lantiq, Intel or
> DesignWare datasheets. so everything I write below is based on my own
> understanding of the Tegra public datasheet (which describes the
> DesignWare PCIe registers) as well as the public Lantiq UGW code drops
> with out-of-tree drivers for an older version of this PCIe IP.
> thus some of my statements below may be wrong - in this case I would
> appreciate an explanation of how the hardware really works.
>
> please keep me CC'ed on further revisions of this series. I am highly
> interested in making this driver work on the Lantiq VRX200 SoCs once
> the initial version has landed in linux-next.
>
>> +config PCIE_INTEL_AXI
>> +        bool "Intel AHB/AXI PCIe host controller support"
> I believe that this is mostly the same IP block as it's used in Lantiq
> (xDSL) VRX200 SoCs (with MIPS cores) which was introduced in 2010
> (before Intel acquired Lantiq).
> This is why I would have personally called the driver PCIE_LANTIQ

VRX200 SoC (internally called VR9) was the first PCIe SoC product
which used the Synopsys controller v3.30a. It only supports PCIe
Gen 1.1/1.0. The PHY is an internal PHY from Infineon.

After that, we had other PCIe 1.1/1.0 products such as ARX300/390.
However, these products are EOL, which is why we don't put effort into
supporting VRX200/ARX300/390.

PCIE_LANTIQ was also the name used internally in the product, as in
linux-3.10.x.

>
> [...]
>> +#define PCIE_CCRID				0x8
>> +
>> +#define PCIE_LCAP				0x7C
>> +#define PCIE_LCAP_MAX_LINK_SPEED		GENMASK(3, 0)
>> +#define PCIE_LCAP_MAX_LENGTH_WIDTH		GENMASK(9, 4)
>> +
>> +/* Link Control and Status Register */
>> +#define PCIE_LCTLSTS				0x80
>> +#define PCIE_LCTLSTS_ASPM_ENABLE		GENMASK(1, 0)
>> +#define PCIE_LCTLSTS_RCB128			BIT(3)
>> +#define PCIE_LCTLSTS_LINK_DISABLE		BIT(4)
>> +#define PCIE_LCTLSTS_COM_CLK_CFG		BIT(6)
>> +#define PCIE_LCTLSTS_HW_AW_DIS			BIT(9)
>> +#define PCIE_LCTLSTS_LINK_SPEED			GENMASK(19, 16)
>> +#define PCIE_LCTLSTS_NEGOTIATED_LINK_WIDTH	GENMASK(25, 20)
>> +#define PCIE_LCTLSTS_SLOT_CLK_CFG		BIT(28)
>> +
>> +#define PCIE_LCTLSTS2				0xA0
>> +#define PCIE_LCTLSTS2_TGT_LINK_SPEED		GENMASK(3, 0)
>> +#define PCIE_LCTLSTS2_TGT_LINK_SPEED_25GT	0x1
>> +#define PCIE_LCTLSTS2_TGT_LINK_SPEED_5GT	0x2
>> +#define PCIE_LCTLSTS2_TGT_LINK_SPEED_8GT	0x3
>> +#define PCIE_LCTLSTS2_TGT_LINK_SPEED_16GT	0x4
>> +#define PCIE_LCTLSTS2_HW_AUTO_DIS		BIT(5)
>> +
>> +/* Ack Frequency Register */
>> +#define PCIE_AFR				0x70C
>> +#define PCIE_AFR_FTS_NUM			GENMASK(15, 8)
>> +#define PCIE_AFR_COM_FTS_NUM			GENMASK(23, 16)
>> +#define PCIE_AFR_GEN12_FTS_NUM_DFT		(SZ_128 - 1)
>> +#define PCIE_AFR_GEN3_FTS_NUM_DFT		180
>> +#define PCIE_AFR_GEN4_FTS_NUM_DFT		196
>> +
>> +#define PCIE_PLCR_DLL_LINK_EN			BIT(5)
>> +#define PCIE_PORT_LOGIC_FTS			GENMASK(7, 0)
>> +#define PCIE_PORT_LOGIC_DFT_FTS_NUM		(SZ_128 - 1)
>> +
>> +#define PCIE_MISC_CTRL				0x8BC
>> +#define PCIE_MISC_CTRL_DBI_RO_WR_EN		BIT(0)
>> +
>> +#define PCIE_MULTI_LANE_CTRL			0x8C0
>> +#define PCIE_UPCONFIG_SUPPORT			BIT(7)
>> +#define PCIE_DIRECT_LINK_WIDTH_CHANGE		BIT(6)
>> +#define PCIE_TARGET_LINK_WIDTH			GENMASK(5, 0)
>> +
>> +#define PCIE_IOP_CTRL				0x8C4
>> +#define PCIE_IOP_RX_STANDBY_CTRL		GENMASK(6, 0)
no need for IOP
> are you sure that you need any of the registers above?
> as far as I can tell most (all?) of them are part of the DesignWare
> register set. why doesn't pcie-designware-host initialize these?
> I don't have any datasheet or register documentation for the DesignWare
> PCIe core. In my own experiment from a month ago on the Lantiq VRX200
> SoC I didn't need any of this: [0]

As I mentioned, VRX200 was a very old PCIe Gen 1.1 product. In our
latest SoC, Lightning Mountain, we are using Synopsys controller
5.20/5.50a. We support Gen2 (XRX350/550), Gen3 (PRX300) and Gen4 (x86
based SoC). We also support dual lane and single lane.

Some of the above registers are needed to control FTS, link width and
link speed.

>
> this also makes me wonder if various functions below like
> intel_pcie_link_setup() and intel_pcie_max_speed_setup() (and probably
> more) are really needed or if it's just cargo cult / copy paste from
> an out-of-tree driver).

intel_pcie_link_setup() records the maximum speed and link width. We
need these to change speed and link width on the fly, which is not
supported by the common dwc driver code.
There are two major purposes:
1. Cable applications have a battery-backed mode. In that case, the
link has to switch from x2 and Gen4 to x1 and Gen1 on the fly.
2. Some customers have high EMI issues. They can try switching to a
lower speed and lower link width on the fly to check. Otherwise, they
have to change the device tree and reboot the system.
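
The speed part boils down to reprogramming the target link speed field
and retraining, roughly like this (simplified sketch, not the exact
driver code):

  /* select the new target link speed (Gen1..Gen4) */
  pcie_rc_cfg_wr_mask(lpp, PCIE_LCTLSTS2_TGT_LINK_SPEED,
                      FIELD_PREP(PCIE_LCTLSTS2_TGT_LINK_SPEED,
                                 lpp->link_gen),
                      PCIE_LCTLSTS2);
  /* ... then toggle the speed change enable bit to retrain the link */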

>
>> +/* APP RC Core Control Register */
>> +#define PCIE_RC_CCR				0x10
>> +#define PCIE_RC_CCR_LTSSM_ENABLE		BIT(0)
>> +#define PCIE_DEVICE_TYPE			GENMASK(7, 4)
>> +#define PCIE_RC_CCR_RC_MODE			BIT(2)
>> +
>> +/* PCIe Message Control */
>> +#define PCIE_MSG_CR				0x30
>> +#define PCIE_XMT_PM_TURNOFF			BIT(0)
>> +
>> +/* PCIe Power Management Control */
>> +#define PCIE_PMC				0x44
>> +#define PCIE_PM_IN_L2				BIT(20)
>> +
>> +/* Interrupt Enable Register */
>> +#define PCIE_IRNEN				0xF4
>> +#define PCIE_IRNCR				0xF8
>> +#define PCIE_IRN_AER_REPORT			BIT(0)
>> +#define PCIE_IRN_PME				BIT(2)
>> +#define PCIE_IRN_HOTPLUG			BIT(3)
>> +#define PCIE_IRN_RX_VDM_MSG			BIT(4)
>> +#define PCIE_IRN_PM_TO_ACK			BIT(9)
>> +#define PCIE_IRN_PM_TURNOFF_ACK			BIT(10)
>> +#define PCIE_IRN_LINK_AUTO_BW_STATUS		BIT(11)
>> +#define PCIE_IRN_BW_MGT				BIT(12)
>> +#define PCIE_IRN_WAKEUP				BIT(17)
>> +#define PCIE_IRN_MSG_LTR			BIT(18)
>> +#define PCIE_IRN_SYS_INT			BIT(28)
>> +#define PCIE_IRN_SYS_ERR_RC			BIT(29)
>> +
>> +#define PCIE_IRN_IR_INT	(PCIE_IRN_AER_REPORT | PCIE_IRN_PME | \
>> +			PCIE_IRN_RX_VDM_MSG | PCIE_IRN_SYS_ERR_RC | \
>> +			PCIE_IRN_PM_TO_ACK | PCIE_IRN_LINK_AUTO_BW_STATUS | \
>> +			PCIE_IRN_BW_MGT | PCIE_IRN_MSG_LTR)
> I would rename all of the APP register #defines to match the pattern
> PCIE_APP_*
> That makes it easy to differentiate the PCIe (DBI) registers from the
> APP registers.
>
> [...]
Agree.
>> +static inline u32 pcie_app_rd(struct intel_pcie_port *lpp, u32 ofs)
>> +{
>> +	return readl(lpp->app_base + ofs);
>> +}
>> +
>> +static inline void pcie_app_wr(struct intel_pcie_port *lpp, u32 val, u32 ofs)
>> +{
>> +	writel(val, lpp->app_base + ofs);
>> +}
>> +
>> +static void pcie_app_wr_mask(struct intel_pcie_port *lpp,
>> +			     u32 mask, u32 val, u32 ofs)
>> +{
>> +	pcie_update_bits(lpp->app_base, mask, val, ofs);
>> +}
> do you have plans to support the MIPS SoCs (VRX200, ARX300, XRX350,
> XRX550)?
> These will need register writes in big endian. in my own experiment [0]
> I simply used the regmap interface which will default to little endian
> register access but switch to big endian when the devicetree node is
> marked with big-endian.
>
> [...]

We can support up to XRX350/XRX500/PRX300 for the MIPS SoCs since we
still sell these products. However, we have no effort to support EOL
products such as VRX200, which also makes the driver quite complex
because of the glue logic (reset, clock and endianness). For MIPS
based platforms, we have endianness control in the device tree, such
as inbound_swap and outbound_swap.

For VRX200, we have another big concern: PCI and PCIe have coupled
endianness, which makes things very bad.

However, if you are interested in supporting VRX200, it is highly
appreciated.

>> +static int intel_pcie_bios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
>> +{
>> +
>> +	struct pcie_port *pp = dev->bus->sysdata;
>> +	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
>> +	struct intel_pcie_port *lpp = dev_get_drvdata(pci->dev);
>> +	struct device *pdev = lpp->pci->dev;
>> +	u32 irq_bit;
>> +	int irq;
>> +
>> +	if (pin == PCI_INTERRUPT_UNKNOWN || pin > PCI_NUM_INTX) {
>> +		dev_warn(pdev, "WARNING: dev %s: invalid interrupt pin %d\n",
>> +			 pci_name(dev), pin);
>> +		return -1;
>> +	}
>> +	irq = of_irq_parse_and_map_pci(dev, slot, pin);
>> +	if (!irq) {
>> +		dev_err(pdev, "trying to map irq for unknown slot:%d pin:%d\n",
>> +			slot, pin);
>> +		return -1;
>> +	}
>> +	/* Pin to irq offset bit position */
>> +	irq_bit = BIT(pin + PCIE_INTX_OFFSET);
>> +
>> +	/* Clear possible pending interrupts first */
>> +	pcie_app_wr(lpp, irq_bit, PCIE_IRNCR);
>> +
>> +	pcie_app_wr_mask(lpp, irq_bit, irq_bit, PCIE_IRNEN);
>> +	return irq;
>> +}
> my interpretation is that there's an interrupt controller embedded into
> the APP registers. The integrated interrupt controller takes 5
> interrupts and provides the legacy PCI_INTA/B/C/D interrupts as well as
> a WAKEUP IRQ. Each of it's own interrupts is tied to one of the parent
> interrupts.

For MIPS based SoCs, there is no interrupt controller for these APP
registers. They are directly connected to the centralized PIC (ICU for
VRX200/ARX300, GIC for XRX500/PRX300, IOAPIC for Lightning Mountain).
That is why we don't implement an interrupt controller here.

>
> my solution (in the experiment on the VRX200 SoC [1]) was to implement an
> interrupt controller and have it as a child devicetree node. then I used
> interrupt-map to route the interrupts to the PCIe interrupt controller.
> with that I didn't have to overwrite .map_irq.
>
> (note that this comment is only related to the PCI_INTx and WAKE
> interrupts but not the other interrupt configuration bits, because as
> far as I understand the other ones are only related to the controller)
>
>> +static void intel_pcie_bridge_class_code_setup(struct intel_pcie_port *lpp)
>> +{
>> +	pcie_rc_cfg_wr_mask(lpp, PCIE_MISC_CTRL_DBI_RO_WR_EN,
>> +			    PCIE_MISC_CTRL_DBI_RO_WR_EN, PCIE_MISC_CTRL);
>> +	pcie_rc_cfg_wr_mask(lpp, 0xffffff00, PCI_CLASS_BRIDGE_PCI << 16,
>> +			    PCIE_CCRID);
>> +	pcie_rc_cfg_wr_mask(lpp, PCIE_MISC_CTRL_DBI_RO_WR_EN, 0,
>> +			    PCIE_MISC_CTRL);
>> +}
> in my own experiments I didn't need this - have you confirmed that it's
> required? and if it is required: why is that?
> if others are curious as well then maybe add the explanation as comment
> to the driver
>
> [...]

This is needed. In the old driver, we fixed this with a fixup. The
original comment is as follows:

/*
 * The root complex has a hardwired class of PCI_CLASS_NETWORK_OTHER or
 * PCI_CLASS_BRIDGE_HOST; when it is operating as a root complex this
 * needs to be switched to PCI_CLASS_BRIDGE_PCI
 */

>> +static const char *pcie_link_gen_to_str(int gen)
>> +{
>> +	switch (gen) {
>> +	case PCIE_LINK_SPEED_GEN1:
>> +		return "2.5";
>> +	case PCIE_LINK_SPEED_GEN2:
>> +		return "5.0";
>> +	case PCIE_LINK_SPEED_GEN3:
>> +		return "8.0";
>> +	case PCIE_LINK_SPEED_GEN4:
>> +		return "16.0";
>> +	default:
>> +		return "???";
>> +	}
>> +}
> why duplicate PCIE_SPEED2STR from drivers/pci/pci.h?

Great! Even link_device_setup() can be reduced, since
pcie_get_speed_cap() implements similar functionality.
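
A rough sketch of the reuse (assuming the gen value read from
PCIE_LCTLSTS can be used to index pcie_link_speed[]; PCIE_SPEED2STR
lives in drivers/pci/pci.h):

  return sprintf(buf, "Port %2u Width x%u Speed %s\n", lpp->id,
                 width, PCIE_SPEED2STR(pcie_link_speed[gen]));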

>
>> +static int intel_pcie_ep_rst_init(struct intel_pcie_port *lpp)
>> +{
>> +	struct device *dev = lpp->pci->dev;
>> +	int ret = 0;
>> +
>> +	lpp->reset_gpio = devm_gpiod_get(dev, "reset", GPIOD_OUT_LOW);
>> +	if (IS_ERR(lpp->reset_gpio)) {
>> +		ret = PTR_ERR(lpp->reset_gpio);
>> +		if (ret != -EPROBE_DEFER)
>> +			dev_err(dev, "failed to request PCIe GPIO: %d\n", ret);
>> +		return ret;
>> +	}
>> +	/* Make initial reset last for 100ms */
>> +	msleep(100);
> why is there lpp->rst_interval when you hardcode 100ms here?

They serve different purposes. rst_interval is purely the width of the
asserted reset pulse. The 100ms here is to make sure the initial state
lasts at least 100ms before we assert the reset.
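
In other words, the reset pulse itself looks roughly like this
(sketch):

  gpiod_set_value_cansleep(lpp->reset_gpio, 1); /* assert PERST# */
  msleep(lpp->rst_interval);                    /* configurable pulse */
  gpiod_set_value_cansleep(lpp->reset_gpio, 0); /* deassert PERST# */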

>
> [...]
>> +static int intel_pcie_setup_irq(struct intel_pcie_port *lpp)
>> +{
>> +	struct device *dev = lpp->pci->dev;
>> +	struct platform_device *pdev;
>> +	char *irq_name;
>> +	int irq, ret;
>> +
>> +	pdev = to_platform_device(dev);
>> +	irq = platform_get_irq(pdev, 0);
>> +	if (irq < 0) {
>> +		dev_err(dev, "missing sys integrated irq resource\n");
>> +		return irq;
>> +	}
>> +
>> +	irq_name = devm_kasprintf(dev, GFP_KERNEL, "pcie_misc%d", lpp->id);
>> +	if (!irq_name) {
>> +		dev_err(dev, "failed to alloc irq name\n");
>> +		return -ENOMEM;
> you are only requesting one IRQ line for the whole driver. personally
> I would drop the custom irq_name and pass NULL to devm_request_irq
> because that will then fall back to auto-generating an IRQ name based
> on the devicetree node. I assume it's the same for ACPI but I haven't
> tried that yet.

Not sure I understand what you mean. As you can see from the code, we
have lpp->id, which means we have multiple instances of the Root
Complex (1, 2, 3, 4, 8), so we need this for identification.

In old products this irq was called "ir" (integrated interrupt or misc
interrupt).

>
>> +static void intel_pcie_disable_clks(struct intel_pcie_port *lpp)
>> +{
>> +	clk_disable_unprepare(lpp->core_clk);
>> +}
>> +
>> +static int intel_pcie_enable_clks(struct intel_pcie_port *lpp)
>> +{
>> +	int ret = clk_prepare_enable(lpp->core_clk);
>> +
>> +	if (ret)
>> +		dev_err(lpp->pci->dev, "Core clock enable failed: %d\n", ret);
>> +
>> +	return ret;
>> +}
> you have some functions (using these two as an example) which are only
> used once. they add some boilerplate and (in my opinion) make the code
> harder to read.

Yes. If we plan to support the old MIPS SoCs, we have a lot of clocks.
The code comes from the old driver. We can remove this wrapper for the
new SoC and add it back later.

>
>> +static int intel_pcie_get_resources(struct platform_device *pdev)
>> +{
>> +	struct intel_pcie_port *lpp;
>> +	struct device *dev;
>> +	int ret;
>> +
>> +	lpp = platform_get_drvdata(pdev);
>> +	dev = lpp->pci->dev;
>> +
>> +	lpp->core_clk = devm_clk_get(dev, NULL);
>> +	if (IS_ERR(lpp->core_clk)) {
>> +		ret = PTR_ERR(lpp->core_clk);
>> +		if (ret != -EPROBE_DEFER)
>> +			dev_err(dev, "failed to get clks: %d\n", ret);
>> +		return ret;
>> +	}
>> +
>> +	lpp->core_rst = devm_reset_control_get(dev, NULL);
>> +	if (IS_ERR(lpp->core_rst)) {
>> +		ret = PTR_ERR(lpp->core_rst);
>> +		if (ret != -EPROBE_DEFER)
>> +			dev_err(dev, "failed to get resets: %d\n", ret);
>> +		return ret;
>> +	}
>> +
>> +	ret = device_property_match_string(dev, "device_type", "pci");
>> +	if (ret) {
>> +		dev_err(dev, "failed to find pci device type: %d\n", ret);
>> +		return ret;
>> +	}
>> +
>> +	if (device_property_read_u32(dev, "intel,rst-interval",
>> +				     &lpp->rst_interval))
>> +		lpp->rst_interval = RST_INTRVL_DFT_MS;
>> +
>> +	if (device_property_read_u32(dev, "max-link-speed", &lpp->link_gen))
>> +		lpp->link_gen = 0; /* Fallback to auto */
>> +
>> +	lpp->app_base = devm_platform_ioremap_resource(pdev, 2);
> I suggest using platform_get_resource_byname(pdev, IORESOURCE_MEM,
> "app") because pcie-designware uses named resources for the "config"
> space already
>
> that said, devm_platform_ioremap_resource_byname would be a great
> addition in my opinion ;)
Agree.
>
>> +	if (IS_ERR(lpp->app_base))
>> +		return PTR_ERR(lpp->app_base);
>> +
>> +	ret = intel_pcie_ep_rst_init(lpp);
>> +	if (ret)
>> +		return ret;
>> +
>> +	lpp->phy = devm_phy_get(dev, "phy");
> I suggest to use "pcie" as phy-name, otherwise the binding looks odd:
>    phys = <&pcie0_phy>;
>    phy-names = "phy";
> versus:
>    phys = <&pcie0_phy>;
>    phy-names = "pcie";
Agree.
>> +static ssize_t
>> +pcie_link_status_show(struct device *dev, struct device_attribute *attr,
>> +		      char *buf)
>> +{
>> +	u32 reg, width, gen;
>> +	struct intel_pcie_port *lpp;
>> +
>> +	lpp = dev_get_drvdata(dev);
>> +
>> +	reg = pcie_rc_cfg_rd(lpp, PCIE_LCTLSTS);
>> +	width = FIELD_GET(PCIE_LCTLSTS_NEGOTIATED_LINK_WIDTH, reg);
>> +	gen = FIELD_GET(PCIE_LCTLSTS_LINK_SPEED, reg);
>> +	if (gen > lpp->max_speed)
>> +		return -EINVAL;
>> +
>> +	return sprintf(buf, "Port %2u Width x%u Speed %s GT/s\n", lpp->id,
>> +		       width, pcie_link_gen_to_str(gen));
>> +}
>> +static DEVICE_ATTR_RO(pcie_link_status);
> "lspci -vv | grep LnkSta" already shows the link speed and width.
> why do you need this?

In most embedded packages, lspci from busybox only shows the device ID
and vendor ID. They don't install the full lspci utilities.

>> +static ssize_t pcie_speed_store(struct device *dev,
>> +				struct device_attribute *attr,
>> +				const char *buf, size_t len)
>> +{
>> +	struct intel_pcie_port *lpp;
>> +	unsigned long val;
>> +	int ret;
>> +
>> +	lpp = dev_get_drvdata(dev);
>> +
>> +	ret = kstrtoul(buf, 10, &val);
>> +	if (ret)
>> +		return ret;
>> +
>> +	if (val > lpp->max_speed)
>> +		return -EINVAL;
>> +
>> +	lpp->link_gen = val;
>> +	intel_pcie_max_speed_setup(lpp);
>> +	intel_pcie_speed_change_disable(lpp);
>> +	intel_pcie_speed_change_enable(lpp);
>> +
>> +	return len;
>> +}
>> +static DEVICE_ATTR_WO(pcie_speed);
> this is already configurable via devicetree, why do you need this?
For the on-the-fly change, as I mentioned in the beginning.
>
>> +/*
>> + * Link width change on the fly is not always successful.
>> + * It also depends on the partner.
>> + */
>> +static ssize_t pcie_width_store(struct device *dev,
>> +				struct device_attribute *attr,
>> +				const char *buf, size_t len)
>> +{
>> +	struct intel_pcie_port *lpp;
>> +	unsigned long val;
>> +
>> +	lpp = dev_get_drvdata(dev);
>> +
>> +	if (kstrtoul(buf, 10, &val))
>> +		return -EINVAL;
>> +
>> +	if (val > lpp->max_width)
>> +		return -EINVAL;
>> +
>> +	lpp->lanes = val;
>> +	intel_pcie_max_link_width_setup(lpp);
>> +
>> +	return len;
>> +}
>> +static DEVICE_ATTR_WO(pcie_width);
> is it needed because of a bug somewhere? who do you expect to use this
> sysfs attribute and when (which condition) do you expect people to use
> this?
>
> [...]
For the on-the-fly change, as I mentioned in the beginning.
>> +static void __intel_pcie_remove(struct intel_pcie_port *lpp)
>> +{
>> +	pcie_rc_cfg_wr_mask(lpp, PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER,
>> +			    0, PCI_COMMAND);
> I expect logic like this to be part of the PCI subsystem in Linux.
> why is this needed?
>
> [...]

We use this in the bind/unbind case. In extreme cases, we use unbind
and bind to reset PCIe instead of rebooting.

>> +int intel_pcie_msi_init(struct pcie_port *pp)
>> +{
>> +	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
>> +
>> +	dev_dbg(pci->dev, "MSI is handled in x86 arch\n");
> I would rephrase this to "MSI is handled by a separate MSI interrupt
> controller"
> on the VRX200 SoC there's also a MSI interrupt controller and it seems
> that "x86" has this as well (even though it might be two different MSI
> IRQ IP blocks).
>
> [...]

Agree. On x86/64, MSI is handled by the x86 arch code. We don't need
to implement another MSI controller separately.

For MIPS based SoCs, we don't use the Synopsys MSI controller. The MSI
still connects to the central interrupt controller with MSI decoding
(we can call it an MSI controller).

>> +static int intel_pcie_probe(struct platform_device *pdev)
>> +{
>> +	struct device *dev = &pdev->dev;
>> +	const struct intel_pcie_soc *data;
>> +	struct intel_pcie_port *lpp;
>> +	struct pcie_port *pp;
>> +	struct dw_pcie *pci;
>> +	int ret;
>> +
>> +	lpp = devm_kzalloc(dev, sizeof(*lpp), GFP_KERNEL);
>> +	if (!lpp)
>> +		return -ENOMEM;
>> +
>> +	pci = devm_kzalloc(dev, sizeof(*pci), GFP_KERNEL);
>> +	if (!pci)
>> +		return -ENOMEM;
> other drivers have a "struct dw_pcie	pci" struct member (I assume
> that it's to prevent memory fragmentation). why not do the same here
> and embed it into struct intel_pcie_port?
Dilip should explain this
>
>> +	platform_set_drvdata(pdev, lpp);
>> +	lpp->pci = pci;
>> +	pci->dev = dev;
>> +	pp = &pci->pp;
>> +
>> +	ret = device_property_read_u32(dev, "linux,pci-domain", &lpp->id);
>> +	if (ret) {
>> +		dev_err(dev, "failed to get domain id, errno %d\n", ret);
>> +		return ret;
>> +	}
>> +
>> +	pci->dbi_base = devm_platform_ioremap_resource(pdev, 0);
>> +	if (IS_ERR(pci->dbi_base))
>> +		return PTR_ERR(pci->dbi_base);
>> +
> as stated above I would use the _byname variant
Agree.
>
>
> Martin
>
>
> [0] https://github.com/xdarklight/linux/blob/lantiq-pcie-driver-next-20190727/drivers/pci/controller/dwc/pcie-lantiq.c
> [1] https://github.com/xdarklight/linux/blob/lantiq-pcie-driver-next-20190727/Documentation/devicetree/bindings/pci/lantiq%2Cdw-pcie.yaml

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v2 3/3] dwc: PCI: intel: Intel PCIe RC controller driver
  2019-08-26  3:30     ` Chuan Hua, Lei
@ 2019-08-26  6:48       ` Dilip Kota
       [not found]       ` <f1cb5ba9-b57a-971a-5a2f-1f13e0cc9507@linux.intel.com>
  2019-08-26 21:15       ` Martin Blumenstingl
  2 siblings, 0 replies; 23+ messages in thread
From: Dilip Kota @ 2019-08-26  6:48 UTC (permalink / raw)
  To: Chuan Hua, Lei, Martin Blumenstingl
  Cc: andriy.shevchenko, cheol.yong.kim, devicetree, gustavo.pimentel,
	hch, jingoohan1, linux-kernel, linux-pci, qi-ming.wu

[Got a delivery failure mail, so re-sending the mail]

Hi Martin,

Thanks for review comments, please find my response inline.

On 8/26/2019 11:30 AM, Chuan Hua, Lei wrote:
> Hi Martin,
>
> Thanks for your valuable comments. I reply to some of them below.
>
> Regards,
>
> Chuanhua
>
> [...]
>> other drivers have a "struct dw_pcie    pci" struct member (I assume
>> that it's to prevent memory fragmentation). why not do the same here
>> and embed it into struct intel_pcie_port?
> Dilip should explain this
Almost all the drivers follow the same approach, and I don't see any
issue with it. Please give me a more detailed description if you see
an issue here.

struct qcom_pcie {
     struct dw_pcie *pci;
Ref: https://elixir.bootlin.com/linux/v5.3-rc6/source/drivers/pci/controller/dwc/pcie-qcom.c

struct armada8k_pcie {
     struct dw_pcie *pci;
Ref: https://elixir.bootlin.com/linux/v5.3-rc6/source/drivers/pci/controller/dwc/pcie-armada8k.c

struct artpec6_pcie {
     struct dw_pcie *pci;
Ref: https://elixir.bootlin.com/linux/v5.3-rc6/source/drivers/pci/controller/dwc/pcie-artpec6.c

struct kirin_pcie {
     struct dw_pcie *pci;
Ref: https://elixir.bootlin.com/linux/v5.3-rc6/source/drivers/pci/controller/dwc/pcie-kirin.c

struct spear13xx_pcie {
     struct dw_pcie *pci;
Ref: https://elixir.bootlin.com/linux/v5.3-rc6/source/drivers/pci/controller/dwc/pcie-spear13xx.c

Regards,

Dilip
> [...]

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v2 3/3] dwc: PCI: intel: Intel PCIe RC controller driver
       [not found]       ` <f1cb5ba9-b57a-971a-5a2f-1f13e0cc9507@linux.intel.com>
@ 2019-08-26 20:14         ` Martin Blumenstingl
  2019-08-27  9:14           ` Dilip Kota
  0 siblings, 1 reply; 23+ messages in thread
From: Martin Blumenstingl @ 2019-08-26 20:14 UTC (permalink / raw)
  To: Dilip Kota
  Cc: Chuan Hua, Lei, andriy.shevchenko, cheol.yong.kim, devicetree,
	gustavo.pimentel, hch, jingoohan1, linux-kernel, linux-pci,
	qi-ming.wu

Hi Dilip,

On Mon, Aug 26, 2019 at 8:42 AM Dilip Kota <eswara.kota@linux.intel.com> wrote:
[...]
> The intel_pcie_port structure has "struct dw_pcie" as mentioned below:
>
> struct intel_pcie_port {
>         struct dw_pcie          *pci;
>         unsigned int            id; /* Physical RC Index */
>         void __iomem            *app_base;
>         struct gpio_desc        *reset_gpio;
> [...]
> };
>
> Almost all the drivers follow the same approach, and I don't see any
> issue with it. Please give me a more detailed description if you see
> an issue here.
>
> struct qcom_pcie {
> struct dw_pcie *pci;
> Ref: https://elixir.bootlin.com/linux/v5.3-rc6/source/drivers/pci/controller/dwc/pcie-qcom.c
>
> struct armada8k_pcie {
> struct dw_pcie *pci;
> Ref: https://elixir.bootlin.com/linux/v5.3-rc6/source/drivers/pci/controller/dwc/pcie-armada8k.c
>
> struct artpec6_pcie {
> struct dw_pcie *pci;
> Ref: https://elixir.bootlin.com/linux/v5.3-rc6/source/drivers/pci/controller/dwc/pcie-artpec6.c
>
> struct kirin_pcie {
> struct dw_pcie *pci;
> Ref: https://elixir.bootlin.com/linux/v5.3-rc6/source/drivers/pci/controller/dwc/pcie-kirin.c
>
> struct spear13xx_pcie {
> struct dw_pcie *pci;
> Ref: https://elixir.bootlin.com/linux/v5.3-rc6/source/drivers/pci/controller/dwc/pcie-spear13xx.c
thank you for this detailed list.
it seems that I picked the minority of drivers as "reference" where
it's implemented differently:

first example: pci-meson
  struct meson_pcie {
    struct dw_pcie pci;
    ...
  };

second example: pcie-tegra194 (only in -next, will be part of v5.4)
  struct tegra_pcie_dw {
    ...
    struct dw_pcie pci;
    ...
  };

so some drivers store a pointer to the dw_pcie struct vs. embedding
the dw_pcie struct directly.
as far as I know the result will be equal, except that you don't have
to use a second devm_kzalloc for struct dw_pcie (and thus reduce
memory fragmentation).
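
a minimal sketch of the embedded variant; the wrapper struct can then
be recovered with container_of():

  struct intel_pcie_port {
    struct dw_pcie pci;
    ...
  };

  static inline struct intel_pcie_port *to_intel_pcie(struct dw_pcie *pci)
  {
    return container_of(pci, struct intel_pcie_port, pci);
  }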


Martin

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v2 3/3] dwc: PCI: intel: Intel PCIe RC controller driver
  2019-08-26  3:30     ` Chuan Hua, Lei
  2019-08-26  6:48       ` Dilip Kota
       [not found]       ` <f1cb5ba9-b57a-971a-5a2f-1f13e0cc9507@linux.intel.com>
@ 2019-08-26 21:15       ` Martin Blumenstingl
  2019-08-27  3:09         ` Chuan Hua, Lei
  2019-08-27 14:28         ` Andy Shevchenko
  2 siblings, 2 replies; 23+ messages in thread
From: Martin Blumenstingl @ 2019-08-26 21:15 UTC (permalink / raw)
  To: Chuan Hua, Lei
  Cc: eswara.kota, andriy.shevchenko, cheol.yong.kim, devicetree,
	gustavo.pimentel, hch, jingoohan1, linux-kernel, linux-pci,
	qi-ming.wu

Hello,

On Mon, Aug 26, 2019 at 5:31 AM Chuan Hua, Lei
<chuanhua.lei@linux.intel.com> wrote:
>
> Hi Martin,
>
> Thanks for your valuable comments. I reply to some of them below.
you're welcome

[...]
> >> +config PCIE_INTEL_AXI
> >> +        bool "Intel AHB/AXI PCIe host controller support"
> > I believe that this is mostly the same IP block as it's used in Lantiq
> > (xDSL) VRX200 SoCs (with MIPS cores) which was introduced in 2010
> > (before Intel acquired Lantiq).
> > This is why I would have personally called the driver PCIE_LANTIQ
>
> VRX200 SoC (internally called VR9) was the first PCIe SoC product
> which used the Synopsys controller v3.30a. It only supports PCIe
> Gen 1.1/1.0. The PHY is an internal PHY from Infineon.
thank you for these details
I wasn't aware that the PCIe PHY on these SoCs was developed by
Infineon, nor that the DWC version is documented anywhere

[...]
> >> +#define PCIE_CCRID                          0x8
> >> +
> >> +#define PCIE_LCAP                           0x7C
> >> +#define PCIE_LCAP_MAX_LINK_SPEED            GENMASK(3, 0)
> >> +#define PCIE_LCAP_MAX_LENGTH_WIDTH          GENMASK(9, 4)
> >> +
> >> +/* Link Control and Status Register */
> >> +#define PCIE_LCTLSTS                                0x80
> >> +#define PCIE_LCTLSTS_ASPM_ENABLE            GENMASK(1, 0)
> >> +#define PCIE_LCTLSTS_RCB128                 BIT(3)
> >> +#define PCIE_LCTLSTS_LINK_DISABLE           BIT(4)
> >> +#define PCIE_LCTLSTS_COM_CLK_CFG            BIT(6)
> >> +#define PCIE_LCTLSTS_HW_AW_DIS                      BIT(9)
> >> +#define PCIE_LCTLSTS_LINK_SPEED                     GENMASK(19, 16)
> >> +#define PCIE_LCTLSTS_NEGOTIATED_LINK_WIDTH  GENMASK(25, 20)
> >> +#define PCIE_LCTLSTS_SLOT_CLK_CFG           BIT(28)
> >> +
> >> +#define PCIE_LCTLSTS2                               0xA0
> >> +#define PCIE_LCTLSTS2_TGT_LINK_SPEED                GENMASK(3, 0)
> >> +#define PCIE_LCTLSTS2_TGT_LINK_SPEED_25GT   0x1
> >> +#define PCIE_LCTLSTS2_TGT_LINK_SPEED_5GT    0x2
> >> +#define PCIE_LCTLSTS2_TGT_LINK_SPEED_8GT    0x3
> >> +#define PCIE_LCTLSTS2_TGT_LINK_SPEED_16GT   0x4
> >> +#define PCIE_LCTLSTS2_HW_AUTO_DIS           BIT(5)
> >> +
> >> +/* Ack Frequency Register */
> >> +#define PCIE_AFR                            0x70C
> >> +#define PCIE_AFR_FTS_NUM                    GENMASK(15, 8)
> >> +#define PCIE_AFR_COM_FTS_NUM                        GENMASK(23, 16)
> >> +#define PCIE_AFR_GEN12_FTS_NUM_DFT          (SZ_128 - 1)
> >> +#define PCIE_AFR_GEN3_FTS_NUM_DFT           180
> >> +#define PCIE_AFR_GEN4_FTS_NUM_DFT           196
> >> +
> >> +#define PCIE_PLCR_DLL_LINK_EN                       BIT(5)
> >> +#define PCIE_PORT_LOGIC_FTS                 GENMASK(7, 0)
> >> +#define PCIE_PORT_LOGIC_DFT_FTS_NUM         (SZ_128 - 1)
> >> +
> >> +#define PCIE_MISC_CTRL                              0x8BC
> >> +#define PCIE_MISC_CTRL_DBI_RO_WR_EN         BIT(0)
> >> +
> >> +#define PCIE_MULTI_LANE_CTRL                        0x8C0
> >> +#define PCIE_UPCONFIG_SUPPORT                       BIT(7)
> >> +#define PCIE_DIRECT_LINK_WIDTH_CHANGE               BIT(6)
> >> +#define PCIE_TARGET_LINK_WIDTH                      GENMASK(5, 0)
> >> +
> >> +#define PCIE_IOP_CTRL                               0x8C4
> >> +#define PCIE_IOP_RX_STANDBY_CTRL            GENMASK(6, 0)
> no need for IOP
with "are you sure that you need any of the registers above?" I really
meant all registers above (including, but not limited to IOP)

[...]
> As I mentioned, VRX200 was a very old PCIe Gen 1.1 product. In our
> latest SoC, Lightning Mountain, we are using Synopsys controller
> 5.20/5.50a. We support Gen2 (XRX350/550), Gen3 (PRX300) and Gen4 (x86
> based SoC). We also support dual lane and single lane.
>
> Some of the above registers are needed to control FTS, link width and
> link speed.
only now I noticed that I didn't explain why I was asking whether all
these registers are needed
my understanding of the DWC PCIe controller driver "library" is that:
- all functionality which is provided by the DesignWare PCIe core
should be added to drivers/pci/controller/dwc/pcie-designware*
- functionality which is built on top/around the DWC PCIe core should
be added to <vendor specific driver>

the link width and link speed settings (I don't know about "FTS")
don't seem Intel/Lantiq controller specific to me
so the register setup for these bits should be moved to
drivers/pci/controller/dwc/pcie-designware*

> > this also makes me wonder if various functions below like
> > intel_pcie_link_setup() and intel_pcie_max_speed_setup() (and probably
> > more) are really needed or if it's just cargo cult / copy paste from
> > an out-of-tree driver).
>
> intel_pcie_link_setup is to record the maximum speed and link width. We need
> these to change speed and link width on the fly, which is not supported by
> the common dwc driver source.
> There are two major purposes.
> 1. For cable applications, they have battery support mode. In this case, it has to
> switch from x2 and gen4 to x1 and gen1 on the fly
> 2. Some customers have high EMI issues. They can try to switch to lower speed and
> lower link width to check on the fly. Otherwise, they have to change the device tree
> and reboot the system.
based on your description I imagine that this may be a "common
problem" (for example: Nvidia Tegra SoCs are also used in portable -
as in battery powered - applications)
I don't know enough about the Linux PCIe subsystem to comment on this,
so I'm hoping that one of the PCIe subsystem maintainers can comment
whether this is logic that should be implemented on a per-controller
basis or whether there should be some generic implementation

[...]
> >> +static inline u32 pcie_app_rd(struct intel_pcie_port *lpp, u32 ofs)
> >> +{
> >> +    return readl(lpp->app_base + ofs);
> >> +}
> >> +
> >> +static inline void pcie_app_wr(struct intel_pcie_port *lpp, u32 val, u32 ofs)
> >> +{
> >> +    writel(val, lpp->app_base + ofs);
> >> +}
> >> +
> >> +static void pcie_app_wr_mask(struct intel_pcie_port *lpp,
> >> +                         u32 mask, u32 val, u32 ofs)
> >> +{
> >> +    pcie_update_bits(lpp->app_base, mask, val, ofs);
> >> +}
> > do you have plans to support the MIPS SoCs (VRX200, ARX300, XRX350,
> > XRX550)?
> > These will need register writes in big endian. in my own experiment [0]
> > I simply used the regmap interface which will default to little endian
> > register access but switch to big endian when the devicetree node is
> > marked with big-endian.
> >
> > [...]
>
> We can support up to XRX350/XRX500/PRX300 for MIPS SoC since we still
> sell these products.
OK, I understand this.
switching to regmap will give you two benefits:
- big endian registers writes (without additional code) on the MIPS SoCs
- you can drop the pcie_app_* helper functions and use
regmap_{read,write,update_bits} instead
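
as a rough idea, this is all it takes (untested sketch; the
intel_pcie_app_regmap_config values and the app_regmap field name are
assumptions on my side):

  static const struct regmap_config intel_pcie_app_regmap_config = {
          .reg_bits = 32,
          .reg_stride = 4,
          .val_bits = 32,
          /*
           * no extra code needed for the MIPS SoCs: regmap picks up
           * the "big-endian"/"little-endian" devicetree properties
           */
  };

  lpp->app_regmap = devm_regmap_init_mmio(dev, lpp->app_base,
                                          &intel_pcie_app_regmap_config);
  if (IS_ERR(lpp->app_regmap))
          return PTR_ERR(lpp->app_regmap);

  /* instead of pcie_app_wr_mask(lpp, mask, val, ofs): */
  regmap_update_bits(lpp->app_regmap, ofs, mask, val);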

> [...] However, we have no plans to support EOL products such as VRX200,
> which also make the driver quite complex because of the glue logic
> (reset, clock and endianness). For MIPS based platforms, we have
> endianness control in the device tree, such as inbound_swap and
> outbound_swap.
>
> For VRX200, we have another big concern: PCI and PCIe have coupled
> endianness, which makes things very bad.
>
> However, if you are interested in supporting VRX200, it is highly
> appreciated.
thank you for the endianness control description and your concerns
about the implementation on VRX200

with my experiment I didn't have any problems with reset lines, clocks
or endianness (at least I believe so)
let's focus on the newer SoCs first, support for more SoCs can be a second step

> >> +static int intel_pcie_bios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
> >> +{
> >> +
> >> +    struct pcie_port *pp = dev->bus->sysdata;
> >> +    struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
> >> +    struct intel_pcie_port *lpp = dev_get_drvdata(pci->dev);
> >> +    struct device *pdev = lpp->pci->dev;
> >> +    u32 irq_bit;
> >> +    int irq;
> >> +
> >> +    if (pin == PCI_INTERRUPT_UNKNOWN || pin > PCI_NUM_INTX) {
> >> +            dev_warn(pdev, "WARNING: dev %s: invalid interrupt pin %d\n",
> >> +                     pci_name(dev), pin);
> >> +            return -1;
> >> +    }
> >> +    irq = of_irq_parse_and_map_pci(dev, slot, pin);
> >> +    if (!irq) {
> >> +            dev_err(pdev, "trying to map irq for unknown slot:%d pin:%d\n",
> >> +                    slot, pin);
> >> +            return -1;
> >> +    }
> >> +    /* Pin to irq offset bit position */
> >> +    irq_bit = BIT(pin + PCIE_INTX_OFFSET);
> >> +
> >> +    /* Clear possible pending interrupts first */
> >> +    pcie_app_wr(lpp, irq_bit, PCIE_IRNCR);
> >> +
> >> +    pcie_app_wr_mask(lpp, irq_bit, irq_bit, PCIE_IRNEN);
> >> +    return irq;
> >> +}
> > my interpretation is that there's an interrupt controller embedded into
> > the APP registers. The integrated interrupt controller takes 5
> > interrupts and provides the legacy PCI_INTA/B/C/D interrupts as well as
> > a WAKEUP IRQ. Each of its own interrupts is tied to one of the parent
> > interrupts.
>
> For MIPS based SoCs, there is no interrupt controller for such APP registers.
let me try to describe the IRNCR register with my own words:

a) it contains various interrupts for the controller itself (for
example: HOTPLUG, RX fatal error, RX non fatal error, ...)
these interrupts arrive at the controllers interrupt line (requested
below using devm_request_irq).
all of these interrupts are enabled when initializing the controller
as they are valid regardless of which PCI interrupt type (MSI, legacy
A/B/C/D) is used

b) it contains bits to enable/disable the legacy PCI INTA/B/C/D interrupts
these interrupts arrive at a dedicated interrupt line each.
each individual interrupt has an enable bit in the IRNCR register and
should only be enabled when it's actually needed

c) it contains a PCI WAKE interrupt
it arrives at a dedicated interrupt line
I don't know when it should be enabled or not (I don't even know if
this interrupt is important at all)

let's focus on case a) and b) because I don't know if case c) is
relevant at all:
case a) is implemented in the IRQ setup, this matches my expectation

case b) is implemented in the map_irq callback. however, that only
covers "enabling" the interrupt line. I cannot see the code to disable
the interrupt line again.
now I am wondering:
- if we don't have to disable the interrupt line (once it is enabled),
why can't we enable all of these interrupts at initialization time
(instead of doing it on-demand)?
- if the interrupts do have to be disabled again (that is what I
assumed so far) then where is this supposed to happen? (my solution
for this was to implement a simple interrupt controller within the
PCIe driver which only supports enable/disable. disclaimer: I didn't
ask the PCI or interrupt maintainers for feedback on this yet)
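
for reference, the enable/disable-only interrupt controller from my
experiment boils down to an irq_chip like this (untested sketch here,
reusing the pcie_app_* helpers and PCIE_INTX_OFFSET from this patch):

  static void intel_pcie_intx_mask(struct irq_data *d)
  {
          struct intel_pcie_port *lpp = irq_data_get_irq_chip_data(d);

          pcie_app_wr_mask(lpp, BIT(d->hwirq + PCIE_INTX_OFFSET), 0,
                           PCIE_IRNEN);
  }

  static void intel_pcie_intx_unmask(struct irq_data *d)
  {
          struct intel_pcie_port *lpp = irq_data_get_irq_chip_data(d);
          u32 irq_bit = BIT(d->hwirq + PCIE_INTX_OFFSET);

          /* clear a possibly pending interrupt before enabling it */
          pcie_app_wr(lpp, irq_bit, PCIE_IRNCR);
          pcie_app_wr_mask(lpp, irq_bit, irq_bit, PCIE_IRNEN);
  }

  static struct irq_chip intel_pcie_intx_chip = {
          .name       = "PCIe INTx",
          .irq_mask   = intel_pcie_intx_mask,
          .irq_unmask = intel_pcie_intx_unmask,
  };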

[...]
> >> +static void intel_pcie_bridge_class_code_setup(struct intel_pcie_port *lpp)
> >> +{
> >> +    pcie_rc_cfg_wr_mask(lpp, PCIE_MISC_CTRL_DBI_RO_WR_EN,
> >> +                        PCIE_MISC_CTRL_DBI_RO_WR_EN, PCIE_MISC_CTRL);
> >> +    pcie_rc_cfg_wr_mask(lpp, 0xffffff00, PCI_CLASS_BRIDGE_PCI << 16,
> >> +                        PCIE_CCRID);
> >> +    pcie_rc_cfg_wr_mask(lpp, PCIE_MISC_CTRL_DBI_RO_WR_EN, 0,
> >> +                        PCIE_MISC_CTRL);
> >> +}
> > in my own experiments I didn't need this - have you confirmed that it's
> > required? and if it is required: why is that?
> > if others are curious as well then maybe add the explanation as comment
> > to the driver
> >
> > [...]
>
> This is needed. In the old driver, we fixed this by fixup. The original
> comment as follows,
>
> /*
>   * The root complex has a hardwired class of PCI_CLASS_NETWORK_OTHER or
>   * PCI_CLASS_BRIDGE_HOST, when it is operating as a root complex this
>   * needs to be switched to * PCI_CLASS_BRIDGE_PCI
>   */
that would be a good comment to add if you really need it
can you please look at dw_pcie_setup_rc (from pcie-designware-host.c), it does:
  /* Enable write permission for the DBI read-only register */
  dw_pcie_dbi_ro_wr_en(pci);
  /* Program correct class for RC */
  dw_pcie_wr_own_conf(pp, PCI_CLASS_DEVICE, 2, PCI_CLASS_BRIDGE_PCI);
  /* Better disable write permission right after the update */
  dw_pcie_dbi_ro_wr_dis(pci);

so my understanding is that there is a functional requirement to set
the class to PCI_CLASS_BRIDGE_PCI
however, that requirement is already covered by pcie-designware-host.c

> >> +static const char *pcie_link_gen_to_str(int gen)
> >> +{
> >> +    switch (gen) {
> >> +    case PCIE_LINK_SPEED_GEN1:
> >> +            return "2.5";
> >> +    case PCIE_LINK_SPEED_GEN2:
> >> +            return "5.0";
> >> +    case PCIE_LINK_SPEED_GEN3:
> >> +            return "8.0";
> >> +    case PCIE_LINK_SPEED_GEN4:
> >> +            return "16.0";
> >> +    default:
> >> +            return "???";
> >> +    }
> >> +}
> > why duplicate PCIE_SPEED2STR from drivers/pci/pci.h?
>
> Great! even link_device_setup can be reduced since pcie_get_speed_cap is
> implementing similar stuff.
removing code is always great, I'm glad that this helped!

> >
> >> +static int intel_pcie_ep_rst_init(struct intel_pcie_port *lpp)
> >> +{
> >> +    struct device *dev = lpp->pci->dev;
> >> +    int ret = 0;
> >> +
> >> +    lpp->reset_gpio = devm_gpiod_get(dev, "reset", GPIOD_OUT_LOW);
> >> +    if (IS_ERR(lpp->reset_gpio)) {
> >> +            ret = PTR_ERR(lpp->reset_gpio);
> >> +            if (ret != -EPROBE_DEFER)
> >> +                    dev_err(dev, "failed to request PCIe GPIO: %d\n", ret);
> >> +            return ret;
> >> +    }
> >> +    /* Make initial reset last for 100ms */
> >> +    msleep(100);
> > why is there lpp->rst_interval when you hardcode 100ms here?
>
> They serve different purposes. rst_interval is purely for the asserted
> reset pulse.
>
> Here 100ms is to make sure the initial state is kept for at least 100ms;
> then we can reset.
my interpretation is that it totally depends on the board design or
the bootloader setup.

on a board where the bootloader initializes the GPIO to logical "0"
the devm_gpiod_get() call will not change the GPIO output.
in this case a 100ms delay may be OK (based on your description)

however, if the GPIO was a logical "1" (for example if the bootloader
set it to that value - and considering the GPIOD_OUT_LOW flag) then it
will be set to "0" with the devm_gpiod_get() call above.
now there is a transition from "deasserted" to "asserted" which does
not honor lpp->rst_interval

I'm not sure if this is a problem or not - all I know is that I don't
fully understand the problem yet
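
just to illustrate the direction I am thinking of (untested sketch):
requesting the GPIO with GPIOD_ASIS and then doing one well-defined
reset cycle would make every transition honor rst_interval:

  lpp->reset_gpio = devm_gpiod_get(dev, "reset", GPIOD_ASIS);
  ...
  gpiod_set_value_cansleep(lpp->reset_gpio, 1);  /* assert PERST# */
  msleep(lpp->rst_interval);                     /* reset pulse width */
  gpiod_set_value_cansleep(lpp->reset_gpio, 0);  /* deassert PERST# */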

> >
> > [...]
> >> +static int intel_pcie_setup_irq(struct intel_pcie_port *lpp)
> >> +{
> >> +    struct device *dev = lpp->pci->dev;
> >> +    struct platform_device *pdev;
> >> +    char *irq_name;
> >> +    int irq, ret;
> >> +
> >> +    pdev = to_platform_device(dev);
> >> +    irq = platform_get_irq(pdev, 0);
> >> +    if (irq < 0) {
> >> +            dev_err(dev, "missing sys integrated irq resource\n");
> >> +            return irq;
> >> +    }
> >> +
> >> +    irq_name = devm_kasprintf(dev, GFP_KERNEL, "pcie_misc%d", lpp->id);
> >> +    if (!irq_name) {
> >> +            dev_err(dev, "failed to alloc irq name\n");
> >> +            return -ENOMEM;
> > you are only requesting one IRQ line for the whole driver. personally
> > I would drop the custom irq_name and pass NULL to devm_request_irq
> > because that will then fall-back to auto-generating an IRQ name based
> > on the devicetree node. I assume it's the same for ACPI but I haven't
> > tried that yet.
>
> Not sure I understand what you mean.  As you know from the code, we have
> lpp->id which means
>
> we have multiple instances of Root Complex(1,2,3,4,8), so we need this
> for identification.
sorry, I was wrong with my original statement, the name cannot be NULL

I checked the other drivers (meson-gx-mmc and meson-saradc) I had in
mind and they use dev_name(&pdev->dev);
that will give a unique interrupt name (derived from the devicetree)
in /proc/interrupts, for example: c1108680.adc, d0070000.mmc,
d0072000.mmc, ...
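
so the devm_kasprintf can be dropped, i.e. (untested, assuming the ISR
from this patch is named intel_pcie_core_irq_handler):

  ret = devm_request_irq(dev, irq, intel_pcie_core_irq_handler, 0,
                         dev_name(dev), lpp);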

[...]
> >> +static void intel_pcie_disable_clks(struct intel_pcie_port *lpp)
> >> +{
> >> +    clk_disable_unprepare(lpp->core_clk);
> >> +}
> >> +
> >> +static int intel_pcie_enable_clks(struct intel_pcie_port *lpp)
> >> +{
> >> +    int ret = clk_prepare_enable(lpp->core_clk);
> >> +
> >> +    if (ret)
> >> +            dev_err(lpp->pci->dev, "Core clock enable failed: %d\n", ret);
> >> +
> >> +    return ret;
> >> +}
> > you have some functions (using these two as an example) which are only
> > used once. they add some boilerplate and (in my opinion) make the code
> > harder to read.
>
> Yes. If we plan to support old MIPS SoC, we have a lot of clocks. The
> code is from old code. We can remove this wrapper for new SoC. Later we
> can add them back.
if multiple clocks are needed then I suggest using the bulk clock
operations (clk_bulk_data)
so I still think these functions should be dropped
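
a minimal clk_bulk sketch (untested; the clock names are made-up
placeholders for whatever the old SoCs need):

  struct clk_bulk_data clks[] = {
          { .id = "core" },
          { .id = "app" },
  };

  ret = devm_clk_bulk_get(dev, ARRAY_SIZE(clks), clks);
  if (ret)
          return ret;

  ret = clk_bulk_prepare_enable(ARRAY_SIZE(clks), clks);
  if (ret)
          return ret;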

[...]
> >> +static ssize_t
> >> +pcie_link_status_show(struct device *dev, struct device_attribute *attr,
> >> +                  char *buf)
> >> +{
> >> +    u32 reg, width, gen;
> >> +    struct intel_pcie_port *lpp;
> >> +
> >> +    lpp = dev_get_drvdata(dev);
> >> +
> >> +    reg = pcie_rc_cfg_rd(lpp, PCIE_LCTLSTS);
> >> +    width = FIELD_GET(PCIE_LCTLSTS_NEGOTIATED_LINK_WIDTH, reg);
> >> +    gen = FIELD_GET(PCIE_LCTLSTS_LINK_SPEED, reg);
> >> +    if (gen > lpp->max_speed)
> >> +            return -EINVAL;
> >> +
> >> +    return sprintf(buf, "Port %2u Width x%u Speed %s GT/s\n", lpp->id,
> >> +                   width, pcie_link_gen_to_str(gen));
> >> +}
> >> +static DEVICE_ATTR_RO(pcie_link_status);
> > "lspci -vv | grep LnkSta" already shows the link speed and width.
> > why do you need this?
>
> In most embedded packages, lspci from busybox only shows the device id and
> vendor id.
>
> They don't install the lspci utilities.
I think the PCI maintainers should comment on this as I'm not sure
whether this is something that should be implemented driver-specific
(why not make it available for all drivers?)
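
fwiw, the negotiated speed and width can also be read from the standard
config space without the lspci utilities, roughly (untested sketch):

  u16 lnksta;

  pcie_capability_read_word(pci_dev, PCI_EXP_LNKSTA, &lnksta);
  speed = FIELD_GET(PCI_EXP_LNKSTA_CLS, lnksta);
  width = FIELD_GET(PCI_EXP_LNKSTA_NLW, lnksta);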

[...]
> >> +static void __intel_pcie_remove(struct intel_pcie_port *lpp)
> >> +{
> >> +    pcie_rc_cfg_wr_mask(lpp, PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER,
> >> +                        0, PCI_COMMAND);
> > I expect logic like this to be part of the PCI subsystem in Linux.
> > why is this needed?
> >
> > [...]
>
> In the bind/unbind case we use this. For extreme cases, we use unbind and
> bind to reset PCI instead of rebooting.
OK, but this does not seem Intel/Lantiq specific at all
why isn't this managed by either pcie-designware-host.c or the generic
PCI/PCIe subsystem in Linux?


Martin

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v2 3/3] dwc: PCI: intel: Intel PCIe RC controller driver
  2019-08-26 21:15       ` Martin Blumenstingl
@ 2019-08-27  3:09         ` Chuan Hua, Lei
  2019-08-27  8:47           ` Dilip Kota
  2019-08-27 20:38           ` Martin Blumenstingl
  2019-08-27 14:28         ` Andy Shevchenko
  1 sibling, 2 replies; 23+ messages in thread
From: Chuan Hua, Lei @ 2019-08-27  3:09 UTC (permalink / raw)
  To: Martin Blumenstingl
  Cc: eswara.kota, andriy.shevchenko, cheol.yong.kim, devicetree,
	gustavo.pimentel, hch, jingoohan1, linux-kernel, linux-pci,
	qi-ming.wu

Hi Martin,

Thanks for your feedback. Please check the comments below.

On 8/27/2019 5:15 AM, Martin Blumenstingl wrote:
> Hello,
>
> On Mon, Aug 26, 2019 at 5:31 AM Chuan Hua, Lei
> <chuanhua.lei@linux.intel.com> wrote:
>> Hi Martin,
>>
>> Thanks for your valuable comments. I reply some of them as below.
> you're welcome
>
> [...]
>>>> +config PCIE_INTEL_AXI
>>>> +        bool "Intel AHB/AXI PCIe host controller support"
>>> I believe that this is mostly the same IP block as it's used in Lantiq
>>> (xDSL) VRX200 SoCs (with MIPS cores) which was introduced in 2010
>>> (before Intel acquired Lantiq).
>>> This is why I would have personally called the driver PCIE_LANTIQ
>> VRX200 SoC (internally called VR9) was the first PCIe SoC product which
>> was using the Synopsys controller v3.30a. It only supports PCIe
>> Gen1.1/1.0. The PHY is an internal PHY from Infineon.
> thank you for these details
> I wasn't aware that the PCIe PHY on these SoCs was developed by
> Infineon nor is the DWC version documented anywhere

The VRX200/ARX300 PHY is an internal one. There is a lot of hardcoded
setup which came from the hardware people. From XRX500 on, we switched to
the Synopsys PHY. However, later the ComboPHY comes into the picture.
Even though we have the same controller with different versions, we will
most likely have three different PHY drivers.

> [...]
>>>> +#define PCIE_CCRID                          0x8
>>>> +
>>>> +#define PCIE_LCAP                           0x7C
>>>> +#define PCIE_LCAP_MAX_LINK_SPEED            GENMASK(3, 0)
>>>> +#define PCIE_LCAP_MAX_LENGTH_WIDTH          GENMASK(9, 4)
>>>> +
>>>> +/* Link Control and Status Register */
>>>> +#define PCIE_LCTLSTS                                0x80
>>>> +#define PCIE_LCTLSTS_ASPM_ENABLE            GENMASK(1, 0)
>>>> +#define PCIE_LCTLSTS_RCB128                 BIT(3)
>>>> +#define PCIE_LCTLSTS_LINK_DISABLE           BIT(4)
>>>> +#define PCIE_LCTLSTS_COM_CLK_CFG            BIT(6)
>>>> +#define PCIE_LCTLSTS_HW_AW_DIS                      BIT(9)
>>>> +#define PCIE_LCTLSTS_LINK_SPEED                     GENMASK(19, 16)
>>>> +#define PCIE_LCTLSTS_NEGOTIATED_LINK_WIDTH  GENMASK(25, 20)
>>>> +#define PCIE_LCTLSTS_SLOT_CLK_CFG           BIT(28)
>>>> +
>>>> +#define PCIE_LCTLSTS2                               0xA0
>>>> +#define PCIE_LCTLSTS2_TGT_LINK_SPEED                GENMASK(3, 0)
>>>> +#define PCIE_LCTLSTS2_TGT_LINK_SPEED_25GT   0x1
>>>> +#define PCIE_LCTLSTS2_TGT_LINK_SPEED_5GT    0x2
>>>> +#define PCIE_LCTLSTS2_TGT_LINK_SPEED_8GT    0x3
>>>> +#define PCIE_LCTLSTS2_TGT_LINK_SPEED_16GT   0x4
>>>> +#define PCIE_LCTLSTS2_HW_AUTO_DIS           BIT(5)
>>>> +
>>>> +/* Ack Frequency Register */
>>>> +#define PCIE_AFR                            0x70C
>>>> +#define PCIE_AFR_FTS_NUM                    GENMASK(15, 8)
>>>> +#define PCIE_AFR_COM_FTS_NUM                        GENMASK(23, 16)
>>>> +#define PCIE_AFR_GEN12_FTS_NUM_DFT          (SZ_128 - 1)
>>>> +#define PCIE_AFR_GEN3_FTS_NUM_DFT           180
>>>> +#define PCIE_AFR_GEN4_FTS_NUM_DFT           196
>>>> +
>>>> +#define PCIE_PLCR_DLL_LINK_EN                       BIT(5)
>>>> +#define PCIE_PORT_LOGIC_FTS                 GENMASK(7, 0)
>>>> +#define PCIE_PORT_LOGIC_DFT_FTS_NUM         (SZ_128 - 1)
>>>> +
>>>> +#define PCIE_MISC_CTRL                              0x8BC
>>>> +#define PCIE_MISC_CTRL_DBI_RO_WR_EN         BIT(0)
>>>> +
>>>> +#define PCIE_MULTI_LANE_CTRL                        0x8C0
>>>> +#define PCIE_UPCONFIG_SUPPORT                       BIT(7)
>>>> +#define PCIE_DIRECT_LINK_WIDTH_CHANGE               BIT(6)
>>>> +#define PCIE_TARGET_LINK_WIDTH                      GENMASK(5, 0)
>>>> +
>>>> +#define PCIE_IOP_CTRL                               0x8C4
>>>> +#define PCIE_IOP_RX_STANDBY_CTRL            GENMASK(6, 0)
>> no need for IOP
> with "are you sure that you need any of the registers above?" I really
> meant all registers above (including, but not limited to IOP)
>
> [...]
>> As I mentioned, VRX200 was a very old PCIe Gen1.1 product. In our latest
>> SoC, Lightning Mountain, we are using the Synopsys controller 5.20/5.50a.
>> We support Gen2 (XRX350/550), Gen3 (PRX300) and Gen4 (x86 based SoC). We
>> also support dual lane and single lane.
>>
>> Some of the above registers are needed to control FTS, link width and
>> link speed.
> only now I noticed that I didn't explain why I was asking whether all
> these registers are needed
> my understanding of the DWC PCIe controller driver "library" is that:
> - all functionality which is provided by the DesignWare PCIe core
> should be added to drivers/pci/controller/dwc/pcie-designware*
> - functionality which is built on top/around the DWC PCIe core should
> be added to <vendor specific driver>
>
> the link width and link speed settings (I don't know about "FTS")
> don't seem Intel/Lantiq controller specific to me
> so the register setup for these bits should be moved to
> drivers/pci/controller/dwc/pcie-designware*

FTS means fast training sequence. Different generations will have
different FTS values. The common DWC driver has a default number for all
generations, which is not optimized.

The DWC driver handles link speed and link width during initialization,
then leaves link speed and link width changes to the device (EP),
according to the PCIe spec. I am not sure if other vendors or customers
have this kind of requirement; we implemented this due to a customer's
requirement.

We can check with the DWC maintainer about this.
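
For reference, the per-generation FTS setup boils down to something like
this (sketch only, using the defines and the pcie_rc_cfg_wr_mask helper
from this patch; fts would be one of the PCIE_AFR_GEN*_FTS_NUM_DFT
defaults, depending on the link generation):

  u32 mask = PCIE_AFR_FTS_NUM | PCIE_AFR_COM_FTS_NUM;
  u32 val = FIELD_PREP(PCIE_AFR_FTS_NUM, fts) |
            FIELD_PREP(PCIE_AFR_COM_FTS_NUM, fts);

  pcie_rc_cfg_wr_mask(lpp, mask, val, PCIE_AFR);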

>
>>> this also makes me wonder if various functions below like
>>> intel_pcie_link_setup() and intel_pcie_max_speed_setup() (and probably
>>> more) are really needed or if it's just cargo cult / copy paste from
>>> an out-of-tree driver).
>> intel_pcie_link_setup is to record the maximum speed and link width. We need
>> these to change speed and link width on the fly, which is not supported by
>> the common dwc driver source.
>> There are two major purposes.
>> 1. For cable applications, they have battery support mode. In this case, it has to
>> switch from x2 and gen4 to x1 and gen1 on the fly
>> 2. Some customers have high EMI issues. They can try to switch to lower speed and
>> lower link width to check on the fly. Otherwise, they have to change the device tree
>> and reboot the system.
> based on your description I imagine that this may be a "common
> problem" (for example: Nvidia Tegra SoCs are also used in portable -
> as in battery powered - applications)
> I don't know enough about the Linux PCIe subsystem to comment on this,
> so I'm hoping that one of the PCIe subsystem maintainers can comment
> whether this is logic that should be implemented on a per-controller
> basis or whether there should be some generic implementation
>
> [...]

I guess the PCIe maintainers will not consider this. From the spec,
link speed and link width changes are initiated by the device (EP)
instead of the RC.

>>>> +static inline u32 pcie_app_rd(struct intel_pcie_port *lpp, u32 ofs)
>>>> +{
>>>> +    return readl(lpp->app_base + ofs);
>>>> +}
>>>> +
>>>> +static inline void pcie_app_wr(struct intel_pcie_port *lpp, u32 val, u32 ofs)
>>>> +{
>>>> +    writel(val, lpp->app_base + ofs);
>>>> +}
>>>> +
>>>> +static void pcie_app_wr_mask(struct intel_pcie_port *lpp,
>>>> +                         u32 mask, u32 val, u32 ofs)
>>>> +{
>>>> +    pcie_update_bits(lpp->app_base, mask, val, ofs);
>>>> +}
>>> do you have plans to support the MIPS SoCs (VRX200, ARX300, XRX350,
>>> XRX550)?
>>> These will need register writes in big endian. in my own experiment [0]
>>> I simply used the regmap interface which will default to little endian
>>> register access but switch to big endian when the devicetree node is
>>> marked with big-endian.
>>>
>>> [...]
>> We can support up to XRX350/XRX500/PRX300 for MIPS SoC since we still
>> sell these products.
> OK, I understand this.
> switching to regmap will give you two benefits:
> - big endian registers writes (without additional code) on the MIPS SoCs
> - you can drop the pcie_app_* helper functions and use
> regmap_{read,write,update_bits} instead

I am not sure if regmap_xxx can avoid the endianness issues. For MIPS, the
behavior also depends on the CONFIG_SWAP_IO option. Anyway, we can switch
to regmap easily.

>
>> [...] However, we have no plans to support EOL products such as VRX200,
>> which also make the driver quite complex because of the glue logic
>> (reset, clock and endianness). For MIPS based platforms, we have
>> endianness control in the device tree, such as inbound_swap and
>> outbound_swap.
>>
>> For VRX200, we have another big concern: PCI and PCIe have coupled
>> endianness, which makes things very bad.
>>
>> However, if you are interested in supporting VRX200, it is highly
>> appreciated.
> thank you for the endianness control description and your concerns
> about the implementation on VRX200
>
> with my experiment I didn't have any problems with reset lines, clocks
> or endianness (at least I believe so)
> let's focus on the newer SoCs first, support for more SoCs can be a second step
>
>>>> +static int intel_pcie_bios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
>>>> +{
>>>> +
>>>> +    struct pcie_port *pp = dev->bus->sysdata;
>>>> +    struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
>>>> +    struct intel_pcie_port *lpp = dev_get_drvdata(pci->dev);
>>>> +    struct device *pdev = lpp->pci->dev;
>>>> +    u32 irq_bit;
>>>> +    int irq;
>>>> +
>>>> +    if (pin == PCI_INTERRUPT_UNKNOWN || pin > PCI_NUM_INTX) {
>>>> +            dev_warn(pdev, "WARNING: dev %s: invalid interrupt pin %d\n",
>>>> +                     pci_name(dev), pin);
>>>> +            return -1;
>>>> +    }
>>>> +    irq = of_irq_parse_and_map_pci(dev, slot, pin);
>>>> +    if (!irq) {
>>>> +            dev_err(pdev, "trying to map irq for unknown slot:%d pin:%d\n",
>>>> +                    slot, pin);
>>>> +            return -1;
>>>> +    }
>>>> +    /* Pin to irq offset bit position */
>>>> +    irq_bit = BIT(pin + PCIE_INTX_OFFSET);
>>>> +
>>>> +    /* Clear possible pending interrupts first */
>>>> +    pcie_app_wr(lpp, irq_bit, PCIE_IRNCR);
>>>> +
>>>> +    pcie_app_wr_mask(lpp, irq_bit, irq_bit, PCIE_IRNEN);
>>>> +    return irq;
>>>> +}
>>> my interpretation is that there's an interrupt controller embedded into
>>> the APP registers. The integrated interrupt controller takes 5
>>> interrupts and provides the legacy PCI_INTA/B/C/D interrupts as well as
>>> a WAKEUP IRQ. Each of its own interrupts is tied to one of the parent
>>> interrupts.
>> For MIPS based SoCs, there is no interrupt controller for such APP registers.
> let me try to describe the IRNCR register with my own words:
>
> a) it contains various interrupts for the controller itself (for
> example: HOTPLUG, RX fatal error, RX non fatal error, ...)
> these interrupts arrive at the controllers interrupt line (requested
> below using devm_request_irq).
> all of these interrupts are enabled when initializing the controller
> as they are valid regardless of which PCI interrupt type (MSI, legacy
> A/B/C/D) is used
>
> b) it contains bits to enable/disable the legacy PCI INTA/B/C/D interrupts
> these interrupts arrive at a dedicated interrupt line each.
> each individual interrupt has an enable bit in the IRNCR register and
> should only be enabled when it's actually needed
>
> c) it contains a PCI WAKE interrupt
> it arrives at a dedicated interrupt line
> I don't know when it should be enabled or not (I don't even know if
> this interrupt is important at all)
>
> let's focus on case a) and b) because I don't know if case c) is
> relevant at all:
> case a) is implemented in the IRQ setup, this matches my expectation
>
> case b) is implemented in the map_irq callback. however, that only
> covers "enabling" the interrupt line. I cannot see the code to disable
> the interrupt line again.

You are right. We don't disable the interrupt line for A/B/C/D.

For PCI INTx A/B/C/D, the SoC only needs to enable them (level-triggered
interrupts). To clear the interrupt, we have to clear the source.
Actually, all IRNCR interrupts are level-triggered interrupts.

> now I am wondering:
> - if we don't have to disable the interrupt line (once it is enabled),
> why can't we enable all of these interrupts at initialization time
> (instead of doing it on-demand)?
Good point! We can even remove the map_irq patch and directly call
of_irq_parse_and_map_pci as other drivers do.

> - if the interrupts do have to be disabled again (that is what I
> assumed so far) then where is this supposed to happen? (my solution
> for this was to implement a simple interrupt controller within the
> PCIe driver which only supports enable/disable. disclaimer: I didn't
> ask the PCI or interrupt maintainers for feedback on this yet)
>
> [...]

We can implement an interrupt controller, but personally I think it has
too much overhead. If we follow this way, almost all modules of all old
Lantiq SoCs could implement an interrupt controller. Maybe you can check
with the PCI maintainers for their comments.

>>>> +static void intel_pcie_bridge_class_code_setup(struct intel_pcie_port *lpp)
>>>> +{
>>>> +    pcie_rc_cfg_wr_mask(lpp, PCIE_MISC_CTRL_DBI_RO_WR_EN,
>>>> +                        PCIE_MISC_CTRL_DBI_RO_WR_EN, PCIE_MISC_CTRL);
>>>> +    pcie_rc_cfg_wr_mask(lpp, 0xffffff00, PCI_CLASS_BRIDGE_PCI << 16,
>>>> +                        PCIE_CCRID);
>>>> +    pcie_rc_cfg_wr_mask(lpp, PCIE_MISC_CTRL_DBI_RO_WR_EN, 0,
>>>> +                        PCIE_MISC_CTRL);
>>>> +}
>>> in my own experiments I didn't need this - have you confirmed that it's
>>> required? and if it is required: why is that?
>>> if others are curious as well then maybe add the explanation as comment
>>> to the driver
>>>
>>> [...]
>> This is needed. In the old driver, we fixed this by fixup. The original
>> comment as follows,
>>
>> /*
>>    * The root complex has a hardwired class of PCI_CLASS_NETWORK_OTHER or
>>    * PCI_CLASS_BRIDGE_HOST, when it is operating as a root complex this
>>    * needs to be switched to * PCI_CLASS_BRIDGE_PCI
>>    */
> that would be a good comment to add if you really need it
> can you please look at dw_pcie_setup_rc (from pcie-designware-host.c), it does:
>    /* Enable write permission for the DBI read-only register */
>    dw_pcie_dbi_ro_wr_en(pci);
>    /* Program correct class for RC */
>    dw_pcie_wr_own_conf(pp, PCI_CLASS_DEVICE, 2, PCI_CLASS_BRIDGE_PCI);
>    /* Better disable write permission right after the update */
>    dw_pcie_dbi_ro_wr_dis(pci);
>
> so my understanding is that there is a functional requirement to set
> the class to PCI_CLASS_BRIDGE_PCI
> however, that requirement is already covered by pcie-designware-host.c
I will task Dilip to check if we can use the dwc one.
>>>> +static const char *pcie_link_gen_to_str(int gen)
>>>> +{
>>>> +    switch (gen) {
>>>> +    case PCIE_LINK_SPEED_GEN1:
>>>> +            return "2.5";
>>>> +    case PCIE_LINK_SPEED_GEN2:
>>>> +            return "5.0";
>>>> +    case PCIE_LINK_SPEED_GEN3:
>>>> +            return "8.0";
>>>> +    case PCIE_LINK_SPEED_GEN4:
>>>> +            return "16.0";
>>>> +    default:
>>>> +            return "???";
>>>> +    }
>>>> +}
>>> why duplicate PCIE_SPEED2STR from drivers/pci/pci.h?
>> Great! even link_device_setup can be reduced since pcie_get_speed_cap is
>> implementing similar stuff.
> removing code is always great, I'm glad that this helped!
>
>>>> +static int intel_pcie_ep_rst_init(struct intel_pcie_port *lpp)
>>>> +{
>>>> +    struct device *dev = lpp->pci->dev;
>>>> +    int ret = 0;
>>>> +
>>>> +    lpp->reset_gpio = devm_gpiod_get(dev, "reset", GPIOD_OUT_LOW);
>>>> +    if (IS_ERR(lpp->reset_gpio)) {
>>>> +            ret = PTR_ERR(lpp->reset_gpio);
>>>> +            if (ret != -EPROBE_DEFER)
>>>> +                    dev_err(dev, "failed to request PCIe GPIO: %d\n", ret);
>>>> +            return ret;
>>>> +    }
>>>> +    /* Make initial reset last for 100ms */
>>>> +    msleep(100);
>>> why is there lpp->rst_interval when you hardcode 100ms here?
>> They serve different purposes. rst_interval is purely for the asserted
>> reset pulse.
>>
>> Here 100ms is to make sure the initial state is kept for at least 100ms;
>> then we can reset.
> my interpretation is that it totally depends on the board design or
> the bootloader setup.

You are partially right. However, we should not add a dependency on the
bootloader and board here. rst_interval is just to make sure the pulse
(low active or high active) lasts for the specified time.

> on a board where the bootloader initializes the GPIO to logical "0"
> the devm_gpiod_get() call will not change the GPIO output.
> in this case a 100ms delay may be OK (based on your description)
>
> however, if the GPIO was a logical "1" (for example if the bootloader
> set it to that value - and considering the GPIOD_OUT_LOW flag) then it
> will be set to "0" with the devm_gpiod_get() call above.
> now there is a transition from "deasserted" to "asserted" which does
> not honor lpp->rst_interval
>
> I'm not sure if this is a problem or not - all I know is that I don't
> fully understand the problem yet
>>> [...]
>>>> +static int intel_pcie_setup_irq(struct intel_pcie_port *lpp)
>>>> +{
>>>> +    struct device *dev = lpp->pci->dev;
>>>> +    struct platform_device *pdev;
>>>> +    char *irq_name;
>>>> +    int irq, ret;
>>>> +
>>>> +    pdev = to_platform_device(dev);
>>>> +    irq = platform_get_irq(pdev, 0);
>>>> +    if (irq < 0) {
>>>> +            dev_err(dev, "missing sys integrated irq resource\n");
>>>> +            return irq;
>>>> +    }
>>>> +
>>>> +    irq_name = devm_kasprintf(dev, GFP_KERNEL, "pcie_misc%d", lpp->id);
>>>> +    if (!irq_name) {
>>>> +            dev_err(dev, "failed to alloc irq name\n");
>>>> +            return -ENOMEM;
>>> you are only requesting one IRQ line for the whole driver. personally
>>> I would drop the custom irq_name and pass NULL to devm_request_irq
>>> because that will then fall-back to auto-generating an IRQ name based
>>> on the devicetree node. I assume it's the same for ACPI but I haven't
>>> tried that yet.
>> Not sure I understand what you mean.  As you know from the code, we have
>> lpp->id which means
>>
>> we have multiple instances of Root Complex(1,2,3,4,8), so we need this
>> for identification.
> sorry, I was wrong with my original statement, the name cannot be NULL
>
> I checked the other drivers (meson-gx-mmc and meson-saradc) I had in
> mind and they use dev_name(&pdev->dev);
> that will give a unique interrupt name (derived from the devicetree)
> in /proc/interrupts, for example: c1108680.adc, d0070000.mmc,
> d0072000.mmc, ...
>
> [...]

Right. We also use dev_name in our code. However, some people like
numbering the interfaces, which is easier for them to remember and
discuss. I link the id to the domain so that we can easily know what is
wrong once we have an issue. When we tell the address to hardware people
and support staff, they are totally lost.

Again, it is OK to switch to dev_name.

>>>> +static void intel_pcie_disable_clks(struct intel_pcie_port *lpp)
>>>> +{
>>>> +    clk_disable_unprepare(lpp->core_clk);
>>>> +}
>>>> +
>>>> +static int intel_pcie_enable_clks(struct intel_pcie_port *lpp)
>>>> +{
>>>> +    int ret = clk_prepare_enable(lpp->core_clk);
>>>> +
>>>> +    if (ret)
>>>> +            dev_err(lpp->pci->dev, "Core clock enable failed: %d\n", ret);
>>>> +
>>>> +    return ret;
>>>> +}
>>> you have some functions (using these two as an example) which are only
>>> used once. they add some boilerplate and (in my opinion) make the code
>>> harder to read.
>> Yes. If we plan to support old MIPS SoC, we have a lot of clocks. The
>> code is from old code. We can remove this wrapper for new SoC. Later we
>> can add them back.
> if multiple clocks are needed then I suggest using the bulk clock
> operations (clk_bulk_data)
> so I still think these functions should be dropped
>
> [...]

It is OK to remove them; if necessary, we can add them back for the old
SoCs or use clk_bulk_data.

>>>> +static ssize_t
>>>> +pcie_link_status_show(struct device *dev, struct device_attribute *attr,
>>>> +                  char *buf)
>>>> +{
>>>> +    u32 reg, width, gen;
>>>> +    struct intel_pcie_port *lpp;
>>>> +
>>>> +    lpp = dev_get_drvdata(dev);
>>>> +
>>>> +    reg = pcie_rc_cfg_rd(lpp, PCIE_LCTLSTS);
>>>> +    width = FIELD_GET(PCIE_LCTLSTS_NEGOTIATED_LINK_WIDTH, reg);
>>>> +    gen = FIELD_GET(PCIE_LCTLSTS_LINK_SPEED, reg);
>>>> +    if (gen > lpp->max_speed)
>>>> +            return -EINVAL;
>>>> +
>>>> +    return sprintf(buf, "Port %2u Width x%u Speed %s GT/s\n", lpp->id,
>>>> +                   width, pcie_link_gen_to_str(gen));
>>>> +}
>>>> +static DEVICE_ATTR_RO(pcie_link_status);
>>> "lspci -vv | grep LnkSta" already shows the link speed and width.
>>> why do you need this?
>> In most embedded packages, lspci from busybox only shows the device id and
>> vendor id.
>>
>> They don't install the lspci utilities.
> I think the PCI maintainers should comment on this as I'm not sure
> whether this is something that should be implemented driver-specific
> (why not make it available for all drivers?)
>
> [...]
I can't comment on this :)
>>>> +static void __intel_pcie_remove(struct intel_pcie_port *lpp)
>>>> +{
>>>> +    pcie_rc_cfg_wr_mask(lpp, PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER,
>>>> +                        0, PCI_COMMAND);
>>> I expect logic like this to be part of the PCI subsystem in Linux.
>>> why is this needed?
>>>
>>> [...]
>> In the bind/unbind case we use this. For extreme cases, we use unbind and
>> bind to reset PCI instead of rebooting.
> OK, but this does not seem Intel/Lantiq specific at all
> why isn't this managed by either pcie-designware-host.c or the generic
> PCI/PCIe subsystem in Linux?

I doubt other RC drivers will support bind/unbind. We do have this
requirement due to power management of the WiFi devices.

>
>
> Martin

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v2 3/3] dwc: PCI: intel: Intel PCIe RC controller driver
  2019-08-27  3:09         ` Chuan Hua, Lei
@ 2019-08-27  8:47           ` Dilip Kota
  2019-08-27 20:51             ` Martin Blumenstingl
  2019-08-27 20:38           ` Martin Blumenstingl
  1 sibling, 1 reply; 23+ messages in thread
From: Dilip Kota @ 2019-08-27  8:47 UTC (permalink / raw)
  To: Chuan Hua, Lei, Martin Blumenstingl
  Cc: andriy.shevchenko, cheol.yong.kim, devicetree, gustavo.pimentel,
	hch, jingoohan1, linux-kernel, linux-pci, qi-ming.wu

Hi Martin,

On 8/27/2019 11:09 AM, Chuan Hua, Lei wrote:
[...]
>
>
>> now I am wondering:
>> - if we don't have to disable the interrupt line (once it is enabled),
>> why can't we enable all of these interrupts at initialization time
>> (instead of doing it on-demand)?
> Good point! We can even remove the map_irq patch and directly call
> of_irq_parse_and_map_pci as other drivers do.

Irrespective of disabling, imo the interrupts (A/B/C/D) should be enabled
when they are requested, which happens during the map_irq() call.

>> - if the interrupts do have to be disabled again (that is what I
>> assumed so far) then where is this supposed to happen? (my solution
>> for this was to implement a simple interrupt controller within the
>> PCIe driver which only supports enable/disable. disclaimer: I didn't
>> ask the PCI or interrupt maintainers for feedback on this yet)
>>
>> [...]
>
> We can implement an interrupt controller, but personally I think it has
> too much overhead. If we follow this way, almost all modules of all old
> Lantiq SoCs could implement an interrupt controller. Maybe you can check
> with the PCI maintainers for their comments.
>
[...]
>>> This is needed. In the old driver, we fixed this by fixup. The original
>>> comment as follows,
>>>
>>> /*
>>>    * The root complex has a hardwired class of 
>>> PCI_CLASS_NETWORK_OTHER or
>>>    * PCI_CLASS_BRIDGE_HOST, when it is operating as a root complex this
>>>    * needs to be switched to * PCI_CLASS_BRIDGE_PCI
>>>    */
>> that would be a good comment to add if you really need it
>> can you please look at dw_pcie_setup_rc (from 
>> pcie-designware-host.c), it does:
>>    /* Enable write permission for the DBI read-only register */
>>    dw_pcie_dbi_ro_wr_en(pci);
>>    /* Program correct class for RC */
>>    dw_pcie_wr_own_conf(pp, PCI_CLASS_DEVICE, 2, PCI_CLASS_BRIDGE_PCI);
>>    /* Better disable write permission right after the update */
>>    dw_pcie_dbi_ro_wr_dis(pci);
>>
>> so my understanding is that there is a functional requirement to set
>> the class to PCI_CLASS_BRIDGE_PCI
>> however, that requirement is already covered by pcie-designware-host.c
> I will task Dilip to check if we can use dwc one.
dw_pcie_setup_rc() cannot be called here because it does not set
PCI_CLASS_BRIDGE_PCI alone; it configures many other things as well.

[...]


Regards,

Dilip


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v2 3/3] dwc: PCI: intel: Intel PCIe RC controller driver
  2019-08-26 20:14         ` Martin Blumenstingl
@ 2019-08-27  9:14           ` Dilip Kota
  0 siblings, 0 replies; 23+ messages in thread
From: Dilip Kota @ 2019-08-27  9:14 UTC (permalink / raw)
  To: Martin Blumenstingl
  Cc: Chuan Hua, Lei, andriy.shevchenko, cheol.yong.kim, devicetree,
	gustavo.pimentel, hch, jingoohan1, linux-kernel, linux-pci,
	qi-ming.wu


On 8/27/2019 4:14 AM, Martin Blumenstingl wrote:
> second example: pcie-tegra194 (only in -next, will be part of v5.4)
>    struct tegra_pcie_dw {
>      ...
>      struct dw_pcie pci;
>      ...
>    };
>
> so some drivers store a pointer to the dw_pcie struct vs.
> embedding the dw_pcie struct directly.
> as far as I know the result will be equal, except that you don't have
> to use a second devm_kzalloc for struct dw_pcie (and thus reducing
> memory fragmentation).
Okay, I will change it to "struct dw_pcie pci;", i.e. roughly (sketch; the
to_intel_pcie helper name is just an example):
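
  struct intel_pcie_port {
          struct dw_pcie  pci;
          ...
  };

  #define to_intel_pcie(x)  container_of(x, struct intel_pcie_port, pci)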

Thanks for the feedback.

Regards,

Dilip


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v2 3/3] dwc: PCI: intel: Intel PCIe RC controller driver
  2019-08-26 21:15       ` Martin Blumenstingl
  2019-08-27  3:09         ` Chuan Hua, Lei
@ 2019-08-27 14:28         ` Andy Shevchenko
  1 sibling, 0 replies; 23+ messages in thread
From: Andy Shevchenko @ 2019-08-27 14:28 UTC (permalink / raw)
  To: Martin Blumenstingl
  Cc: Chuan Hua, Lei, eswara.kota, cheol.yong.kim, devicetree,
	gustavo.pimentel, hch, jingoohan1, linux-kernel, linux-pci,
	qi-ming.wu

On Mon, Aug 26, 2019 at 11:15:48PM +0200, Martin Blumenstingl wrote:
> On Mon, Aug 26, 2019 at 5:31 AM Chuan Hua, Lei
> <chuanhua.lei@linux.intel.com> wrote:

> > As I mentioned, VRX200 was a very old PCIe Gen1.1 product. In our latest
> > SoC, Lightning Mountain, we are using the Synopsys controller 5.20/5.50a.
> > We support Gen2 (XRX350/550), Gen3 (PRX300) and Gen4 (x86 based SoC). We
> > also support dual lane and single lane.
> >
> > Some of the above registers are needed to control FTS, link width and
> > link speed.
> only now I noticed that I didn't explain why I was asking whether all
> these registers are needed
> my understanding of the DWC PCIe controller driver "library" is that:
> - all functionality which is provided by the DesignWare PCIe core
> should be added to drivers/pci/controller/dwc/pcie-designware*
> - functionality which is built on top/around the DWC PCIe core should
> be added to <vendor specific driver>
> 
> the link width and link speed settings (I don't know about "FTS")
> don't seem Intel/Lantiq controller specific to me
> so the register setup for these bits should be moved to
> drivers/pci/controller/dwc/pcie-designware*

I think it may be done this way. We already have an example with the stmmac
(DWC network card IP) driver, which is split in a similar way.

> > We can support up to XRX350/XRX500/PRX300 for MIPS SoC since we still
> > sell these products.
> OK, I understand this.
> switching to regmap will give you two benefits:
> - big endian registers writes (without additional code) on the MIPS SoCs
> - you can drop the pcie_app_* helper functions and use
> regmap_{read,write,update_bits} instead

Actually one more: a dump of the registers on request via debugfs, which
I found very helpful.

-- 
With Best Regards,
Andy Shevchenko



^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v2 3/3] dwc: PCI: intel: Intel PCIe RC controller driver
  2019-08-27  3:09         ` Chuan Hua, Lei
  2019-08-27  8:47           ` Dilip Kota
@ 2019-08-27 20:38           ` Martin Blumenstingl
  2019-08-28  3:35             ` Chuan Hua, Lei
  2019-08-29  5:10             ` Kishon Vijay Abraham I
  1 sibling, 2 replies; 23+ messages in thread
From: Martin Blumenstingl @ 2019-08-27 20:38 UTC (permalink / raw)
  To: Chuan Hua, Lei
  Cc: eswara.kota, andriy.shevchenko, cheol.yong.kim, devicetree,
	gustavo.pimentel, hch, jingoohan1, linux-kernel, linux-pci,
	qi-ming.wu, kishon

Hello,

On Tue, Aug 27, 2019 at 5:09 AM Chuan Hua, Lei
<chuanhua.lei@linux.intel.com> wrote:
>
> Hi Martin,
>
> Thanks for your feedback. Please check the comments below.
>
> On 8/27/2019 5:15 AM, Martin Blumenstingl wrote:
> > Hello,
> >
> > On Mon, Aug 26, 2019 at 5:31 AM Chuan Hua, Lei
> > <chuanhua.lei@linux.intel.com> wrote:
> >> Hi Martin,
> >>
> >> Thanks for your valuable comments. I reply some of them as below.
> > you're welcome
> >
> > [...]
> >>>> +config PCIE_INTEL_AXI
> >>>> +        bool "Intel AHB/AXI PCIe host controller support"
> >>> I believe that this is mostly the same IP block as it's used in Lantiq
> >>> (xDSL) VRX200 SoCs (with MIPS cores) which was introduced in 2010
> >>> (before Intel acquired Lantiq).
> >>> This is why I would have personally called the driver PCIE_LANTIQ
> >> VRX200 SoC (internally called VR9) was the first PCIe SoC product which
> >> was using the Synopsys controller v3.30a. It only supports PCIe
> >> Gen1.1/1.0. The PHY is an internal PHY from Infineon.
> > thank you for these details
> > I wasn't aware that the PCIe PHY on these SoCs was developed by
> > Infineon nor is the DWC version documented anywhere
>
> The VRX200/ARX300 PHY is an internal one. There is a lot of hardcoded
> setup which came from the hardware people. From XRX500 on, we switched to
> the Synopsys PHY. However, later the ComboPHY comes into the picture.
> Even though we have the same controller with different versions, we will
> most likely have three different PHY drivers.
that is a good argument for using a separate PHY driver and
integrating that using the PHY subsystem (which is already the case in
this patch revision)

> > [...]
> >>>> +#define PCIE_CCRID                          0x8
> >>>> +
> >>>> +#define PCIE_LCAP                           0x7C
> >>>> +#define PCIE_LCAP_MAX_LINK_SPEED            GENMASK(3, 0)
> >>>> +#define PCIE_LCAP_MAX_LENGTH_WIDTH          GENMASK(9, 4)
> >>>> +
> >>>> +/* Link Control and Status Register */
> >>>> +#define PCIE_LCTLSTS                                0x80
> >>>> +#define PCIE_LCTLSTS_ASPM_ENABLE            GENMASK(1, 0)
> >>>> +#define PCIE_LCTLSTS_RCB128                 BIT(3)
> >>>> +#define PCIE_LCTLSTS_LINK_DISABLE           BIT(4)
> >>>> +#define PCIE_LCTLSTS_COM_CLK_CFG            BIT(6)
> >>>> +#define PCIE_LCTLSTS_HW_AW_DIS                      BIT(9)
> >>>> +#define PCIE_LCTLSTS_LINK_SPEED                     GENMASK(19, 16)
> >>>> +#define PCIE_LCTLSTS_NEGOTIATED_LINK_WIDTH  GENMASK(25, 20)
> >>>> +#define PCIE_LCTLSTS_SLOT_CLK_CFG           BIT(28)
> >>>> +
> >>>> +#define PCIE_LCTLSTS2                               0xA0
> >>>> +#define PCIE_LCTLSTS2_TGT_LINK_SPEED                GENMASK(3, 0)
> >>>> +#define PCIE_LCTLSTS2_TGT_LINK_SPEED_25GT   0x1
> >>>> +#define PCIE_LCTLSTS2_TGT_LINK_SPEED_5GT    0x2
> >>>> +#define PCIE_LCTLSTS2_TGT_LINK_SPEED_8GT    0x3
> >>>> +#define PCIE_LCTLSTS2_TGT_LINK_SPEED_16GT   0x4
> >>>> +#define PCIE_LCTLSTS2_HW_AUTO_DIS           BIT(5)
> >>>> +
> >>>> +/* Ack Frequency Register */
> >>>> +#define PCIE_AFR                            0x70C
> >>>> +#define PCIE_AFR_FTS_NUM                    GENMASK(15, 8)
> >>>> +#define PCIE_AFR_COM_FTS_NUM                        GENMASK(23, 16)
> >>>> +#define PCIE_AFR_GEN12_FTS_NUM_DFT          (SZ_128 - 1)
> >>>> +#define PCIE_AFR_GEN3_FTS_NUM_DFT           180
> >>>> +#define PCIE_AFR_GEN4_FTS_NUM_DFT           196
> >>>> +
> >>>> +#define PCIE_PLCR_DLL_LINK_EN                       BIT(5)
> >>>> +#define PCIE_PORT_LOGIC_FTS                 GENMASK(7, 0)
> >>>> +#define PCIE_PORT_LOGIC_DFT_FTS_NUM         (SZ_128 - 1)
> >>>> +
> >>>> +#define PCIE_MISC_CTRL                              0x8BC
> >>>> +#define PCIE_MISC_CTRL_DBI_RO_WR_EN         BIT(0)
> >>>> +
> >>>> +#define PCIE_MULTI_LANE_CTRL                        0x8C0
> >>>> +#define PCIE_UPCONFIG_SUPPORT                       BIT(7)
> >>>> +#define PCIE_DIRECT_LINK_WIDTH_CHANGE               BIT(6)
> >>>> +#define PCIE_TARGET_LINK_WIDTH                      GENMASK(5, 0)
> >>>> +
> >>>> +#define PCIE_IOP_CTRL                               0x8C4
> >>>> +#define PCIE_IOP_RX_STANDBY_CTRL            GENMASK(6, 0)
> >> no need for IOP
> > with "are you sure that you need any of the registers above?" I really
> > meant all registers above (including, but not limited to IOP)
> >
> > [...]
> >> As I mentioned, VRX200 was a very old PCIe Gen1.1 product. In our latest
> >> SoC, Lightning Mountain, we are using the Synopsys controller 5.20/5.50a.
> >> We support Gen2 (XRX350/550), Gen3 (PRX300) and Gen4 (x86 based SoC). We
> >> also support dual lane and single lane.
> >>
> >> Some of the above registers are needed to control FTS, link width and
> >> link speed.
> > only now I noticed that I didn't explain why I was asking whether all
> > these registers are needed
> > my understanding of the DWC PCIe controller driver "library" is that:
> > - all functionality which is provided by the DesignWare PCIe core
> > should be added to drivers/pci/controller/dwc/pcie-designware*
> > - functionality which is built on top/around the DWC PCIe core should
> > be added to <vendor specific driver>
> >
> > the link width and link speed settings (I don't know about "FTS")
> > don't seem Intel/Lantiq controller specific to me
> > so the register setup for these bits should be moved to
> > drivers/pci/controller/dwc/pcie-designware*
>
> FTS means fast training sequence. Different generations will have
> different FTS values. The common DWC driver has a default number for all
> generations, which is not optimized.
I am not a DWC PCIe driver expert, but it seems to me that this is
exactly the reason why struct dw_pcie has a "version" field (which
you're also filling).
same as below: I'm interested in the DWC PCIe maintainer's opinion

> The DWC driver handles link speed and link width during initialization,
> then leaves link speed and link width changes to the device (EP),
> according to the PCIe spec. I am not sure if other vendors or customers
> have this kind of requirement; we implemented this due to a customer's
> requirement.
>
> We can check with the DWC maintainer about this.
thank you for the explanation.
I am also interested in hearing the DWC PCIe maintainer's opinion on this topic

[...]
> > now I am wondering:
> > - if we don't have to disable the interrupt line (once it is enabled),
> > why can't we enable all of these interrupts at initialization time
> > (instead of doing it on-demand)?
> Good point! We can even remove the map_irq patch and directly call
> of_irq_parse_and_map_pci as other drivers do.
>
> > - if the interrupts do have to be disabled again (that is what I
> > assumed so far) then where is this supposed to happen? (my solution
> > for this was to implement a simple interrupt controller within the
> > PCIe driver which only supports enable/disable. disclaimer: I didn't
> > ask the PCI or interrupt maintainers for feedback on this yet)
> >
> > [...]
>
> We can implement an interrupt controller, but personally I think it has
> too much overhead. If we follow this way, almost all modules of all old
> Lantiq SoCs could implement an interrupt controller. Maybe you can check
> with the PCI maintainers for their comments.
if we can enable the PCI_INTA/B/C/D interrupts unconditionally then
you can switch to the standard of_irq_parse_and_map_pci implementation
(as you already noted above).
in that case the extra interrupt controller won't be needed.
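
i.e. something like this once during host init (untested;
PCIE_IRN_INTX_MASK is a made-up define covering the four INTA/B/C/D
bits), combined with keeping of_irq_parse_and_map_pci as the map_irq
callback:

  /* clear stale interrupts, then unconditionally enable INTA/B/C/D */
  pcie_app_wr(lpp, PCIE_IRN_INTX_MASK, PCIE_IRNCR);
  pcie_app_wr_mask(lpp, PCIE_IRN_INTX_MASK, PCIE_IRN_INTX_MASK,
                   PCIE_IRNEN);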

I have no idea how to test whether unconditionally enabling these
interrupts (in the APP registers that is) causes any problems though.
that's why I went the interrupt-controller route in my experiment.
if the hardware works with a simplified version then I'm more than
happy to use that

[...]
> >>>> +static int intel_pcie_ep_rst_init(struct intel_pcie_port *lpp)
> >>>> +{
> >>>> +    struct device *dev = lpp->pci->dev;
> >>>> +    int ret = 0;
> >>>> +
> >>>> +    lpp->reset_gpio = devm_gpiod_get(dev, "reset", GPIOD_OUT_LOW);
> >>>> +    if (IS_ERR(lpp->reset_gpio)) {
> >>>> +            ret = PTR_ERR(lpp->reset_gpio);
> >>>> +            if (ret != -EPROBE_DEFER)
> >>>> +                    dev_err(dev, "failed to request PCIe GPIO: %d\n", ret);
> >>>> +            return ret;
> >>>> +    }
> >>>> +    /* Make initial reset last for 100ms */
> >>>> +    msleep(100);
> >>> why is there lpp->rst_interval when you hardcode 100ms here?
> >> They serve different purposes. rst_interval is purely for the asserted
> >> reset pulse.
> >>
> >> Here 100ms is to make sure the initial state is kept for at least 100ms;
> >> then we can reset.
> > my interpretation is that it totally depends on the board design or
> > the bootloader setup.
>
> You are partially right. However, we should not add a dependency on the
> bootloader and board here. rst_interval is just to make sure the pulse
> (low active or high active) lasts for the specified time.
+Cc Kishon

he recently added support for a GPIO reset line to the
pcie-cadence-host.c [0] and I believe he's also maintaining
pci-keystone.c which are both using a 100uS delay (instead of 100ms).
I don't know the PCIe spec so maybe Kishon can comment on the values
that should be used according to the spec.
if there's then a reason why values other than the ones from the spec
are needed then there should be a comment explaining why different
values are needed (what problem does it solve).
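
for comparison, the sequence in those drivers is essentially (a sketch
from memory, not the exact code):

  gpiod_set_value_cansleep(reset_gpio, 1);  /* assert PERST# */
  usleep_range(100, 200);                   /* ~100us, not 100ms */
  gpiod_set_value_cansleep(reset_gpio, 0);  /* deassert PERST# */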

> > on a board where the bootloader initializes the GPIO to logical "0"
> > the devm_gpiod_get() call will not change the GPIO output.
> > in this case a 100ms delay may be OK (based on your description)
> >
> > however, if the GPIO was a logical "1" (for example if the bootloader
> > set it to that value - and considering the GPIOD_OUT_LOW flag) then it
> > will be set to "0" with the devm_gpiod_get() call above.
> > now there is a transition from "deasserted" to "asserted" which does
> > not honor lpp->rst_interval
> >
> > I'm not sure if this is a problem or not - all I know is that I don't
> > fully understand the problem yet
> >>> [...]
> >>>> +static int intel_pcie_setup_irq(struct intel_pcie_port *lpp)
> >>>> +{
> >>>> +    struct device *dev = lpp->pci->dev;
> >>>> +    struct platform_device *pdev;
> >>>> +    char *irq_name;
> >>>> +    int irq, ret;
> >>>> +
> >>>> +    pdev = to_platform_device(dev);
> >>>> +    irq = platform_get_irq(pdev, 0);
> >>>> +    if (irq < 0) {
> >>>> +            dev_err(dev, "missing sys integrated irq resource\n");
> >>>> +            return irq;
> >>>> +    }
> >>>> +
> >>>> +    irq_name = devm_kasprintf(dev, GFP_KERNEL, "pcie_misc%d", lpp->id);
> >>>> +    if (!irq_name) {
> >>>> +            dev_err(dev, "failed to alloc irq name\n");
> >>>> +            return -ENOMEM;
> >>> you are only requesting one IRQ line for the whole driver. personally
> >>> I would drop the custom irq_name and pass NULL to devm_request_irq
> >>> because that will then fall-back to auto-generating an IRQ name based
> >>> on the devicetree node. I assume it's the same for ACPI but I haven't
> >>> tried that yet.
> >> Not sure I understand what you mean.  As you know from the code, we have
> >> lpp->id which means
> >>
> >> we have multiple instances of Root Complex(1,2,3,4,8), so we need this
> >> for identification.
> > sorry, I was wrong with my original statement, the name cannot be NULL
> >
> > I checked the other drivers (meson-gx-mmc and meson-saradc) I had in
> > mind and they use dev_name(&pdev->dev);
> > that will give a unique interrupt name (derived from the devicetree)
> > in /proc/interrupts, for example: c1108680.adc, d0070000.mmc,
> > d0072000.mmc, ...
> >
> > [...]
>
> Right. We also use dev_name in our code. However, some people like
> numbering the interfaces, which is easier for them to remember and
> discuss. I link the id to the domain so that we can easily know what is
> wrong once we have an issue. When we tell the address to hardware people
> and support staff, they are totally lost.
ah, this also explains why linux,pci-domain is a mandatory property
(while it's optional for any other PCIe controller driver that I have
seen so far)

> Again, it is OK to switch to dev_name.
both ways will work, I just wanted to point out that you can achieve a
similar goal with less code.
if the current solution works best for your support team then I'm fine
with that as well
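
for reference, a minimal sketch of the dev_name() variant (assuming the
existing IRQ handler is called intel_pcie_core_isr - that name is just a
placeholder, not the actual one from your driver):

	irq = platform_get_irq(pdev, 0);
	if (irq < 0)
		return irq;

	return devm_request_irq(dev, irq, intel_pcie_core_isr, 0,
				dev_name(dev), lpp);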

[...]
> >>>> +static void __intel_pcie_remove(struct intel_pcie_port *lpp)
> >>>> +{
> >>>> +    pcie_rc_cfg_wr_mask(lpp, PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER,
> >>>> +                        0, PCI_COMMAND);
> >>> I expect logic like this to be part of the PCI subsystem in Linux.
> >>> why is this needed?
> >>>
> >>> [...]
> >> In the bind/unbind case we use this. For extreme cases, we use unbind and
> >> bind to reset PCI instead of rebooting.
> > OK, but this does not seem Intel/Lantiq specific at all
> > why isn't this managed by either pcie-designware-host.c or the generic
> > PCI/PCIe subsystem in Linux?
>
> I doubt if other RC drivers will support bind/unbind. We do have this
> requirement due to power management from WiFi devices.
pcie-designware-host.c will gain .remove() support in Linux 5.4
I don't understand how .remove() and then .probe() again is different
from .unbind() followed by a .bind()


Martin

[0] https://lkml.org/lkml/2019/6/4/626

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v2 3/3] dwc: PCI: intel: Intel PCIe RC controller driver
  2019-08-27  8:47           ` Dilip Kota
@ 2019-08-27 20:51             ` Martin Blumenstingl
  0 siblings, 0 replies; 23+ messages in thread
From: Martin Blumenstingl @ 2019-08-27 20:51 UTC (permalink / raw)
  To: Dilip Kota
  Cc: Chuan Hua, Lei, andriy.shevchenko, cheol.yong.kim, devicetree,
	gustavo.pimentel, hch, jingoohan1, linux-kernel, linux-pci,
	qi-ming.wu

Hi Dilip,

On Tue, Aug 27, 2019 at 10:47 AM Dilip Kota <eswara.kota@linux.intel.com> wrote:
[...]
> >
> >
> >> now I am wondering:
> >> - if we don't have to disable the interrupt line (once it is enabled),
> >> why can't we enable all of these interrupts at initialization time
> >> (instead of doing it on-demand)?
> > Good point! we can even remove the map_irq patch and directly call
> >
> > of_irq_parse_and_map_pci as other drivers do.
>
> Irrespective of disabling, imo interrupts (A/B/C/D) should be enabled
> when they are requested, which happens during the map_irq() call.
with an integrated interrupt controller (which I decided to use in my
experiment because I could not find a .unmap_irq callback) this would
be the case:
- of_irq_parse_and_map_pci finds our APP interrupt
- this will enable the interrupt in the PCIe controller's APP register first
- then it will enable the interrupt in the parent interrupt controller
- when the PCIe card then frees the IRQ line again we will disable
both interrupts (the APP interrupt as well as the parent interrupt)

I don't see why of_irq_parse_and_map_pci + custom code to enable the
APP interrupt in .map_irq (without any .unmap_irq callback) is better
than always enabling it
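
for reference, a stripped-down sketch of the interrupt-controller approach
from my experiment (PCIE_APP_IRNEN and the pcie_app_wr_mask helper are
placeholder names, not the actual ones from your driver):

	static void intel_pcie_app_irq_unmask(struct irq_data *d)
	{
		struct intel_pcie_port *lpp = irq_data_get_irq_chip_data(d);

		/* enable the INTx bit in the APP register first ... */
		pcie_app_wr_mask(lpp, BIT(d->hwirq), BIT(d->hwirq),
				 PCIE_APP_IRNEN);
		/* ... then unmask it in the parent interrupt controller */
		irq_chip_unmask_parent(d);
	}

	static void intel_pcie_app_irq_mask(struct irq_data *d)
	{
		struct intel_pcie_port *lpp = irq_data_get_irq_chip_data(d);

		irq_chip_mask_parent(d);
		pcie_app_wr_mask(lpp, BIT(d->hwirq), 0, PCIE_APP_IRNEN);
	}

	static struct irq_chip intel_pcie_app_irq_chip = {
		.name		= "pcie_app",
		.irq_mask	= intel_pcie_app_irq_mask,
		.irq_unmask	= intel_pcie_app_irq_unmask,
	};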

[...]
> >>> This is needed. In the old driver, we fixed this by fixup. The original
> >>> comment as follows,
> >>>
> >>> /*
> >>>    * The root complex has a hardwired class of
> >>> PCI_CLASS_NETWORK_OTHER or
> >>>    * PCI_CLASS_BRIDGE_HOST, when it is operating as a root complex this
> >>>    * needs to be switched to PCI_CLASS_BRIDGE_PCI
> >>>    */
> >> that would be a good comment to add if you really need it
> >> can you please look at dw_pcie_setup_rc (from
> >> pcie-designware-host.c), it does:
> >>    /* Enable write permission for the DBI read-only register */
> >>    dw_pcie_dbi_ro_wr_en(pci);
> >>    /* Program correct class for RC */
> >>    dw_pcie_wr_own_conf(pp, PCI_CLASS_DEVICE, 2, PCI_CLASS_BRIDGE_PCI);
> >>    /* Better disable write permission right after the update */
> >>    dw_pcie_dbi_ro_wr_dis(pci);
> >>
> >> so my understanding is that there is a functional requirement to set
> >> the class to PCI_CLASS_BRIDGE_PCI
> >> however, that requirement is already covered by pcie-designware-host.c
> > I will task Dilip to check if we can use dwc one.
> dw_pcie_setup_rc() cannot be called here because it is not setting
> PCI_CLASS_BRIDGE_PCI alone; it is configuring many other things.
I am surprised to see that dw_pcie_setup_rc() is not used at all in your driver
it sets up BARs, bus numbers, interrupt pins, the command register, etc.
with my limited knowledge I assumed that these ("many other things")
are all mandatory so everything works correctly

what problems do you experience when you use dw_pcie_setup_rc()?

also it seems that virtually every PCIe driver based on the DWC
controller uses it:
$ grep -R dw_pcie_setup_rc drivers/pci/controller/dwc/ | wc -l
20
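
for illustration, this is roughly the shape I would expect when reusing
dw_pcie_setup_rc() (the intel_pcie_* names below are placeholders, not a
claim about your actual code):

	static int intel_pcie_rc_init(struct pcie_port *pp)
	{
		struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
		struct intel_pcie_port *lpp = dev_get_drvdata(pci->dev);
		int ret;

		/* platform specific bits: clocks, PHY, PERST#, APP registers */
		ret = intel_pcie_host_setup(lpp);
		if (ret)
			return ret;

		/* common DWC RC setup: BARs, bus numbers, bridge class, ... */
		dw_pcie_setup_rc(pp);

		return 0;
	}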


Martin

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v2 3/3] dwc: PCI: intel: Intel PCIe RC controller driver
  2019-08-27 20:38           ` Martin Blumenstingl
@ 2019-08-28  3:35             ` Chuan Hua, Lei
  2019-08-28 19:36               ` Martin Blumenstingl
  2019-08-29  5:10             ` Kishon Vijay Abraham I
  1 sibling, 1 reply; 23+ messages in thread
From: Chuan Hua, Lei @ 2019-08-28  3:35 UTC (permalink / raw)
  To: Martin Blumenstingl
  Cc: eswara.kota, andriy.shevchenko, cheol.yong.kim, devicetree,
	gustavo.pimentel, hch, jingoohan1, linux-kernel, linux-pci,
	qi-ming.wu, kishon

Hi Martin,

Thanks for your comment.

On 8/28/2019 4:38 AM, Martin Blumenstingl wrote:
> Hello,
>
> On Tue, Aug 27, 2019 at 5:09 AM Chuan Hua, Lei
> <chuanhua.lei@linux.intel.com> wrote:
>> Hi Martin,
>>
>> Thanks for your feedback. Please check the comments below.
>>
>> On 8/27/2019 5:15 AM, Martin Blumenstingl wrote:
>>> Hello,
>>>
>>> On Mon, Aug 26, 2019 at 5:31 AM Chuan Hua, Lei
>>> <chuanhua.lei@linux.intel.com> wrote:
>>>> Hi Martin,
>>>>
>>>> Thanks for your valuable comments. I reply to some of them below.
>>> you're welcome
>>>
>>> [...]
>>>>>> +config PCIE_INTEL_AXI
>>>>>> +        bool "Intel AHB/AXI PCIe host controller support"
>>>>> I believe that this is mostly the same IP block as it's used in Lantiq
>>>>> (xDSL) VRX200 SoCs (with MIPS cores) which was introduced in 2010
>>>>> (before Intel acquired Lantiq).
>>>>> This is why I would have personally called the driver PCIE_LANTIQ
>>>> VRX200 SoC (internally called VR9) was the first PCIe SoC product which
>>>> was using the Synopsys controller v3.30a. It only supports PCIe
>>>> Gen1.1/1.0. The PHY is an internal PHY from Infineon.
>>> thank you for these details
>>> I wasn't aware that the PCIe PHY on these SoCs was developed by
>>> Infineon nor is the DWC version documented anywhere
>> The VRX200/ARX300 PHY is an internal one. There is a lot of hardcoding
>> which came from the hardware people. From XRX500 onwards, we switched to
>> the Synopsys PHY. However, later the ComboPHY came into the picture. Even
>> though we have the same controller with different versions, we will most
>> likely have three different PHY drivers.
> that is a good argument for using a separate PHY driver and
> integrating that using the PHY subsystem (which is already the case in
> this patch revision)
Right. The ComboPHY (PRX300/Lightning Mountain) and SlimPHY (XRX350/550)
drivers are on the way upstream.
>
>>> [...]
>>>>>> +#define PCIE_CCRID                          0x8
>>>>>> +
>>>>>> +#define PCIE_LCAP                           0x7C
>>>>>> +#define PCIE_LCAP_MAX_LINK_SPEED            GENMASK(3, 0)
>>>>>> +#define PCIE_LCAP_MAX_LENGTH_WIDTH          GENMASK(9, 4)
>>>>>> +
>>>>>> +/* Link Control and Status Register */
>>>>>> +#define PCIE_LCTLSTS                                0x80
>>>>>> +#define PCIE_LCTLSTS_ASPM_ENABLE            GENMASK(1, 0)
>>>>>> +#define PCIE_LCTLSTS_RCB128                 BIT(3)
>>>>>> +#define PCIE_LCTLSTS_LINK_DISABLE           BIT(4)
>>>>>> +#define PCIE_LCTLSTS_COM_CLK_CFG            BIT(6)
>>>>>> +#define PCIE_LCTLSTS_HW_AW_DIS                      BIT(9)
>>>>>> +#define PCIE_LCTLSTS_LINK_SPEED                     GENMASK(19, 16)
>>>>>> +#define PCIE_LCTLSTS_NEGOTIATED_LINK_WIDTH  GENMASK(25, 20)
>>>>>> +#define PCIE_LCTLSTS_SLOT_CLK_CFG           BIT(28)
>>>>>> +
>>>>>> +#define PCIE_LCTLSTS2                               0xA0
>>>>>> +#define PCIE_LCTLSTS2_TGT_LINK_SPEED                GENMASK(3, 0)
>>>>>> +#define PCIE_LCTLSTS2_TGT_LINK_SPEED_25GT   0x1
>>>>>> +#define PCIE_LCTLSTS2_TGT_LINK_SPEED_5GT    0x2
>>>>>> +#define PCIE_LCTLSTS2_TGT_LINK_SPEED_8GT    0x3
>>>>>> +#define PCIE_LCTLSTS2_TGT_LINK_SPEED_16GT   0x4
>>>>>> +#define PCIE_LCTLSTS2_HW_AUTO_DIS           BIT(5)
>>>>>> +
>>>>>> +/* Ack Frequency Register */
>>>>>> +#define PCIE_AFR                            0x70C
>>>>>> +#define PCIE_AFR_FTS_NUM                    GENMASK(15, 8)
>>>>>> +#define PCIE_AFR_COM_FTS_NUM                        GENMASK(23, 16)
>>>>>> +#define PCIE_AFR_GEN12_FTS_NUM_DFT          (SZ_128 - 1)
>>>>>> +#define PCIE_AFR_GEN3_FTS_NUM_DFT           180
>>>>>> +#define PCIE_AFR_GEN4_FTS_NUM_DFT           196
>>>>>> +
>>>>>> +#define PCIE_PLCR_DLL_LINK_EN                       BIT(5)
>>>>>> +#define PCIE_PORT_LOGIC_FTS                 GENMASK(7, 0)
>>>>>> +#define PCIE_PORT_LOGIC_DFT_FTS_NUM         (SZ_128 - 1)
>>>>>> +
>>>>>> +#define PCIE_MISC_CTRL                              0x8BC
>>>>>> +#define PCIE_MISC_CTRL_DBI_RO_WR_EN         BIT(0)
>>>>>> +
>>>>>> +#define PCIE_MULTI_LANE_CTRL                        0x8C0
>>>>>> +#define PCIE_UPCONFIG_SUPPORT                       BIT(7)
>>>>>> +#define PCIE_DIRECT_LINK_WIDTH_CHANGE               BIT(6)
>>>>>> +#define PCIE_TARGET_LINK_WIDTH                      GENMASK(5, 0)
>>>>>> +
>>>>>> +#define PCIE_IOP_CTRL                               0x8C4
>>>>>> +#define PCIE_IOP_RX_STANDBY_CTRL            GENMASK(6, 0)
>>>> no need for IOP
>>> with "are you sure that you need any of the registers above?" I really
>>> meant all registers above (including, but not limited to IOP)
>>>
>>> [...]
>>>> As I mentioned, VRX200 was a very old PCIe Gen1.1 product. In our latest
>>>> SoC Lightning Mountain, we are using the Synopsys controller 5.20/5.50a.
>>>> We support Gen2 (XRX350/550), Gen3 (PRX300) and Gen4 (X86-based SoC). We
>>>> also support dual-lane and single-lane configurations.
>>>>
>>>> Some of the above registers are needed to control FTS, link width and
>>>> link speed.
>>> only now I noticed that I didn't explain why I was asking whether all
>>> these registers are needed
>>> my understanding of the DWC PCIe controller driver "library" is that:
>>> - all functionality which is provided by the DesignWare PCIe core
>>> should be added to drivers/pci/controller/dwc/pcie-designware*
>>> - functionality which is built on top/around the DWC PCIe core should
>>> be added to <vendor specific driver>
>>>
>>> the link width and link speed settings (I don't know about "FTS")
>>> don't seem Intel/Lantiq controller specific to me
>>> so the register setup for these bits should be moved to
>>> drivers/pci/controller/dwc/pcie-designware*
>> FTS means fast training sequence. Different generations will have
>> different FTS numbers. The common DWC driver has one default number for
>> all generations, which is not optimized.
> I am not a DWC PCIe driver expert, but it seems to me that this is
> exactly the reason why struct dw_pcie has a "version" field (which
> you're also filling).
> same as below: I'm interested in the DWC PCIe maintainer's opinion
>
>> The DWC driver handles link speed and link width during initialization.
>> It then leaves link speed and link width changes to the device (EP),
>> according to the PCIe spec. Not sure if other vendors or customers have
>> this kind of requirement. We implemented this due to a customer's
>> requirement.
>>
>> We can check with the DWC maintainer about this.
> thank you for the explanation.
> I am also interested in hearing the DWC PCIe maintainer's opinion on this topic
>
> [...]
>>> now I am wondering:
>>> - if we don't have to disable the interrupt line (once it is enabled),
>>> why can't we enable all of these interrupts at initialization time
>>> (instead of doing it on-demand)?
>> Good point! we can even remove the map_irq patch and directly call
>> of_irq_parse_and_map_pci as other drivers do.
>>
>>> - if the interrupts do have to be disabled again (that is what I
>>> assumed so far) then where is this supposed to happen? (my solution
>>> for this was to implement a simple interrupt controller within the
>>> PCIe driver which only supports enable/disable. disclaimer: I didn't
>>> ask the PCI or interrupt maintainers for feedback on this yet)
>>>
>>> [...]
>> We can implement an interrupt controller, but personally I think it has
>> too much overhead. If we follow this approach, almost every module of the
>> old Lantiq SoCs could implement its own interrupt controller. Maybe you
>> can check with the PCI maintainers for their comments.
> if we can enable the PCI_INTA/B/C/D interrupts unconditionally then
> you can switch to the standard of_irq_parse_and_map_pci implementation
> (as you already noted above).
> in that case the extra interrupt controller won't be needed.
>
> I have no idea how to test whether unconditionally enabling these
> interrupts (in the APP registers that is) causes any problems though.
> that's why I went the interrupt-controller route in my experiment.
> if the hardware works with a simplified version then I'm more than
> happy to use that
We will enable these interrupts. Simple is easier to handle.
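Something like this sketch (the register name and helper below are
placeholders, not the final ones):

	/* enable INTA/INTB/INTC/INTD once during host init */
	pcie_app_wr_mask(lpp, PCIE_APP_IRN_INT_MASK, PCIE_APP_IRN_INT_MASK,
			 PCIE_APP_IRNEN);

Then we can directly use of_irq_parse_and_map_pci without the custom
.map_irq callback.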
>
> [...]
>>>>>> +static int intel_pcie_ep_rst_init(struct intel_pcie_port *lpp)
>>>>>> +{
>>>>>> +    struct device *dev = lpp->pci->dev;
>>>>>> +    int ret = 0;
>>>>>> +
>>>>>> +    lpp->reset_gpio = devm_gpiod_get(dev, "reset", GPIOD_OUT_LOW);
>>>>>> +    if (IS_ERR(lpp->reset_gpio)) {
>>>>>> +            ret = PTR_ERR(lpp->reset_gpio);
>>>>>> +            if (ret != -EPROBE_DEFER)
>>>>>> +                    dev_err(dev, "failed to request PCIe GPIO: %d\n", ret);
>>>>>> +            return ret;
>>>>>> +    }
>>>>>> +    /* Make initial reset last for 100ms */
>>>>>> +    msleep(100);
>>>>> why is there lpp->rst_interval when you hardcode 100ms here?
>>>> They serve different purposes. rst_interval is purely for the asserted
>>>> reset pulse.
>>>>
>>>> Here 100ms is to make sure the initial state lasts at least 100ms, then
>>>> we can reset.
>>> my interpretation is that it totally depends on the board design or
>>> the bootloader setup.
>> Partially, you are right. However, we should not add a dependency on
>> the bootloader and board here. rst_interval is just to make sure the
>> pulse (low active or high active) lasts for the specified time.
> +Cc Kishon
>
> he recently added support for a GPIO reset line to the
> pcie-cadence-host.c [0] and I believe he's also maintaining
> pci-keystone.c, which are both using a 100us delay (instead of 100ms).
> I don't know the PCIe spec so maybe Kishon can comment on the values
> that should be used according to the spec.
> if there's then a reason why values other than the ones from the spec
> are needed then there should be a comment explaining why different
> values are needed (what problem does it solve).

The spec doesn't guide this part. It is a board or SoC specific setting.
100us should also work. The spec only requires that the reset duration
lasts 100ms. The idea is that before reset assert and deassert, we make
sure the default deassert status lasts some time. We took this value from
a hardware team suggestion a long time back. We can reduce this value to
100us, but we need to test it on the board.
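
To put the two delays into code form (a sketch of the current driver
logic, not new behavior):

	lpp->reset_gpio = devm_gpiod_get(dev, "reset", GPIOD_OUT_LOW);
	/* initial deassert state settles for 100ms (hardware team suggestion) */
	msleep(100);

	/* later, the actual reset pulse honors rst_interval */
	gpiod_set_value_cansleep(lpp->reset_gpio, 1);	/* assert PERST# */
	msleep(lpp->rst_interval);
	gpiod_set_value_cansleep(lpp->reset_gpio, 0);	/* deassert PERST# */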

>
>>> on a board where the bootloader initializes the GPIO to logical "0"
>>> the devm_gpiod_get() call will not change the GPIO output.
>>> in this case a 100ms delay may be OK (based on your description)
>>>
>>> however, if the GPIO was a logical "1" (for example if the bootloader
>>> set it to that value - and considering the GPIOD_OUT_LOW flag) then it
>>> will be set to "0" with the devm_gpiod_get() call above.
>>> now there is a transition from "deasserted" to "asserted" which does
>>> not honor lpp->rst_interval
>>>
>>> I'm not sure if this is a problem or not - all I know is that I don't
>>> fully understand the problem yet
>>>>> [...]
>>>>>> +static int intel_pcie_setup_irq(struct intel_pcie_port *lpp)
>>>>>> +{
>>>>>> +    struct device *dev = lpp->pci->dev;
>>>>>> +    struct platform_device *pdev;
>>>>>> +    char *irq_name;
>>>>>> +    int irq, ret;
>>>>>> +
>>>>>> +    pdev = to_platform_device(dev);
>>>>>> +    irq = platform_get_irq(pdev, 0);
>>>>>> +    if (irq < 0) {
>>>>>> +            dev_err(dev, "missing sys integrated irq resource\n");
>>>>>> +            return irq;
>>>>>> +    }
>>>>>> +
>>>>>> +    irq_name = devm_kasprintf(dev, GFP_KERNEL, "pcie_misc%d", lpp->id);
>>>>>> +    if (!irq_name) {
>>>>>> +            dev_err(dev, "failed to alloc irq name\n");
>>>>>> +            return -ENOMEM;
>>>>> you are only requesting one IRQ line for the whole driver. personally
>>>>> I would drop the custom irq_name and pass NULL to devm_request_irq
>>>>> because that will then fall-back to auto-generating an IRQ name based
>>>>> on the devicetree node. I assume it's the same for ACPI but I haven't
>>>>> tried that yet.
>>>> Not sure I understand what you mean. As you know from the code, we have
>>>> lpp->id, which means we have multiple instances of the Root Complex
>>>> (1, 2, 3, 4, 8), so we need this for identification.
>>> sorry, I was wrong with my original statement, the name cannot be NULL
>>>
>>> I checked the other drivers (meson-gx-mmc and meson-saradc) I had in
>>> mind and they use dev_name(&pdev->dev);
>>> that will give a unique interrupt name (derived from the devicetree)
>>> in /proc/interrupts, for example: c1108680.adc, d0070000.mmc,
>>> d0072000.mmc, ...
>>>
>>> [...]
>> Right. We also use dev_name in our code. However, some people like numbering
>> the interfaces, which is easier for them to remember and discuss. I link the
>> id to the domain so that we can easily know what is wrong once we have
>> issues. When we tell the address to hardware people and support staff, they
>> are totally lost.
> ah, this also explains why linux,pci-domain is a mandatory property
> (while it's optional for any other PCIe controller driver that I have
> seen so far)
Right. Imagine if you have 8 RCs on the board: how should we communicate
with the hardware/support people? :)
>
>> Again, it is ok to switch to dev_name.
> both ways will work, I just wanted to point out that you can achieve a
> similar goal with less code.
> if the current solution works best for your support team then I'm fine
> with that as well
>
> [...]
>>>>>> +static void __intel_pcie_remove(struct intel_pcie_port *lpp)
>>>>>> +{
>>>>>> +    pcie_rc_cfg_wr_mask(lpp, PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER,
>>>>>> +                        0, PCI_COMMAND);
>>>>> I expect logic like this to be part of the PCI subsystem in Linux.
>>>>> why is this needed?
>>>>>
>>>>> [...]
>>>> In the bind/unbind case we use this. For extreme cases, we use unbind
>>>> and bind to reset PCI instead of rebooting.
>>> OK, but this does not seem Intel/Lantiq specific at all
>>> why isn't this managed by either pcie-designware-host.c or the generic
>>> PCI/PCIe subsystem in Linux?
>> I doubt if other RC drivers will support bind/unbind. We do have this
>> requirement due to power management from WiFi devices.
> pcie-designware-host.c will gain .remove() support in Linux 5.4
> I don't understand how .remove() and then .probe() again is different
> from .unbind() followed by a .bind()
Good. If this is the case, bind/unbind eventually goes to probe/remove, 
so we can remove this.
>
>
> Martin
>
> [0] https://lkml.org/lkml/2019/6/4/626

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v2 3/3] dwc: PCI: intel: Intel PCIe RC controller driver
  2019-08-28  3:35             ` Chuan Hua, Lei
@ 2019-08-28 19:36               ` Martin Blumenstingl
  2019-08-29  2:54                 ` Chuan Hua, Lei
  0 siblings, 1 reply; 23+ messages in thread
From: Martin Blumenstingl @ 2019-08-28 19:36 UTC (permalink / raw)
  To: Chuan Hua, Lei
  Cc: eswara.kota, andriy.shevchenko, cheol.yong.kim, devicetree,
	gustavo.pimentel, hch, jingoohan1, linux-kernel, linux-pci,
	qi-ming.wu, kishon

On Wed, Aug 28, 2019 at 5:35 AM Chuan Hua, Lei
<chuanhua.lei@linux.intel.com> wrote:
[...]
> >>>>>> +static int intel_pcie_ep_rst_init(struct intel_pcie_port *lpp)
> >>>>>> +{
> >>>>>> +    struct device *dev = lpp->pci->dev;
> >>>>>> +    int ret = 0;
> >>>>>> +
> >>>>>> +    lpp->reset_gpio = devm_gpiod_get(dev, "reset", GPIOD_OUT_LOW);
> >>>>>> +    if (IS_ERR(lpp->reset_gpio)) {
> >>>>>> +            ret = PTR_ERR(lpp->reset_gpio);
> >>>>>> +            if (ret != -EPROBE_DEFER)
> >>>>>> +                    dev_err(dev, "failed to request PCIe GPIO: %d\n", ret);
> >>>>>> +            return ret;
> >>>>>> +    }
> >>>>>> +    /* Make initial reset last for 100ms */
> >>>>>> +    msleep(100);
> >>>>> why is there lpp->rst_interval when you hardcode 100ms here?
> >>>> They serve different purposes. rst_interval is purely for the asserted
> >>>> reset pulse.
> >>>>
> >>>> Here 100ms is to make sure the initial state lasts at least 100ms, then
> >>>> we can reset.
> >>> my interpretation is that it totally depends on the board design or
> >>> the bootloader setup.
> >> Partially, you are right. However, we should not add a dependency on
> >> the bootloader and board here. rst_interval is just to make sure the
> >> pulse (low active or high active) lasts for the specified time.
> > +Cc Kishon
> >
> > he recently added support for a GPIO reset line to the
> > pcie-cadence-host.c [0] and I believe he's also maintaining
> > pci-keystone.c, which are both using a 100us delay (instead of 100ms).
> > I don't know the PCIe spec so maybe Kishon can comment on the values
> > that should be used according to the spec.
> > if there's then a reason why values other than the ones from the spec
> > are needed then there should be a comment explaining why different
> > values are needed (what problem does it solve).
>
> The spec doesn't guide this part. It is a board or SoC specific setting.
> 100us should also work. The spec only requires that the reset duration
> lasts 100ms. The idea is that before reset assert and deassert, we make
> sure the default deassert status lasts some time. We took this value from
> a hardware team suggestion a long time back. We can reduce this value to
> 100us, but we need to test it on the board.
OK. I don't know how other PCI controller drivers manage this. if the
PCI maintainers are happy with this then I am as well
maybe it's worth changing the comment to indicate that this delay was
suggested by the hardware team (so it's clear that this is not coming
from the PCI spec)

[...]
> >>>>>> +static void __intel_pcie_remove(struct intel_pcie_port *lpp)
> >>>>>> +{
> >>>>>> +    pcie_rc_cfg_wr_mask(lpp, PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER,
> >>>>>> +                        0, PCI_COMMAND);
> >>>>> I expect logic like this to be part of the PCI subsystem in Linux.
> >>>>> why is this needed?
> >>>>>
> >>>>> [...]
> >>>> In the bind/unbind case we use this. For extreme cases, we use unbind
> >>>> and bind to reset PCI instead of rebooting.
> >>> OK, but this does not seem Intel/Lantiq specific at all
> >>> why isn't this managed by either pcie-designware-host.c or the generic
> >>> PCI/PCIe subsystem in Linux?
> >> I doubt if other RC drivers will support bind/unbind. We do have this
> >> requirement due to power management from WiFi devices.
> > pcie-designware-host.c will gain .remove() support in Linux 5.4
> > I don't understand how .remove() and then .probe() again is different
> > from .unbind() followed by a .bind()
> Good. If this is the case, bind/unbind eventually goes to probe/remove,
> so we can remove this.
OK. as far as I understand you need to call dw_pcie_host_deinit from
the .remove() callback (which is missing in this version)
(I'm using drivers/pci/controller/dwc/pcie-tegra194.c as an example,
this driver is in linux-next and thus will appear in Linux 5.4)
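
a minimal sketch of what I mean (intel_pcie_core_deinit is a placeholder
for whatever undoes the clock/PHY/PERST# setup from probe):

	static int intel_pcie_remove(struct platform_device *pdev)
	{
		struct intel_pcie_port *lpp = platform_get_drvdata(pdev);
		struct pcie_port *pp = &lpp->pci->pp;

		dw_pcie_host_deinit(pp);
		intel_pcie_core_deinit(lpp);

		return 0;
	}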


Martin

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v2 3/3] dwc: PCI: intel: Intel PCIe RC controller driver
  2019-08-28 19:36               ` Martin Blumenstingl
@ 2019-08-29  2:54                 ` Chuan Hua, Lei
       [not found]                   ` <4bab775a-0e39-a187-0791-40050feb7d67@linux.intel.com>
  0 siblings, 1 reply; 23+ messages in thread
From: Chuan Hua, Lei @ 2019-08-29  2:54 UTC (permalink / raw)
  To: Martin Blumenstingl
  Cc: eswara.kota, andriy.shevchenko, cheol.yong.kim, devicetree,
	gustavo.pimentel, hch, jingoohan1, linux-kernel, linux-pci,
	qi-ming.wu, kishon


On 8/29/2019 3:36 AM, Martin Blumenstingl wrote:
> On Wed, Aug 28, 2019 at 5:35 AM Chuan Hua, Lei
> <chuanhua.lei@linux.intel.com> wrote:
> [...]
>>>>>>>> +static int intel_pcie_ep_rst_init(struct intel_pcie_port *lpp)
>>>>>>>> +{
>>>>>>>> +    struct device *dev = lpp->pci->dev;
>>>>>>>> +    int ret = 0;
>>>>>>>> +
>>>>>>>> +    lpp->reset_gpio = devm_gpiod_get(dev, "reset", GPIOD_OUT_LOW);
>>>>>>>> +    if (IS_ERR(lpp->reset_gpio)) {
>>>>>>>> +            ret = PTR_ERR(lpp->reset_gpio);
>>>>>>>> +            if (ret != -EPROBE_DEFER)
>>>>>>>> +                    dev_err(dev, "failed to request PCIe GPIO: %d\n", ret);
>>>>>>>> +            return ret;
>>>>>>>> +    }
>>>>>>>> +    /* Make initial reset last for 100ms */
>>>>>>>> +    msleep(100);
>>>>>>> why is there lpp->rst_interval when you hardcode 100ms here?
>>>>>> They serve different purposes. rst_interval is purely for the asserted
>>>>>> reset pulse.
>>>>>>
>>>>>> Here 100ms is to make sure the initial state lasts at least 100ms, then
>>>>>> we can reset.
>>>>> my interpretation is that it totally depends on the board design or
>>>>> the bootloader setup.
>>>> Partially, you are right. However, we should not add a dependency on
>>>> the bootloader and board here. rst_interval is just to make sure the
>>>> pulse (low active or high active) lasts for the specified time.
>>> +Cc Kishon
>>>
>>> he recently added support for a GPIO reset line to the
>>> pcie-cadence-host.c [0] and I believe he's also maintaining
>>> pci-keystone.c, which are both using a 100us delay (instead of 100ms).
>>> I don't know the PCIe spec so maybe Kishon can comment on the values
>>> that should be used according to the spec.
>>> if there's then a reason why values other than the ones from the spec
>>> are needed then there should be a comment explaining why different
>>> values are needed (what problem does it solve).
>> The spec doesn't guide this part. It is a board or SoC specific setting.
>> 100us should also work. The spec only requires that the reset duration
>> lasts 100ms. The idea is that before reset assert and deassert, we make
>> sure the default deassert status lasts some time. We took this value from
>> a hardware team suggestion a long time back. We can reduce this value to
>> 100us, but we need to test it on the board.
> OK. I don't know how other PCI controller drivers manage this. if the
> PCI maintainers are happy with this then I am as well
> maybe it's worth changing the comment to indicate that this delay was
> suggested by the hardware team (so it's clear that this is not coming
> from the PCI spec)
Dilip will change to a 100us delay and run the test. I also need to run
some tests on the old boards (XRX350/550/PRX300) to confirm this has no
impact on functionality.
> [...]
>>>>>>>> +static void __intel_pcie_remove(struct intel_pcie_port *lpp)
>>>>>>>> +{
>>>>>>>> +    pcie_rc_cfg_wr_mask(lpp, PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER,
>>>>>>>> +                        0, PCI_COMMAND);
>>>>>>> I expect logic like this to be part of the PCI subsystem in Linux.
>>>>>>> why is this needed?
>>>>>>>
>>>>>>> [...]
>>>>>> In the bind/unbind case we use this. For extreme cases, we use unbind
>>>>>> and bind to reset PCI instead of rebooting.
>>>>> OK, but this does not seem Intel/Lantiq specific at all
>>>>> why isn't this managed by either pcie-designware-host.c or the generic
>>>>> PCI/PCIe subsystem in Linux?
>>>> I doubt if other RC drivers will support bind/unbind. We do have this
>>>> requirement due to power management from WiFi devices.
>>> pcie-designware-host.c will gain .remove() support in Linux 5.4
>>> I don't understand how .remove() and then .probe() again is different
>>> from .unbind() followed by a .bind()
>> Good. If this is the case, bind/unbind eventually goes to probe/remove,
>> so we can remove this.
> OK. as far as I understand you need to call dw_pcie_host_deinit from
> the .remove() callback (which is missing in this version)
> (I'm using drivers/pci/controller/dwc/pcie-tegra194.c as an example,
> this driver is in linux-next and thus will appear in Linux 5.4)
Thanks for your information. We should adapt this in the next version.
>
>
> Martin

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v2 3/3] dwc: PCI: intel: Intel PCIe RC controller driver
  2019-08-27 20:38           ` Martin Blumenstingl
  2019-08-28  3:35             ` Chuan Hua, Lei
@ 2019-08-29  5:10             ` Kishon Vijay Abraham I
  2019-08-29 21:01               ` Martin Blumenstingl
  1 sibling, 1 reply; 23+ messages in thread
From: Kishon Vijay Abraham I @ 2019-08-29  5:10 UTC (permalink / raw)
  To: Martin Blumenstingl, Chuan Hua, Lei
  Cc: eswara.kota, andriy.shevchenko, cheol.yong.kim, devicetree,
	gustavo.pimentel, hch, jingoohan1, linux-kernel, linux-pci,
	qi-ming.wu

Hi Martin,

On 28/08/19 2:08 AM, Martin Blumenstingl wrote:
> Hello,
> 
> On Tue, Aug 27, 2019 at 5:09 AM Chuan Hua, Lei
> <chuanhua.lei@linux.intel.com> wrote:
>>
>> Hi Martin,
>>
>> Thanks for your feedback. Please check the comments below.
>>
>> On 8/27/2019 5:15 AM, Martin Blumenstingl wrote:
>>> Hello,
>>>
>>> On Mon, Aug 26, 2019 at 5:31 AM Chuan Hua, Lei
>>> <chuanhua.lei@linux.intel.com> wrote:
>>>> Hi Martin,
>>>>
>>>> Thanks for your valuable comments. I reply to some of them below.
>>> you're welcome
>>>
>>> [...]
>>>>>> +config PCIE_INTEL_AXI
>>>>>> +        bool "Intel AHB/AXI PCIe host controller support"
>>>>> I believe that this is mostly the same IP block as it's used in Lantiq
>>>>> (xDSL) VRX200 SoCs (with MIPS cores) which was introduced in 2010
>>>>> (before Intel acquired Lantiq).
>>>>> This is why I would have personally called the driver PCIE_LANTIQ
>>>> VRX200 SoC (internally called VR9) was the first PCIe SoC product which
>>>> was using the Synopsys controller v3.30a. It only supports PCIe
>>>> Gen1.1/1.0. The PHY is an internal PHY from Infineon.
>>> thank you for these details
>>> I wasn't aware that the PCIe PHY on these SoCs was developed by
>>> Infineon nor is the DWC version documented anywhere
>>
>> The VRX200/ARX300 PHY is an internal one. There is a lot of hardcoding
>> which came from the hardware people. From XRX500 onwards, we switched to
>> the Synopsys PHY. However, later the ComboPHY came into the picture. Even
>> though we have the same controller with different versions, we will most
>> likely have three different PHY drivers.
> that is a good argument for using a separate PHY driver and
> integrating that using the PHY subsystem (which is already the case in
> this patch revision)
> 
<snip>
>>>>>> +static int intel_pcie_ep_rst_init(struct intel_pcie_port *lpp)
>>>>>> +{
>>>>>> +    struct device *dev = lpp->pci->dev;
>>>>>> +    int ret = 0;
>>>>>> +
>>>>>> +    lpp->reset_gpio = devm_gpiod_get(dev, "reset", GPIOD_OUT_LOW);
>>>>>> +    if (IS_ERR(lpp->reset_gpio)) {
>>>>>> +            ret = PTR_ERR(lpp->reset_gpio);
>>>>>> +            if (ret != -EPROBE_DEFER)
>>>>>> +                    dev_err(dev, "failed to request PCIe GPIO: %d\n", ret);
>>>>>> +            return ret;
>>>>>> +    }
>>>>>> +    /* Make initial reset last for 100ms */
>>>>>> +    msleep(100);
>>>>> why is there lpp->rst_interval when you hardcode 100ms here?
>>>> They serve different purposes. rst_interval is purely for the asserted
>>>> reset pulse.
>>>>
>>>> Here 100ms is to make sure the initial state lasts at least 100ms, then
>>>> we can reset.
>>> my interpretation is that it totally depends on the board design or
>>> the bootloader setup.
>>
>> Partially, you are right. However, we should not add a dependency on
>> the bootloader and board here. rst_interval is just to make sure the
>> pulse (low active or high active) lasts for the specified time.
> +Cc Kishon
> 
> he recently added support for a GPIO reset line to the
> pcie-cadence-host.c [0] and I believe he's also maintaining
> > pci-keystone.c, which are both using a 100us delay (instead of 100ms).
> I don't know the PCIe spec so maybe Kishon can comment on the values
> that should be used according to the spec.
> if there's then a reason why values other than the ones from the spec
> are needed then there should be a comment explaining why different
> values are needed (what problem does it solve).

The PCI EXPRESS CARD ELECTROMECHANICAL SPECIFICATION defines the Power
Sequencing and Reset Signal Timings in Table 2-4. Please also refer to
Figure 2-10: Power Up of the CEM.

╔═════════════╤══════════════════════════════════════╤═════╤═════╤═══════╗
║ Symbol      │ Parameter                            │ Min │ Max │ Units ║
╠═════════════╪══════════════════════════════════════╪═════╪═════╪═══════╣
║ T PVPERL    │ Power stable to PERST# inactive      │ 100 │     │ ms    ║
╟─────────────┼──────────────────────────────────────┼─────┼─────┼───────╢
║ T PERST-CLK │ REFCLK stable before PERST# inactive │ 100 │     │ μs    ║
╟─────────────┼──────────────────────────────────────┼─────┼─────┼───────╢
║ T PERST     │ PERST# active time                   │ 100 │     │ μs    ║
╟─────────────┼──────────────────────────────────────┼─────┼─────┼───────╢
║ T FAIL      │ Power level invalid to PERST# active │     │ 500 │ ns    ║
╟─────────────┼──────────────────────────────────────┼─────┼─────┼───────╢
║ T WKRF      │ WAKE# rise – fall time               │     │ 100 │ ns    ║
╚═════════════╧══════════════════════════════════════╧═════╧═════╧═══════╝

In my code I used T PERST-CLK (i.e. REFCLK stable before PERST# inactive).
REFCLK to the card is enabled as part of the PHY enable, and then we wait
for 100μs before making PERST# inactive.

Power to the device is given during board power up, and the assumption here
is that it will take more than 100ms for the probe to be invoked after board
power up (i.e. after the ROM, bootloaders and Linux kernel). But if you have
a regulator that is enabled in the PCI probe, then T PVPERL (100ms) should
also be used in probe.
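
As a rough code sketch of that sequence (pcie->phy and pcie->reset_gpio
are placeholders; gpiolib handles the PERST# polarity via the active-low
flag from the devicetree):

	/* if a regulator powers the slot here, honor T PVPERL first */
	msleep(100);

	phy_power_on(pcie->phy);	/* REFCLK becomes stable */
	usleep_range(100, 200);		/* T PERST-CLK: at least 100 us */

	gpiod_set_value_cansleep(pcie->reset_gpio, 0);	/* PERST# inactive */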

Thanks
Kishon

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v2 3/3] dwc: PCI: intel: Intel PCIe RC controller driver
  2019-08-29  5:10             ` Kishon Vijay Abraham I
@ 2019-08-29 21:01               ` Martin Blumenstingl
  0 siblings, 0 replies; 23+ messages in thread
From: Martin Blumenstingl @ 2019-08-29 21:01 UTC (permalink / raw)
  To: Kishon Vijay Abraham I
  Cc: Chuan Hua, Lei, eswara.kota, andriy.shevchenko, cheol.yong.kim,
	devicetree, gustavo.pimentel, hch, jingoohan1, linux-kernel,
	linux-pci, qi-ming.wu

Hi Kishon,

On Thu, Aug 29, 2019 at 7:10 AM Kishon Vijay Abraham I <kishon@ti.com> wrote:
[...]
> The PCI EXPRESS CARD ELECTROMECHANICAL SPECIFICATION defines the Power
> Sequencing and Reset Signal Timings in Table 2-4. Please also refer to
> Figure 2-10: Power Up of the CEM.
>
> ╔═════════════╤══════════════════════════════════════╤═════╤═════╤═══════╗
> ║ Symbol      │ Parameter                            │ Min │ Max │ Units ║
> ╠═════════════╪══════════════════════════════════════╪═════╪═════╪═══════╣
> ║ T PVPERL    │ Power stable to PERST# inactive      │ 100 │     │ ms    ║
> ╟─────────────┼──────────────────────────────────────┼─────┼─────┼───────╢
> ║ T PERST-CLK │ REFCLK stable before PERST# inactive │ 100 │     │ μs    ║
> ╟─────────────┼──────────────────────────────────────┼─────┼─────┼───────╢
> ║ T PERST     │ PERST# active time                   │ 100 │     │ μs    ║
> ╟─────────────┼──────────────────────────────────────┼─────┼─────┼───────╢
> ║ T FAIL      │ Power level invalid to PERST# active │     │ 500 │ ns    ║
> ╟─────────────┼──────────────────────────────────────┼─────┼─────┼───────╢
> ║ T WKRF      │ WAKE# rise – fall time               │     │ 100 │ ns    ║
> ╚═════════════╧══════════════════════════════════════╧═════╧═════╧═══════╝
>
> In my code I used T PERST-CLK (i.e. REFCLK stable before PERST# inactive).
> REFCLK to the card is enabled as part of the PHY enable, and then we wait
> for 100μs before making PERST# inactive.
>
> Power to the device is given during board power up, and the assumption here
> is that it will take more than 100ms for the probe to be invoked after board
> power up (i.e. after the ROM, bootloaders and Linux kernel). But if you have
> a regulator that is enabled in the PCI probe, then T PVPERL (100ms) should
> also be used in probe.
thank you for this detailed overview and for the explanation about the
assumptions you made (and why)


Martin

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v2 3/3] dwc: PCI: intel: Intel PCIe RC controller driver
       [not found]                   ` <4bab775a-0e39-a187-0791-40050feb7d67@linux.intel.com>
@ 2019-09-03 18:36                     ` Martin Blumenstingl
  0 siblings, 0 replies; 23+ messages in thread
From: Martin Blumenstingl @ 2019-09-03 18:36 UTC (permalink / raw)
  To: Dilip Kota
  Cc: Chuan Hua, Lei, andriy.shevchenko, cheol.yong.kim, devicetree,
	gustavo.pimentel, hch, jingoohan1, linux-kernel, linux-pci,
	qi-ming.wu, kishon

Hi Dilip,

On Tue, Sep 3, 2019 at 12:20 PM Dilip Kota <eswara.kota@linux.intel.com> wrote:
>
> Hi Martin,
>
> On 8/29/2019 10:54 AM, Chuan Hua, Lei wrote:
>
>
> On 8/29/2019 3:36 AM, Martin Blumenstingl wrote:
>
> On Wed, Aug 28, 2019 at 5:35 AM Chuan Hua, Lei
> <chuanhua.lei@linux.intel.com> wrote:
> [...]
>
> +static int intel_pcie_ep_rst_init(struct intel_pcie_port *lpp)
> +{
> +    struct device *dev = lpp->pci->dev;
> +    int ret = 0;
> +
> +    lpp->reset_gpio = devm_gpiod_get(dev, "reset", GPIOD_OUT_LOW);
> +    if (IS_ERR(lpp->reset_gpio)) {
> +            ret = PTR_ERR(lpp->reset_gpio);
> +            if (ret != -EPROBE_DEFER)
> +                    dev_err(dev, "failed to request PCIe GPIO: %d\n", ret);
> +            return ret;
> +    }
> +    /* Make initial reset last for 100ms */
> +    msleep(100);
>
> why is there lpp->rst_interval when you hardcode 100ms here?
>
> They serve different purposes. rst_interval is purely for the asserted
> reset pulse.
>
> Here 100ms is to make sure the initial state lasts at least 100ms, then
> we can reset.
>
> my interpretation is that it totally depends on the board design or
> the bootloader setup.
>
> Partially, you are right. However, we should not add a dependency on
> the bootloader and board here. rst_interval is just to make sure the
> pulse (low active or high active) lasts for the specified time.
>
> +Cc Kishon
>
> he recently added support for a GPIO reset line to the
> pcie-cadence-host.c [0] and I believe he's also maintaining
> pci-keystone.c, which are both using a 100us delay (instead of 100ms).
> I don't know the PCIe spec so maybe Kishon can comment on the values
> that should be used according to the spec.
> if there's then a reason why values other than the ones from the spec
> are needed then there should be a comment explaining why different
> values are needed (what problem does it solve).
>
> The spec doesn't guide this part. It is a board or SoC specific setting.
> 100us should also work. The spec only requires that the reset duration
> lasts 100ms. The idea is that before reset assert and deassert, we make
> sure the default deassert status lasts some time. We took this value from
> a hardware team suggestion a long time back. We can reduce this value to
> 100us, but we need to test it on the board.
>
> OK. I don't know how other PCI controller drivers manage this. if the
> PCI maintainers are happy with this then I am as well
> maybe it's worth changing the comment to indicate that this delay was
> suggested by the hardware team (so it's clear that this is not coming
> from the PCI spec)
>
> Dilip will change to a 100us delay and run the test. I also need to run some tests on the old boards (XRX350/550/PRX300) to confirm this has no impact on functionality.
>
> I have tested 100us on the target and it is working fine.
> Along with this change, I have validated the below changes and the test is successful.
>     Enabling the A/B/C/D interrupts during the initialization instead of in map_irq()
>     Calling dw_pcie_setup_rc() function during initialization.
>
> I will push these changes in the next patch version.
great, thank you for working on simplifying the code!

> And, regarding [1]:
> I have checked the code for using regmap; helper functions, especially
> update_bits(), cannot be avoided (they are required while configuring the
> PCIe RC registers too), and LGM is little-endian.
> Switching to regmap does not bring any gain.
OK, if it doesn't help you for LGM then no need to switch to regmap now
I can still do it afterwards when adding support for other SoCs
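
(for completeness, the conversion would be mechanical anyway; assuming a
regmap created over the same register block:

	/* open-coded update_bits() style helper */
	val = readl(lpp->app_base + ofs);
	val = (val & ~mask) | (bits & mask);
	writel(val, lpp->app_base + ofs);

	/* regmap equivalent */
	regmap_update_bits(lpp->app_regmap, ofs, mask, bits);

so it can still be done later without changing behaviour)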

> Regarding [2]:
> PCIE_SPEED2STR() is quite different from pcie_link_gen_to_str().
> PCIE_SPEED2STR() expects an encoded value defined in the pcie_link_speed[]
> array in probe.c, whereas pcie_link_gen_to_str() is a direct mapping of the
> register bits value.
> pcie_link_gen_to_str() is pretty simple and straightforward.
>
> And none of the PCIe controller drivers use either PCIE_SPEED2STR() or
> pcie_link_speed[].
OK, I see - thank you for following up
the PCI maintainers need to decide whether pcie_link_status_show is
acceptable (instead of using lspci) - that's the only place where
pcie_link_gen_to_str is used
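
(for anyone following along: pcie_link_gen_to_str maps the link speed
field from the register directly to a string, roughly along these lines -
a sketch, not the exact code from the patch:

	static const char *pcie_link_gen_to_str(u32 gen)
	{
		switch (gen) {
		case 1: return "2.5";
		case 2: return "5.0";
		case 3: return "8.0";
		case 4: return "16.0";
		default: return "???";
		}
	}

while PCIE_SPEED2STR() expects the encoded values from probe.c)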


Martin

^ permalink raw reply	[flat|nested] 23+ messages in thread

end of thread, other threads:[~2019-09-03 18:36 UTC | newest]

Thread overview: 23+ messages
2019-08-20  9:39 [PATCH v2 0/3] PCI: Add map irq callback in dwc framework and add Intel PCIe driver Dilip Kota
2019-08-20  9:39 ` [PATCH v2 1/3] PCI: dwc: Add map irq callback Dilip Kota
2019-08-20  9:39 ` [PATCH v2 2/3] dt-bindings: PCI: intel: Add YAML schemas for the PCIe RC controller Dilip Kota
2019-08-20 13:42   ` Rob Herring
2019-08-21  9:56     ` Dilip Kota
2019-08-20  9:39 ` [PATCH v2 3/3] dwc: PCI: intel: Intel PCIe RC controller driver Dilip Kota
2019-08-24 21:03   ` Martin Blumenstingl
2019-08-26  3:30     ` Chuan Hua, Lei
2019-08-26  6:48       ` Dilip Kota
     [not found]       ` <f1cb5ba9-b57a-971a-5a2f-1f13e0cc9507@linux.intel.com>
2019-08-26 20:14         ` Martin Blumenstingl
2019-08-27  9:14           ` Dilip Kota
2019-08-26 21:15       ` Martin Blumenstingl
2019-08-27  3:09         ` Chuan Hua, Lei
2019-08-27  8:47           ` Dilip Kota
2019-08-27 20:51             ` Martin Blumenstingl
2019-08-27 20:38           ` Martin Blumenstingl
2019-08-28  3:35             ` Chuan Hua, Lei
2019-08-28 19:36               ` Martin Blumenstingl
2019-08-29  2:54                 ` Chuan Hua, Lei
     [not found]                   ` <4bab775a-0e39-a187-0791-40050feb7d67@linux.intel.com>
2019-09-03 18:36                     ` Martin Blumenstingl
2019-08-29  5:10             ` Kishon Vijay Abraham I
2019-08-29 21:01               ` Martin Blumenstingl
2019-08-27 14:28         ` Andy Shevchenko
