linux-kernel.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
* [PATCH 0/5] PCI: Add support to the Cadence PCIe controller
@ 2017-11-23 15:01 Cyrille Pitchen
  2017-11-23 15:01 ` [PATCH 1/5] PCI: Add vendor ID for Cadence Cyrille Pitchen
                   ` (5 more replies)
  0 siblings, 6 replies; 39+ messages in thread
From: Cyrille Pitchen @ 2017-11-23 15:01 UTC (permalink / raw)
  To: bhelgaas, kishon, lorenzo.pieralisi, linux-pci
  Cc: adouglas, stelford, dgary, kgopi, eandrews, thomas.petazzoni,
	sureshp, nsekhar, linux-kernel, robh, devicetree,
	Cyrille Pitchen

Hi all,

this series of patches adds support to the Cadence PCIe controller.
It was tested on a ARM64 platform emulated by a Palladium running both
linux-next (next-20171123) and pci-next kernels.

The host mode was tested with some PCIe devices connected to the Palladium
through a speed-bridge. Some of those devices were a USB host controller
and a SATA controller. The PCIe host controller was also tested with a
second controller configured in endpoint mode and connected back to back
to the first controller.

The EndPoint Controller (EPC) driver of this series was tested with the
pci-epf-test.c EndPoint Function (EPF) driver and the pcitest userspace
program.

For linux-next, I applied this series on top of Kishon's patch
("PCI: endpoint: Use EPC's device in dma_alloc_coherent/dma_free_coherent")
otherwise dma_alloc_coherent() fails when called by pci_epf_alloc_space().

Also, I patched drivers/Makefile rather than drivers/pci/Makefile to make
the drivers/pci/cadence/pcie-cadence-ep.o linked after
drivers/pci/endpoint/*.o objects, otherwise the built-in pci-cadence-ep
driver would be probed before the PCI endpoint framework would have been
initialized, which results in a kernel crash.
I guess this is the reason why the "pci/dwc" line was also put in
drivers/Makefile, right after the "pci/endpoint" line.

Best regards,

Cyrille

Cyrille Pitchen (4):
  PCI: Add vendor ID for Cadence
  PCI: cadence: Add host driver for Cadence PCIe controller
  dt-bindings: PCI: cadence: Add DT bindings for Cadence PCIe endpoint
    controller
  PCI: cadence: add EndPoint Controller driver for Cadence PCIe
    controller

Scott Telford (1):
  dt-bindings: PCI: cadence: Add DT bindings for Cadence PCIe host
    controller

 .../devicetree/bindings/pci/cdns,cdns-pcie-ep.txt  |  20 +
 .../bindings/pci/cdns,cdns-pcie-host.txt           |  54 ++
 drivers/Makefile                                   |   1 +
 drivers/pci/Kconfig                                |   1 +
 drivers/pci/cadence/Kconfig                        |  33 ++
 drivers/pci/cadence/Makefile                       |   3 +
 drivers/pci/cadence/pcie-cadence-ep.c              | 553 +++++++++++++++++++++
 drivers/pci/cadence/pcie-cadence-host.c            | 425 ++++++++++++++++
 drivers/pci/cadence/pcie-cadence.c                 | 110 ++++
 drivers/pci/cadence/pcie-cadence.h                 | 325 ++++++++++++
 include/linux/pci_ids.h                            |   2 +
 11 files changed, 1527 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/pci/cdns,cdns-pcie-ep.txt
 create mode 100644 Documentation/devicetree/bindings/pci/cdns,cdns-pcie-host.txt
 create mode 100644 drivers/pci/cadence/Kconfig
 create mode 100644 drivers/pci/cadence/Makefile
 create mode 100644 drivers/pci/cadence/pcie-cadence-ep.c
 create mode 100644 drivers/pci/cadence/pcie-cadence-host.c
 create mode 100644 drivers/pci/cadence/pcie-cadence.c
 create mode 100644 drivers/pci/cadence/pcie-cadence.h

-- 
2.11.0

^ permalink raw reply	[flat|nested] 39+ messages in thread

* [PATCH 1/5] PCI: Add vendor ID for Cadence
  2017-11-23 15:01 [PATCH 0/5] PCI: Add support to the Cadence PCIe controller Cyrille Pitchen
@ 2017-11-23 15:01 ` Cyrille Pitchen
  2017-12-06 21:27   ` Bjorn Helgaas
  2017-11-23 15:01 ` [PATCH 2/5] dt-bindings: PCI: cadence: Add DT bindings for Cadence PCIe host controller Cyrille Pitchen
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 39+ messages in thread
From: Cyrille Pitchen @ 2017-11-23 15:01 UTC (permalink / raw)
  To: bhelgaas, kishon, lorenzo.pieralisi, linux-pci
  Cc: adouglas, stelford, dgary, kgopi, eandrews, thomas.petazzoni,
	sureshp, nsekhar, linux-kernel, robh, devicetree,
	Cyrille Pitchen

This patch adds a new PCI vendor ID for Cadence.

Signed-off-by: Cyrille Pitchen <cyrille.pitchen@free-electrons.com>
---
 include/linux/pci_ids.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h
index ab20dc5db423..eb13e84e1fef 100644
--- a/include/linux/pci_ids.h
+++ b/include/linux/pci_ids.h
@@ -2381,6 +2381,8 @@
 
 #define PCI_VENDOR_ID_LENOVO		0x17aa
 
+#define PCI_VENDOR_ID_CDNS		0x17cd
+
 #define PCI_VENDOR_ID_ARECA		0x17d3
 #define PCI_DEVICE_ID_ARECA_1110	0x1110
 #define PCI_DEVICE_ID_ARECA_1120	0x1120
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 39+ messages in thread

* [PATCH 2/5] dt-bindings: PCI: cadence: Add DT bindings for Cadence PCIe host controller
  2017-11-23 15:01 [PATCH 0/5] PCI: Add support to the Cadence PCIe controller Cyrille Pitchen
  2017-11-23 15:01 ` [PATCH 1/5] PCI: Add vendor ID for Cadence Cyrille Pitchen
@ 2017-11-23 15:01 ` Cyrille Pitchen
  2017-11-26 19:32   ` Rob Herring
  2017-11-23 15:01 ` [PATCH 3/5] PCI: cadence: Add host driver for Cadence PCIe controller Cyrille Pitchen
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 39+ messages in thread
From: Cyrille Pitchen @ 2017-11-23 15:01 UTC (permalink / raw)
  To: bhelgaas, kishon, lorenzo.pieralisi, linux-pci
  Cc: adouglas, stelford, dgary, kgopi, eandrews, thomas.petazzoni,
	sureshp, nsekhar, linux-kernel, robh, devicetree,
	Cyrille Pitchen

From: Scott Telford <stelford@cadence.com>

This patch adds documentation for the DT bindings of the Cadence PCIe
controller when configured in host (Root Complex) mode.

Signed-off-by: Scott Telford <stelford@cadence.com>
Signed-off-by: Cyrille Pitchen <cyrille.pitchen@free-electrons.com>

dt-bindings: PCI: cadence: host fixup
---
 .../bindings/pci/cdns,cdns-pcie-host.txt           | 54 ++++++++++++++++++++++
 1 file changed, 54 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/pci/cdns,cdns-pcie-host.txt

diff --git a/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-host.txt b/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-host.txt
new file mode 100644
index 000000000000..4b3df8ffd5e1
--- /dev/null
+++ b/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-host.txt
@@ -0,0 +1,54 @@
+* Cadence PCIe host controller
+
+This PCIe controller inherits the base properties defined in
+host-generic-pci.txt.
+
+Required properties:
+- compatible: should contain "cdns,cdns-pcie-host" to identify the IP used.
+- reg: Should contain the PCIe configuration window base address, controller
+  register base address, and AXI interface region base address respectively.
+- reg-names: Must be "cfg", "reg" and "mem" respectively.
+- #address-cells: set to <3>
+- #size-cells: set to <2>
+- device_type: set to "pci"
+- ranges: ranges for the PCI memory and I/O regions
+- #interrupt-cells: set to <1>
+- interrupt-map-mask and interrupt-map: standard PCI
+	properties to define the mapping of the PCIe interface to interrupt
+	numbers.
+
+Example:
+
+	pci@fb000000 {
+		compatible = "cdns,cdns-pcie-host";
+		device_type = "pci";
+		#address-cells = <3>;
+		#size-cells = <2>;
+		bus-range = <0x0 0xff>;
+		linux,pci-domain = <0>;
+
+		// CPU_PHYSICAL(2)  SIZE(2)
+		reg = <0x0 0x41000000  0x0 0x00001000>, /* Config space */
+		      <0x0 0xfb000000  0x0 0x01000000>, /* Base Address */
+		      <0x0 0x40000000  0x0 0x04000000>; /* AXI region */
+		reg-names = "cfg", "reg", "mem";
+
+		// BUS_ADDRESS(3)  CPU_PHYSICAL(1)  SIZE(2)
+		ranges =
+			<0x02000000 0x0 0x42000000  0x0 0x42000000  0x0 0x1000000>,
+			<0x01000000 0x0 0x43000000  0x0 0x43000000  0x0 0x0010000>;
+
+
+		#interrupt-cells = <0x1>;
+
+		// PCI_DEVICE(3)  INT#(1)  CONTROLLER(PHANDLE)  CONTROLLER_DATA(5)
+		interrupt-map = <0x0 0x0 0x0  0x1  &gic  0x0 0x0 0x0 14 0x1
+				 0x0 0x0 0x0  0x2  &gic  0x0 0x0 0x0 15 0x1
+				 0x0 0x0 0x0  0x3  &gic  0x0 0x0 0x0 16 0x1
+				 0x0 0x0 0x0  0x4  &gic  0x0 0x0 0x0 17 0x1>;
+
+		// PCI_DEVICE(3)  INT#(1)
+		interrupt-map-mask = <0x0000 0x0 0x0  0x7>;
+
+		msi-parent = <&its_pci>;
+	};
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 39+ messages in thread

* [PATCH 3/5] PCI: cadence: Add host driver for Cadence PCIe controller
  2017-11-23 15:01 [PATCH 0/5] PCI: Add support to the Cadence PCIe controller Cyrille Pitchen
  2017-11-23 15:01 ` [PATCH 1/5] PCI: Add vendor ID for Cadence Cyrille Pitchen
  2017-11-23 15:01 ` [PATCH 2/5] dt-bindings: PCI: cadence: Add DT bindings for Cadence PCIe host controller Cyrille Pitchen
@ 2017-11-23 15:01 ` Cyrille Pitchen
  2017-11-28 20:41   ` Bjorn Helgaas
                     ` (2 more replies)
  2017-11-23 15:01 ` [PATCH 4/5] dt-bindings: PCI: cadence: Add DT bindings for Cadence PCIe endpoint controller Cyrille Pitchen
                   ` (2 subsequent siblings)
  5 siblings, 3 replies; 39+ messages in thread
From: Cyrille Pitchen @ 2017-11-23 15:01 UTC (permalink / raw)
  To: bhelgaas, kishon, lorenzo.pieralisi, linux-pci
  Cc: adouglas, stelford, dgary, kgopi, eandrews, thomas.petazzoni,
	sureshp, nsekhar, linux-kernel, robh, devicetree,
	Cyrille Pitchen

This patch adds support to the Cadence PCIe controller in host mode.

Signed-off-by: Cyrille Pitchen <cyrille.pitchen@free-electrons.com>
---
 drivers/Makefile                        |   1 +
 drivers/pci/Kconfig                     |   1 +
 drivers/pci/cadence/Kconfig             |  24 ++
 drivers/pci/cadence/Makefile            |   2 +
 drivers/pci/cadence/pcie-cadence-host.c | 425 ++++++++++++++++++++++++++++++++
 drivers/pci/cadence/pcie-cadence.c      | 110 +++++++++
 drivers/pci/cadence/pcie-cadence.h      | 325 ++++++++++++++++++++++++
 7 files changed, 888 insertions(+)
 create mode 100644 drivers/pci/cadence/Kconfig
 create mode 100644 drivers/pci/cadence/Makefile
 create mode 100644 drivers/pci/cadence/pcie-cadence-host.c
 create mode 100644 drivers/pci/cadence/pcie-cadence.c
 create mode 100644 drivers/pci/cadence/pcie-cadence.h

diff --git a/drivers/Makefile b/drivers/Makefile
index 1d034b680431..27bdd98784d9 100644
--- a/drivers/Makefile
+++ b/drivers/Makefile
@@ -18,6 +18,7 @@ obj-y				+= pwm/
 
 obj-$(CONFIG_PCI)		+= pci/
 obj-$(CONFIG_PCI_ENDPOINT)	+= pci/endpoint/
+obj-$(CONFIG_PCI_CADENCE)	+= pci/cadence/
 # PCI dwc controller drivers
 obj-y				+= pci/dwc/
 
diff --git a/drivers/pci/Kconfig b/drivers/pci/Kconfig
index 90944667ccea..2471b2e36b8b 100644
--- a/drivers/pci/Kconfig
+++ b/drivers/pci/Kconfig
@@ -144,6 +144,7 @@ config PCI_HYPERV
           PCI devices from a PCI backend to support PCI driver domains.
 
 source "drivers/pci/hotplug/Kconfig"
+source "drivers/pci/cadence/Kconfig"
 source "drivers/pci/dwc/Kconfig"
 source "drivers/pci/host/Kconfig"
 source "drivers/pci/endpoint/Kconfig"
diff --git a/drivers/pci/cadence/Kconfig b/drivers/pci/cadence/Kconfig
new file mode 100644
index 000000000000..120306cae2aa
--- /dev/null
+++ b/drivers/pci/cadence/Kconfig
@@ -0,0 +1,24 @@
+menuconfig PCI_CADENCE
+	bool "Cadence PCI controllers support"
+	depends on PCI && HAS_IOMEM
+	help
+	  Say Y here if you want to support some Cadence PCI controller.
+
+	  When in doubt, say N.
+
+if PCI_CADENCE
+
+config PCIE_CADENCE
+	bool
+
+config PCIE_CADENCE_HOST
+	bool "Cadence PCIe host controller"
+	depends on OF
+	depends on PCI_MSI_IRQ_DOMAIN
+	select PCIE_CADENCE
+	help
+	  Say Y here if you want to support the Cadence PCIe controller in host
+	  mode. This PCIe controller may be embedded into many different vendors
+	  SoCs.
+
+endif # PCI_CADENCE
diff --git a/drivers/pci/cadence/Makefile b/drivers/pci/cadence/Makefile
new file mode 100644
index 000000000000..d57d192d2595
--- /dev/null
+++ b/drivers/pci/cadence/Makefile
@@ -0,0 +1,2 @@
+obj-$(CONFIG_PCIE_CADENCE) += pcie-cadence.o
+obj-$(CONFIG_PCIE_CADENCE_HOST) += pcie-cadence-host.o
diff --git a/drivers/pci/cadence/pcie-cadence-host.c b/drivers/pci/cadence/pcie-cadence-host.c
new file mode 100644
index 000000000000..252471e72a93
--- /dev/null
+++ b/drivers/pci/cadence/pcie-cadence-host.c
@@ -0,0 +1,425 @@
+/*
+ * Cadence PCIe host controller driver.
+ *
+ * Copyright (c) 2017 Cadence
+ *
+ * Author: Cyrille Pitchen <cyrille.pitchen@free-electrons.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+#include <linux/kernel.h>
+#include <linux/of_address.h>
+#include <linux/of_pci.h>
+#include <linux/platform_device.h>
+#include <linux/pm_runtime.h>
+
+#include "pcie-cadence.h"
+
+/**
+ * struct cdns_pcie_rc_data - hardware specific data
+ * @max_regions: maximum number of regions supported by the hardware
+ * @vendor_id: PCI vendor ID
+ * @device_id: PCI device ID
+ * @no_bar_nbits: Number of bits to keep for inbound (PCIe -> CPU) address
+ *                translation (nbits sets into the "no BAR match" register).
+ */
+struct cdns_pcie_rc_data {
+	size_t			max_regions;
+	u16			vendor_id;
+	u16			device_id;
+	u8			no_bar_nbits;
+};
+
+/**
+ * struct cdns_pcie_rc - private data for this PCIe Root Complex driver
+ * @pcie: Cadence PCIe controller
+ * @dev: pointer to PCIe device
+ * @cfg_res: start/end offsets in the physical system memory to map PCI
+ *           configuration space accesses
+ * @bus_range: first/last buses behind the PCIe host controller
+ * @cfg_base: IO mapped window to access the PCI configuration space of a
+ *            single function at a time
+ * @data: pointer to a 'struct cdns_pcie_rc_data'
+ */
+struct cdns_pcie_rc {
+	struct cdns_pcie	pcie;
+	struct device		*dev;
+	struct resource		*cfg_res;
+	struct resource		*bus_range;
+	void __iomem		*cfg_base;
+	const struct cdns_pcie_rc_data	*data;
+};
+
+static void __iomem *
+cdns_pci_map_bus(struct pci_bus *bus, unsigned int devfn, int where)
+{
+	struct pci_host_bridge *bridge = pci_find_host_bridge(bus);
+	struct cdns_pcie_rc *rc = pci_host_bridge_priv(bridge);
+	struct cdns_pcie *pcie = &rc->pcie;
+	unsigned int busn = bus->number;
+	u32 addr0, desc0;
+
+	if (busn < rc->bus_range->start || busn > rc->bus_range->end)
+		return NULL;
+
+	if (busn == rc->bus_range->start) {
+		if (devfn)
+			return NULL;
+
+		return pcie->reg_base + (where & 0xfff);
+	}
+
+	/* Update Output registers for AXI region 0. */
+	addr0 = CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_NBITS(12) |
+		CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_DEVFN(devfn) |
+		CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_BUS(busn);
+	cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_PCI_ADDR0(0), addr0);
+
+	/* Configuration Type 0 or Type 1 access. */
+	desc0 = CDNS_PCIE_AT_OB_REGION_DESC0_HARDCODED_RID |
+		CDNS_PCIE_AT_OB_REGION_DESC0_DEVFN(0);
+	/*
+	 * The bus number was already set once for all in desc1 by
+	 * cdns_pcie_host_init_address_translation().
+	 */
+	if (busn == rc->bus_range->start + 1)
+		desc0 |= CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_CONF_TYPE0;
+	else
+		desc0 |= CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_CONF_TYPE1;
+	cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_DESC0(0), desc0);
+
+	return rc->cfg_base + (where & 0xfff);
+}
+
+static struct pci_ops cdns_pcie_host_ops = {
+	.map_bus	= cdns_pci_map_bus,
+	.read		= pci_generic_config_read,
+	.write		= pci_generic_config_write,
+};
+
+static const struct cdns_pcie_rc_data cdns_pcie_rc_data = {
+	.max_regions	= 32,
+	.vendor_id	= PCI_VENDOR_ID_CDNS,
+	.device_id	= 0x0200,
+	.no_bar_nbits	= 32,
+};
+
+static const struct of_device_id cdns_pcie_host_of_match[] = {
+	{ .compatible = "cdns,cdns-pcie-host",
+	  .data = &cdns_pcie_rc_data },
+
+	{ },
+};
+
+static int cdns_pcie_parse_request_of_pci_ranges(struct device *dev,
+						 struct list_head *resources,
+						 struct resource **bus_range)
+{
+	int err, res_valid = 0;
+	struct device_node *np = dev->of_node;
+	resource_size_t iobase;
+	struct resource_entry *win, *tmp;
+
+	err = of_pci_get_host_bridge_resources(np, 0, 0xff, resources, &iobase);
+	if (err)
+		return err;
+
+	err = devm_request_pci_bus_resources(dev, resources);
+	if (err)
+		return err;
+
+	resource_list_for_each_entry_safe(win, tmp, resources) {
+		struct resource *res = win->res;
+
+		switch (resource_type(res)) {
+		case IORESOURCE_IO:
+			err = pci_remap_iospace(res, iobase);
+			if (err) {
+				dev_warn(dev, "error %d: failed to map resource %pR\n",
+					 err, res);
+				resource_list_destroy_entry(win);
+			}
+			break;
+		case IORESOURCE_MEM:
+			res_valid |= !(res->flags & IORESOURCE_PREFETCH);
+			break;
+		case IORESOURCE_BUS:
+			*bus_range = res;
+			break;
+		}
+	}
+
+	if (res_valid)
+		return 0;
+
+	dev_err(dev, "non-prefetchable memory resource required\n");
+	return -EINVAL;
+}
+
+static int cdns_pcie_host_init_root_port(struct cdns_pcie_rc *rc)
+{
+	const struct cdns_pcie_rc_data *data = rc->data;
+	struct cdns_pcie *pcie = &rc->pcie;
+	u8 pbn, sbn, subn;
+	u32 value, ctrl;
+
+	/*
+	 * Set the root complex BAR configuration register:
+	 * - disable both BAR0 and BAR1.
+	 * - enable Prefetchable Memory Base and Limit registers in type 1
+	 *   config space (64 bits).
+	 * - enable IO Base and Limit registers in type 1 config
+	 *   space (32 bits).
+	 */
+	ctrl = CDNS_PCIE_LM_BAR_CFG_CTRL_DISABLED;
+	value = CDNS_PCIE_LM_RC_BAR_CFG_BAR0_CTRL(ctrl) |
+		CDNS_PCIE_LM_RC_BAR_CFG_BAR1_CTRL(ctrl) |
+		CDNS_PCIE_LM_RC_BAR_CFG_PREFETCH_MEM_ENABLE |
+		CDNS_PCIE_LM_RC_BAR_CFG_PREFETCH_MEM_64BITS |
+		CDNS_PCIE_LM_RC_BAR_CFG_IO_ENABLE |
+		CDNS_PCIE_LM_RC_BAR_CFG_IO_32BITS;
+	cdns_pcie_writel(pcie, CDNS_PCIE_LM_RC_BAR_CFG, value);
+
+	/* Set root port configuration space */
+	if (data->vendor_id != 0xffff)
+		cdns_pcie_rp_writew(pcie, PCI_VENDOR_ID, data->vendor_id);
+	if (data->device_id != 0xffff)
+		cdns_pcie_rp_writew(pcie, PCI_DEVICE_ID, data->device_id);
+
+	cdns_pcie_rp_writeb(pcie, PCI_CLASS_REVISION, 0);
+	cdns_pcie_rp_writeb(pcie, PCI_CLASS_PROG, 0);
+	cdns_pcie_rp_writew(pcie, PCI_CLASS_DEVICE, PCI_CLASS_BRIDGE_PCI);
+
+	pbn = rc->bus_range->start;
+	sbn = pbn + 1; /* Single root port. */
+	subn = rc->bus_range->end;
+	cdns_pcie_rp_writeb(pcie, PCI_PRIMARY_BUS, pbn);
+	cdns_pcie_rp_writeb(pcie, PCI_SECONDARY_BUS, sbn);
+	cdns_pcie_rp_writeb(pcie, PCI_SUBORDINATE_BUS, subn);
+
+	return 0;
+}
+
+static int cdns_pcie_host_init_address_translation(struct cdns_pcie_rc *rc)
+{
+	struct cdns_pcie *pcie = &rc->pcie;
+	struct resource *cfg_res = rc->cfg_res;
+	struct resource *mem_res = pcie->mem_res;
+	struct resource *bus_range = rc->bus_range;
+	struct device *dev = rc->dev;
+	struct device_node *np = dev->of_node;
+	struct of_pci_range_parser parser;
+	struct of_pci_range range;
+	u32 addr0, addr1, desc1;
+	u64 cpu_addr;
+	int r, err;
+
+	/*
+	 * Reserve region 0 for PCI configure space accesses:
+	 * OB_REGION_PCI_ADDR0 and OB_REGION_DESC0 are updated dynamically by
+	 * cdns_pci_map_bus(), other region registers are set here once for all.
+	 */
+	addr1 = 0; /* Should be programmed to zero. */
+	desc1 = CDNS_PCIE_AT_OB_REGION_DESC1_BUS(bus_range->start);
+	cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_PCI_ADDR1(0), addr1);
+	cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_DESC1(0), desc1);
+
+	cpu_addr = cfg_res->start - mem_res->start;
+	addr0 = CDNS_PCIE_AT_OB_REGION_CPU_ADDR0_NBITS(12) |
+		(lower_32_bits(cpu_addr) & GENMASK(31, 8));
+	addr1 = upper_32_bits(cpu_addr);
+	cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_CPU_ADDR0(0), addr0);
+	cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_CPU_ADDR1(0), addr1);
+
+	err = of_pci_range_parser_init(&parser, np);
+	if (err)
+		return err;
+
+	r = 1;
+	for_each_of_pci_range(&parser, &range) {
+		bool is_io;
+
+		if (r >= rc->data->max_regions)
+			break;
+
+		if ((range.flags & IORESOURCE_TYPE_BITS) == IORESOURCE_MEM)
+			is_io = false;
+		else if ((range.flags & IORESOURCE_TYPE_BITS) == IORESOURCE_IO)
+			is_io = true;
+		else
+			continue;
+
+		cdns_pcie_set_outbound_region(pcie, r, is_io,
+					      range.cpu_addr,
+					      range.pci_addr,
+					      range.size);
+		r++;
+	}
+
+	/*
+	 * Set Root Port no BAR match Inbound Translation registers:
+	 * needed for MSI.
+	 * Root Port BAR0 and BAR1 are disabled, hence no need to set their
+	 * inbound translation registers.
+	 */
+	addr0 = CDNS_PCIE_AT_IB_RP_BAR_ADDR0_NBITS(rc->data->no_bar_nbits);
+	addr1 = 0;
+	cdns_pcie_writel(pcie, CDNS_PCIE_AT_IB_RP_BAR_ADDR0(RP_NO_BAR), addr0);
+	cdns_pcie_writel(pcie, CDNS_PCIE_AT_IB_RP_BAR_ADDR1(RP_NO_BAR), addr1);
+
+	return 0;
+}
+
+static int cdns_pcie_host_init(struct device *dev,
+			       struct list_head *resources,
+			       struct cdns_pcie_rc *rc)
+{
+	struct resource *bus_range = NULL;
+	int err;
+
+	/* Parse our PCI ranges and request their resources */
+	err = cdns_pcie_parse_request_of_pci_ranges(dev, resources, &bus_range);
+	if (err)
+		goto err_out;
+
+	if (bus_range->start > bus_range->end) {
+		err = -EINVAL;
+		goto err_out;
+	}
+	rc->bus_range = bus_range;
+	rc->pcie.bus = bus_range->start;
+
+	err = cdns_pcie_host_init_root_port(rc);
+	if (err)
+		goto err_out;
+
+	err = cdns_pcie_host_init_address_translation(rc);
+	if (err)
+		goto err_out;
+
+	return 0;
+
+ err_out:
+	pci_free_resource_list(resources);
+	return err;
+}
+
+static int cdns_pcie_host_probe(struct platform_device *pdev)
+{
+	const struct of_device_id *of_id;
+	const char *type;
+	struct device *dev = &pdev->dev;
+	struct device_node *np = dev->of_node;
+	struct pci_bus *bus, *child;
+	struct pci_host_bridge *bridge;
+	struct list_head resources;
+	struct cdns_pcie_rc *rc;
+	struct cdns_pcie *pcie;
+	struct resource *res;
+	int ret;
+
+	bridge = devm_pci_alloc_host_bridge(dev, sizeof(*rc));
+	if (!bridge)
+		return -ENOMEM;
+
+	rc = pci_host_bridge_priv(bridge);
+	rc->dev = dev;
+	platform_set_drvdata(pdev, rc);
+
+	pcie = &rc->pcie;
+	pcie->is_rc = true;
+
+	of_id = of_match_node(cdns_pcie_host_of_match, np);
+	rc->data = (const struct cdns_pcie_rc_data *)of_id->data;
+
+	type = of_get_property(np, "device_type", NULL);
+	if (!type || strcmp(type, "pci")) {
+		dev_err(dev, "invalid \"device_type\" %s\n", type);
+		return -EINVAL;
+	}
+
+	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "reg");
+	pcie->reg_base = devm_ioremap_resource(dev, res);
+	if (IS_ERR(pcie->reg_base)) {
+		dev_err(dev, "missing \"reg\"\n");
+		return PTR_ERR(pcie->reg_base);
+	}
+
+	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "cfg");
+	rc->cfg_base = devm_ioremap_resource(dev, res);
+	if (IS_ERR(rc->cfg_base)) {
+		dev_err(dev, "missing \"cfg\"\n");
+		return PTR_ERR(rc->cfg_base);
+	}
+	rc->cfg_res = res;
+
+	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "mem");
+	if (!res) {
+		dev_err(dev, "missing \"mem\"\n");
+		return -EINVAL;
+	}
+	pcie->mem_res = res;
+
+	pm_runtime_enable(dev);
+	ret = pm_runtime_get_sync(dev);
+	if (ret < 0) {
+		dev_err(dev, "pm_runtime_get_sync() failed\n");
+		goto err_get_sync;
+	}
+
+	INIT_LIST_HEAD(&resources);
+	ret = cdns_pcie_host_init(dev, &resources, rc);
+	if (ret)
+		goto err_init;
+
+	list_splice_init(&resources, &bridge->windows);
+	bridge->dev.parent = dev;
+	bridge->busnr = pcie->bus;
+	bridge->ops = &cdns_pcie_host_ops;
+	bridge->map_irq = of_irq_parse_and_map_pci;
+	bridge->swizzle_irq = pci_common_swizzle;
+
+	ret = pci_scan_root_bus_bridge(bridge);
+	if (ret < 0) {
+		dev_err(dev, "Scanning root bridge failed");
+		goto err_init;
+	}
+
+	bus = bridge->bus;
+	pci_bus_size_bridges(bus);
+	pci_bus_assign_resources(bus);
+
+	list_for_each_entry(child, &bus->children, node)
+		pcie_bus_configure_settings(child);
+
+	pci_bus_add_devices(bus);
+
+	return 0;
+
+ err_init:
+	pm_runtime_put_sync(dev);
+
+ err_get_sync:
+	pm_runtime_disable(dev);
+
+	return ret;
+}
+
+static struct platform_driver cdns_pcie_host_driver = {
+	.driver = {
+		.name = "cdns-pcie-host",
+		.of_match_table = cdns_pcie_host_of_match,
+	},
+	.probe = cdns_pcie_host_probe,
+};
+builtin_platform_driver(cdns_pcie_host_driver);
diff --git a/drivers/pci/cadence/pcie-cadence.c b/drivers/pci/cadence/pcie-cadence.c
new file mode 100644
index 000000000000..5c10879d5e96
--- /dev/null
+++ b/drivers/pci/cadence/pcie-cadence.c
@@ -0,0 +1,110 @@
+/*
+ * Cadence PCIe controller driver.
+ *
+ * Copyright (c) 2017 Cadence
+ *
+ * Author: Cyrille Pitchen <cyrille.pitchen@free-electrons.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/kernel.h>
+
+#include "pcie-cadence.h"
+
+void cdns_pcie_set_outbound_region(struct cdns_pcie *pcie, u32 r, bool is_io,
+				   u64 cpu_addr, u64 pci_addr, size_t size)
+{
+	/*
+	 * roundup_pow_of_two() returns an unsigned long, which is not suited
+	 * for 64bit values.
+	 */
+	u64 sz = 1ULL << fls64(size - 1);
+	int nbits = ilog2(sz);
+	u32 addr0, addr1, desc0, desc1;
+
+	if (nbits < 8)
+		nbits = 8;
+
+	/* Set the PCI address */
+	addr0 = CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_NBITS(nbits) |
+		(lower_32_bits(pci_addr) & GENMASK(31, 8));
+	addr1 = upper_32_bits(pci_addr);
+
+	cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_PCI_ADDR0(r), addr0);
+	cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_PCI_ADDR1(r), addr1);
+
+	/* Set the PCIe header descriptor */
+	if (is_io)
+		desc0 = CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_IO;
+	else
+		desc0 = CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_MEM;
+	desc1 = 0;
+
+	if (pcie->is_rc) {
+		desc0 |= CDNS_PCIE_AT_OB_REGION_DESC0_HARDCODED_RID |
+			 CDNS_PCIE_AT_OB_REGION_DESC0_DEVFN(0);
+		desc1 |= CDNS_PCIE_AT_OB_REGION_DESC1_BUS(pcie->bus);
+	}
+
+	cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_DESC0(r), desc0);
+	cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_DESC1(r), desc1);
+
+	/* Set the CPU address */
+	cpu_addr -= pcie->mem_res->start;
+	addr0 = CDNS_PCIE_AT_OB_REGION_CPU_ADDR0_NBITS(nbits) |
+		(lower_32_bits(cpu_addr) & GENMASK(31, 8));
+	addr1 = upper_32_bits(cpu_addr);
+
+	cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_CPU_ADDR0(r), addr0);
+	cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_CPU_ADDR1(r), addr1);
+}
+
+void cdns_pcie_set_outbound_region_for_normal_msg(struct cdns_pcie *pcie, u32 r,
+						  u64 cpu_addr)
+{
+	u32 addr0, addr1, desc0, desc1;
+
+	desc0 = CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_NORMAL_MSG;
+	desc1 = 0;
+	if (pcie->is_rc) {
+		desc0 |= CDNS_PCIE_AT_OB_REGION_DESC0_HARDCODED_RID |
+			 CDNS_PCIE_AT_OB_REGION_DESC0_DEVFN(0);
+		desc1 |= CDNS_PCIE_AT_OB_REGION_DESC1_BUS(pcie->bus);
+	}
+
+	/* Set the CPU address */
+	cpu_addr -= pcie->mem_res->start;
+	addr0 = CDNS_PCIE_AT_OB_REGION_CPU_ADDR0_NBITS(17) |
+		(lower_32_bits(cpu_addr) & GENMASK(31, 8));
+	addr1 = upper_32_bits(cpu_addr);
+
+	cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_PCI_ADDR0(r), 0);
+	cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_PCI_ADDR1(r), 0);
+	cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_DESC0(r), desc0);
+	cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_DESC1(r), desc1);
+	cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_CPU_ADDR0(r), addr0);
+	cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_CPU_ADDR1(r), addr1);
+}
+
+void cdns_pcie_reset_outbound_region(struct cdns_pcie *pcie, u32 r)
+{
+	cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_PCI_ADDR0(r), 0);
+	cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_PCI_ADDR1(r), 0);
+
+	cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_DESC0(r), 0);
+	cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_DESC1(r), 0);
+
+	cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_CPU_ADDR0(r), 0);
+	cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_CPU_ADDR1(r), 0);
+}
diff --git a/drivers/pci/cadence/pcie-cadence.h b/drivers/pci/cadence/pcie-cadence.h
new file mode 100644
index 000000000000..195e23b7d4fe
--- /dev/null
+++ b/drivers/pci/cadence/pcie-cadence.h
@@ -0,0 +1,325 @@
+/*
+ * Cadence PCIe controller driver.
+ *
+ * Copyright (c) 2017 Cadence
+ *
+ * Author: Cyrille Pitchen <cyrille.pitchen@free-electrons.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef _PCIE_CADENCE_H
+#define _PCIE_CADENCE_H
+
+#include <linux/kernel.h>
+#include <linux/pci.h>
+
+/*
+ * Local Management Registers
+ */
+#define CDNS_PCIE_LM_BASE	0x00100000
+
+/* Vendor ID Register */
+#define CDNS_PCIE_LM_ID		(CDNS_PCIE_LM_BASE + 0x0044)
+#define  CDNS_PCIE_LM_ID_VENDOR_MASK	GENMASK(15, 0)
+#define  CDNS_PCIE_LM_ID_VENDOR_SHIFT	0
+#define  CDNS_PCIE_LM_ID_VENDOR(vid) \
+	(((vid) << CDNS_PCIE_LM_ID_VENDOR_SHIFT) & CDNS_PCIE_LM_ID_VENDOR_MASK)
+#define  CDNS_PCIE_LM_ID_SUBSYS_MASK	GENMASK(31, 16)
+#define  CDNS_PCIE_LM_ID_SUBSYS_SHIFT	16
+#define  CDNS_PCIE_LM_ID_SUBSYS(sub) \
+	(((sub) << CDNS_PCIE_LM_ID_SUBSYS_SHIFT) & CDNS_PCIE_LM_ID_SUBSYS_MASK)
+
+/* Root Port Requestor ID Register */
+#define CDNS_PCIE_LM_RP_RID	(CDNS_PCIE_LM_BASE + 0x0228)
+#define  CDNS_PCIE_LM_RP_RID_MASK	GENMASK(15, 0)
+#define  CDNS_PCIE_LM_RP_RID_SHIFT	0
+#define  CDNS_PCIE_LM_RP_RID_(rid) \
+	(((rid) << CDNS_PCIE_LM_RP_RID_SHIFT) & CDNS_PCIE_LM_RP_RID_MASK)
+
+/* Endpoint Bus and Device Number Register */
+#define CDNS_PCIE_LM_EP_ID	(CDNS_PCIE_LM_BASE + 0x022c)
+#define  CDNS_PCIE_LM_EP_ID_DEV_MASK	GENMASK(4, 0)
+#define  CDNS_PCIE_LM_EP_ID_DEV_SHIFT	0
+#define  CDNS_PCIE_LM_EP_ID_BUS_MASK	GENMASK(15, 8)
+#define  CDNS_PCIE_LM_EP_ID_BUS_SHIFT	8
+
+/* Endpoint Function f BAR b Configuration Registers */
+#define CDNS_PCIE_LM_EP_FUNC_BAR_CFG0(fn) \
+	(CDNS_PCIE_LM_BASE + 0x0240 + (fn) * 0x0008)
+#define CDNS_PCIE_LM_EP_FUNC_BAR_CFG1(fn) \
+	(CDNS_PCIE_LM_BASE + 0x0244 + (fn) * 0x0008)
+#define  CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_APERTURE_MASK(b) \
+	(GENMASK(4, 0) << ((b) * 8))
+#define  CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_APERTURE(b, a) \
+	(((a) << ((b) * 8)) & CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_APERTURE_MASK(b))
+#define  CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(b) \
+	(GENMASK(7, 5) << ((b) * 8))
+#define  CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_CTRL(b, c) \
+	(((c) << ((b) * 8 + 5)) & CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(b))
+
+/* Endpoint Function Configuration Register */
+#define CDNS_PCIE_LM_EP_FUNC_CFG	(CDNS_PCIE_LM_BASE + 0x02c0)
+
+/* Root Complex BAR Configuration Register */
+#define CDNS_PCIE_LM_RC_BAR_CFG	(CDNS_PCIE_LM_BASE + 0x0300)
+#define  CDNS_PCIE_LM_RC_BAR_CFG_BAR0_APERTURE_MASK	GENMASK(5, 0)
+#define  CDNS_PCIE_LM_RC_BAR_CFG_BAR0_APERTURE(a) \
+	(((a) << 0) & CDNS_PCIE_LM_RC_BAR_CFG_BAR0_APERTURE_MASK)
+#define  CDNS_PCIE_LM_RC_BAR_CFG_BAR0_CTRL_MASK		GENMASK(8, 6)
+#define  CDNS_PCIE_LM_RC_BAR_CFG_BAR0_CTRL(c) \
+	(((c) << 6) & CDNS_PCIE_LM_RC_BAR_CFG_BAR0_CTRL_MASK)
+#define  CDNS_PCIE_LM_RC_BAR_CFG_BAR1_APERTURE_MASK	GENMASK(13, 9)
+#define  CDNS_PCIE_LM_RC_BAR_CFG_BAR1_APERTURE(a) \
+	(((a) << 9) & CDNS_PCIE_LM_RC_BAR_CFG_BAR1_APERTURE_MASK)
+#define  CDNS_PCIE_LM_RC_BAR_CFG_BAR1_CTRL_MASK		GENMASK(16, 14)
+#define  CDNS_PCIE_LM_RC_BAR_CFG_BAR1_CTRL(c) \
+	(((c) << 14) & CDNS_PCIE_LM_RC_BAR_CFG_BAR1_CTRL_MASK)
+#define  CDNS_PCIE_LM_RC_BAR_CFG_PREFETCH_MEM_ENABLE	BIT(17)
+#define  CDNS_PCIE_LM_RC_BAR_CFG_PREFETCH_MEM_32BITS	0
+#define  CDNS_PCIE_LM_RC_BAR_CFG_PREFETCH_MEM_64BITS	BIT(18)
+#define  CDNS_PCIE_LM_RC_BAR_CFG_IO_ENABLE		BIT(19)
+#define  CDNS_PCIE_LM_RC_BAR_CFG_IO_16BITS		0
+#define  CDNS_PCIE_LM_RC_BAR_CFG_IO_32BITS		BIT(20)
+#define  CDNS_PCIE_LM_RC_BAR_CFG_CHECK_ENABLE		BIT(31)
+
+/* BAR control values applicable to both Endpoint Function and Root Complex */
+#define  CDNS_PCIE_LM_BAR_CFG_CTRL_DISABLED		0x0
+#define  CDNS_PCIE_LM_BAR_CFG_CTRL_IO_32BITS		0x1
+#define  CDNS_PCIE_LM_BAR_CFG_CTRL_MEM_32BITS		0x4
+#define  CDNS_PCIE_LM_BAR_CFG_CTRL_PREFETCH_MEM_32BITS	0x5
+#define  CDNS_PCIE_LM_BAR_CFG_CTRL_MEM_64BITS		0x6
+#define  CDNS_PCIE_LM_BAR_CFG_CTRL_PREFETCH_MEM_64BITS	0x7
+
+
+/*
+ * Endpoint Function Registers (PCI configuration space for endpoint functions)
+ */
+#define CDNS_PCIE_EP_FUNC_BASE(fn)	(((fn) << 12) & GENMASK(19, 12))
+
+#define CDNS_PCIE_EP_FUNC_MSI_CAP_OFFSET	0x90
+
+/*
+ * Root Port Registers (PCI configuration space for the root port function)
+ */
+#define CDNS_PCIE_RP_BASE	0x00200000
+
+
+/*
+ * Address Translation Registers
+ */
+#define CDNS_PCIE_AT_BASE	0x00400000
+
+/* Region r Outbound AXI to PCIe Address Translation Register 0 */
+#define CDNS_PCIE_AT_OB_REGION_PCI_ADDR0(r) \
+	(CDNS_PCIE_AT_BASE + 0x0000 + ((r) & 0x1f) * 0x0020)
+#define  CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_NBITS_MASK	GENMASK(5, 0)
+#define  CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_NBITS(nbits) \
+	(((nbits) - 1) & CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_NBITS_MASK)
+#define  CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_DEVFN_MASK	GENMASK(19, 12)
+#define  CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_DEVFN(devfn) \
+	(((devfn) << 12) & CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_DEVFN_MASK)
+#define  CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_BUS_MASK	GENMASK(27, 20)
+#define  CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_BUS(bus) \
+	(((bus) << 20) & CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_BUS_MASK)
+
+/* Region r Outbound AXI to PCIe Address Translation Register 1 */
+#define CDNS_PCIE_AT_OB_REGION_PCI_ADDR1(r) \
+	(CDNS_PCIE_AT_BASE + 0x0004 + ((r) & 0x1f) * 0x0020)
+
+/* Region r Outbound PCIe Descriptor Register 0 */
+#define CDNS_PCIE_AT_OB_REGION_DESC0(r) \
+	(CDNS_PCIE_AT_BASE + 0x0008 + ((r) & 0x1f) * 0x0020)
+#define  CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_MASK		GENMASK(3, 0)
+#define  CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_MEM		0x2
+#define  CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_IO		0x6
+#define  CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_CONF_TYPE0	0xa
+#define  CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_CONF_TYPE1	0xb
+#define  CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_NORMAL_MSG	0xc
+#define  CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_VENDOR_MSG	0xd
+/* Bit 23 MUST be set in RC mode. */
+#define  CDNS_PCIE_AT_OB_REGION_DESC0_HARDCODED_RID	BIT(23)
+#define  CDNS_PCIE_AT_OB_REGION_DESC0_DEVFN_MASK	GENMASK(31, 24)
+#define  CDNS_PCIE_AT_OB_REGION_DESC0_DEVFN(devfn) \
+	(((devfn) << 24) & CDNS_PCIE_AT_OB_REGION_DESC0_DEVFN_MASK)
+
+/* Region r Outbound PCIe Descriptor Register 1 */
+#define CDNS_PCIE_AT_OB_REGION_DESC1(r)	\
+	(CDNS_PCIE_AT_BASE + 0x000c + ((r) & 0x1f) * 0x0020)
+#define  CDNS_PCIE_AT_OB_REGION_DESC1_BUS_MASK	GENMASK(7, 0)
+#define  CDNS_PCIE_AT_OB_REGION_DESC1_BUS(bus) \
+	((bus) & CDNS_PCIE_AT_OB_REGION_DESC1_BUS_MASK)
+
+/* Region r AXI Region Base Address Register 0 */
+#define CDNS_PCIE_AT_OB_REGION_CPU_ADDR0(r) \
+	(CDNS_PCIE_AT_BASE + 0x0018 + ((r) & 0x1f) * 0x0020)
+#define  CDNS_PCIE_AT_OB_REGION_CPU_ADDR0_NBITS_MASK	GENMASK(5, 0)
+#define  CDNS_PCIE_AT_OB_REGION_CPU_ADDR0_NBITS(nbits) \
+	(((nbits) - 1) & CDNS_PCIE_AT_OB_REGION_CPU_ADDR0_NBITS_MASK)
+
+/* Region r AXI Region Base Address Register 1 */
+#define CDNS_PCIE_AT_OB_REGION_CPU_ADDR1(r) \
+	(CDNS_PCIE_AT_BASE + 0x001c + ((r) & 0x1f) * 0x0020)
+
+/* Root Port BAR Inbound PCIe to AXI Address Translation Register */
+#define CDNS_PCIE_AT_IB_RP_BAR_ADDR0(bar) \
+	(CDNS_PCIE_AT_BASE + 0x0800 + (bar) * 0x0008)
+#define  CDNS_PCIE_AT_IB_RP_BAR_ADDR0_NBITS_MASK	GENMASK(5, 0)
+#define  CDNS_PCIE_AT_IB_RP_BAR_ADDR0_NBITS(nbits) \
+	(((nbits) - 1) & CDNS_PCIE_AT_IB_RP_BAR_ADDR0_NBITS_MASK)
+#define CDNS_PCIE_AT_IB_RP_BAR_ADDR1(bar) \
+	(CDNS_PCIE_AT_BASE + 0x0804 + (bar) * 0x0008)
+
+enum cdns_pcie_rp_bar {
+	RP_BAR0,
+	RP_BAR1,
+	RP_NO_BAR
+};
+
+/* Endpoint Function BAR Inbound PCIe to AXI Address Translation Register */
+#define CDNS_PCIE_AT_IB_EP_FUNC_BAR_ADDR0(fn, bar) \
+	(CDNS_PCIE_AT_BASE + 0x0840 + (fn) * 0x0040 + (bar) * 0x0008)
+#define CDNS_PCIE_AT_IB_EP_FUNC_BAR_ADDR1(fn, bar) \
+	(CDNS_PCIE_AT_BASE + 0x0844 + (fn) * 0x0040 + (bar) * 0x0008)
+
+/* Normal/Vendor specific message access: offset inside some outbound region */
+#define CDNS_PCIE_NORMAL_MSG_ROUTING_MASK	GENMASK(7, 5)
+#define CDNS_PCIE_NORMAL_MSG_ROUTING(route) \
+	(((route) << 5) & CDNS_PCIE_NORMAL_MSG_ROUTING_MASK)
+#define CDNS_PCIE_NORMAL_MSG_CODE_MASK		GENMASK(15, 8)
+#define CDNS_PCIE_NORMAL_MSG_CODE(code) \
+	(((code) << 8) & CDNS_PCIE_NORMAL_MSG_CODE_MASK)
+#define CDNS_PCIE_MSG_NO_DATA			BIT(16)
+
+enum cdns_pcie_msg_code {
+	MSG_CODE_ASSERT_INTA	= 0x20,
+	MSG_CODE_ASSERT_INTB	= 0x21,
+	MSG_CODE_ASSERT_INTC	= 0x22,
+	MSG_CODE_ASSERT_INTD	= 0x23,
+	MSG_CODE_DEASSERT_INTA	= 0x24,
+	MSG_CODE_DEASSERT_INTB	= 0x25,
+	MSG_CODE_DEASSERT_INTC	= 0x26,
+	MSG_CODE_DEASSERT_INTD	= 0x27,
+};
+
+enum cdns_pcie_msg_routing {
+	/* Route to Root Complex */
+	MSG_ROUTING_TO_RC,
+
+	/* Use Address Routing */
+	MSG_ROUTING_BY_ADDR,
+
+	/* Use ID Routing */
+	MSG_ROUTING_BY_ID,
+
+	/* Route as Broadcast Message from Root Complex */
+	MSG_ROUTING_BCAST,
+
+	/* Local message; terminate at receiver (INTx messages) */
+	MSG_ROUTING_LOCAL,
+
+	/* Gather & route to Root Complex (PME_TO_Ack message) */
+	MSG_ROUTING_GATHER,
+};
+
+/**
+ * struct cdns_pcie - private data for Cadence PCIe controller drivers
+ * @reg_base: IO mapped register base
+ * @mem_res: start/end offsets in the physical system memory to map PCI accesses
+ * @is_rc: tell whether the PCIe controller mode is Root Complex or Endpoint.
+ * @bus: In Root Complex mode, the bus number
+ */
+struct cdns_pcie {
+	void __iomem		*reg_base;
+	struct resource		*mem_res;
+	bool			is_rc;
+	u8			bus;
+};
+
+/* Register access */
+static inline void cdns_pcie_writeb(struct cdns_pcie *pcie, u32 reg, u8 value)
+{
+	writeb(value, pcie->reg_base + reg);
+}
+
+static inline void cdns_pcie_writew(struct cdns_pcie *pcie, u32 reg, u16 value)
+{
+	writew(value, pcie->reg_base + reg);
+}
+
+static inline void cdns_pcie_writel(struct cdns_pcie *pcie, u32 reg, u32 value)
+{
+	writel(value, pcie->reg_base + reg);
+}
+
+static inline u32 cdns_pcie_readl(struct cdns_pcie *pcie, u32 reg)
+{
+	return readl(pcie->reg_base + reg);
+}
+
+/* Root Port register access */
+static inline void cdns_pcie_rp_writeb(struct cdns_pcie *pcie,
+				       u32 reg, u8 value)
+{
+	writeb(value, pcie->reg_base + CDNS_PCIE_RP_BASE + reg);
+}
+
+static inline void cdns_pcie_rp_writew(struct cdns_pcie *pcie,
+				       u32 reg, u16 value)
+{
+	writew(value, pcie->reg_base + CDNS_PCIE_RP_BASE + reg);
+}
+
+/* Endpoint Function register access */
+static inline void cdns_pcie_ep_fn_writeb(struct cdns_pcie *pcie, u8 fn,
+					  u32 reg, u8 value)
+{
+	writeb(value, pcie->reg_base + CDNS_PCIE_EP_FUNC_BASE(fn) + reg);
+}
+
+static inline void cdns_pcie_ep_fn_writew(struct cdns_pcie *pcie, u8 fn,
+					  u32 reg, u16 value)
+{
+	writew(value, pcie->reg_base + CDNS_PCIE_EP_FUNC_BASE(fn) + reg);
+}
+
+static inline void cdns_pcie_ep_fn_writel(struct cdns_pcie *pcie, u8 fn,
+					  u32 reg, u16 value)
+{
+	writel(value, pcie->reg_base + CDNS_PCIE_EP_FUNC_BASE(fn) + reg);
+}
+
+static inline u8 cdns_pcie_ep_fn_readb(struct cdns_pcie *pcie, u8 fn, u32 reg)
+{
+	return readb(pcie->reg_base + CDNS_PCIE_EP_FUNC_BASE(fn) + reg);
+}
+
+static inline u16 cdns_pcie_ep_fn_readw(struct cdns_pcie *pcie, u8 fn, u32 reg)
+{
+	return readw(pcie->reg_base + CDNS_PCIE_EP_FUNC_BASE(fn) + reg);
+}
+
+static inline u32 cdns_pcie_ep_fn_readl(struct cdns_pcie *pcie, u8 fn, u32 reg)
+{
+	return readl(pcie->reg_base + CDNS_PCIE_EP_FUNC_BASE(fn) + reg);
+}
+
+void cdns_pcie_set_outbound_region(struct cdns_pcie *pcie, u32 r, bool is_io,
+				   u64 cpu_addr, u64 pci_addr, size_t size);
+
+void cdns_pcie_set_outbound_region_for_normal_msg(struct cdns_pcie *pcie, u32 r,
+						  u64 cpu_addr);
+
+void cdns_pcie_reset_outbound_region(struct cdns_pcie *pcie, u32 r);
+
+#endif /* _PCIE_CADENCE_H */
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 39+ messages in thread

* [PATCH 4/5] dt-bindings: PCI: cadence: Add DT bindings for Cadence PCIe endpoint controller
  2017-11-23 15:01 [PATCH 0/5] PCI: Add support to the Cadence PCIe controller Cyrille Pitchen
                   ` (2 preceding siblings ...)
  2017-11-23 15:01 ` [PATCH 3/5] PCI: cadence: Add host driver for Cadence PCIe controller Cyrille Pitchen
@ 2017-11-23 15:01 ` Cyrille Pitchen
  2017-11-26 19:33   ` Rob Herring
  2017-11-23 15:01 ` [PATCH 5/5] PCI: cadence: add EndPoint Controller driver for Cadence PCIe controller Cyrille Pitchen
  2017-11-28 15:50 ` [PATCH 0/5] PCI: Add support to the " Lorenzo Pieralisi
  5 siblings, 1 reply; 39+ messages in thread
From: Cyrille Pitchen @ 2017-11-23 15:01 UTC (permalink / raw)
  To: bhelgaas, kishon, lorenzo.pieralisi, linux-pci
  Cc: adouglas, stelford, dgary, kgopi, eandrews, thomas.petazzoni,
	sureshp, nsekhar, linux-kernel, robh, devicetree,
	Cyrille Pitchen

This patch documents the DT bindings for the Cadence PCIe controller
when configured in endpoint mode.

Signed-off-by: Cyrille Pitchen <cyrille.pitchen@free-electrons.com>
---
 .../devicetree/bindings/pci/cdns,cdns-pcie-ep.txt    | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/pci/cdns,cdns-pcie-ep.txt

diff --git a/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-ep.txt b/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-ep.txt
new file mode 100644
index 000000000000..d4168d79cceb
--- /dev/null
+++ b/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-ep.txt
@@ -0,0 +1,20 @@
+* Cadence PCIe endpoint controller
+
+Required properties:
+- compatible: should contain "cdns,cdns-pcie-ep" to identify the IP used.
+- reg: should contain the controller register base address and AXI interface
+  region base address respectively.
+- reg-names: Must be "reg" and "mem" respectively.
+
+Optional properties:
+- max-functions: maximum number of functions that can be configured.
+
+Example:
+
+	pci@fc000000 {
+		compatible = "cdns,cdns-pcie-ep";
+		reg = <0x0 0xfc000000 0x0 0x01000000>, /* Base Address */
+		      <0x0 0x80000000 0x0 0x40000000>; /* AXI region */
+		reg-names = "reg", "mem";
+		max-functions = /bits/ 8 <8>;
+	};
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 39+ messages in thread

* [PATCH 5/5] PCI: cadence: add EndPoint Controller driver for Cadence PCIe controller
  2017-11-23 15:01 [PATCH 0/5] PCI: Add support to the Cadence PCIe controller Cyrille Pitchen
                   ` (3 preceding siblings ...)
  2017-11-23 15:01 ` [PATCH 4/5] dt-bindings: PCI: cadence: Add DT bindings for Cadence PCIe endpoint controller Cyrille Pitchen
@ 2017-11-23 15:01 ` Cyrille Pitchen
  2017-12-01 12:20   ` Lorenzo Pieralisi
  2017-11-28 15:50 ` [PATCH 0/5] PCI: Add support to the " Lorenzo Pieralisi
  5 siblings, 1 reply; 39+ messages in thread
From: Cyrille Pitchen @ 2017-11-23 15:01 UTC (permalink / raw)
  To: bhelgaas, kishon, lorenzo.pieralisi, linux-pci
  Cc: adouglas, stelford, dgary, kgopi, eandrews, thomas.petazzoni,
	sureshp, nsekhar, linux-kernel, robh, devicetree,
	Cyrille Pitchen

This patch adds support to the Cadence PCIe controller in endpoint mode.

Signed-off-by: Cyrille Pitchen <cyrille.pitchen@free-electrons.com>
---
 drivers/pci/cadence/Kconfig           |   9 +
 drivers/pci/cadence/Makefile          |   1 +
 drivers/pci/cadence/pcie-cadence-ep.c | 553 ++++++++++++++++++++++++++++++++++
 3 files changed, 563 insertions(+)
 create mode 100644 drivers/pci/cadence/pcie-cadence-ep.c

diff --git a/drivers/pci/cadence/Kconfig b/drivers/pci/cadence/Kconfig
index 120306cae2aa..b2e6af71f39e 100644
--- a/drivers/pci/cadence/Kconfig
+++ b/drivers/pci/cadence/Kconfig
@@ -21,4 +21,13 @@ config PCIE_CADENCE_HOST
 	  mode. This PCIe controller may be embedded into many different vendors
 	  SoCs.
 
+config PCIE_CADENCE_EP
+	bool "Cadence PCIe endpoint controller"
+	depends on PCI_ENDPOINT
+	select PCIE_CADENCE
+	help
+	  Say Y here if you want to support the Cadence PCIe  controller in
+	  endpoint mode. This PCIe controller may be embedded into many
+	  different vendors SoCs.
+
 endif # PCI_CADENCE
diff --git a/drivers/pci/cadence/Makefile b/drivers/pci/cadence/Makefile
index d57d192d2595..61e9c8d6839d 100644
--- a/drivers/pci/cadence/Makefile
+++ b/drivers/pci/cadence/Makefile
@@ -1,2 +1,3 @@
 obj-$(CONFIG_PCIE_CADENCE) += pcie-cadence.o
 obj-$(CONFIG_PCIE_CADENCE_HOST) += pcie-cadence-host.o
+obj-$(CONFIG_PCIE_CADENCE_EP) += pcie-cadence-ep.o
diff --git a/drivers/pci/cadence/pcie-cadence-ep.c b/drivers/pci/cadence/pcie-cadence-ep.c
new file mode 100644
index 000000000000..a1d761101a9c
--- /dev/null
+++ b/drivers/pci/cadence/pcie-cadence-ep.c
@@ -0,0 +1,553 @@
+/*
+ * Cadence PCIe host controller driver.
+ *
+ * Copyright (c) 2017 Cadence
+ *
+ * Author: Cyrille Pitchen <cyrille.pitchen@free-electrons.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+#include <linux/kernel.h>
+#include <linux/of.h>
+#include <linux/pci-epc.h>
+#include <linux/platform_device.h>
+#include <linux/pm_runtime.h>
+#include <linux/sizes.h>
+#include <linux/delay.h>
+
+#include "pcie-cadence.h"
+
+#define CDNS_PCIE_EP_MIN_APERTURE		128	/* 128 bytes */
+#define CDNS_PCIE_EP_IRQ_PCI_ADDR_NONE		0x1
+#define CDNS_PCIE_EP_IRQ_PCI_ADDR_LEGACY	0x3
+
+/**
+ * struct cdns_pcie_ep_data - hardware specific data
+ * @max_regions: maximum nmuber of regions supported by hardware
+ */
+struct cdns_pcie_ep_data {
+	size_t				max_regions;
+};
+
+/**
+ * struct cdns_pcie_ep - private data for this PCIe endpoint controller driver
+ * @pcie: Cadence PCIe controller
+ * @data: pointer to a 'struct cdns_pcie_data'
+ */
+struct cdns_pcie_ep {
+	struct cdns_pcie		pcie;
+	const struct cdns_pcie_ep_data	*data;
+	struct pci_epc			*epc;
+	unsigned long			ob_region_map;
+	phys_addr_t			*ob_addr;
+	phys_addr_t			irq_phys_addr;
+	void __iomem			*irq_cpu_addr;
+	u64				irq_pci_addr;
+	u8				irq_pending;
+};
+
+static int cdns_pcie_ep_write_header(struct pci_epc *epc,
+				     struct pci_epf_header *hdr)
+{
+	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
+	struct cdns_pcie *pcie = &ep->pcie;
+	u8 fn = 0;
+
+	if (fn == 0) {
+		u32 id = CDNS_PCIE_LM_ID_VENDOR(hdr->vendorid) |
+			 CDNS_PCIE_LM_ID_SUBSYS(hdr->subsys_vendor_id);
+		cdns_pcie_writel(pcie, CDNS_PCIE_LM_ID, id);
+	}
+	cdns_pcie_ep_fn_writew(pcie, fn, PCI_DEVICE_ID, hdr->deviceid);
+	cdns_pcie_ep_fn_writeb(pcie, fn, PCI_REVISION_ID, hdr->revid);
+	cdns_pcie_ep_fn_writeb(pcie, fn, PCI_CLASS_PROG, hdr->progif_code);
+	cdns_pcie_ep_fn_writew(pcie, fn, PCI_CLASS_DEVICE,
+			       hdr->subclass_code | hdr->baseclass_code << 8);
+	cdns_pcie_ep_fn_writeb(pcie, fn, PCI_CACHE_LINE_SIZE,
+			       hdr->cache_line_size);
+	cdns_pcie_ep_fn_writew(pcie, fn, PCI_SUBSYSTEM_ID, hdr->subsys_id);
+	cdns_pcie_ep_fn_writeb(pcie, fn, PCI_INTERRUPT_PIN, hdr->interrupt_pin);
+
+	return 0;
+}
+
+static int cdns_pcie_ep_set_bar(struct pci_epc *epc, enum pci_barno bar,
+				dma_addr_t bar_phys, size_t size, int flags)
+{
+	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
+	struct cdns_pcie *pcie = &ep->pcie;
+	u32 addr0, addr1, reg, cfg, b, aperture, ctrl;
+	u8 fn = 0;
+	u64 sz;
+
+	/* BAR size is 2^(aperture + 7) */
+	sz = max_t(size_t, size, CDNS_PCIE_EP_MIN_APERTURE);
+	sz = 1ULL << fls64(sz - 1);
+	aperture = ilog2(sz) - 7; /* 128B -> 0, 256B -> 1, 512B -> 2, ... */
+
+	if ((flags & PCI_BASE_ADDRESS_SPACE) == PCI_BASE_ADDRESS_SPACE_IO) {
+		ctrl = CDNS_PCIE_LM_BAR_CFG_CTRL_IO_32BITS;
+	} else {
+		bool is_prefetch = !!(flags & PCI_BASE_ADDRESS_MEM_PREFETCH);
+		bool is_64bits = sz > SZ_2G;
+
+		if (is_64bits && (bar & 1))
+			return -EINVAL;
+
+		switch (is_64bits << 1 | is_prefetch) {
+		case 0:
+			ctrl = CDNS_PCIE_LM_BAR_CFG_CTRL_MEM_32BITS;
+			break;
+
+		case 1:
+			ctrl = CDNS_PCIE_LM_BAR_CFG_CTRL_PREFETCH_MEM_32BITS;
+			break;
+
+		case 2:
+			ctrl = CDNS_PCIE_LM_BAR_CFG_CTRL_MEM_64BITS;
+			break;
+
+		case 3:
+			ctrl = CDNS_PCIE_LM_BAR_CFG_CTRL_PREFETCH_MEM_64BITS;
+			break;
+		}
+	}
+
+	addr0 = lower_32_bits(bar_phys);
+	addr1 = upper_32_bits(bar_phys);
+	cdns_pcie_writel(pcie, CDNS_PCIE_AT_IB_EP_FUNC_BAR_ADDR0(fn, bar),
+			 addr0);
+	cdns_pcie_writel(pcie, CDNS_PCIE_AT_IB_EP_FUNC_BAR_ADDR1(fn, bar),
+			 addr1);
+
+	if (bar < BAR_4) {
+		reg = CDNS_PCIE_LM_EP_FUNC_BAR_CFG0(fn);
+		b = bar;
+	} else {
+		reg = CDNS_PCIE_LM_EP_FUNC_BAR_CFG1(fn);
+		b = bar - BAR_4;
+	}
+
+	cfg = cdns_pcie_readl(pcie, reg);
+	cfg &= ~(CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_APERTURE_MASK(b) |
+		 CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(b));
+	cfg |= (CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_APERTURE(b, aperture) |
+		CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_CTRL(b, ctrl));
+	cdns_pcie_writel(pcie, reg, cfg);
+
+	return 0;
+}
+
+static void cdns_pcie_ep_clear_bar(struct pci_epc *epc, enum pci_barno bar)
+{
+	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
+	struct cdns_pcie *pcie = &ep->pcie;
+	u32 reg, cfg, b, ctrl;
+	u8 fn = 0;
+
+	if (bar < BAR_4) {
+		reg = CDNS_PCIE_LM_EP_FUNC_BAR_CFG0(fn);
+		b = bar;
+	} else {
+		reg = CDNS_PCIE_LM_EP_FUNC_BAR_CFG1(fn);
+		b = bar - BAR_4;
+	}
+
+	ctrl = CDNS_PCIE_LM_BAR_CFG_CTRL_DISABLED;
+	cfg = cdns_pcie_readl(pcie, reg);
+	cfg &= ~(CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_APERTURE_MASK(b) |
+		 CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(b));
+	cfg |= CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(ctrl);
+	cdns_pcie_writel(pcie, reg, cfg);
+
+	cdns_pcie_writel(pcie, CDNS_PCIE_AT_IB_EP_FUNC_BAR_ADDR0(fn, bar), 0);
+	cdns_pcie_writel(pcie, CDNS_PCIE_AT_IB_EP_FUNC_BAR_ADDR1(fn, bar), 0);
+}
+
+static int cdns_pcie_ep_map_addr(struct pci_epc *epc, phys_addr_t addr,
+				 u64 pci_addr, size_t size)
+{
+	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
+	struct cdns_pcie *pcie = &ep->pcie;
+	u32 r;
+
+	r = find_first_zero_bit(&ep->ob_region_map, sizeof(ep->ob_region_map));
+	if (r >= ep->data->max_regions - 1) {
+		dev_err(&epc->dev, "no free outbound region\n");
+		return -EINVAL;
+	}
+
+	cdns_pcie_set_outbound_region(pcie, r, false, addr, pci_addr, size);
+
+	set_bit(r, &ep->ob_region_map);
+	ep->ob_addr[r] = addr;
+
+	return 0;
+}
+
+static void cdns_pcie_ep_unmap_addr(struct pci_epc *epc, phys_addr_t addr)
+{
+	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
+	struct cdns_pcie *pcie = &ep->pcie;
+	u32 r;
+
+	for (r = 0; r < ep->data->max_regions - 1; r++)
+		if (ep->ob_addr[r] == addr)
+			break;
+
+	if (r >= ep->data->max_regions - 1)
+		return;
+
+	cdns_pcie_reset_outbound_region(pcie, r);
+
+	ep->ob_addr[r] = 0;
+	clear_bit(r, &ep->ob_region_map);
+}
+
+static int cdns_pcie_ep_set_msi(struct pci_epc *epc, u8 mmc)
+{
+	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
+	struct cdns_pcie *pcie = &ep->pcie;
+	u32 cap = CDNS_PCIE_EP_FUNC_MSI_CAP_OFFSET;
+	u16 flags;
+	u8 fn = 0;
+
+	/* Validate the ID of the MSI Capability structure. */
+	if (cdns_pcie_ep_fn_readb(pcie, fn, cap) != PCI_CAP_ID_MSI)
+		return -EINVAL;
+
+	/*
+	 * Set the Multiple Message Capable bitfield into the Message Control
+	 * register.
+	 */
+	flags = cdns_pcie_ep_fn_readw(pcie, fn, cap + PCI_MSI_FLAGS);
+	flags = (flags & ~PCI_MSI_FLAGS_QMASK) | (mmc << 1);
+	flags |= PCI_MSI_FLAGS_64BIT;
+	flags &= ~PCI_MSI_FLAGS_MASKBIT;
+	cdns_pcie_ep_fn_writew(pcie, fn, cap + PCI_MSI_FLAGS, flags);
+
+	return 0;
+}
+
+static int cdns_pcie_ep_get_msi(struct pci_epc *epc)
+{
+	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
+	struct cdns_pcie *pcie = &ep->pcie;
+	u32 cap = CDNS_PCIE_EP_FUNC_MSI_CAP_OFFSET;
+	u16 flags, mmc, mme;
+	u8 fn = 0;
+
+	/* Validate the ID of the MSI Capability structure. */
+	if (cdns_pcie_ep_fn_readb(pcie, fn, cap) != PCI_CAP_ID_MSI)
+		return -EINVAL;
+
+	/* Validate that the MSI feature is actually enabled. */
+	flags = cdns_pcie_ep_fn_readw(pcie, fn, cap + PCI_MSI_FLAGS);
+	if (!(flags & PCI_MSI_FLAGS_ENABLE))
+		return -EINVAL;
+
+	/*
+	 * Get the Multiple Message Enable bitfield from the Message Control
+	 * register.
+	 */
+	mmc = (flags & PCI_MSI_FLAGS_QMASK) >> 1;
+	mme = (flags & PCI_MSI_FLAGS_QSIZE) >> 4;
+	if (mme > mmc)
+		mme = mmc;
+	if (mme > 5)
+		mme = 5;
+
+	return mme;
+}
+
+static void cdns_pcie_ep_assert_intx(struct cdns_pcie_ep *ep, u8 fn,
+				     u8 intx, bool is_asserted)
+{
+	struct cdns_pcie *pcie = &ep->pcie;
+	u32 r = ep->data->max_regions - 1;
+	u32 offset;
+	u16 status;
+	u8 msg_code;
+
+	intx &= 3;
+
+	/* Set the outbound region if needed. */
+	if (unlikely(ep->irq_pci_addr != CDNS_PCIE_EP_IRQ_PCI_ADDR_LEGACY)) {
+		/* Last region was reserved for IRQ writes. */
+		cdns_pcie_set_outbound_region_for_normal_msg(pcie, r,
+							     ep->irq_phys_addr);
+		ep->irq_pci_addr = CDNS_PCIE_EP_IRQ_PCI_ADDR_LEGACY;
+	}
+
+	if (is_asserted) {
+		ep->irq_pending |= BIT(intx);
+		msg_code = MSG_CODE_ASSERT_INTA + intx;
+	} else {
+		ep->irq_pending &= ~BIT(intx);
+		msg_code = MSG_CODE_DEASSERT_INTA + intx;
+	}
+
+	status = cdns_pcie_ep_fn_readw(pcie, fn, PCI_STATUS);
+	if (((status & PCI_STATUS_INTERRUPT) != 0) ^ (ep->irq_pending != 0)) {
+		status ^= PCI_STATUS_INTERRUPT;
+		cdns_pcie_ep_fn_writew(pcie, fn, PCI_STATUS, status);
+	}
+
+	offset = CDNS_PCIE_NORMAL_MSG_ROUTING(MSG_ROUTING_LOCAL) |
+		 CDNS_PCIE_NORMAL_MSG_CODE(msg_code) |
+		 CDNS_PCIE_MSG_NO_DATA;
+	writel(0, ep->irq_cpu_addr + offset);
+}
+
+static int cdns_pcie_ep_send_legacy_irq(struct cdns_pcie_ep *ep, u8 fn, u8 intx)
+{
+	u16 cmd;
+
+	cmd = cdns_pcie_ep_fn_readw(&ep->pcie, fn, PCI_COMMAND);
+	if (cmd & PCI_COMMAND_INTX_DISABLE)
+		return -EINVAL;
+
+	cdns_pcie_ep_assert_intx(ep, fn, intx, true);
+	mdelay(1);
+	cdns_pcie_ep_assert_intx(ep, fn, intx, false);
+	return 0;
+}
+
+static int cdns_pcie_ep_raise_irq(struct pci_epc *epc,
+				  enum pci_epc_irq_type type, u8 interrupt_num)
+{
+	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
+	struct cdns_pcie *pcie = &ep->pcie;
+	u32 cap = CDNS_PCIE_EP_FUNC_MSI_CAP_OFFSET;
+	u16 flags, mmc, mme, data, data_mask;
+	u8 msi_count;
+	u64 pci_addr, pci_addr_mask = 0xff;
+	u8 fn = 0;
+
+	/* Handle legacy IRQ. */
+	if (type == PCI_EPC_IRQ_LEGACY)
+		return cdns_pcie_ep_send_legacy_irq(ep, fn, 0);
+
+	/* Otherwise MSI. */
+	if (type != PCI_EPC_IRQ_MSI)
+		return -EINVAL;
+
+	/* Check whether the MSI feature has been enabled by the PCI host. */
+	flags = cdns_pcie_ep_fn_readw(pcie, fn, cap + PCI_MSI_FLAGS);
+	if (!(flags & PCI_MSI_FLAGS_ENABLE))
+		return -EINVAL;
+
+	/* Get the number of enabled MSIs */
+	mmc = (flags & PCI_MSI_FLAGS_QMASK) >> 1;
+	mme = (flags & PCI_MSI_FLAGS_QSIZE) >> 4;
+	if (mme > mmc)
+		mme = mmc;
+	if (mme > 5)
+		mme = 5;
+
+	msi_count = 1 << mme;
+	if (!interrupt_num || interrupt_num > msi_count)
+		return -EINVAL;
+
+	/* Compute the data value to be written. */
+	data_mask = msi_count - 1;
+	data = cdns_pcie_ep_fn_readw(pcie, fn, cap + PCI_MSI_DATA_64);
+	data = (data & ~data_mask) | ((interrupt_num - 1) & data_mask);
+
+	/* Get the PCI address where to write the data into. */
+	pci_addr = cdns_pcie_ep_fn_readl(pcie, fn, cap + PCI_MSI_ADDRESS_HI);
+	pci_addr <<= 32;
+	pci_addr |= cdns_pcie_ep_fn_readl(pcie, fn, cap + PCI_MSI_ADDRESS_LO);
+	pci_addr &= GENMASK_ULL(63, 2);
+
+	/* Set the outbound region if needed. */
+	if (unlikely(ep->irq_pci_addr != pci_addr)) {
+		/* Last region was reserved for IRQ writes. */
+		cdns_pcie_set_outbound_region(pcie, ep->data->max_regions - 1,
+					      false,
+					      ep->irq_phys_addr,
+					      pci_addr & ~pci_addr_mask,
+					      pci_addr_mask + 1);
+		ep->irq_pci_addr = pci_addr;
+	}
+	writew(data, ep->irq_cpu_addr + (pci_addr & pci_addr_mask));
+
+	return 0;
+}
+
+static int cdns_pcie_ep_start(struct pci_epc *epc)
+{
+	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
+	struct cdns_pcie *pcie = &ep->pcie;
+	struct pci_epf *epf;
+	u32 cfg;
+	u8 fn = 0;
+
+	/* Enable this endpoint function. */
+	cfg = cdns_pcie_readl(pcie, CDNS_PCIE_LM_EP_FUNC_CFG);
+	cfg |= BIT(fn);
+	cdns_pcie_writel(pcie, CDNS_PCIE_LM_EP_FUNC_CFG, cfg);
+
+	/*
+	 * Already linked-up: don't call directly pci_epc_linkup() because we've
+	 * already locked the epc->lock.
+	 */
+	list_for_each_entry(epf, &epc->pci_epf, list)
+		pci_epf_linkup(epf);
+
+	return 0;
+}
+
+static void cdns_pcie_ep_stop(struct pci_epc *epc)
+{
+	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
+	struct cdns_pcie *pcie = &ep->pcie;
+	u32 cfg;
+	u8 fn = 0;
+
+	/* Disable this endpoint function (function 0 can't be disabled). */
+	cfg = cdns_pcie_readl(pcie, CDNS_PCIE_LM_EP_FUNC_CFG);
+	cfg &= ~BIT(fn);
+	cdns_pcie_writel(pcie, CDNS_PCIE_LM_EP_FUNC_CFG, cfg);
+}
+
+static const struct pci_epc_ops cdns_pcie_epc_ops = {
+	.write_header	= cdns_pcie_ep_write_header,
+	.set_bar	= cdns_pcie_ep_set_bar,
+	.clear_bar	= cdns_pcie_ep_clear_bar,
+	.map_addr	= cdns_pcie_ep_map_addr,
+	.unmap_addr	= cdns_pcie_ep_unmap_addr,
+	.set_msi	= cdns_pcie_ep_set_msi,
+	.get_msi	= cdns_pcie_ep_get_msi,
+	.raise_irq	= cdns_pcie_ep_raise_irq,
+	.start		= cdns_pcie_ep_start,
+	.stop		= cdns_pcie_ep_stop,
+};
+
+static const struct cdns_pcie_ep_data cdns_pcie_ep_data = {
+	.max_regions	= 16,
+};
+
+static const struct of_device_id cdns_pcie_ep_of_match[] = {
+	{ .compatible = "cdns,cdns-pcie-ep",
+	  .data = &cdns_pcie_ep_data },
+
+	{ },
+};
+
+static int cdns_pcie_ep_probe(struct platform_device *pdev)
+{
+	struct device *dev = &pdev->dev;
+	struct device_node *np = dev->of_node;
+	const struct of_device_id *of_id;
+	struct cdns_pcie_ep *ep;
+	struct cdns_pcie *pcie;
+	struct pci_epc *epc;
+	struct resource *res;
+	size_t max_regions;
+	int ret;
+
+	ep = devm_kzalloc(dev, sizeof(*ep), GFP_KERNEL);
+	if (!ep)
+		return -ENOMEM;
+
+	platform_set_drvdata(pdev, ep);
+
+	pcie = &ep->pcie;
+	pcie->is_rc = false;
+
+	of_id = of_match_node(cdns_pcie_ep_of_match, np);
+	ep->data = (const struct cdns_pcie_ep_data *)of_id->data;
+
+	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "reg");
+	pcie->reg_base = devm_ioremap_resource(dev, res);
+	if (IS_ERR(pcie->reg_base)) {
+		dev_err(dev, "missing \"reg\"\n");
+		return PTR_ERR(pcie->reg_base);
+	}
+
+	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "mem");
+	if (!res) {
+		dev_err(dev, "missing \"mem\"\n");
+		return -EINVAL;
+	}
+	pcie->mem_res = res;
+
+	max_regions = ep->data->max_regions;
+	ep->ob_addr = devm_kzalloc(dev, max_regions * sizeof(*ep->ob_addr),
+				   GFP_KERNEL);
+	if (!ep->ob_addr)
+		return -ENOMEM;
+
+	pm_runtime_enable(dev);
+	ret = pm_runtime_get_sync(dev);
+	if (ret < 0) {
+		dev_err(dev, "pm_runtime_get_sync() failed\n");
+		goto err_get_sync;
+	}
+
+	/* Disable all but function 0 (anyway BIT(0) is hardwired to 1). */
+	cdns_pcie_writel(pcie, CDNS_PCIE_LM_EP_FUNC_CFG, BIT(0));
+
+	epc = devm_pci_epc_create(dev, &cdns_pcie_epc_ops);
+	if (IS_ERR(epc)) {
+		dev_err(dev, "failed to create epc device\n");
+		ret = PTR_ERR(epc);
+		goto err_init;
+	}
+
+	ep->epc = epc;
+	epc_set_drvdata(epc, ep);
+
+	ret = of_property_read_u8(np, "max-functions", &epc->max_functions);
+	if (ret < 0)
+		epc->max_functions = 1;
+
+	ret = pci_epc_mem_init(epc, pcie->mem_res->start,
+			       resource_size(pcie->mem_res));
+	if (ret < 0) {
+		dev_err(dev, "failed to initialize the memory space\n");
+		goto err_init;
+	}
+
+	ep->irq_cpu_addr = pci_epc_mem_alloc_addr(epc, &ep->irq_phys_addr,
+						  SZ_128K);
+	if (!ep->irq_cpu_addr) {
+		dev_err(dev, "failed to reserve memory space for MSI\n");
+		ret = -ENOMEM;
+		goto free_epc_mem;
+	}
+	ep->irq_pci_addr = CDNS_PCIE_EP_IRQ_PCI_ADDR_NONE;
+
+	return 0;
+
+ free_epc_mem:
+	pci_epc_mem_exit(epc);
+
+ err_init:
+	pm_runtime_put_sync(dev);
+
+ err_get_sync:
+	pm_runtime_disable(dev);
+
+	return ret;
+}
+
+static struct platform_driver cdns_pcie_ep_driver = {
+	.driver = {
+		.name = "cdns-pcie-ep",
+		.of_match_table = cdns_pcie_ep_of_match,
+	},
+	.probe = cdns_pcie_ep_probe,
+};
+builtin_platform_driver(cdns_pcie_ep_driver);
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 39+ messages in thread

* Re: [PATCH 2/5] dt-bindings: PCI: cadence: Add DT bindings for Cadence PCIe host controller
  2017-11-23 15:01 ` [PATCH 2/5] dt-bindings: PCI: cadence: Add DT bindings for Cadence PCIe host controller Cyrille Pitchen
@ 2017-11-26 19:32   ` Rob Herring
  0 siblings, 0 replies; 39+ messages in thread
From: Rob Herring @ 2017-11-26 19:32 UTC (permalink / raw)
  To: Cyrille Pitchen
  Cc: bhelgaas, kishon, lorenzo.pieralisi, linux-pci, adouglas,
	stelford, dgary, kgopi, eandrews, thomas.petazzoni, sureshp,
	nsekhar, linux-kernel, devicetree

On Thu, Nov 23, 2017 at 04:01:47PM +0100, Cyrille Pitchen wrote:
> From: Scott Telford <stelford@cadence.com>
> 
> This patch adds documentation for the DT bindings of the Cadence PCIe
> controller when configured in host (Root Complex) mode.
> 
> Signed-off-by: Scott Telford <stelford@cadence.com>
> Signed-off-by: Cyrille Pitchen <cyrille.pitchen@free-electrons.com>
> 
> dt-bindings: PCI: cadence: host fixup
> ---
>  .../bindings/pci/cdns,cdns-pcie-host.txt           | 54 ++++++++++++++++++++++
>  1 file changed, 54 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/pci/cdns,cdns-pcie-host.txt

A few nits, otherwise:

Acked-by: Rob Herring <robh@kernel.org>

> 
> diff --git a/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-host.txt b/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-host.txt
> new file mode 100644
> index 000000000000..4b3df8ffd5e1
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-host.txt
> @@ -0,0 +1,54 @@
> +* Cadence PCIe host controller
> +
> +This PCIe controller inherits the base properties defined in
> +host-generic-pci.txt.
> +
> +Required properties:
> +- compatible: should contain "cdns,cdns-pcie-host" to identify the IP used.
> +- reg: Should contain the PCIe configuration window base address, controller
> +  register base address, and AXI interface region base address respectively.
> +- reg-names: Must be "cfg", "reg" and "mem" respectively.
> +- #address-cells: set to <3>
> +- #size-cells: set to <2>
> +- device_type: set to "pci"
> +- ranges: ranges for the PCI memory and I/O regions
> +- #interrupt-cells: set to <1>
> +- interrupt-map-mask and interrupt-map: standard PCI
> +	properties to define the mapping of the PCIe interface to interrupt
> +	numbers.
> +
> +Example:
> +
> +	pci@fb000000 {

pcie@...

Usually the first address is the unit-address, so either change the 
order in reg or change this.

> +		compatible = "cdns,cdns-pcie-host";
> +		device_type = "pci";
> +		#address-cells = <3>;
> +		#size-cells = <2>;
> +		bus-range = <0x0 0xff>;
> +		linux,pci-domain = <0>;
> +
> +		// CPU_PHYSICAL(2)  SIZE(2)

Drop the comment. If you know what 'reg' is then it is obvious. Also, 
don't use C++ style comments.

> +		reg = <0x0 0x41000000  0x0 0x00001000>, /* Config space */
> +		      <0x0 0xfb000000  0x0 0x01000000>, /* Base Address */
> +		      <0x0 0x40000000  0x0 0x04000000>; /* AXI region */
> +		reg-names = "cfg", "reg", "mem";
> +
> +		// BUS_ADDRESS(3)  CPU_PHYSICAL(1)  SIZE(2)

ditto

> +		ranges =
> +			<0x02000000 0x0 0x42000000  0x0 0x42000000  0x0 0x1000000>,
> +			<0x01000000 0x0 0x43000000  0x0 0x43000000  0x0 0x0010000>;
> +
> +
> +		#interrupt-cells = <0x1>;
> +
> +		// PCI_DEVICE(3)  INT#(1)  CONTROLLER(PHANDLE)  CONTROLLER_DATA(5)
> +		interrupt-map = <0x0 0x0 0x0  0x1  &gic  0x0 0x0 0x0 14 0x1
> +				 0x0 0x0 0x0  0x2  &gic  0x0 0x0 0x0 15 0x1
> +				 0x0 0x0 0x0  0x3  &gic  0x0 0x0 0x0 16 0x1
> +				 0x0 0x0 0x0  0x4  &gic  0x0 0x0 0x0 17 0x1>;
> +
> +		// PCI_DEVICE(3)  INT#(1)
> +		interrupt-map-mask = <0x0000 0x0 0x0  0x7>;
> +
> +		msi-parent = <&its_pci>;
> +	};
> -- 
> 2.11.0
> 
> --
> To unsubscribe from this list: send the line "unsubscribe devicetree" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 4/5] dt-bindings: PCI: cadence: Add DT bindings for Cadence PCIe endpoint controller
  2017-11-23 15:01 ` [PATCH 4/5] dt-bindings: PCI: cadence: Add DT bindings for Cadence PCIe endpoint controller Cyrille Pitchen
@ 2017-11-26 19:33   ` Rob Herring
  0 siblings, 0 replies; 39+ messages in thread
From: Rob Herring @ 2017-11-26 19:33 UTC (permalink / raw)
  To: Cyrille Pitchen
  Cc: bhelgaas, kishon, lorenzo.pieralisi, linux-pci, adouglas,
	stelford, dgary, kgopi, eandrews, thomas.petazzoni, sureshp,
	nsekhar, linux-kernel, devicetree

On Thu, Nov 23, 2017 at 04:01:49PM +0100, Cyrille Pitchen wrote:
> This patch documents the DT bindings for the Cadence PCIe controller
> when configured in endpoint mode.
> 
> Signed-off-by: Cyrille Pitchen <cyrille.pitchen@free-electrons.com>
> ---
>  .../devicetree/bindings/pci/cdns,cdns-pcie-ep.txt    | 20 ++++++++++++++++++++
>  1 file changed, 20 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/pci/cdns,cdns-pcie-ep.txt

Acked-by: Rob Herring <robh@kernel.org>

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 0/5] PCI: Add support to the Cadence PCIe controller
  2017-11-23 15:01 [PATCH 0/5] PCI: Add support to the Cadence PCIe controller Cyrille Pitchen
                   ` (4 preceding siblings ...)
  2017-11-23 15:01 ` [PATCH 5/5] PCI: cadence: add EndPoint Controller driver for Cadence PCIe controller Cyrille Pitchen
@ 2017-11-28 15:50 ` Lorenzo Pieralisi
  2017-11-30  7:13   ` Kishon Vijay Abraham I
  5 siblings, 1 reply; 39+ messages in thread
From: Lorenzo Pieralisi @ 2017-11-28 15:50 UTC (permalink / raw)
  To: Cyrille Pitchen
  Cc: bhelgaas, kishon, linux-pci, adouglas, stelford, dgary, kgopi,
	eandrews, thomas.petazzoni, sureshp, nsekhar, linux-kernel, robh,
	devicetree

On Thu, Nov 23, 2017 at 04:01:45PM +0100, Cyrille Pitchen wrote:
> Hi all,
> 
> this series of patches adds support to the Cadence PCIe controller.
> It was tested on a ARM64 platform emulated by a Palladium running both
> linux-next (next-20171123) and pci-next kernels.
> 
> The host mode was tested with some PCIe devices connected to the Palladium
> through a speed-bridge. Some of those devices were a USB host controller
> and a SATA controller. The PCIe host controller was also tested with a
> second controller configured in endpoint mode and connected back to back
> to the first controller.
> 
> The EndPoint Controller (EPC) driver of this series was tested with the
> pci-epf-test.c EndPoint Function (EPF) driver and the pcitest userspace
> program.
> 
> For linux-next, I applied this series on top of Kishon's patch
> ("PCI: endpoint: Use EPC's device in dma_alloc_coherent/dma_free_coherent")
> otherwise dma_alloc_coherent() fails when called by pci_epf_alloc_space().
> 
> Also, I patched drivers/Makefile rather than drivers/pci/Makefile to make
> the drivers/pci/cadence/pcie-cadence-ep.o linked after
> drivers/pci/endpoint/*.o objects, otherwise the built-in pci-cadence-ep
> driver would be probed before the PCI endpoint framework would have been
> initialized, which results in a kernel crash.

Nice :( - isn't there a way to improve this (ie probe deferral or
registering the EPF bus earlier) ?

> I guess this is the reason why the "pci/dwc" line was also put in
> drivers/Makefile, right after the "pci/endpoint" line.

Or probably the other way around - see commit 5e8cb4033807

@Kishon, thoughts ?

Thanks,
Lorenzo

> Best regards,
> 
> Cyrille
> 
> Cyrille Pitchen (4):
>   PCI: Add vendor ID for Cadence
>   PCI: cadence: Add host driver for Cadence PCIe controller
>   dt-bindings: PCI: cadence: Add DT bindings for Cadence PCIe endpoint
>     controller
>   PCI: cadence: add EndPoint Controller driver for Cadence PCIe
>     controller
> 
> Scott Telford (1):
>   dt-bindings: PCI: cadence: Add DT bindings for Cadence PCIe host
>     controller
> 
>  .../devicetree/bindings/pci/cdns,cdns-pcie-ep.txt  |  20 +
>  .../bindings/pci/cdns,cdns-pcie-host.txt           |  54 ++
>  drivers/Makefile                                   |   1 +
>  drivers/pci/Kconfig                                |   1 +
>  drivers/pci/cadence/Kconfig                        |  33 ++
>  drivers/pci/cadence/Makefile                       |   3 +
>  drivers/pci/cadence/pcie-cadence-ep.c              | 553 +++++++++++++++++++++
>  drivers/pci/cadence/pcie-cadence-host.c            | 425 ++++++++++++++++
>  drivers/pci/cadence/pcie-cadence.c                 | 110 ++++
>  drivers/pci/cadence/pcie-cadence.h                 | 325 ++++++++++++
>  include/linux/pci_ids.h                            |   2 +
>  11 files changed, 1527 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/pci/cdns,cdns-pcie-ep.txt
>  create mode 100644 Documentation/devicetree/bindings/pci/cdns,cdns-pcie-host.txt
>  create mode 100644 drivers/pci/cadence/Kconfig
>  create mode 100644 drivers/pci/cadence/Makefile
>  create mode 100644 drivers/pci/cadence/pcie-cadence-ep.c
>  create mode 100644 drivers/pci/cadence/pcie-cadence-host.c
>  create mode 100644 drivers/pci/cadence/pcie-cadence.c
>  create mode 100644 drivers/pci/cadence/pcie-cadence.h
> 
> -- 
> 2.11.0
> 

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 3/5] PCI: cadence: Add host driver for Cadence PCIe controller
  2017-11-23 15:01 ` [PATCH 3/5] PCI: cadence: Add host driver for Cadence PCIe controller Cyrille Pitchen
@ 2017-11-28 20:41   ` Bjorn Helgaas
  2017-11-28 20:46     ` Bjorn Helgaas
                       ` (3 more replies)
  2017-11-29 17:34   ` Lorenzo Pieralisi
  2017-11-29 18:25   ` Lorenzo Pieralisi
  2 siblings, 4 replies; 39+ messages in thread
From: Bjorn Helgaas @ 2017-11-28 20:41 UTC (permalink / raw)
  To: Cyrille Pitchen
  Cc: bhelgaas, kishon, lorenzo.pieralisi, linux-pci, adouglas,
	stelford, dgary, kgopi, eandrews, thomas.petazzoni, sureshp,
	nsekhar, linux-kernel, robh, devicetree

On Thu, Nov 23, 2017 at 04:01:48PM +0100, Cyrille Pitchen wrote:
> This patch adds support to the Cadence PCIe controller in host mode.
> 
> Signed-off-by: Cyrille Pitchen <cyrille.pitchen@free-electrons.com>
> ---
>  drivers/Makefile                        |   1 +
>  drivers/pci/Kconfig                     |   1 +
>  drivers/pci/cadence/Kconfig             |  24 ++
>  drivers/pci/cadence/Makefile            |   2 +
>  drivers/pci/cadence/pcie-cadence-host.c | 425 ++++++++++++++++++++++++++++++++
>  drivers/pci/cadence/pcie-cadence.c      | 110 +++++++++
>  drivers/pci/cadence/pcie-cadence.h      | 325 ++++++++++++++++++++++++
>  7 files changed, 888 insertions(+)
>  create mode 100644 drivers/pci/cadence/Kconfig
>  create mode 100644 drivers/pci/cadence/Makefile
>  create mode 100644 drivers/pci/cadence/pcie-cadence-host.c
>  create mode 100644 drivers/pci/cadence/pcie-cadence.c
>  create mode 100644 drivers/pci/cadence/pcie-cadence.h

I prefer a single file per driver.  I assume you're anticipating
something like dwc, where the DesignWare core is incorporated into
several devices in slightly different ways.  But it doesn't look like
that's here yet, and personally, I'd rather split out the required
things when they actually become required, not ahead of time.

> diff --git a/drivers/Makefile b/drivers/Makefile
> index 1d034b680431..27bdd98784d9 100644
> --- a/drivers/Makefile
> +++ b/drivers/Makefile
> @@ -18,6 +18,7 @@ obj-y				+= pwm/
>  
>  obj-$(CONFIG_PCI)		+= pci/
>  obj-$(CONFIG_PCI_ENDPOINT)	+= pci/endpoint/
> +obj-$(CONFIG_PCI_CADENCE)	+= pci/cadence/

I can't remember why we added CONFIG_PCI_ENDPOINT here instead of in
drivers/pci/Makefile.  Is there any way to move both CONFIG_PCI_ENDPOINT
and CONFIG_PCI_CADENCE into drivers/pci/Makefile so this is better
encapsulated?

>  # PCI dwc controller drivers
>  obj-y				+= pci/dwc/
>...

> + * struct cdns_pcie_rc_data - hardware specific data

"cdns" is a weird abbreviation for "Cadence", since "Cadence" doesn't
contain an "s".

>...
> +static int cdns_pcie_parse_request_of_pci_ranges(struct device *dev,
> +						 struct list_head *resources,
> +						 struct resource **bus_range)
> +{
> +	int err, res_valid = 0;
> +	struct device_node *np = dev->of_node;
> +	resource_size_t iobase;
> +	struct resource_entry *win, *tmp;
> +
> +	err = of_pci_get_host_bridge_resources(np, 0, 0xff, resources, &iobase);
> +	if (err)
> +		return err;
> +
> +	err = devm_request_pci_bus_resources(dev, resources);
> +	if (err)
> +		return err;
> +
> +	resource_list_for_each_entry_safe(win, tmp, resources) {
> +		struct resource *res = win->res;
> +
> +		switch (resource_type(res)) {
> +		case IORESOURCE_IO:
> +			err = pci_remap_iospace(res, iobase);
> +			if (err) {
> +				dev_warn(dev, "error %d: failed to map resource %pR\n",
> +					 err, res);
> +				resource_list_destroy_entry(win);
> +			}
> +			break;
> +		case IORESOURCE_MEM:
> +			res_valid |= !(res->flags & IORESOURCE_PREFETCH);
> +			break;
> +		case IORESOURCE_BUS:
> +			*bus_range = res;
> +			break;
> +		}
> +	}
> +
> +	if (res_valid)
> +		return 0;
> +
> +	dev_err(dev, "non-prefetchable memory resource required\n");
> +	return -EINVAL;
> +}

The code above is starting to look awfully familiar.  I wonder if it's
time to think about some PCI-internal interface that can encapsulate
this.  In this case, there's really nothing Cadence-specific here.
There are other callers where there *is* vendor-specific code, but
possibly that could be handled by returning pointers to bus number,
I/O port, and MMIO resources so the caller could do the
vendor-specific stuff?

Bjorn

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 3/5] PCI: cadence: Add host driver for Cadence PCIe controller
  2017-11-28 20:41   ` Bjorn Helgaas
@ 2017-11-28 20:46     ` Bjorn Helgaas
  2017-11-29  8:19     ` Thomas Petazzoni
                       ` (2 subsequent siblings)
  3 siblings, 0 replies; 39+ messages in thread
From: Bjorn Helgaas @ 2017-11-28 20:46 UTC (permalink / raw)
  To: Cyrille Pitchen
  Cc: bhelgaas, kishon, lorenzo.pieralisi, linux-pci, adouglas,
	stelford, dgary, kgopi, eandrews, thomas.petazzoni, sureshp,
	nsekhar, linux-kernel, robh, devicetree

On Tue, Nov 28, 2017 at 02:41:14PM -0600, Bjorn Helgaas wrote:
> On Thu, Nov 23, 2017 at 04:01:48PM +0100, Cyrille Pitchen wrote:
> > diff --git a/drivers/Makefile b/drivers/Makefile
> > index 1d034b680431..27bdd98784d9 100644
> > --- a/drivers/Makefile
> > +++ b/drivers/Makefile
> > @@ -18,6 +18,7 @@ obj-y				+= pwm/
> >  
> >  obj-$(CONFIG_PCI)		+= pci/
> >  obj-$(CONFIG_PCI_ENDPOINT)	+= pci/endpoint/
> > +obj-$(CONFIG_PCI_CADENCE)	+= pci/cadence/
> 
> I can't remember why we added CONFIG_PCI_ENDPOINT here instead of in
> drivers/pci/Makefile.  Is there any way to move both CONFIG_PCI_ENDPOINT
> and CONFIG_PCI_CADENCE into drivers/pci/Makefile so this is better
> encapsulated?

I now see the note in your cover letter.  If there is a reason for
this, it should go in the changelog to forestall questions like this.
But what's in the cover letter isn't quite enough because it doesn't
answer the question of why drivers/pci/Makefile isn't sufficient to
control the ordering of pci/dwc and pci/endpoint.

And sorry, I now see Lorenzo's similar query so he's already jumped on
this :)

> >  # PCI dwc controller drivers
> >  obj-y				+= pci/dwc/

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 3/5] PCI: cadence: Add host driver for Cadence PCIe controller
  2017-11-28 20:41   ` Bjorn Helgaas
  2017-11-28 20:46     ` Bjorn Helgaas
@ 2017-11-29  8:19     ` Thomas Petazzoni
  2017-11-29 15:55       ` Bjorn Helgaas
  2017-11-29 14:14     ` Lorenzo Pieralisi
  2017-12-01 10:37     ` Cyrille Pitchen
  3 siblings, 1 reply; 39+ messages in thread
From: Thomas Petazzoni @ 2017-11-29  8:19 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: Cyrille Pitchen, bhelgaas, kishon, lorenzo.pieralisi, linux-pci,
	adouglas, stelford, dgary, kgopi, eandrews, sureshp, nsekhar,
	linux-kernel, robh, devicetree

Hello,

On Tue, 28 Nov 2017 14:41:14 -0600, Bjorn Helgaas wrote:

> > + * struct cdns_pcie_rc_data - hardware specific data  
> 
> "cdns" is a weird abbreviation for "Cadence", since "Cadence" doesn't
> contain an "s".

cdns is the official Device Tree binding vendor prefix for Cadence:

$ grep Cadence Documentation/devicetree/bindings/vendor-prefixes.txt
cdns	Cadence Design Systems Inc.

And it is already widely used throughout the kernel for Cadence
drivers. See drivers/watchdog/cadence_wdt.c, drivers/spi/spi-cadence.c,
drivers/i2c/busses/i2c-cadence.c, etc.

Best regards,

Thomas Petazzoni
-- 
Thomas Petazzoni, CTO, Free Electrons
Embedded Linux and Kernel engineering
http://free-electrons.com

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 3/5] PCI: cadence: Add host driver for Cadence PCIe controller
  2017-11-28 20:41   ` Bjorn Helgaas
  2017-11-28 20:46     ` Bjorn Helgaas
  2017-11-29  8:19     ` Thomas Petazzoni
@ 2017-11-29 14:14     ` Lorenzo Pieralisi
  2017-12-01 10:37     ` Cyrille Pitchen
  3 siblings, 0 replies; 39+ messages in thread
From: Lorenzo Pieralisi @ 2017-11-29 14:14 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: Cyrille Pitchen, bhelgaas, kishon, linux-pci, adouglas, stelford,
	dgary, kgopi, eandrews, thomas.petazzoni, sureshp, nsekhar,
	linux-kernel, robh, devicetree

On Tue, Nov 28, 2017 at 02:41:14PM -0600, Bjorn Helgaas wrote:

[...]

> > +static int cdns_pcie_parse_request_of_pci_ranges(struct device *dev,
> > +						 struct list_head *resources,
> > +						 struct resource **bus_range)
> > +{
> > +	int err, res_valid = 0;
> > +	struct device_node *np = dev->of_node;
> > +	resource_size_t iobase;
> > +	struct resource_entry *win, *tmp;
> > +
> > +	err = of_pci_get_host_bridge_resources(np, 0, 0xff, resources, &iobase);
> > +	if (err)
> > +		return err;
> > +
> > +	err = devm_request_pci_bus_resources(dev, resources);
> > +	if (err)
> > +		return err;
> > +
> > +	resource_list_for_each_entry_safe(win, tmp, resources) {
> > +		struct resource *res = win->res;
> > +
> > +		switch (resource_type(res)) {
> > +		case IORESOURCE_IO:
> > +			err = pci_remap_iospace(res, iobase);
> > +			if (err) {
> > +				dev_warn(dev, "error %d: failed to map resource %pR\n",
> > +					 err, res);
> > +				resource_list_destroy_entry(win);
> > +			}
> > +			break;
> > +		case IORESOURCE_MEM:
> > +			res_valid |= !(res->flags & IORESOURCE_PREFETCH);
> > +			break;
> > +		case IORESOURCE_BUS:
> > +			*bus_range = res;
> > +			break;
> > +		}
> > +	}
> > +
> > +	if (res_valid)
> > +		return 0;
> > +
> > +	dev_err(dev, "non-prefetchable memory resource required\n");
> > +	return -EINVAL;
> > +}
> 
> The code above is starting to look awfully familiar.  I wonder if it's
> time to think about some PCI-internal interface that can encapsulate
> this.  In this case, there's really nothing Cadence-specific here.
> There are other callers where there *is* vendor-specific code, but
> possibly that could be handled by returning pointers to bus number,
> I/O port, and MMIO resources so the caller could do the
> vendor-specific stuff?

Yes and that's not the only one, pattern below is duplicated
(with some minor differences across host bridges that I think
can be managed through function parameters), it is probably worth
moving them both into a core code helper.

list_splice_init(&resources, &bridge->windows);
bridge->dev.parent = dev;
bridge->busnr = bus;
bridge->ops = &pci_ops;
bridge->map_irq = of_irq_parse_and_map_pci;
bridge->swizzle_irq = pci_common_swizzle;

ret = pci_scan_root_bus_bridge(bridge);
if (ret < 0) {
	dev_err(dev, "Scanning root bridge failed");
	goto err_init;
}

bus = bridge->bus;
pci_bus_size_bridges(bus);
pci_bus_assign_resources(bus);

list_for_each_entry(child, &bus->children, node)
	pcie_bus_configure_settings(child);

pci_bus_add_devices(bus);

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 3/5] PCI: cadence: Add host driver for Cadence PCIe controller
  2017-11-29  8:19     ` Thomas Petazzoni
@ 2017-11-29 15:55       ` Bjorn Helgaas
  0 siblings, 0 replies; 39+ messages in thread
From: Bjorn Helgaas @ 2017-11-29 15:55 UTC (permalink / raw)
  To: Thomas Petazzoni
  Cc: Cyrille Pitchen, bhelgaas, kishon, lorenzo.pieralisi, linux-pci,
	adouglas, stelford, dgary, kgopi, eandrews, sureshp, nsekhar,
	linux-kernel, robh, devicetree

On Wed, Nov 29, 2017 at 09:19:29AM +0100, Thomas Petazzoni wrote:
> Hello,
> 
> On Tue, 28 Nov 2017 14:41:14 -0600, Bjorn Helgaas wrote:
> 
> > > + * struct cdns_pcie_rc_data - hardware specific data  
> > 
> > "cdns" is a weird abbreviation for "Cadence", since "Cadence" doesn't
> > contain an "s".
> 
> cdns is the official Device Tree binding vendor prefix for Cadence:
> 
> $ grep Cadence Documentation/devicetree/bindings/vendor-prefixes.txt
> cdns	Cadence Design Systems Inc.
> 
> And it is already widely used throughout the kernel for Cadence
> drivers. See drivers/watchdog/cadence_wdt.c, drivers/spi/spi-cadence.c,
> drivers/i2c/busses/i2c-cadence.c, etc.

Hard to argue with that!  Thanks!

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 3/5] PCI: cadence: Add host driver for Cadence PCIe controller
  2017-11-23 15:01 ` [PATCH 3/5] PCI: cadence: Add host driver for Cadence PCIe controller Cyrille Pitchen
  2017-11-28 20:41   ` Bjorn Helgaas
@ 2017-11-29 17:34   ` Lorenzo Pieralisi
  2017-12-03 20:44     ` Cyrille Pitchen
  2017-11-29 18:25   ` Lorenzo Pieralisi
  2 siblings, 1 reply; 39+ messages in thread
From: Lorenzo Pieralisi @ 2017-11-29 17:34 UTC (permalink / raw)
  To: Cyrille Pitchen
  Cc: bhelgaas, kishon, linux-pci, adouglas, stelford, dgary, kgopi,
	eandrews, thomas.petazzoni, sureshp, nsekhar, linux-kernel, robh,
	devicetree

On Thu, Nov 23, 2017 at 04:01:48PM +0100, Cyrille Pitchen wrote:
> This patch adds support to the Cadence PCIe controller in host mode.

Bjorn already commented on this, it would be good to add some
of the cover letter details in this log.

> Signed-off-by: Cyrille Pitchen <cyrille.pitchen@free-electrons.com>
> ---
>  drivers/Makefile                        |   1 +
>  drivers/pci/Kconfig                     |   1 +
>  drivers/pci/cadence/Kconfig             |  24 ++
>  drivers/pci/cadence/Makefile            |   2 +
>  drivers/pci/cadence/pcie-cadence-host.c | 425 ++++++++++++++++++++++++++++++++

You should also update the MAINTAINERS file.

>  drivers/pci/cadence/pcie-cadence.c      | 110 +++++++++
>  drivers/pci/cadence/pcie-cadence.h      | 325 ++++++++++++++++++++++++
>  7 files changed, 888 insertions(+)
>  create mode 100644 drivers/pci/cadence/Kconfig
>  create mode 100644 drivers/pci/cadence/Makefile
>  create mode 100644 drivers/pci/cadence/pcie-cadence-host.c
>  create mode 100644 drivers/pci/cadence/pcie-cadence.c
>  create mode 100644 drivers/pci/cadence/pcie-cadence.h
>
> diff --git a/drivers/Makefile b/drivers/Makefile
> index 1d034b680431..27bdd98784d9 100644
> --- a/drivers/Makefile
> +++ b/drivers/Makefile
> @@ -18,6 +18,7 @@ obj-y                               += pwm/
>
>  obj-$(CONFIG_PCI)            += pci/
>  obj-$(CONFIG_PCI_ENDPOINT)   += pci/endpoint/
> +obj-$(CONFIG_PCI_CADENCE)    += pci/cadence/

Already commented on the cover letter.

>  # PCI dwc controller drivers
>  obj-y                                += pci/dwc/
>
> diff --git a/drivers/pci/Kconfig b/drivers/pci/Kconfig
> index 90944667ccea..2471b2e36b8b 100644
> --- a/drivers/pci/Kconfig
> +++ b/drivers/pci/Kconfig
> @@ -144,6 +144,7 @@ config PCI_HYPERV
>            PCI devices from a PCI backend to support PCI driver domains.
>
>  source "drivers/pci/hotplug/Kconfig"
> +source "drivers/pci/cadence/Kconfig"
>  source "drivers/pci/dwc/Kconfig"
>  source "drivers/pci/host/Kconfig"
>  source "drivers/pci/endpoint/Kconfig"
> diff --git a/drivers/pci/cadence/Kconfig b/drivers/pci/cadence/Kconfig
> new file mode 100644
> index 000000000000..120306cae2aa
> --- /dev/null
> +++ b/drivers/pci/cadence/Kconfig
> @@ -0,0 +1,24 @@
> +menuconfig PCI_CADENCE
> +     bool "Cadence PCI controllers support"
> +     depends on PCI && HAS_IOMEM
> +     help
> +       Say Y here if you want to support some Cadence PCI controller.
> +
> +       When in doubt, say N.
> +
> +if PCI_CADENCE
> +
> +config PCIE_CADENCE
> +     bool
> +
> +config PCIE_CADENCE_HOST
> +     bool "Cadence PCIe host controller"
> +     depends on OF
> +     depends on PCI_MSI_IRQ_DOMAIN

I do not see the reason for this dependency in the code.

> +     select PCIE_CADENCE
> +     help
> +       Say Y here if you want to support the Cadence PCIe controller in host
> +       mode. This PCIe controller may be embedded into many different vendors
> +       SoCs.
> +
> +endif # PCI_CADENCE
> diff --git a/drivers/pci/cadence/Makefile b/drivers/pci/cadence/Makefile
> new file mode 100644
> index 000000000000..d57d192d2595
> --- /dev/null
> +++ b/drivers/pci/cadence/Makefile
> @@ -0,0 +1,2 @@
> +obj-$(CONFIG_PCIE_CADENCE) += pcie-cadence.o
> +obj-$(CONFIG_PCIE_CADENCE_HOST) += pcie-cadence-host.o
> diff --git a/drivers/pci/cadence/pcie-cadence-host.c b/drivers/pci/cadence/pcie-cadence-host.c
> new file mode 100644
> index 000000000000..252471e72a93
> --- /dev/null
> +++ b/drivers/pci/cadence/pcie-cadence-host.c
> @@ -0,0 +1,425 @@
> +/*
> + * Cadence PCIe host controller driver.
> + *
> + * Copyright (c) 2017 Cadence
> + *
> + * Author: Cyrille Pitchen <cyrille.pitchen@free-electrons.com>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program.  If not, see <http://www.gnu.org/licenses/>.
> + */
> +#include <linux/kernel.h>
> +#include <linux/of_address.h>
> +#include <linux/of_pci.h>
> +#include <linux/platform_device.h>
> +#include <linux/pm_runtime.h>
> +
> +#include "pcie-cadence.h"
> +
> +/**
> + * struct cdns_pcie_rc_data - hardware specific data
> + * @max_regions: maximum number of regions supported by the hardware
> + * @vendor_id: PCI vendor ID
> + * @device_id: PCI device ID
> + * @no_bar_nbits: Number of bits to keep for inbound (PCIe -> CPU) address
> + *                translation (nbits sets into the "no BAR match" register).
> + */
> +struct cdns_pcie_rc_data {
> +     size_t                  max_regions;

Reason for it to be size_t ?

> +     u16                     vendor_id;
> +     u16                     device_id;
> +     u8                      no_bar_nbits;
> +};

I think that this data should come from DT (?) more below.

> +
> +/**
> + * struct cdns_pcie_rc - private data for this PCIe Root Complex driver
> + * @pcie: Cadence PCIe controller
> + * @dev: pointer to PCIe device
> + * @cfg_res: start/end offsets in the physical system memory to map PCI
> + *           configuration space accesses
> + * @bus_range: first/last buses behind the PCIe host controller
> + * @cfg_base: IO mapped window to access the PCI configuration space of a
> + *            single function at a time
> + * @data: pointer to a 'struct cdns_pcie_rc_data'
> + */
> +struct cdns_pcie_rc {
> +     struct cdns_pcie        pcie;
> +     struct device           *dev;
> +     struct resource         *cfg_res;
> +     struct resource         *bus_range;
> +     void __iomem            *cfg_base;
> +     const struct cdns_pcie_rc_data  *data;
> +};
> +
> +static void __iomem *

Please do not split lines like this, storage class and return type
should be in the same line as the name, move parameter(s) to a new
line.

static void __iomem *cdns_pci_map_bus(struct pci_bus *bus, unsigned int devfn,
                                      int where)

> +cdns_pci_map_bus(struct pci_bus *bus, unsigned int devfn, int where)
> +{
> +     struct pci_host_bridge *bridge = pci_find_host_bridge(bus);
> +     struct cdns_pcie_rc *rc = pci_host_bridge_priv(bridge);
> +     struct cdns_pcie *pcie = &rc->pcie;
> +     unsigned int busn = bus->number;
> +     u32 addr0, desc0;
> +
> +     if (busn < rc->bus_range->start || busn > rc->bus_range->end)
> +             return NULL;

It does not hurt but I wonder whether you really need this check.

> +     if (busn == rc->bus_range->start) {
> +             if (devfn)

I suspect I know why you need this check but I ask you to explain it
anyway if you do not mind please.

> +                     return NULL;
> +
> +             return pcie->reg_base + (where & 0xfff);
> +     }
> +
> +     /* Update Output registers for AXI region 0. */
> +     addr0 = CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_NBITS(12) |

Ok, so for every config access you reprogram addr0 to reflect the
correct bus|devfn ID in the PCI bus TLP corresponding to an ECAM address
in CPU physical address space, is my understanding correct ?

> +             CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_DEVFN(devfn) |
> +             CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_BUS(busn);
> +     cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_PCI_ADDR0(0), addr0);
> +
> +     /* Configuration Type 0 or Type 1 access. */
> +     desc0 = CDNS_PCIE_AT_OB_REGION_DESC0_HARDCODED_RID |
> +             CDNS_PCIE_AT_OB_REGION_DESC0_DEVFN(0);
> +     /*
> +      * The bus number was already set once for all in desc1 by
> +      * cdns_pcie_host_init_address_translation().
> +      */
> +     if (busn == rc->bus_range->start + 1)
> +             desc0 |= CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_CONF_TYPE0;
> +     else
> +             desc0 |= CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_CONF_TYPE1;

I would like to ask you why you have to do it here and the root port
does not figure it out by itself, I do not have the datasheet so I am
just asking for my own information.

> +     cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_DESC0(0), desc0);
> +
> +     return rc->cfg_base + (where & 0xfff);
> +}
> +
> +static struct pci_ops cdns_pcie_host_ops = {
> +     .map_bus        = cdns_pci_map_bus,
> +     .read           = pci_generic_config_read,
> +     .write          = pci_generic_config_write,
> +};
> +
> +static const struct cdns_pcie_rc_data cdns_pcie_rc_data = {
> +     .max_regions    = 32,
> +     .vendor_id      = PCI_VENDOR_ID_CDNS,
> +     .device_id      = 0x0200,
> +     .no_bar_nbits   = 32,
> +};

Should (some of) these parameters be retrieved through a DT binding ?

> +static const struct of_device_id cdns_pcie_host_of_match[] = {
> +     { .compatible = "cdns,cdns-pcie-host",
> +       .data = &cdns_pcie_rc_data },
> +
> +     { },
> +};
> +
> +static int cdns_pcie_parse_request_of_pci_ranges(struct device *dev,
> +                                              struct list_head *resources,
> +                                              struct resource **bus_range)
> +{
> +     int err, res_valid = 0;
> +     struct device_node *np = dev->of_node;
> +     resource_size_t iobase;
> +     struct resource_entry *win, *tmp;
> +
> +     err = of_pci_get_host_bridge_resources(np, 0, 0xff, resources, &iobase);
> +     if (err)
> +             return err;
> +
> +     err = devm_request_pci_bus_resources(dev, resources);
> +     if (err)
> +             return err;
> +
> +     resource_list_for_each_entry_safe(win, tmp, resources) {
> +             struct resource *res = win->res;
> +
> +             switch (resource_type(res)) {
> +             case IORESOURCE_IO:
> +                     err = pci_remap_iospace(res, iobase);
> +                     if (err) {
> +                             dev_warn(dev, "error %d: failed to map resource %pR\n",
> +                                      err, res);
> +                             resource_list_destroy_entry(win);
> +                     }
> +                     break;
> +             case IORESOURCE_MEM:
> +                     res_valid |= !(res->flags & IORESOURCE_PREFETCH);
> +                     break;
> +             case IORESOURCE_BUS:
> +                     *bus_range = res;
> +                     break;
> +             }
> +     }
> +
> +     if (res_valid)
> +             return 0;
> +
> +     dev_err(dev, "non-prefetchable memory resource required\n");
> +     return -EINVAL;

Nit, I prefer you swap these two as it is done in pci-aardvark.c:

        if (!res_valid) {
                dev_err(dev, "non-prefetchable memory resource required\n");
                return -EINVAL;
        }

        return 0;

but as per previous replies this function can be factorized in
core PCI code - I would not bother unless you are willing to write
the patch series that does the refactoring yourself :)

> +}
> +
> +static int cdns_pcie_host_init_root_port(struct cdns_pcie_rc *rc)
> +{
> +     const struct cdns_pcie_rc_data *data = rc->data;
> +     struct cdns_pcie *pcie = &rc->pcie;
> +     u8 pbn, sbn, subn;
> +     u32 value, ctrl;
> +
> +     /*
> +      * Set the root complex BAR configuration register:
> +      * - disable both BAR0 and BAR1.
> +      * - enable Prefetchable Memory Base and Limit registers in type 1
> +      *   config space (64 bits).
> +      * - enable IO Base and Limit registers in type 1 config
> +      *   space (32 bits).
> +      */
> +     ctrl = CDNS_PCIE_LM_BAR_CFG_CTRL_DISABLED;
> +     value = CDNS_PCIE_LM_RC_BAR_CFG_BAR0_CTRL(ctrl) |
> +             CDNS_PCIE_LM_RC_BAR_CFG_BAR1_CTRL(ctrl) |
> +             CDNS_PCIE_LM_RC_BAR_CFG_PREFETCH_MEM_ENABLE |
> +             CDNS_PCIE_LM_RC_BAR_CFG_PREFETCH_MEM_64BITS |
> +             CDNS_PCIE_LM_RC_BAR_CFG_IO_ENABLE |
> +             CDNS_PCIE_LM_RC_BAR_CFG_IO_32BITS;
> +     cdns_pcie_writel(pcie, CDNS_PCIE_LM_RC_BAR_CFG, value);
> +
> +     /* Set root port configuration space */
> +     if (data->vendor_id != 0xffff)
> +             cdns_pcie_rp_writew(pcie, PCI_VENDOR_ID, data->vendor_id);
> +     if (data->device_id != 0xffff)
> +             cdns_pcie_rp_writew(pcie, PCI_DEVICE_ID, data->device_id);
> +
> +     cdns_pcie_rp_writeb(pcie, PCI_CLASS_REVISION, 0);
> +     cdns_pcie_rp_writeb(pcie, PCI_CLASS_PROG, 0);
> +     cdns_pcie_rp_writew(pcie, PCI_CLASS_DEVICE, PCI_CLASS_BRIDGE_PCI);
> +
> +     pbn = rc->bus_range->start;
> +     sbn = pbn + 1; /* Single root port. */
> +     subn = rc->bus_range->end;
> +     cdns_pcie_rp_writeb(pcie, PCI_PRIMARY_BUS, pbn);
> +     cdns_pcie_rp_writeb(pcie, PCI_SECONDARY_BUS, sbn);
> +     cdns_pcie_rp_writeb(pcie, PCI_SUBORDINATE_BUS, subn);

Again - I do not have the datasheet for this device therefore I would
kindly ask you how this works; it seems to me that what you are doing
here is done through normal configuration cycles in an ECAM compliant
system to program the RP PRIMARY/SECONDARY/SUBORDINATE bus - I would
like to understand why this code is needed.

> +     return 0;
> +}
> +
> +static int cdns_pcie_host_init_address_translation(struct cdns_pcie_rc *rc)
> +{
> +     struct cdns_pcie *pcie = &rc->pcie;
> +     struct resource *cfg_res = rc->cfg_res;
> +     struct resource *mem_res = pcie->mem_res;
> +     struct resource *bus_range = rc->bus_range;
> +     struct device *dev = rc->dev;
> +     struct device_node *np = dev->of_node;
> +     struct of_pci_range_parser parser;
> +     struct of_pci_range range;
> +     u32 addr0, addr1, desc1;
> +     u64 cpu_addr;
> +     int r, err;
> +
> +     /*
> +      * Reserve region 0 for PCI configure space accesses:
> +      * OB_REGION_PCI_ADDR0 and OB_REGION_DESC0 are updated dynamically by
> +      * cdns_pci_map_bus(), other region registers are set here once for all.
> +      */
> +     addr1 = 0; /* Should be programmed to zero. */
> +     desc1 = CDNS_PCIE_AT_OB_REGION_DESC1_BUS(bus_range->start);
> +     cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_PCI_ADDR1(0), addr1);
> +     cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_DESC1(0), desc1);
> +
> +     cpu_addr = cfg_res->start - mem_res->start;
> +     addr0 = CDNS_PCIE_AT_OB_REGION_CPU_ADDR0_NBITS(12) |
> +             (lower_32_bits(cpu_addr) & GENMASK(31, 8));
> +     addr1 = upper_32_bits(cpu_addr);
> +     cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_CPU_ADDR0(0), addr0);
> +     cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_CPU_ADDR1(0), addr1);
> +
> +     err = of_pci_range_parser_init(&parser, np);
> +     if (err)
> +             return err;
> +
> +     r = 1;
> +     for_each_of_pci_range(&parser, &range) {
> +             bool is_io;
> +
> +             if (r >= rc->data->max_regions)
> +                     break;
> +
> +             if ((range.flags & IORESOURCE_TYPE_BITS) == IORESOURCE_MEM)
> +                     is_io = false;
> +             else if ((range.flags & IORESOURCE_TYPE_BITS) == IORESOURCE_IO)
> +                     is_io = true;
> +             else
> +                     continue;
> +
> +             cdns_pcie_set_outbound_region(pcie, r, is_io,
> +                                           range.cpu_addr,
> +                                           range.pci_addr,
> +                                           range.size);
> +             r++;
> +     }
> +
> +     /*
> +      * Set Root Port no BAR match Inbound Translation registers:
> +      * needed for MSI.

And DMA :) if I understand what this is doing correctly, ie setting
the root complex decoding for incoming memory traffic.

> +      * Root Port BAR0 and BAR1 are disabled, hence no need to set their
> +      * inbound translation registers.
> +      */
> +     addr0 = CDNS_PCIE_AT_IB_RP_BAR_ADDR0_NBITS(rc->data->no_bar_nbits);
> +     addr1 = 0;
> +     cdns_pcie_writel(pcie, CDNS_PCIE_AT_IB_RP_BAR_ADDR0(RP_NO_BAR), addr0);
> +     cdns_pcie_writel(pcie, CDNS_PCIE_AT_IB_RP_BAR_ADDR1(RP_NO_BAR), addr1);
> +
> +     return 0;
> +}
> +
> +static int cdns_pcie_host_init(struct device *dev,
> +                            struct list_head *resources,
> +                            struct cdns_pcie_rc *rc)
> +{
> +     struct resource *bus_range = NULL;
> +     int err;
> +
> +     /* Parse our PCI ranges and request their resources */
> +     err = cdns_pcie_parse_request_of_pci_ranges(dev, resources, &bus_range);
> +     if (err)
> +             goto err_out;

I think that the err_out path should be part of:

cdns_pcie_parse_request_of_pci_ranges()

implementation and here you would just return.

> +
> +     if (bus_range->start > bus_range->end) {
> +             err = -EINVAL;
> +             goto err_out;
> +     }

Add a space here; this check seems useless to me anyway.

> +     rc->bus_range = bus_range;
> +     rc->pcie.bus = bus_range->start;
> +
> +     err = cdns_pcie_host_init_root_port(rc);
> +     if (err)
> +             goto err_out;
> +
> +     err = cdns_pcie_host_init_address_translation(rc);
> +     if (err)
> +             goto err_out;
> +
> +     return 0;
> +
> + err_out:
> +     pci_free_resource_list(resources);

See above.

> +     return err;
> +}
> +
> +static int cdns_pcie_host_probe(struct platform_device *pdev)
> +{
> +     const struct of_device_id *of_id;
> +     const char *type;
> +     struct device *dev = &pdev->dev;
> +     struct device_node *np = dev->of_node;
> +     struct pci_bus *bus, *child;
> +     struct pci_host_bridge *bridge;
> +     struct list_head resources;
> +     struct cdns_pcie_rc *rc;
> +     struct cdns_pcie *pcie;
> +     struct resource *res;
> +     int ret;
> +
> +     bridge = devm_pci_alloc_host_bridge(dev, sizeof(*rc));
> +     if (!bridge)
> +             return -ENOMEM;
> +
> +     rc = pci_host_bridge_priv(bridge);
> +     rc->dev = dev;
> +     platform_set_drvdata(pdev, rc);

I do not think it is needed.

> +     pcie = &rc->pcie;
> +     pcie->is_rc = true;
> +
> +     of_id = of_match_node(cdns_pcie_host_of_match, np);
> +     rc->data = (const struct cdns_pcie_rc_data *)of_id->data;
> +
> +     type = of_get_property(np, "device_type", NULL);
> +     if (!type || strcmp(type, "pci")) {
> +             dev_err(dev, "invalid \"device_type\" %s\n", type);
> +             return -EINVAL;
> +     }
> +
> +     res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "reg");
> +     pcie->reg_base = devm_ioremap_resource(dev, res);
> +     if (IS_ERR(pcie->reg_base)) {
> +             dev_err(dev, "missing \"reg\"\n");
> +             return PTR_ERR(pcie->reg_base);
> +     }
> +
> +     res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "cfg");
> +     rc->cfg_base = devm_ioremap_resource(dev, res);

devm_pci_remap_cfg_resource() please.

> +     if (IS_ERR(rc->cfg_base)) {
> +             dev_err(dev, "missing \"cfg\"\n");
> +             return PTR_ERR(rc->cfg_base);
> +     }
> +     rc->cfg_res = res;
> +
> +     res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "mem");
> +     if (!res) {
> +             dev_err(dev, "missing \"mem\"\n");
> +             return -EINVAL;
> +     }
> +     pcie->mem_res = res;
> +
> +     pm_runtime_enable(dev);
> +     ret = pm_runtime_get_sync(dev);
> +     if (ret < 0) {
> +             dev_err(dev, "pm_runtime_get_sync() failed\n");
> +             goto err_get_sync;
> +     }
> +
> +     INIT_LIST_HEAD(&resources);
> +     ret = cdns_pcie_host_init(dev, &resources, rc);
> +     if (ret)
> +             goto err_init;
> +
> +     list_splice_init(&resources, &bridge->windows);
> +     bridge->dev.parent = dev;
> +     bridge->busnr = pcie->bus;
> +     bridge->ops = &cdns_pcie_host_ops;
> +     bridge->map_irq = of_irq_parse_and_map_pci;
> +     bridge->swizzle_irq = pci_common_swizzle;
> +
> +     ret = pci_scan_root_bus_bridge(bridge);
> +     if (ret < 0) {
> +             dev_err(dev, "Scanning root bridge failed");
> +             goto err_init;
> +     }
> +
> +     bus = bridge->bus;
> +     pci_bus_size_bridges(bus);
> +     pci_bus_assign_resources(bus);
> +
> +     list_for_each_entry(child, &bus->children, node)
> +             pcie_bus_configure_settings(child);
> +
> +     pci_bus_add_devices(bus);
> +
> +     return 0;
> +
> + err_init:
> +     pm_runtime_put_sync(dev);
> +
> + err_get_sync:
> +     pm_runtime_disable(dev);
> +
> +     return ret;
> +}
> +
> +static struct platform_driver cdns_pcie_host_driver = {
> +     .driver = {
> +             .name = "cdns-pcie-host",
> +             .of_match_table = cdns_pcie_host_of_match,
> +     },
> +     .probe = cdns_pcie_host_probe,
> +};
> +builtin_platform_driver(cdns_pcie_host_driver);
> diff --git a/drivers/pci/cadence/pcie-cadence.c b/drivers/pci/cadence/pcie-cadence.c
> new file mode 100644
> index 000000000000..5c10879d5e96
> --- /dev/null
> +++ b/drivers/pci/cadence/pcie-cadence.c
> @@ -0,0 +1,110 @@
> +/*
> + * Cadence PCIe controller driver.
> + *
> + * Copyright (c) 2017 Cadence
> + *
> + * Author: Cyrille Pitchen <cyrille.pitchen@free-electrons.com>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program.  If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#include <linux/kernel.h>
> +
> +#include "pcie-cadence.h"
> +
> +void cdns_pcie_set_outbound_region(struct cdns_pcie *pcie, u32 r, bool is_io,
> +                                u64 cpu_addr, u64 pci_addr, size_t size)
> +{
> +     /*
> +      * roundup_pow_of_two() returns an unsigned long, which is not suited
> +      * for 64bit values.
> +      */
> +     u64 sz = 1ULL << fls64(size - 1);
> +     int nbits = ilog2(sz);
> +     u32 addr0, addr1, desc0, desc1;
> +
> +     if (nbits < 8)
> +             nbits = 8;
> +
> +     /* Set the PCI address */
> +     addr0 = CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_NBITS(nbits) |
> +             (lower_32_bits(pci_addr) & GENMASK(31, 8));
> +     addr1 = upper_32_bits(pci_addr);
> +
> +     cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_PCI_ADDR0(r), addr0);
> +     cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_PCI_ADDR1(r), addr1);
> +
> +     /* Set the PCIe header descriptor */
> +     if (is_io)
> +             desc0 = CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_IO;
> +     else
> +             desc0 = CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_MEM;
> +     desc1 = 0;
> +
> +     if (pcie->is_rc) {
> +             desc0 |= CDNS_PCIE_AT_OB_REGION_DESC0_HARDCODED_RID |
> +                      CDNS_PCIE_AT_OB_REGION_DESC0_DEVFN(0);
> +             desc1 |= CDNS_PCIE_AT_OB_REGION_DESC1_BUS(pcie->bus);
> +     }
> +
> +     cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_DESC0(r), desc0);
> +     cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_DESC1(r), desc1);
> +
> +     /* Set the CPU address */
> +     cpu_addr -= pcie->mem_res->start;
> +     addr0 = CDNS_PCIE_AT_OB_REGION_CPU_ADDR0_NBITS(nbits) |
> +             (lower_32_bits(cpu_addr) & GENMASK(31, 8));
> +     addr1 = upper_32_bits(cpu_addr);
> +
> +     cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_CPU_ADDR0(r), addr0);
> +     cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_CPU_ADDR1(r), addr1);
> +}
> +
> +void cdns_pcie_set_outbound_region_for_normal_msg(struct cdns_pcie *pcie, u32 r,
> +                                               u64 cpu_addr)

Not used in this patch, you should split it out.

> +{
> +     u32 addr0, addr1, desc0, desc1;
> +
> +     desc0 = CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_NORMAL_MSG;
> +     desc1 = 0;
> +     if (pcie->is_rc) {
> +             desc0 |= CDNS_PCIE_AT_OB_REGION_DESC0_HARDCODED_RID |
> +                      CDNS_PCIE_AT_OB_REGION_DESC0_DEVFN(0);
> +             desc1 |= CDNS_PCIE_AT_OB_REGION_DESC1_BUS(pcie->bus);
> +     }
> +
> +     /* Set the CPU address */
> +     cpu_addr -= pcie->mem_res->start;
> +     addr0 = CDNS_PCIE_AT_OB_REGION_CPU_ADDR0_NBITS(17) |
> +             (lower_32_bits(cpu_addr) & GENMASK(31, 8));
> +     addr1 = upper_32_bits(cpu_addr);
> +
> +     cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_PCI_ADDR0(r), 0);
> +     cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_PCI_ADDR1(r), 0);
> +     cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_DESC0(r), desc0);
> +     cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_DESC1(r), desc1);
> +     cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_CPU_ADDR0(r), addr0);
> +     cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_CPU_ADDR1(r), addr1);
> +}
> +
> +void cdns_pcie_reset_outbound_region(struct cdns_pcie *pcie, u32 r)
> +{
> +     cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_PCI_ADDR0(r), 0);
> +     cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_PCI_ADDR1(r), 0);
> +
> +     cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_DESC0(r), 0);
> +     cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_DESC1(r), 0);
> +
> +     cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_CPU_ADDR0(r), 0);
> +     cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_CPU_ADDR1(r), 0);
> +}
> diff --git a/drivers/pci/cadence/pcie-cadence.h b/drivers/pci/cadence/pcie-cadence.h
> new file mode 100644
> index 000000000000..195e23b7d4fe
> --- /dev/null
> +++ b/drivers/pci/cadence/pcie-cadence.h
> @@ -0,0 +1,325 @@
> +/*
> + * Cadence PCIe controller driver.
> + *
> + * Copyright (c) 2017 Cadence
> + *
> + * Author: Cyrille Pitchen <cyrille.pitchen@free-electrons.com>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program.  If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#ifndef _PCIE_CADENCE_H
> +#define _PCIE_CADENCE_H
> +
> +#include <linux/kernel.h>
> +#include <linux/pci.h>
> +
> +/*
> + * Local Management Registers
> + */
> +#define CDNS_PCIE_LM_BASE    0x00100000
> +
> +/* Vendor ID Register */
> +#define CDNS_PCIE_LM_ID              (CDNS_PCIE_LM_BASE + 0x0044)
> +#define  CDNS_PCIE_LM_ID_VENDOR_MASK GENMASK(15, 0)
> +#define  CDNS_PCIE_LM_ID_VENDOR_SHIFT        0
> +#define  CDNS_PCIE_LM_ID_VENDOR(vid) \
> +     (((vid) << CDNS_PCIE_LM_ID_VENDOR_SHIFT) & CDNS_PCIE_LM_ID_VENDOR_MASK)
> +#define  CDNS_PCIE_LM_ID_SUBSYS_MASK GENMASK(31, 16)
> +#define  CDNS_PCIE_LM_ID_SUBSYS_SHIFT        16
> +#define  CDNS_PCIE_LM_ID_SUBSYS(sub) \
> +     (((sub) << CDNS_PCIE_LM_ID_SUBSYS_SHIFT) & CDNS_PCIE_LM_ID_SUBSYS_MASK)
> +
> +/* Root Port Requestor ID Register */
> +#define CDNS_PCIE_LM_RP_RID  (CDNS_PCIE_LM_BASE + 0x0228)
> +#define  CDNS_PCIE_LM_RP_RID_MASK    GENMASK(15, 0)
> +#define  CDNS_PCIE_LM_RP_RID_SHIFT   0
> +#define  CDNS_PCIE_LM_RP_RID_(rid) \
> +     (((rid) << CDNS_PCIE_LM_RP_RID_SHIFT) & CDNS_PCIE_LM_RP_RID_MASK)
> +
> +/* Endpoint Bus and Device Number Register */
> +#define CDNS_PCIE_LM_EP_ID   (CDNS_PCIE_LM_BASE + 0x022c)
> +#define  CDNS_PCIE_LM_EP_ID_DEV_MASK GENMASK(4, 0)
> +#define  CDNS_PCIE_LM_EP_ID_DEV_SHIFT        0
> +#define  CDNS_PCIE_LM_EP_ID_BUS_MASK GENMASK(15, 8)
> +#define  CDNS_PCIE_LM_EP_ID_BUS_SHIFT        8
> +
> +/* Endpoint Function f BAR b Configuration Registers */
> +#define CDNS_PCIE_LM_EP_FUNC_BAR_CFG0(fn) \
> +     (CDNS_PCIE_LM_BASE + 0x0240 + (fn) * 0x0008)
> +#define CDNS_PCIE_LM_EP_FUNC_BAR_CFG1(fn) \
> +     (CDNS_PCIE_LM_BASE + 0x0244 + (fn) * 0x0008)
> +#define  CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_APERTURE_MASK(b) \
> +     (GENMASK(4, 0) << ((b) * 8))
> +#define  CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_APERTURE(b, a) \
> +     (((a) << ((b) * 8)) & CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_APERTURE_MASK(b))
> +#define  CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(b) \
> +     (GENMASK(7, 5) << ((b) * 8))
> +#define  CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_CTRL(b, c) \
> +     (((c) << ((b) * 8 + 5)) & CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(b))
> +
> +/* Endpoint Function Configuration Register */
> +#define CDNS_PCIE_LM_EP_FUNC_CFG     (CDNS_PCIE_LM_BASE + 0x02c0)

All endpoint defines should be moved to the patch that needs them.

> +
> +/* Root Complex BAR Configuration Register */
> +#define CDNS_PCIE_LM_RC_BAR_CFG      (CDNS_PCIE_LM_BASE + 0x0300)
> +#define  CDNS_PCIE_LM_RC_BAR_CFG_BAR0_APERTURE_MASK  GENMASK(5, 0)
> +#define  CDNS_PCIE_LM_RC_BAR_CFG_BAR0_APERTURE(a) \
> +     (((a) << 0) & CDNS_PCIE_LM_RC_BAR_CFG_BAR0_APERTURE_MASK)
> +#define  CDNS_PCIE_LM_RC_BAR_CFG_BAR0_CTRL_MASK              GENMASK(8, 6)
> +#define  CDNS_PCIE_LM_RC_BAR_CFG_BAR0_CTRL(c) \
> +     (((c) << 6) & CDNS_PCIE_LM_RC_BAR_CFG_BAR0_CTRL_MASK)
> +#define  CDNS_PCIE_LM_RC_BAR_CFG_BAR1_APERTURE_MASK  GENMASK(13, 9)
> +#define  CDNS_PCIE_LM_RC_BAR_CFG_BAR1_APERTURE(a) \
> +     (((a) << 9) & CDNS_PCIE_LM_RC_BAR_CFG_BAR1_APERTURE_MASK)
> +#define  CDNS_PCIE_LM_RC_BAR_CFG_BAR1_CTRL_MASK              GENMASK(16, 14)
> +#define  CDNS_PCIE_LM_RC_BAR_CFG_BAR1_CTRL(c) \
> +     (((c) << 14) & CDNS_PCIE_LM_RC_BAR_CFG_BAR1_CTRL_MASK)
> +#define  CDNS_PCIE_LM_RC_BAR_CFG_PREFETCH_MEM_ENABLE BIT(17)
> +#define  CDNS_PCIE_LM_RC_BAR_CFG_PREFETCH_MEM_32BITS 0
> +#define  CDNS_PCIE_LM_RC_BAR_CFG_PREFETCH_MEM_64BITS BIT(18)
> +#define  CDNS_PCIE_LM_RC_BAR_CFG_IO_ENABLE           BIT(19)
> +#define  CDNS_PCIE_LM_RC_BAR_CFG_IO_16BITS           0
> +#define  CDNS_PCIE_LM_RC_BAR_CFG_IO_32BITS           BIT(20)
> +#define  CDNS_PCIE_LM_RC_BAR_CFG_CHECK_ENABLE                BIT(31)
> +
> +/* BAR control values applicable to both Endpoint Function and Root Complex */
> +#define  CDNS_PCIE_LM_BAR_CFG_CTRL_DISABLED          0x0
> +#define  CDNS_PCIE_LM_BAR_CFG_CTRL_IO_32BITS         0x1
> +#define  CDNS_PCIE_LM_BAR_CFG_CTRL_MEM_32BITS                0x4
> +#define  CDNS_PCIE_LM_BAR_CFG_CTRL_PREFETCH_MEM_32BITS       0x5
> +#define  CDNS_PCIE_LM_BAR_CFG_CTRL_MEM_64BITS                0x6
> +#define  CDNS_PCIE_LM_BAR_CFG_CTRL_PREFETCH_MEM_64BITS       0x7
> +
> +
> +/*
> + * Endpoint Function Registers (PCI configuration space for endpoint functions)
> + */
> +#define CDNS_PCIE_EP_FUNC_BASE(fn)   (((fn) << 12) & GENMASK(19, 12))
> +
> +#define CDNS_PCIE_EP_FUNC_MSI_CAP_OFFSET     0x90
> +
> +/*
> + * Root Port Registers (PCI configuration space for the root port function)
> + */
> +#define CDNS_PCIE_RP_BASE    0x00200000
> +
> +
> +/*
> + * Address Translation Registers
> + */
> +#define CDNS_PCIE_AT_BASE    0x00400000
> +
> +/* Region r Outbound AXI to PCIe Address Translation Register 0 */
> +#define CDNS_PCIE_AT_OB_REGION_PCI_ADDR0(r) \
> +     (CDNS_PCIE_AT_BASE + 0x0000 + ((r) & 0x1f) * 0x0020)
> +#define  CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_NBITS_MASK GENMASK(5, 0)
> +#define  CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_NBITS(nbits) \
> +     (((nbits) - 1) & CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_NBITS_MASK)
> +#define  CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_DEVFN_MASK GENMASK(19, 12)
> +#define  CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_DEVFN(devfn) \
> +     (((devfn) << 12) & CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_DEVFN_MASK)
> +#define  CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_BUS_MASK   GENMASK(27, 20)
> +#define  CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_BUS(bus) \
> +     (((bus) << 20) & CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_BUS_MASK)
> +
> +/* Region r Outbound AXI to PCIe Address Translation Register 1 */
> +#define CDNS_PCIE_AT_OB_REGION_PCI_ADDR1(r) \
> +     (CDNS_PCIE_AT_BASE + 0x0004 + ((r) & 0x1f) * 0x0020)
> +
> +/* Region r Outbound PCIe Descriptor Register 0 */
> +#define CDNS_PCIE_AT_OB_REGION_DESC0(r) \
> +     (CDNS_PCIE_AT_BASE + 0x0008 + ((r) & 0x1f) * 0x0020)
> +#define  CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_MASK              GENMASK(3, 0)
> +#define  CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_MEM               0x2
> +#define  CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_IO                0x6
> +#define  CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_CONF_TYPE0        0xa
> +#define  CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_CONF_TYPE1        0xb
> +#define  CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_NORMAL_MSG        0xc
> +#define  CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_VENDOR_MSG        0xd
> +/* Bit 23 MUST be set in RC mode. */
> +#define  CDNS_PCIE_AT_OB_REGION_DESC0_HARDCODED_RID  BIT(23)
> +#define  CDNS_PCIE_AT_OB_REGION_DESC0_DEVFN_MASK     GENMASK(31, 24)
> +#define  CDNS_PCIE_AT_OB_REGION_DESC0_DEVFN(devfn) \
> +     (((devfn) << 24) & CDNS_PCIE_AT_OB_REGION_DESC0_DEVFN_MASK)
> +
> +/* Region r Outbound PCIe Descriptor Register 1 */
> +#define CDNS_PCIE_AT_OB_REGION_DESC1(r)      \
> +     (CDNS_PCIE_AT_BASE + 0x000c + ((r) & 0x1f) * 0x0020)
> +#define  CDNS_PCIE_AT_OB_REGION_DESC1_BUS_MASK       GENMASK(7, 0)
> +#define  CDNS_PCIE_AT_OB_REGION_DESC1_BUS(bus) \
> +     ((bus) & CDNS_PCIE_AT_OB_REGION_DESC1_BUS_MASK)
> +
> +/* Region r AXI Region Base Address Register 0 */
> +#define CDNS_PCIE_AT_OB_REGION_CPU_ADDR0(r) \
> +     (CDNS_PCIE_AT_BASE + 0x0018 + ((r) & 0x1f) * 0x0020)
> +#define  CDNS_PCIE_AT_OB_REGION_CPU_ADDR0_NBITS_MASK GENMASK(5, 0)
> +#define  CDNS_PCIE_AT_OB_REGION_CPU_ADDR0_NBITS(nbits) \
> +     (((nbits) - 1) & CDNS_PCIE_AT_OB_REGION_CPU_ADDR0_NBITS_MASK)
> +
> +/* Region r AXI Region Base Address Register 1 */
> +#define CDNS_PCIE_AT_OB_REGION_CPU_ADDR1(r) \
> +     (CDNS_PCIE_AT_BASE + 0x001c + ((r) & 0x1f) * 0x0020)
> +
> +/* Root Port BAR Inbound PCIe to AXI Address Translation Register */
> +#define CDNS_PCIE_AT_IB_RP_BAR_ADDR0(bar) \
> +     (CDNS_PCIE_AT_BASE + 0x0800 + (bar) * 0x0008)
> +#define  CDNS_PCIE_AT_IB_RP_BAR_ADDR0_NBITS_MASK     GENMASK(5, 0)
> +#define  CDNS_PCIE_AT_IB_RP_BAR_ADDR0_NBITS(nbits) \
> +     (((nbits) - 1) & CDNS_PCIE_AT_IB_RP_BAR_ADDR0_NBITS_MASK)
> +#define CDNS_PCIE_AT_IB_RP_BAR_ADDR1(bar) \
> +     (CDNS_PCIE_AT_BASE + 0x0804 + (bar) * 0x0008)
> +
> +enum cdns_pcie_rp_bar {
> +     RP_BAR0,
> +     RP_BAR1,
> +     RP_NO_BAR
> +};
> +
> +/* Endpoint Function BAR Inbound PCIe to AXI Address Translation Register */
> +#define CDNS_PCIE_AT_IB_EP_FUNC_BAR_ADDR0(fn, bar) \
> +     (CDNS_PCIE_AT_BASE + 0x0840 + (fn) * 0x0040 + (bar) * 0x0008)
> +#define CDNS_PCIE_AT_IB_EP_FUNC_BAR_ADDR1(fn, bar) \
> +     (CDNS_PCIE_AT_BASE + 0x0844 + (fn) * 0x0040 + (bar) * 0x0008)
> +
> +/* Normal/Vendor specific message access: offset inside some outbound region */
> +#define CDNS_PCIE_NORMAL_MSG_ROUTING_MASK    GENMASK(7, 5)
> +#define CDNS_PCIE_NORMAL_MSG_ROUTING(route) \
> +     (((route) << 5) & CDNS_PCIE_NORMAL_MSG_ROUTING_MASK)
> +#define CDNS_PCIE_NORMAL_MSG_CODE_MASK               GENMASK(15, 8)
> +#define CDNS_PCIE_NORMAL_MSG_CODE(code) \
> +     (((code) << 8) & CDNS_PCIE_NORMAL_MSG_CODE_MASK)
> +#define CDNS_PCIE_MSG_NO_DATA                        BIT(16)
> +
> +enum cdns_pcie_msg_code {
> +     MSG_CODE_ASSERT_INTA    = 0x20,
> +     MSG_CODE_ASSERT_INTB    = 0x21,
> +     MSG_CODE_ASSERT_INTC    = 0x22,
> +     MSG_CODE_ASSERT_INTD    = 0x23,
> +     MSG_CODE_DEASSERT_INTA  = 0x24,
> +     MSG_CODE_DEASSERT_INTB  = 0x25,
> +     MSG_CODE_DEASSERT_INTC  = 0x26,
> +     MSG_CODE_DEASSERT_INTD  = 0x27,
> +};
> +
> +enum cdns_pcie_msg_routing {
> +     /* Route to Root Complex */
> +     MSG_ROUTING_TO_RC,
> +
> +     /* Use Address Routing */
> +     MSG_ROUTING_BY_ADDR,
> +
> +     /* Use ID Routing */
> +     MSG_ROUTING_BY_ID,
> +
> +     /* Route as Broadcast Message from Root Complex */
> +     MSG_ROUTING_BCAST,
> +
> +     /* Local message; terminate at receiver (INTx messages) */
> +     MSG_ROUTING_LOCAL,
> +
> +     /* Gather & route to Root Complex (PME_TO_Ack message) */
> +     MSG_ROUTING_GATHER,
> +};
> +
> +/**
> + * struct cdns_pcie - private data for Cadence PCIe controller drivers
> + * @reg_base: IO mapped register base
> + * @mem_res: start/end offsets in the physical system memory to map PCI accesses
> + * @is_rc: tell whether the PCIe controller mode is Root Complex or Endpoint.
> + * @bus: In Root Complex mode, the bus number
> + */
> +struct cdns_pcie {
> +     void __iomem            *reg_base;
> +     struct resource         *mem_res;
> +     bool                    is_rc;
> +     u8                      bus;
> +};
> +
> +/* Register access */
> +static inline void cdns_pcie_writeb(struct cdns_pcie *pcie, u32 reg, u8 value)
> +{
> +     writeb(value, pcie->reg_base + reg);
> +}
> +
> +static inline void cdns_pcie_writew(struct cdns_pcie *pcie, u32 reg, u16 value)
> +{
> +     writew(value, pcie->reg_base + reg);
> +}
> +
> +static inline void cdns_pcie_writel(struct cdns_pcie *pcie, u32 reg, u32 value)
> +{
> +     writel(value, pcie->reg_base + reg);
> +}
> +
> +static inline u32 cdns_pcie_readl(struct cdns_pcie *pcie, u32 reg)
> +{
> +     return readl(pcie->reg_base + reg);
> +}
> +
> +/* Root Port register access */
> +static inline void cdns_pcie_rp_writeb(struct cdns_pcie *pcie,
> +                                    u32 reg, u8 value)
> +{
> +     writeb(value, pcie->reg_base + CDNS_PCIE_RP_BASE + reg);
> +}
> +
> +static inline void cdns_pcie_rp_writew(struct cdns_pcie *pcie,
> +                                    u32 reg, u16 value)
> +{
> +     writew(value, pcie->reg_base + CDNS_PCIE_RP_BASE + reg);
> +}
> +
> +/* Endpoint Function register access */
> +static inline void cdns_pcie_ep_fn_writeb(struct cdns_pcie *pcie, u8 fn,
> +                                       u32 reg, u8 value)
> +{
> +     writeb(value, pcie->reg_base + CDNS_PCIE_EP_FUNC_BASE(fn) + reg);
> +}
> +
> +static inline void cdns_pcie_ep_fn_writew(struct cdns_pcie *pcie, u8 fn,
> +                                       u32 reg, u16 value)
> +{
> +     writew(value, pcie->reg_base + CDNS_PCIE_EP_FUNC_BASE(fn) + reg);
> +}
> +
> +static inline void cdns_pcie_ep_fn_writel(struct cdns_pcie *pcie, u8 fn,
> +                                       u32 reg, u16 value)
> +{
> +     writel(value, pcie->reg_base + CDNS_PCIE_EP_FUNC_BASE(fn) + reg);
> +}
> +
> +static inline u8 cdns_pcie_ep_fn_readb(struct cdns_pcie *pcie, u8 fn, u32 reg)
> +{
> +     return readb(pcie->reg_base + CDNS_PCIE_EP_FUNC_BASE(fn) + reg);
> +}
> +
> +static inline u16 cdns_pcie_ep_fn_readw(struct cdns_pcie *pcie, u8 fn, u32 reg)
> +{
> +     return readw(pcie->reg_base + CDNS_PCIE_EP_FUNC_BASE(fn) + reg);
> +}
> +
> +static inline u32 cdns_pcie_ep_fn_readl(struct cdns_pcie *pcie, u8 fn, u32 reg)
> +{
> +     return readl(pcie->reg_base + CDNS_PCIE_EP_FUNC_BASE(fn) + reg);
> +}

Same comments for all endpoint related functions and defines above.

Thanks,
Lorenzo

> +void cdns_pcie_set_outbound_region(struct cdns_pcie *pcie, u32 r, bool is_io,
> +                                u64 cpu_addr, u64 pci_addr, size_t size);
> +
> +void cdns_pcie_set_outbound_region_for_normal_msg(struct cdns_pcie *pcie, u32 r,
> +                                               u64 cpu_addr);
> +
> +void cdns_pcie_reset_outbound_region(struct cdns_pcie *pcie, u32 r);
> +
> +#endif /* _PCIE_CADENCE_H */
> --
> 2.11.0
>
IMPORTANT NOTICE: The contents of this email and any attachments are confidential and may also be privileged. If you are not the intended recipient, please notify the sender immediately and do not disclose the contents to any other person, use it for any purpose, or store or copy the information in any medium. Thank you.

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 3/5] PCI: cadence: Add host driver for Cadence PCIe controller
  2017-11-23 15:01 ` [PATCH 3/5] PCI: cadence: Add host driver for Cadence PCIe controller Cyrille Pitchen
  2017-11-28 20:41   ` Bjorn Helgaas
  2017-11-29 17:34   ` Lorenzo Pieralisi
@ 2017-11-29 18:25   ` Lorenzo Pieralisi
  2017-11-30 10:06     ` Lorenzo Pieralisi
  2 siblings, 1 reply; 39+ messages in thread
From: Lorenzo Pieralisi @ 2017-11-29 18:25 UTC (permalink / raw)
  To: Cyrille Pitchen
  Cc: bhelgaas, kishon, linux-pci, adouglas, stelford, dgary, kgopi,
	eandrews, thomas.petazzoni, sureshp, nsekhar, linux-kernel, robh,
	devicetree

[w/o unintended disclaimer]

On Thu, Nov 23, 2017 at 04:01:48PM +0100, Cyrille Pitchen wrote:
> This patch adds support to the Cadence PCIe controller in host mode.

Bjorn already commented on this, it would be good to add some
of the cover letter details in this log.

> Signed-off-by: Cyrille Pitchen <cyrille.pitchen@free-electrons.com>
> ---
>  drivers/Makefile                        |   1 +
>  drivers/pci/Kconfig                     |   1 +
>  drivers/pci/cadence/Kconfig             |  24 ++
>  drivers/pci/cadence/Makefile            |   2 +
>  drivers/pci/cadence/pcie-cadence-host.c | 425 ++++++++++++++++++++++++++++++++

You should also update the MAINTAINERS file.

>  drivers/pci/cadence/pcie-cadence.c      | 110 +++++++++
>  drivers/pci/cadence/pcie-cadence.h      | 325 ++++++++++++++++++++++++
>  7 files changed, 888 insertions(+)
>  create mode 100644 drivers/pci/cadence/Kconfig
>  create mode 100644 drivers/pci/cadence/Makefile
>  create mode 100644 drivers/pci/cadence/pcie-cadence-host.c
>  create mode 100644 drivers/pci/cadence/pcie-cadence.c
>  create mode 100644 drivers/pci/cadence/pcie-cadence.h
> 
> diff --git a/drivers/Makefile b/drivers/Makefile
> index 1d034b680431..27bdd98784d9 100644
> --- a/drivers/Makefile
> +++ b/drivers/Makefile
> @@ -18,6 +18,7 @@ obj-y				+= pwm/
>  
>  obj-$(CONFIG_PCI)		+= pci/
>  obj-$(CONFIG_PCI_ENDPOINT)	+= pci/endpoint/
> +obj-$(CONFIG_PCI_CADENCE)	+= pci/cadence/

Already commented on the cover letter.

>  # PCI dwc controller drivers
>  obj-y				+= pci/dwc/
>  
> diff --git a/drivers/pci/Kconfig b/drivers/pci/Kconfig
> index 90944667ccea..2471b2e36b8b 100644
> --- a/drivers/pci/Kconfig
> +++ b/drivers/pci/Kconfig
> @@ -144,6 +144,7 @@ config PCI_HYPERV
>            PCI devices from a PCI backend to support PCI driver domains.
>  
>  source "drivers/pci/hotplug/Kconfig"
> +source "drivers/pci/cadence/Kconfig"
>  source "drivers/pci/dwc/Kconfig"
>  source "drivers/pci/host/Kconfig"
>  source "drivers/pci/endpoint/Kconfig"
> diff --git a/drivers/pci/cadence/Kconfig b/drivers/pci/cadence/Kconfig
> new file mode 100644
> index 000000000000..120306cae2aa
> --- /dev/null
> +++ b/drivers/pci/cadence/Kconfig
> @@ -0,0 +1,24 @@
> +menuconfig PCI_CADENCE
> +	bool "Cadence PCI controllers support"
> +	depends on PCI && HAS_IOMEM
> +	help
> +	  Say Y here if you want to support some Cadence PCI controller.
> +
> +	  When in doubt, say N.
> +
> +if PCI_CADENCE
> +
> +config PCIE_CADENCE
> +	bool
> +
> +config PCIE_CADENCE_HOST
> +	bool "Cadence PCIe host controller"
> +	depends on OF
> +	depends on PCI_MSI_IRQ_DOMAIN

I do not see the reason for this dependency in the code.

> +	select PCIE_CADENCE
> +	help
> +	  Say Y here if you want to support the Cadence PCIe controller in host
> +	  mode. This PCIe controller may be embedded into many different vendors
> +	  SoCs.
> +
> +endif # PCI_CADENCE
> diff --git a/drivers/pci/cadence/Makefile b/drivers/pci/cadence/Makefile
> new file mode 100644
> index 000000000000..d57d192d2595
> --- /dev/null
> +++ b/drivers/pci/cadence/Makefile
> @@ -0,0 +1,2 @@
> +obj-$(CONFIG_PCIE_CADENCE) += pcie-cadence.o
> +obj-$(CONFIG_PCIE_CADENCE_HOST) += pcie-cadence-host.o
> diff --git a/drivers/pci/cadence/pcie-cadence-host.c b/drivers/pci/cadence/pcie-cadence-host.c
> new file mode 100644
> index 000000000000..252471e72a93
> --- /dev/null
> +++ b/drivers/pci/cadence/pcie-cadence-host.c
> @@ -0,0 +1,425 @@
> +/*
> + * Cadence PCIe host controller driver.
> + *
> + * Copyright (c) 2017 Cadence
> + *
> + * Author: Cyrille Pitchen <cyrille.pitchen@free-electrons.com>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program.  If not, see <http://www.gnu.org/licenses/>.
> + */
> +#include <linux/kernel.h>
> +#include <linux/of_address.h>
> +#include <linux/of_pci.h>
> +#include <linux/platform_device.h>
> +#include <linux/pm_runtime.h>
> +
> +#include "pcie-cadence.h"
> +
> +/**
> + * struct cdns_pcie_rc_data - hardware specific data
> + * @max_regions: maximum number of regions supported by the hardware
> + * @vendor_id: PCI vendor ID
> + * @device_id: PCI device ID
> + * @no_bar_nbits: Number of bits to keep for inbound (PCIe -> CPU) address
> + *                translation (nbits sets into the "no BAR match" register).
> + */
> +struct cdns_pcie_rc_data {
> +	size_t			max_regions;

Reason for it to be size_t ?

> +	u16			vendor_id;
> +	u16			device_id;
> +	u8			no_bar_nbits;
> +};

I think that this data should come from DT (?) more below.

> +
> +/**
> + * struct cdns_pcie_rc - private data for this PCIe Root Complex driver
> + * @pcie: Cadence PCIe controller
> + * @dev: pointer to PCIe device
> + * @cfg_res: start/end offsets in the physical system memory to map PCI
> + *           configuration space accesses
> + * @bus_range: first/last buses behind the PCIe host controller
> + * @cfg_base: IO mapped window to access the PCI configuration space of a
> + *            single function at a time
> + * @data: pointer to a 'struct cdns_pcie_rc_data'
> + */
> +struct cdns_pcie_rc {
> +	struct cdns_pcie	pcie;
> +	struct device		*dev;
> +	struct resource		*cfg_res;
> +	struct resource		*bus_range;
> +	void __iomem		*cfg_base;
> +	const struct cdns_pcie_rc_data	*data;
> +};
> +
> +static void __iomem *

Please do not split lines like this, storage class and return type
should be in the same line as the name, move parameter(s) to a new
line.

static void __iomem *cdns_pci_map_bus(struct pci_bus *bus, unsigned int devfn,
				      int where)

> +cdns_pci_map_bus(struct pci_bus *bus, unsigned int devfn, int where)
> +{
> +	struct pci_host_bridge *bridge = pci_find_host_bridge(bus);
> +	struct cdns_pcie_rc *rc = pci_host_bridge_priv(bridge);
> +	struct cdns_pcie *pcie = &rc->pcie;
> +	unsigned int busn = bus->number;
> +	u32 addr0, desc0;
> +
> +	if (busn < rc->bus_range->start || busn > rc->bus_range->end)
> +		return NULL;

It does not hurt but I wonder whether you really need this check.

> +	if (busn == rc->bus_range->start) {
> +		if (devfn)

I suspect I know why you need this check but I ask you to explain it
anyway if you do not mind please.

> +			return NULL;
> +
> +		return pcie->reg_base + (where & 0xfff);
> +	}
> +
> +	/* Update Output registers for AXI region 0. */
> +	addr0 = CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_NBITS(12) |

Ok, so for every config access you reprogram addr0 to reflect the
correct bus|devfn ID in the PCI bus TLP corresponding to an ECAM address
in CPU physical address space, is my understanding correct ?

> +		CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_DEVFN(devfn) |
> +		CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_BUS(busn);
> +	cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_PCI_ADDR0(0), addr0);
> +
> +	/* Configuration Type 0 or Type 1 access. */
> +	desc0 = CDNS_PCIE_AT_OB_REGION_DESC0_HARDCODED_RID |
> +		CDNS_PCIE_AT_OB_REGION_DESC0_DEVFN(0);
> +	/*
> +	 * The bus number was already set once for all in desc1 by
> +	 * cdns_pcie_host_init_address_translation().
> +	 */
> +	if (busn == rc->bus_range->start + 1)
> +		desc0 |= CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_CONF_TYPE0;
> +	else
> +		desc0 |= CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_CONF_TYPE1;

I would like to ask you why you have to do it here and the root port
does not figure it out by itself, I do not have the datasheet so I am
just asking for my own information.

> +	cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_DESC0(0), desc0);
> +
> +	return rc->cfg_base + (where & 0xfff);
> +}
> +
> +static struct pci_ops cdns_pcie_host_ops = {
> +	.map_bus	= cdns_pci_map_bus,
> +	.read		= pci_generic_config_read,
> +	.write		= pci_generic_config_write,
> +};
> +
> +static const struct cdns_pcie_rc_data cdns_pcie_rc_data = {
> +	.max_regions	= 32,
> +	.vendor_id	= PCI_VENDOR_ID_CDNS,
> +	.device_id	= 0x0200,
> +	.no_bar_nbits	= 32,
> +};

Should (some of) these parameters be retrieved through a DT binding ?

> +static const struct of_device_id cdns_pcie_host_of_match[] = {
> +	{ .compatible = "cdns,cdns-pcie-host",
> +	  .data = &cdns_pcie_rc_data },
> +
> +	{ },
> +};
> +
> +static int cdns_pcie_parse_request_of_pci_ranges(struct device *dev,
> +						 struct list_head *resources,
> +						 struct resource **bus_range)
> +{
> +	int err, res_valid = 0;
> +	struct device_node *np = dev->of_node;
> +	resource_size_t iobase;
> +	struct resource_entry *win, *tmp;
> +
> +	err = of_pci_get_host_bridge_resources(np, 0, 0xff, resources, &iobase);
> +	if (err)
> +		return err;
> +
> +	err = devm_request_pci_bus_resources(dev, resources);
> +	if (err)
> +		return err;
> +
> +	resource_list_for_each_entry_safe(win, tmp, resources) {
> +		struct resource *res = win->res;
> +
> +		switch (resource_type(res)) {
> +		case IORESOURCE_IO:
> +			err = pci_remap_iospace(res, iobase);
> +			if (err) {
> +				dev_warn(dev, "error %d: failed to map resource %pR\n",
> +					 err, res);
> +				resource_list_destroy_entry(win);
> +			}
> +			break;
> +		case IORESOURCE_MEM:
> +			res_valid |= !(res->flags & IORESOURCE_PREFETCH);
> +			break;
> +		case IORESOURCE_BUS:
> +			*bus_range = res;
> +			break;
> +		}
> +	}
> +
> +	if (res_valid)
> +		return 0;
> +
> +	dev_err(dev, "non-prefetchable memory resource required\n");
> +	return -EINVAL;

Nit, I prefer you swap these two as it is done in pci-aardvark.c:

	if (!res_valid) {
		dev_err(dev, "non-prefetchable memory resource required\n");
		return -EINVAL;
	}

	return 0;

but as per previous replies this function can be factorized in
core PCI code - I would not bother unless you are willing to write
the patch series that does the refactoring yourself :)

> +}
> +
> +static int cdns_pcie_host_init_root_port(struct cdns_pcie_rc *rc)
> +{
> +	const struct cdns_pcie_rc_data *data = rc->data;
> +	struct cdns_pcie *pcie = &rc->pcie;
> +	u8 pbn, sbn, subn;
> +	u32 value, ctrl;
> +
> +	/*
> +	 * Set the root complex BAR configuration register:
> +	 * - disable both BAR0 and BAR1.
> +	 * - enable Prefetchable Memory Base and Limit registers in type 1
> +	 *   config space (64 bits).
> +	 * - enable IO Base and Limit registers in type 1 config
> +	 *   space (32 bits).
> +	 */
> +	ctrl = CDNS_PCIE_LM_BAR_CFG_CTRL_DISABLED;
> +	value = CDNS_PCIE_LM_RC_BAR_CFG_BAR0_CTRL(ctrl) |
> +		CDNS_PCIE_LM_RC_BAR_CFG_BAR1_CTRL(ctrl) |
> +		CDNS_PCIE_LM_RC_BAR_CFG_PREFETCH_MEM_ENABLE |
> +		CDNS_PCIE_LM_RC_BAR_CFG_PREFETCH_MEM_64BITS |
> +		CDNS_PCIE_LM_RC_BAR_CFG_IO_ENABLE |
> +		CDNS_PCIE_LM_RC_BAR_CFG_IO_32BITS;
> +	cdns_pcie_writel(pcie, CDNS_PCIE_LM_RC_BAR_CFG, value);
> +
> +	/* Set root port configuration space */
> +	if (data->vendor_id != 0xffff)
> +		cdns_pcie_rp_writew(pcie, PCI_VENDOR_ID, data->vendor_id);
> +	if (data->device_id != 0xffff)
> +		cdns_pcie_rp_writew(pcie, PCI_DEVICE_ID, data->device_id);
> +
> +	cdns_pcie_rp_writeb(pcie, PCI_CLASS_REVISION, 0);
> +	cdns_pcie_rp_writeb(pcie, PCI_CLASS_PROG, 0);
> +	cdns_pcie_rp_writew(pcie, PCI_CLASS_DEVICE, PCI_CLASS_BRIDGE_PCI);
> +
> +	pbn = rc->bus_range->start;
> +	sbn = pbn + 1; /* Single root port. */
> +	subn = rc->bus_range->end;
> +	cdns_pcie_rp_writeb(pcie, PCI_PRIMARY_BUS, pbn);
> +	cdns_pcie_rp_writeb(pcie, PCI_SECONDARY_BUS, sbn);
> +	cdns_pcie_rp_writeb(pcie, PCI_SUBORDINATE_BUS, subn);

Again - I do not have the datasheet for this device therefore I would
kindly ask you how this works; it seems to me that what you are doing
here is done through normal configuration cycles in an ECAM compliant
system to program the RP PRIMARY/SECONDARY/SUBORDINATE bus - I would
like to understand why this code is needed.

> +	return 0;
> +}
> +
> +static int cdns_pcie_host_init_address_translation(struct cdns_pcie_rc *rc)
> +{
> +	struct cdns_pcie *pcie = &rc->pcie;
> +	struct resource *cfg_res = rc->cfg_res;
> +	struct resource *mem_res = pcie->mem_res;
> +	struct resource *bus_range = rc->bus_range;
> +	struct device *dev = rc->dev;
> +	struct device_node *np = dev->of_node;
> +	struct of_pci_range_parser parser;
> +	struct of_pci_range range;
> +	u32 addr0, addr1, desc1;
> +	u64 cpu_addr;
> +	int r, err;
> +
> +	/*
> +	 * Reserve region 0 for PCI configure space accesses:
> +	 * OB_REGION_PCI_ADDR0 and OB_REGION_DESC0 are updated dynamically by
> +	 * cdns_pci_map_bus(), other region registers are set here once for all.
> +	 */
> +	addr1 = 0; /* Should be programmed to zero. */
> +	desc1 = CDNS_PCIE_AT_OB_REGION_DESC1_BUS(bus_range->start);
> +	cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_PCI_ADDR1(0), addr1);
> +	cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_DESC1(0), desc1);
> +
> +	cpu_addr = cfg_res->start - mem_res->start;
> +	addr0 = CDNS_PCIE_AT_OB_REGION_CPU_ADDR0_NBITS(12) |
> +		(lower_32_bits(cpu_addr) & GENMASK(31, 8));
> +	addr1 = upper_32_bits(cpu_addr);
> +	cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_CPU_ADDR0(0), addr0);
> +	cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_CPU_ADDR1(0), addr1);
> +
> +	err = of_pci_range_parser_init(&parser, np);
> +	if (err)
> +		return err;
> +
> +	r = 1;
> +	for_each_of_pci_range(&parser, &range) {
> +		bool is_io;
> +
> +		if (r >= rc->data->max_regions)
> +			break;
> +
> +		if ((range.flags & IORESOURCE_TYPE_BITS) == IORESOURCE_MEM)
> +			is_io = false;
> +		else if ((range.flags & IORESOURCE_TYPE_BITS) == IORESOURCE_IO)
> +			is_io = true;
> +		else
> +			continue;
> +
> +		cdns_pcie_set_outbound_region(pcie, r, is_io,
> +					      range.cpu_addr,
> +					      range.pci_addr,
> +					      range.size);
> +		r++;
> +	}
> +
> +	/*
> +	 * Set Root Port no BAR match Inbound Translation registers:
> +	 * needed for MSI.

And DMA :) if I understand what this is doing correctly, ie setting
the root complex decoding for incoming memory traffic.

> +	 * Root Port BAR0 and BAR1 are disabled, hence no need to set their
> +	 * inbound translation registers.
> +	 */
> +	addr0 = CDNS_PCIE_AT_IB_RP_BAR_ADDR0_NBITS(rc->data->no_bar_nbits);
> +	addr1 = 0;
> +	cdns_pcie_writel(pcie, CDNS_PCIE_AT_IB_RP_BAR_ADDR0(RP_NO_BAR), addr0);
> +	cdns_pcie_writel(pcie, CDNS_PCIE_AT_IB_RP_BAR_ADDR1(RP_NO_BAR), addr1);
> +
> +	return 0;
> +}
> +
> +static int cdns_pcie_host_init(struct device *dev,
> +			       struct list_head *resources,
> +			       struct cdns_pcie_rc *rc)
> +{
> +	struct resource *bus_range = NULL;
> +	int err;
> +
> +	/* Parse our PCI ranges and request their resources */
> +	err = cdns_pcie_parse_request_of_pci_ranges(dev, resources, &bus_range);
> +	if (err)
> +		goto err_out;

I think that the err_out path should be part of:

cdns_pcie_parse_request_of_pci_ranges()

implementation and here you would just return.

> +
> +	if (bus_range->start > bus_range->end) {
> +		err = -EINVAL;
> +		goto err_out;
> +	}

Add a space here; this check seems useless to me anyway.

> +	rc->bus_range = bus_range;
> +	rc->pcie.bus = bus_range->start;
> +
> +	err = cdns_pcie_host_init_root_port(rc);
> +	if (err)
> +		goto err_out;
> +
> +	err = cdns_pcie_host_init_address_translation(rc);
> +	if (err)
> +		goto err_out;
> +
> +	return 0;
> +
> + err_out:
> +	pci_free_resource_list(resources);

See above.

> +	return err;
> +}
> +
> +static int cdns_pcie_host_probe(struct platform_device *pdev)
> +{
> +	const struct of_device_id *of_id;
> +	const char *type;
> +	struct device *dev = &pdev->dev;
> +	struct device_node *np = dev->of_node;
> +	struct pci_bus *bus, *child;
> +	struct pci_host_bridge *bridge;
> +	struct list_head resources;
> +	struct cdns_pcie_rc *rc;
> +	struct cdns_pcie *pcie;
> +	struct resource *res;
> +	int ret;
> +
> +	bridge = devm_pci_alloc_host_bridge(dev, sizeof(*rc));
> +	if (!bridge)
> +		return -ENOMEM;
> +
> +	rc = pci_host_bridge_priv(bridge);
> +	rc->dev = dev;
> +	platform_set_drvdata(pdev, rc);

I do not think it is needed.

> +	pcie = &rc->pcie;
> +	pcie->is_rc = true;
> +
> +	of_id = of_match_node(cdns_pcie_host_of_match, np);
> +	rc->data = (const struct cdns_pcie_rc_data *)of_id->data;
> +
> +	type = of_get_property(np, "device_type", NULL);
> +	if (!type || strcmp(type, "pci")) {
> +		dev_err(dev, "invalid \"device_type\" %s\n", type);
> +		return -EINVAL;
> +	}
> +
> +	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "reg");
> +	pcie->reg_base = devm_ioremap_resource(dev, res);
> +	if (IS_ERR(pcie->reg_base)) {
> +		dev_err(dev, "missing \"reg\"\n");
> +		return PTR_ERR(pcie->reg_base);
> +	}
> +
> +	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "cfg");
> +	rc->cfg_base = devm_ioremap_resource(dev, res);

devm_pci_remap_cfg_resource() please.

> +	if (IS_ERR(rc->cfg_base)) {
> +		dev_err(dev, "missing \"cfg\"\n");
> +		return PTR_ERR(rc->cfg_base);
> +	}
> +	rc->cfg_res = res;
> +
> +	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "mem");
> +	if (!res) {
> +		dev_err(dev, "missing \"mem\"\n");
> +		return -EINVAL;
> +	}
> +	pcie->mem_res = res;
> +
> +	pm_runtime_enable(dev);
> +	ret = pm_runtime_get_sync(dev);
> +	if (ret < 0) {
> +		dev_err(dev, "pm_runtime_get_sync() failed\n");
> +		goto err_get_sync;
> +	}
> +
> +	INIT_LIST_HEAD(&resources);
> +	ret = cdns_pcie_host_init(dev, &resources, rc);
> +	if (ret)
> +		goto err_init;
> +
> +	list_splice_init(&resources, &bridge->windows);
> +	bridge->dev.parent = dev;
> +	bridge->busnr = pcie->bus;
> +	bridge->ops = &cdns_pcie_host_ops;
> +	bridge->map_irq = of_irq_parse_and_map_pci;
> +	bridge->swizzle_irq = pci_common_swizzle;
> +
> +	ret = pci_scan_root_bus_bridge(bridge);
> +	if (ret < 0) {
> +		dev_err(dev, "Scanning root bridge failed");
> +		goto err_init;
> +	}
> +
> +	bus = bridge->bus;
> +	pci_bus_size_bridges(bus);
> +	pci_bus_assign_resources(bus);
> +
> +	list_for_each_entry(child, &bus->children, node)
> +		pcie_bus_configure_settings(child);
> +
> +	pci_bus_add_devices(bus);
> +
> +	return 0;
> +
> + err_init:
> +	pm_runtime_put_sync(dev);
> +
> + err_get_sync:
> +	pm_runtime_disable(dev);
> +
> +	return ret;
> +}
> +
> +static struct platform_driver cdns_pcie_host_driver = {
> +	.driver = {
> +		.name = "cdns-pcie-host",
> +		.of_match_table = cdns_pcie_host_of_match,
> +	},
> +	.probe = cdns_pcie_host_probe,
> +};
> +builtin_platform_driver(cdns_pcie_host_driver);
> diff --git a/drivers/pci/cadence/pcie-cadence.c b/drivers/pci/cadence/pcie-cadence.c
> new file mode 100644
> index 000000000000..5c10879d5e96
> --- /dev/null
> +++ b/drivers/pci/cadence/pcie-cadence.c
> @@ -0,0 +1,110 @@
> +/*
> + * Cadence PCIe controller driver.
> + *
> + * Copyright (c) 2017 Cadence
> + *
> + * Author: Cyrille Pitchen <cyrille.pitchen@free-electrons.com>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program.  If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#include <linux/kernel.h>
> +
> +#include "pcie-cadence.h"
> +
> +void cdns_pcie_set_outbound_region(struct cdns_pcie *pcie, u32 r, bool is_io,
> +				   u64 cpu_addr, u64 pci_addr, size_t size)
> +{
> +	/*
> +	 * roundup_pow_of_two() returns an unsigned long, which is not suited
> +	 * for 64bit values.
> +	 */
> +	u64 sz = 1ULL << fls64(size - 1);
> +	int nbits = ilog2(sz);
> +	u32 addr0, addr1, desc0, desc1;
> +
> +	if (nbits < 8)
> +		nbits = 8;
> +
> +	/* Set the PCI address */
> +	addr0 = CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_NBITS(nbits) |
> +		(lower_32_bits(pci_addr) & GENMASK(31, 8));
> +	addr1 = upper_32_bits(pci_addr);
> +
> +	cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_PCI_ADDR0(r), addr0);
> +	cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_PCI_ADDR1(r), addr1);
> +
> +	/* Set the PCIe header descriptor */
> +	if (is_io)
> +		desc0 = CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_IO;
> +	else
> +		desc0 = CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_MEM;
> +	desc1 = 0;
> +
> +	if (pcie->is_rc) {
> +		desc0 |= CDNS_PCIE_AT_OB_REGION_DESC0_HARDCODED_RID |
> +			 CDNS_PCIE_AT_OB_REGION_DESC0_DEVFN(0);
> +		desc1 |= CDNS_PCIE_AT_OB_REGION_DESC1_BUS(pcie->bus);
> +	}
> +
> +	cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_DESC0(r), desc0);
> +	cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_DESC1(r), desc1);
> +
> +	/* Set the CPU address */
> +	cpu_addr -= pcie->mem_res->start;
> +	addr0 = CDNS_PCIE_AT_OB_REGION_CPU_ADDR0_NBITS(nbits) |
> +		(lower_32_bits(cpu_addr) & GENMASK(31, 8));
> +	addr1 = upper_32_bits(cpu_addr);
> +
> +	cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_CPU_ADDR0(r), addr0);
> +	cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_CPU_ADDR1(r), addr1);
> +}
> +
> +void cdns_pcie_set_outbound_region_for_normal_msg(struct cdns_pcie *pcie, u32 r,
> +						  u64 cpu_addr)

Not used in this patch, you should split it out.

> +{
> +	u32 addr0, addr1, desc0, desc1;
> +
> +	desc0 = CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_NORMAL_MSG;
> +	desc1 = 0;
> +	if (pcie->is_rc) {
> +		desc0 |= CDNS_PCIE_AT_OB_REGION_DESC0_HARDCODED_RID |
> +			 CDNS_PCIE_AT_OB_REGION_DESC0_DEVFN(0);
> +		desc1 |= CDNS_PCIE_AT_OB_REGION_DESC1_BUS(pcie->bus);
> +	}
> +
> +	/* Set the CPU address */
> +	cpu_addr -= pcie->mem_res->start;
> +	addr0 = CDNS_PCIE_AT_OB_REGION_CPU_ADDR0_NBITS(17) |
> +		(lower_32_bits(cpu_addr) & GENMASK(31, 8));
> +	addr1 = upper_32_bits(cpu_addr);
> +
> +	cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_PCI_ADDR0(r), 0);
> +	cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_PCI_ADDR1(r), 0);
> +	cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_DESC0(r), desc0);
> +	cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_DESC1(r), desc1);
> +	cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_CPU_ADDR0(r), addr0);
> +	cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_CPU_ADDR1(r), addr1);
> +}
> +
> +void cdns_pcie_reset_outbound_region(struct cdns_pcie *pcie, u32 r)
> +{
> +	cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_PCI_ADDR0(r), 0);
> +	cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_PCI_ADDR1(r), 0);
> +
> +	cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_DESC0(r), 0);
> +	cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_DESC1(r), 0);
> +
> +	cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_CPU_ADDR0(r), 0);
> +	cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_CPU_ADDR1(r), 0);
> +}
> diff --git a/drivers/pci/cadence/pcie-cadence.h b/drivers/pci/cadence/pcie-cadence.h
> new file mode 100644
> index 000000000000..195e23b7d4fe
> --- /dev/null
> +++ b/drivers/pci/cadence/pcie-cadence.h
> @@ -0,0 +1,325 @@
> +/*
> + * Cadence PCIe controller driver.
> + *
> + * Copyright (c) 2017 Cadence
> + *
> + * Author: Cyrille Pitchen <cyrille.pitchen@free-electrons.com>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program.  If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#ifndef _PCIE_CADENCE_H
> +#define _PCIE_CADENCE_H
> +
> +#include <linux/kernel.h>
> +#include <linux/pci.h>
> +
> +/*
> + * Local Management Registers
> + */
> +#define CDNS_PCIE_LM_BASE	0x00100000
> +
> +/* Vendor ID Register */
> +#define CDNS_PCIE_LM_ID		(CDNS_PCIE_LM_BASE + 0x0044)
> +#define  CDNS_PCIE_LM_ID_VENDOR_MASK	GENMASK(15, 0)
> +#define  CDNS_PCIE_LM_ID_VENDOR_SHIFT	0
> +#define  CDNS_PCIE_LM_ID_VENDOR(vid) \
> +	(((vid) << CDNS_PCIE_LM_ID_VENDOR_SHIFT) & CDNS_PCIE_LM_ID_VENDOR_MASK)
> +#define  CDNS_PCIE_LM_ID_SUBSYS_MASK	GENMASK(31, 16)
> +#define  CDNS_PCIE_LM_ID_SUBSYS_SHIFT	16
> +#define  CDNS_PCIE_LM_ID_SUBSYS(sub) \
> +	(((sub) << CDNS_PCIE_LM_ID_SUBSYS_SHIFT) & CDNS_PCIE_LM_ID_SUBSYS_MASK)
> +
> +/* Root Port Requestor ID Register */
> +#define CDNS_PCIE_LM_RP_RID	(CDNS_PCIE_LM_BASE + 0x0228)
> +#define  CDNS_PCIE_LM_RP_RID_MASK	GENMASK(15, 0)
> +#define  CDNS_PCIE_LM_RP_RID_SHIFT	0
> +#define  CDNS_PCIE_LM_RP_RID_(rid) \
> +	(((rid) << CDNS_PCIE_LM_RP_RID_SHIFT) & CDNS_PCIE_LM_RP_RID_MASK)
> +
> +/* Endpoint Bus and Device Number Register */
> +#define CDNS_PCIE_LM_EP_ID	(CDNS_PCIE_LM_BASE + 0x022c)
> +#define  CDNS_PCIE_LM_EP_ID_DEV_MASK	GENMASK(4, 0)
> +#define  CDNS_PCIE_LM_EP_ID_DEV_SHIFT	0
> +#define  CDNS_PCIE_LM_EP_ID_BUS_MASK	GENMASK(15, 8)
> +#define  CDNS_PCIE_LM_EP_ID_BUS_SHIFT	8
> +
> +/* Endpoint Function f BAR b Configuration Registers */
> +#define CDNS_PCIE_LM_EP_FUNC_BAR_CFG0(fn) \
> +	(CDNS_PCIE_LM_BASE + 0x0240 + (fn) * 0x0008)
> +#define CDNS_PCIE_LM_EP_FUNC_BAR_CFG1(fn) \
> +	(CDNS_PCIE_LM_BASE + 0x0244 + (fn) * 0x0008)
> +#define  CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_APERTURE_MASK(b) \
> +	(GENMASK(4, 0) << ((b) * 8))
> +#define  CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_APERTURE(b, a) \
> +	(((a) << ((b) * 8)) & CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_APERTURE_MASK(b))
> +#define  CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(b) \
> +	(GENMASK(7, 5) << ((b) * 8))
> +#define  CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_CTRL(b, c) \
> +	(((c) << ((b) * 8 + 5)) & CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(b))
> +
> +/* Endpoint Function Configuration Register */
> +#define CDNS_PCIE_LM_EP_FUNC_CFG	(CDNS_PCIE_LM_BASE + 0x02c0)

All endpoint defines should be moved to the patch that needs them.

> +
> +/* Root Complex BAR Configuration Register */
> +#define CDNS_PCIE_LM_RC_BAR_CFG	(CDNS_PCIE_LM_BASE + 0x0300)
> +#define  CDNS_PCIE_LM_RC_BAR_CFG_BAR0_APERTURE_MASK	GENMASK(5, 0)
> +#define  CDNS_PCIE_LM_RC_BAR_CFG_BAR0_APERTURE(a) \
> +	(((a) << 0) & CDNS_PCIE_LM_RC_BAR_CFG_BAR0_APERTURE_MASK)
> +#define  CDNS_PCIE_LM_RC_BAR_CFG_BAR0_CTRL_MASK		GENMASK(8, 6)
> +#define  CDNS_PCIE_LM_RC_BAR_CFG_BAR0_CTRL(c) \
> +	(((c) << 6) & CDNS_PCIE_LM_RC_BAR_CFG_BAR0_CTRL_MASK)
> +#define  CDNS_PCIE_LM_RC_BAR_CFG_BAR1_APERTURE_MASK	GENMASK(13, 9)
> +#define  CDNS_PCIE_LM_RC_BAR_CFG_BAR1_APERTURE(a) \
> +	(((a) << 9) & CDNS_PCIE_LM_RC_BAR_CFG_BAR1_APERTURE_MASK)
> +#define  CDNS_PCIE_LM_RC_BAR_CFG_BAR1_CTRL_MASK		GENMASK(16, 14)
> +#define  CDNS_PCIE_LM_RC_BAR_CFG_BAR1_CTRL(c) \
> +	(((c) << 14) & CDNS_PCIE_LM_RC_BAR_CFG_BAR1_CTRL_MASK)
> +#define  CDNS_PCIE_LM_RC_BAR_CFG_PREFETCH_MEM_ENABLE	BIT(17)
> +#define  CDNS_PCIE_LM_RC_BAR_CFG_PREFETCH_MEM_32BITS	0
> +#define  CDNS_PCIE_LM_RC_BAR_CFG_PREFETCH_MEM_64BITS	BIT(18)
> +#define  CDNS_PCIE_LM_RC_BAR_CFG_IO_ENABLE		BIT(19)
> +#define  CDNS_PCIE_LM_RC_BAR_CFG_IO_16BITS		0
> +#define  CDNS_PCIE_LM_RC_BAR_CFG_IO_32BITS		BIT(20)
> +#define  CDNS_PCIE_LM_RC_BAR_CFG_CHECK_ENABLE		BIT(31)
> +
> +/* BAR control values applicable to both Endpoint Function and Root Complex */
> +#define  CDNS_PCIE_LM_BAR_CFG_CTRL_DISABLED		0x0
> +#define  CDNS_PCIE_LM_BAR_CFG_CTRL_IO_32BITS		0x1
> +#define  CDNS_PCIE_LM_BAR_CFG_CTRL_MEM_32BITS		0x4
> +#define  CDNS_PCIE_LM_BAR_CFG_CTRL_PREFETCH_MEM_32BITS	0x5
> +#define  CDNS_PCIE_LM_BAR_CFG_CTRL_MEM_64BITS		0x6
> +#define  CDNS_PCIE_LM_BAR_CFG_CTRL_PREFETCH_MEM_64BITS	0x7
> +
> +
> +/*
> + * Endpoint Function Registers (PCI configuration space for endpoint functions)
> + */
> +#define CDNS_PCIE_EP_FUNC_BASE(fn)	(((fn) << 12) & GENMASK(19, 12))
> +
> +#define CDNS_PCIE_EP_FUNC_MSI_CAP_OFFSET	0x90
> +
> +/*
> + * Root Port Registers (PCI configuration space for the root port function)
> + */
> +#define CDNS_PCIE_RP_BASE	0x00200000
> +
> +
> +/*
> + * Address Translation Registers
> + */
> +#define CDNS_PCIE_AT_BASE	0x00400000
> +
> +/* Region r Outbound AXI to PCIe Address Translation Register 0 */
> +#define CDNS_PCIE_AT_OB_REGION_PCI_ADDR0(r) \
> +	(CDNS_PCIE_AT_BASE + 0x0000 + ((r) & 0x1f) * 0x0020)
> +#define  CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_NBITS_MASK	GENMASK(5, 0)
> +#define  CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_NBITS(nbits) \
> +	(((nbits) - 1) & CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_NBITS_MASK)
> +#define  CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_DEVFN_MASK	GENMASK(19, 12)
> +#define  CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_DEVFN(devfn) \
> +	(((devfn) << 12) & CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_DEVFN_MASK)
> +#define  CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_BUS_MASK	GENMASK(27, 20)
> +#define  CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_BUS(bus) \
> +	(((bus) << 20) & CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_BUS_MASK)
> +
> +/* Region r Outbound AXI to PCIe Address Translation Register 1 */
> +#define CDNS_PCIE_AT_OB_REGION_PCI_ADDR1(r) \
> +	(CDNS_PCIE_AT_BASE + 0x0004 + ((r) & 0x1f) * 0x0020)
> +
> +/* Region r Outbound PCIe Descriptor Register 0 */
> +#define CDNS_PCIE_AT_OB_REGION_DESC0(r) \
> +	(CDNS_PCIE_AT_BASE + 0x0008 + ((r) & 0x1f) * 0x0020)
> +#define  CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_MASK		GENMASK(3, 0)
> +#define  CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_MEM		0x2
> +#define  CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_IO		0x6
> +#define  CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_CONF_TYPE0	0xa
> +#define  CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_CONF_TYPE1	0xb
> +#define  CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_NORMAL_MSG	0xc
> +#define  CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_VENDOR_MSG	0xd
> +/* Bit 23 MUST be set in RC mode. */
> +#define  CDNS_PCIE_AT_OB_REGION_DESC0_HARDCODED_RID	BIT(23)
> +#define  CDNS_PCIE_AT_OB_REGION_DESC0_DEVFN_MASK	GENMASK(31, 24)
> +#define  CDNS_PCIE_AT_OB_REGION_DESC0_DEVFN(devfn) \
> +	(((devfn) << 24) & CDNS_PCIE_AT_OB_REGION_DESC0_DEVFN_MASK)
> +
> +/* Region r Outbound PCIe Descriptor Register 1 */
> +#define CDNS_PCIE_AT_OB_REGION_DESC1(r)	\
> +	(CDNS_PCIE_AT_BASE + 0x000c + ((r) & 0x1f) * 0x0020)
> +#define  CDNS_PCIE_AT_OB_REGION_DESC1_BUS_MASK	GENMASK(7, 0)
> +#define  CDNS_PCIE_AT_OB_REGION_DESC1_BUS(bus) \
> +	((bus) & CDNS_PCIE_AT_OB_REGION_DESC1_BUS_MASK)
> +
> +/* Region r AXI Region Base Address Register 0 */
> +#define CDNS_PCIE_AT_OB_REGION_CPU_ADDR0(r) \
> +	(CDNS_PCIE_AT_BASE + 0x0018 + ((r) & 0x1f) * 0x0020)
> +#define  CDNS_PCIE_AT_OB_REGION_CPU_ADDR0_NBITS_MASK	GENMASK(5, 0)
> +#define  CDNS_PCIE_AT_OB_REGION_CPU_ADDR0_NBITS(nbits) \
> +	(((nbits) - 1) & CDNS_PCIE_AT_OB_REGION_CPU_ADDR0_NBITS_MASK)
> +
> +/* Region r AXI Region Base Address Register 1 */
> +#define CDNS_PCIE_AT_OB_REGION_CPU_ADDR1(r) \
> +	(CDNS_PCIE_AT_BASE + 0x001c + ((r) & 0x1f) * 0x0020)
> +
> +/* Root Port BAR Inbound PCIe to AXI Address Translation Register */
> +#define CDNS_PCIE_AT_IB_RP_BAR_ADDR0(bar) \
> +	(CDNS_PCIE_AT_BASE + 0x0800 + (bar) * 0x0008)
> +#define  CDNS_PCIE_AT_IB_RP_BAR_ADDR0_NBITS_MASK	GENMASK(5, 0)
> +#define  CDNS_PCIE_AT_IB_RP_BAR_ADDR0_NBITS(nbits) \
> +	(((nbits) - 1) & CDNS_PCIE_AT_IB_RP_BAR_ADDR0_NBITS_MASK)
> +#define CDNS_PCIE_AT_IB_RP_BAR_ADDR1(bar) \
> +	(CDNS_PCIE_AT_BASE + 0x0804 + (bar) * 0x0008)
> +
> +enum cdns_pcie_rp_bar {
> +	RP_BAR0,
> +	RP_BAR1,
> +	RP_NO_BAR
> +};
> +
> +/* Endpoint Function BAR Inbound PCIe to AXI Address Translation Register */
> +#define CDNS_PCIE_AT_IB_EP_FUNC_BAR_ADDR0(fn, bar) \
> +	(CDNS_PCIE_AT_BASE + 0x0840 + (fn) * 0x0040 + (bar) * 0x0008)
> +#define CDNS_PCIE_AT_IB_EP_FUNC_BAR_ADDR1(fn, bar) \
> +	(CDNS_PCIE_AT_BASE + 0x0844 + (fn) * 0x0040 + (bar) * 0x0008)
> +
> +/* Normal/Vendor specific message access: offset inside some outbound region */
> +#define CDNS_PCIE_NORMAL_MSG_ROUTING_MASK	GENMASK(7, 5)
> +#define CDNS_PCIE_NORMAL_MSG_ROUTING(route) \
> +	(((route) << 5) & CDNS_PCIE_NORMAL_MSG_ROUTING_MASK)
> +#define CDNS_PCIE_NORMAL_MSG_CODE_MASK		GENMASK(15, 8)
> +#define CDNS_PCIE_NORMAL_MSG_CODE(code) \
> +	(((code) << 8) & CDNS_PCIE_NORMAL_MSG_CODE_MASK)
> +#define CDNS_PCIE_MSG_NO_DATA			BIT(16)
> +
> +enum cdns_pcie_msg_code {
> +	MSG_CODE_ASSERT_INTA	= 0x20,
> +	MSG_CODE_ASSERT_INTB	= 0x21,
> +	MSG_CODE_ASSERT_INTC	= 0x22,
> +	MSG_CODE_ASSERT_INTD	= 0x23,
> +	MSG_CODE_DEASSERT_INTA	= 0x24,
> +	MSG_CODE_DEASSERT_INTB	= 0x25,
> +	MSG_CODE_DEASSERT_INTC	= 0x26,
> +	MSG_CODE_DEASSERT_INTD	= 0x27,
> +};
> +
> +enum cdns_pcie_msg_routing {
> +	/* Route to Root Complex */
> +	MSG_ROUTING_TO_RC,
> +
> +	/* Use Address Routing */
> +	MSG_ROUTING_BY_ADDR,
> +
> +	/* Use ID Routing */
> +	MSG_ROUTING_BY_ID,
> +
> +	/* Route as Broadcast Message from Root Complex */
> +	MSG_ROUTING_BCAST,
> +
> +	/* Local message; terminate at receiver (INTx messages) */
> +	MSG_ROUTING_LOCAL,
> +
> +	/* Gather & route to Root Complex (PME_TO_Ack message) */
> +	MSG_ROUTING_GATHER,
> +};
> +
> +/**
> + * struct cdns_pcie - private data for Cadence PCIe controller drivers
> + * @reg_base: IO mapped register base
> + * @mem_res: start/end offsets in the physical system memory to map PCI accesses
> + * @is_rc: tell whether the PCIe controller mode is Root Complex or Endpoint.
> + * @bus: In Root Complex mode, the bus number
> + */
> +struct cdns_pcie {
> +	void __iomem		*reg_base;
> +	struct resource		*mem_res;
> +	bool			is_rc;
> +	u8			bus;
> +};
> +
> +/* Register access */
> +static inline void cdns_pcie_writeb(struct cdns_pcie *pcie, u32 reg, u8 value)
> +{
> +	writeb(value, pcie->reg_base + reg);
> +}
> +
> +static inline void cdns_pcie_writew(struct cdns_pcie *pcie, u32 reg, u16 value)
> +{
> +	writew(value, pcie->reg_base + reg);
> +}
> +
> +static inline void cdns_pcie_writel(struct cdns_pcie *pcie, u32 reg, u32 value)
> +{
> +	writel(value, pcie->reg_base + reg);
> +}
> +
> +static inline u32 cdns_pcie_readl(struct cdns_pcie *pcie, u32 reg)
> +{
> +	return readl(pcie->reg_base + reg);
> +}
> +
> +/* Root Port register access */
> +static inline void cdns_pcie_rp_writeb(struct cdns_pcie *pcie,
> +				       u32 reg, u8 value)
> +{
> +	writeb(value, pcie->reg_base + CDNS_PCIE_RP_BASE + reg);
> +}
> +
> +static inline void cdns_pcie_rp_writew(struct cdns_pcie *pcie,
> +				       u32 reg, u16 value)
> +{
> +	writew(value, pcie->reg_base + CDNS_PCIE_RP_BASE + reg);
> +}
> +
> +/* Endpoint Function register access */
> +static inline void cdns_pcie_ep_fn_writeb(struct cdns_pcie *pcie, u8 fn,
> +					  u32 reg, u8 value)
> +{
> +	writeb(value, pcie->reg_base + CDNS_PCIE_EP_FUNC_BASE(fn) + reg);
> +}
> +
> +static inline void cdns_pcie_ep_fn_writew(struct cdns_pcie *pcie, u8 fn,
> +					  u32 reg, u16 value)
> +{
> +	writew(value, pcie->reg_base + CDNS_PCIE_EP_FUNC_BASE(fn) + reg);
> +}
> +
> +static inline void cdns_pcie_ep_fn_writel(struct cdns_pcie *pcie, u8 fn,
> +					  u32 reg, u16 value)
> +{
> +	writel(value, pcie->reg_base + CDNS_PCIE_EP_FUNC_BASE(fn) + reg);
> +}
> +
> +static inline u8 cdns_pcie_ep_fn_readb(struct cdns_pcie *pcie, u8 fn, u32 reg)
> +{
> +	return readb(pcie->reg_base + CDNS_PCIE_EP_FUNC_BASE(fn) + reg);
> +}
> +
> +static inline u16 cdns_pcie_ep_fn_readw(struct cdns_pcie *pcie, u8 fn, u32 reg)
> +{
> +	return readw(pcie->reg_base + CDNS_PCIE_EP_FUNC_BASE(fn) + reg);
> +}
> +
> +static inline u32 cdns_pcie_ep_fn_readl(struct cdns_pcie *pcie, u8 fn, u32 reg)
> +{
> +	return readl(pcie->reg_base + CDNS_PCIE_EP_FUNC_BASE(fn) + reg);
> +}

Same comments for all endpoint related functions and defines above.

Thanks,
Lorenzo

> +void cdns_pcie_set_outbound_region(struct cdns_pcie *pcie, u32 r, bool is_io,
> +				   u64 cpu_addr, u64 pci_addr, size_t size);
> +
> +void cdns_pcie_set_outbound_region_for_normal_msg(struct cdns_pcie *pcie, u32 r,
> +						  u64 cpu_addr);
> +
> +void cdns_pcie_reset_outbound_region(struct cdns_pcie *pcie, u32 r);
> +
> +#endif /* _PCIE_CADENCE_H */
> -- 
> 2.11.0
> 

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 0/5] PCI: Add support to the Cadence PCIe controller
  2017-11-28 15:50 ` [PATCH 0/5] PCI: Add support to the " Lorenzo Pieralisi
@ 2017-11-30  7:13   ` Kishon Vijay Abraham I
  2017-11-30 18:18     ` Lorenzo Pieralisi
  0 siblings, 1 reply; 39+ messages in thread
From: Kishon Vijay Abraham I @ 2017-11-30  7:13 UTC (permalink / raw)
  To: Lorenzo Pieralisi, Cyrille Pitchen
  Cc: bhelgaas, linux-pci, adouglas, stelford, dgary, kgopi, eandrews,
	thomas.petazzoni, sureshp, nsekhar, linux-kernel, robh,
	devicetree

Hi,

On Tuesday 28 November 2017 09:20 PM, Lorenzo Pieralisi wrote:
> On Thu, Nov 23, 2017 at 04:01:45PM +0100, Cyrille Pitchen wrote:
>> Hi all,
>>
>> this series of patches adds support to the Cadence PCIe controller.
>> It was tested on a ARM64 platform emulated by a Palladium running both
>> linux-next (next-20171123) and pci-next kernels.
>>
>> The host mode was tested with some PCIe devices connected to the Palladium
>> through a speed-bridge. Some of those devices were a USB host controller
>> and a SATA controller. The PCIe host controller was also tested with a
>> second controller configured in endpoint mode and connected back to back
>> to the first controller.
>>
>> The EndPoint Controller (EPC) driver of this series was tested with the
>> pci-epf-test.c EndPoint Function (EPF) driver and the pcitest userspace
>> program.
>>
>> For linux-next, I applied this series on top of Kishon's patch
>> ("PCI: endpoint: Use EPC's device in dma_alloc_coherent/dma_free_coherent")
>> otherwise dma_alloc_coherent() fails when called by pci_epf_alloc_space().
>>
>> Also, I patched drivers/Makefile rather than drivers/pci/Makefile to make
>> the drivers/pci/cadence/pcie-cadence-ep.o linked after

The reason to patch drivers/Makefile should be because pcie-cadence-ep has to
be compiled even when CONFIG_PCI is not enabled. CONFIG_PCI enables host
specific features and ENDPOINT shouldn't depend on CONFIG_PCI.
>> drivers/pci/endpoint/*.o objects, otherwise the built-in pci-cadence-ep
>> driver would be probed before the PCI endpoint framework would have been
>> initialized, which results in a kernel crash.
> 
> Nice :( - isn't there a way to improve this (ie probe deferral or
> registering the EPF bus earlier) ?
> 
>> I guess this is the reason why the "pci/dwc" line was also put in
>> drivers/Makefile, right after the "pci/endpoint" line.
> 
> Or probably the other way around - see commit 5e8cb4033807
> 
> @Kishon, thoughts ?

Lorenzo, ordering Makefile is one way to initialize EP core before other
drivers. the other way is to have PCI EP core have a different initcall level..
subsys_initcall??

Thanks
Kishon

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 3/5] PCI: cadence: Add host driver for Cadence PCIe controller
  2017-11-29 18:25   ` Lorenzo Pieralisi
@ 2017-11-30 10:06     ` Lorenzo Pieralisi
  0 siblings, 0 replies; 39+ messages in thread
From: Lorenzo Pieralisi @ 2017-11-30 10:06 UTC (permalink / raw)
  To: Cyrille Pitchen
  Cc: bhelgaas, kishon, linux-pci, adouglas, stelford, dgary, kgopi,
	eandrews, thomas.petazzoni, sureshp, nsekhar, linux-kernel, robh,
	devicetree

On Wed, Nov 29, 2017 at 06:25:15PM +0000, Lorenzo Pieralisi wrote:

[...]

> static void __iomem *cdns_pci_map_bus(struct pci_bus *bus, unsigned int devfn,
> 				      int where)
> 
> > +cdns_pci_map_bus(struct pci_bus *bus, unsigned int devfn, int where)
> > +{
> > +	struct pci_host_bridge *bridge = pci_find_host_bridge(bus);
> > +	struct cdns_pcie_rc *rc = pci_host_bridge_priv(bridge);
> > +	struct cdns_pcie *pcie = &rc->pcie;
> > +	unsigned int busn = bus->number;
> > +	u32 addr0, desc0;
> > +
> > +	if (busn < rc->bus_range->start || busn > rc->bus_range->end)
> > +		return NULL;
> 
> It does not hurt but I wonder whether you really need this check.
> 
> > +	if (busn == rc->bus_range->start) {
> > +		if (devfn)
> 
> I suspect I know why you need this check but I ask you to explain it
> anyway if you do not mind please.
> 
> > +			return NULL;
> > +
> > +		return pcie->reg_base + (where & 0xfff);
> > +	}
> > +
> > +	/* Update Output registers for AXI region 0. */
> > +	addr0 = CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_NBITS(12) |
> 
> Ok, so for every config access you reprogram addr0 to reflect the
> correct bus|devfn ID in the PCI bus TLP corresponding to an ECAM address
> in CPU physical address space, is my understanding correct ?

By re-reading it, it looks like this mechanism is there to just
associate a different RID TLP on the PCI bus to a fixed window
in CPU virtual address space.

> > +		CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_DEVFN(devfn) |
> > +		CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_BUS(busn);
> > +	cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_PCI_ADDR0(0), addr0);
> > +
> > +	/* Configuration Type 0 or Type 1 access. */
> > +	desc0 = CDNS_PCIE_AT_OB_REGION_DESC0_HARDCODED_RID |
> > +		CDNS_PCIE_AT_OB_REGION_DESC0_DEVFN(0);
> > +	/*
> > +	 * The bus number was already set once for all in desc1 by
> > +	 * cdns_pcie_host_init_address_translation().
> > +	 */
> > +	if (busn == rc->bus_range->start + 1)
> > +		desc0 |= CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_CONF_TYPE0;
> > +	else
> > +		desc0 |= CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_CONF_TYPE1;
> 
> I would like to ask you why you have to do it here and the root port
> does not figure it out by itself, I do not have the datasheet so I am
> just asking for my own information.
> 
> > +	cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_DESC0(0), desc0);
> > +
> > +	return rc->cfg_base + (where & 0xfff);
> > +}

[...]

> > +static int cdns_pcie_host_init(struct device *dev,
> > +			       struct list_head *resources,
> > +			       struct cdns_pcie_rc *rc)
> > +{
> > +	struct resource *bus_range = NULL;
> > +	int err;
> > +
> > +	/* Parse our PCI ranges and request their resources */
> > +	err = cdns_pcie_parse_request_of_pci_ranges(dev, resources, &bus_range);
> > +	if (err)
> > +		goto err_out;
> 
> I think that the err_out path should be part of:
> 
> cdns_pcie_parse_request_of_pci_ranges()
> 
> implementation and here you would just return.

I take it back, what you are doing is cleaner and allows you to
have code freeing the resource list in one single place so you
can leave this as-is.

Thanks,
Lorenzo

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 0/5] PCI: Add support to the Cadence PCIe controller
  2017-11-30  7:13   ` Kishon Vijay Abraham I
@ 2017-11-30 18:18     ` Lorenzo Pieralisi
  2017-11-30 18:45       ` Cyrille Pitchen
  0 siblings, 1 reply; 39+ messages in thread
From: Lorenzo Pieralisi @ 2017-11-30 18:18 UTC (permalink / raw)
  To: Kishon Vijay Abraham I
  Cc: Cyrille Pitchen, bhelgaas, linux-pci, adouglas, stelford, dgary,
	kgopi, eandrews, thomas.petazzoni, sureshp, nsekhar,
	linux-kernel, robh, devicetree

On Thu, Nov 30, 2017 at 12:43:20PM +0530, Kishon Vijay Abraham I wrote:

[...]

> >> For linux-next, I applied this series on top of Kishon's patch
> >> ("PCI: endpoint: Use EPC's device in dma_alloc_coherent/dma_free_coherent")
> >> otherwise dma_alloc_coherent() fails when called by pci_epf_alloc_space().
> >>
> >> Also, I patched drivers/Makefile rather than drivers/pci/Makefile to make
> >> the drivers/pci/cadence/pcie-cadence-ep.o linked after
> 
> The reason to patch drivers/Makefile should be because pcie-cadence-ep has to
> be compiled even when CONFIG_PCI is not enabled. CONFIG_PCI enables host
> specific features and ENDPOINT shouldn't depend on CONFIG_PCI.
> >> drivers/pci/endpoint/*.o objects, otherwise the built-in pci-cadence-ep
> >> driver would be probed before the PCI endpoint framework would have been
> >> initialized, which results in a kernel crash.
> > 
> > Nice :( - isn't there a way to improve this (ie probe deferral or
> > registering the EPF bus earlier) ?
> > 
> >> I guess this is the reason why the "pci/dwc" line was also put in
> >> drivers/Makefile, right after the "pci/endpoint" line.
> > 
> > Or probably the other way around - see commit 5e8cb4033807
> > 
> > @Kishon, thoughts ?
> 
> Lorenzo, ordering Makefile is one way to initialize EP core before

Makefile ordering is fragile, I do not like relying on it.

> other drivers. the other way is to have PCI EP core have a different
> initcall level..  subsys_initcall??

Yes, registering the bus at eg postcore_initcall() as PCI does should do
(if that's the problem this is solving) but still, the code must not
crash if the ordering is not correct, we have to fix this regardless.

I would appreciate if Cyrille can debug the cause of the kernel crash
so that we can fix it in the longer term.

Thanks,
Lorenzo

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 0/5] PCI: Add support to the Cadence PCIe controller
  2017-11-30 18:18     ` Lorenzo Pieralisi
@ 2017-11-30 18:45       ` Cyrille Pitchen
  2017-11-30 20:05         ` Cyrille Pitchen
  0 siblings, 1 reply; 39+ messages in thread
From: Cyrille Pitchen @ 2017-11-30 18:45 UTC (permalink / raw)
  To: Lorenzo Pieralisi, Kishon Vijay Abraham I
  Cc: bhelgaas, linux-pci, adouglas, stelford, dgary, kgopi, eandrews,
	thomas.petazzoni, sureshp, nsekhar, linux-kernel, robh,
	devicetree

Hi all,

Le 30/11/2017 à 19:18, Lorenzo Pieralisi a écrit :
> On Thu, Nov 30, 2017 at 12:43:20PM +0530, Kishon Vijay Abraham I wrote:
> 
> [...]
> 
>>>> For linux-next, I applied this series on top of Kishon's patch
>>>> ("PCI: endpoint: Use EPC's device in dma_alloc_coherent/dma_free_coherent")
>>>> otherwise dma_alloc_coherent() fails when called by pci_epf_alloc_space().
>>>>
>>>> Also, I patched drivers/Makefile rather than drivers/pci/Makefile to make
>>>> the drivers/pci/cadence/pcie-cadence-ep.o linked after
>>
>> The reason to patch drivers/Makefile should be because pcie-cadence-ep has to
>> be compiled even when CONFIG_PCI is not enabled. CONFIG_PCI enables host
>> specific features and ENDPOINT shouldn't depend on CONFIG_PCI.
>>>> drivers/pci/endpoint/*.o objects, otherwise the built-in pci-cadence-ep
>>>> driver would be probed before the PCI endpoint framework would have been
>>>> initialized, which results in a kernel crash.
>>>
>>> Nice :( - isn't there a way to improve this (ie probe deferral or
>>> registering the EPF bus earlier) ?
>>>
>>>> I guess this is the reason why the "pci/dwc" line was also put in
>>>> drivers/Makefile, right after the "pci/endpoint" line.
>>>
>>> Or probably the other way around - see commit 5e8cb4033807
>>>
>>> @Kishon, thoughts ?
>>
>> Lorenzo, ordering Makefile is one way to initialize EP core before
> 
> Makefile ordering is fragile, I do not like relying on it.
> 
>> other drivers. the other way is to have PCI EP core have a different
>> initcall level..  subsys_initcall??
> 
> Yes, registering the bus at eg postcore_initcall() as PCI does should do
> (if that's the problem this is solving) but still, the code must not
> crash if the ordering is not correct, we have to fix this regardless.
> 
> I would appreciate if Cyrille can debug the cause of the kernel crash
> so that we can fix it in the longer term.
> 

I had the crash kernel few weeks ago hence I have to test again to confirm
but if I remember correctly it was a NULL pointer dereferencing crash:

cdns_pcie_ep_probe()
|
+--> devm_pci_epc_create()
     |
     +--> __devm_pci_ep_create()
          |
          +--> __pci_epc_create()
	       |
	       +--> pci_ep_cfs_add_epc_group()
	            |
		    | /*
		    |  * Here controllers_group is still NULL because
		    |  * pci_ep_cfs_init() has not been called yet.
		    |  */
		    +--> configfs_register_group(controllers_group, ...)
		         |
			 +--> link_group()
			      /* Dereferencing NULL pointer */

Best regards,

Cyrille

> Thanks,
> Lorenzo
> 


-- 
Cyrille Pitchen, Free Electrons
Embedded Linux and Kernel engineering
http://free-electrons.com

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 0/5] PCI: Add support to the Cadence PCIe controller
  2017-11-30 18:45       ` Cyrille Pitchen
@ 2017-11-30 20:05         ` Cyrille Pitchen
  2017-11-30 23:05           ` Bjorn Helgaas
  0 siblings, 1 reply; 39+ messages in thread
From: Cyrille Pitchen @ 2017-11-30 20:05 UTC (permalink / raw)
  To: Lorenzo Pieralisi, Kishon Vijay Abraham I
  Cc: bhelgaas, linux-pci, adouglas, stelford, dgary, kgopi, eandrews,
	thomas.petazzoni, sureshp, nsekhar, linux-kernel, robh,
	devicetree

Le 30/11/2017 à 19:45, Cyrille Pitchen a écrit :
> Hi all,
> 
> Le 30/11/2017 à 19:18, Lorenzo Pieralisi a écrit :
>> On Thu, Nov 30, 2017 at 12:43:20PM +0530, Kishon Vijay Abraham I wrote:
>>
>> [...]
>>
>>>>> For linux-next, I applied this series on top of Kishon's patch
>>>>> ("PCI: endpoint: Use EPC's device in dma_alloc_coherent/dma_free_coherent")
>>>>> otherwise dma_alloc_coherent() fails when called by pci_epf_alloc_space().
>>>>>
>>>>> Also, I patched drivers/Makefile rather than drivers/pci/Makefile to make
>>>>> the drivers/pci/cadence/pcie-cadence-ep.o linked after
>>>
>>> The reason to patch drivers/Makefile should be because pcie-cadence-ep has to
>>> be compiled even when CONFIG_PCI is not enabled. CONFIG_PCI enables host
>>> specific features and ENDPOINT shouldn't depend on CONFIG_PCI.
>>>>> drivers/pci/endpoint/*.o objects, otherwise the built-in pci-cadence-ep
>>>>> driver would be probed before the PCI endpoint framework would have been
>>>>> initialized, which results in a kernel crash.
>>>>
>>>> Nice :( - isn't there a way to improve this (ie probe deferral or
>>>> registering the EPF bus earlier) ?
>>>>
>>>>> I guess this is the reason why the "pci/dwc" line was also put in
>>>>> drivers/Makefile, right after the "pci/endpoint" line.
>>>>
>>>> Or probably the other way around - see commit 5e8cb4033807
>>>>
>>>> @Kishon, thoughts ?
>>>
>>> Lorenzo, ordering Makefile is one way to initialize EP core before
>>
>> Makefile ordering is fragile, I do not like relying on it.
>>
>>> other drivers. the other way is to have PCI EP core have a different
>>> initcall level..  subsys_initcall??
>>
>> Yes, registering the bus at eg postcore_initcall() as PCI does should do
>> (if that's the problem this is solving) but still, the code must not
>> crash if the ordering is not correct, we have to fix this regardless.
>>
>> I would appreciate if Cyrille can debug the cause of the kernel crash
>> so that we can fix it in the longer term.
>>
> 
> I had the crash kernel few weeks ago hence I have to test again to confirm
> but if I remember correctly it was a NULL pointer dereferencing crash:
> 
> cdns_pcie_ep_probe()
> |
> +--> devm_pci_epc_create()
>      |
>      +--> __devm_pci_ep_create()
>           |
>           +--> __pci_epc_create()
> 	       |
> 	       +--> pci_ep_cfs_add_epc_group()
> 	            |
> 		    | /*
> 		    |  * Here controllers_group is still NULL because
> 		    |  * pci_ep_cfs_init() has not been called yet.
> 		    |  */
> 		    +--> configfs_register_group(controllers_group, ...)
> 		         |
> 			 +--> link_group()
> 			      /* Dereferencing NULL pointer */
> 
> Best regards,
> 
> Cyrille
>

I know it's far from perfect but what do you think or something like that:

---8<---------------------------------------------------------------------------
diff --git a/drivers/Makefile b/drivers/Makefile
index 27bdd98784d9..9757199b9a65 100644
--- a/drivers/Makefile
+++ b/drivers/Makefile
@@ -16,11 +16,7 @@ obj-$(CONFIG_PINCTRL)		+= pinctrl/
 obj-$(CONFIG_GPIOLIB)		+= gpio/
 obj-y				+= pwm/
 
-obj-$(CONFIG_PCI)		+= pci/
-obj-$(CONFIG_PCI_ENDPOINT)	+= pci/endpoint/
-obj-$(CONFIG_PCI_CADENCE)	+= pci/cadence/
-# PCI dwc controller drivers
-obj-y				+= pci/dwc/
+obj-y				+= pci/
 
 obj-$(CONFIG_PARISC)		+= parisc/
 obj-$(CONFIG_RAPIDIO)		+= rapidio/
diff --git a/drivers/pci/Makefile b/drivers/pci/Makefile
index 3d5e047f0a32..4e57fe4499ce 100644
--- a/drivers/pci/Makefile
+++ b/drivers/pci/Makefile
@@ -3,6 +3,7 @@
 # Makefile for the PCI bus specific drivers.
 #
 
+ifdef CONFIG_PCI
 obj-y		+= access.o bus.o probe.o host-bridge.o remove.o pci.o \
 			pci-driver.o search.o pci-sysfs.o rom.o setup-res.o \
 			irq.o vpd.o setup-bus.o vc.o mmap.o setup-irq.o
@@ -54,3 +55,9 @@ ccflags-$(CONFIG_PCI_DEBUG) := -DDEBUG
 # PCI host controller drivers
 obj-y += host/
 obj-y += switch/
+endif
+
+obj-$(CONFIG_PCI_ENDPOINT)	+= endpoint/
+obj-$(CONFIG_PCI_CADENCE)	+= cadence/
+# PCI dwc controller drivers
+obj-y				+= dwc/
---8<---------------------------------------------------------------------------

I admit it doesn't solve the ordering issue but at least it cleans some mess
in drivers/Makefile. 

I agree with Lorenzo, we should avoid as much as we could to rely on the
link order.

However I faced another issue: code from pcie-cadence.c is used by both
pcie-cadence-host.c and pcie-cadence-ep.c so I had to find a way to compile
this shared file if at least one of the host and EPC driver was selected.

That's why I put source files in the cadence sub-directory and used the
single cadence/Makefile to regroup all the compilation rules, instead of
splitting them between pci/Makefile and pci/endpoint/Makefile and having to
add pcie-cadence.o in both Makefiles.

>> Thanks,
>> Lorenzo
>>
> 
> 


-- 
Cyrille Pitchen, Free Electrons
Embedded Linux and Kernel engineering
http://free-electrons.com

^ permalink raw reply related	[flat|nested] 39+ messages in thread

* Re: [PATCH 0/5] PCI: Add support to the Cadence PCIe controller
  2017-11-30 20:05         ` Cyrille Pitchen
@ 2017-11-30 23:05           ` Bjorn Helgaas
  0 siblings, 0 replies; 39+ messages in thread
From: Bjorn Helgaas @ 2017-11-30 23:05 UTC (permalink / raw)
  To: Cyrille Pitchen
  Cc: Lorenzo Pieralisi, Kishon Vijay Abraham I, bhelgaas, linux-pci,
	adouglas, stelford, dgary, kgopi, eandrews, thomas.petazzoni,
	sureshp, nsekhar, linux-kernel, robh, devicetree

On Thu, Nov 30, 2017 at 09:05:12PM +0100, Cyrille Pitchen wrote:
> Le 30/11/2017 à 19:45, Cyrille Pitchen a écrit :
> > Le 30/11/2017 à 19:18, Lorenzo Pieralisi a écrit :
> >> On Thu, Nov 30, 2017 at 12:43:20PM +0530, Kishon Vijay Abraham I wrote:

> >>> The reason to patch drivers/Makefile should be because
> >>> pcie-cadence-ep has to be compiled even when CONFIG_PCI is not
> >>> enabled. CONFIG_PCI enables host specific features and ENDPOINT
> >>> shouldn't depend on CONFIG_PCI.

> I know it's far from perfect but what do you think or something like that:
> 
> ---8<---------------------------------------------------------------------------
> diff --git a/drivers/Makefile b/drivers/Makefile
> index 27bdd98784d9..9757199b9a65 100644
> --- a/drivers/Makefile
> +++ b/drivers/Makefile
> @@ -16,11 +16,7 @@ obj-$(CONFIG_PINCTRL)		+= pinctrl/
>  obj-$(CONFIG_GPIOLIB)		+= gpio/
>  obj-y				+= pwm/
>  
> -obj-$(CONFIG_PCI)		+= pci/
> -obj-$(CONFIG_PCI_ENDPOINT)	+= pci/endpoint/
> -obj-$(CONFIG_PCI_CADENCE)	+= pci/cadence/
> -# PCI dwc controller drivers
> -obj-y				+= pci/dwc/
> +obj-y				+= pci/
>  
>  obj-$(CONFIG_PARISC)		+= parisc/
>  obj-$(CONFIG_RAPIDIO)		+= rapidio/
> diff --git a/drivers/pci/Makefile b/drivers/pci/Makefile
> index 3d5e047f0a32..4e57fe4499ce 100644
> --- a/drivers/pci/Makefile
> +++ b/drivers/pci/Makefile
> @@ -3,6 +3,7 @@
>  # Makefile for the PCI bus specific drivers.
>  #
>  
> +ifdef CONFIG_PCI
>  obj-y		+= access.o bus.o probe.o host-bridge.o remove.o pci.o \
>  			pci-driver.o search.o pci-sysfs.o rom.o setup-res.o \
>  			irq.o vpd.o setup-bus.o vc.o mmap.o setup-irq.o
> @@ -54,3 +55,9 @@ ccflags-$(CONFIG_PCI_DEBUG) := -DDEBUG
>  # PCI host controller drivers
>  obj-y += host/
>  obj-y += switch/
> +endif
> +
> +obj-$(CONFIG_PCI_ENDPOINT)	+= endpoint/
> +obj-$(CONFIG_PCI_CADENCE)	+= cadence/
> +# PCI dwc controller drivers
> +obj-y				+= dwc/
> ---8<---------------------------------------------------------------------------
> 
> I admit it doesn't solve the ordering issue but at least it cleans some mess
> in drivers/Makefile. 

I like this a lot.  I think the only things that would actually
require the ifdef are the generic symbol things (CONFIG_PROC_FS,
CONFIG_SYSFS, CONFIG_OF), so we could do this for most of it:

  obj-$(CONFIG_PCI)		+= access.o bus.o probe.o ...
  ...
  obj-$(CONFIG_PCI_QUIRKS)	+= quirks.o
  obj-$(CONFIG_PCIEPORTBUS)	+= pcie/
  obj-$(CONFIG_PCI)		+= host/
  obj-$(CONFIG_PCI)		+= switch/

With possibly a few Kconfig dependency tweaks.

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 3/5] PCI: cadence: Add host driver for Cadence PCIe controller
  2017-11-28 20:41   ` Bjorn Helgaas
                       ` (2 preceding siblings ...)
  2017-11-29 14:14     ` Lorenzo Pieralisi
@ 2017-12-01 10:37     ` Cyrille Pitchen
  2017-12-01 16:20       ` Lorenzo Pieralisi
  3 siblings, 1 reply; 39+ messages in thread
From: Cyrille Pitchen @ 2017-12-01 10:37 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: bhelgaas, kishon, lorenzo.pieralisi, linux-pci, adouglas,
	stelford, dgary, kgopi, eandrews, thomas.petazzoni, sureshp,
	nsekhar, linux-kernel, robh, devicetree

Hi Bjorn,

Le 28/11/2017 à 21:41, Bjorn Helgaas a écrit :
> On Thu, Nov 23, 2017 at 04:01:48PM +0100, Cyrille Pitchen wrote:
>> This patch adds support to the Cadence PCIe controller in host mode.
>>
>> Signed-off-by: Cyrille Pitchen <cyrille.pitchen@free-electrons.com>
>> ---
>>  drivers/Makefile                        |   1 +
>>  drivers/pci/Kconfig                     |   1 +
>>  drivers/pci/cadence/Kconfig             |  24 ++
>>  drivers/pci/cadence/Makefile            |   2 +
>>  drivers/pci/cadence/pcie-cadence-host.c | 425 ++++++++++++++++++++++++++++++++
>>  drivers/pci/cadence/pcie-cadence.c      | 110 +++++++++
>>  drivers/pci/cadence/pcie-cadence.h      | 325 ++++++++++++++++++++++++
>>  7 files changed, 888 insertions(+)
>>  create mode 100644 drivers/pci/cadence/Kconfig
>>  create mode 100644 drivers/pci/cadence/Makefile
>>  create mode 100644 drivers/pci/cadence/pcie-cadence-host.c
>>  create mode 100644 drivers/pci/cadence/pcie-cadence.c
>>  create mode 100644 drivers/pci/cadence/pcie-cadence.h
> 
> I prefer a single file per driver.  I assume you're anticipating
> something like dwc, where the DesignWare core is incorporated into
> several devices in slightly different ways.  But it doesn't look like
> that's here yet, and personally, I'd rather split out the required
> things when they actually become required, not ahead of time.
>

The source code in pcie-cadence.c is shared between pcie-cadence-host.c
(Root Complex mode) and pcie-cadence-ep.c (Endpoint mode), the second
driver of this series.

Taking other comments into accounts, I will move endpoint only related
stuff in the pcie-cadence.{c,h} files from this patch to the endpoint
patch.

Otherwise your right, I expect this Cadence PCIe core to be embedded inside
several vendor SoCs but I thought I could handle this as much as possible
using mainly DT properties and/or the 'struct cdns_pcie_*_data' associated
to the DT 'compatible' strings. We will have to wait for those SoCs to be
released to know whether these two solutions are  enough or if we will need
dedicated files anyway.
 
>> diff --git a/drivers/Makefile b/drivers/Makefile
>> index 1d034b680431..27bdd98784d9 100644
>> --- a/drivers/Makefile
>> +++ b/drivers/Makefile
>> @@ -18,6 +18,7 @@ obj-y				+= pwm/
>>  
>>  obj-$(CONFIG_PCI)		+= pci/
>>  obj-$(CONFIG_PCI_ENDPOINT)	+= pci/endpoint/
>> +obj-$(CONFIG_PCI_CADENCE)	+= pci/cadence/
> 
> I can't remember why we added CONFIG_PCI_ENDPOINT here instead of in
> drivers/pci/Makefile.  Is there any way to move both CONFIG_PCI_ENDPOINT
> and CONFIG_PCI_CADENCE into drivers/pci/Makefile so this is better
> encapsulated?
>

I will work on the solution I have proposed in another reply since it seems
to be OK for you :)

>>  # PCI dwc controller drivers
>>  obj-y				+= pci/dwc/
>> ...
> 
>> + * struct cdns_pcie_rc_data - hardware specific data
> 
> "cdns" is a weird abbreviation for "Cadence", since "Cadence" doesn't
> contain an "s".
> 
>> ...
>> +static int cdns_pcie_parse_request_of_pci_ranges(struct device *dev,
>> +						 struct list_head *resources,
>> +						 struct resource **bus_range)
>> +{
>> +	int err, res_valid = 0;
>> +	struct device_node *np = dev->of_node;
>> +	resource_size_t iobase;
>> +	struct resource_entry *win, *tmp;
>> +
>> +	err = of_pci_get_host_bridge_resources(np, 0, 0xff, resources, &iobase);
>> +	if (err)
>> +		return err;
>> +
>> +	err = devm_request_pci_bus_resources(dev, resources);
>> +	if (err)
>> +		return err;
>> +
>> +	resource_list_for_each_entry_safe(win, tmp, resources) {
>> +		struct resource *res = win->res;
>> +
>> +		switch (resource_type(res)) {
>> +		case IORESOURCE_IO:
>> +			err = pci_remap_iospace(res, iobase);
>> +			if (err) {
>> +				dev_warn(dev, "error %d: failed to map resource %pR\n",
>> +					 err, res);
>> +				resource_list_destroy_entry(win);
>> +			}
>> +			break;
>> +		case IORESOURCE_MEM:
>> +			res_valid |= !(res->flags & IORESOURCE_PREFETCH);
>> +			break;
>> +		case IORESOURCE_BUS:
>> +			*bus_range = res;
>> +			break;
>> +		}
>> +	}
>> +
>> +	if (res_valid)
>> +		return 0;
>> +
>> +	dev_err(dev, "non-prefetchable memory resource required\n");
>> +	return -EINVAL;
>> +}
> 
> The code above is starting to look awfully familiar.  I wonder if it's
> time to think about some PCI-internal interface that can encapsulate
> this.  In this case, there's really nothing Cadence-specific here.
> There are other callers where there *is* vendor-specific code, but
> possibly that could be handled by returning pointers to bus number,
> I/O port, and MMIO resources so the caller could do the
> vendor-specific stuff?
> 
> Bjorn
> 

I am listing:
- gen_pci_parse_request_of_pci_ranges() [1]
- cdns_pcie_parse_request_of_pci_ranges() - same as [1]
- advk_pcie_parse_request_of_pci_ranges()
- altera_pcie_parse_request_of_pci_ranges()
- versatile_pci_parse_request_of_pci_ranges()
- rcar_pcie_parse_request_of_pci_ranges()

Then what about doing something like that:

---8<--------------------------------------------------------------------------
diff --git a/drivers/pci/host/pci-host-common.c b/drivers/pci/host/pci-host-common.c
index 44a47d4f0b8f..2413c5d83cbd 100644
--- a/drivers/pci/host/pci-host-common.c
+++ b/drivers/pci/host/pci-host-common.c
@@ -24,10 +24,19 @@
 #include <linux/pci-ecam.h>
 #include <linux/platform_device.h>
 
-static int gen_pci_parse_request_of_pci_ranges(struct device *dev,
-		       struct list_head *resources, struct resource **bus_range)
+struct pci_host_resource_parser {
+	int	(*get_resource)(void *userdata, struct resource_entry *win,
+				resource_size_t iobase);
+	int	(*finalize)(void *userdata);
+	void	(*abort)(void *userdata);
+};
+
+int pci_host_parse_request_of_pci_ranges(struct device *dev,
+				 struct list_head *resources,
+				 const struct pci_host_resource_parser *parser,
+				 void *userdata)
 {
-	int err, res_valid = 0;
+	int err;
 	struct device_node *np = dev->of_node;
 	resource_size_t iobase;
 	struct resource_entry *win, *tmp;
@@ -40,34 +49,94 @@ static int gen_pci_parse_request_of_pci_ranges(struct device *dev,
 	if (err)
 		return err;
 
-	resource_list_for_each_entry_safe(win, tmp, resources) {
-		struct resource *res = win->res;
-
-		switch (resource_type(res)) {
-		case IORESOURCE_IO:
-			err = pci_remap_iospace(res, iobase);
-			if (err) {
-				dev_warn(dev, "error %d: failed to map resource %pR\n",
-					 err, res);
-				resource_list_destroy_entry(win);
-			}
-			break;
-		case IORESOURCE_MEM:
-			res_valid |= !(res->flags & IORESOURCE_PREFETCH);
-			break;
-		case IORESOURCE_BUS:
-			*bus_range = res;
-			break;
+	if (parser && parser->get_resource) {
+		resource_list_for_each_entry_safe(win, tmp, resources) {
+			err = parser->get_resource(userdata, win, iobase);
+			if (err)
+				goto do_abort;
+		}
+	}
+
+	if (parser && parser->finalize)
+		return parser->finalize(userdata);
+
+	return 0;
+
+ do_abort:
+	if (parser && parser->abort)
+		parser->abort(userdata);
+
+	return err;
+}
+EXPORT_SYMBOL_GPL(pci_host_parse_request_of_pci_ranges);
+
+
+struct gen_pci_parser_context {
+	struct device	*dev;
+	struct resource	**bus_range;
+	int		res_valid;
+};
+
+static int gen_pci_resource_parser_get_resource(void *userdata,
+						struct resource_entry *win,
+						resource_size_t iobase)
+{
+	struct gen_pci_parser_context *ctx = userdata;
+	struct device *dev = ctx->dev;
+	struct resource *res = win->res;
+	int err;
+
+	switch (resource_type(res)) {
+	case IORESOURCE_IO:
+		err = pci_remap_iospace(res, iobase);
+		if (err) {
+			dev_warn(dev, "error %d: failed to map resource %pR\n",
+				 err, res);
+			resource_list_destroy_entry(win);
 		}
+		break;
+	case IORESOURCE_MEM:
+		ctx->res_valid |= !(res->flags & IORESOURCE_PREFETCH);
+		break;
+	case IORESOURCE_BUS:
+		*ctx->bus_range = res;
+		break;
 	}
 
-	if (res_valid)
+	return 0;
+}
+
+static int gen_pci_resource_parser_finalize(void *userdata)
+{
+	struct gen_pci_parser_context *ctx = userdata;
+	struct device *dev = ctx->dev;
+
+	if (ctx->res_valid)
 		return 0;
 
 	dev_err(dev, "non-prefetchable memory resource required\n");
 	return -EINVAL;
 }
 
+static const struct pci_host_resource_parser gen_pci_resource_parser = {
+	.get_resource	= gen_pci_resource_parser_get_resource,
+	.finalize	= gen_pci_resource_parser_finalize,
+};
+
+static int gen_pci_parse_request_of_pci_ranges(struct device *dev,
+		       struct list_head *resources, struct resource **bus_range)
+{
+	struct gen_pci_parser_context ctx;
+
+	ctx.dev = dev;
+	ctx.bus_range = bus_range;
+	ctx.res_valid = 0;
+
+	return pci_host_parse_request_of_pci_ranges(dev, resources,
+						    &gen_pci_resource_parser,
+						    &ctx);
+}
+
 static void gen_pci_unmap_cfg(void *ptr)
 {
 	pci_ecam_free((struct pci_config_window *)ptr);
---8<--------------------------------------------------------------------------

Best regards,

Cyrille

-- 
Cyrille Pitchen, Free Electrons
Embedded Linux and Kernel engineering
http://free-electrons.com

^ permalink raw reply related	[flat|nested] 39+ messages in thread

* Re: [PATCH 5/5] PCI: cadence: add EndPoint Controller driver for Cadence PCIe controller
  2017-11-23 15:01 ` [PATCH 5/5] PCI: cadence: add EndPoint Controller driver for Cadence PCIe controller Cyrille Pitchen
@ 2017-12-01 12:20   ` Lorenzo Pieralisi
  2017-12-04 14:56     ` Cyrille Pitchen
  2017-12-05  9:19     ` Kishon Vijay Abraham I
  0 siblings, 2 replies; 39+ messages in thread
From: Lorenzo Pieralisi @ 2017-12-01 12:20 UTC (permalink / raw)
  To: Cyrille Pitchen
  Cc: bhelgaas, kishon, linux-pci, adouglas, stelford, dgary, kgopi,
	eandrews, thomas.petazzoni, sureshp, nsekhar, linux-kernel, robh,
	devicetree

On Thu, Nov 23, 2017 at 04:01:50PM +0100, Cyrille Pitchen wrote:
> This patch adds support to the Cadence PCIe controller in endpoint mode.

Please add a brief description to the log to describe the most salient
features.

> Signed-off-by: Cyrille Pitchen <cyrille.pitchen@free-electrons.com>
> ---
>  drivers/pci/cadence/Kconfig           |   9 +
>  drivers/pci/cadence/Makefile          |   1 +
>  drivers/pci/cadence/pcie-cadence-ep.c | 553 ++++++++++++++++++++++++++++++++++
>  3 files changed, 563 insertions(+)
>  create mode 100644 drivers/pci/cadence/pcie-cadence-ep.c
> 
> diff --git a/drivers/pci/cadence/Kconfig b/drivers/pci/cadence/Kconfig
> index 120306cae2aa..b2e6af71f39e 100644
> --- a/drivers/pci/cadence/Kconfig
> +++ b/drivers/pci/cadence/Kconfig
> @@ -21,4 +21,13 @@ config PCIE_CADENCE_HOST
>  	  mode. This PCIe controller may be embedded into many different vendors
>  	  SoCs.
>  
> +config PCIE_CADENCE_EP
> +	bool "Cadence PCIe endpoint controller"
> +	depends on PCI_ENDPOINT
> +	select PCIE_CADENCE
> +	help
> +	  Say Y here if you want to support the Cadence PCIe  controller in
> +	  endpoint mode. This PCIe controller may be embedded into many
> +	  different vendors SoCs.
> +
>  endif # PCI_CADENCE
> diff --git a/drivers/pci/cadence/Makefile b/drivers/pci/cadence/Makefile
> index d57d192d2595..61e9c8d6839d 100644
> --- a/drivers/pci/cadence/Makefile
> +++ b/drivers/pci/cadence/Makefile
> @@ -1,2 +1,3 @@
>  obj-$(CONFIG_PCIE_CADENCE) += pcie-cadence.o
>  obj-$(CONFIG_PCIE_CADENCE_HOST) += pcie-cadence-host.o
> +obj-$(CONFIG_PCIE_CADENCE_EP) += pcie-cadence-ep.o
> diff --git a/drivers/pci/cadence/pcie-cadence-ep.c b/drivers/pci/cadence/pcie-cadence-ep.c
> new file mode 100644
> index 000000000000..a1d761101a9c
> --- /dev/null
> +++ b/drivers/pci/cadence/pcie-cadence-ep.c
> @@ -0,0 +1,553 @@
> +/*
> + * Cadence PCIe host controller driver.

You should update this comment.

> + *
> + * Copyright (c) 2017 Cadence
> + *
> + * Author: Cyrille Pitchen <cyrille.pitchen@free-electrons.com>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program.  If not, see <http://www.gnu.org/licenses/>.
> + */
> +#include <linux/kernel.h>
> +#include <linux/of.h>
> +#include <linux/pci-epc.h>
> +#include <linux/platform_device.h>
> +#include <linux/pm_runtime.h>
> +#include <linux/sizes.h>
> +#include <linux/delay.h>

Nit: alphabetical order.

> +#include "pcie-cadence.h"
> +
> +#define CDNS_PCIE_EP_MIN_APERTURE		128	/* 128 bytes */
> +#define CDNS_PCIE_EP_IRQ_PCI_ADDR_NONE		0x1
> +#define CDNS_PCIE_EP_IRQ_PCI_ADDR_LEGACY	0x3
> +
> +/**
> + * struct cdns_pcie_ep_data - hardware specific data
> + * @max_regions: maximum nmuber of regions supported by hardware

s/nmuber/number

> + */
> +struct cdns_pcie_ep_data {
> +	size_t				max_regions;
> +};
> +
> +/**
> + * struct cdns_pcie_ep - private data for this PCIe endpoint controller driver
> + * @pcie: Cadence PCIe controller
> + * @data: pointer to a 'struct cdns_pcie_data'
> + */
> +struct cdns_pcie_ep {
> +	struct cdns_pcie		pcie;
> +	const struct cdns_pcie_ep_data	*data;
> +	struct pci_epc			*epc;
> +	unsigned long			ob_region_map;
> +	phys_addr_t			*ob_addr;
> +	phys_addr_t			irq_phys_addr;
> +	void __iomem			*irq_cpu_addr;
> +	u64				irq_pci_addr;
> +	u8				irq_pending;
> +};
> +
> +static int cdns_pcie_ep_write_header(struct pci_epc *epc,
> +				     struct pci_epf_header *hdr)
> +{
> +	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
> +	struct cdns_pcie *pcie = &ep->pcie;
> +	u8 fn = 0;
> +
> +	if (fn == 0) {

I think there is some code to retrieve fn missing here.

> +		u32 id = CDNS_PCIE_LM_ID_VENDOR(hdr->vendorid) |
> +			 CDNS_PCIE_LM_ID_SUBSYS(hdr->subsys_vendor_id);
> +		cdns_pcie_writel(pcie, CDNS_PCIE_LM_ID, id);
> +	}
> +	cdns_pcie_ep_fn_writew(pcie, fn, PCI_DEVICE_ID, hdr->deviceid);
> +	cdns_pcie_ep_fn_writeb(pcie, fn, PCI_REVISION_ID, hdr->revid);
> +	cdns_pcie_ep_fn_writeb(pcie, fn, PCI_CLASS_PROG, hdr->progif_code);
> +	cdns_pcie_ep_fn_writew(pcie, fn, PCI_CLASS_DEVICE,
> +			       hdr->subclass_code | hdr->baseclass_code << 8);
> +	cdns_pcie_ep_fn_writeb(pcie, fn, PCI_CACHE_LINE_SIZE,
> +			       hdr->cache_line_size);
> +	cdns_pcie_ep_fn_writew(pcie, fn, PCI_SUBSYSTEM_ID, hdr->subsys_id);
> +	cdns_pcie_ep_fn_writeb(pcie, fn, PCI_INTERRUPT_PIN, hdr->interrupt_pin);
> +
> +	return 0;
> +}
> +
> +static int cdns_pcie_ep_set_bar(struct pci_epc *epc, enum pci_barno bar,
> +				dma_addr_t bar_phys, size_t size, int flags)
> +{
> +	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
> +	struct cdns_pcie *pcie = &ep->pcie;
> +	u32 addr0, addr1, reg, cfg, b, aperture, ctrl;
> +	u8 fn = 0;
> +	u64 sz;
> +
> +	/* BAR size is 2^(aperture + 7) */
> +	sz = max_t(size_t, size, CDNS_PCIE_EP_MIN_APERTURE);
> +	sz = 1ULL << fls64(sz - 1);
> +	aperture = ilog2(sz) - 7; /* 128B -> 0, 256B -> 1, 512B -> 2, ... */
> +
> +	if ((flags & PCI_BASE_ADDRESS_SPACE) == PCI_BASE_ADDRESS_SPACE_IO) {
> +		ctrl = CDNS_PCIE_LM_BAR_CFG_CTRL_IO_32BITS;
> +	} else {
> +		bool is_prefetch = !!(flags & PCI_BASE_ADDRESS_MEM_PREFETCH);
> +		bool is_64bits = sz > SZ_2G;
> +
> +		if (is_64bits && (bar & 1))
> +			return -EINVAL;
> +
> +		switch (is_64bits << 1 | is_prefetch) {

I would not mind implementing this as a nested if-else, I am not a big
fan of using bool this way.

> +		case 0:
> +			ctrl = CDNS_PCIE_LM_BAR_CFG_CTRL_MEM_32BITS;
> +			break;
> +
> +		case 1:
> +			ctrl = CDNS_PCIE_LM_BAR_CFG_CTRL_PREFETCH_MEM_32BITS;
> +			break;
> +
> +		case 2:
> +			ctrl = CDNS_PCIE_LM_BAR_CFG_CTRL_MEM_64BITS;
> +			break;
> +
> +		case 3:
> +			ctrl = CDNS_PCIE_LM_BAR_CFG_CTRL_PREFETCH_MEM_64BITS;
> +			break;
> +		}
> +	}
> +
> +	addr0 = lower_32_bits(bar_phys);
> +	addr1 = upper_32_bits(bar_phys);
> +	cdns_pcie_writel(pcie, CDNS_PCIE_AT_IB_EP_FUNC_BAR_ADDR0(fn, bar),
> +			 addr0);
> +	cdns_pcie_writel(pcie, CDNS_PCIE_AT_IB_EP_FUNC_BAR_ADDR1(fn, bar),
> +			 addr1);

Is fn always 0 ?

> +	if (bar < BAR_4) {
> +		reg = CDNS_PCIE_LM_EP_FUNC_BAR_CFG0(fn);
> +		b = bar;
> +	} else {
> +		reg = CDNS_PCIE_LM_EP_FUNC_BAR_CFG1(fn);
> +		b = bar - BAR_4;
> +	}
> +
> +	cfg = cdns_pcie_readl(pcie, reg);
> +	cfg &= ~(CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_APERTURE_MASK(b) |
> +		 CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(b));
> +	cfg |= (CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_APERTURE(b, aperture) |
> +		CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_CTRL(b, ctrl));
> +	cdns_pcie_writel(pcie, reg, cfg);
> +
> +	return 0;
> +}
> +
> +static void cdns_pcie_ep_clear_bar(struct pci_epc *epc, enum pci_barno bar)
> +{
> +	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
> +	struct cdns_pcie *pcie = &ep->pcie;
> +	u32 reg, cfg, b, ctrl;
> +	u8 fn = 0;
> +
> +	if (bar < BAR_4) {
> +		reg = CDNS_PCIE_LM_EP_FUNC_BAR_CFG0(fn);
> +		b = bar;
> +	} else {
> +		reg = CDNS_PCIE_LM_EP_FUNC_BAR_CFG1(fn);
> +		b = bar - BAR_4;
> +	}
> +
> +	ctrl = CDNS_PCIE_LM_BAR_CFG_CTRL_DISABLED;
> +	cfg = cdns_pcie_readl(pcie, reg);
> +	cfg &= ~(CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_APERTURE_MASK(b) |
> +		 CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(b));
> +	cfg |= CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(ctrl);
> +	cdns_pcie_writel(pcie, reg, cfg);
> +
> +	cdns_pcie_writel(pcie, CDNS_PCIE_AT_IB_EP_FUNC_BAR_ADDR0(fn, bar), 0);
> +	cdns_pcie_writel(pcie, CDNS_PCIE_AT_IB_EP_FUNC_BAR_ADDR1(fn, bar), 0);
> +}
> +
> +static int cdns_pcie_ep_map_addr(struct pci_epc *epc, phys_addr_t addr,
> +				 u64 pci_addr, size_t size)
> +{
> +	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
> +	struct cdns_pcie *pcie = &ep->pcie;
> +	u32 r;
> +
> +	r = find_first_zero_bit(&ep->ob_region_map, sizeof(ep->ob_region_map));

Second argument must be in bits not bytes.

https://marc.info/?l=linux-pci&m=151179781225513&w=2

> +	if (r >= ep->data->max_regions - 1) {
> +		dev_err(&epc->dev, "no free outbound region\n");
> +		return -EINVAL;
> +	}
> +
> +	cdns_pcie_set_outbound_region(pcie, r, false, addr, pci_addr, size);
> +
> +	set_bit(r, &ep->ob_region_map);
> +	ep->ob_addr[r] = addr;
> +
> +	return 0;
> +}
> +
> +static void cdns_pcie_ep_unmap_addr(struct pci_epc *epc, phys_addr_t addr)
> +{
> +	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
> +	struct cdns_pcie *pcie = &ep->pcie;
> +	u32 r;
> +
> +	for (r = 0; r < ep->data->max_regions - 1; r++)
> +		if (ep->ob_addr[r] == addr)
> +			break;
> +
> +	if (r >= ep->data->max_regions - 1)

== ?

> +		return;
> +
> +	cdns_pcie_reset_outbound_region(pcie, r);
> +
> +	ep->ob_addr[r] = 0;
> +	clear_bit(r, &ep->ob_region_map);
> +}
> +
> +static int cdns_pcie_ep_set_msi(struct pci_epc *epc, u8 mmc)
> +{
> +	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
> +	struct cdns_pcie *pcie = &ep->pcie;
> +	u32 cap = CDNS_PCIE_EP_FUNC_MSI_CAP_OFFSET;
> +	u16 flags;
> +	u8 fn = 0;
> +
> +	/* Validate the ID of the MSI Capability structure. */
> +	if (cdns_pcie_ep_fn_readb(pcie, fn, cap) != PCI_CAP_ID_MSI)
> +		return -EINVAL;
> +
> +	/*
> +	 * Set the Multiple Message Capable bitfield into the Message Control
> +	 * register.
> +	 */
> +	flags = cdns_pcie_ep_fn_readw(pcie, fn, cap + PCI_MSI_FLAGS);
> +	flags = (flags & ~PCI_MSI_FLAGS_QMASK) | (mmc << 1);
> +	flags |= PCI_MSI_FLAGS_64BIT;
> +	flags &= ~PCI_MSI_FLAGS_MASKBIT;
> +	cdns_pcie_ep_fn_writew(pcie, fn, cap + PCI_MSI_FLAGS, flags);
> +
> +	return 0;
> +}
> +
> +static int cdns_pcie_ep_get_msi(struct pci_epc *epc)
> +{
> +	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
> +	struct cdns_pcie *pcie = &ep->pcie;
> +	u32 cap = CDNS_PCIE_EP_FUNC_MSI_CAP_OFFSET;
> +	u16 flags, mmc, mme;
> +	u8 fn = 0;
> +
> +	/* Validate the ID of the MSI Capability structure. */
> +	if (cdns_pcie_ep_fn_readb(pcie, fn, cap) != PCI_CAP_ID_MSI)
> +		return -EINVAL;
> +
> +	/* Validate that the MSI feature is actually enabled. */
> +	flags = cdns_pcie_ep_fn_readw(pcie, fn, cap + PCI_MSI_FLAGS);
> +	if (!(flags & PCI_MSI_FLAGS_ENABLE))
> +		return -EINVAL;
> +
> +	/*
> +	 * Get the Multiple Message Enable bitfield from the Message Control
> +	 * register.
> +	 */
> +	mmc = (flags & PCI_MSI_FLAGS_QMASK) >> 1;
> +	mme = (flags & PCI_MSI_FLAGS_QSIZE) >> 4;
> +	if (mme > mmc)
> +		mme = mmc;
> +	if (mme > 5)
> +		mme = 5;

You should comment on what this 5 means and why it is fine to cap mme.

> +
> +	return mme;
> +}
> +
> +static void cdns_pcie_ep_assert_intx(struct cdns_pcie_ep *ep, u8 fn,
> +				     u8 intx, bool is_asserted)
> +{
> +	struct cdns_pcie *pcie = &ep->pcie;
> +	u32 r = ep->data->max_regions - 1;
> +	u32 offset;
> +	u16 status;
> +	u8 msg_code;
> +
> +	intx &= 3;
> +
> +	/* Set the outbound region if needed. */
> +	if (unlikely(ep->irq_pci_addr != CDNS_PCIE_EP_IRQ_PCI_ADDR_LEGACY)) {
> +		/* Last region was reserved for IRQ writes. */
> +		cdns_pcie_set_outbound_region_for_normal_msg(pcie, r,
> +							     ep->irq_phys_addr);
> +		ep->irq_pci_addr = CDNS_PCIE_EP_IRQ_PCI_ADDR_LEGACY;
> +	}
> +
> +	if (is_asserted) {
> +		ep->irq_pending |= BIT(intx);
> +		msg_code = MSG_CODE_ASSERT_INTA + intx;
> +	} else {
> +		ep->irq_pending &= ~BIT(intx);
> +		msg_code = MSG_CODE_DEASSERT_INTA + intx;
> +	}
> +
> +	status = cdns_pcie_ep_fn_readw(pcie, fn, PCI_STATUS);
> +	if (((status & PCI_STATUS_INTERRUPT) != 0) ^ (ep->irq_pending != 0)) {
> +		status ^= PCI_STATUS_INTERRUPT;
> +		cdns_pcie_ep_fn_writew(pcie, fn, PCI_STATUS, status);
> +	}
> +
> +	offset = CDNS_PCIE_NORMAL_MSG_ROUTING(MSG_ROUTING_LOCAL) |
> +		 CDNS_PCIE_NORMAL_MSG_CODE(msg_code) |
> +		 CDNS_PCIE_MSG_NO_DATA;
> +	writel(0, ep->irq_cpu_addr + offset);
> +}
> +
> +static int cdns_pcie_ep_send_legacy_irq(struct cdns_pcie_ep *ep, u8 fn, u8 intx)
> +{
> +	u16 cmd;
> +
> +	cmd = cdns_pcie_ep_fn_readw(&ep->pcie, fn, PCI_COMMAND);
> +	if (cmd & PCI_COMMAND_INTX_DISABLE)
> +		return -EINVAL;
> +
> +	cdns_pcie_ep_assert_intx(ep, fn, intx, true);
> +	mdelay(1);

Add a comment please to explain the mdelay value.

> +	cdns_pcie_ep_assert_intx(ep, fn, intx, false);
> +	return 0;
> +}
> +
> +static int cdns_pcie_ep_raise_irq(struct pci_epc *epc,
> +				  enum pci_epc_irq_type type, u8 interrupt_num)
> +{
> +	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
> +	struct cdns_pcie *pcie = &ep->pcie;
> +	u32 cap = CDNS_PCIE_EP_FUNC_MSI_CAP_OFFSET;
> +	u16 flags, mmc, mme, data, data_mask;
> +	u8 msi_count;
> +	u64 pci_addr, pci_addr_mask = 0xff;
> +	u8 fn = 0;
> +
> +	/* Handle legacy IRQ. */
> +	if (type == PCI_EPC_IRQ_LEGACY)
> +		return cdns_pcie_ep_send_legacy_irq(ep, fn, 0);
> +
> +	/* Otherwise MSI. */
> +	if (type != PCI_EPC_IRQ_MSI)
> +		return -EINVAL;
> +
> +	/* Check whether the MSI feature has been enabled by the PCI host. */
> +	flags = cdns_pcie_ep_fn_readw(pcie, fn, cap + PCI_MSI_FLAGS);
> +	if (!(flags & PCI_MSI_FLAGS_ENABLE))
> +		return -EINVAL;
> +
> +	/* Get the number of enabled MSIs */
> +	mmc = (flags & PCI_MSI_FLAGS_QMASK) >> 1;
> +	mme = (flags & PCI_MSI_FLAGS_QSIZE) >> 4;
> +	if (mme > mmc)
> +		mme = mmc;
> +	if (mme > 5)
> +		mme = 5;

Same comment as above.

> +	msi_count = 1 << mme;
> +	if (!interrupt_num || interrupt_num > msi_count)
> +		return -EINVAL;
> +
> +	/* Compute the data value to be written. */
> +	data_mask = msi_count - 1;
> +	data = cdns_pcie_ep_fn_readw(pcie, fn, cap + PCI_MSI_DATA_64);
> +	data = (data & ~data_mask) | ((interrupt_num - 1) & data_mask);
> +
> +	/* Get the PCI address where to write the data into. */
> +	pci_addr = cdns_pcie_ep_fn_readl(pcie, fn, cap + PCI_MSI_ADDRESS_HI);
> +	pci_addr <<= 32;
> +	pci_addr |= cdns_pcie_ep_fn_readl(pcie, fn, cap + PCI_MSI_ADDRESS_LO);
> +	pci_addr &= GENMASK_ULL(63, 2);
> +
> +	/* Set the outbound region if needed. */
> +	if (unlikely(ep->irq_pci_addr != pci_addr)) {
> +		/* Last region was reserved for IRQ writes. */
> +		cdns_pcie_set_outbound_region(pcie, ep->data->max_regions - 1,
> +					      false,
> +					      ep->irq_phys_addr,
> +					      pci_addr & ~pci_addr_mask,
> +					      pci_addr_mask + 1);
> +		ep->irq_pci_addr = pci_addr;
> +	}
> +	writew(data, ep->irq_cpu_addr + (pci_addr & pci_addr_mask));
> +
> +	return 0;
> +}
> +
> +static int cdns_pcie_ep_start(struct pci_epc *epc)
> +{
> +	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
> +	struct cdns_pcie *pcie = &ep->pcie;
> +	struct pci_epf *epf;
> +	u32 cfg;
> +	u8 fn = 0;
> +
> +	/* Enable this endpoint function. */
> +	cfg = cdns_pcie_readl(pcie, CDNS_PCIE_LM_EP_FUNC_CFG);
> +	cfg |= BIT(fn);
> +	cdns_pcie_writel(pcie, CDNS_PCIE_LM_EP_FUNC_CFG, cfg);
> +
> +	/*
> +	 * Already linked-up: don't call directly pci_epc_linkup() because we've
> +	 * already locked the epc->lock.
> +	 */
> +	list_for_each_entry(epf, &epc->pci_epf, list)
> +		pci_epf_linkup(epf);
> +
> +	return 0;
> +}
> +
> +static void cdns_pcie_ep_stop(struct pci_epc *epc)
> +{
> +	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
> +	struct cdns_pcie *pcie = &ep->pcie;
> +	u32 cfg;
> +	u8 fn = 0;
> +
> +	/* Disable this endpoint function (function 0 can't be disabled). */

I do not understand this comment and how it applies to the code,
in other words fn is always 0 here (so it can't be disabled)
I do not understand what this code is there for.

> +	cfg = cdns_pcie_readl(pcie, CDNS_PCIE_LM_EP_FUNC_CFG);
> +	cfg &= ~BIT(fn);
> +	cdns_pcie_writel(pcie, CDNS_PCIE_LM_EP_FUNC_CFG, cfg);
> +}
> +
> +static const struct pci_epc_ops cdns_pcie_epc_ops = {
> +	.write_header	= cdns_pcie_ep_write_header,
> +	.set_bar	= cdns_pcie_ep_set_bar,
> +	.clear_bar	= cdns_pcie_ep_clear_bar,
> +	.map_addr	= cdns_pcie_ep_map_addr,
> +	.unmap_addr	= cdns_pcie_ep_unmap_addr,
> +	.set_msi	= cdns_pcie_ep_set_msi,
> +	.get_msi	= cdns_pcie_ep_get_msi,
> +	.raise_irq	= cdns_pcie_ep_raise_irq,
> +	.start		= cdns_pcie_ep_start,
> +	.stop		= cdns_pcie_ep_stop,
> +};
> +
> +static const struct cdns_pcie_ep_data cdns_pcie_ep_data = {
> +	.max_regions	= 16,
> +};

As I mentioned in patch 3, should this be set-up with DT ?

Thanks,
Lorenzo

> +
> +static const struct of_device_id cdns_pcie_ep_of_match[] = {
> +	{ .compatible = "cdns,cdns-pcie-ep",
> +	  .data = &cdns_pcie_ep_data },
> +
> +	{ },
> +};
> +
> +static int cdns_pcie_ep_probe(struct platform_device *pdev)
> +{
> +	struct device *dev = &pdev->dev;
> +	struct device_node *np = dev->of_node;
> +	const struct of_device_id *of_id;
> +	struct cdns_pcie_ep *ep;
> +	struct cdns_pcie *pcie;
> +	struct pci_epc *epc;
> +	struct resource *res;
> +	size_t max_regions;
> +	int ret;
> +
> +	ep = devm_kzalloc(dev, sizeof(*ep), GFP_KERNEL);
> +	if (!ep)
> +		return -ENOMEM;
> +
> +	platform_set_drvdata(pdev, ep);
> +
> +	pcie = &ep->pcie;
> +	pcie->is_rc = false;
> +
> +	of_id = of_match_node(cdns_pcie_ep_of_match, np);
> +	ep->data = (const struct cdns_pcie_ep_data *)of_id->data;
> +
> +	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "reg");
> +	pcie->reg_base = devm_ioremap_resource(dev, res);
> +	if (IS_ERR(pcie->reg_base)) {
> +		dev_err(dev, "missing \"reg\"\n");
> +		return PTR_ERR(pcie->reg_base);
> +	}
> +
> +	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "mem");
> +	if (!res) {
> +		dev_err(dev, "missing \"mem\"\n");
> +		return -EINVAL;
> +	}
> +	pcie->mem_res = res;
> +
> +	max_regions = ep->data->max_regions;
> +	ep->ob_addr = devm_kzalloc(dev, max_regions * sizeof(*ep->ob_addr),
> +				   GFP_KERNEL);
> +	if (!ep->ob_addr)
> +		return -ENOMEM;
> +
> +	pm_runtime_enable(dev);
> +	ret = pm_runtime_get_sync(dev);
> +	if (ret < 0) {
> +		dev_err(dev, "pm_runtime_get_sync() failed\n");
> +		goto err_get_sync;
> +	}
> +
> +	/* Disable all but function 0 (anyway BIT(0) is hardwired to 1). */
> +	cdns_pcie_writel(pcie, CDNS_PCIE_LM_EP_FUNC_CFG, BIT(0));
> +
> +	epc = devm_pci_epc_create(dev, &cdns_pcie_epc_ops);
> +	if (IS_ERR(epc)) {
> +		dev_err(dev, "failed to create epc device\n");
> +		ret = PTR_ERR(epc);
> +		goto err_init;
> +	}
> +
> +	ep->epc = epc;
> +	epc_set_drvdata(epc, ep);
> +
> +	ret = of_property_read_u8(np, "max-functions", &epc->max_functions);
> +	if (ret < 0)
> +		epc->max_functions = 1;
> +
> +	ret = pci_epc_mem_init(epc, pcie->mem_res->start,
> +			       resource_size(pcie->mem_res));
> +	if (ret < 0) {
> +		dev_err(dev, "failed to initialize the memory space\n");
> +		goto err_init;
> +	}
> +
> +	ep->irq_cpu_addr = pci_epc_mem_alloc_addr(epc, &ep->irq_phys_addr,
> +						  SZ_128K);
> +	if (!ep->irq_cpu_addr) {
> +		dev_err(dev, "failed to reserve memory space for MSI\n");
> +		ret = -ENOMEM;
> +		goto free_epc_mem;
> +	}
> +	ep->irq_pci_addr = CDNS_PCIE_EP_IRQ_PCI_ADDR_NONE;
> +
> +	return 0;
> +
> + free_epc_mem:
> +	pci_epc_mem_exit(epc);
> +
> + err_init:
> +	pm_runtime_put_sync(dev);
> +
> + err_get_sync:
> +	pm_runtime_disable(dev);
> +
> +	return ret;
> +}
> +
> +static struct platform_driver cdns_pcie_ep_driver = {
> +	.driver = {
> +		.name = "cdns-pcie-ep",
> +		.of_match_table = cdns_pcie_ep_of_match,
> +	},
> +	.probe = cdns_pcie_ep_probe,
> +};
> +builtin_platform_driver(cdns_pcie_ep_driver);
> -- 
> 2.11.0
> 

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 3/5] PCI: cadence: Add host driver for Cadence PCIe controller
  2017-12-01 10:37     ` Cyrille Pitchen
@ 2017-12-01 16:20       ` Lorenzo Pieralisi
  0 siblings, 0 replies; 39+ messages in thread
From: Lorenzo Pieralisi @ 2017-12-01 16:20 UTC (permalink / raw)
  To: Cyrille Pitchen
  Cc: Bjorn Helgaas, bhelgaas, kishon, linux-pci, adouglas, stelford,
	dgary, kgopi, eandrews, thomas.petazzoni, sureshp, nsekhar,
	linux-kernel, robh, devicetree

On Fri, Dec 01, 2017 at 11:37:49AM +0100, Cyrille Pitchen wrote:
> Hi Bjorn,
> 
> Le 28/11/2017 à 21:41, Bjorn Helgaas a écrit :
> > On Thu, Nov 23, 2017 at 04:01:48PM +0100, Cyrille Pitchen wrote:
> >> This patch adds support to the Cadence PCIe controller in host mode.
> >>
> >> Signed-off-by: Cyrille Pitchen <cyrille.pitchen@free-electrons.com>
> >> ---
> >>  drivers/Makefile                        |   1 +
> >>  drivers/pci/Kconfig                     |   1 +
> >>  drivers/pci/cadence/Kconfig             |  24 ++
> >>  drivers/pci/cadence/Makefile            |   2 +
> >>  drivers/pci/cadence/pcie-cadence-host.c | 425 ++++++++++++++++++++++++++++++++
> >>  drivers/pci/cadence/pcie-cadence.c      | 110 +++++++++
> >>  drivers/pci/cadence/pcie-cadence.h      | 325 ++++++++++++++++++++++++
> >>  7 files changed, 888 insertions(+)
> >>  create mode 100644 drivers/pci/cadence/Kconfig
> >>  create mode 100644 drivers/pci/cadence/Makefile
> >>  create mode 100644 drivers/pci/cadence/pcie-cadence-host.c
> >>  create mode 100644 drivers/pci/cadence/pcie-cadence.c
> >>  create mode 100644 drivers/pci/cadence/pcie-cadence.h
> > 
> > I prefer a single file per driver.  I assume you're anticipating
> > something like dwc, where the DesignWare core is incorporated into
> > several devices in slightly different ways.  But it doesn't look like
> > that's here yet, and personally, I'd rather split out the required
> > things when they actually become required, not ahead of time.
> >
> 
> The source code in pcie-cadence.c is shared between pcie-cadence-host.c
> (Root Complex mode) and pcie-cadence-ep.c (Endpoint mode), the second
> driver of this series.
> 
> Taking other comments into accounts, I will move endpoint only related
> stuff in the pcie-cadence.{c,h} files from this patch to the endpoint
> patch.
> 
> Otherwise your right, I expect this Cadence PCIe core to be embedded inside
> several vendor SoCs but I thought I could handle this as much as possible
> using mainly DT properties and/or the 'struct cdns_pcie_*_data' associated
> to the DT 'compatible' strings. We will have to wait for those SoCs to be
> released to know whether these two solutions are  enough or if we will need
> dedicated files anyway.
>  
> >> diff --git a/drivers/Makefile b/drivers/Makefile
> >> index 1d034b680431..27bdd98784d9 100644
> >> --- a/drivers/Makefile
> >> +++ b/drivers/Makefile
> >> @@ -18,6 +18,7 @@ obj-y				+= pwm/
> >>  
> >>  obj-$(CONFIG_PCI)		+= pci/
> >>  obj-$(CONFIG_PCI_ENDPOINT)	+= pci/endpoint/
> >> +obj-$(CONFIG_PCI_CADENCE)	+= pci/cadence/
> > 
> > I can't remember why we added CONFIG_PCI_ENDPOINT here instead of in
> > drivers/pci/Makefile.  Is there any way to move both CONFIG_PCI_ENDPOINT
> > and CONFIG_PCI_CADENCE into drivers/pci/Makefile so this is better
> > encapsulated?
> >
> 
> I will work on the solution I have proposed in another reply since it seems
> to be OK for you :)
> 
> >>  # PCI dwc controller drivers
> >>  obj-y				+= pci/dwc/
> >> ...
> > 
> >> + * struct cdns_pcie_rc_data - hardware specific data
> > 
> > "cdns" is a weird abbreviation for "Cadence", since "Cadence" doesn't
> > contain an "s".
> > 
> >> ...
> >> +static int cdns_pcie_parse_request_of_pci_ranges(struct device *dev,
> >> +						 struct list_head *resources,
> >> +						 struct resource **bus_range)
> >> +{
> >> +	int err, res_valid = 0;
> >> +	struct device_node *np = dev->of_node;
> >> +	resource_size_t iobase;
> >> +	struct resource_entry *win, *tmp;
> >> +
> >> +	err = of_pci_get_host_bridge_resources(np, 0, 0xff, resources, &iobase);
> >> +	if (err)
> >> +		return err;
> >> +
> >> +	err = devm_request_pci_bus_resources(dev, resources);
> >> +	if (err)
> >> +		return err;
> >> +
> >> +	resource_list_for_each_entry_safe(win, tmp, resources) {
> >> +		struct resource *res = win->res;
> >> +
> >> +		switch (resource_type(res)) {
> >> +		case IORESOURCE_IO:
> >> +			err = pci_remap_iospace(res, iobase);
> >> +			if (err) {
> >> +				dev_warn(dev, "error %d: failed to map resource %pR\n",
> >> +					 err, res);
> >> +				resource_list_destroy_entry(win);
> >> +			}
> >> +			break;
> >> +		case IORESOURCE_MEM:
> >> +			res_valid |= !(res->flags & IORESOURCE_PREFETCH);
> >> +			break;
> >> +		case IORESOURCE_BUS:
> >> +			*bus_range = res;
> >> +			break;
> >> +		}
> >> +	}
> >> +
> >> +	if (res_valid)
> >> +		return 0;
> >> +
> >> +	dev_err(dev, "non-prefetchable memory resource required\n");
> >> +	return -EINVAL;
> >> +}
> > 
> > The code above is starting to look awfully familiar.  I wonder if it's
> > time to think about some PCI-internal interface that can encapsulate
> > this.  In this case, there's really nothing Cadence-specific here.
> > There are other callers where there *is* vendor-specific code, but
> > possibly that could be handled by returning pointers to bus number,
> > I/O port, and MMIO resources so the caller could do the
> > vendor-specific stuff?
> > 
> > Bjorn
> > 
> 
> I am listing:
> - gen_pci_parse_request_of_pci_ranges() [1]
> - cdns_pcie_parse_request_of_pci_ranges() - same as [1]
> - advk_pcie_parse_request_of_pci_ranges()
> - altera_pcie_parse_request_of_pci_ranges()
> - versatile_pci_parse_request_of_pci_ranges()
> - rcar_pcie_parse_request_of_pci_ranges()
> 
> Then what about doing something like that:
> 
> ---8<--------------------------------------------------------------------------
> diff --git a/drivers/pci/host/pci-host-common.c b/drivers/pci/host/pci-host-common.c
> index 44a47d4f0b8f..2413c5d83cbd 100644
> --- a/drivers/pci/host/pci-host-common.c
> +++ b/drivers/pci/host/pci-host-common.c
> @@ -24,10 +24,19 @@
>  #include <linux/pci-ecam.h>
>  #include <linux/platform_device.h>
>  
> -static int gen_pci_parse_request_of_pci_ranges(struct device *dev,
> -		       struct list_head *resources, struct resource **bus_range)
> +struct pci_host_resource_parser {
> +	int	(*get_resource)(void *userdata, struct resource_entry *win,
> +				resource_size_t iobase);
> +	int	(*finalize)(void *userdata);
> +	void	(*abort)(void *userdata);
> +};
> +
> +int pci_host_parse_request_of_pci_ranges(struct device *dev,
> +				 struct list_head *resources,
> +				 const struct pci_host_resource_parser *parser,
> +				 void *userdata)
>  {
> -	int err, res_valid = 0;
> +	int err;
>  	struct device_node *np = dev->of_node;
>  	resource_size_t iobase;
>  	struct resource_entry *win, *tmp;
> @@ -40,34 +49,94 @@ static int gen_pci_parse_request_of_pci_ranges(struct device *dev,
>  	if (err)
>  		return err;
>  
> -	resource_list_for_each_entry_safe(win, tmp, resources) {
> -		struct resource *res = win->res;
> -
> -		switch (resource_type(res)) {
> -		case IORESOURCE_IO:
> -			err = pci_remap_iospace(res, iobase);
> -			if (err) {
> -				dev_warn(dev, "error %d: failed to map resource %pR\n",
> -					 err, res);
> -				resource_list_destroy_entry(win);
> -			}
> -			break;
> -		case IORESOURCE_MEM:
> -			res_valid |= !(res->flags & IORESOURCE_PREFETCH);
> -			break;
> -		case IORESOURCE_BUS:
> -			*bus_range = res;
> -			break;
> +	if (parser && parser->get_resource) {
> +		resource_list_for_each_entry_safe(win, tmp, resources) {
> +			err = parser->get_resource(userdata, win, iobase);
> +			if (err)
> +				goto do_abort;
> +		}
> +	}
> +
> +	if (parser && parser->finalize)
> +		return parser->finalize(userdata);
> +
> +	return 0;
> +
> + do_abort:
> +	if (parser && parser->abort)
> +		parser->abort(userdata);
> +
> +	return err;
> +}
> +EXPORT_SYMBOL_GPL(pci_host_parse_request_of_pci_ranges);
> +
> +
> +struct gen_pci_parser_context {
> +	struct device	*dev;
> +	struct resource	**bus_range;
> +	int		res_valid;
> +};
> +
> +static int gen_pci_resource_parser_get_resource(void *userdata,
> +						struct resource_entry *win,
> +						resource_size_t iobase)
> +{
> +	struct gen_pci_parser_context *ctx = userdata;
> +	struct device *dev = ctx->dev;
> +	struct resource *res = win->res;
> +	int err;
> +
> +	switch (resource_type(res)) {
> +	case IORESOURCE_IO:
> +		err = pci_remap_iospace(res, iobase);
> +		if (err) {
> +			dev_warn(dev, "error %d: failed to map resource %pR\n",
> +				 err, res);
> +			resource_list_destroy_entry(win);
>  		}
> +		break;
> +	case IORESOURCE_MEM:
> +		ctx->res_valid |= !(res->flags & IORESOURCE_PREFETCH);
> +		break;
> +	case IORESOURCE_BUS:
> +		*ctx->bus_range = res;
> +		break;
>  	}
>  
> -	if (res_valid)
> +	return 0;
> +}
> +
> +static int gen_pci_resource_parser_finalize(void *userdata)
> +{
> +	struct gen_pci_parser_context *ctx = userdata;
> +	struct device *dev = ctx->dev;
> +
> +	if (ctx->res_valid)
>  		return 0;
>  
>  	dev_err(dev, "non-prefetchable memory resource required\n");
>  	return -EINVAL;
>  }
>  
> +static const struct pci_host_resource_parser gen_pci_resource_parser = {
> +	.get_resource	= gen_pci_resource_parser_get_resource,
> +	.finalize	= gen_pci_resource_parser_finalize,
> +};
> +
> +static int gen_pci_parse_request_of_pci_ranges(struct device *dev,
> +		       struct list_head *resources, struct resource **bus_range)
> +{
> +	struct gen_pci_parser_context ctx;
> +
> +	ctx.dev = dev;
> +	ctx.bus_range = bus_range;
> +	ctx.res_valid = 0;
> +
> +	return pci_host_parse_request_of_pci_ranges(dev, resources,
> +						    &gen_pci_resource_parser,
> +						    &ctx);
> +}

I think that what you are doing is something along the lines:

acpi_dev_get_resources()

and the struct res_proc_context but maybe we can do something
simpler to start with.

We can go through the resource list twice in every driver, once through
the generic function that we are creating - that embeds code common
across bridges - to be clear something like:

static int of_pci_parse_request_pci_resources(struct device *dev,
					      struct list_head *res)
{
	int err, res_valid;
	resource_size_t iobase;
	struct device_node *np = dev->of_node;
	struct resource_entry *win, *tmp;

	err = of_pci_get_host_bridge_resources(np, 0, 0xff, res, &iobase);
	if (err)
		return err;

	err = devm_request_pci_bus_resources(dev, res);
	if (err)
		goto out_release_res;

	resource_list_for_each_entry_safe(win, tmp, res) {
		struct resource *res = win->res;

		switch (resource_type(res)) {
		case IORESOURCE_IO:
			err = pci_remap_iospace(res, iobase);
			if (err) {
				dev_warn(dev, "error %d: failed to map resource %pR\n",
					 err, res);
				resource_list_destroy_entry(win);
			}
			break;
		case IORESOURCE_MEM:
			res_valid |= !(res->flags & IORESOURCE_PREFETCH);
			break;
		}
	}

	if (res_valid)
		return 0;

	dev_err(dev, "non-prefetchable memory resource required\n");
	err = -EINVAL;

out_release_res:
	pci_free_resource_list(res);
	return err;
}

and then in a PCI host bridge specific loop that just carries
out the host bridge specific actions for every resource returned
in the res list.

Lorenzo

> +
>  static void gen_pci_unmap_cfg(void *ptr)
>  {
>  	pci_ecam_free((struct pci_config_window *)ptr);
> ---8<--------------------------------------------------------------------------

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 3/5] PCI: cadence: Add host driver for Cadence PCIe controller
  2017-11-29 17:34   ` Lorenzo Pieralisi
@ 2017-12-03 20:44     ` Cyrille Pitchen
  2017-12-04 18:20       ` Lorenzo Pieralisi
  0 siblings, 1 reply; 39+ messages in thread
From: Cyrille Pitchen @ 2017-12-03 20:44 UTC (permalink / raw)
  To: Lorenzo Pieralisi
  Cc: bhelgaas, kishon, linux-pci, adouglas, stelford, dgary, kgopi,
	eandrews, thomas.petazzoni, sureshp, nsekhar, linux-kernel, robh,
	devicetree

Hi Lorenzo,

Le 29/11/2017 à 18:34, Lorenzo Pieralisi a écrit :
> On Thu, Nov 23, 2017 at 04:01:48PM +0100, Cyrille Pitchen wrote:
>> This patch adds support to the Cadence PCIe controller in host mode.
> 
> Bjorn already commented on this, it would be good to add some
> of the cover letter details in this log.
> 
>> Signed-off-by: Cyrille Pitchen <cyrille.pitchen@free-electrons.com>
>> ---
>>  drivers/Makefile                        |   1 +
>>  drivers/pci/Kconfig                     |   1 +
>>  drivers/pci/cadence/Kconfig             |  24 ++
>>  drivers/pci/cadence/Makefile            |   2 +
>>  drivers/pci/cadence/pcie-cadence-host.c | 425 ++++++++++++++++++++++++++++++++
> 
> You should also update the MAINTAINERS file.

I will ask Cadence who will be the maintainer.

> 
>>  drivers/pci/cadence/pcie-cadence.c      | 110 +++++++++
>>  drivers/pci/cadence/pcie-cadence.h      | 325 ++++++++++++++++++++++++
>>  7 files changed, 888 insertions(+)
>>  create mode 100644 drivers/pci/cadence/Kconfig
>>  create mode 100644 drivers/pci/cadence/Makefile
>>  create mode 100644 drivers/pci/cadence/pcie-cadence-host.c
>>  create mode 100644 drivers/pci/cadence/pcie-cadence.c
>>  create mode 100644 drivers/pci/cadence/pcie-cadence.h
>>
>> diff --git a/drivers/Makefile b/drivers/Makefile
>> index 1d034b680431..27bdd98784d9 100644
>> --- a/drivers/Makefile
>> +++ b/drivers/Makefile
>> @@ -18,6 +18,7 @@ obj-y                               += pwm/
>>
>>  obj-$(CONFIG_PCI)            += pci/
>>  obj-$(CONFIG_PCI_ENDPOINT)   += pci/endpoint/
>> +obj-$(CONFIG_PCI_CADENCE)    += pci/cadence/
> 
> Already commented on the cover letter.
> 
>>  # PCI dwc controller drivers
>>  obj-y                                += pci/dwc/
>>
>> diff --git a/drivers/pci/Kconfig b/drivers/pci/Kconfig
>> index 90944667ccea..2471b2e36b8b 100644
>> --- a/drivers/pci/Kconfig
>> +++ b/drivers/pci/Kconfig
>> @@ -144,6 +144,7 @@ config PCI_HYPERV
>>            PCI devices from a PCI backend to support PCI driver domains.
>>
>>  source "drivers/pci/hotplug/Kconfig"
>> +source "drivers/pci/cadence/Kconfig"
>>  source "drivers/pci/dwc/Kconfig"
>>  source "drivers/pci/host/Kconfig"
>>  source "drivers/pci/endpoint/Kconfig"
>> diff --git a/drivers/pci/cadence/Kconfig b/drivers/pci/cadence/Kconfig
>> new file mode 100644
>> index 000000000000..120306cae2aa
>> --- /dev/null
>> +++ b/drivers/pci/cadence/Kconfig
>> @@ -0,0 +1,24 @@
>> +menuconfig PCI_CADENCE
>> +     bool "Cadence PCI controllers support"
>> +     depends on PCI && HAS_IOMEM
>> +     help
>> +       Say Y here if you want to support some Cadence PCI controller.
>> +
>> +       When in doubt, say N.
>> +
>> +if PCI_CADENCE
>> +
>> +config PCIE_CADENCE
>> +     bool
>> +
>> +config PCIE_CADENCE_HOST
>> +     bool "Cadence PCIe host controller"
>> +     depends on OF
>> +     depends on PCI_MSI_IRQ_DOMAIN
> 
> I do not see the reason for this dependency in the code.
>

I will remove it if it's not needed.
 
>> +     select PCIE_CADENCE
>> +     help
>> +       Say Y here if you want to support the Cadence PCIe controller in host
>> +       mode. This PCIe controller may be embedded into many different vendors
>> +       SoCs.
>> +
>> +endif # PCI_CADENCE
>> diff --git a/drivers/pci/cadence/Makefile b/drivers/pci/cadence/Makefile
>> new file mode 100644
>> index 000000000000..d57d192d2595
>> --- /dev/null
>> +++ b/drivers/pci/cadence/Makefile
>> @@ -0,0 +1,2 @@
>> +obj-$(CONFIG_PCIE_CADENCE) += pcie-cadence.o
>> +obj-$(CONFIG_PCIE_CADENCE_HOST) += pcie-cadence-host.o
>> diff --git a/drivers/pci/cadence/pcie-cadence-host.c b/drivers/pci/cadence/pcie-cadence-host.c
>> new file mode 100644
>> index 000000000000..252471e72a93
>> --- /dev/null
>> +++ b/drivers/pci/cadence/pcie-cadence-host.c
>> @@ -0,0 +1,425 @@
>> +/*
>> + * Cadence PCIe host controller driver.
>> + *
>> + * Copyright (c) 2017 Cadence
>> + *
>> + * Author: Cyrille Pitchen <cyrille.pitchen@free-electrons.com>
>> + *
>> + * This program is free software; you can redistribute it and/or modify
>> + * it under the terms of the GNU General Public License version 2 as
>> + * published by the Free Software Foundation.
>> + *
>> + * This program is distributed in the hope that it will be useful,
>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
>> + * GNU General Public License for more details.
>> + *
>> + * You should have received a copy of the GNU General Public License along with
>> + * this program.  If not, see <http://www.gnu.org/licenses/>.
>> + */
>> +#include <linux/kernel.h>
>> +#include <linux/of_address.h>
>> +#include <linux/of_pci.h>
>> +#include <linux/platform_device.h>
>> +#include <linux/pm_runtime.h>
>> +
>> +#include "pcie-cadence.h"
>> +
>> +/**
>> + * struct cdns_pcie_rc_data - hardware specific data
>> + * @max_regions: maximum number of regions supported by the hardware
>> + * @vendor_id: PCI vendor ID
>> + * @device_id: PCI device ID
>> + * @no_bar_nbits: Number of bits to keep for inbound (PCIe -> CPU) address
>> + *                translation (nbits sets into the "no BAR match" register).
>> + */
>> +struct cdns_pcie_rc_data {
>> +     size_t                  max_regions;
> 
> Reason for it to be size_t ?
>

No special reason, I can use another integer type.
 
>> +     u16                     vendor_id;
>> +     u16                     device_id;
>> +     u8                      no_bar_nbits;
>> +};
> 
> I think that this data should come from DT (?) more below.
>

I can update the DT bindings documentation and the driver source code as needed.
 
>> +
>> +/**
>> + * struct cdns_pcie_rc - private data for this PCIe Root Complex driver
>> + * @pcie: Cadence PCIe controller
>> + * @dev: pointer to PCIe device
>> + * @cfg_res: start/end offsets in the physical system memory to map PCI
>> + *           configuration space accesses
>> + * @bus_range: first/last buses behind the PCIe host controller
>> + * @cfg_base: IO mapped window to access the PCI configuration space of a
>> + *            single function at a time
>> + * @data: pointer to a 'struct cdns_pcie_rc_data'
>> + */
>> +struct cdns_pcie_rc {
>> +     struct cdns_pcie        pcie;
>> +     struct device           *dev;
>> +     struct resource         *cfg_res;
>> +     struct resource         *bus_range;
>> +     void __iomem            *cfg_base;
>> +     const struct cdns_pcie_rc_data  *data;
>> +};
>> +
>> +static void __iomem *
> 
> Please do not split lines like this, storage class and return type
> should be in the same line as the name, move parameter(s) to a new
> line.
> 
> static void __iomem *cdns_pci_map_bus(struct pci_bus *bus, unsigned int devfn,
>                                       int where)
>

ok :)
 
>> +cdns_pci_map_bus(struct pci_bus *bus, unsigned int devfn, int where)
>> +{
>> +     struct pci_host_bridge *bridge = pci_find_host_bridge(bus);
>> +     struct cdns_pcie_rc *rc = pci_host_bridge_priv(bridge);
>> +     struct cdns_pcie *pcie = &rc->pcie;
>> +     unsigned int busn = bus->number;
>> +     u32 addr0, desc0;
>> +
>> +     if (busn < rc->bus_range->start || busn > rc->bus_range->end)
>> +             return NULL;
> 
> It does not hurt but I wonder whether you really need this check.
>

I can remove it.
 
>> +     if (busn == rc->bus_range->start) {
>> +             if (devfn)
> 
> I suspect I know why you need this check but I ask you to explain it
> anyway if you do not mind please.
>

If I have understood correctly, Cadence team told me that only the root
port is available on the first bus through device 0, function 0.
No other device/function should connected on this bus, all other devices
are behind at least one PCI bridge.

I can add a comment here to explain that.
 
>> +                     return NULL;
>> +
>> +             return pcie->reg_base + (where & 0xfff);
>> +     }
>> +
>> +     /* Update Output registers for AXI region 0. */
>> +     addr0 = CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_NBITS(12) |
> 
> Ok, so for every config access you reprogram addr0 to reflect the
> correct bus|devfn ID in the PCI bus TLP corresponding to an ECAM address
> in CPU physical address space, is my understanding correct ?
>

The idea is to able to use only a 4KB memory area at a fixed address in the
space allocated for the PCIe controller in the AXI bus. I guess the plan is
to leave more space on the AXI bus to map all other PCIe devices.

This is just my guess. Anyway one purpose of this driver was actually to
perform all PCI configuration space accesses through this single 4KB memory
area in the AXI bus, changing the mapping dynamically to reach the relevant
PCI device. 
 
>> +             CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_DEVFN(devfn) |
>> +             CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_BUS(busn);
>> +     cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_PCI_ADDR0(0), addr0);
>> +
>> +     /* Configuration Type 0 or Type 1 access. */
>> +     desc0 = CDNS_PCIE_AT_OB_REGION_DESC0_HARDCODED_RID |
>> +             CDNS_PCIE_AT_OB_REGION_DESC0_DEVFN(0);
>> +     /*
>> +      * The bus number was already set once for all in desc1 by
>> +      * cdns_pcie_host_init_address_translation().
>> +      */
>> +     if (busn == rc->bus_range->start + 1)
>> +             desc0 |= CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_CONF_TYPE0;
>> +     else
>> +             desc0 |= CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_CONF_TYPE1;
> 
> I would like to ask you why you have to do it here and the root port
> does not figure it out by itself, I do not have the datasheet so I am
> just asking for my own information.

PCI configuration space registers of the root port can only be read through
the APB bus at offset 0:
->reg_base + (where & 0xfff)

They are internal registers of the PCIe controller so no TLP on the PCIe bus.

However to access the PCI configuration space registers of any other device,
the PCIe controller builds then sends a TLP on the PCIe bus using the offset
in the 4KB AXI area as the offset of the register in the PCI configuration
space:
->cfg_base + (where & 0xfff)

> 
>> +     cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_DESC0(0), desc0);
>> +
>> +     return rc->cfg_base + (where & 0xfff);
>> +}
>> +
>> +static struct pci_ops cdns_pcie_host_ops = {
>> +     .map_bus        = cdns_pci_map_bus,
>> +     .read           = pci_generic_config_read,
>> +     .write          = pci_generic_config_write,
>> +};
>> +
>> +static const struct cdns_pcie_rc_data cdns_pcie_rc_data = {
>> +     .max_regions    = 32,
>> +     .vendor_id      = PCI_VENDOR_ID_CDNS,
>> +     .device_id      = 0x0200,
>> +     .no_bar_nbits   = 32,
>> +};
> 
> Should (some of) these parameters be retrieved through a DT binding ?
>

Indeed, maybe we get max_regions and no_bar_nbits from the DT.

About the vendor and device IDs, I don't know which would be the best
choice between some dedicated DT properties or associating a custom
structure as above to the 'compatible' string.

Honestly, I don't have any strong preference, please just tell me what
you would prefer :)
 
>> +static const struct of_device_id cdns_pcie_host_of_match[] = {
>> +     { .compatible = "cdns,cdns-pcie-host",
>> +       .data = &cdns_pcie_rc_data },
>> +
>> +     { },
>> +};
>> +
>> +static int cdns_pcie_parse_request_of_pci_ranges(struct device *dev,
>> +                                              struct list_head *resources,
>> +                                              struct resource **bus_range)
>> +{
>> +     int err, res_valid = 0;
>> +     struct device_node *np = dev->of_node;
>> +     resource_size_t iobase;
>> +     struct resource_entry *win, *tmp;
>> +
>> +     err = of_pci_get_host_bridge_resources(np, 0, 0xff, resources, &iobase);
>> +     if (err)
>> +             return err;
>> +
>> +     err = devm_request_pci_bus_resources(dev, resources);
>> +     if (err)
>> +             return err;
>> +
>> +     resource_list_for_each_entry_safe(win, tmp, resources) {
>> +             struct resource *res = win->res;
>> +
>> +             switch (resource_type(res)) {
>> +             case IORESOURCE_IO:
>> +                     err = pci_remap_iospace(res, iobase);
>> +                     if (err) {
>> +                             dev_warn(dev, "error %d: failed to map resource %pR\n",
>> +                                      err, res);
>> +                             resource_list_destroy_entry(win);
>> +                     }
>> +                     break;
>> +             case IORESOURCE_MEM:
>> +                     res_valid |= !(res->flags & IORESOURCE_PREFETCH);
>> +                     break;
>> +             case IORESOURCE_BUS:
>> +                     *bus_range = res;
>> +                     break;
>> +             }
>> +     }
>> +
>> +     if (res_valid)
>> +             return 0;
>> +
>> +     dev_err(dev, "non-prefetchable memory resource required\n");
>> +     return -EINVAL;
> 
> Nit, I prefer you swap these two as it is done in pci-aardvark.c:
> 
>         if (!res_valid) {
>                 dev_err(dev, "non-prefetchable memory resource required\n");
>                 return -EINVAL;
>         }
> 
>         return 0;
> 
> but as per previous replies this function can be factorized in
> core PCI code - I would not bother unless you are willing to write
> the patch series that does the refactoring yourself :)
> 
>> +}
>> +
>> +static int cdns_pcie_host_init_root_port(struct cdns_pcie_rc *rc)
>> +{
>> +     const struct cdns_pcie_rc_data *data = rc->data;
>> +     struct cdns_pcie *pcie = &rc->pcie;
>> +     u8 pbn, sbn, subn;
>> +     u32 value, ctrl;
>> +
>> +     /*
>> +      * Set the root complex BAR configuration register:
>> +      * - disable both BAR0 and BAR1.
>> +      * - enable Prefetchable Memory Base and Limit registers in type 1
>> +      *   config space (64 bits).
>> +      * - enable IO Base and Limit registers in type 1 config
>> +      *   space (32 bits).
>> +      */
>> +     ctrl = CDNS_PCIE_LM_BAR_CFG_CTRL_DISABLED;
>> +     value = CDNS_PCIE_LM_RC_BAR_CFG_BAR0_CTRL(ctrl) |
>> +             CDNS_PCIE_LM_RC_BAR_CFG_BAR1_CTRL(ctrl) |
>> +             CDNS_PCIE_LM_RC_BAR_CFG_PREFETCH_MEM_ENABLE |
>> +             CDNS_PCIE_LM_RC_BAR_CFG_PREFETCH_MEM_64BITS |
>> +             CDNS_PCIE_LM_RC_BAR_CFG_IO_ENABLE |
>> +             CDNS_PCIE_LM_RC_BAR_CFG_IO_32BITS;
>> +     cdns_pcie_writel(pcie, CDNS_PCIE_LM_RC_BAR_CFG, value);
>> +
>> +     /* Set root port configuration space */
>> +     if (data->vendor_id != 0xffff)
>> +             cdns_pcie_rp_writew(pcie, PCI_VENDOR_ID, data->vendor_id);
>> +     if (data->device_id != 0xffff)
>> +             cdns_pcie_rp_writew(pcie, PCI_DEVICE_ID, data->device_id);
>> +
>> +     cdns_pcie_rp_writeb(pcie, PCI_CLASS_REVISION, 0);
>> +     cdns_pcie_rp_writeb(pcie, PCI_CLASS_PROG, 0);
>> +     cdns_pcie_rp_writew(pcie, PCI_CLASS_DEVICE, PCI_CLASS_BRIDGE_PCI);
>> +
>> +     pbn = rc->bus_range->start;
>> +     sbn = pbn + 1; /* Single root port. */
>> +     subn = rc->bus_range->end;
>> +     cdns_pcie_rp_writeb(pcie, PCI_PRIMARY_BUS, pbn);
>> +     cdns_pcie_rp_writeb(pcie, PCI_SECONDARY_BUS, sbn);
>> +     cdns_pcie_rp_writeb(pcie, PCI_SUBORDINATE_BUS, subn);
> 
> Again - I do not have the datasheet for this device therefore I would
> kindly ask you how this works; it seems to me that what you are doing
> here is done through normal configuration cycles in an ECAM compliant
> system to program the RP PRIMARY/SECONDARY/SUBORDINATE bus - I would
> like to understand why this code is needed.
>

I will test without those lines to test whether I can remove them.

At first, the PCIe controller was tested by Cadence team: there was code
in their bootloader to initialize the hardware (building the AXI <-> PCIe
mappings, ...): the bootloader used to set the primary, secondary and
subordinate bus numbers in the root port PCI config space.

Also there was a hardware trick to redirect accesses of the lowest
addresses in the AXI bus to the APB bus so the PCI configuration space of
the root port could have been accessed from the AXI bus too.

The AXI <-> PCIe mapping being done by the bootloader and the root port
config space being accessible from the AXI bus, it was possible to use
the pci-host-generic driver.

However, the hardware trick won't be included in the final design since
Cadence now wants to perform all PCI configuration space accesses through
a small 4KB window at a fixed address on the AXI bus.
Also, we now want all initialisations to be done by the linux driver
instead of the bootloader.

I simply moved all those initialisations from the bootloader to the linux
driver but actually there is a chance that I can remove the 3 writes to
the PCI_*_BUS registers.

 
>> +     return 0;
>> +}
>> +
>> +static int cdns_pcie_host_init_address_translation(struct cdns_pcie_rc *rc)
>> +{
>> +     struct cdns_pcie *pcie = &rc->pcie;
>> +     struct resource *cfg_res = rc->cfg_res;
>> +     struct resource *mem_res = pcie->mem_res;
>> +     struct resource *bus_range = rc->bus_range;
>> +     struct device *dev = rc->dev;
>> +     struct device_node *np = dev->of_node;
>> +     struct of_pci_range_parser parser;
>> +     struct of_pci_range range;
>> +     u32 addr0, addr1, desc1;
>> +     u64 cpu_addr;
>> +     int r, err;
>> +
>> +     /*
>> +      * Reserve region 0 for PCI configure space accesses:
>> +      * OB_REGION_PCI_ADDR0 and OB_REGION_DESC0 are updated dynamically by
>> +      * cdns_pci_map_bus(), other region registers are set here once for all.
>> +      */
>> +     addr1 = 0; /* Should be programmed to zero. */
>> +     desc1 = CDNS_PCIE_AT_OB_REGION_DESC1_BUS(bus_range->start);
>> +     cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_PCI_ADDR1(0), addr1);
>> +     cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_DESC1(0), desc1);
>> +
>> +     cpu_addr = cfg_res->start - mem_res->start;
>> +     addr0 = CDNS_PCIE_AT_OB_REGION_CPU_ADDR0_NBITS(12) |
>> +             (lower_32_bits(cpu_addr) & GENMASK(31, 8));
>> +     addr1 = upper_32_bits(cpu_addr);
>> +     cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_CPU_ADDR0(0), addr0);
>> +     cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_CPU_ADDR1(0), addr1);
>> +
>> +     err = of_pci_range_parser_init(&parser, np);
>> +     if (err)
>> +             return err;
>> +
>> +     r = 1;
>> +     for_each_of_pci_range(&parser, &range) {
>> +             bool is_io;
>> +
>> +             if (r >= rc->data->max_regions)
>> +                     break;
>> +
>> +             if ((range.flags & IORESOURCE_TYPE_BITS) == IORESOURCE_MEM)
>> +                     is_io = false;
>> +             else if ((range.flags & IORESOURCE_TYPE_BITS) == IORESOURCE_IO)
>> +                     is_io = true;
>> +             else
>> +                     continue;
>> +
>> +             cdns_pcie_set_outbound_region(pcie, r, is_io,
>> +                                           range.cpu_addr,
>> +                                           range.pci_addr,
>> +                                           range.size);
>> +             r++;
>> +     }
>> +
>> +     /*
>> +      * Set Root Port no BAR match Inbound Translation registers:
>> +      * needed for MSI.
> 
> And DMA :) if I understand what this is doing correctly, ie setting
> the root complex decoding for incoming memory traffic.
>

Yes, indeed :)
 
>> +      * Root Port BAR0 and BAR1 are disabled, hence no need to set their
>> +      * inbound translation registers.
>> +      */
>> +     addr0 = CDNS_PCIE_AT_IB_RP_BAR_ADDR0_NBITS(rc->data->no_bar_nbits);
>> +     addr1 = 0;
>> +     cdns_pcie_writel(pcie, CDNS_PCIE_AT_IB_RP_BAR_ADDR0(RP_NO_BAR), addr0);
>> +     cdns_pcie_writel(pcie, CDNS_PCIE_AT_IB_RP_BAR_ADDR1(RP_NO_BAR), addr1);
>> +
>> +     return 0;
>> +}
>> +
>> +static int cdns_pcie_host_init(struct device *dev,
>> +                            struct list_head *resources,
>> +                            struct cdns_pcie_rc *rc)
>> +{
>> +     struct resource *bus_range = NULL;
>> +     int err;
>> +
>> +     /* Parse our PCI ranges and request their resources */
>> +     err = cdns_pcie_parse_request_of_pci_ranges(dev, resources, &bus_range);
>> +     if (err)
>> +             goto err_out;
> 
> I think that the err_out path should be part of:
> 
> cdns_pcie_parse_request_of_pci_ranges()
> 
> implementation and here you would just return.
> 
>> +
>> +     if (bus_range->start > bus_range->end) {
>> +             err = -EINVAL;
>> +             goto err_out;
>> +     }
> 
> Add a space here; this check seems useless to me anyway.
>

I can remove this too.
 
>> +     rc->bus_range = bus_range;
>> +     rc->pcie.bus = bus_range->start;
>> +
>> +     err = cdns_pcie_host_init_root_port(rc);
>> +     if (err)
>> +             goto err_out;
>> +
>> +     err = cdns_pcie_host_init_address_translation(rc);
>> +     if (err)
>> +             goto err_out;
>> +
>> +     return 0;
>> +
>> + err_out:
>> +     pci_free_resource_list(resources);
> 
> See above.
> 
>> +     return err;
>> +}
>> +
>> +static int cdns_pcie_host_probe(struct platform_device *pdev)
>> +{
>> +     const struct of_device_id *of_id;
>> +     const char *type;
>> +     struct device *dev = &pdev->dev;
>> +     struct device_node *np = dev->of_node;
>> +     struct pci_bus *bus, *child;
>> +     struct pci_host_bridge *bridge;
>> +     struct list_head resources;
>> +     struct cdns_pcie_rc *rc;
>> +     struct cdns_pcie *pcie;
>> +     struct resource *res;
>> +     int ret;
>> +
>> +     bridge = devm_pci_alloc_host_bridge(dev, sizeof(*rc));
>> +     if (!bridge)
>> +             return -ENOMEM;
>> +
>> +     rc = pci_host_bridge_priv(bridge);
>> +     rc->dev = dev;
>> +     platform_set_drvdata(pdev, rc);
> 
> I do not think it is needed.
> 
>> +     pcie = &rc->pcie;
>> +     pcie->is_rc = true;
>> +
>> +     of_id = of_match_node(cdns_pcie_host_of_match, np);
>> +     rc->data = (const struct cdns_pcie_rc_data *)of_id->data;
>> +
>> +     type = of_get_property(np, "device_type", NULL);
>> +     if (!type || strcmp(type, "pci")) {
>> +             dev_err(dev, "invalid \"device_type\" %s\n", type);
>> +             return -EINVAL;
>> +     }
>> +
>> +     res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "reg");
>> +     pcie->reg_base = devm_ioremap_resource(dev, res);
>> +     if (IS_ERR(pcie->reg_base)) {
>> +             dev_err(dev, "missing \"reg\"\n");
>> +             return PTR_ERR(pcie->reg_base);
>> +     }
>> +
>> +     res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "cfg");
>> +     rc->cfg_base = devm_ioremap_resource(dev, res);
> 
> devm_pci_remap_cfg_resource() please.
>

I will change that.
 
>> +     if (IS_ERR(rc->cfg_base)) {
>> +             dev_err(dev, "missing \"cfg\"\n");
>> +             return PTR_ERR(rc->cfg_base);
>> +     }
>> +     rc->cfg_res = res;
>> +
>> +     res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "mem");
>> +     if (!res) {
>> +             dev_err(dev, "missing \"mem\"\n");
>> +             return -EINVAL;
>> +     }
>> +     pcie->mem_res = res;
>> +
>> +     pm_runtime_enable(dev);
>> +     ret = pm_runtime_get_sync(dev);
>> +     if (ret < 0) {
>> +             dev_err(dev, "pm_runtime_get_sync() failed\n");
>> +             goto err_get_sync;
>> +     }
>> +
>> +     INIT_LIST_HEAD(&resources);
>> +     ret = cdns_pcie_host_init(dev, &resources, rc);
>> +     if (ret)
>> +             goto err_init;
>> +
>> +     list_splice_init(&resources, &bridge->windows);
>> +     bridge->dev.parent = dev;
>> +     bridge->busnr = pcie->bus;
>> +     bridge->ops = &cdns_pcie_host_ops;
>> +     bridge->map_irq = of_irq_parse_and_map_pci;
>> +     bridge->swizzle_irq = pci_common_swizzle;
>> +
>> +     ret = pci_scan_root_bus_bridge(bridge);
>> +     if (ret < 0) {
>> +             dev_err(dev, "Scanning root bridge failed");
>> +             goto err_init;
>> +     }
>> +
>> +     bus = bridge->bus;
>> +     pci_bus_size_bridges(bus);
>> +     pci_bus_assign_resources(bus);
>> +
>> +     list_for_each_entry(child, &bus->children, node)
>> +             pcie_bus_configure_settings(child);
>> +
>> +     pci_bus_add_devices(bus);
>> +
>> +     return 0;
>> +
>> + err_init:
>> +     pm_runtime_put_sync(dev);
>> +
>> + err_get_sync:
>> +     pm_runtime_disable(dev);
>> +
>> +     return ret;
>> +}
>> +
>> +static struct platform_driver cdns_pcie_host_driver = {
>> +     .driver = {
>> +             .name = "cdns-pcie-host",
>> +             .of_match_table = cdns_pcie_host_of_match,
>> +     },
>> +     .probe = cdns_pcie_host_probe,
>> +};
>> +builtin_platform_driver(cdns_pcie_host_driver);
>> diff --git a/drivers/pci/cadence/pcie-cadence.c b/drivers/pci/cadence/pcie-cadence.c
>> new file mode 100644
>> index 000000000000..5c10879d5e96
>> --- /dev/null
>> +++ b/drivers/pci/cadence/pcie-cadence.c
>> @@ -0,0 +1,110 @@
>> +/*
>> + * Cadence PCIe controller driver.
>> + *
>> + * Copyright (c) 2017 Cadence
>> + *
>> + * Author: Cyrille Pitchen <cyrille.pitchen@free-electrons.com>
>> + *
>> + * This program is free software; you can redistribute it and/or modify
>> + * it under the terms of the GNU General Public License version 2 as
>> + * published by the Free Software Foundation.
>> + *
>> + * This program is distributed in the hope that it will be useful,
>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
>> + * GNU General Public License for more details.
>> + *
>> + * You should have received a copy of the GNU General Public License along with
>> + * this program.  If not, see <http://www.gnu.org/licenses/>.
>> + */
>> +
>> +#include <linux/kernel.h>
>> +
>> +#include "pcie-cadence.h"
>> +
>> +void cdns_pcie_set_outbound_region(struct cdns_pcie *pcie, u32 r, bool is_io,
>> +                                u64 cpu_addr, u64 pci_addr, size_t size)
>> +{
>> +     /*
>> +      * roundup_pow_of_two() returns an unsigned long, which is not suited
>> +      * for 64bit values.
>> +      */
>> +     u64 sz = 1ULL << fls64(size - 1);
>> +     int nbits = ilog2(sz);
>> +     u32 addr0, addr1, desc0, desc1;
>> +
>> +     if (nbits < 8)
>> +             nbits = 8;
>> +
>> +     /* Set the PCI address */
>> +     addr0 = CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_NBITS(nbits) |
>> +             (lower_32_bits(pci_addr) & GENMASK(31, 8));
>> +     addr1 = upper_32_bits(pci_addr);
>> +
>> +     cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_PCI_ADDR0(r), addr0);
>> +     cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_PCI_ADDR1(r), addr1);
>> +
>> +     /* Set the PCIe header descriptor */
>> +     if (is_io)
>> +             desc0 = CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_IO;
>> +     else
>> +             desc0 = CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_MEM;
>> +     desc1 = 0;
>> +
>> +     if (pcie->is_rc) {
>> +             desc0 |= CDNS_PCIE_AT_OB_REGION_DESC0_HARDCODED_RID |
>> +                      CDNS_PCIE_AT_OB_REGION_DESC0_DEVFN(0);
>> +             desc1 |= CDNS_PCIE_AT_OB_REGION_DESC1_BUS(pcie->bus);
>> +     }
>> +
>> +     cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_DESC0(r), desc0);
>> +     cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_DESC1(r), desc1);
>> +
>> +     /* Set the CPU address */
>> +     cpu_addr -= pcie->mem_res->start;
>> +     addr0 = CDNS_PCIE_AT_OB_REGION_CPU_ADDR0_NBITS(nbits) |
>> +             (lower_32_bits(cpu_addr) & GENMASK(31, 8));
>> +     addr1 = upper_32_bits(cpu_addr);
>> +
>> +     cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_CPU_ADDR0(r), addr0);
>> +     cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_CPU_ADDR1(r), addr1);
>> +}
>> +
>> +void cdns_pcie_set_outbound_region_for_normal_msg(struct cdns_pcie *pcie, u32 r,
>> +                                               u64 cpu_addr)
> 
> Not used in this patch, you should split it out.
> 
>> +{
>> +     u32 addr0, addr1, desc0, desc1;
>> +
>> +     desc0 = CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_NORMAL_MSG;
>> +     desc1 = 0;
>> +     if (pcie->is_rc) {
>> +             desc0 |= CDNS_PCIE_AT_OB_REGION_DESC0_HARDCODED_RID |
>> +                      CDNS_PCIE_AT_OB_REGION_DESC0_DEVFN(0);
>> +             desc1 |= CDNS_PCIE_AT_OB_REGION_DESC1_BUS(pcie->bus);
>> +     }
>> +
>> +     /* Set the CPU address */
>> +     cpu_addr -= pcie->mem_res->start;
>> +     addr0 = CDNS_PCIE_AT_OB_REGION_CPU_ADDR0_NBITS(17) |
>> +             (lower_32_bits(cpu_addr) & GENMASK(31, 8));
>> +     addr1 = upper_32_bits(cpu_addr);
>> +
>> +     cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_PCI_ADDR0(r), 0);
>> +     cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_PCI_ADDR1(r), 0);
>> +     cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_DESC0(r), desc0);
>> +     cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_DESC1(r), desc1);
>> +     cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_CPU_ADDR0(r), addr0);
>> +     cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_CPU_ADDR1(r), addr1);
>> +}
>> +
>> +void cdns_pcie_reset_outbound_region(struct cdns_pcie *pcie, u32 r)
>> +{
>> +     cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_PCI_ADDR0(r), 0);
>> +     cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_PCI_ADDR1(r), 0);
>> +
>> +     cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_DESC0(r), 0);
>> +     cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_DESC1(r), 0);
>> +
>> +     cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_CPU_ADDR0(r), 0);
>> +     cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_CPU_ADDR1(r), 0);
>> +}
>> diff --git a/drivers/pci/cadence/pcie-cadence.h b/drivers/pci/cadence/pcie-cadence.h
>> new file mode 100644
>> index 000000000000..195e23b7d4fe
>> --- /dev/null
>> +++ b/drivers/pci/cadence/pcie-cadence.h
>> @@ -0,0 +1,325 @@
>> +/*
>> + * Cadence PCIe controller driver.
>> + *
>> + * Copyright (c) 2017 Cadence
>> + *
>> + * Author: Cyrille Pitchen <cyrille.pitchen@free-electrons.com>
>> + *
>> + * This program is free software; you can redistribute it and/or modify
>> + * it under the terms of the GNU General Public License version 2 as
>> + * published by the Free Software Foundation.
>> + *
>> + * This program is distributed in the hope that it will be useful,
>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
>> + * GNU General Public License for more details.
>> + *
>> + * You should have received a copy of the GNU General Public License along with
>> + * this program.  If not, see <http://www.gnu.org/licenses/>.
>> + */
>> +
>> +#ifndef _PCIE_CADENCE_H
>> +#define _PCIE_CADENCE_H
>> +
>> +#include <linux/kernel.h>
>> +#include <linux/pci.h>
>> +
>> +/*
>> + * Local Management Registers
>> + */
>> +#define CDNS_PCIE_LM_BASE    0x00100000
>> +
>> +/* Vendor ID Register */
>> +#define CDNS_PCIE_LM_ID              (CDNS_PCIE_LM_BASE + 0x0044)
>> +#define  CDNS_PCIE_LM_ID_VENDOR_MASK GENMASK(15, 0)
>> +#define  CDNS_PCIE_LM_ID_VENDOR_SHIFT        0
>> +#define  CDNS_PCIE_LM_ID_VENDOR(vid) \
>> +     (((vid) << CDNS_PCIE_LM_ID_VENDOR_SHIFT) & CDNS_PCIE_LM_ID_VENDOR_MASK)
>> +#define  CDNS_PCIE_LM_ID_SUBSYS_MASK GENMASK(31, 16)
>> +#define  CDNS_PCIE_LM_ID_SUBSYS_SHIFT        16
>> +#define  CDNS_PCIE_LM_ID_SUBSYS(sub) \
>> +     (((sub) << CDNS_PCIE_LM_ID_SUBSYS_SHIFT) & CDNS_PCIE_LM_ID_SUBSYS_MASK)
>> +
>> +/* Root Port Requestor ID Register */
>> +#define CDNS_PCIE_LM_RP_RID  (CDNS_PCIE_LM_BASE + 0x0228)
>> +#define  CDNS_PCIE_LM_RP_RID_MASK    GENMASK(15, 0)
>> +#define  CDNS_PCIE_LM_RP_RID_SHIFT   0
>> +#define  CDNS_PCIE_LM_RP_RID_(rid) \
>> +     (((rid) << CDNS_PCIE_LM_RP_RID_SHIFT) & CDNS_PCIE_LM_RP_RID_MASK)
>> +
>> +/* Endpoint Bus and Device Number Register */
>> +#define CDNS_PCIE_LM_EP_ID   (CDNS_PCIE_LM_BASE + 0x022c)
>> +#define  CDNS_PCIE_LM_EP_ID_DEV_MASK GENMASK(4, 0)
>> +#define  CDNS_PCIE_LM_EP_ID_DEV_SHIFT        0
>> +#define  CDNS_PCIE_LM_EP_ID_BUS_MASK GENMASK(15, 8)
>> +#define  CDNS_PCIE_LM_EP_ID_BUS_SHIFT        8
>> +
>> +/* Endpoint Function f BAR b Configuration Registers */
>> +#define CDNS_PCIE_LM_EP_FUNC_BAR_CFG0(fn) \
>> +     (CDNS_PCIE_LM_BASE + 0x0240 + (fn) * 0x0008)
>> +#define CDNS_PCIE_LM_EP_FUNC_BAR_CFG1(fn) \
>> +     (CDNS_PCIE_LM_BASE + 0x0244 + (fn) * 0x0008)
>> +#define  CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_APERTURE_MASK(b) \
>> +     (GENMASK(4, 0) << ((b) * 8))
>> +#define  CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_APERTURE(b, a) \
>> +     (((a) << ((b) * 8)) & CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_APERTURE_MASK(b))
>> +#define  CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(b) \
>> +     (GENMASK(7, 5) << ((b) * 8))
>> +#define  CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_CTRL(b, c) \
>> +     (((c) << ((b) * 8 + 5)) & CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(b))
>> +
>> +/* Endpoint Function Configuration Register */
>> +#define CDNS_PCIE_LM_EP_FUNC_CFG     (CDNS_PCIE_LM_BASE + 0x02c0)
> 
> All endpoint defines should be moved to the patch that needs them.
>

Will move those definitions into the endpoint patch.

Thanks for the review!

Best regards,

Cyrille
 
>> +
>> +/* Root Complex BAR Configuration Register */
>> +#define CDNS_PCIE_LM_RC_BAR_CFG      (CDNS_PCIE_LM_BASE + 0x0300)
>> +#define  CDNS_PCIE_LM_RC_BAR_CFG_BAR0_APERTURE_MASK  GENMASK(5, 0)
>> +#define  CDNS_PCIE_LM_RC_BAR_CFG_BAR0_APERTURE(a) \
>> +     (((a) << 0) & CDNS_PCIE_LM_RC_BAR_CFG_BAR0_APERTURE_MASK)
>> +#define  CDNS_PCIE_LM_RC_BAR_CFG_BAR0_CTRL_MASK              GENMASK(8, 6)
>> +#define  CDNS_PCIE_LM_RC_BAR_CFG_BAR0_CTRL(c) \
>> +     (((c) << 6) & CDNS_PCIE_LM_RC_BAR_CFG_BAR0_CTRL_MASK)
>> +#define  CDNS_PCIE_LM_RC_BAR_CFG_BAR1_APERTURE_MASK  GENMASK(13, 9)
>> +#define  CDNS_PCIE_LM_RC_BAR_CFG_BAR1_APERTURE(a) \
>> +     (((a) << 9) & CDNS_PCIE_LM_RC_BAR_CFG_BAR1_APERTURE_MASK)
>> +#define  CDNS_PCIE_LM_RC_BAR_CFG_BAR1_CTRL_MASK              GENMASK(16, 14)
>> +#define  CDNS_PCIE_LM_RC_BAR_CFG_BAR1_CTRL(c) \
>> +     (((c) << 14) & CDNS_PCIE_LM_RC_BAR_CFG_BAR1_CTRL_MASK)
>> +#define  CDNS_PCIE_LM_RC_BAR_CFG_PREFETCH_MEM_ENABLE BIT(17)
>> +#define  CDNS_PCIE_LM_RC_BAR_CFG_PREFETCH_MEM_32BITS 0
>> +#define  CDNS_PCIE_LM_RC_BAR_CFG_PREFETCH_MEM_64BITS BIT(18)
>> +#define  CDNS_PCIE_LM_RC_BAR_CFG_IO_ENABLE           BIT(19)
>> +#define  CDNS_PCIE_LM_RC_BAR_CFG_IO_16BITS           0
>> +#define  CDNS_PCIE_LM_RC_BAR_CFG_IO_32BITS           BIT(20)
>> +#define  CDNS_PCIE_LM_RC_BAR_CFG_CHECK_ENABLE                BIT(31)
>> +
>> +/* BAR control values applicable to both Endpoint Function and Root Complex */
>> +#define  CDNS_PCIE_LM_BAR_CFG_CTRL_DISABLED          0x0
>> +#define  CDNS_PCIE_LM_BAR_CFG_CTRL_IO_32BITS         0x1
>> +#define  CDNS_PCIE_LM_BAR_CFG_CTRL_MEM_32BITS                0x4
>> +#define  CDNS_PCIE_LM_BAR_CFG_CTRL_PREFETCH_MEM_32BITS       0x5
>> +#define  CDNS_PCIE_LM_BAR_CFG_CTRL_MEM_64BITS                0x6
>> +#define  CDNS_PCIE_LM_BAR_CFG_CTRL_PREFETCH_MEM_64BITS       0x7
>> +
>> +
>> +/*
>> + * Endpoint Function Registers (PCI configuration space for endpoint functions)
>> + */
>> +#define CDNS_PCIE_EP_FUNC_BASE(fn)   (((fn) << 12) & GENMASK(19, 12))
>> +
>> +#define CDNS_PCIE_EP_FUNC_MSI_CAP_OFFSET     0x90
>> +
>> +/*
>> + * Root Port Registers (PCI configuration space for the root port function)
>> + */
>> +#define CDNS_PCIE_RP_BASE    0x00200000
>> +
>> +
>> +/*
>> + * Address Translation Registers
>> + */
>> +#define CDNS_PCIE_AT_BASE    0x00400000
>> +
>> +/* Region r Outbound AXI to PCIe Address Translation Register 0 */
>> +#define CDNS_PCIE_AT_OB_REGION_PCI_ADDR0(r) \
>> +     (CDNS_PCIE_AT_BASE + 0x0000 + ((r) & 0x1f) * 0x0020)
>> +#define  CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_NBITS_MASK GENMASK(5, 0)
>> +#define  CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_NBITS(nbits) \
>> +     (((nbits) - 1) & CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_NBITS_MASK)
>> +#define  CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_DEVFN_MASK GENMASK(19, 12)
>> +#define  CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_DEVFN(devfn) \
>> +     (((devfn) << 12) & CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_DEVFN_MASK)
>> +#define  CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_BUS_MASK   GENMASK(27, 20)
>> +#define  CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_BUS(bus) \
>> +     (((bus) << 20) & CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_BUS_MASK)
>> +
>> +/* Region r Outbound AXI to PCIe Address Translation Register 1 */
>> +#define CDNS_PCIE_AT_OB_REGION_PCI_ADDR1(r) \
>> +     (CDNS_PCIE_AT_BASE + 0x0004 + ((r) & 0x1f) * 0x0020)
>> +
>> +/* Region r Outbound PCIe Descriptor Register 0 */
>> +#define CDNS_PCIE_AT_OB_REGION_DESC0(r) \
>> +     (CDNS_PCIE_AT_BASE + 0x0008 + ((r) & 0x1f) * 0x0020)
>> +#define  CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_MASK              GENMASK(3, 0)
>> +#define  CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_MEM               0x2
>> +#define  CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_IO                0x6
>> +#define  CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_CONF_TYPE0        0xa
>> +#define  CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_CONF_TYPE1        0xb
>> +#define  CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_NORMAL_MSG        0xc
>> +#define  CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_VENDOR_MSG        0xd
>> +/* Bit 23 MUST be set in RC mode. */
>> +#define  CDNS_PCIE_AT_OB_REGION_DESC0_HARDCODED_RID  BIT(23)
>> +#define  CDNS_PCIE_AT_OB_REGION_DESC0_DEVFN_MASK     GENMASK(31, 24)
>> +#define  CDNS_PCIE_AT_OB_REGION_DESC0_DEVFN(devfn) \
>> +     (((devfn) << 24) & CDNS_PCIE_AT_OB_REGION_DESC0_DEVFN_MASK)
>> +
>> +/* Region r Outbound PCIe Descriptor Register 1 */
>> +#define CDNS_PCIE_AT_OB_REGION_DESC1(r)      \
>> +     (CDNS_PCIE_AT_BASE + 0x000c + ((r) & 0x1f) * 0x0020)
>> +#define  CDNS_PCIE_AT_OB_REGION_DESC1_BUS_MASK       GENMASK(7, 0)
>> +#define  CDNS_PCIE_AT_OB_REGION_DESC1_BUS(bus) \
>> +     ((bus) & CDNS_PCIE_AT_OB_REGION_DESC1_BUS_MASK)
>> +
>> +/* Region r AXI Region Base Address Register 0 */
>> +#define CDNS_PCIE_AT_OB_REGION_CPU_ADDR0(r) \
>> +     (CDNS_PCIE_AT_BASE + 0x0018 + ((r) & 0x1f) * 0x0020)
>> +#define  CDNS_PCIE_AT_OB_REGION_CPU_ADDR0_NBITS_MASK GENMASK(5, 0)
>> +#define  CDNS_PCIE_AT_OB_REGION_CPU_ADDR0_NBITS(nbits) \
>> +     (((nbits) - 1) & CDNS_PCIE_AT_OB_REGION_CPU_ADDR0_NBITS_MASK)
>> +
>> +/* Region r AXI Region Base Address Register 1 */
>> +#define CDNS_PCIE_AT_OB_REGION_CPU_ADDR1(r) \
>> +     (CDNS_PCIE_AT_BASE + 0x001c + ((r) & 0x1f) * 0x0020)
>> +
>> +/* Root Port BAR Inbound PCIe to AXI Address Translation Register */
>> +#define CDNS_PCIE_AT_IB_RP_BAR_ADDR0(bar) \
>> +     (CDNS_PCIE_AT_BASE + 0x0800 + (bar) * 0x0008)
>> +#define  CDNS_PCIE_AT_IB_RP_BAR_ADDR0_NBITS_MASK     GENMASK(5, 0)
>> +#define  CDNS_PCIE_AT_IB_RP_BAR_ADDR0_NBITS(nbits) \
>> +     (((nbits) - 1) & CDNS_PCIE_AT_IB_RP_BAR_ADDR0_NBITS_MASK)
>> +#define CDNS_PCIE_AT_IB_RP_BAR_ADDR1(bar) \
>> +     (CDNS_PCIE_AT_BASE + 0x0804 + (bar) * 0x0008)
>> +
>> +enum cdns_pcie_rp_bar {
>> +     RP_BAR0,
>> +     RP_BAR1,
>> +     RP_NO_BAR
>> +};
>> +
>> +/* Endpoint Function BAR Inbound PCIe to AXI Address Translation Register */
>> +#define CDNS_PCIE_AT_IB_EP_FUNC_BAR_ADDR0(fn, bar) \
>> +     (CDNS_PCIE_AT_BASE + 0x0840 + (fn) * 0x0040 + (bar) * 0x0008)
>> +#define CDNS_PCIE_AT_IB_EP_FUNC_BAR_ADDR1(fn, bar) \
>> +     (CDNS_PCIE_AT_BASE + 0x0844 + (fn) * 0x0040 + (bar) * 0x0008)
>> +
>> +/* Normal/Vendor specific message access: offset inside some outbound region */
>> +#define CDNS_PCIE_NORMAL_MSG_ROUTING_MASK    GENMASK(7, 5)
>> +#define CDNS_PCIE_NORMAL_MSG_ROUTING(route) \
>> +     (((route) << 5) & CDNS_PCIE_NORMAL_MSG_ROUTING_MASK)
>> +#define CDNS_PCIE_NORMAL_MSG_CODE_MASK               GENMASK(15, 8)
>> +#define CDNS_PCIE_NORMAL_MSG_CODE(code) \
>> +     (((code) << 8) & CDNS_PCIE_NORMAL_MSG_CODE_MASK)
>> +#define CDNS_PCIE_MSG_NO_DATA                        BIT(16)
>> +
>> +enum cdns_pcie_msg_code {
>> +     MSG_CODE_ASSERT_INTA    = 0x20,
>> +     MSG_CODE_ASSERT_INTB    = 0x21,
>> +     MSG_CODE_ASSERT_INTC    = 0x22,
>> +     MSG_CODE_ASSERT_INTD    = 0x23,
>> +     MSG_CODE_DEASSERT_INTA  = 0x24,
>> +     MSG_CODE_DEASSERT_INTB  = 0x25,
>> +     MSG_CODE_DEASSERT_INTC  = 0x26,
>> +     MSG_CODE_DEASSERT_INTD  = 0x27,
>> +};
>> +
>> +enum cdns_pcie_msg_routing {
>> +     /* Route to Root Complex */
>> +     MSG_ROUTING_TO_RC,
>> +
>> +     /* Use Address Routing */
>> +     MSG_ROUTING_BY_ADDR,
>> +
>> +     /* Use ID Routing */
>> +     MSG_ROUTING_BY_ID,
>> +
>> +     /* Route as Broadcast Message from Root Complex */
>> +     MSG_ROUTING_BCAST,
>> +
>> +     /* Local message; terminate at receiver (INTx messages) */
>> +     MSG_ROUTING_LOCAL,
>> +
>> +     /* Gather & route to Root Complex (PME_TO_Ack message) */
>> +     MSG_ROUTING_GATHER,
>> +};
>> +
>> +/**
>> + * struct cdns_pcie - private data for Cadence PCIe controller drivers
>> + * @reg_base: IO mapped register base
>> + * @mem_res: start/end offsets in the physical system memory to map PCI accesses
>> + * @is_rc: tell whether the PCIe controller mode is Root Complex or Endpoint.
>> + * @bus: In Root Complex mode, the bus number
>> + */
>> +struct cdns_pcie {
>> +     void __iomem            *reg_base;
>> +     struct resource         *mem_res;
>> +     bool                    is_rc;
>> +     u8                      bus;
>> +};
>> +
>> +/* Register access */
>> +static inline void cdns_pcie_writeb(struct cdns_pcie *pcie, u32 reg, u8 value)
>> +{
>> +     writeb(value, pcie->reg_base + reg);
>> +}
>> +
>> +static inline void cdns_pcie_writew(struct cdns_pcie *pcie, u32 reg, u16 value)
>> +{
>> +     writew(value, pcie->reg_base + reg);
>> +}
>> +
>> +static inline void cdns_pcie_writel(struct cdns_pcie *pcie, u32 reg, u32 value)
>> +{
>> +     writel(value, pcie->reg_base + reg);
>> +}
>> +
>> +static inline u32 cdns_pcie_readl(struct cdns_pcie *pcie, u32 reg)
>> +{
>> +     return readl(pcie->reg_base + reg);
>> +}
>> +
>> +/* Root Port register access */
>> +static inline void cdns_pcie_rp_writeb(struct cdns_pcie *pcie,
>> +                                    u32 reg, u8 value)
>> +{
>> +     writeb(value, pcie->reg_base + CDNS_PCIE_RP_BASE + reg);
>> +}
>> +
>> +static inline void cdns_pcie_rp_writew(struct cdns_pcie *pcie,
>> +                                    u32 reg, u16 value)
>> +{
>> +     writew(value, pcie->reg_base + CDNS_PCIE_RP_BASE + reg);
>> +}
>> +
>> +/* Endpoint Function register access */
>> +static inline void cdns_pcie_ep_fn_writeb(struct cdns_pcie *pcie, u8 fn,
>> +                                       u32 reg, u8 value)
>> +{
>> +     writeb(value, pcie->reg_base + CDNS_PCIE_EP_FUNC_BASE(fn) + reg);
>> +}
>> +
>> +static inline void cdns_pcie_ep_fn_writew(struct cdns_pcie *pcie, u8 fn,
>> +                                       u32 reg, u16 value)
>> +{
>> +     writew(value, pcie->reg_base + CDNS_PCIE_EP_FUNC_BASE(fn) + reg);
>> +}
>> +
>> +static inline void cdns_pcie_ep_fn_writel(struct cdns_pcie *pcie, u8 fn,
>> +                                       u32 reg, u16 value)
>> +{
>> +     writel(value, pcie->reg_base + CDNS_PCIE_EP_FUNC_BASE(fn) + reg);
>> +}
>> +
>> +static inline u8 cdns_pcie_ep_fn_readb(struct cdns_pcie *pcie, u8 fn, u32 reg)
>> +{
>> +     return readb(pcie->reg_base + CDNS_PCIE_EP_FUNC_BASE(fn) + reg);
>> +}
>> +
>> +static inline u16 cdns_pcie_ep_fn_readw(struct cdns_pcie *pcie, u8 fn, u32 reg)
>> +{
>> +     return readw(pcie->reg_base + CDNS_PCIE_EP_FUNC_BASE(fn) + reg);
>> +}
>> +
>> +static inline u32 cdns_pcie_ep_fn_readl(struct cdns_pcie *pcie, u8 fn, u32 reg)
>> +{
>> +     return readl(pcie->reg_base + CDNS_PCIE_EP_FUNC_BASE(fn) + reg);
>> +}
> 
> Same comments for all endpoint related functions and defines above.
> 
> Thanks,
> Lorenzo
> 
>> +void cdns_pcie_set_outbound_region(struct cdns_pcie *pcie, u32 r, bool is_io,
>> +                                u64 cpu_addr, u64 pci_addr, size_t size);
>> +
>> +void cdns_pcie_set_outbound_region_for_normal_msg(struct cdns_pcie *pcie, u32 r,
>> +                                               u64 cpu_addr);
>> +
>> +void cdns_pcie_reset_outbound_region(struct cdns_pcie *pcie, u32 r);
>> +
>> +#endif /* _PCIE_CADENCE_H */
>> --
>> 2.11.0
>>
> IMPORTANT NOTICE: The contents of this email and any attachments are confidential and may also be privileged. If you are not the intended recipient, please notify the sender immediately and do not disclose the contents to any other person, use it for any purpose, or store or copy the information in any medium. Thank you.
> 


-- 
Cyrille Pitchen, Free Electrons
Embedded Linux and Kernel engineering
http://free-electrons.com

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 5/5] PCI: cadence: add EndPoint Controller driver for Cadence PCIe controller
  2017-12-01 12:20   ` Lorenzo Pieralisi
@ 2017-12-04 14:56     ` Cyrille Pitchen
  2017-12-05  9:19     ` Kishon Vijay Abraham I
  1 sibling, 0 replies; 39+ messages in thread
From: Cyrille Pitchen @ 2017-12-04 14:56 UTC (permalink / raw)
  To: Lorenzo Pieralisi
  Cc: bhelgaas, kishon, linux-pci, adouglas, stelford, dgary, kgopi,
	eandrews, thomas.petazzoni, sureshp, nsekhar, linux-kernel, robh,
	devicetree

Hi Lorenzo,

Le 01/12/2017 à 13:20, Lorenzo Pieralisi a écrit :
> On Thu, Nov 23, 2017 at 04:01:50PM +0100, Cyrille Pitchen wrote:
>> This patch adds support to the Cadence PCIe controller in endpoint mode.
> 
> Please add a brief description to the log to describe the most salient
> features.
> 
>> Signed-off-by: Cyrille Pitchen <cyrille.pitchen@free-electrons.com>
>> ---
>>  drivers/pci/cadence/Kconfig           |   9 +
>>  drivers/pci/cadence/Makefile          |   1 +
>>  drivers/pci/cadence/pcie-cadence-ep.c | 553 ++++++++++++++++++++++++++++++++++
>>  3 files changed, 563 insertions(+)
>>  create mode 100644 drivers/pci/cadence/pcie-cadence-ep.c
>>
>> diff --git a/drivers/pci/cadence/Kconfig b/drivers/pci/cadence/Kconfig
>> index 120306cae2aa..b2e6af71f39e 100644
>> --- a/drivers/pci/cadence/Kconfig
>> +++ b/drivers/pci/cadence/Kconfig
>> @@ -21,4 +21,13 @@ config PCIE_CADENCE_HOST
>>  	  mode. This PCIe controller may be embedded into many different vendors
>>  	  SoCs.
>>  
>> +config PCIE_CADENCE_EP
>> +	bool "Cadence PCIe endpoint controller"
>> +	depends on PCI_ENDPOINT
>> +	select PCIE_CADENCE
>> +	help
>> +	  Say Y here if you want to support the Cadence PCIe  controller in
>> +	  endpoint mode. This PCIe controller may be embedded into many
>> +	  different vendors SoCs.
>> +
>>  endif # PCI_CADENCE
>> diff --git a/drivers/pci/cadence/Makefile b/drivers/pci/cadence/Makefile
>> index d57d192d2595..61e9c8d6839d 100644
>> --- a/drivers/pci/cadence/Makefile
>> +++ b/drivers/pci/cadence/Makefile
>> @@ -1,2 +1,3 @@
>>  obj-$(CONFIG_PCIE_CADENCE) += pcie-cadence.o
>>  obj-$(CONFIG_PCIE_CADENCE_HOST) += pcie-cadence-host.o
>> +obj-$(CONFIG_PCIE_CADENCE_EP) += pcie-cadence-ep.o
>> diff --git a/drivers/pci/cadence/pcie-cadence-ep.c b/drivers/pci/cadence/pcie-cadence-ep.c
>> new file mode 100644
>> index 000000000000..a1d761101a9c
>> --- /dev/null
>> +++ b/drivers/pci/cadence/pcie-cadence-ep.c
>> @@ -0,0 +1,553 @@
>> +/*
>> + * Cadence PCIe host controller driver.
> 
> You should update this comment.
> 
>> + *
>> + * Copyright (c) 2017 Cadence
>> + *
>> + * Author: Cyrille Pitchen <cyrille.pitchen@free-electrons.com>
>> + *
>> + * This program is free software; you can redistribute it and/or modify
>> + * it under the terms of the GNU General Public License version 2 as
>> + * published by the Free Software Foundation.
>> + *
>> + * This program is distributed in the hope that it will be useful,
>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
>> + * GNU General Public License for more details.
>> + *
>> + * You should have received a copy of the GNU General Public License along with
>> + * this program.  If not, see <http://www.gnu.org/licenses/>.
>> + */
>> +#include <linux/kernel.h>
>> +#include <linux/of.h>
>> +#include <linux/pci-epc.h>
>> +#include <linux/platform_device.h>
>> +#include <linux/pm_runtime.h>
>> +#include <linux/sizes.h>
>> +#include <linux/delay.h>
> 
> Nit: alphabetical order.
> 
>> +#include "pcie-cadence.h"
>> +
>> +#define CDNS_PCIE_EP_MIN_APERTURE		128	/* 128 bytes */
>> +#define CDNS_PCIE_EP_IRQ_PCI_ADDR_NONE		0x1
>> +#define CDNS_PCIE_EP_IRQ_PCI_ADDR_LEGACY	0x3
>> +
>> +/**
>> + * struct cdns_pcie_ep_data - hardware specific data
>> + * @max_regions: maximum nmuber of regions supported by hardware
> 
> s/nmuber/number
> 
>> + */
>> +struct cdns_pcie_ep_data {
>> +	size_t				max_regions;
>> +};
>> +
>> +/**
>> + * struct cdns_pcie_ep - private data for this PCIe endpoint controller driver
>> + * @pcie: Cadence PCIe controller
>> + * @data: pointer to a 'struct cdns_pcie_data'
>> + */
>> +struct cdns_pcie_ep {
>> +	struct cdns_pcie		pcie;
>> +	const struct cdns_pcie_ep_data	*data;
>> +	struct pci_epc			*epc;
>> +	unsigned long			ob_region_map;
>> +	phys_addr_t			*ob_addr;
>> +	phys_addr_t			irq_phys_addr;
>> +	void __iomem			*irq_cpu_addr;
>> +	u64				irq_pci_addr;
>> +	u8				irq_pending;
>> +};
>> +
>> +static int cdns_pcie_ep_write_header(struct pci_epc *epc,
>> +				     struct pci_epf_header *hdr)
>> +{
>> +	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
>> +	struct cdns_pcie *pcie = &ep->pcie;
>> +	u8 fn = 0;
>> +
>> +	if (fn == 0) {
> 
> I think there is some code to retrieve fn missing here.
>

I've used some "u8 fn = 0;" when implementing every handler of
'struct pci_epc_ops' because the current API of the EPC library supports
only one function for now but I guess this will change soon.

So I've implement each handler as if a new 'fn' parameter were added to the
handler prototype. I guess it would make sense to add such a parameter to
the pci_epc_*() functions in pci-epc-core.c.
 
>> +		u32 id = CDNS_PCIE_LM_ID_VENDOR(hdr->vendorid) |
>> +			 CDNS_PCIE_LM_ID_SUBSYS(hdr->subsys_vendor_id);
>> +		cdns_pcie_writel(pcie, CDNS_PCIE_LM_ID, id);
>> +	}
>> +	cdns_pcie_ep_fn_writew(pcie, fn, PCI_DEVICE_ID, hdr->deviceid);
>> +	cdns_pcie_ep_fn_writeb(pcie, fn, PCI_REVISION_ID, hdr->revid);
>> +	cdns_pcie_ep_fn_writeb(pcie, fn, PCI_CLASS_PROG, hdr->progif_code);
>> +	cdns_pcie_ep_fn_writew(pcie, fn, PCI_CLASS_DEVICE,
>> +			       hdr->subclass_code | hdr->baseclass_code << 8);
>> +	cdns_pcie_ep_fn_writeb(pcie, fn, PCI_CACHE_LINE_SIZE,
>> +			       hdr->cache_line_size);
>> +	cdns_pcie_ep_fn_writew(pcie, fn, PCI_SUBSYSTEM_ID, hdr->subsys_id);
>> +	cdns_pcie_ep_fn_writeb(pcie, fn, PCI_INTERRUPT_PIN, hdr->interrupt_pin);
>> +
>> +	return 0;
>> +}
>> +
>> +static int cdns_pcie_ep_set_bar(struct pci_epc *epc, enum pci_barno bar,
>> +				dma_addr_t bar_phys, size_t size, int flags)
>> +{
>> +	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
>> +	struct cdns_pcie *pcie = &ep->pcie;
>> +	u32 addr0, addr1, reg, cfg, b, aperture, ctrl;
>> +	u8 fn = 0;
>> +	u64 sz;
>> +
>> +	/* BAR size is 2^(aperture + 7) */
>> +	sz = max_t(size_t, size, CDNS_PCIE_EP_MIN_APERTURE);
>> +	sz = 1ULL << fls64(sz - 1);
>> +	aperture = ilog2(sz) - 7; /* 128B -> 0, 256B -> 1, 512B -> 2, ... */
>> +
>> +	if ((flags & PCI_BASE_ADDRESS_SPACE) == PCI_BASE_ADDRESS_SPACE_IO) {
>> +		ctrl = CDNS_PCIE_LM_BAR_CFG_CTRL_IO_32BITS;
>> +	} else {
>> +		bool is_prefetch = !!(flags & PCI_BASE_ADDRESS_MEM_PREFETCH);
>> +		bool is_64bits = sz > SZ_2G;
>> +
>> +		if (is_64bits && (bar & 1))
>> +			return -EINVAL;
>> +
>> +		switch (is_64bits << 1 | is_prefetch) {
> 
> I would not mind implementing this as a nested if-else, I am not a big
> fan of using bool this way.
>

It did it this way to cope with the 80 columns rule but I will try to do it
with nested if-elst statements. The macro names I've chosen are quite long!

>> +		case 0:
>> +			ctrl = CDNS_PCIE_LM_BAR_CFG_CTRL_MEM_32BITS;
>> +			break;
>> +
>> +		case 1:
>> +			ctrl = CDNS_PCIE_LM_BAR_CFG_CTRL_PREFETCH_MEM_32BITS;
>> +			break;
>> +
>> +		case 2:
>> +			ctrl = CDNS_PCIE_LM_BAR_CFG_CTRL_MEM_64BITS;
>> +			break;
>> +
>> +		case 3:
>> +			ctrl = CDNS_PCIE_LM_BAR_CFG_CTRL_PREFETCH_MEM_64BITS;
>> +			break;
>> +		}
>> +	}
>> +
>> +	addr0 = lower_32_bits(bar_phys);
>> +	addr1 = upper_32_bits(bar_phys);
>> +	cdns_pcie_writel(pcie, CDNS_PCIE_AT_IB_EP_FUNC_BAR_ADDR0(fn, bar),
>> +			 addr0);
>> +	cdns_pcie_writel(pcie, CDNS_PCIE_AT_IB_EP_FUNC_BAR_ADDR1(fn, bar),
>> +			 addr1);
> 
> Is fn always 0 ?
>

If the API of the EPC library is updated, fn could be something else but 0
later but for now, fn is actually always 0.

>> +	if (bar < BAR_4) {
>> +		reg = CDNS_PCIE_LM_EP_FUNC_BAR_CFG0(fn);
>> +		b = bar;
>> +	} else {
>> +		reg = CDNS_PCIE_LM_EP_FUNC_BAR_CFG1(fn);
>> +		b = bar - BAR_4;
>> +	}
>> +
>> +	cfg = cdns_pcie_readl(pcie, reg);
>> +	cfg &= ~(CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_APERTURE_MASK(b) |
>> +		 CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(b));
>> +	cfg |= (CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_APERTURE(b, aperture) |
>> +		CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_CTRL(b, ctrl));
>> +	cdns_pcie_writel(pcie, reg, cfg);
>> +
>> +	return 0;
>> +}
>> +
>> +static void cdns_pcie_ep_clear_bar(struct pci_epc *epc, enum pci_barno bar)
>> +{
>> +	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
>> +	struct cdns_pcie *pcie = &ep->pcie;
>> +	u32 reg, cfg, b, ctrl;
>> +	u8 fn = 0;
>> +
>> +	if (bar < BAR_4) {
>> +		reg = CDNS_PCIE_LM_EP_FUNC_BAR_CFG0(fn);
>> +		b = bar;
>> +	} else {
>> +		reg = CDNS_PCIE_LM_EP_FUNC_BAR_CFG1(fn);
>> +		b = bar - BAR_4;
>> +	}
>> +
>> +	ctrl = CDNS_PCIE_LM_BAR_CFG_CTRL_DISABLED;
>> +	cfg = cdns_pcie_readl(pcie, reg);
>> +	cfg &= ~(CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_APERTURE_MASK(b) |
>> +		 CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(b));
>> +	cfg |= CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(ctrl);
>> +	cdns_pcie_writel(pcie, reg, cfg);
>> +
>> +	cdns_pcie_writel(pcie, CDNS_PCIE_AT_IB_EP_FUNC_BAR_ADDR0(fn, bar), 0);
>> +	cdns_pcie_writel(pcie, CDNS_PCIE_AT_IB_EP_FUNC_BAR_ADDR1(fn, bar), 0);
>> +}
>> +
>> +static int cdns_pcie_ep_map_addr(struct pci_epc *epc, phys_addr_t addr,
>> +				 u64 pci_addr, size_t size)
>> +{
>> +	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
>> +	struct cdns_pcie *pcie = &ep->pcie;
>> +	u32 r;
>> +
>> +	r = find_first_zero_bit(&ep->ob_region_map, sizeof(ep->ob_region_map));
> 
> Second argument must be in bits not bytes.
> 
> https://marc.info/?l=linux-pci&m=151179781225513&w=2
>

OK, thanks!

>> +	if (r >= ep->data->max_regions - 1) {
>> +		dev_err(&epc->dev, "no free outbound region\n");
>> +		return -EINVAL;
>> +	}
>> +
>> +	cdns_pcie_set_outbound_region(pcie, r, false, addr, pci_addr, size);
>> +
>> +	set_bit(r, &ep->ob_region_map);
>> +	ep->ob_addr[r] = addr;
>> +
>> +	return 0;
>> +}
>> +
>> +static void cdns_pcie_ep_unmap_addr(struct pci_epc *epc, phys_addr_t addr)
>> +{
>> +	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
>> +	struct cdns_pcie *pcie = &ep->pcie;
>> +	u32 r;
>> +
>> +	for (r = 0; r < ep->data->max_regions - 1; r++)
>> +		if (ep->ob_addr[r] == addr)
>> +			break;
>> +
>> +	if (r >= ep->data->max_regions - 1)
> 
> == ?
>

I will change that.

>> +		return;
>> +
>> +	cdns_pcie_reset_outbound_region(pcie, r);
>> +
>> +	ep->ob_addr[r] = 0;
>> +	clear_bit(r, &ep->ob_region_map);
>> +}
>> +
>> +static int cdns_pcie_ep_set_msi(struct pci_epc *epc, u8 mmc)
>> +{
>> +	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
>> +	struct cdns_pcie *pcie = &ep->pcie;
>> +	u32 cap = CDNS_PCIE_EP_FUNC_MSI_CAP_OFFSET;
>> +	u16 flags;
>> +	u8 fn = 0;
>> +
>> +	/* Validate the ID of the MSI Capability structure. */
>> +	if (cdns_pcie_ep_fn_readb(pcie, fn, cap) != PCI_CAP_ID_MSI)
>> +		return -EINVAL;
>> +
>> +	/*
>> +	 * Set the Multiple Message Capable bitfield into the Message Control
>> +	 * register.
>> +	 */
>> +	flags = cdns_pcie_ep_fn_readw(pcie, fn, cap + PCI_MSI_FLAGS);
>> +	flags = (flags & ~PCI_MSI_FLAGS_QMASK) | (mmc << 1);
>> +	flags |= PCI_MSI_FLAGS_64BIT;
>> +	flags &= ~PCI_MSI_FLAGS_MASKBIT;
>> +	cdns_pcie_ep_fn_writew(pcie, fn, cap + PCI_MSI_FLAGS, flags);
>> +
>> +	return 0;
>> +}
>> +
>> +static int cdns_pcie_ep_get_msi(struct pci_epc *epc)
>> +{
>> +	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
>> +	struct cdns_pcie *pcie = &ep->pcie;
>> +	u32 cap = CDNS_PCIE_EP_FUNC_MSI_CAP_OFFSET;
>> +	u16 flags, mmc, mme;
>> +	u8 fn = 0;
>> +
>> +	/* Validate the ID of the MSI Capability structure. */
>> +	if (cdns_pcie_ep_fn_readb(pcie, fn, cap) != PCI_CAP_ID_MSI)
>> +		return -EINVAL;
>> +
>> +	/* Validate that the MSI feature is actually enabled. */
>> +	flags = cdns_pcie_ep_fn_readw(pcie, fn, cap + PCI_MSI_FLAGS);
>> +	if (!(flags & PCI_MSI_FLAGS_ENABLE))
>> +		return -EINVAL;
>> +
>> +	/*
>> +	 * Get the Multiple Message Enable bitfield from the Message Control
>> +	 * register.
>> +	 */
>> +	mmc = (flags & PCI_MSI_FLAGS_QMASK) >> 1;
>> +	mme = (flags & PCI_MSI_FLAGS_QSIZE) >> 4;
>> +	if (mme > mmc)
>> +		mme = mmc;
>> +	if (mme > 5)
>> +		mme = 5;
> 
> You should comment on what this 5 means and why it is fine to cap mme.
> 
>> +
>> +	return mme;
>> +}
>> +
>> +static void cdns_pcie_ep_assert_intx(struct cdns_pcie_ep *ep, u8 fn,
>> +				     u8 intx, bool is_asserted)
>> +{
>> +	struct cdns_pcie *pcie = &ep->pcie;
>> +	u32 r = ep->data->max_regions - 1;
>> +	u32 offset;
>> +	u16 status;
>> +	u8 msg_code;
>> +
>> +	intx &= 3;
>> +
>> +	/* Set the outbound region if needed. */
>> +	if (unlikely(ep->irq_pci_addr != CDNS_PCIE_EP_IRQ_PCI_ADDR_LEGACY)) {
>> +		/* Last region was reserved for IRQ writes. */
>> +		cdns_pcie_set_outbound_region_for_normal_msg(pcie, r,
>> +							     ep->irq_phys_addr);
>> +		ep->irq_pci_addr = CDNS_PCIE_EP_IRQ_PCI_ADDR_LEGACY;
>> +	}
>> +
>> +	if (is_asserted) {
>> +		ep->irq_pending |= BIT(intx);
>> +		msg_code = MSG_CODE_ASSERT_INTA + intx;
>> +	} else {
>> +		ep->irq_pending &= ~BIT(intx);
>> +		msg_code = MSG_CODE_DEASSERT_INTA + intx;
>> +	}
>> +
>> +	status = cdns_pcie_ep_fn_readw(pcie, fn, PCI_STATUS);
>> +	if (((status & PCI_STATUS_INTERRUPT) != 0) ^ (ep->irq_pending != 0)) {
>> +		status ^= PCI_STATUS_INTERRUPT;
>> +		cdns_pcie_ep_fn_writew(pcie, fn, PCI_STATUS, status);
>> +	}
>> +
>> +	offset = CDNS_PCIE_NORMAL_MSG_ROUTING(MSG_ROUTING_LOCAL) |
>> +		 CDNS_PCIE_NORMAL_MSG_CODE(msg_code) |
>> +		 CDNS_PCIE_MSG_NO_DATA;
>> +	writel(0, ep->irq_cpu_addr + offset);
>> +}
>> +
>> +static int cdns_pcie_ep_send_legacy_irq(struct cdns_pcie_ep *ep, u8 fn, u8 intx)
>> +{
>> +	u16 cmd;
>> +
>> +	cmd = cdns_pcie_ep_fn_readw(&ep->pcie, fn, PCI_COMMAND);
>> +	if (cmd & PCI_COMMAND_INTX_DISABLE)
>> +		return -EINVAL;
>> +
>> +	cdns_pcie_ep_assert_intx(ep, fn, intx, true);
>> +	mdelay(1);
> 
> Add a comment please to explain the mdelay value.
>

Honestly, I don't know for this value: I've just adapted the source code
from what was already done by dra7xx_pcie_raise_legacy_irq() from
drivers/pci/dwc/pci-dra7xx.c.

>> +	cdns_pcie_ep_assert_intx(ep, fn, intx, false);
>> +	return 0;
>> +}
>> +
>> +static int cdns_pcie_ep_raise_irq(struct pci_epc *epc,
>> +				  enum pci_epc_irq_type type, u8 interrupt_num)
>> +{
>> +	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
>> +	struct cdns_pcie *pcie = &ep->pcie;
>> +	u32 cap = CDNS_PCIE_EP_FUNC_MSI_CAP_OFFSET;
>> +	u16 flags, mmc, mme, data, data_mask;
>> +	u8 msi_count;
>> +	u64 pci_addr, pci_addr_mask = 0xff;
>> +	u8 fn = 0;
>> +
>> +	/* Handle legacy IRQ. */
>> +	if (type == PCI_EPC_IRQ_LEGACY)
>> +		return cdns_pcie_ep_send_legacy_irq(ep, fn, 0);
>> +
>> +	/* Otherwise MSI. */
>> +	if (type != PCI_EPC_IRQ_MSI)
>> +		return -EINVAL;
>> +
>> +	/* Check whether the MSI feature has been enabled by the PCI host. */
>> +	flags = cdns_pcie_ep_fn_readw(pcie, fn, cap + PCI_MSI_FLAGS);
>> +	if (!(flags & PCI_MSI_FLAGS_ENABLE))
>> +		return -EINVAL;
>> +
>> +	/* Get the number of enabled MSIs */
>> +	mmc = (flags & PCI_MSI_FLAGS_QMASK) >> 1;
>> +	mme = (flags & PCI_MSI_FLAGS_QSIZE) >> 4;
>> +	if (mme > mmc)
>> +		mme = mmc;
>> +	if (mme > 5)
>> +		mme = 5;
> 
> Same comment as above.
> 
>> +	msi_count = 1 << mme;
>> +	if (!interrupt_num || interrupt_num > msi_count)
>> +		return -EINVAL;
>> +
>> +	/* Compute the data value to be written. */
>> +	data_mask = msi_count - 1;
>> +	data = cdns_pcie_ep_fn_readw(pcie, fn, cap + PCI_MSI_DATA_64);
>> +	data = (data & ~data_mask) | ((interrupt_num - 1) & data_mask);
>> +
>> +	/* Get the PCI address where to write the data into. */
>> +	pci_addr = cdns_pcie_ep_fn_readl(pcie, fn, cap + PCI_MSI_ADDRESS_HI);
>> +	pci_addr <<= 32;
>> +	pci_addr |= cdns_pcie_ep_fn_readl(pcie, fn, cap + PCI_MSI_ADDRESS_LO);
>> +	pci_addr &= GENMASK_ULL(63, 2);
>> +
>> +	/* Set the outbound region if needed. */
>> +	if (unlikely(ep->irq_pci_addr != pci_addr)) {
>> +		/* Last region was reserved for IRQ writes. */
>> +		cdns_pcie_set_outbound_region(pcie, ep->data->max_regions - 1,
>> +					      false,
>> +					      ep->irq_phys_addr,
>> +					      pci_addr & ~pci_addr_mask,
>> +					      pci_addr_mask + 1);
>> +		ep->irq_pci_addr = pci_addr;
>> +	}
>> +	writew(data, ep->irq_cpu_addr + (pci_addr & pci_addr_mask));
>> +
>> +	return 0;
>> +}
>> +
>> +static int cdns_pcie_ep_start(struct pci_epc *epc)
>> +{
>> +	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
>> +	struct cdns_pcie *pcie = &ep->pcie;
>> +	struct pci_epf *epf;
>> +	u32 cfg;
>> +	u8 fn = 0;
>> +
>> +	/* Enable this endpoint function. */
>> +	cfg = cdns_pcie_readl(pcie, CDNS_PCIE_LM_EP_FUNC_CFG);
>> +	cfg |= BIT(fn);
>> +	cdns_pcie_writel(pcie, CDNS_PCIE_LM_EP_FUNC_CFG, cfg);
>> +
>> +	/*
>> +	 * Already linked-up: don't call directly pci_epc_linkup() because we've
>> +	 * already locked the epc->lock.
>> +	 */
>> +	list_for_each_entry(epf, &epc->pci_epf, list)
>> +		pci_epf_linkup(epf);
>> +
>> +	return 0;
>> +}
>> +
>> +static void cdns_pcie_ep_stop(struct pci_epc *epc)
>> +{
>> +	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
>> +	struct cdns_pcie *pcie = &ep->pcie;
>> +	u32 cfg;
>> +	u8 fn = 0;
>> +
>> +	/* Disable this endpoint function (function 0 can't be disabled). */
> 
> I do not understand this comment and how it applies to the code,
> in other words fn is always 0 here (so it can't be disabled)
> I do not understand what this code is there for.
>

fn == 0 for the same reason as for other 'struct pci_epc_ops' handlers.
I added this comment above because BIT(0) is hard-wired to 1: in other word
function 0 is the only one that can't be disabled.

So the current code actually does nothing. I've implemented anyway still
hoping that the API would change so function other than 0 could be used.

>> +	cfg = cdns_pcie_readl(pcie, CDNS_PCIE_LM_EP_FUNC_CFG);
>> +	cfg &= ~BIT(fn);
>> +	cdns_pcie_writel(pcie, CDNS_PCIE_LM_EP_FUNC_CFG, cfg);
>> +}
>> +
>> +static const struct pci_epc_ops cdns_pcie_epc_ops = {
>> +	.write_header	= cdns_pcie_ep_write_header,
>> +	.set_bar	= cdns_pcie_ep_set_bar,
>> +	.clear_bar	= cdns_pcie_ep_clear_bar,
>> +	.map_addr	= cdns_pcie_ep_map_addr,
>> +	.unmap_addr	= cdns_pcie_ep_unmap_addr,
>> +	.set_msi	= cdns_pcie_ep_set_msi,
>> +	.get_msi	= cdns_pcie_ep_get_msi,
>> +	.raise_irq	= cdns_pcie_ep_raise_irq,
>> +	.start		= cdns_pcie_ep_start,
>> +	.stop		= cdns_pcie_ep_stop,
>> +};
>> +
>> +static const struct cdns_pcie_ep_data cdns_pcie_ep_data = {
>> +	.max_regions	= 16,
>> +};
> 
> As I mentioned in patch 3, should this be set-up with DT ?
>

OK, I'll move it as a DT property.

Best regards,

Cyrille

> Thanks,
> Lorenzo
> 
>> +
>> +static const struct of_device_id cdns_pcie_ep_of_match[] = {
>> +	{ .compatible = "cdns,cdns-pcie-ep",
>> +	  .data = &cdns_pcie_ep_data },
>> +
>> +	{ },
>> +};
>> +
>> +static int cdns_pcie_ep_probe(struct platform_device *pdev)
>> +{
>> +	struct device *dev = &pdev->dev;
>> +	struct device_node *np = dev->of_node;
>> +	const struct of_device_id *of_id;
>> +	struct cdns_pcie_ep *ep;
>> +	struct cdns_pcie *pcie;
>> +	struct pci_epc *epc;
>> +	struct resource *res;
>> +	size_t max_regions;
>> +	int ret;
>> +
>> +	ep = devm_kzalloc(dev, sizeof(*ep), GFP_KERNEL);
>> +	if (!ep)
>> +		return -ENOMEM;
>> +
>> +	platform_set_drvdata(pdev, ep);
>> +
>> +	pcie = &ep->pcie;
>> +	pcie->is_rc = false;
>> +
>> +	of_id = of_match_node(cdns_pcie_ep_of_match, np);
>> +	ep->data = (const struct cdns_pcie_ep_data *)of_id->data;
>> +
>> +	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "reg");
>> +	pcie->reg_base = devm_ioremap_resource(dev, res);
>> +	if (IS_ERR(pcie->reg_base)) {
>> +		dev_err(dev, "missing \"reg\"\n");
>> +		return PTR_ERR(pcie->reg_base);
>> +	}
>> +
>> +	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "mem");
>> +	if (!res) {
>> +		dev_err(dev, "missing \"mem\"\n");
>> +		return -EINVAL;
>> +	}
>> +	pcie->mem_res = res;
>> +
>> +	max_regions = ep->data->max_regions;
>> +	ep->ob_addr = devm_kzalloc(dev, max_regions * sizeof(*ep->ob_addr),
>> +				   GFP_KERNEL);
>> +	if (!ep->ob_addr)
>> +		return -ENOMEM;
>> +
>> +	pm_runtime_enable(dev);
>> +	ret = pm_runtime_get_sync(dev);
>> +	if (ret < 0) {
>> +		dev_err(dev, "pm_runtime_get_sync() failed\n");
>> +		goto err_get_sync;
>> +	}
>> +
>> +	/* Disable all but function 0 (anyway BIT(0) is hardwired to 1). */
>> +	cdns_pcie_writel(pcie, CDNS_PCIE_LM_EP_FUNC_CFG, BIT(0));
>> +
>> +	epc = devm_pci_epc_create(dev, &cdns_pcie_epc_ops);
>> +	if (IS_ERR(epc)) {
>> +		dev_err(dev, "failed to create epc device\n");
>> +		ret = PTR_ERR(epc);
>> +		goto err_init;
>> +	}
>> +
>> +	ep->epc = epc;
>> +	epc_set_drvdata(epc, ep);
>> +
>> +	ret = of_property_read_u8(np, "max-functions", &epc->max_functions);
>> +	if (ret < 0)
>> +		epc->max_functions = 1;
>> +
>> +	ret = pci_epc_mem_init(epc, pcie->mem_res->start,
>> +			       resource_size(pcie->mem_res));
>> +	if (ret < 0) {
>> +		dev_err(dev, "failed to initialize the memory space\n");
>> +		goto err_init;
>> +	}
>> +
>> +	ep->irq_cpu_addr = pci_epc_mem_alloc_addr(epc, &ep->irq_phys_addr,
>> +						  SZ_128K);
>> +	if (!ep->irq_cpu_addr) {
>> +		dev_err(dev, "failed to reserve memory space for MSI\n");
>> +		ret = -ENOMEM;
>> +		goto free_epc_mem;
>> +	}
>> +	ep->irq_pci_addr = CDNS_PCIE_EP_IRQ_PCI_ADDR_NONE;
>> +
>> +	return 0;
>> +
>> + free_epc_mem:
>> +	pci_epc_mem_exit(epc);
>> +
>> + err_init:
>> +	pm_runtime_put_sync(dev);
>> +
>> + err_get_sync:
>> +	pm_runtime_disable(dev);
>> +
>> +	return ret;
>> +}
>> +
>> +static struct platform_driver cdns_pcie_ep_driver = {
>> +	.driver = {
>> +		.name = "cdns-pcie-ep",
>> +		.of_match_table = cdns_pcie_ep_of_match,
>> +	},
>> +	.probe = cdns_pcie_ep_probe,
>> +};
>> +builtin_platform_driver(cdns_pcie_ep_driver);
>> -- 
>> 2.11.0
>>
> 


-- 
Cyrille Pitchen, Free Electrons
Embedded Linux and Kernel engineering
http://free-electrons.com

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 3/5] PCI: cadence: Add host driver for Cadence PCIe controller
  2017-12-03 20:44     ` Cyrille Pitchen
@ 2017-12-04 18:20       ` Lorenzo Pieralisi
  2017-12-04 18:49         ` Ard Biesheuvel
  0 siblings, 1 reply; 39+ messages in thread
From: Lorenzo Pieralisi @ 2017-12-04 18:20 UTC (permalink / raw)
  To: Cyrille Pitchen
  Cc: bhelgaas, kishon, linux-pci, adouglas, stelford, dgary, kgopi,
	eandrews, thomas.petazzoni, sureshp, nsekhar, linux-kernel, robh,
	devicetree, ard.biesheuvel

[+Ard]

Hi Cyrille,

On Sun, Dec 03, 2017 at 09:44:46PM +0100, Cyrille Pitchen wrote:

[...]

> >> +cdns_pci_map_bus(struct pci_bus *bus, unsigned int devfn, int where)
> >> +{
> >> +     struct pci_host_bridge *bridge = pci_find_host_bridge(bus);
> >> +     struct cdns_pcie_rc *rc = pci_host_bridge_priv(bridge);
> >> +     struct cdns_pcie *pcie = &rc->pcie;
> >> +     unsigned int busn = bus->number;
> >> +     u32 addr0, desc0;
> >> +
> >> +     if (busn < rc->bus_range->start || busn > rc->bus_range->end)
> >> +             return NULL;
> > 
> > It does not hurt but I wonder whether you really need this check.
> >
> 
> I can remove it.
>  
> >> +     if (busn == rc->bus_range->start) {
> >> +             if (devfn)
> > 
> > I suspect I know why you need this check but I ask you to explain it
> > anyway if you do not mind please.
> >
> 
> If I have understood correctly, Cadence team told me that only the root
> port is available on the first bus through device 0, function 0.
> No other device/function should connected on this bus, all other devices
> are behind at least one PCI bridge.
> 
> I can add a comment here to explain that.

That's understood, the question is what happens if you do scan devfn != 0.

> >> +                     return NULL;
> >> +
> >> +             return pcie->reg_base + (where & 0xfff);
> >> +     }
> >> +
> >> +     /* Update Output registers for AXI region 0. */
> >> +     addr0 = CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_NBITS(12) |
> > 
> > Ok, so for every config access you reprogram addr0 to reflect the
> > correct bus|devfn ID in the PCI bus TLP corresponding to an ECAM address
> > in CPU physical address space, is my understanding correct ?
> >
> 
> The idea is to able to use only a 4KB memory area at a fixed address in the
> space allocated for the PCIe controller in the AXI bus. I guess the plan is
> to leave more space on the AXI bus to map all other PCIe devices.
> 
> This is just my guess. Anyway one purpose of this driver was actually to
> perform all PCI configuration space accesses through this single 4KB memory
> area in the AXI bus, changing the mapping dynamically to reach the relevant
> PCI device. 

Thank you for explaining - that matches my understanding.

> >> +             CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_DEVFN(devfn) |
> >> +             CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_BUS(busn);
> >> +     cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_PCI_ADDR0(0), addr0);
> >> +
> >> +     /* Configuration Type 0 or Type 1 access. */
> >> +     desc0 = CDNS_PCIE_AT_OB_REGION_DESC0_HARDCODED_RID |
> >> +             CDNS_PCIE_AT_OB_REGION_DESC0_DEVFN(0);
> >> +     /*
> >> +      * The bus number was already set once for all in desc1 by
> >> +      * cdns_pcie_host_init_address_translation().
> >> +      */
> >> +     if (busn == rc->bus_range->start + 1)
> >> +             desc0 |= CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_CONF_TYPE0;
> >> +     else
> >> +             desc0 |= CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_CONF_TYPE1;
> > 
> > I would like to ask you why you have to do it here and the root port
> > does not figure it out by itself, I do not have the datasheet so I am
> > just asking for my own information.
> 
> PCI configuration space registers of the root port can only be read through
> the APB bus at offset 0:
> ->reg_base + (where & 0xfff)
> 
> They are internal registers of the PCIe controller so no TLP on the PCIe bus.
> 
> However to access the PCI configuration space registers of any other device,
> the PCIe controller builds then sends a TLP on the PCIe bus using the offset
> in the 4KB AXI area as the offset of the register in the PCI configuration
> space:
> ->cfg_base + (where & 0xfff)
> 
> > 
> >> +     cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_DESC0(0), desc0);
> >> +
> >> +     return rc->cfg_base + (where & 0xfff);
> >> +}
> >> +
> >> +static struct pci_ops cdns_pcie_host_ops = {
> >> +     .map_bus        = cdns_pci_map_bus,
> >> +     .read           = pci_generic_config_read,
> >> +     .write          = pci_generic_config_write,
> >> +};
> >> +
> >> +static const struct cdns_pcie_rc_data cdns_pcie_rc_data = {
> >> +     .max_regions    = 32,
> >> +     .vendor_id      = PCI_VENDOR_ID_CDNS,
> >> +     .device_id      = 0x0200,
> >> +     .no_bar_nbits   = 32,
> >> +};
> > 
> > Should (some of) these parameters be retrieved through a DT binding ?
> >
> 
> Indeed, maybe we get max_regions and no_bar_nbits from the DT.
> 
> About the vendor and device IDs, I don't know which would be the best
> choice between some dedicated DT properties or associating a custom
> structure as above to the 'compatible' string.
> 
> Honestly, I don't have any strong preference, please just tell me what
> you would prefer :)

I think it is best to ask DT maintainers (in CC) POV on this, they
certainly have a more comprehensive view than mine on the subject - I
have just noticed that _some_ data can be retrieved through DT therefore
I raised the point - either through different compatible strings or
some IP specific properties.

> >> +static const struct of_device_id cdns_pcie_host_of_match[] = {
> >> +     { .compatible = "cdns,cdns-pcie-host",
> >> +       .data = &cdns_pcie_rc_data },
> >> +
> >> +     { },
> >> +};
> >> +
> >> +static int cdns_pcie_parse_request_of_pci_ranges(struct device *dev,
> >> +                                              struct list_head *resources,
> >> +                                              struct resource **bus_range)
> >> +{
> >> +     int err, res_valid = 0;
> >> +     struct device_node *np = dev->of_node;
> >> +     resource_size_t iobase;
> >> +     struct resource_entry *win, *tmp;
> >> +
> >> +     err = of_pci_get_host_bridge_resources(np, 0, 0xff, resources, &iobase);
> >> +     if (err)
> >> +             return err;
> >> +
> >> +     err = devm_request_pci_bus_resources(dev, resources);
> >> +     if (err)
> >> +             return err;
> >> +
> >> +     resource_list_for_each_entry_safe(win, tmp, resources) {
> >> +             struct resource *res = win->res;
> >> +
> >> +             switch (resource_type(res)) {
> >> +             case IORESOURCE_IO:
> >> +                     err = pci_remap_iospace(res, iobase);
> >> +                     if (err) {
> >> +                             dev_warn(dev, "error %d: failed to map resource %pR\n",
> >> +                                      err, res);
> >> +                             resource_list_destroy_entry(win);
> >> +                     }
> >> +                     break;
> >> +             case IORESOURCE_MEM:
> >> +                     res_valid |= !(res->flags & IORESOURCE_PREFETCH);
> >> +                     break;
> >> +             case IORESOURCE_BUS:
> >> +                     *bus_range = res;
> >> +                     break;
> >> +             }
> >> +     }
> >> +
> >> +     if (res_valid)
> >> +             return 0;
> >> +
> >> +     dev_err(dev, "non-prefetchable memory resource required\n");
> >> +     return -EINVAL;
> > 
> > Nit, I prefer you swap these two as it is done in pci-aardvark.c:
> > 
> >         if (!res_valid) {
> >                 dev_err(dev, "non-prefetchable memory resource required\n");
> >                 return -EINVAL;
> >         }
> > 
> >         return 0;
> > 
> > but as per previous replies this function can be factorized in
> > core PCI code - I would not bother unless you are willing to write
> > the patch series that does the refactoring yourself :)
> > 
> >> +}
> >> +
> >> +static int cdns_pcie_host_init_root_port(struct cdns_pcie_rc *rc)
> >> +{
> >> +     const struct cdns_pcie_rc_data *data = rc->data;
> >> +     struct cdns_pcie *pcie = &rc->pcie;
> >> +     u8 pbn, sbn, subn;
> >> +     u32 value, ctrl;
> >> +
> >> +     /*
> >> +      * Set the root complex BAR configuration register:
> >> +      * - disable both BAR0 and BAR1.
> >> +      * - enable Prefetchable Memory Base and Limit registers in type 1
> >> +      *   config space (64 bits).
> >> +      * - enable IO Base and Limit registers in type 1 config
> >> +      *   space (32 bits).
> >> +      */
> >> +     ctrl = CDNS_PCIE_LM_BAR_CFG_CTRL_DISABLED;
> >> +     value = CDNS_PCIE_LM_RC_BAR_CFG_BAR0_CTRL(ctrl) |
> >> +             CDNS_PCIE_LM_RC_BAR_CFG_BAR1_CTRL(ctrl) |
> >> +             CDNS_PCIE_LM_RC_BAR_CFG_PREFETCH_MEM_ENABLE |
> >> +             CDNS_PCIE_LM_RC_BAR_CFG_PREFETCH_MEM_64BITS |
> >> +             CDNS_PCIE_LM_RC_BAR_CFG_IO_ENABLE |
> >> +             CDNS_PCIE_LM_RC_BAR_CFG_IO_32BITS;
> >> +     cdns_pcie_writel(pcie, CDNS_PCIE_LM_RC_BAR_CFG, value);
> >> +
> >> +     /* Set root port configuration space */
> >> +     if (data->vendor_id != 0xffff)
> >> +             cdns_pcie_rp_writew(pcie, PCI_VENDOR_ID, data->vendor_id);
> >> +     if (data->device_id != 0xffff)
> >> +             cdns_pcie_rp_writew(pcie, PCI_DEVICE_ID, data->device_id);
> >> +
> >> +     cdns_pcie_rp_writeb(pcie, PCI_CLASS_REVISION, 0);
> >> +     cdns_pcie_rp_writeb(pcie, PCI_CLASS_PROG, 0);
> >> +     cdns_pcie_rp_writew(pcie, PCI_CLASS_DEVICE, PCI_CLASS_BRIDGE_PCI);
> >> +
> >> +     pbn = rc->bus_range->start;
> >> +     sbn = pbn + 1; /* Single root port. */
> >> +     subn = rc->bus_range->end;
> >> +     cdns_pcie_rp_writeb(pcie, PCI_PRIMARY_BUS, pbn);
> >> +     cdns_pcie_rp_writeb(pcie, PCI_SECONDARY_BUS, sbn);
> >> +     cdns_pcie_rp_writeb(pcie, PCI_SUBORDINATE_BUS, subn);
> > 
> > Again - I do not have the datasheet for this device therefore I would
> > kindly ask you how this works; it seems to me that what you are doing
> > here is done through normal configuration cycles in an ECAM compliant
> > system to program the RP PRIMARY/SECONDARY/SUBORDINATE bus - I would
> > like to understand why this code is needed.
> >
> 
> I will test without those lines to test whether I can remove them.
> 
> At first, the PCIe controller was tested by Cadence team: there was code
> in their bootloader to initialize the hardware (building the AXI <-> PCIe
> mappings, ...): the bootloader used to set the primary, secondary and
> subordinate bus numbers in the root port PCI config space.
> 
> Also there was a hardware trick to redirect accesses of the lowest
> addresses in the AXI bus to the APB bus so the PCI configuration space of
> the root port could have been accessed from the AXI bus too.
> 
> The AXI <-> PCIe mapping being done by the bootloader and the root port
> config space being accessible from the AXI bus, it was possible to use
> the pci-host-generic driver.

That's what I was getting at. Ard (CC'ed) implemented a firmware set-up
(even though it was for a different IP but maybe it applies here) that
allows the kernel to use the pci-host-generic driver to initialize the
PCI controller:

https://marc.info/?l=linux-pci&m=150360022626351&w=2

I want to understand if there is an IP initialization sequence whereby
this IP can be made to work in an ECAM compliant way and therefore
reuse (most of) the pci-host-generic driver code.

> However, the hardware trick won't be included in the final design since
> Cadence now wants to perform all PCI configuration space accesses through
> a small 4KB window at a fixed address on the AXI bus.

I would like to understand what the HW "trick" (if you can disclose it)
was, because if there is a chance to reuse the pci-host-generic driver
for this IP I want to take it (yes it may entail some firmware set-up in
the bootloader) - was it a HW trick or a specific IP SW configuration ?

> Also, we now want all initialisations to be done by the linux driver
> instead of the bootloader.

That's a choice, I do not necessarily agree with it and I think we
should aim for more standardization on the PCI host bridge set-up
at firmware->kernel handover on DT platforms.

> I simply moved all those initialisations from the bootloader to the linux
> driver but actually there is a chance that I can remove the 3 writes to
> the PCI_*_BUS registers.

I asked because I do not have this IP documentation so I rely on you to
provide the correct initialization sequence and an explanation for it,
I think I understand now the initialization sequence a bit more but it
would be good to get to the bottom of it.

Thank you,
Lorenzo

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 3/5] PCI: cadence: Add host driver for Cadence PCIe controller
  2017-12-04 18:20       ` Lorenzo Pieralisi
@ 2017-12-04 18:49         ` Ard Biesheuvel
  2017-12-06 11:32           ` Lorenzo Pieralisi
  0 siblings, 1 reply; 39+ messages in thread
From: Ard Biesheuvel @ 2017-12-04 18:49 UTC (permalink / raw)
  To: Lorenzo Pieralisi
  Cc: Cyrille Pitchen, Bjorn Helgaas, kishon, linux-pci, adouglas,
	Scott Telford, dgary, kgopi, eandrews, Thomas Petazzoni, sureshp,
	nsekhar, linux-kernel, Rob Herring, devicetree

On 4 December 2017 at 18:20, Lorenzo Pieralisi
<lorenzo.pieralisi@arm.com> wrote:
> [+Ard]
>
> Hi Cyrille,
>
> On Sun, Dec 03, 2017 at 09:44:46PM +0100, Cyrille Pitchen wrote:
>
> [...]
>
>> >> +cdns_pci_map_bus(struct pci_bus *bus, unsigned int devfn, int where)
>> >> +{
>> >> +     struct pci_host_bridge *bridge = pci_find_host_bridge(bus);
>> >> +     struct cdns_pcie_rc *rc = pci_host_bridge_priv(bridge);
>> >> +     struct cdns_pcie *pcie = &rc->pcie;
>> >> +     unsigned int busn = bus->number;
>> >> +     u32 addr0, desc0;
>> >> +
>> >> +     if (busn < rc->bus_range->start || busn > rc->bus_range->end)
>> >> +             return NULL;
>> >
>> > It does not hurt but I wonder whether you really need this check.
>> >
>>
>> I can remove it.
>>
>> >> +     if (busn == rc->bus_range->start) {
>> >> +             if (devfn)
>> >
>> > I suspect I know why you need this check but I ask you to explain it
>> > anyway if you do not mind please.
>> >
>>
>> If I have understood correctly, Cadence team told me that only the root
>> port is available on the first bus through device 0, function 0.
>> No other device/function should connected on this bus, all other devices
>> are behind at least one PCI bridge.
>>
>> I can add a comment here to explain that.
>
> That's understood, the question is what happens if you do scan devfn != 0.
>

OK, this is similar to the Synopsys IP. Type 0 config TLPs are not
filtered by the hardware, and so if you don't filter them in software,
the device downstream of the rootport will appear 32 times.

>> >> +                     return NULL;
>> >> +
>> >> +             return pcie->reg_base + (where & 0xfff);
>> >> +     }
>> >> +
>> >> +     /* Update Output registers for AXI region 0. */
>> >> +     addr0 = CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_NBITS(12) |
>> >
>> > Ok, so for every config access you reprogram addr0 to reflect the
>> > correct bus|devfn ID in the PCI bus TLP corresponding to an ECAM address
>> > in CPU physical address space, is my understanding correct ?
>> >
>>
>> The idea is to able to use only a 4KB memory area at a fixed address in the
>> space allocated for the PCIe controller in the AXI bus. I guess the plan is
>> to leave more space on the AXI bus to map all other PCIe devices.
>>
>> This is just my guess. Anyway one purpose of this driver was actually to
>> perform all PCI configuration space accesses through this single 4KB memory
>> area in the AXI bus, changing the mapping dynamically to reach the relevant
>> PCI device.
>
> Thank you for explaining - that matches my understanding.
>
>> >> +             CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_DEVFN(devfn) |
>> >> +             CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_BUS(busn);
>> >> +     cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_PCI_ADDR0(0), addr0);
>> >> +
>> >> +     /* Configuration Type 0 or Type 1 access. */
>> >> +     desc0 = CDNS_PCIE_AT_OB_REGION_DESC0_HARDCODED_RID |
>> >> +             CDNS_PCIE_AT_OB_REGION_DESC0_DEVFN(0);
>> >> +     /*
>> >> +      * The bus number was already set once for all in desc1 by
>> >> +      * cdns_pcie_host_init_address_translation().
>> >> +      */
>> >> +     if (busn == rc->bus_range->start + 1)
>> >> +             desc0 |= CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_CONF_TYPE0;
>> >> +     else
>> >> +             desc0 |= CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_CONF_TYPE1;
>> >
>> > I would like to ask you why you have to do it here and the root port
>> > does not figure it out by itself, I do not have the datasheet so I am
>> > just asking for my own information.
>>
>> PCI configuration space registers of the root port can only be read through
>> the APB bus at offset 0:
>> ->reg_base + (where & 0xfff)
>>
>> They are internal registers of the PCIe controller so no TLP on the PCIe bus.
>>
>> However to access the PCI configuration space registers of any other device,
>> the PCIe controller builds then sends a TLP on the PCIe bus using the offset
>> in the 4KB AXI area as the offset of the register in the PCI configuration
>> space:
>> ->cfg_base + (where & 0xfff)
>>
>> >
>> >> +     cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_DESC0(0), desc0);
>> >> +
>> >> +     return rc->cfg_base + (where & 0xfff);
>> >> +}
>> >> +
>> >> +static struct pci_ops cdns_pcie_host_ops = {
>> >> +     .map_bus        = cdns_pci_map_bus,
>> >> +     .read           = pci_generic_config_read,
>> >> +     .write          = pci_generic_config_write,
>> >> +};
>> >> +
>> >> +static const struct cdns_pcie_rc_data cdns_pcie_rc_data = {
>> >> +     .max_regions    = 32,
>> >> +     .vendor_id      = PCI_VENDOR_ID_CDNS,
>> >> +     .device_id      = 0x0200,
>> >> +     .no_bar_nbits   = 32,
>> >> +};
>> >
>> > Should (some of) these parameters be retrieved through a DT binding ?
>> >
>>
>> Indeed, maybe we get max_regions and no_bar_nbits from the DT.
>>
>> About the vendor and device IDs, I don't know which would be the best
>> choice between some dedicated DT properties or associating a custom
>> structure as above to the 'compatible' string.
>>
>> Honestly, I don't have any strong preference, please just tell me what
>> you would prefer :)
>
> I think it is best to ask DT maintainers (in CC) POV on this, they
> certainly have a more comprehensive view than mine on the subject - I
> have just noticed that _some_ data can be retrieved through DT therefore
> I raised the point - either through different compatible strings or
> some IP specific properties.
>
>> >> +static const struct of_device_id cdns_pcie_host_of_match[] = {
>> >> +     { .compatible = "cdns,cdns-pcie-host",
>> >> +       .data = &cdns_pcie_rc_data },
>> >> +
>> >> +     { },
>> >> +};
>> >> +
>> >> +static int cdns_pcie_parse_request_of_pci_ranges(struct device *dev,
>> >> +                                              struct list_head *resources,
>> >> +                                              struct resource **bus_range)
>> >> +{
>> >> +     int err, res_valid = 0;
>> >> +     struct device_node *np = dev->of_node;
>> >> +     resource_size_t iobase;
>> >> +     struct resource_entry *win, *tmp;
>> >> +
>> >> +     err = of_pci_get_host_bridge_resources(np, 0, 0xff, resources, &iobase);
>> >> +     if (err)
>> >> +             return err;
>> >> +
>> >> +     err = devm_request_pci_bus_resources(dev, resources);
>> >> +     if (err)
>> >> +             return err;
>> >> +
>> >> +     resource_list_for_each_entry_safe(win, tmp, resources) {
>> >> +             struct resource *res = win->res;
>> >> +
>> >> +             switch (resource_type(res)) {
>> >> +             case IORESOURCE_IO:
>> >> +                     err = pci_remap_iospace(res, iobase);
>> >> +                     if (err) {
>> >> +                             dev_warn(dev, "error %d: failed to map resource %pR\n",
>> >> +                                      err, res);
>> >> +                             resource_list_destroy_entry(win);
>> >> +                     }
>> >> +                     break;
>> >> +             case IORESOURCE_MEM:
>> >> +                     res_valid |= !(res->flags & IORESOURCE_PREFETCH);
>> >> +                     break;
>> >> +             case IORESOURCE_BUS:
>> >> +                     *bus_range = res;
>> >> +                     break;
>> >> +             }
>> >> +     }
>> >> +
>> >> +     if (res_valid)
>> >> +             return 0;
>> >> +
>> >> +     dev_err(dev, "non-prefetchable memory resource required\n");
>> >> +     return -EINVAL;
>> >
>> > Nit, I prefer you swap these two as it is done in pci-aardvark.c:
>> >
>> >         if (!res_valid) {
>> >                 dev_err(dev, "non-prefetchable memory resource required\n");
>> >                 return -EINVAL;
>> >         }
>> >
>> >         return 0;
>> >
>> > but as per previous replies this function can be factorized in
>> > core PCI code - I would not bother unless you are willing to write
>> > the patch series that does the refactoring yourself :)
>> >
>> >> +}
>> >> +
>> >> +static int cdns_pcie_host_init_root_port(struct cdns_pcie_rc *rc)
>> >> +{
>> >> +     const struct cdns_pcie_rc_data *data = rc->data;
>> >> +     struct cdns_pcie *pcie = &rc->pcie;
>> >> +     u8 pbn, sbn, subn;
>> >> +     u32 value, ctrl;
>> >> +
>> >> +     /*
>> >> +      * Set the root complex BAR configuration register:
>> >> +      * - disable both BAR0 and BAR1.
>> >> +      * - enable Prefetchable Memory Base and Limit registers in type 1
>> >> +      *   config space (64 bits).
>> >> +      * - enable IO Base and Limit registers in type 1 config
>> >> +      *   space (32 bits).
>> >> +      */
>> >> +     ctrl = CDNS_PCIE_LM_BAR_CFG_CTRL_DISABLED;
>> >> +     value = CDNS_PCIE_LM_RC_BAR_CFG_BAR0_CTRL(ctrl) |
>> >> +             CDNS_PCIE_LM_RC_BAR_CFG_BAR1_CTRL(ctrl) |
>> >> +             CDNS_PCIE_LM_RC_BAR_CFG_PREFETCH_MEM_ENABLE |
>> >> +             CDNS_PCIE_LM_RC_BAR_CFG_PREFETCH_MEM_64BITS |
>> >> +             CDNS_PCIE_LM_RC_BAR_CFG_IO_ENABLE |
>> >> +             CDNS_PCIE_LM_RC_BAR_CFG_IO_32BITS;
>> >> +     cdns_pcie_writel(pcie, CDNS_PCIE_LM_RC_BAR_CFG, value);
>> >> +
>> >> +     /* Set root port configuration space */
>> >> +     if (data->vendor_id != 0xffff)
>> >> +             cdns_pcie_rp_writew(pcie, PCI_VENDOR_ID, data->vendor_id);
>> >> +     if (data->device_id != 0xffff)
>> >> +             cdns_pcie_rp_writew(pcie, PCI_DEVICE_ID, data->device_id);
>> >> +
>> >> +     cdns_pcie_rp_writeb(pcie, PCI_CLASS_REVISION, 0);
>> >> +     cdns_pcie_rp_writeb(pcie, PCI_CLASS_PROG, 0);
>> >> +     cdns_pcie_rp_writew(pcie, PCI_CLASS_DEVICE, PCI_CLASS_BRIDGE_PCI);
>> >> +
>> >> +     pbn = rc->bus_range->start;
>> >> +     sbn = pbn + 1; /* Single root port. */
>> >> +     subn = rc->bus_range->end;
>> >> +     cdns_pcie_rp_writeb(pcie, PCI_PRIMARY_BUS, pbn);
>> >> +     cdns_pcie_rp_writeb(pcie, PCI_SECONDARY_BUS, sbn);
>> >> +     cdns_pcie_rp_writeb(pcie, PCI_SUBORDINATE_BUS, subn);
>> >
>> > Again - I do not have the datasheet for this device therefore I would
>> > kindly ask you how this works; it seems to me that what you are doing
>> > here is done through normal configuration cycles in an ECAM compliant
>> > system to program the RP PRIMARY/SECONDARY/SUBORDINATE bus - I would
>> > like to understand why this code is needed.
>> >
>>
>> I will test without those lines to test whether I can remove them.
>>
>> At first, the PCIe controller was tested by Cadence team: there was code
>> in their bootloader to initialize the hardware (building the AXI <-> PCIe
>> mappings, ...): the bootloader used to set the primary, secondary and
>> subordinate bus numbers in the root port PCI config space.
>>
>> Also there was a hardware trick to redirect accesses of the lowest
>> addresses in the AXI bus to the APB bus so the PCI configuration space of
>> the root port could have been accessed from the AXI bus too.
>>
>> The AXI <-> PCIe mapping being done by the bootloader and the root port
>> config space being accessible from the AXI bus, it was possible to use
>> the pci-host-generic driver.
>
> That's what I was getting at. Ard (CC'ed) implemented a firmware set-up
> (even though it was for a different IP but maybe it applies here) that
> allows the kernel to use the pci-host-generic driver to initialize the
> PCI controller:
>
> https://marc.info/?l=linux-pci&m=150360022626351&w=2
>
> I want to understand if there is an IP initialization sequence whereby
> this IP can be made to work in an ECAM compliant way and therefore
> reuse (most of) the pci-host-generic driver code.
>

I think the Synopsys case is probably very similar. There are some
registers that look like the config space of a root port, but in
reality, every memory access that hits a live host bridge window is
forwarded onto the link, regardless of the values of the bridge BARs.
That is why in the quoted case, we can get away with ignoring the root
port altogether, rather than jumping through hoops to make the IP
block's PCI config space registers appear at B/D/F 0/0/0, while still
having to filter type 0 config TLPs going onto the link (which is
arguably the job of the root port to begin with)

So if this IP does implement a proper root port (i.e., one where the
bridge BARs are actually taken into account, and where type 0 config
TLPs are in fact filtered), I strongly recommend mapping its config
space registers in an ECAM compliant matter, which implies no
accessors in the OS.

However, given the observation above, this IP does not appear to
filter type 0 config TLPs to devfn > 0 downstream of the root port
either.

>> However, the hardware trick won't be included in the final design since
>> Cadence now wants to perform all PCI configuration space accesses through
>> a small 4KB window at a fixed address on the AXI bus.
>
> I would like to understand what the HW "trick" (if you can disclose it)
> was, because if there is a chance to reuse the pci-host-generic driver
> for this IP I want to take it (yes it may entail some firmware set-up in
> the bootloader) - was it a HW trick or a specific IP SW configuration ?
>
>> Also, we now want all initialisations to be done by the linux driver
>> instead of the bootloader.
>
> That's a choice, I do not necessarily agree with it and I think we
> should aim for more standardization on the PCI host bridge set-up
> at firmware->kernel handover on DT platforms.
>

Well, for one, it means this IP will never be supported by ACPI, which
seems like a huge downside to me.

>> I simply moved all those initialisations from the bootloader to the linux
>> driver but actually there is a chance that I can remove the 3 writes to
>> the PCI_*_BUS registers.
>
> I asked because I do not have this IP documentation so I rely on you to
> provide the correct initialization sequence and an explanation for it,
> I think I understand now the initialization sequence a bit more but it
> would be good to get to the bottom of it.
>

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 5/5] PCI: cadence: add EndPoint Controller driver for Cadence PCIe controller
  2017-12-01 12:20   ` Lorenzo Pieralisi
  2017-12-04 14:56     ` Cyrille Pitchen
@ 2017-12-05  9:19     ` Kishon Vijay Abraham I
  2017-12-07 10:05       ` Philippe Ombredanne
  2017-12-13 16:50       ` Cyrille Pitchen
  1 sibling, 2 replies; 39+ messages in thread
From: Kishon Vijay Abraham I @ 2017-12-05  9:19 UTC (permalink / raw)
  To: Lorenzo Pieralisi, Cyrille Pitchen
  Cc: bhelgaas, linux-pci, adouglas, stelford, dgary, kgopi, eandrews,
	thomas.petazzoni, sureshp, nsekhar, linux-kernel, robh,
	devicetree

Hi,

On Friday 01 December 2017 05:50 PM, Lorenzo Pieralisi wrote:
> On Thu, Nov 23, 2017 at 04:01:50PM +0100, Cyrille Pitchen wrote:
>> This patch adds support to the Cadence PCIe controller in endpoint mode.
> 
> Please add a brief description to the log to describe the most salient
> features.
> 
>> Signed-off-by: Cyrille Pitchen <cyrille.pitchen@free-electrons.com>
>> ---
>>  drivers/pci/cadence/Kconfig           |   9 +
>>  drivers/pci/cadence/Makefile          |   1 +
>>  drivers/pci/cadence/pcie-cadence-ep.c | 553 ++++++++++++++++++++++++++++++++++
>>  3 files changed, 563 insertions(+)
>>  create mode 100644 drivers/pci/cadence/pcie-cadence-ep.c
>>
>> diff --git a/drivers/pci/cadence/Kconfig b/drivers/pci/cadence/Kconfig
>> index 120306cae2aa..b2e6af71f39e 100644
>> --- a/drivers/pci/cadence/Kconfig
>> +++ b/drivers/pci/cadence/Kconfig
>> @@ -21,4 +21,13 @@ config PCIE_CADENCE_HOST
>>  	  mode. This PCIe controller may be embedded into many different vendors
>>  	  SoCs.
>>  
>> +config PCIE_CADENCE_EP
>> +	bool "Cadence PCIe endpoint controller"
>> +	depends on PCI_ENDPOINT
>> +	select PCIE_CADENCE
>> +	help
>> +	  Say Y here if you want to support the Cadence PCIe  controller in
>> +	  endpoint mode. This PCIe controller may be embedded into many
>> +	  different vendors SoCs.
>> +
>>  endif # PCI_CADENCE
>> diff --git a/drivers/pci/cadence/Makefile b/drivers/pci/cadence/Makefile
>> index d57d192d2595..61e9c8d6839d 100644
>> --- a/drivers/pci/cadence/Makefile
>> +++ b/drivers/pci/cadence/Makefile
>> @@ -1,2 +1,3 @@
>>  obj-$(CONFIG_PCIE_CADENCE) += pcie-cadence.o
>>  obj-$(CONFIG_PCIE_CADENCE_HOST) += pcie-cadence-host.o
>> +obj-$(CONFIG_PCIE_CADENCE_EP) += pcie-cadence-ep.o
>> diff --git a/drivers/pci/cadence/pcie-cadence-ep.c b/drivers/pci/cadence/pcie-cadence-ep.c
>> new file mode 100644
>> index 000000000000..a1d761101a9c
>> --- /dev/null
>> +++ b/drivers/pci/cadence/pcie-cadence-ep.c
>> @@ -0,0 +1,553 @@
>> +/*
>> + * Cadence PCIe host controller driver.
> 
> You should update this comment.
> 
>> + *
>> + * Copyright (c) 2017 Cadence
>> + *
>> + * Author: Cyrille Pitchen <cyrille.pitchen@free-electrons.com>
>> + *
>> + * This program is free software; you can redistribute it and/or modify
>> + * it under the terms of the GNU General Public License version 2 as
>> + * published by the Free Software Foundation.
>> + *
>> + * This program is distributed in the hope that it will be useful,
>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
>> + * GNU General Public License for more details.
>> + *
>> + * You should have received a copy of the GNU General Public License along with
>> + * this program.  If not, see <http://www.gnu.org/licenses/>.
>> + */
>> +#include <linux/kernel.h>
>> +#include <linux/of.h>
>> +#include <linux/pci-epc.h>
>> +#include <linux/platform_device.h>
>> +#include <linux/pm_runtime.h>
>> +#include <linux/sizes.h>
>> +#include <linux/delay.h>
> 
> Nit: alphabetical order.
> 
>> +#include "pcie-cadence.h"
>> +
>> +#define CDNS_PCIE_EP_MIN_APERTURE		128	/* 128 bytes */
>> +#define CDNS_PCIE_EP_IRQ_PCI_ADDR_NONE		0x1
>> +#define CDNS_PCIE_EP_IRQ_PCI_ADDR_LEGACY	0x3
>> +
>> +/**
>> + * struct cdns_pcie_ep_data - hardware specific data
>> + * @max_regions: maximum nmuber of regions supported by hardware
> 
> s/nmuber/number
> 
>> + */
>> +struct cdns_pcie_ep_data {
>> +	size_t				max_regions;
>> +};
>> +
>> +/**
>> + * struct cdns_pcie_ep - private data for this PCIe endpoint controller driver
>> + * @pcie: Cadence PCIe controller
>> + * @data: pointer to a 'struct cdns_pcie_data'
>> + */
>> +struct cdns_pcie_ep {
>> +	struct cdns_pcie		pcie;
>> +	const struct cdns_pcie_ep_data	*data;
>> +	struct pci_epc			*epc;
>> +	unsigned long			ob_region_map;
>> +	phys_addr_t			*ob_addr;
>> +	phys_addr_t			irq_phys_addr;
>> +	void __iomem			*irq_cpu_addr;
>> +	u64				irq_pci_addr;
>> +	u8				irq_pending;
>> +};
>> +
>> +static int cdns_pcie_ep_write_header(struct pci_epc *epc,
>> +				     struct pci_epf_header *hdr)
>> +{
>> +	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
>> +	struct cdns_pcie *pcie = &ep->pcie;
>> +	u8 fn = 0;
>> +
>> +	if (fn == 0) {
> 
> I think there is some code to retrieve fn missing here.

hmm.. the endpoint core has to send the function number which right now it's
not doing though it has the function number info in pci_epf.
> 
>> +		u32 id = CDNS_PCIE_LM_ID_VENDOR(hdr->vendorid) |
>> +			 CDNS_PCIE_LM_ID_SUBSYS(hdr->subsys_vendor_id);
>> +		cdns_pcie_writel(pcie, CDNS_PCIE_LM_ID, id);
>> +	}
>> +	cdns_pcie_ep_fn_writew(pcie, fn, PCI_DEVICE_ID, hdr->deviceid);
>> +	cdns_pcie_ep_fn_writeb(pcie, fn, PCI_REVISION_ID, hdr->revid);
>> +	cdns_pcie_ep_fn_writeb(pcie, fn, PCI_CLASS_PROG, hdr->progif_code);
>> +	cdns_pcie_ep_fn_writew(pcie, fn, PCI_CLASS_DEVICE,
>> +			       hdr->subclass_code | hdr->baseclass_code << 8);
>> +	cdns_pcie_ep_fn_writeb(pcie, fn, PCI_CACHE_LINE_SIZE,
>> +			       hdr->cache_line_size);
>> +	cdns_pcie_ep_fn_writew(pcie, fn, PCI_SUBSYSTEM_ID, hdr->subsys_id);
>> +	cdns_pcie_ep_fn_writeb(pcie, fn, PCI_INTERRUPT_PIN, hdr->interrupt_pin);
>> +
>> +	return 0;
>> +}
>> +
>> +static int cdns_pcie_ep_set_bar(struct pci_epc *epc, enum pci_barno bar,
>> +				dma_addr_t bar_phys, size_t size, int flags)
>> +{
>> +	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
>> +	struct cdns_pcie *pcie = &ep->pcie;
>> +	u32 addr0, addr1, reg, cfg, b, aperture, ctrl;
>> +	u8 fn = 0;

Here too endpoint core should send the function number..
>> +	u64 sz;
>> +
>> +	/* BAR size is 2^(aperture + 7) */
>> +	sz = max_t(size_t, size, CDNS_PCIE_EP_MIN_APERTURE);
>> +	sz = 1ULL << fls64(sz - 1);
>> +	aperture = ilog2(sz) - 7; /* 128B -> 0, 256B -> 1, 512B -> 2, ... */
>> +
>> +	if ((flags & PCI_BASE_ADDRESS_SPACE) == PCI_BASE_ADDRESS_SPACE_IO) {
>> +		ctrl = CDNS_PCIE_LM_BAR_CFG_CTRL_IO_32BITS;
>> +	} else {
>> +		bool is_prefetch = !!(flags & PCI_BASE_ADDRESS_MEM_PREFETCH);
>> +		bool is_64bits = sz > SZ_2G;
>> +
>> +		if (is_64bits && (bar & 1))
>> +			return -EINVAL;
>> +
>> +		switch (is_64bits << 1 | is_prefetch) {
> 
> I would not mind implementing this as a nested if-else, I am not a big
> fan of using bool this way.
> 
>> +		case 0:
>> +			ctrl = CDNS_PCIE_LM_BAR_CFG_CTRL_MEM_32BITS;
>> +			break;
>> +
>> +		case 1:
>> +			ctrl = CDNS_PCIE_LM_BAR_CFG_CTRL_PREFETCH_MEM_32BITS;
>> +			break;
>> +
>> +		case 2:
>> +			ctrl = CDNS_PCIE_LM_BAR_CFG_CTRL_MEM_64BITS;
>> +			break;
>> +
>> +		case 3:
>> +			ctrl = CDNS_PCIE_LM_BAR_CFG_CTRL_PREFETCH_MEM_64BITS;
>> +			break;
>> +		}
>> +	}
>> +
>> +	addr0 = lower_32_bits(bar_phys);
>> +	addr1 = upper_32_bits(bar_phys);
>> +	cdns_pcie_writel(pcie, CDNS_PCIE_AT_IB_EP_FUNC_BAR_ADDR0(fn, bar),
>> +			 addr0);

It would be nice if you can have defines for CDNS_PCIE_AT_IB_EP_FUNC_BAR_ADDR0
included in this patch rather than "PCI: cadence: Add host driver for Cadence
PCIe controller". All EP specific functions in header file should be included here.
>> +	cdns_pcie_writel(pcie, CDNS_PCIE_AT_IB_EP_FUNC_BAR_ADDR1(fn, bar),
>> +			 addr1);
> 
> Is fn always 0 ?
> 
>> +	if (bar < BAR_4) {
>> +		reg = CDNS_PCIE_LM_EP_FUNC_BAR_CFG0(fn);
>> +		b = bar;
>> +	} else {
>> +		reg = CDNS_PCIE_LM_EP_FUNC_BAR_CFG1(fn);
>> +		b = bar - BAR_4;
>> +	}
>> +
>> +	cfg = cdns_pcie_readl(pcie, reg);
>> +	cfg &= ~(CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_APERTURE_MASK(b) |
>> +		 CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(b));
>> +	cfg |= (CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_APERTURE(b, aperture) |
>> +		CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_CTRL(b, ctrl));
>> +	cdns_pcie_writel(pcie, reg, cfg);
>> +
>> +	return 0;
>> +}
>> +
>> +static void cdns_pcie_ep_clear_bar(struct pci_epc *epc, enum pci_barno bar)
>> +{
>> +	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
>> +	struct cdns_pcie *pcie = &ep->pcie;
>> +	u32 reg, cfg, b, ctrl;
>> +	u8 fn = 0;

Here too endpoint core should send the function number..
>> +
>> +	if (bar < BAR_4) {
>> +		reg = CDNS_PCIE_LM_EP_FUNC_BAR_CFG0(fn);
>> +		b = bar;
>> +	} else {
>> +		reg = CDNS_PCIE_LM_EP_FUNC_BAR_CFG1(fn);
>> +		b = bar - BAR_4;
>> +	}
>> +
>> +	ctrl = CDNS_PCIE_LM_BAR_CFG_CTRL_DISABLED;
>> +	cfg = cdns_pcie_readl(pcie, reg);
>> +	cfg &= ~(CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_APERTURE_MASK(b) |
>> +		 CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(b));
>> +	cfg |= CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(ctrl);
>> +	cdns_pcie_writel(pcie, reg, cfg);
>> +
>> +	cdns_pcie_writel(pcie, CDNS_PCIE_AT_IB_EP_FUNC_BAR_ADDR0(fn, bar), 0);
>> +	cdns_pcie_writel(pcie, CDNS_PCIE_AT_IB_EP_FUNC_BAR_ADDR1(fn, bar), 0);
>> +}
>> +
>> +static int cdns_pcie_ep_map_addr(struct pci_epc *epc, phys_addr_t addr,
>> +				 u64 pci_addr, size_t size)
>> +{
>> +	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
>> +	struct cdns_pcie *pcie = &ep->pcie;
>> +	u32 r;
>> +
>> +	r = find_first_zero_bit(&ep->ob_region_map, sizeof(ep->ob_region_map));
> 
> Second argument must be in bits not bytes.
> 
> https://marc.info/?l=linux-pci&m=151179781225513&w=2
> 
>> +	if (r >= ep->data->max_regions - 1) {
>> +		dev_err(&epc->dev, "no free outbound region\n");
>> +		return -EINVAL;
>> +	}
>> +
>> +	cdns_pcie_set_outbound_region(pcie, r, false, addr, pci_addr, size);
>> +
>> +	set_bit(r, &ep->ob_region_map);
>> +	ep->ob_addr[r] = addr;
>> +
>> +	return 0;
>> +}
>> +
>> +static void cdns_pcie_ep_unmap_addr(struct pci_epc *epc, phys_addr_t addr)
>> +{
>> +	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
>> +	struct cdns_pcie *pcie = &ep->pcie;
>> +	u32 r;
>> +
>> +	for (r = 0; r < ep->data->max_regions - 1; r++)
>> +		if (ep->ob_addr[r] == addr)
>> +			break;
>> +
>> +	if (r >= ep->data->max_regions - 1)
> 
> == ?
> 
>> +		return;
>> +
>> +	cdns_pcie_reset_outbound_region(pcie, r);
>> +
>> +	ep->ob_addr[r] = 0;
>> +	clear_bit(r, &ep->ob_region_map);
>> +}
>> +
>> +static int cdns_pcie_ep_set_msi(struct pci_epc *epc, u8 mmc)
>> +{
>> +	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
>> +	struct cdns_pcie *pcie = &ep->pcie;
>> +	u32 cap = CDNS_PCIE_EP_FUNC_MSI_CAP_OFFSET;
>> +	u16 flags;
>> +	u8 fn = 0;
>> +
>> +	/* Validate the ID of the MSI Capability structure. */
>> +	if (cdns_pcie_ep_fn_readb(pcie, fn, cap) != PCI_CAP_ID_MSI)
>> +		return -EINVAL;
>> +
>> +	/*
>> +	 * Set the Multiple Message Capable bitfield into the Message Control
>> +	 * register.
>> +	 */
>> +	flags = cdns_pcie_ep_fn_readw(pcie, fn, cap + PCI_MSI_FLAGS);
>> +	flags = (flags & ~PCI_MSI_FLAGS_QMASK) | (mmc << 1);
>> +	flags |= PCI_MSI_FLAGS_64BIT;
>> +	flags &= ~PCI_MSI_FLAGS_MASKBIT;

Any reason why "Per-vector masking capable" is reset?
>> +	cdns_pcie_ep_fn_writew(pcie, fn, cap + PCI_MSI_FLAGS, flags);
>> +
>> +	return 0;
>> +}
>> +
>> +static int cdns_pcie_ep_get_msi(struct pci_epc *epc)
>> +{
>> +	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
>> +	struct cdns_pcie *pcie = &ep->pcie;
>> +	u32 cap = CDNS_PCIE_EP_FUNC_MSI_CAP_OFFSET;
>> +	u16 flags, mmc, mme;
>> +	u8 fn = 0;
>> +
>> +	/* Validate the ID of the MSI Capability structure. */
>> +	if (cdns_pcie_ep_fn_readb(pcie, fn, cap) != PCI_CAP_ID_MSI)
>> +		return -EINVAL;
>> +
>> +	/* Validate that the MSI feature is actually enabled. */
>> +	flags = cdns_pcie_ep_fn_readw(pcie, fn, cap + PCI_MSI_FLAGS);
>> +	if (!(flags & PCI_MSI_FLAGS_ENABLE))
>> +		return -EINVAL;
>> +
>> +	/*
>> +	 * Get the Multiple Message Enable bitfield from the Message Control
>> +	 * register.
>> +	 */
>> +	mmc = (flags & PCI_MSI_FLAGS_QMASK) >> 1;
>> +	mme = (flags & PCI_MSI_FLAGS_QSIZE) >> 4;
>> +	if (mme > mmc)
>> +		mme = mmc;
>> +	if (mme > 5)
>> +		mme = 5;

I'm not sure if both these above checks are required..
> 
> You should comment on what this 5 means and why it is fine to cap mme.
> 
>> +
>> +	return mme;
>> +}
>> +
>> +static void cdns_pcie_ep_assert_intx(struct cdns_pcie_ep *ep, u8 fn,
>> +				     u8 intx, bool is_asserted)
>> +{
>> +	struct cdns_pcie *pcie = &ep->pcie;
>> +	u32 r = ep->data->max_regions - 1;
>> +	u32 offset;
>> +	u16 status;
>> +	u8 msg_code;
>> +
>> +	intx &= 3;
>> +
>> +	/* Set the outbound region if needed. */
>> +	if (unlikely(ep->irq_pci_addr != CDNS_PCIE_EP_IRQ_PCI_ADDR_LEGACY)) {
>> +		/* Last region was reserved for IRQ writes. */
>> +		cdns_pcie_set_outbound_region_for_normal_msg(pcie, r,
>> +							     ep->irq_phys_addr);
>> +		ep->irq_pci_addr = CDNS_PCIE_EP_IRQ_PCI_ADDR_LEGACY;
>> +	}
>> +
>> +	if (is_asserted) {
>> +		ep->irq_pending |= BIT(intx);
>> +		msg_code = MSG_CODE_ASSERT_INTA + intx;
>> +	} else {
>> +		ep->irq_pending &= ~BIT(intx);
>> +		msg_code = MSG_CODE_DEASSERT_INTA + intx;
>> +	}
>> +
>> +	status = cdns_pcie_ep_fn_readw(pcie, fn, PCI_STATUS);
>> +	if (((status & PCI_STATUS_INTERRUPT) != 0) ^ (ep->irq_pending != 0)) {
>> +		status ^= PCI_STATUS_INTERRUPT;
>> +		cdns_pcie_ep_fn_writew(pcie, fn, PCI_STATUS, status);
>> +	}

here you are setting the PCI_STATUS_INTERRUPT even before sending the ASSERT
message.
>> +
>> +	offset = CDNS_PCIE_NORMAL_MSG_ROUTING(MSG_ROUTING_LOCAL) |
>> +		 CDNS_PCIE_NORMAL_MSG_CODE(msg_code) |
>> +		 CDNS_PCIE_MSG_NO_DATA;
>> +	writel(0, ep->irq_cpu_addr + offset);
>> +}
>> +
>> +static int cdns_pcie_ep_send_legacy_irq(struct cdns_pcie_ep *ep, u8 fn, u8 intx)
>> +{
>> +	u16 cmd;
>> +
>> +	cmd = cdns_pcie_ep_fn_readw(&ep->pcie, fn, PCI_COMMAND);
>> +	if (cmd & PCI_COMMAND_INTX_DISABLE)
>> +		return -EINVAL;
>> +
>> +	cdns_pcie_ep_assert_intx(ep, fn, intx, true);
>> +	mdelay(1);
> 
> Add a comment please to explain the mdelay value.
> 
>> +	cdns_pcie_ep_assert_intx(ep, fn, intx, false);
>> +	return 0;
>> +}
>> +
>> +static int cdns_pcie_ep_raise_irq(struct pci_epc *epc,
>> +				  enum pci_epc_irq_type type, u8 interrupt_num)
>> +{
>> +	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
>> +	struct cdns_pcie *pcie = &ep->pcie;
>> +	u32 cap = CDNS_PCIE_EP_FUNC_MSI_CAP_OFFSET;
>> +	u16 flags, mmc, mme, data, data_mask;
>> +	u8 msi_count;
>> +	u64 pci_addr, pci_addr_mask = 0xff;
>> +	u8 fn = 0;
>> +
>> +	/* Handle legacy IRQ. */
>> +	if (type == PCI_EPC_IRQ_LEGACY)
>> +		return cdns_pcie_ep_send_legacy_irq(ep, fn, 0);
>> +
>> +	/* Otherwise MSI. */
>> +	if (type != PCI_EPC_IRQ_MSI)
>> +		return -EINVAL;

MSI-X?
>> +
>> +	/* Check whether the MSI feature has been enabled by the PCI host. */
>> +	flags = cdns_pcie_ep_fn_readw(pcie, fn, cap + PCI_MSI_FLAGS);
>> +	if (!(flags & PCI_MSI_FLAGS_ENABLE))
>> +		return -EINVAL;
>> +
>> +	/* Get the number of enabled MSIs */
>> +	mmc = (flags & PCI_MSI_FLAGS_QMASK) >> 1;
>> +	mme = (flags & PCI_MSI_FLAGS_QSIZE) >> 4;
>> +	if (mme > mmc)
>> +		mme = mmc;
>> +	if (mme > 5)
>> +		mme = 5;
> 
> Same comment as above.
> 
>> +	msi_count = 1 << mme;
>> +	if (!interrupt_num || interrupt_num > msi_count)
>> +		return -EINVAL;
>> +
>> +	/* Compute the data value to be written. */
>> +	data_mask = msi_count - 1;
>> +	data = cdns_pcie_ep_fn_readw(pcie, fn, cap + PCI_MSI_DATA_64);
>> +	data = (data & ~data_mask) | ((interrupt_num - 1) & data_mask);
>> +
>> +	/* Get the PCI address where to write the data into. */
>> +	pci_addr = cdns_pcie_ep_fn_readl(pcie, fn, cap + PCI_MSI_ADDRESS_HI);
>> +	pci_addr <<= 32;
>> +	pci_addr |= cdns_pcie_ep_fn_readl(pcie, fn, cap + PCI_MSI_ADDRESS_LO);
>> +	pci_addr &= GENMASK_ULL(63, 2);
>> +
>> +	/* Set the outbound region if needed. */
>> +	if (unlikely(ep->irq_pci_addr != pci_addr)) {
>> +		/* Last region was reserved for IRQ writes. */
>> +		cdns_pcie_set_outbound_region(pcie, ep->data->max_regions - 1,
>> +					      false,
>> +					      ep->irq_phys_addr,
>> +					      pci_addr & ~pci_addr_mask,
>> +					      pci_addr_mask + 1);
>> +		ep->irq_pci_addr = pci_addr;
>> +	}
>> +	writew(data, ep->irq_cpu_addr + (pci_addr & pci_addr_mask));
>> +
>> +	return 0;
>> +}
>> +
>> +static int cdns_pcie_ep_start(struct pci_epc *epc)
>> +{
>> +	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
>> +	struct cdns_pcie *pcie = &ep->pcie;
>> +	struct pci_epf *epf;
>> +	u32 cfg;
>> +	u8 fn = 0;
>> +
>> +	/* Enable this endpoint function. */
>> +	cfg = cdns_pcie_readl(pcie, CDNS_PCIE_LM_EP_FUNC_CFG);
>> +	cfg |= BIT(fn);
>> +	cdns_pcie_writel(pcie, CDNS_PCIE_LM_EP_FUNC_CFG, cfg);
>> +
>> +	/*
>> +	 * Already linked-up: don't call directly pci_epc_linkup() because we've
>> +	 * already locked the epc->lock.
>> +	 */

Not sure what you mean by linked-up here?
>> +	list_for_each_entry(epf, &epc->pci_epf, list)
>> +		pci_epf_linkup(epf);

>> +
>> +	return 0;
>> +}
>> +
>> +static void cdns_pcie_ep_stop(struct pci_epc *epc)
>> +{
>> +	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
>> +	struct cdns_pcie *pcie = &ep->pcie;
>> +	u32 cfg;
>> +	u8 fn = 0;
>> +
>> +	/* Disable this endpoint function (function 0 can't be disabled). */
> 
> I do not understand this comment and how it applies to the code,
> in other words fn is always 0 here (so it can't be disabled)
> I do not understand what this code is there for.
> 
>> +	cfg = cdns_pcie_readl(pcie, CDNS_PCIE_LM_EP_FUNC_CFG);
>> +	cfg &= ~BIT(fn);
>> +	cdns_pcie_writel(pcie, CDNS_PCIE_LM_EP_FUNC_CFG, cfg);
>> +}
>> +
>> +static const struct pci_epc_ops cdns_pcie_epc_ops = {
>> +	.write_header	= cdns_pcie_ep_write_header,
>> +	.set_bar	= cdns_pcie_ep_set_bar,
>> +	.clear_bar	= cdns_pcie_ep_clear_bar,
>> +	.map_addr	= cdns_pcie_ep_map_addr,
>> +	.unmap_addr	= cdns_pcie_ep_unmap_addr,
>> +	.set_msi	= cdns_pcie_ep_set_msi,
>> +	.get_msi	= cdns_pcie_ep_get_msi,
>> +	.raise_irq	= cdns_pcie_ep_raise_irq,
>> +	.start		= cdns_pcie_ep_start,
>> +	.stop		= cdns_pcie_ep_stop,
>> +};
>> +
>> +static const struct cdns_pcie_ep_data cdns_pcie_ep_data = {
>> +	.max_regions	= 16,
>> +};
> 
> As I mentioned in patch 3, should this be set-up with DT ?
> 
> Thanks,
> Lorenzo
> 
>> +
>> +static const struct of_device_id cdns_pcie_ep_of_match[] = {
>> +	{ .compatible = "cdns,cdns-pcie-ep",
>> +	  .data = &cdns_pcie_ep_data },
>> +
>> +	{ },
>> +};
>> +
>> +static int cdns_pcie_ep_probe(struct platform_device *pdev)
>> +{
>> +	struct device *dev = &pdev->dev;
>> +	struct device_node *np = dev->of_node;
>> +	const struct of_device_id *of_id;
>> +	struct cdns_pcie_ep *ep;
>> +	struct cdns_pcie *pcie;
>> +	struct pci_epc *epc;
>> +	struct resource *res;
>> +	size_t max_regions;
>> +	int ret;
>> +
>> +	ep = devm_kzalloc(dev, sizeof(*ep), GFP_KERNEL);
>> +	if (!ep)
>> +		return -ENOMEM;
>> +
>> +	platform_set_drvdata(pdev, ep);
>> +
>> +	pcie = &ep->pcie;
>> +	pcie->is_rc = false;
>> +
>> +	of_id = of_match_node(cdns_pcie_ep_of_match, np);
>> +	ep->data = (const struct cdns_pcie_ep_data *)of_id->data;
>> +
>> +	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "reg");
>> +	pcie->reg_base = devm_ioremap_resource(dev, res);
>> +	if (IS_ERR(pcie->reg_base)) {
>> +		dev_err(dev, "missing \"reg\"\n");
>> +		return PTR_ERR(pcie->reg_base);
>> +	}
>> +
>> +	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "mem");
>> +	if (!res) {
>> +		dev_err(dev, "missing \"mem\"\n");
>> +		return -EINVAL;
>> +	}
>> +	pcie->mem_res = res;
>> +
>> +	max_regions = ep->data->max_regions;
>> +	ep->ob_addr = devm_kzalloc(dev, max_regions * sizeof(*ep->ob_addr),
>> +				   GFP_KERNEL);
>> +	if (!ep->ob_addr)
>> +		return -ENOMEM;
>> +
>> +	pm_runtime_enable(dev);
>> +	ret = pm_runtime_get_sync(dev);
>> +	if (ret < 0) {
>> +		dev_err(dev, "pm_runtime_get_sync() failed\n");
>> +		goto err_get_sync;
>> +	}
>> +
>> +	/* Disable all but function 0 (anyway BIT(0) is hardwired to 1). */
>> +	cdns_pcie_writel(pcie, CDNS_PCIE_LM_EP_FUNC_CFG, BIT(0));

why disable all functions?
>> +
>> +	epc = devm_pci_epc_create(dev, &cdns_pcie_epc_ops);
>> +	if (IS_ERR(epc)) {
>> +		dev_err(dev, "failed to create epc device\n");
>> +		ret = PTR_ERR(epc);
>> +		goto err_init;
>> +	}
>> +
>> +	ep->epc = epc;
>> +	epc_set_drvdata(epc, ep);
>> +
>> +	ret = of_property_read_u8(np, "max-functions", &epc->max_functions);
>> +	if (ret < 0)
>> +		epc->max_functions = 1;
>> +
>> +	ret = pci_epc_mem_init(epc, pcie->mem_res->start,
>> +			       resource_size(pcie->mem_res));
>> +	if (ret < 0) {
>> +		dev_err(dev, "failed to initialize the memory space\n");
>> +		goto err_init;
>> +	}
>> +
>> +	ep->irq_cpu_addr = pci_epc_mem_alloc_addr(epc, &ep->irq_phys_addr,
>> +						  SZ_128K);

Any reason why you chose SZ_128K?

Thanks
Kishon

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 3/5] PCI: cadence: Add host driver for Cadence PCIe controller
  2017-12-04 18:49         ` Ard Biesheuvel
@ 2017-12-06 11:32           ` Lorenzo Pieralisi
  2017-12-13 16:42             ` Cyrille Pitchen
  0 siblings, 1 reply; 39+ messages in thread
From: Lorenzo Pieralisi @ 2017-12-06 11:32 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Cyrille Pitchen, Bjorn Helgaas, kishon, linux-pci, adouglas,
	Scott Telford, dgary, kgopi, eandrews, Thomas Petazzoni, sureshp,
	nsekhar, linux-kernel, Rob Herring, devicetree

On Mon, Dec 04, 2017 at 06:49:12PM +0000, Ard Biesheuvel wrote:

[...]

> >> >> +static int cdns_pcie_host_init_root_port(struct cdns_pcie_rc *rc)
> >> >> +{
> >> >> +     const struct cdns_pcie_rc_data *data = rc->data;
> >> >> +     struct cdns_pcie *pcie = &rc->pcie;
> >> >> +     u8 pbn, sbn, subn;
> >> >> +     u32 value, ctrl;
> >> >> +
> >> >> +     /*
> >> >> +      * Set the root complex BAR configuration register:
> >> >> +      * - disable both BAR0 and BAR1.
> >> >> +      * - enable Prefetchable Memory Base and Limit registers in type 1
> >> >> +      *   config space (64 bits).
> >> >> +      * - enable IO Base and Limit registers in type 1 config
> >> >> +      *   space (32 bits).
> >> >> +      */
> >> >> +     ctrl = CDNS_PCIE_LM_BAR_CFG_CTRL_DISABLED;
> >> >> +     value = CDNS_PCIE_LM_RC_BAR_CFG_BAR0_CTRL(ctrl) |
> >> >> +             CDNS_PCIE_LM_RC_BAR_CFG_BAR1_CTRL(ctrl) |
> >> >> +             CDNS_PCIE_LM_RC_BAR_CFG_PREFETCH_MEM_ENABLE |
> >> >> +             CDNS_PCIE_LM_RC_BAR_CFG_PREFETCH_MEM_64BITS |
> >> >> +             CDNS_PCIE_LM_RC_BAR_CFG_IO_ENABLE |
> >> >> +             CDNS_PCIE_LM_RC_BAR_CFG_IO_32BITS;
> >> >> +     cdns_pcie_writel(pcie, CDNS_PCIE_LM_RC_BAR_CFG, value);
> >> >> +
> >> >> +     /* Set root port configuration space */
> >> >> +     if (data->vendor_id != 0xffff)
> >> >> +             cdns_pcie_rp_writew(pcie, PCI_VENDOR_ID, data->vendor_id);
> >> >> +     if (data->device_id != 0xffff)
> >> >> +             cdns_pcie_rp_writew(pcie, PCI_DEVICE_ID, data->device_id);
> >> >> +
> >> >> +     cdns_pcie_rp_writeb(pcie, PCI_CLASS_REVISION, 0);
> >> >> +     cdns_pcie_rp_writeb(pcie, PCI_CLASS_PROG, 0);
> >> >> +     cdns_pcie_rp_writew(pcie, PCI_CLASS_DEVICE, PCI_CLASS_BRIDGE_PCI);
> >> >> +
> >> >> +     pbn = rc->bus_range->start;
> >> >> +     sbn = pbn + 1; /* Single root port. */
> >> >> +     subn = rc->bus_range->end;
> >> >> +     cdns_pcie_rp_writeb(pcie, PCI_PRIMARY_BUS, pbn);
> >> >> +     cdns_pcie_rp_writeb(pcie, PCI_SECONDARY_BUS, sbn);
> >> >> +     cdns_pcie_rp_writeb(pcie, PCI_SUBORDINATE_BUS, subn);
> >> >
> >> > Again - I do not have the datasheet for this device therefore I would
> >> > kindly ask you how this works; it seems to me that what you are doing
> >> > here is done through normal configuration cycles in an ECAM compliant
> >> > system to program the RP PRIMARY/SECONDARY/SUBORDINATE bus - I would
> >> > like to understand why this code is needed.
> >> >
> >>
> >> I will test without those lines to test whether I can remove them.
> >>
> >> At first, the PCIe controller was tested by Cadence team: there was code
> >> in their bootloader to initialize the hardware (building the AXI <-> PCIe
> >> mappings, ...): the bootloader used to set the primary, secondary and
> >> subordinate bus numbers in the root port PCI config space.
> >>
> >> Also there was a hardware trick to redirect accesses of the lowest
> >> addresses in the AXI bus to the APB bus so the PCI configuration space of
> >> the root port could have been accessed from the AXI bus too.
> >>
> >> The AXI <-> PCIe mapping being done by the bootloader and the root port
> >> config space being accessible from the AXI bus, it was possible to use
> >> the pci-host-generic driver.
> >
> > That's what I was getting at. Ard (CC'ed) implemented a firmware set-up
> > (even though it was for a different IP but maybe it applies here) that
> > allows the kernel to use the pci-host-generic driver to initialize the
> > PCI controller:
> >
> > https://marc.info/?l=linux-pci&m=150360022626351&w=2
> >
> > I want to understand if there is an IP initialization sequence whereby
> > this IP can be made to work in an ECAM compliant way and therefore
> > reuse (most of) the pci-host-generic driver code.
> >
> 
> I think the Synopsys case is probably very similar. There are some
> registers that look like the config space of a root port, but in
> reality, every memory access that hits a live host bridge window is
> forwarded onto the link, regardless of the values of the bridge BARs.
> That is why in the quoted case, we can get away with ignoring the root
> port altogether, rather than jumping through hoops to make the IP
> block's PCI config space registers appear at B/D/F 0/0/0, while still
> having to filter type 0 config TLPs going onto the link (which is
> arguably the job of the root port to begin with)
> 
> So if this IP does implement a proper root port (i.e., one where the
> bridge BARs are actually taken into account, and where type 0 config
> TLPs are in fact filtered), I strongly recommend mapping its config
> space registers in an ECAM compliant matter, which implies no
> accessors in the OS.
> 
> However, given the observation above, this IP does not appear to
> filter type 0 config TLPs to devfn > 0 downstream of the root port
> either.

Unfortunately that matches my understanding too, let's wait for
Cyrille's reply on my query.

> >> However, the hardware trick won't be included in the final design since
> >> Cadence now wants to perform all PCI configuration space accesses through
> >> a small 4KB window at a fixed address on the AXI bus.
> >
> > I would like to understand what the HW "trick" (if you can disclose it)
> > was, because if there is a chance to reuse the pci-host-generic driver
> > for this IP I want to take it (yes it may entail some firmware set-up in
> > the bootloader) - was it a HW trick or a specific IP SW configuration ?
> >
> >> Also, we now want all initialisations to be done by the linux driver
> >> instead of the bootloader.
> >
> > That's a choice, I do not necessarily agree with it and I think we
> > should aim for more standardization on the PCI host bridge set-up
> > at firmware->kernel handover on DT platforms.
> >
> 
> Well, for one, it means this IP will never be supported by ACPI, which
> seems like a huge downside to me.

Yes it is - that's exactly where my comments were heading.

Thanks,
Lorenzo

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 1/5] PCI: Add vendor ID for Cadence
  2017-11-23 15:01 ` [PATCH 1/5] PCI: Add vendor ID for Cadence Cyrille Pitchen
@ 2017-12-06 21:27   ` Bjorn Helgaas
  0 siblings, 0 replies; 39+ messages in thread
From: Bjorn Helgaas @ 2017-12-06 21:27 UTC (permalink / raw)
  To: Cyrille Pitchen
  Cc: bhelgaas, kishon, lorenzo.pieralisi, linux-pci, adouglas,
	stelford, dgary, kgopi, eandrews, thomas.petazzoni, sureshp,
	nsekhar, linux-kernel, robh, devicetree

On Thu, Nov 23, 2017 at 04:01:46PM +0100, Cyrille Pitchen wrote:
> This patch adds a new PCI vendor ID for Cadence.
> 
> Signed-off-by: Cyrille Pitchen <cyrille.pitchen@free-electrons.com>

Acked-by: Bjorn Helgaas <bhelgaas@google.com>

I assume Lorenzo will merge this along with the rest of the Cadence
support.

> ---
>  include/linux/pci_ids.h | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h
> index ab20dc5db423..eb13e84e1fef 100644
> --- a/include/linux/pci_ids.h
> +++ b/include/linux/pci_ids.h
> @@ -2381,6 +2381,8 @@
>  
>  #define PCI_VENDOR_ID_LENOVO		0x17aa
>  
> +#define PCI_VENDOR_ID_CDNS		0x17cd
> +
>  #define PCI_VENDOR_ID_ARECA		0x17d3
>  #define PCI_DEVICE_ID_ARECA_1110	0x1110
>  #define PCI_DEVICE_ID_ARECA_1120	0x1120
> -- 
> 2.11.0
> 

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 5/5] PCI: cadence: add EndPoint Controller driver for Cadence PCIe controller
  2017-12-05  9:19     ` Kishon Vijay Abraham I
@ 2017-12-07 10:05       ` Philippe Ombredanne
  2017-12-13 16:03         ` Cyrille Pitchen
  2017-12-13 16:50       ` Cyrille Pitchen
  1 sibling, 1 reply; 39+ messages in thread
From: Philippe Ombredanne @ 2017-12-07 10:05 UTC (permalink / raw)
  To: Cyrille Pitchen
  Cc: Lorenzo Pieralisi, Kishon Vijay Abraham I, bhelgaas, linux-pci,
	adouglas, stelford, dgary, kgopi, eandrews, thomas.petazzoni,
	sureshp, nsekhar, LKML, Rob Herring,
	open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS

Cyrille,


On Tue, Dec 5, 2017 at 10:19 AM, Kishon Vijay Abraham I <kishon@ti.com> wrote:
> Hi,
>
> On Friday 01 December 2017 05:50 PM, Lorenzo Pieralisi wrote:
>> On Thu, Nov 23, 2017 at 04:01:50PM +0100, Cyrille Pitchen wrote:
>>> This patch adds support to the Cadence PCIe controller in endpoint mode.
>>
>> Please add a brief description to the log to describe the most salient
>> features.
>>
>>> Signed-off-by: Cyrille Pitchen <cyrille.pitchen@free-electrons.com>
>>> ---
>>>  drivers/pci/cadence/Kconfig           |   9 +
>>>  drivers/pci/cadence/Makefile          |   1 +
>>>  drivers/pci/cadence/pcie-cadence-ep.c | 553 ++++++++++++++++++++++++++++++++++
>>>  3 files changed, 563 insertions(+)
>>>  create mode 100644 drivers/pci/cadence/pcie-cadence-ep.c
>>>
>>> diff --git a/drivers/pci/cadence/Kconfig b/drivers/pci/cadence/Kconfig
>>> index 120306cae2aa..b2e6af71f39e 100644
>>> --- a/drivers/pci/cadence/Kconfig
>>> +++ b/drivers/pci/cadence/Kconfig
>>> @@ -21,4 +21,13 @@ config PCIE_CADENCE_HOST
>>>        mode. This PCIe controller may be embedded into many different vendors
>>>        SoCs.
>>>
>>> +config PCIE_CADENCE_EP
>>> +    bool "Cadence PCIe endpoint controller"
>>> +    depends on PCI_ENDPOINT
>>> +    select PCIE_CADENCE
>>> +    help
>>> +      Say Y here if you want to support the Cadence PCIe  controller in
>>> +      endpoint mode. This PCIe controller may be embedded into many
>>> +      different vendors SoCs.
>>> +
>>>  endif # PCI_CADENCE
>>> diff --git a/drivers/pci/cadence/Makefile b/drivers/pci/cadence/Makefile
>>> index d57d192d2595..61e9c8d6839d 100644
>>> --- a/drivers/pci/cadence/Makefile
>>> +++ b/drivers/pci/cadence/Makefile
>>> @@ -1,2 +1,3 @@
>>>  obj-$(CONFIG_PCIE_CADENCE) += pcie-cadence.o
>>>  obj-$(CONFIG_PCIE_CADENCE_HOST) += pcie-cadence-host.o
>>> +obj-$(CONFIG_PCIE_CADENCE_EP) += pcie-cadence-ep.o
>>> diff --git a/drivers/pci/cadence/pcie-cadence-ep.c b/drivers/pci/cadence/pcie-cadence-ep.c
>>> new file mode 100644
>>> index 000000000000..a1d761101a9c
>>> --- /dev/null
>>> +++ b/drivers/pci/cadence/pcie-cadence-ep.c
>>> @@ -0,0 +1,553 @@
>>> +/*
>>> + * Cadence PCIe host controller driver.
>>
>> You should update this comment.
>>
>>> + *
>>> + * Copyright (c) 2017 Cadence
>>> + *
>>> + * Author: Cyrille Pitchen <cyrille.pitchen@free-electrons.com>
>>> + *
>>> + * This program is free software; you can redistribute it and/or modify
>>> + * it under the terms of the GNU General Public License version 2 as
>>> + * published by the Free Software Foundation.
>>> + *
>>> + * This program is distributed in the hope that it will be useful,
>>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
>>> + * GNU General Public License for more details.
>>> + *
>>> + * You should have received a copy of the GNU General Public License along with
>>> + * this program.  If not, see <http://www.gnu.org/licenses/>.
>>> + */

Would you consider using the new SPDX ids instead of this long legalese blurb?
You can  check tglx's doc patches for details and Linus comments on
why he wants it a certain way with C++ // comments.

Here this could come out much streamlined with this top level comment block:

// SPDX-License-Identifier: GPL-2.0
// Copyright (c) 2017 Cadence
// Cadence PCIe host controller driver.
// Author: Cyrille Pitchen <cyrille.pitchen@free-electrons.com>

This is only a suggestion, but everyone kinda likes it when the
licensing boilerplate is minimal and out of the visual way in order to
focus is on what matters most: the code!
Thank you for your kind consideration.
-- 
Cordially
Philippe Ombredanne

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 5/5] PCI: cadence: add EndPoint Controller driver for Cadence PCIe controller
  2017-12-07 10:05       ` Philippe Ombredanne
@ 2017-12-13 16:03         ` Cyrille Pitchen
  0 siblings, 0 replies; 39+ messages in thread
From: Cyrille Pitchen @ 2017-12-13 16:03 UTC (permalink / raw)
  To: Philippe Ombredanne
  Cc: Lorenzo Pieralisi, Kishon Vijay Abraham I, bhelgaas, linux-pci,
	adouglas, stelford, dgary, kgopi, eandrews, thomas.petazzoni,
	sureshp, nsekhar, LKML, Rob Herring,
	open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS

Hi Philippe,

Le 07/12/2017 à 11:05, Philippe Ombredanne a écrit :
> Cyrille,
> 
> 
> On Tue, Dec 5, 2017 at 10:19 AM, Kishon Vijay Abraham I <kishon@ti.com> wrote:
>> Hi,
>>
>> On Friday 01 December 2017 05:50 PM, Lorenzo Pieralisi wrote:
>>> On Thu, Nov 23, 2017 at 04:01:50PM +0100, Cyrille Pitchen wrote:
>>>> This patch adds support to the Cadence PCIe controller in endpoint mode.
>>>
>>> Please add a brief description to the log to describe the most salient
>>> features.
>>>
>>>> Signed-off-by: Cyrille Pitchen <cyrille.pitchen@free-electrons.com>
>>>> ---
>>>>  drivers/pci/cadence/Kconfig           |   9 +
>>>>  drivers/pci/cadence/Makefile          |   1 +
>>>>  drivers/pci/cadence/pcie-cadence-ep.c | 553 ++++++++++++++++++++++++++++++++++
>>>>  3 files changed, 563 insertions(+)
>>>>  create mode 100644 drivers/pci/cadence/pcie-cadence-ep.c
>>>>
>>>> diff --git a/drivers/pci/cadence/Kconfig b/drivers/pci/cadence/Kconfig
>>>> index 120306cae2aa..b2e6af71f39e 100644
>>>> --- a/drivers/pci/cadence/Kconfig
>>>> +++ b/drivers/pci/cadence/Kconfig
>>>> @@ -21,4 +21,13 @@ config PCIE_CADENCE_HOST
>>>>        mode. This PCIe controller may be embedded into many different vendors
>>>>        SoCs.
>>>>
>>>> +config PCIE_CADENCE_EP
>>>> +    bool "Cadence PCIe endpoint controller"
>>>> +    depends on PCI_ENDPOINT
>>>> +    select PCIE_CADENCE
>>>> +    help
>>>> +      Say Y here if you want to support the Cadence PCIe  controller in
>>>> +      endpoint mode. This PCIe controller may be embedded into many
>>>> +      different vendors SoCs.
>>>> +
>>>>  endif # PCI_CADENCE
>>>> diff --git a/drivers/pci/cadence/Makefile b/drivers/pci/cadence/Makefile
>>>> index d57d192d2595..61e9c8d6839d 100644
>>>> --- a/drivers/pci/cadence/Makefile
>>>> +++ b/drivers/pci/cadence/Makefile
>>>> @@ -1,2 +1,3 @@
>>>>  obj-$(CONFIG_PCIE_CADENCE) += pcie-cadence.o
>>>>  obj-$(CONFIG_PCIE_CADENCE_HOST) += pcie-cadence-host.o
>>>> +obj-$(CONFIG_PCIE_CADENCE_EP) += pcie-cadence-ep.o
>>>> diff --git a/drivers/pci/cadence/pcie-cadence-ep.c b/drivers/pci/cadence/pcie-cadence-ep.c
>>>> new file mode 100644
>>>> index 000000000000..a1d761101a9c
>>>> --- /dev/null
>>>> +++ b/drivers/pci/cadence/pcie-cadence-ep.c
>>>> @@ -0,0 +1,553 @@
>>>> +/*
>>>> + * Cadence PCIe host controller driver.
>>>
>>> You should update this comment.
>>>
>>>> + *
>>>> + * Copyright (c) 2017 Cadence
>>>> + *
>>>> + * Author: Cyrille Pitchen <cyrille.pitchen@free-electrons.com>
>>>> + *
>>>> + * This program is free software; you can redistribute it and/or modify
>>>> + * it under the terms of the GNU General Public License version 2 as
>>>> + * published by the Free Software Foundation.
>>>> + *
>>>> + * This program is distributed in the hope that it will be useful,
>>>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>>>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
>>>> + * GNU General Public License for more details.
>>>> + *
>>>> + * You should have received a copy of the GNU General Public License along with
>>>> + * this program.  If not, see <http://www.gnu.org/licenses/>.
>>>> + */
> 
> Would you consider using the new SPDX ids instead of this long legalese blurb?
> You can  check tglx's doc patches for details and Linus comments on
> why he wants it a certain way with C++ // comments.
> 
> Here this could come out much streamlined with this top level comment block:
> 
> // SPDX-License-Identifier: GPL-2.0
> // Copyright (c) 2017 Cadence
> // Cadence PCIe host controller driver.
> // Author: Cyrille Pitchen <cyrille.pitchen@free-electrons.com>
> 
> This is only a suggestion, but everyone kinda likes it when the
> licensing boilerplate is minimal and out of the visual way in order to
> focus is on what matters most: the code!
> Thank you for your kind consideration.
> 

I don't mind changing for your proposal if this is the preferred way.
I'm updating the series.

Best regards,

Cyrille

-- 
Cyrille Pitchen, Free Electrons
Embedded Linux and Kernel engineering
http://free-electrons.com

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 3/5] PCI: cadence: Add host driver for Cadence PCIe controller
  2017-12-06 11:32           ` Lorenzo Pieralisi
@ 2017-12-13 16:42             ` Cyrille Pitchen
  0 siblings, 0 replies; 39+ messages in thread
From: Cyrille Pitchen @ 2017-12-13 16:42 UTC (permalink / raw)
  To: Lorenzo Pieralisi, Ard Biesheuvel
  Cc: Bjorn Helgaas, kishon, linux-pci, adouglas, Scott Telford, dgary,
	kgopi, eandrews, Thomas Petazzoni, sureshp, nsekhar,
	linux-kernel, Rob Herring, devicetree

Hi all,

Le 06/12/2017 à 12:32, Lorenzo Pieralisi a écrit :
> On Mon, Dec 04, 2017 at 06:49:12PM +0000, Ard Biesheuvel wrote:
> 
> [...]
> 
>>>>>> +static int cdns_pcie_host_init_root_port(struct cdns_pcie_rc *rc)
>>>>>> +{
>>>>>> +     const struct cdns_pcie_rc_data *data = rc->data;
>>>>>> +     struct cdns_pcie *pcie = &rc->pcie;
>>>>>> +     u8 pbn, sbn, subn;
>>>>>> +     u32 value, ctrl;
>>>>>> +
>>>>>> +     /*
>>>>>> +      * Set the root complex BAR configuration register:
>>>>>> +      * - disable both BAR0 and BAR1.
>>>>>> +      * - enable Prefetchable Memory Base and Limit registers in type 1
>>>>>> +      *   config space (64 bits).
>>>>>> +      * - enable IO Base and Limit registers in type 1 config
>>>>>> +      *   space (32 bits).
>>>>>> +      */
>>>>>> +     ctrl = CDNS_PCIE_LM_BAR_CFG_CTRL_DISABLED;
>>>>>> +     value = CDNS_PCIE_LM_RC_BAR_CFG_BAR0_CTRL(ctrl) |
>>>>>> +             CDNS_PCIE_LM_RC_BAR_CFG_BAR1_CTRL(ctrl) |
>>>>>> +             CDNS_PCIE_LM_RC_BAR_CFG_PREFETCH_MEM_ENABLE |
>>>>>> +             CDNS_PCIE_LM_RC_BAR_CFG_PREFETCH_MEM_64BITS |
>>>>>> +             CDNS_PCIE_LM_RC_BAR_CFG_IO_ENABLE |
>>>>>> +             CDNS_PCIE_LM_RC_BAR_CFG_IO_32BITS;
>>>>>> +     cdns_pcie_writel(pcie, CDNS_PCIE_LM_RC_BAR_CFG, value);
>>>>>> +
>>>>>> +     /* Set root port configuration space */
>>>>>> +     if (data->vendor_id != 0xffff)
>>>>>> +             cdns_pcie_rp_writew(pcie, PCI_VENDOR_ID, data->vendor_id);
>>>>>> +     if (data->device_id != 0xffff)
>>>>>> +             cdns_pcie_rp_writew(pcie, PCI_DEVICE_ID, data->device_id);
>>>>>> +
>>>>>> +     cdns_pcie_rp_writeb(pcie, PCI_CLASS_REVISION, 0);
>>>>>> +     cdns_pcie_rp_writeb(pcie, PCI_CLASS_PROG, 0);
>>>>>> +     cdns_pcie_rp_writew(pcie, PCI_CLASS_DEVICE, PCI_CLASS_BRIDGE_PCI);
>>>>>> +
>>>>>> +     pbn = rc->bus_range->start;
>>>>>> +     sbn = pbn + 1; /* Single root port. */
>>>>>> +     subn = rc->bus_range->end;
>>>>>> +     cdns_pcie_rp_writeb(pcie, PCI_PRIMARY_BUS, pbn);
>>>>>> +     cdns_pcie_rp_writeb(pcie, PCI_SECONDARY_BUS, sbn);
>>>>>> +     cdns_pcie_rp_writeb(pcie, PCI_SUBORDINATE_BUS, subn);
>>>>>
>>>>> Again - I do not have the datasheet for this device therefore I would
>>>>> kindly ask you how this works; it seems to me that what you are doing
>>>>> here is done through normal configuration cycles in an ECAM compliant
>>>>> system to program the RP PRIMARY/SECONDARY/SUBORDINATE bus - I would
>>>>> like to understand why this code is needed.
>>>>>
>>>>
>>>> I will test without those lines to test whether I can remove them.
>>>>
>>>> At first, the PCIe controller was tested by Cadence team: there was code
>>>> in their bootloader to initialize the hardware (building the AXI <-> PCIe
>>>> mappings, ...): the bootloader used to set the primary, secondary and
>>>> subordinate bus numbers in the root port PCI config space.
>>>>
>>>> Also there was a hardware trick to redirect accesses of the lowest
>>>> addresses in the AXI bus to the APB bus so the PCI configuration space of
>>>> the root port could have been accessed from the AXI bus too.
>>>>
>>>> The AXI <-> PCIe mapping being done by the bootloader and the root port
>>>> config space being accessible from the AXI bus, it was possible to use
>>>> the pci-host-generic driver.
>>>
>>> That's what I was getting at. Ard (CC'ed) implemented a firmware set-up
>>> (even though it was for a different IP but maybe it applies here) that
>>> allows the kernel to use the pci-host-generic driver to initialize the
>>> PCI controller:
>>>
>>> https://marc.info/?l=linux-pci&m=150360022626351&w=2
>>>
>>> I want to understand if there is an IP initialization sequence whereby
>>> this IP can be made to work in an ECAM compliant way and therefore
>>> reuse (most of) the pci-host-generic driver code.
>>>
>>
>> I think the Synopsys case is probably very similar. There are some
>> registers that look like the config space of a root port, but in
>> reality, every memory access that hits a live host bridge window is
>> forwarded onto the link, regardless of the values of the bridge BARs.
>> That is why in the quoted case, we can get away with ignoring the root
>> port altogether, rather than jumping through hoops to make the IP
>> block's PCI config space registers appear at B/D/F 0/0/0, while still
>> having to filter type 0 config TLPs going onto the link (which is
>> arguably the job of the root port to begin with)
>>
>> So if this IP does implement a proper root port (i.e., one where the
>> bridge BARs are actually taken into account, and where type 0 config
>> TLPs are in fact filtered), I strongly recommend mapping its config
>> space registers in an ECAM compliant matter, which implies no
>> accessors in the OS.
>>
>> However, given the observation above, this IP does not appear to
>> filter type 0 config TLPs to devfn > 0 downstream of the root port
>> either.
> 
> Unfortunately that matches my understanding too, let's wait for
> Cyrille's reply on my query.
> 
>>>> However, the hardware trick won't be included in the final design since
>>>> Cadence now wants to perform all PCI configuration space accesses through
>>>> a small 4KB window at a fixed address on the AXI bus.
>>>
>>> I would like to understand what the HW "trick" (if you can disclose it)
>>> was, because if there is a chance to reuse the pci-host-generic driver
>>> for this IP I want to take it (yes it may entail some firmware set-up in
>>> the bootloader) - was it a HW trick or a specific IP SW configuration ?
>>>

I will have to ask for details to Cadence designers if needed but when I
asked them about it, they explained me that AXI bus accesses in a small
window (I guess 4KB width) were redirected to the APB bus where lay the
registers for the root port PCI configuration space.

I was some hardware trick which won't be included in the final design, so
we can't enable or disable it by software.

Actually, this is requirement from the Cadence's customer that the host
driver can access the PCI config space of any device in sub ordinates
buses through a small memory area on the AXI bus. For some reason, they
don't want an ECAM compliant controller.

I don't know the reason but my guess is that they don't want to waste to
much space allocated to the PCIe controller on the AXI bus, likely on a
32-bit platform. As I said, this is such an assumption.

>>>> Also, we now want all initialisations to be done by the linux driver
>>>> instead of the bootloader.
>>>
>>> That's a choice, I do not necessarily agree with it and I think we
>>> should aim for more standardization on the PCI host bridge set-up
>>> at firmware->kernel handover on DT platforms.
>>>

It was another requirement of Cadence's customer that the PCIe host
controller initialization is done by the Linux driver rather than by
some boot loader.

Best regards,

Cyrille

>>
>> Well, for one, it means this IP will never be supported by ACPI, which
>> seems like a huge downside to me.
> 
> Yes it is - that's exactly where my comments were heading.
> 
> Thanks,
> Lorenzo
> 


-- 
Cyrille Pitchen, Free Electrons
Embedded Linux and Kernel engineering
http://free-electrons.com

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 5/5] PCI: cadence: add EndPoint Controller driver for Cadence PCIe controller
  2017-12-05  9:19     ` Kishon Vijay Abraham I
  2017-12-07 10:05       ` Philippe Ombredanne
@ 2017-12-13 16:50       ` Cyrille Pitchen
  2017-12-14 17:03         ` Cyrille Pitchen
  1 sibling, 1 reply; 39+ messages in thread
From: Cyrille Pitchen @ 2017-12-13 16:50 UTC (permalink / raw)
  To: Kishon Vijay Abraham I, Lorenzo Pieralisi
  Cc: bhelgaas, linux-pci, adouglas, stelford, dgary, kgopi, eandrews,
	thomas.petazzoni, sureshp, nsekhar, linux-kernel, robh,
	devicetree

Hi Kishon,

Le 05/12/2017 à 10:19, Kishon Vijay Abraham I a écrit :
> Hi,
> 
> On Friday 01 December 2017 05:50 PM, Lorenzo Pieralisi wrote:
>> On Thu, Nov 23, 2017 at 04:01:50PM +0100, Cyrille Pitchen wrote:
>>> This patch adds support to the Cadence PCIe controller in endpoint mode.
>>
>> Please add a brief description to the log to describe the most salient
>> features.
>>
>>> Signed-off-by: Cyrille Pitchen <cyrille.pitchen@free-electrons.com>
>>> ---
>>>  drivers/pci/cadence/Kconfig           |   9 +
>>>  drivers/pci/cadence/Makefile          |   1 +
>>>  drivers/pci/cadence/pcie-cadence-ep.c | 553 ++++++++++++++++++++++++++++++++++
>>>  3 files changed, 563 insertions(+)
>>>  create mode 100644 drivers/pci/cadence/pcie-cadence-ep.c
>>>
>>> diff --git a/drivers/pci/cadence/Kconfig b/drivers/pci/cadence/Kconfig
>>> index 120306cae2aa..b2e6af71f39e 100644
>>> --- a/drivers/pci/cadence/Kconfig
>>> +++ b/drivers/pci/cadence/Kconfig
>>> @@ -21,4 +21,13 @@ config PCIE_CADENCE_HOST
>>>  	  mode. This PCIe controller may be embedded into many different vendors
>>>  	  SoCs.
>>>  
>>> +config PCIE_CADENCE_EP
>>> +	bool "Cadence PCIe endpoint controller"
>>> +	depends on PCI_ENDPOINT
>>> +	select PCIE_CADENCE
>>> +	help
>>> +	  Say Y here if you want to support the Cadence PCIe  controller in
>>> +	  endpoint mode. This PCIe controller may be embedded into many
>>> +	  different vendors SoCs.
>>> +
>>>  endif # PCI_CADENCE
>>> diff --git a/drivers/pci/cadence/Makefile b/drivers/pci/cadence/Makefile
>>> index d57d192d2595..61e9c8d6839d 100644
>>> --- a/drivers/pci/cadence/Makefile
>>> +++ b/drivers/pci/cadence/Makefile
>>> @@ -1,2 +1,3 @@
>>>  obj-$(CONFIG_PCIE_CADENCE) += pcie-cadence.o
>>>  obj-$(CONFIG_PCIE_CADENCE_HOST) += pcie-cadence-host.o
>>> +obj-$(CONFIG_PCIE_CADENCE_EP) += pcie-cadence-ep.o
>>> diff --git a/drivers/pci/cadence/pcie-cadence-ep.c b/drivers/pci/cadence/pcie-cadence-ep.c
>>> new file mode 100644
>>> index 000000000000..a1d761101a9c
>>> --- /dev/null
>>> +++ b/drivers/pci/cadence/pcie-cadence-ep.c
>>> @@ -0,0 +1,553 @@
>>> +/*
>>> + * Cadence PCIe host controller driver.
>>
>> You should update this comment.
>>
>>> + *
>>> + * Copyright (c) 2017 Cadence
>>> + *
>>> + * Author: Cyrille Pitchen <cyrille.pitchen@free-electrons.com>
>>> + *
>>> + * This program is free software; you can redistribute it and/or modify
>>> + * it under the terms of the GNU General Public License version 2 as
>>> + * published by the Free Software Foundation.
>>> + *
>>> + * This program is distributed in the hope that it will be useful,
>>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
>>> + * GNU General Public License for more details.
>>> + *
>>> + * You should have received a copy of the GNU General Public License along with
>>> + * this program.  If not, see <http://www.gnu.org/licenses/>.
>>> + */
>>> +#include <linux/kernel.h>
>>> +#include <linux/of.h>
>>> +#include <linux/pci-epc.h>
>>> +#include <linux/platform_device.h>
>>> +#include <linux/pm_runtime.h>
>>> +#include <linux/sizes.h>
>>> +#include <linux/delay.h>
>>
>> Nit: alphabetical order.
>>
>>> +#include "pcie-cadence.h"
>>> +
>>> +#define CDNS_PCIE_EP_MIN_APERTURE		128	/* 128 bytes */
>>> +#define CDNS_PCIE_EP_IRQ_PCI_ADDR_NONE		0x1
>>> +#define CDNS_PCIE_EP_IRQ_PCI_ADDR_LEGACY	0x3
>>> +
>>> +/**
>>> + * struct cdns_pcie_ep_data - hardware specific data
>>> + * @max_regions: maximum nmuber of regions supported by hardware
>>
>> s/nmuber/number
>>
>>> + */
>>> +struct cdns_pcie_ep_data {
>>> +	size_t				max_regions;
>>> +};
>>> +
>>> +/**
>>> + * struct cdns_pcie_ep - private data for this PCIe endpoint controller driver
>>> + * @pcie: Cadence PCIe controller
>>> + * @data: pointer to a 'struct cdns_pcie_data'
>>> + */
>>> +struct cdns_pcie_ep {
>>> +	struct cdns_pcie		pcie;
>>> +	const struct cdns_pcie_ep_data	*data;
>>> +	struct pci_epc			*epc;
>>> +	unsigned long			ob_region_map;
>>> +	phys_addr_t			*ob_addr;
>>> +	phys_addr_t			irq_phys_addr;
>>> +	void __iomem			*irq_cpu_addr;
>>> +	u64				irq_pci_addr;
>>> +	u8				irq_pending;
>>> +};
>>> +
>>> +static int cdns_pcie_ep_write_header(struct pci_epc *epc,
>>> +				     struct pci_epf_header *hdr)
>>> +{
>>> +	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
>>> +	struct cdns_pcie *pcie = &ep->pcie;
>>> +	u8 fn = 0;
>>> +
>>> +	if (fn == 0) {
>>
>> I think there is some code to retrieve fn missing here.
> 
> hmm.. the endpoint core has to send the function number which right now it's
> not doing though it has the function number info in pci_epf.

Would it be OK if I add a new patch in the next series adding a
'struct pcie_epf *epf' as a 2nd argument to all handlers in the
'struct pcie_epc_ops'? This way I could have access to epf->func_no as needed.

Best regards,

Cyrille

>>
>>> +		u32 id = CDNS_PCIE_LM_ID_VENDOR(hdr->vendorid) |
>>> +			 CDNS_PCIE_LM_ID_SUBSYS(hdr->subsys_vendor_id);
>>> +		cdns_pcie_writel(pcie, CDNS_PCIE_LM_ID, id);
>>> +	}
>>> +	cdns_pcie_ep_fn_writew(pcie, fn, PCI_DEVICE_ID, hdr->deviceid);
>>> +	cdns_pcie_ep_fn_writeb(pcie, fn, PCI_REVISION_ID, hdr->revid);
>>> +	cdns_pcie_ep_fn_writeb(pcie, fn, PCI_CLASS_PROG, hdr->progif_code);
>>> +	cdns_pcie_ep_fn_writew(pcie, fn, PCI_CLASS_DEVICE,
>>> +			       hdr->subclass_code | hdr->baseclass_code << 8);
>>> +	cdns_pcie_ep_fn_writeb(pcie, fn, PCI_CACHE_LINE_SIZE,
>>> +			       hdr->cache_line_size);
>>> +	cdns_pcie_ep_fn_writew(pcie, fn, PCI_SUBSYSTEM_ID, hdr->subsys_id);
>>> +	cdns_pcie_ep_fn_writeb(pcie, fn, PCI_INTERRUPT_PIN, hdr->interrupt_pin);
>>> +
>>> +	return 0;
>>> +}
>>> +
>>> +static int cdns_pcie_ep_set_bar(struct pci_epc *epc, enum pci_barno bar,
>>> +				dma_addr_t bar_phys, size_t size, int flags)
>>> +{
>>> +	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
>>> +	struct cdns_pcie *pcie = &ep->pcie;
>>> +	u32 addr0, addr1, reg, cfg, b, aperture, ctrl;
>>> +	u8 fn = 0;
> 
> Here too endpoint core should send the function number..
>>> +	u64 sz;
>>> +
>>> +	/* BAR size is 2^(aperture + 7) */
>>> +	sz = max_t(size_t, size, CDNS_PCIE_EP_MIN_APERTURE);
>>> +	sz = 1ULL << fls64(sz - 1);
>>> +	aperture = ilog2(sz) - 7; /* 128B -> 0, 256B -> 1, 512B -> 2, ... */
>>> +
>>> +	if ((flags & PCI_BASE_ADDRESS_SPACE) == PCI_BASE_ADDRESS_SPACE_IO) {
>>> +		ctrl = CDNS_PCIE_LM_BAR_CFG_CTRL_IO_32BITS;
>>> +	} else {
>>> +		bool is_prefetch = !!(flags & PCI_BASE_ADDRESS_MEM_PREFETCH);
>>> +		bool is_64bits = sz > SZ_2G;
>>> +
>>> +		if (is_64bits && (bar & 1))
>>> +			return -EINVAL;
>>> +
>>> +		switch (is_64bits << 1 | is_prefetch) {
>>
>> I would not mind implementing this as a nested if-else, I am not a big
>> fan of using bool this way.
>>
>>> +		case 0:
>>> +			ctrl = CDNS_PCIE_LM_BAR_CFG_CTRL_MEM_32BITS;
>>> +			break;
>>> +
>>> +		case 1:
>>> +			ctrl = CDNS_PCIE_LM_BAR_CFG_CTRL_PREFETCH_MEM_32BITS;
>>> +			break;
>>> +
>>> +		case 2:
>>> +			ctrl = CDNS_PCIE_LM_BAR_CFG_CTRL_MEM_64BITS;
>>> +			break;
>>> +
>>> +		case 3:
>>> +			ctrl = CDNS_PCIE_LM_BAR_CFG_CTRL_PREFETCH_MEM_64BITS;
>>> +			break;
>>> +		}
>>> +	}
>>> +
>>> +	addr0 = lower_32_bits(bar_phys);
>>> +	addr1 = upper_32_bits(bar_phys);
>>> +	cdns_pcie_writel(pcie, CDNS_PCIE_AT_IB_EP_FUNC_BAR_ADDR0(fn, bar),
>>> +			 addr0);
> 
> It would be nice if you can have defines for CDNS_PCIE_AT_IB_EP_FUNC_BAR_ADDR0
> included in this patch rather than "PCI: cadence: Add host driver for Cadence
> PCIe controller". All EP specific functions in header file should be included here.
>>> +	cdns_pcie_writel(pcie, CDNS_PCIE_AT_IB_EP_FUNC_BAR_ADDR1(fn, bar),
>>> +			 addr1);
>>
>> Is fn always 0 ?
>>
>>> +	if (bar < BAR_4) {
>>> +		reg = CDNS_PCIE_LM_EP_FUNC_BAR_CFG0(fn);
>>> +		b = bar;
>>> +	} else {
>>> +		reg = CDNS_PCIE_LM_EP_FUNC_BAR_CFG1(fn);
>>> +		b = bar - BAR_4;
>>> +	}
>>> +
>>> +	cfg = cdns_pcie_readl(pcie, reg);
>>> +	cfg &= ~(CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_APERTURE_MASK(b) |
>>> +		 CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(b));
>>> +	cfg |= (CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_APERTURE(b, aperture) |
>>> +		CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_CTRL(b, ctrl));
>>> +	cdns_pcie_writel(pcie, reg, cfg);
>>> +
>>> +	return 0;
>>> +}
>>> +
>>> +static void cdns_pcie_ep_clear_bar(struct pci_epc *epc, enum pci_barno bar)
>>> +{
>>> +	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
>>> +	struct cdns_pcie *pcie = &ep->pcie;
>>> +	u32 reg, cfg, b, ctrl;
>>> +	u8 fn = 0;
> 
> Here too endpoint core should send the function number..
>>> +
>>> +	if (bar < BAR_4) {
>>> +		reg = CDNS_PCIE_LM_EP_FUNC_BAR_CFG0(fn);
>>> +		b = bar;
>>> +	} else {
>>> +		reg = CDNS_PCIE_LM_EP_FUNC_BAR_CFG1(fn);
>>> +		b = bar - BAR_4;
>>> +	}
>>> +
>>> +	ctrl = CDNS_PCIE_LM_BAR_CFG_CTRL_DISABLED;
>>> +	cfg = cdns_pcie_readl(pcie, reg);
>>> +	cfg &= ~(CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_APERTURE_MASK(b) |
>>> +		 CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(b));
>>> +	cfg |= CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(ctrl);
>>> +	cdns_pcie_writel(pcie, reg, cfg);
>>> +
>>> +	cdns_pcie_writel(pcie, CDNS_PCIE_AT_IB_EP_FUNC_BAR_ADDR0(fn, bar), 0);
>>> +	cdns_pcie_writel(pcie, CDNS_PCIE_AT_IB_EP_FUNC_BAR_ADDR1(fn, bar), 0);
>>> +}
>>> +
>>> +static int cdns_pcie_ep_map_addr(struct pci_epc *epc, phys_addr_t addr,
>>> +				 u64 pci_addr, size_t size)
>>> +{
>>> +	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
>>> +	struct cdns_pcie *pcie = &ep->pcie;
>>> +	u32 r;
>>> +
>>> +	r = find_first_zero_bit(&ep->ob_region_map, sizeof(ep->ob_region_map));
>>
>> Second argument must be in bits not bytes.
>>
>> https://marc.info/?l=linux-pci&m=151179781225513&w=2
>>
>>> +	if (r >= ep->data->max_regions - 1) {
>>> +		dev_err(&epc->dev, "no free outbound region\n");
>>> +		return -EINVAL;
>>> +	}
>>> +
>>> +	cdns_pcie_set_outbound_region(pcie, r, false, addr, pci_addr, size);
>>> +
>>> +	set_bit(r, &ep->ob_region_map);
>>> +	ep->ob_addr[r] = addr;
>>> +
>>> +	return 0;
>>> +}
>>> +
>>> +static void cdns_pcie_ep_unmap_addr(struct pci_epc *epc, phys_addr_t addr)
>>> +{
>>> +	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
>>> +	struct cdns_pcie *pcie = &ep->pcie;
>>> +	u32 r;
>>> +
>>> +	for (r = 0; r < ep->data->max_regions - 1; r++)
>>> +		if (ep->ob_addr[r] == addr)
>>> +			break;
>>> +
>>> +	if (r >= ep->data->max_regions - 1)
>>
>> == ?
>>
>>> +		return;
>>> +
>>> +	cdns_pcie_reset_outbound_region(pcie, r);
>>> +
>>> +	ep->ob_addr[r] = 0;
>>> +	clear_bit(r, &ep->ob_region_map);
>>> +}
>>> +
>>> +static int cdns_pcie_ep_set_msi(struct pci_epc *epc, u8 mmc)
>>> +{
>>> +	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
>>> +	struct cdns_pcie *pcie = &ep->pcie;
>>> +	u32 cap = CDNS_PCIE_EP_FUNC_MSI_CAP_OFFSET;
>>> +	u16 flags;
>>> +	u8 fn = 0;
>>> +
>>> +	/* Validate the ID of the MSI Capability structure. */
>>> +	if (cdns_pcie_ep_fn_readb(pcie, fn, cap) != PCI_CAP_ID_MSI)
>>> +		return -EINVAL;
>>> +
>>> +	/*
>>> +	 * Set the Multiple Message Capable bitfield into the Message Control
>>> +	 * register.
>>> +	 */
>>> +	flags = cdns_pcie_ep_fn_readw(pcie, fn, cap + PCI_MSI_FLAGS);
>>> +	flags = (flags & ~PCI_MSI_FLAGS_QMASK) | (mmc << 1);
>>> +	flags |= PCI_MSI_FLAGS_64BIT;
>>> +	flags &= ~PCI_MSI_FLAGS_MASKBIT;
> 
> Any reason why "Per-vector masking capable" is reset?
>>> +	cdns_pcie_ep_fn_writew(pcie, fn, cap + PCI_MSI_FLAGS, flags);
>>> +
>>> +	return 0;
>>> +}
>>> +
>>> +static int cdns_pcie_ep_get_msi(struct pci_epc *epc)
>>> +{
>>> +	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
>>> +	struct cdns_pcie *pcie = &ep->pcie;
>>> +	u32 cap = CDNS_PCIE_EP_FUNC_MSI_CAP_OFFSET;
>>> +	u16 flags, mmc, mme;
>>> +	u8 fn = 0;
>>> +
>>> +	/* Validate the ID of the MSI Capability structure. */
>>> +	if (cdns_pcie_ep_fn_readb(pcie, fn, cap) != PCI_CAP_ID_MSI)
>>> +		return -EINVAL;
>>> +
>>> +	/* Validate that the MSI feature is actually enabled. */
>>> +	flags = cdns_pcie_ep_fn_readw(pcie, fn, cap + PCI_MSI_FLAGS);
>>> +	if (!(flags & PCI_MSI_FLAGS_ENABLE))
>>> +		return -EINVAL;
>>> +
>>> +	/*
>>> +	 * Get the Multiple Message Enable bitfield from the Message Control
>>> +	 * register.
>>> +	 */
>>> +	mmc = (flags & PCI_MSI_FLAGS_QMASK) >> 1;
>>> +	mme = (flags & PCI_MSI_FLAGS_QSIZE) >> 4;
>>> +	if (mme > mmc)
>>> +		mme = mmc;
>>> +	if (mme > 5)
>>> +		mme = 5;
> 
> I'm not sure if both these above checks are required..
>>
>> You should comment on what this 5 means and why it is fine to cap mme.
>>
>>> +
>>> +	return mme;
>>> +}
>>> +
>>> +static void cdns_pcie_ep_assert_intx(struct cdns_pcie_ep *ep, u8 fn,
>>> +				     u8 intx, bool is_asserted)
>>> +{
>>> +	struct cdns_pcie *pcie = &ep->pcie;
>>> +	u32 r = ep->data->max_regions - 1;
>>> +	u32 offset;
>>> +	u16 status;
>>> +	u8 msg_code;
>>> +
>>> +	intx &= 3;
>>> +
>>> +	/* Set the outbound region if needed. */
>>> +	if (unlikely(ep->irq_pci_addr != CDNS_PCIE_EP_IRQ_PCI_ADDR_LEGACY)) {
>>> +		/* Last region was reserved for IRQ writes. */
>>> +		cdns_pcie_set_outbound_region_for_normal_msg(pcie, r,
>>> +							     ep->irq_phys_addr);
>>> +		ep->irq_pci_addr = CDNS_PCIE_EP_IRQ_PCI_ADDR_LEGACY;
>>> +	}
>>> +
>>> +	if (is_asserted) {
>>> +		ep->irq_pending |= BIT(intx);
>>> +		msg_code = MSG_CODE_ASSERT_INTA + intx;
>>> +	} else {
>>> +		ep->irq_pending &= ~BIT(intx);
>>> +		msg_code = MSG_CODE_DEASSERT_INTA + intx;
>>> +	}
>>> +
>>> +	status = cdns_pcie_ep_fn_readw(pcie, fn, PCI_STATUS);
>>> +	if (((status & PCI_STATUS_INTERRUPT) != 0) ^ (ep->irq_pending != 0)) {
>>> +		status ^= PCI_STATUS_INTERRUPT;
>>> +		cdns_pcie_ep_fn_writew(pcie, fn, PCI_STATUS, status);
>>> +	}
> 
> here you are setting the PCI_STATUS_INTERRUPT even before sending the ASSERT
> message.
>>> +
>>> +	offset = CDNS_PCIE_NORMAL_MSG_ROUTING(MSG_ROUTING_LOCAL) |
>>> +		 CDNS_PCIE_NORMAL_MSG_CODE(msg_code) |
>>> +		 CDNS_PCIE_MSG_NO_DATA;
>>> +	writel(0, ep->irq_cpu_addr + offset);
>>> +}
>>> +
>>> +static int cdns_pcie_ep_send_legacy_irq(struct cdns_pcie_ep *ep, u8 fn, u8 intx)
>>> +{
>>> +	u16 cmd;
>>> +
>>> +	cmd = cdns_pcie_ep_fn_readw(&ep->pcie, fn, PCI_COMMAND);
>>> +	if (cmd & PCI_COMMAND_INTX_DISABLE)
>>> +		return -EINVAL;
>>> +
>>> +	cdns_pcie_ep_assert_intx(ep, fn, intx, true);
>>> +	mdelay(1);
>>
>> Add a comment please to explain the mdelay value.
>>
>>> +	cdns_pcie_ep_assert_intx(ep, fn, intx, false);
>>> +	return 0;
>>> +}
>>> +
>>> +static int cdns_pcie_ep_raise_irq(struct pci_epc *epc,
>>> +				  enum pci_epc_irq_type type, u8 interrupt_num)
>>> +{
>>> +	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
>>> +	struct cdns_pcie *pcie = &ep->pcie;
>>> +	u32 cap = CDNS_PCIE_EP_FUNC_MSI_CAP_OFFSET;
>>> +	u16 flags, mmc, mme, data, data_mask;
>>> +	u8 msi_count;
>>> +	u64 pci_addr, pci_addr_mask = 0xff;
>>> +	u8 fn = 0;
>>> +
>>> +	/* Handle legacy IRQ. */
>>> +	if (type == PCI_EPC_IRQ_LEGACY)
>>> +		return cdns_pcie_ep_send_legacy_irq(ep, fn, 0);
>>> +
>>> +	/* Otherwise MSI. */
>>> +	if (type != PCI_EPC_IRQ_MSI)
>>> +		return -EINVAL;
> 
> MSI-X?
>>> +
>>> +	/* Check whether the MSI feature has been enabled by the PCI host. */
>>> +	flags = cdns_pcie_ep_fn_readw(pcie, fn, cap + PCI_MSI_FLAGS);
>>> +	if (!(flags & PCI_MSI_FLAGS_ENABLE))
>>> +		return -EINVAL;
>>> +
>>> +	/* Get the number of enabled MSIs */
>>> +	mmc = (flags & PCI_MSI_FLAGS_QMASK) >> 1;
>>> +	mme = (flags & PCI_MSI_FLAGS_QSIZE) >> 4;
>>> +	if (mme > mmc)
>>> +		mme = mmc;
>>> +	if (mme > 5)
>>> +		mme = 5;
>>
>> Same comment as above.
>>
>>> +	msi_count = 1 << mme;
>>> +	if (!interrupt_num || interrupt_num > msi_count)
>>> +		return -EINVAL;
>>> +
>>> +	/* Compute the data value to be written. */
>>> +	data_mask = msi_count - 1;
>>> +	data = cdns_pcie_ep_fn_readw(pcie, fn, cap + PCI_MSI_DATA_64);
>>> +	data = (data & ~data_mask) | ((interrupt_num - 1) & data_mask);
>>> +
>>> +	/* Get the PCI address where to write the data into. */
>>> +	pci_addr = cdns_pcie_ep_fn_readl(pcie, fn, cap + PCI_MSI_ADDRESS_HI);
>>> +	pci_addr <<= 32;
>>> +	pci_addr |= cdns_pcie_ep_fn_readl(pcie, fn, cap + PCI_MSI_ADDRESS_LO);
>>> +	pci_addr &= GENMASK_ULL(63, 2);
>>> +
>>> +	/* Set the outbound region if needed. */
>>> +	if (unlikely(ep->irq_pci_addr != pci_addr)) {
>>> +		/* Last region was reserved for IRQ writes. */
>>> +		cdns_pcie_set_outbound_region(pcie, ep->data->max_regions - 1,
>>> +					      false,
>>> +					      ep->irq_phys_addr,
>>> +					      pci_addr & ~pci_addr_mask,
>>> +					      pci_addr_mask + 1);
>>> +		ep->irq_pci_addr = pci_addr;
>>> +	}
>>> +	writew(data, ep->irq_cpu_addr + (pci_addr & pci_addr_mask));
>>> +
>>> +	return 0;
>>> +}
>>> +
>>> +static int cdns_pcie_ep_start(struct pci_epc *epc)
>>> +{
>>> +	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
>>> +	struct cdns_pcie *pcie = &ep->pcie;
>>> +	struct pci_epf *epf;
>>> +	u32 cfg;
>>> +	u8 fn = 0;
>>> +
>>> +	/* Enable this endpoint function. */
>>> +	cfg = cdns_pcie_readl(pcie, CDNS_PCIE_LM_EP_FUNC_CFG);
>>> +	cfg |= BIT(fn);
>>> +	cdns_pcie_writel(pcie, CDNS_PCIE_LM_EP_FUNC_CFG, cfg);
>>> +
>>> +	/*
>>> +	 * Already linked-up: don't call directly pci_epc_linkup() because we've
>>> +	 * already locked the epc->lock.
>>> +	 */
> 
> Not sure what you mean by linked-up here?
>>> +	list_for_each_entry(epf, &epc->pci_epf, list)
>>> +		pci_epf_linkup(epf);
> 
>>> +
>>> +	return 0;
>>> +}
>>> +
>>> +static void cdns_pcie_ep_stop(struct pci_epc *epc)
>>> +{
>>> +	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
>>> +	struct cdns_pcie *pcie = &ep->pcie;
>>> +	u32 cfg;
>>> +	u8 fn = 0;
>>> +
>>> +	/* Disable this endpoint function (function 0 can't be disabled). */
>>
>> I do not understand this comment and how it applies to the code,
>> in other words fn is always 0 here (so it can't be disabled)
>> I do not understand what this code is there for.
>>
>>> +	cfg = cdns_pcie_readl(pcie, CDNS_PCIE_LM_EP_FUNC_CFG);
>>> +	cfg &= ~BIT(fn);
>>> +	cdns_pcie_writel(pcie, CDNS_PCIE_LM_EP_FUNC_CFG, cfg);
>>> +}
>>> +
>>> +static const struct pci_epc_ops cdns_pcie_epc_ops = {
>>> +	.write_header	= cdns_pcie_ep_write_header,
>>> +	.set_bar	= cdns_pcie_ep_set_bar,
>>> +	.clear_bar	= cdns_pcie_ep_clear_bar,
>>> +	.map_addr	= cdns_pcie_ep_map_addr,
>>> +	.unmap_addr	= cdns_pcie_ep_unmap_addr,
>>> +	.set_msi	= cdns_pcie_ep_set_msi,
>>> +	.get_msi	= cdns_pcie_ep_get_msi,
>>> +	.raise_irq	= cdns_pcie_ep_raise_irq,
>>> +	.start		= cdns_pcie_ep_start,
>>> +	.stop		= cdns_pcie_ep_stop,
>>> +};
>>> +
>>> +static const struct cdns_pcie_ep_data cdns_pcie_ep_data = {
>>> +	.max_regions	= 16,
>>> +};
>>
>> As I mentioned in patch 3, should this be set-up with DT ?
>>
>> Thanks,
>> Lorenzo
>>
>>> +
>>> +static const struct of_device_id cdns_pcie_ep_of_match[] = {
>>> +	{ .compatible = "cdns,cdns-pcie-ep",
>>> +	  .data = &cdns_pcie_ep_data },
>>> +
>>> +	{ },
>>> +};
>>> +
>>> +static int cdns_pcie_ep_probe(struct platform_device *pdev)
>>> +{
>>> +	struct device *dev = &pdev->dev;
>>> +	struct device_node *np = dev->of_node;
>>> +	const struct of_device_id *of_id;
>>> +	struct cdns_pcie_ep *ep;
>>> +	struct cdns_pcie *pcie;
>>> +	struct pci_epc *epc;
>>> +	struct resource *res;
>>> +	size_t max_regions;
>>> +	int ret;
>>> +
>>> +	ep = devm_kzalloc(dev, sizeof(*ep), GFP_KERNEL);
>>> +	if (!ep)
>>> +		return -ENOMEM;
>>> +
>>> +	platform_set_drvdata(pdev, ep);
>>> +
>>> +	pcie = &ep->pcie;
>>> +	pcie->is_rc = false;
>>> +
>>> +	of_id = of_match_node(cdns_pcie_ep_of_match, np);
>>> +	ep->data = (const struct cdns_pcie_ep_data *)of_id->data;
>>> +
>>> +	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "reg");
>>> +	pcie->reg_base = devm_ioremap_resource(dev, res);
>>> +	if (IS_ERR(pcie->reg_base)) {
>>> +		dev_err(dev, "missing \"reg\"\n");
>>> +		return PTR_ERR(pcie->reg_base);
>>> +	}
>>> +
>>> +	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "mem");
>>> +	if (!res) {
>>> +		dev_err(dev, "missing \"mem\"\n");
>>> +		return -EINVAL;
>>> +	}
>>> +	pcie->mem_res = res;
>>> +
>>> +	max_regions = ep->data->max_regions;
>>> +	ep->ob_addr = devm_kzalloc(dev, max_regions * sizeof(*ep->ob_addr),
>>> +				   GFP_KERNEL);
>>> +	if (!ep->ob_addr)
>>> +		return -ENOMEM;
>>> +
>>> +	pm_runtime_enable(dev);
>>> +	ret = pm_runtime_get_sync(dev);
>>> +	if (ret < 0) {
>>> +		dev_err(dev, "pm_runtime_get_sync() failed\n");
>>> +		goto err_get_sync;
>>> +	}
>>> +
>>> +	/* Disable all but function 0 (anyway BIT(0) is hardwired to 1). */
>>> +	cdns_pcie_writel(pcie, CDNS_PCIE_LM_EP_FUNC_CFG, BIT(0));
> 
> why disable all functions?
>>> +
>>> +	epc = devm_pci_epc_create(dev, &cdns_pcie_epc_ops);
>>> +	if (IS_ERR(epc)) {
>>> +		dev_err(dev, "failed to create epc device\n");
>>> +		ret = PTR_ERR(epc);
>>> +		goto err_init;
>>> +	}
>>> +
>>> +	ep->epc = epc;
>>> +	epc_set_drvdata(epc, ep);
>>> +
>>> +	ret = of_property_read_u8(np, "max-functions", &epc->max_functions);
>>> +	if (ret < 0)
>>> +		epc->max_functions = 1;
>>> +
>>> +	ret = pci_epc_mem_init(epc, pcie->mem_res->start,
>>> +			       resource_size(pcie->mem_res));
>>> +	if (ret < 0) {
>>> +		dev_err(dev, "failed to initialize the memory space\n");
>>> +		goto err_init;
>>> +	}
>>> +
>>> +	ep->irq_cpu_addr = pci_epc_mem_alloc_addr(epc, &ep->irq_phys_addr,
>>> +						  SZ_128K);
> 
> Any reason why you chose SZ_128K?
> 
> Thanks
> Kishon
> 


-- 
Cyrille Pitchen, Free Electrons
Embedded Linux and Kernel engineering
http://free-electrons.com

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 5/5] PCI: cadence: add EndPoint Controller driver for Cadence PCIe controller
  2017-12-13 16:50       ` Cyrille Pitchen
@ 2017-12-14 17:03         ` Cyrille Pitchen
  2017-12-15  5:49           ` Kishon Vijay Abraham I
  0 siblings, 1 reply; 39+ messages in thread
From: Cyrille Pitchen @ 2017-12-14 17:03 UTC (permalink / raw)
  To: Kishon Vijay Abraham I, Lorenzo Pieralisi
  Cc: bhelgaas, linux-pci, adouglas, stelford, dgary, kgopi, eandrews,
	thomas.petazzoni, sureshp, nsekhar, linux-kernel, robh,
	devicetree

Le 13/12/2017 à 17:50, Cyrille Pitchen a écrit :
> Hi Kishon,
> 
> Le 05/12/2017 à 10:19, Kishon Vijay Abraham I a écrit :
>> Hi,
>>
>> On Friday 01 December 2017 05:50 PM, Lorenzo Pieralisi wrote:
>>> On Thu, Nov 23, 2017 at 04:01:50PM +0100, Cyrille Pitchen wrote:
>>>> This patch adds support to the Cadence PCIe controller in endpoint mode.
>>>
>>> Please add a brief description to the log to describe the most salient
>>> features.
>>>
>>>> Signed-off-by: Cyrille Pitchen <cyrille.pitchen@free-electrons.com>
>>>> ---
>>>>  drivers/pci/cadence/Kconfig           |   9 +
>>>>  drivers/pci/cadence/Makefile          |   1 +
>>>>  drivers/pci/cadence/pcie-cadence-ep.c | 553 ++++++++++++++++++++++++++++++++++
>>>>  3 files changed, 563 insertions(+)
>>>>  create mode 100644 drivers/pci/cadence/pcie-cadence-ep.c
[...]
>>>> +static int cdns_pcie_ep_write_header(struct pci_epc *epc,
>>>> +				     struct pci_epf_header *hdr)
>>>> +{
>>>> +	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
>>>> +	struct cdns_pcie *pcie = &ep->pcie;
>>>> +	u8 fn = 0;
>>>> +
>>>> +	if (fn == 0) {
>>>
>>> I think there is some code to retrieve fn missing here.
>>
>> hmm.. the endpoint core has to send the function number which right now it's
>> not doing though it has the function number info in pci_epf.
> 
> Would it be OK if I add a new patch in the next series adding a
> 'struct pcie_epf *epf' as a 2nd argument to all handlers in the
> 'struct pcie_epc_ops'? This way I could have access to epf->func_no as needed.
>

Except for pci_epc_start() and pci_epc_stop(), both only called from
pci_epc_start_store(), I don't have trouble getting the epf value to be passed
as a 2nd argument to all other handlers in 'struct pcie_epc_ops'.

Now my next question is: is it better to keep the 'struct pci_epc *epc' as
the 1st argument of all those handlers or do you prefer me to remove it as
the value can always be retrieved from epf->epc, since now we provide epf as
a new argument ?

I have no personal preference. Please let me know your choice :)
 
> Best regards,
> 
> Cyrille
> 

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 5/5] PCI: cadence: add EndPoint Controller driver for Cadence PCIe controller
  2017-12-14 17:03         ` Cyrille Pitchen
@ 2017-12-15  5:49           ` Kishon Vijay Abraham I
  2017-12-15 11:49             ` Cyrille Pitchen
  0 siblings, 1 reply; 39+ messages in thread
From: Kishon Vijay Abraham I @ 2017-12-15  5:49 UTC (permalink / raw)
  To: Cyrille Pitchen, Lorenzo Pieralisi
  Cc: bhelgaas, linux-pci, adouglas, stelford, dgary, kgopi, eandrews,
	thomas.petazzoni, sureshp, nsekhar, linux-kernel, robh,
	devicetree

Hi Cyrille,

On Thursday 14 December 2017 10:33 PM, Cyrille Pitchen wrote:
> Le 13/12/2017 à 17:50, Cyrille Pitchen a écrit :
>> Hi Kishon,
>>
>> Le 05/12/2017 à 10:19, Kishon Vijay Abraham I a écrit :
>>> Hi,
>>>
>>> On Friday 01 December 2017 05:50 PM, Lorenzo Pieralisi wrote:
>>>> On Thu, Nov 23, 2017 at 04:01:50PM +0100, Cyrille Pitchen wrote:
>>>>> This patch adds support to the Cadence PCIe controller in endpoint mode.
>>>>
>>>> Please add a brief description to the log to describe the most salient
>>>> features.
>>>>
>>>>> Signed-off-by: Cyrille Pitchen <cyrille.pitchen@free-electrons.com>
>>>>> ---
>>>>>  drivers/pci/cadence/Kconfig           |   9 +
>>>>>  drivers/pci/cadence/Makefile          |   1 +
>>>>>  drivers/pci/cadence/pcie-cadence-ep.c | 553 ++++++++++++++++++++++++++++++++++
>>>>>  3 files changed, 563 insertions(+)
>>>>>  create mode 100644 drivers/pci/cadence/pcie-cadence-ep.c
> [...]
>>>>> +static int cdns_pcie_ep_write_header(struct pci_epc *epc,
>>>>> +				     struct pci_epf_header *hdr)
>>>>> +{
>>>>> +	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
>>>>> +	struct cdns_pcie *pcie = &ep->pcie;
>>>>> +	u8 fn = 0;
>>>>> +
>>>>> +	if (fn == 0) {
>>>>
>>>> I think there is some code to retrieve fn missing here.
>>>
>>> hmm.. the endpoint core has to send the function number which right now it's
>>> not doing though it has the function number info in pci_epf.
>>
>> Would it be OK if I add a new patch in the next series adding a
>> 'struct pcie_epf *epf' as a 2nd argument to all handlers in the
>> 'struct pcie_epc_ops'? This way I could have access to epf->func_no as needed.

I prefer we just pass the func_no as the second argument. Do you see a problem
with that?
>>
> 
> Except for pci_epc_start() and pci_epc_stop(), both only called from
> pci_epc_start_store(), I don't have trouble getting the epf value to be passed
> as a 2nd argument to all other handlers in 'struct pcie_epc_ops'.

pci_epc_start()/pci_epc_stop() is used to start/stop the end point controller
as a whole and shouldn't need epf.
> 
> Now my next question is: is it better to keep the 'struct pci_epc *epc' as
> the 1st argument of all those handlers or do you prefer me to remove it as
> the value can always be retrieved from epf->epc, since now we provide epf as
> a new argument ?

Do we really need to pass epf when func_no is all we need?

Thanks
Kishon

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 5/5] PCI: cadence: add EndPoint Controller driver for Cadence PCIe controller
  2017-12-15  5:49           ` Kishon Vijay Abraham I
@ 2017-12-15 11:49             ` Cyrille Pitchen
  0 siblings, 0 replies; 39+ messages in thread
From: Cyrille Pitchen @ 2017-12-15 11:49 UTC (permalink / raw)
  To: Kishon Vijay Abraham I, Lorenzo Pieralisi
  Cc: bhelgaas, linux-pci, adouglas, stelford, dgary, kgopi, eandrews,
	thomas.petazzoni, sureshp, nsekhar, linux-kernel, robh,
	devicetree

Hi Kishon,

Le 15/12/2017 à 06:49, Kishon Vijay Abraham I a écrit :
> Hi Cyrille,
> 
> On Thursday 14 December 2017 10:33 PM, Cyrille Pitchen wrote:
>> Le 13/12/2017 à 17:50, Cyrille Pitchen a écrit :
>>> Hi Kishon,
>>>
>>> Le 05/12/2017 à 10:19, Kishon Vijay Abraham I a écrit :
>>>> Hi,
>>>>
>>>> On Friday 01 December 2017 05:50 PM, Lorenzo Pieralisi wrote:
>>>>> On Thu, Nov 23, 2017 at 04:01:50PM +0100, Cyrille Pitchen wrote:
>>>>>> This patch adds support to the Cadence PCIe controller in endpoint mode.
>>>>>
>>>>> Please add a brief description to the log to describe the most salient
>>>>> features.
>>>>>
>>>>>> Signed-off-by: Cyrille Pitchen <cyrille.pitchen@free-electrons.com>
>>>>>> ---
>>>>>>  drivers/pci/cadence/Kconfig           |   9 +
>>>>>>  drivers/pci/cadence/Makefile          |   1 +
>>>>>>  drivers/pci/cadence/pcie-cadence-ep.c | 553 ++++++++++++++++++++++++++++++++++
>>>>>>  3 files changed, 563 insertions(+)
>>>>>>  create mode 100644 drivers/pci/cadence/pcie-cadence-ep.c
>> [...]
>>>>>> +static int cdns_pcie_ep_write_header(struct pci_epc *epc,
>>>>>> +				     struct pci_epf_header *hdr)
>>>>>> +{
>>>>>> +	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
>>>>>> +	struct cdns_pcie *pcie = &ep->pcie;
>>>>>> +	u8 fn = 0;
>>>>>> +
>>>>>> +	if (fn == 0) {
>>>>>
>>>>> I think there is some code to retrieve fn missing here.
>>>>
>>>> hmm.. the endpoint core has to send the function number which right now it's
>>>> not doing though it has the function number info in pci_epf.
>>>
>>> Would it be OK if I add a new patch in the next series adding a
>>> 'struct pcie_epf *epf' as a 2nd argument to all handlers in the
>>> 'struct pcie_epc_ops'? This way I could have access to epf->func_no as needed.
> 
> I prefer we just pass the func_no as the second argument. Do you see a problem
> with that?
>>>
>>
>> Except for pci_epc_start() and pci_epc_stop(), both only called from
>> pci_epc_start_store(), I don't have trouble getting the epf value to be passed
>> as a 2nd argument to all other handlers in 'struct pcie_epc_ops'.
> 
> pci_epc_start()/pci_epc_stop() is used to start/stop the end point controller
> as a whole and shouldn't need epf.
>>
>> Now my next question is: is it better to keep the 'struct pci_epc *epc' as
>> the 1st argument of all those handlers or do you prefer me to remove it as
>> the value can always be retrieved from epf->epc, since now we provide epf as
>> a new argument ?
> 
> Do we really need to pass epf when func_no is all we need?
>

No, func_no alone is currently all I need so it's perfectly fine for me.
I've just wanted to ask before to be sure I implement the expected solution
before submitting :)

Thanks!

Cyrille

 
> Thanks
> Kishon
> 


-- 
Cyrille Pitchen, Free Electrons
Embedded Linux and Kernel engineering
http://free-electrons.com

^ permalink raw reply	[flat|nested] 39+ messages in thread

end of thread, other threads:[~2017-12-15 11:50 UTC | newest]

Thread overview: 39+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-11-23 15:01 [PATCH 0/5] PCI: Add support to the Cadence PCIe controller Cyrille Pitchen
2017-11-23 15:01 ` [PATCH 1/5] PCI: Add vendor ID for Cadence Cyrille Pitchen
2017-12-06 21:27   ` Bjorn Helgaas
2017-11-23 15:01 ` [PATCH 2/5] dt-bindings: PCI: cadence: Add DT bindings for Cadence PCIe host controller Cyrille Pitchen
2017-11-26 19:32   ` Rob Herring
2017-11-23 15:01 ` [PATCH 3/5] PCI: cadence: Add host driver for Cadence PCIe controller Cyrille Pitchen
2017-11-28 20:41   ` Bjorn Helgaas
2017-11-28 20:46     ` Bjorn Helgaas
2017-11-29  8:19     ` Thomas Petazzoni
2017-11-29 15:55       ` Bjorn Helgaas
2017-11-29 14:14     ` Lorenzo Pieralisi
2017-12-01 10:37     ` Cyrille Pitchen
2017-12-01 16:20       ` Lorenzo Pieralisi
2017-11-29 17:34   ` Lorenzo Pieralisi
2017-12-03 20:44     ` Cyrille Pitchen
2017-12-04 18:20       ` Lorenzo Pieralisi
2017-12-04 18:49         ` Ard Biesheuvel
2017-12-06 11:32           ` Lorenzo Pieralisi
2017-12-13 16:42             ` Cyrille Pitchen
2017-11-29 18:25   ` Lorenzo Pieralisi
2017-11-30 10:06     ` Lorenzo Pieralisi
2017-11-23 15:01 ` [PATCH 4/5] dt-bindings: PCI: cadence: Add DT bindings for Cadence PCIe endpoint controller Cyrille Pitchen
2017-11-26 19:33   ` Rob Herring
2017-11-23 15:01 ` [PATCH 5/5] PCI: cadence: add EndPoint Controller driver for Cadence PCIe controller Cyrille Pitchen
2017-12-01 12:20   ` Lorenzo Pieralisi
2017-12-04 14:56     ` Cyrille Pitchen
2017-12-05  9:19     ` Kishon Vijay Abraham I
2017-12-07 10:05       ` Philippe Ombredanne
2017-12-13 16:03         ` Cyrille Pitchen
2017-12-13 16:50       ` Cyrille Pitchen
2017-12-14 17:03         ` Cyrille Pitchen
2017-12-15  5:49           ` Kishon Vijay Abraham I
2017-12-15 11:49             ` Cyrille Pitchen
2017-11-28 15:50 ` [PATCH 0/5] PCI: Add support to the " Lorenzo Pieralisi
2017-11-30  7:13   ` Kishon Vijay Abraham I
2017-11-30 18:18     ` Lorenzo Pieralisi
2017-11-30 18:45       ` Cyrille Pitchen
2017-11-30 20:05         ` Cyrille Pitchen
2017-11-30 23:05           ` Bjorn Helgaas

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).