* [PATCH v1 0/9] Refactoring Microchip PolarFire PCIe driver
From: Minda Chen @ 2023-07-19 10:20 UTC
  To: Daire McNamara, Conor Dooley, Rob Herring, Krzysztof Kozlowski,
	Bjorn Helgaas, Lorenzo Pieralisi, Krzysztof Wilczyński,
	Emil Renner Berthing
  Cc: devicetree, linux-kernel, linux-riscv, linux-pci, Paul Walmsley,
	Palmer Dabbelt, Albert Ou, Philipp Zabel, Mason Huo, Leyfoon Tan,
	Kevin Xie, Minda Chen

The final purpose of this patchset is to add a PCIe driver for the
StarFive JH7110 SoC. The JH7110 uses the PLDA XpressRICH PCIe IP.
Microchip PolarFire uses the same IP and has already merged its
driver, in which the PLDA controller code is mixed with the
Microchip platform code.

To make the PLDA controller code reusable, this series refactors the
Microchip code and moves the PLDA common code into shared PLDA files.
The DesignWare and Cadence drivers are good examples of this kind of
refactoring.

So the first step is to extract the PLDA common code from the
Microchip driver and refactor the Microchip code (patches 1 - 4).
Then add the PLDA platform code (patches 5 and 6).
Finally, add the StarFive code (patches 7 - 9).

This patchset is based on v6.5-rc1.

patch1 adds the PLDA XpressRICH PCIe host common properties
       dt-binding doc; most properties are extracted from
       microchip,pcie-host.yaml.
patch2 adds a reference to plda,xpressrich-pcie-common.yaml (the
       patch1 file) and removes the PLDA common properties.
patch3 extracts the PLDA common code from the Microchip PolarFire
       PCIe code. The change list is in the commit message.
patch4 moves the Microchip driver to the PLDA directory and removes
       the PLDA common code.
patch5 adds the PLDA XpressRICH platform driver dt-binding doc.
patch6 adds the PLDA XpressRICH platform driver.
patch7 adds the StarFive JH7110 PCIe dt-binding doc.
patch8 adds the StarFive JH7110 SoC PCIe platform code.
patch9 adds the StarFive JH7110 device tree configuration.

I have noticed that Daire has changed Microchip's code:
https://patchwork.kernel.org/project/linux-pci/cover/20230630154859.2049521-1-daire.mcnamara@microchip.com/
I have adapted patch3 and patch4 to their commits and the StarFive
PCIe driver still works. But their code is still under review and may
change, so this series is not based on those changes yet. I will
rebase patch3 and patch4 on their commits as soon as they are
accepted.

The old, non-refactored version of this patchset, listed below, has been dropped:
https://patchwork.kernel.org/project/linux-pci/cover/20230406111142.74410-1-minda.chen@starfivetech.com/

Minda Chen (9):
  dt-bindings: PCI: Add PLDA XpressRICH PCIe host common properties
  dt-bindings: PCI: microchip: Remove the PLDA common properties
  PCI: PLDA: Get PLDA common codes from Microchip PolarFire host
  PCI: microchip: Move PCIe driver to PLDA directory
  dt-bindings: PLDA: Add PLDA XpressRICH PCIe host controller
  PCI: PLDA: Add host controller platform driver
  dt-bindings: PCI: Add StarFive JH7110 PCIe controller
  PCI: PLDA: starfive: Add JH7110 PCIe controller
  riscv: dts: starfive: add PCIe dts configuration for JH7110

 .../bindings/pci/microchip,pcie-host.yaml     |  45 +-
 .../pci/plda,xpressrich-pcie-common.yaml      |  72 ++
 .../pci/plda,xpressrich-pcie-host.yaml        |  66 ++
 .../bindings/pci/starfive,jh7110-pcie.yaml    | 138 ++++
 MAINTAINERS                                   |  19 +-
 .../jh7110-starfive-visionfive-2.dtsi         |  44 ++
 arch/riscv/boot/dts/starfive/jh7110.dtsi      |  88 +++
 drivers/pci/controller/Kconfig                |   9 +-
 drivers/pci/controller/Makefile               |   2 +-
 drivers/pci/controller/plda/Kconfig           |  35 +
 drivers/pci/controller/plda/Makefile          |   5 +
 .../{ => plda}/pcie-microchip-host.c          | 594 ++--------------
 drivers/pci/controller/plda/pcie-plda-host.c  | 665 ++++++++++++++++++
 drivers/pci/controller/plda/pcie-plda-plat.c  |  64 ++
 drivers/pci/controller/plda/pcie-plda.h       | 230 ++++++
 drivers/pci/controller/plda/pcie-starfive.c   | 415 +++++++++++
 16 files changed, 1885 insertions(+), 606 deletions(-)
 create mode 100644 Documentation/devicetree/bindings/pci/plda,xpressrich-pcie-common.yaml
 create mode 100644 Documentation/devicetree/bindings/pci/plda,xpressrich-pcie-host.yaml
 create mode 100644 Documentation/devicetree/bindings/pci/starfive,jh7110-pcie.yaml
 create mode 100644 drivers/pci/controller/plda/Kconfig
 create mode 100644 drivers/pci/controller/plda/Makefile
 rename drivers/pci/controller/{ => plda}/pcie-microchip-host.c (50%)
 create mode 100644 drivers/pci/controller/plda/pcie-plda-host.c
 create mode 100644 drivers/pci/controller/plda/pcie-plda-plat.c
 create mode 100644 drivers/pci/controller/plda/pcie-plda.h
 create mode 100644 drivers/pci/controller/plda/pcie-starfive.c


base-commit: 06c2afb862f9da8dc5efa4b6076a0e48c3fbaaa5
-- 
2.17.1


* [PATCH v1 1/9] dt-bindings: PCI: Add PLDA XpressRICH PCIe host common properties
From: Minda Chen @ 2023-07-19 10:20 UTC
  To: Daire McNamara, Conor Dooley, Rob Herring, Krzysztof Kozlowski,
	Bjorn Helgaas, Lorenzo Pieralisi, Krzysztof Wilczyński,
	Emil Renner Berthing
  Cc: devicetree, linux-kernel, linux-riscv, linux-pci, Paul Walmsley,
	Palmer Dabbelt, Albert Ou, Philipp Zabel, Mason Huo, Leyfoon Tan,
	Kevin Xie, Minda Chen

Add a dt-binding doc for the PLDA XpressRICH PCIe host common
properties. The Microchip PolarFire PCIe host uses the PLDA IP, so
the properties are extracted from the Microchip PolarFire PCIe host
binding.

Signed-off-by: Minda Chen <minda.chen@starfivetech.com>
Reviewed-by: Hal Feng <hal.feng@starfivetech.com>
---
 .../pci/plda,xpressrich-pcie-common.yaml      | 72 +++++++++++++++++++
 1 file changed, 72 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/pci/plda,xpressrich-pcie-common.yaml

diff --git a/Documentation/devicetree/bindings/pci/plda,xpressrich-pcie-common.yaml b/Documentation/devicetree/bindings/pci/plda,xpressrich-pcie-common.yaml
new file mode 100644
index 000000000000..3627a846c5d1
--- /dev/null
+++ b/Documentation/devicetree/bindings/pci/plda,xpressrich-pcie-common.yaml
@@ -0,0 +1,72 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/pci/plda,xpressrich-pcie-common.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: PLDA XpressRICH PCIe host common properties
+
+maintainers:
+  - Daire McNamara <daire.mcnamara@microchip.com>
+  - Minda Chen <minda.chen@starfivetech.com>
+
+description:
+  Generic PLDA XpressRICH PCIe host common properties.
+
+select: false
+
+properties:
+  reg:
+    description:
+      At least host IP register set and configuration space are
+      required for normal controller work.
+    maxItems: 2
+
+  reg-names:
+    oneOf:
+      - items:
+          - const: cfg
+          - const: apb
+      - items:
+          - const: host
+          - const: cfg
+
+  interrupts:
+    minItems: 1
+    items:
+      - description: PCIe host controller
+      - description: builtin MSI controller
+
+  interrupt-names:
+    minItems: 1
+    items:
+      - const: pcie
+      - const: msi
+
+  msi-controller:
+    description: Identifies the node as an MSI controller.
+
+  msi-parent:
+    description: MSI controller the device is capable of using.
+
+  interrupt-controller:
+    type: object
+    properties:
+      '#address-cells':
+        const: 0
+
+      '#interrupt-cells':
+        const: 1
+
+      interrupt-controller: true
+
+    required:
+      - '#address-cells'
+      - '#interrupt-cells'
+      - interrupt-controller
+
+    additionalProperties: false
+
+additionalProperties: true
+
+...
-- 
2.17.1


* [PATCH v1 2/9] dt-bindings: PCI: microchip: Remove the PLDA common properties
From: Minda Chen @ 2023-07-19 10:20 UTC
  To: Daire McNamara, Conor Dooley, Rob Herring, Krzysztof Kozlowski,
	Bjorn Helgaas, Lorenzo Pieralisi, Krzysztof Wilczyński,
	Emil Renner Berthing
  Cc: devicetree, linux-kernel, linux-riscv, linux-pci, Paul Walmsley,
	Palmer Dabbelt, Albert Ou, Philipp Zabel, Mason Huo, Leyfoon Tan,
	Kevin Xie, Minda Chen

Add a reference to plda,xpressrich-pcie-common.yaml and remove the
now-duplicated PLDA XpressRICH PCIe host common properties.

Signed-off-by: Minda Chen <minda.chen@starfivetech.com>
---
 .../bindings/pci/microchip,pcie-host.yaml     | 45 +------------------
 1 file changed, 1 insertion(+), 44 deletions(-)

diff --git a/Documentation/devicetree/bindings/pci/microchip,pcie-host.yaml b/Documentation/devicetree/bindings/pci/microchip,pcie-host.yaml
index f7a3c2636355..cf2a14e9b91b 100644
--- a/Documentation/devicetree/bindings/pci/microchip,pcie-host.yaml
+++ b/Documentation/devicetree/bindings/pci/microchip,pcie-host.yaml
@@ -11,20 +11,13 @@ maintainers:
 
 allOf:
   - $ref: /schemas/pci/pci-bus.yaml#
+  - $ref: plda,xpressrich-pcie-common.yaml#
   - $ref: /schemas/interrupt-controller/msi-controller.yaml#
 
 properties:
   compatible:
     const: microchip,pcie-host-1.0 # PolarFire
 
-  reg:
-    maxItems: 2
-
-  reg-names:
-    items:
-      - const: cfg
-      - const: apb
-
   clocks:
     description:
       Fabric Interface Controllers, FICs, are the interface between the FPGA
@@ -52,18 +45,6 @@ properties:
     items:
       pattern: '^fic[0-3]$'
 
-  interrupts:
-    minItems: 1
-    items:
-      - description: PCIe host controller
-      - description: builtin MSI controller
-
-  interrupt-names:
-    minItems: 1
-    items:
-      - const: pcie
-      - const: msi
-
   ranges:
     maxItems: 1
 
@@ -71,30 +52,6 @@ properties:
     minItems: 1
     maxItems: 6
 
-  msi-controller:
-    description: Identifies the node as an MSI controller.
-
-  msi-parent:
-    description: MSI controller the device is capable of using.
-
-  interrupt-controller:
-    type: object
-    properties:
-      '#address-cells':
-        const: 0
-
-      '#interrupt-cells':
-        const: 1
-
-      interrupt-controller: true
-
-    required:
-      - '#address-cells'
-      - '#interrupt-cells'
-      - interrupt-controller
-
-    additionalProperties: false
-
 required:
   - reg
   - reg-names
-- 
2.17.1


* [PATCH v1 3/9] PCI: PLDA: Get PLDA common codes from Microchip PolarFire host
From: Minda Chen @ 2023-07-19 10:20 UTC
  To: Daire McNamara, Conor Dooley, Rob Herring, Krzysztof Kozlowski,
	Bjorn Helgaas, Lorenzo Pieralisi, Krzysztof Wilczyński,
	Emil Renner Berthing
  Cc: devicetree, linux-kernel, linux-riscv, linux-pci, Paul Walmsley,
	Palmer Dabbelt, Albert Ou, Philipp Zabel, Mason Huo, Leyfoon Tan,
	Kevin Xie, Minda Chen

Add the PLDA PCIe controller driver code, extracted from the
Microchip PolarFire PCIe host driver code.

The change includes:
- Copy the IP register macros.
- Add the data structures for a PCIe host instance:
  mc_pcie --> plda_pcie (takes most of the data members)
  mc_msi  --> plda_msi
  Add the plda_pcie_ops and plda_evt data structures.
- Function rename list:
  mc_pcie_enable_msi       --> plda_pcie_enable_msi
  mc_pcie_setup_window     --> plda_pcie_setup_window
  mc_pcie_setup_windows    --> plda_pcie_setup_iomems
  mc_pcie_init_irq_domains --> plda_pcie_init_irq_domains
  mc_allocate_msi_domains  --> plda_allocate_msi_domains
  mc_init_interrupts       --> plda_pcie_init_irq
  MSI interrupt functions and irq domain
  (primary function is mc_handle_msi):
  mc_handle_msi            --> plda_handle_msi
  INTx interrupt functions and irq domain
  (primary function is mc_handle_intx):
  mc_handle_intx           --> plda_handle_intx
  Event interrupts:
  mc_handle_event          --> plda_handle_event
- Because PolarFire implements non-PLDA local interrupt events, most
  of the event interrupt processing code cannot be reused. PLDA
  implements new code and irq domain ops modeled on PolarFire.
  New event functions:
    plda_event_handler
    plda_pcie_event_map
    plda_ack_event_irq
    plda_mask_event_irq
    plda_unmask_event_irq
    plda_hwirq_to_mask
- plda_handle_event adds new irq number to event number mapping code
  for the PLDA local events, except the DMA engine interrupt events.
  The DMA engine interrupts are vendor specific, so those events are
  not added here; PolarFire PCIe instead uses the get_events function
  pointer to report its own event numbers, as sketched below.
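
To illustrate how a platform driver is expected to hook into this (not
part of the patch; a minimal sketch using only the structures added
below, with hypothetical vendor_* names):

	/*
	 * Hypothetical vendor glue, for illustration only: report the
	 * pending PLDA event numbers and describe which events carry
	 * INTx and MSI, then hand both to plda_pcie_init_irq().
	 */
	static u32 vendor_get_events(struct plda_pcie *pcie)
	{
		u32 events = 0;
		u32 val = readl_relaxed(pcie->bridge_addr + ISTATUS_LOCAL);

		/* translate vendor status bits into EVENT_* numbers */
		if (val & PM_MSI_INT_INTX_MASK)
			events |= BIT(EVENT_PM_MSI_INT_INTX);
		if (val & PM_MSI_INT_MSI_MASK)
			events |= BIT(EVENT_PM_MSI_INT_MSI);

		return events;
	}

	static const struct plda_pcie_ops vendor_ops = {
		.get_events = vendor_get_events,
	};

	static struct plda_evt vendor_evt = {
		.intx_evt = EVENT_PM_MSI_INT_INTX,
		.msi_evt  = EVENT_PM_MSI_INT_MSI,
	};

	/* in the vendor probe path */
	pcie->ops = &vendor_ops;
	ret = plda_pcie_init_irq(pcie, pdev, &vendor_evt);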

Signed-off-by: Minda Chen <minda.chen@starfivetech.com>
---
 MAINTAINERS                                  |   8 +
 drivers/pci/controller/Kconfig               |   1 +
 drivers/pci/controller/Makefile              |   1 +
 drivers/pci/controller/plda/Kconfig          |  11 +
 drivers/pci/controller/plda/Makefile         |   2 +
 drivers/pci/controller/plda/pcie-plda-host.c | 554 +++++++++++++++++++
 drivers/pci/controller/plda/pcie-plda.h      | 173 ++++++
 7 files changed, 750 insertions(+)
 create mode 100644 drivers/pci/controller/plda/Kconfig
 create mode 100644 drivers/pci/controller/plda/Makefile
 create mode 100644 drivers/pci/controller/plda/pcie-plda-host.c
 create mode 100644 drivers/pci/controller/plda/pcie-plda.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 8e4f9d5dca55..3b3c4d8808ae 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -16170,6 +16170,14 @@ S:	Maintained
 F:	Documentation/devicetree/bindings/pci/cdns,*
 F:	drivers/pci/controller/cadence/
 
+PCI DRIVER FOR PLDA PCIE IP
+M:	Daire McNamara <daire.mcnamara@microchip.com>
+M:	Minda Chen <minda.chen@starfivetech.com>
+L:	linux-pci@vger.kernel.org
+S:	Maintained
+F:	Documentation/devicetree/bindings/pci/plda,*
+F:	drivers/pci/controller/plda/*plda*
+
 PCI DRIVER FOR FREESCALE LAYERSCAPE
 M:	Minghuan Lian <minghuan.Lian@nxp.com>
 M:	Mingkai Hu <mingkai.hu@nxp.com>
diff --git a/drivers/pci/controller/Kconfig b/drivers/pci/controller/Kconfig
index 8d49bad7f847..a106dabcb8dc 100644
--- a/drivers/pci/controller/Kconfig
+++ b/drivers/pci/controller/Kconfig
@@ -346,4 +346,5 @@ config PCIE_XILINX_CPM
 source "drivers/pci/controller/cadence/Kconfig"
 source "drivers/pci/controller/dwc/Kconfig"
 source "drivers/pci/controller/mobiveil/Kconfig"
+source "drivers/pci/controller/plda/Kconfig"
 endmenu
diff --git a/drivers/pci/controller/Makefile b/drivers/pci/controller/Makefile
index 37c8663de7fe..aa0a691ebcbc 100644
--- a/drivers/pci/controller/Makefile
+++ b/drivers/pci/controller/Makefile
@@ -43,6 +43,7 @@ obj-$(CONFIG_PCIE_MT7621) += pcie-mt7621.o
 # pcie-hisi.o quirks are needed even without CONFIG_PCIE_DW
 obj-y				+= dwc/
 obj-y				+= mobiveil/
+obj-y				+= plda/
 
 
 # The following drivers are for devices that use the generic ACPI
diff --git a/drivers/pci/controller/plda/Kconfig b/drivers/pci/controller/plda/Kconfig
new file mode 100644
index 000000000000..37e17326671b
--- /dev/null
+++ b/drivers/pci/controller/plda/Kconfig
@@ -0,0 +1,11 @@
+# SPDX-License-Identifier: GPL-2.0
+
+menu "PLDA PCIe controllers support"
+	depends on PCI
+
+config PCIE_PLDA_HOST
+	bool "PLDA PCIe host controller"
+	depends on OF && PCI_MSI
+	select IRQ_DOMAIN
+
+endmenu
diff --git a/drivers/pci/controller/plda/Makefile b/drivers/pci/controller/plda/Makefile
new file mode 100644
index 000000000000..79dbae655c2b
--- /dev/null
+++ b/drivers/pci/controller/plda/Makefile
@@ -0,0 +1,2 @@
+# SPDX-License-Identifier: GPL-2.0
+obj-$(CONFIG_PCIE_PLDA_HOST) += pcie-plda-host.o
diff --git a/drivers/pci/controller/plda/pcie-plda-host.c b/drivers/pci/controller/plda/pcie-plda-host.c
new file mode 100644
index 000000000000..ca720430721c
--- /dev/null
+++ b/drivers/pci/controller/plda/pcie-plda-host.c
@@ -0,0 +1,554 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * PLDA PCIe XpressRich host controller driver
+ *
+ * Copyright (C) 2023 Microchip Co. Ltd
+ *		      StarFive Co. Ltd.
+ *
+ * Author: Daire McNamara <daire.mcnamara@microchip.com>
+ * Author: Minda Chen <minda.chen@starfivetech.com>
+ */
+
+#include <linux/irqchip/chained_irq.h>
+#include <linux/irqdomain.h>
+#include <linux/msi.h>
+#include <linux/of_address.h>
+#include <linux/of_pci.h>
+#include <linux/pci_regs.h>
+#include <linux/pci-ecam.h>
+#include <linux/platform_device.h>
+
+#include "pcie-plda.h"
+
+void plda_pcie_enable_msi(struct plda_pcie *port)
+{
+	struct plda_msi *msi = &port->msi;
+	void __iomem *base = port->bridge_addr;
+	u32 cap_offset = MSI_CAP_CTRL_OFFSET;
+	u16 msg_ctrl = readw_relaxed(base + cap_offset + PCI_MSI_FLAGS);
+
+	msg_ctrl |= PCI_MSI_FLAGS_ENABLE;
+	msg_ctrl &= ~PCI_MSI_FLAGS_QMASK;
+	msg_ctrl |= MSI_MAX_Q_AVAIL;
+	msg_ctrl &= ~PCI_MSI_FLAGS_QSIZE;
+	msg_ctrl |= MSI_Q_SIZE;
+	msg_ctrl |= PCI_MSI_FLAGS_64BIT;
+
+	writew_relaxed(msg_ctrl, base + cap_offset + PCI_MSI_FLAGS);
+
+	writel_relaxed(lower_32_bits(msi->vector_phy),
+		       base + cap_offset + PCI_MSI_ADDRESS_LO);
+	writel_relaxed(upper_32_bits(msi->vector_phy),
+		       base + cap_offset + PCI_MSI_ADDRESS_HI);
+}
+EXPORT_SYMBOL_GPL(plda_pcie_enable_msi);
+
+static void plda_handle_msi(struct irq_desc *desc)
+{
+	struct plda_pcie *port = irq_desc_get_handler_data(desc);
+	struct irq_chip *chip = irq_desc_get_chip(desc);
+	struct device *dev = port->dev;
+	struct plda_msi *msi = &port->msi;
+	void __iomem *bridge_base_addr = port->bridge_addr;
+	unsigned long status;
+	u32 bit;
+	int ret;
+
+	chained_irq_enter(chip, desc);
+	status = readl_relaxed(bridge_base_addr + ISTATUS_LOCAL);
+	if (status & PM_MSI_INT_MSI_MASK) {
+		writel_relaxed(BIT(PM_MSI_INT_MSI_SHIFT), bridge_base_addr + ISTATUS_LOCAL);
+		status = readl_relaxed(bridge_base_addr + ISTATUS_MSI);
+		for_each_set_bit(bit, &status, msi->num_vectors) {
+			ret = generic_handle_domain_irq(msi->dev_domain, bit);
+			if (ret)
+				dev_err_ratelimited(dev, "bad MSI IRQ %d\n",
+						    bit);
+		}
+	}
+	chained_irq_exit(chip, desc);
+}
+
+static void plda_msi_bottom_irq_ack(struct irq_data *data)
+{
+	struct plda_pcie *port = irq_data_get_irq_chip_data(data);
+	void __iomem *bridge_base_addr = port->bridge_addr;
+	u32 bitpos = data->hwirq;
+
+	writel_relaxed(BIT(bitpos), bridge_base_addr + ISTATUS_MSI);
+}
+
+static void plda_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
+{
+	struct plda_pcie *port = irq_data_get_irq_chip_data(data);
+	phys_addr_t addr = port->msi.vector_phy;
+
+	msg->address_lo = lower_32_bits(addr);
+	msg->address_hi = upper_32_bits(addr);
+	msg->data = data->hwirq;
+
+	dev_dbg(port->dev, "msi#%x address_hi %#x address_lo %#x\n",
+		(int)data->hwirq, msg->address_hi, msg->address_lo);
+}
+
+static int plda_msi_set_affinity(struct irq_data *irq_data,
+				 const struct cpumask *mask, bool force)
+{
+	return -EINVAL;
+}
+
+static struct irq_chip plda_msi_bottom_irq_chip = {
+	.name = "PLDA MSI",
+	.irq_ack = plda_msi_bottom_irq_ack,
+	.irq_compose_msi_msg = plda_compose_msi_msg,
+	.irq_set_affinity = plda_msi_set_affinity,
+};
+
+static int plda_irq_msi_domain_alloc(struct irq_domain *domain, unsigned int virq,
+				     unsigned int nr_irqs, void *args)
+{
+	struct plda_pcie *port = domain->host_data;
+	struct plda_msi *msi = &port->msi;
+	void __iomem *bridge_base_addr = port->bridge_addr;
+	unsigned long bit;
+	u32 val;
+
+	mutex_lock(&msi->lock);
+	bit = find_first_zero_bit(msi->used, msi->num_vectors);
+	if (bit >= msi->num_vectors) {
+		mutex_unlock(&msi->lock);
+		return -ENOSPC;
+	}
+
+	set_bit(bit, msi->used);
+
+	irq_domain_set_info(domain, virq, bit, &plda_msi_bottom_irq_chip,
+			    domain->host_data, handle_edge_irq, NULL, NULL);
+
+	/* Enable MSI interrupts */
+	val = readl_relaxed(bridge_base_addr + IMASK_LOCAL);
+	val |= PM_MSI_INT_MSI_MASK;
+	writel_relaxed(val, bridge_base_addr + IMASK_LOCAL);
+
+	mutex_unlock(&msi->lock);
+
+	return 0;
+}
+
+static void plda_irq_msi_domain_free(struct irq_domain *domain, unsigned int virq,
+				     unsigned int nr_irqs)
+{
+	struct irq_data *d = irq_domain_get_irq_data(domain, virq);
+	struct plda_pcie *port = irq_data_get_irq_chip_data(d);
+	struct plda_msi *msi = &port->msi;
+
+	mutex_lock(&msi->lock);
+
+	if (test_bit(d->hwirq, msi->used))
+		__clear_bit(d->hwirq, msi->used);
+	else
+		dev_err(port->dev, "trying to free unused MSI%lu\n", d->hwirq);
+
+	mutex_unlock(&msi->lock);
+}
+
+static const struct irq_domain_ops msi_domain_ops = {
+	.alloc	= plda_irq_msi_domain_alloc,
+	.free	= plda_irq_msi_domain_free,
+};
+
+static struct irq_chip plda_msi_irq_chip = {
+	.name = "PLDA PCIe MSI",
+	.irq_ack = irq_chip_ack_parent,
+	.irq_mask = pci_msi_mask_irq,
+	.irq_unmask = pci_msi_unmask_irq,
+};
+
+static struct msi_domain_info plda_msi_domain_info = {
+	.flags = (MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
+		  MSI_FLAG_PCI_MSIX),
+	.chip = &plda_msi_irq_chip,
+};
+
+static int plda_allocate_msi_domains(struct plda_pcie *port)
+{
+	struct device *dev = port->dev;
+	struct fwnode_handle *fwnode = of_node_to_fwnode(dev->of_node);
+	struct plda_msi *msi = &port->msi;
+
+	raw_spin_lock_init(&port->lock);
+	mutex_init(&port->msi.lock);
+
+	msi->dev_domain = irq_domain_add_linear(NULL, msi->num_vectors,
+						&msi_domain_ops, port);
+	if (!msi->dev_domain) {
+		dev_err(dev, "failed to create IRQ domain\n");
+		return -ENOMEM;
+	}
+
+	msi->msi_domain = pci_msi_create_irq_domain(fwnode, &plda_msi_domain_info,
+						    msi->dev_domain);
+	if (!msi->msi_domain) {
+		dev_err(dev, "failed to create MSI domain\n");
+		irq_domain_remove(msi->dev_domain);
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
+static void plda_handle_intx(struct irq_desc *desc)
+{
+	struct plda_pcie *port = irq_desc_get_handler_data(desc);
+	struct irq_chip *chip = irq_desc_get_chip(desc);
+	struct device *dev = port->dev;
+	unsigned long status;
+	u32 bit;
+	int ret;
+
+	chained_irq_enter(chip, desc);
+	status = readl_relaxed(port->bridge_addr + ISTATUS_LOCAL);
+	if (status & PM_MSI_INT_INTX_MASK) {
+		status &= PM_MSI_INT_INTX_MASK;
+		status >>= PM_MSI_INT_INTX_SHIFT;
+		for_each_set_bit(bit, &status, PCI_NUM_INTX) {
+			ret = generic_handle_domain_irq(port->intx_domain, bit);
+			if (ret)
+				dev_err_ratelimited(dev, "bad INTx IRQ %d %d\n",
+						    bit, ret);
+		}
+	}
+	chained_irq_exit(chip, desc);
+}
+
+static void plda_ack_intx_irq(struct irq_data *data)
+{
+	struct plda_pcie *port = irq_data_get_irq_chip_data(data);
+	void __iomem *bridge_base_addr = port->bridge_addr;
+	u32 mask = BIT(data->hwirq + PM_MSI_INT_INTX_SHIFT);
+
+	writel_relaxed(mask, bridge_base_addr + ISTATUS_LOCAL);
+}
+
+static void plda_mask_intx_irq(struct irq_data *data)
+{
+	struct plda_pcie *port = irq_data_get_irq_chip_data(data);
+	void __iomem *bridge_base_addr = port->bridge_addr;
+	unsigned long flags;
+	u32 mask = BIT(data->hwirq + PM_MSI_INT_INTX_SHIFT);
+	u32 val;
+
+	raw_spin_lock_irqsave(&port->lock, flags);
+	val = readl_relaxed(bridge_base_addr + IMASK_LOCAL);
+	val &= ~mask;
+	writel_relaxed(val, bridge_base_addr + IMASK_LOCAL);
+	raw_spin_unlock_irqrestore(&port->lock, flags);
+}
+
+static void plda_unmask_intx_irq(struct irq_data *data)
+{
+	struct plda_pcie *port = irq_data_get_irq_chip_data(data);
+	void __iomem *bridge_base_addr = port->bridge_addr;
+	unsigned long flags;
+	u32 mask = BIT(data->hwirq + PM_MSI_INT_INTX_SHIFT);
+	u32 val;
+
+	raw_spin_lock_irqsave(&port->lock, flags);
+	val = readl_relaxed(bridge_base_addr + IMASK_LOCAL);
+	val |= mask;
+	writel_relaxed(val, bridge_base_addr + IMASK_LOCAL);
+	raw_spin_unlock_irqrestore(&port->lock, flags);
+}
+
+static struct irq_chip plda_intx_irq_chip = {
+	.name = "PLDA PCIe INTx",
+	.irq_ack = plda_ack_intx_irq,
+	.irq_mask = plda_mask_intx_irq,
+	.irq_unmask = plda_unmask_intx_irq,
+};
+
+static int plda_pcie_intx_map(struct irq_domain *domain, unsigned int irq,
+			      irq_hw_number_t hwirq)
+{
+	irq_set_chip_and_handler(irq, &plda_intx_irq_chip, handle_level_irq);
+	irq_set_chip_data(irq, domain->host_data);
+
+	return 0;
+}
+
+static const struct irq_domain_ops intx_domain_ops = {
+	.map = plda_pcie_intx_map,
+};
+
+static void plda_handle_event(struct irq_desc *desc)
+{
+	struct plda_pcie *port = irq_desc_get_handler_data(desc);
+	struct irq_chip *chip = irq_desc_get_chip(desc);
+	unsigned long events = 0;
+	u32 bit, val, origin;
+
+	chained_irq_enter(chip, desc);
+
+	if (port->ops && port->ops->get_events) {
+		events = port->ops->get_events(port);
+	} else {
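+		/*
+		 * Default mapping, matching the EVENT_* numbering in
+		 * pcie-plda.h: ISTATUS_LOCAL bits 16-23 (the ATR
+		 * events) become events 0-7, any INTx bit (24-27)
+		 * becomes EVENT_PM_MSI_INT_INTX, and bits 28-31
+		 * become events 9-12 (MSI, AER, EVENTS, SYS_ERR).
+		 */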
+		val = readl_relaxed(port->bridge_addr + ISTATUS_LOCAL);
+		origin = val;
+		val = val >> A_ATR_EVT_POST_ERR_SHIFT;
+		events |= val & 0xff;
+		if (origin & PM_MSI_INT_INTX_MASK)
+			events |= BIT(EVENT_PM_MSI_INT_INTX);
+		val = (origin >> PM_MSI_INT_MSI_SHIFT) & 0xf;
+		events |= val << EVENT_PM_MSI_INT_MSI;
+	}
+
+	for_each_set_bit(bit, &events, port->num_events)
+		generic_handle_domain_irq(port->event_domain, bit);
+
+	chained_irq_exit(chip, desc);
+}
+
+static u32 plda_hwirq_to_mask(int hwirq)
+{
+	u32 mask;
+
+	if (hwirq < EVENT_PM_MSI_INT_INTX)
+		mask = BIT(hwirq + A_ATR_EVT_POST_ERR_SHIFT);
+	else if (hwirq == EVENT_PM_MSI_INT_INTX)
+		mask = PM_MSI_INT_INTX_MASK;
+	else
+		mask = BIT(hwirq + PM_MSI_TO_MASK_OFFSET);
+
+	return mask;
+}
+
+static void plda_ack_event_irq(struct irq_data *data)
+{
+	struct plda_pcie *port = irq_data_get_irq_chip_data(data);
+
+	writel_relaxed(plda_hwirq_to_mask(data->hwirq),
+		       port->bridge_addr + ISTATUS_LOCAL);
+}
+
+static void plda_mask_event_irq(struct irq_data *data)
+{
+	struct plda_pcie *port = irq_data_get_irq_chip_data(data);
+	u32 mask, val;
+
+	mask = plda_hwirq_to_mask(data->hwirq);
+
+	raw_spin_lock(&port->lock);
+	val = readl_relaxed(port->bridge_addr + IMASK_LOCAL);
+	val &= ~mask;
+	writel_relaxed(val, port->bridge_addr + IMASK_LOCAL);
+	raw_spin_unlock(&port->lock);
+}
+
+static void plda_unmask_event_irq(struct irq_data *data)
+{
+	struct plda_pcie *port = irq_data_get_irq_chip_data(data);
+	u32 mask, val;
+
+	mask = plda_hwirq_to_mask(data->hwirq);
+
+	raw_spin_lock(&port->lock);
+	val = readl_relaxed(port->bridge_addr + IMASK_LOCAL);
+	val |= mask;
+	writel_relaxed(val, port->bridge_addr + IMASK_LOCAL);
+	raw_spin_unlock(&port->lock);
+}
+
+static struct irq_chip plda_event_irq_chip = {
+	.name = "PLDA PCIe EVENT",
+	.irq_ack = plda_ack_event_irq,
+	.irq_mask = plda_mask_event_irq,
+	.irq_unmask = plda_unmask_event_irq,
+};
+
+static int plda_pcie_event_map(struct irq_domain *domain, unsigned int irq,
+			       irq_hw_number_t hwirq)
+{
+	irq_set_chip_and_handler(irq, &plda_event_irq_chip, handle_level_irq);
+	irq_set_chip_data(irq, domain->host_data);
+
+	return 0;
+}
+
+static const struct irq_domain_ops event_domain_ops = {
+	.map = plda_pcie_event_map,
+};
+
+static irqreturn_t plda_event_handler(int irq, void *dev_id)
+{
+	return IRQ_HANDLED;
+}
+
+static int plda_pcie_init_irq_domains(struct plda_pcie *port,
+				      const struct irq_domain_ops *domain_ops)
+{
+	struct device *dev = port->dev;
+	struct device_node *node = dev->of_node;
+	struct device_node *pcie_intc_node;
+
+	/* Setup INTx */
+	pcie_intc_node = of_get_next_child(node, NULL);
+	if (!pcie_intc_node) {
+		dev_err(dev, "failed to find PCIe Intc node\n");
+		return -EINVAL;
+	}
+
+	port->event_domain = irq_domain_add_linear(pcie_intc_node,
+						   port->num_events,
+						   domain_ops, port);
+	if (!port->event_domain) {
+		dev_err(dev, "failed to get event domain\n");
+		of_node_put(pcie_intc_node);
+		return -ENOMEM;
+	}
+
+	irq_domain_update_bus_token(port->event_domain, DOMAIN_BUS_NEXUS);
+
+	port->intx_domain =
+		irq_domain_add_linear(node, PCI_NUM_INTX, &intx_domain_ops, port);
+	if (!port->intx_domain) {
+		dev_err(dev, "Failed to get a INTx IRQ domain\n");
+		of_node_put(pcie_intc_node);
+		return -ENOMEM;
+	}
+
+	irq_domain_update_bus_token(port->intx_domain, DOMAIN_BUS_WIRED);
+
+	of_node_put(pcie_intc_node);
+	raw_spin_lock_init(&port->lock);
+
+	return plda_allocate_msi_domains(port);
+}
+
+int plda_pcie_init_irq(struct plda_pcie *port, struct platform_device *pdev,
+		       struct plda_evt *evt)
+{
+	struct device *dev = port->dev;
+	const struct irq_domain_ops *ops;
+	int i;
+	int ret, err, event_irq, intx_irq, msi_irq;
+
+	if (!evt->domain_ops)
+		ops = &event_domain_ops;
+	else
+		ops = evt->domain_ops;
+
+	ret = plda_pcie_init_irq_domains(port, ops);
+	if (ret)
+		return ret;
+
+	port->irq = platform_get_irq(pdev, 0);
+	if (port->irq < 0) {
+		dev_err(dev, "Failed to get IRQ: %d\n", port->irq);
+		return port->irq;
+	}
+
+	for (i = 0; i < port->num_events; i++) {
+		event_irq = irq_create_mapping(port->event_domain, i);
+		if (!event_irq) {
+			dev_err(dev, "failed to map hwirq %d\n", i);
+			return -ENXIO;
+		}
+
+		if (evt->request_evt_irq)
+			err = evt->request_evt_irq(port, event_irq, i);
+		else
+			err = devm_request_irq(dev, event_irq,
+					       plda_event_handler,
+					       0, NULL, port);
+		if (err) {
+			dev_err(dev, "failed to request IRQ %d\n", event_irq);
+			return err;
+		}
+	}
+
+	intx_irq = irq_create_mapping(port->event_domain,
+				      evt->intx_evt);
+	if (!intx_irq) {
+		dev_err(dev, "failed to map INTx interrupt\n");
+		return -ENXIO;
+	}
+	port->intx_irq = intx_irq;
+
+	/* Plug the INTx chained handler */
+	irq_set_chained_handler_and_data(intx_irq, plda_handle_intx, port);
+
+	msi_irq = irq_create_mapping(port->event_domain,
+				     evt->msi_evt);
+	if (!msi_irq)
+		return -ENXIO;
+
+	port->msi_irq = msi_irq;
+	/* Plug the MSI chained handler */
+	irq_set_chained_handler_and_data(msi_irq, plda_handle_msi, port);
+
+	/* Plug the main event chained handler */
+	irq_set_chained_handler_and_data(port->irq, plda_handle_event, port);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(plda_pcie_init_irq);
+
+void plda_pcie_setup_window(void __iomem *bridge_base_addr, u32 index,
+			    phys_addr_t axi_addr, phys_addr_t pci_addr,
+			    size_t size)
+{
+	u32 atr_sz = ilog2(size) - 1;
+	u32 val;
+
+	if (index == 0)
+		val = PCIE_CONFIG_INTERFACE;
+	else
+		val = PCIE_TX_RX_INTERFACE;
+
+	writel(val, bridge_base_addr + (index * ATR_ENTRY_SIZE) +
+	       ATR0_AXI4_SLV0_TRSL_PARAM);
+
+	val = lower_32_bits(axi_addr) | (atr_sz << ATR_SIZE_SHIFT) |
+			    ATR_IMPL_ENABLE;
+	writel_relaxed(val, bridge_base_addr + (index * ATR_ENTRY_SIZE) +
+	       ATR0_AXI4_SLV0_SRCADDR_PARAM);
+
+	val = upper_32_bits(axi_addr);
+	writel(val, bridge_base_addr + (index * ATR_ENTRY_SIZE) +
+	       ATR0_AXI4_SLV0_SRC_ADDR);
+
+	val = lower_32_bits(pci_addr);
+	writel(val, bridge_base_addr + (index * ATR_ENTRY_SIZE) +
+	       ATR0_AXI4_SLV0_TRSL_ADDR_LSB);
+
+	val = upper_32_bits(pci_addr);
+	writel(val, bridge_base_addr + (index * ATR_ENTRY_SIZE) +
+	       ATR0_AXI4_SLV0_TRSL_ADDR_UDW);
+
+	val = readl(bridge_base_addr + ATR0_PCIE_WIN0_SRCADDR_PARAM);
+	val |= (ATR0_PCIE_ATR_SIZE << ATR0_PCIE_ATR_SIZE_SHIFT);
+	writel(val, bridge_base_addr + ATR0_PCIE_WIN0_SRCADDR_PARAM);
+	writel(0, bridge_base_addr + ATR0_PCIE_WIN0_SRC_ADDR);
+}
+EXPORT_SYMBOL_GPL(plda_pcie_setup_window);
+
+int plda_pcie_setup_iomems(struct plda_pcie *port, struct pci_host_bridge *bridge)
+{
+	void __iomem *bridge_base_addr = port->bridge_addr;
+	struct resource_entry *entry;
+	u64 pci_addr;
+	u32 index = 1;
+
+	resource_list_for_each_entry(entry, &bridge->windows) {
+		if (resource_type(entry->res) == IORESOURCE_MEM) {
+			pci_addr = entry->res->start - entry->offset;
+			plda_pcie_setup_window(bridge_base_addr, index,
+					       entry->res->start, pci_addr,
+					       resource_size(entry->res));
+			index++;
+		}
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(plda_pcie_setup_iomems);
diff --git a/drivers/pci/controller/plda/pcie-plda.h b/drivers/pci/controller/plda/pcie-plda.h
new file mode 100644
index 000000000000..feb3a0d9ace5
--- /dev/null
+++ b/drivers/pci/controller/plda/pcie-plda.h
@@ -0,0 +1,173 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * PLDA PCIe host controller driver
+ */
+
+#ifndef _PCIE_PLDA_H
+#define _PCIE_PLDA_H
+
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/pci-epf.h>
+#include <linux/phy/phy.h>
+
+/* Number of MSI IRQs */
+#define PLDA_NUM_MSI_IRQS		32
+#define NUM_MSI_IRQS_CODED		5
+
+/* PCIe Bridge Phy Regs */
+#define PCIE_PCI_IDS_DW1		0x9c
+
+/* PCIe Config space MSI capability structure */
+#define MSI_CAP_CTRL_OFFSET		0xe0
+#define  MSI_MAX_Q_AVAIL		(NUM_MSI_IRQS_CODED << 1)
+#define  MSI_Q_SIZE			(NUM_MSI_IRQS_CODED << 4)
+
+#define IMASK_LOCAL				0x180
+#define  DMA_END_ENGINE_0_MASK			0x00000000u
+#define  DMA_END_ENGINE_0_SHIFT			0
+#define  DMA_END_ENGINE_1_MASK			0x00000000u
+#define  DMA_END_ENGINE_1_SHIFT			1
+#define  DMA_ERROR_ENGINE_0_MASK		BIT(8)
+#define  DMA_ERROR_ENGINE_0_SHIFT		8
+#define  DMA_ERROR_ENGINE_1_MASK		BIT(9)
+#define  DMA_ERROR_ENGINE_1_SHIFT		9
+#define  DMA_END_MASK				GENMASK(7, 0)
+#define  DMA_ERR_MASK				GENMASK(15, 8)
+#define  DMA_ERR_SHIFT				8
+#define  A_ATR_EVT_POST_ERR_MASK		BIT(16)
+#define  A_ATR_EVT_POST_ERR_SHIFT		16
+#define  A_ATR_EVT_FETCH_ERR_MASK		BIT(17)
+#define  A_ATR_EVT_FETCH_ERR_SHIFT		17
+#define  A_ATR_EVT_DISCARD_ERR_MASK		BIT(18)
+#define  A_ATR_EVT_DISCARD_ERR_SHIFT		18
+#define  A_ATR_EVT_DOORBELL_MASK		BIT(19)
+#define  A_ATR_EVT_DOORBELL_SHIFT		19
+#define  P_ATR_EVT_POST_ERR_MASK		BIT(20)
+#define  P_ATR_EVT_POST_ERR_SHIFT		20
+#define  P_ATR_EVT_FETCH_ERR_MASK		BIT(21)
+#define  P_ATR_EVT_FETCH_ERR_SHIFT		21
+#define  P_ATR_EVT_DISCARD_ERR_MASK		BIT(22)
+#define  P_ATR_EVT_DISCARD_ERR_SHIFT		22
+#define  P_ATR_EVT_DOORBELL_MASK		BIT(23)
+#define  P_ATR_EVT_DOORBELL_SHIFT		23
+#define  PM_MSI_INT_INTA_MASK			BIT(24)
+#define  PM_MSI_INT_INTA_SHIFT			24
+#define  PM_MSI_INT_INTB_MASK			BIT(25)
+#define  PM_MSI_INT_INTB_SHIFT			25
+#define  PM_MSI_INT_INTC_MASK			BIT(26)
+#define  PM_MSI_INT_INTC_SHIFT			26
+#define  PM_MSI_INT_INTD_MASK			BIT(27)
+#define  PM_MSI_INT_INTD_SHIFT			27
+#define  PM_MSI_INT_INTX_MASK			GENMASK(27, 24)
+#define  PM_MSI_INT_INTX_SHIFT			24
+#define  PM_MSI_INT_MSI_MASK			BIT(28)
+#define  PM_MSI_INT_MSI_SHIFT			28
+#define  PM_MSI_INT_AER_EVT_MASK		BIT(29)
+#define  PM_MSI_INT_AER_EVT_SHIFT		29
+#define  PM_MSI_INT_EVENTS_MASK			BIT(30)
+#define  PM_MSI_INT_EVENTS_SHIFT		30
+#define  PM_MSI_INT_SYS_ERR_MASK		BIT(31)
+#define  PM_MSI_INT_SYS_ERR_SHIFT		31
+
+#define ISTATUS_LOCAL				0x184
+#define IMASK_HOST				0x188
+#define ISTATUS_HOST				0x18c
+#define IMSI_ADDR				0x190
+#define ISTATUS_MSI				0x194
+
+/* PCIe Master table init defines */
+#define ATR0_PCIE_WIN0_SRCADDR_PARAM		0x600u
+#define  ATR0_PCIE_ATR_SIZE			0x25
+#define  ATR0_PCIE_ATR_SIZE_SHIFT		1
+#define ATR0_PCIE_WIN0_SRC_ADDR			0x604u
+#define ATR0_PCIE_WIN0_TRSL_ADDR_LSB		0x608u
+#define ATR0_PCIE_WIN0_TRSL_ADDR_UDW		0x60cu
+#define ATR0_PCIE_WIN0_TRSL_PARAM		0x610u
+
+/* PCIe AXI slave table init defines */
+#define ATR0_AXI4_SLV0_SRCADDR_PARAM		0x800u
+#define  ATR_SIZE_SHIFT				1
+#define  ATR_IMPL_ENABLE			1
+#define ATR0_AXI4_SLV0_SRC_ADDR			0x804u
+#define ATR0_AXI4_SLV0_TRSL_ADDR_LSB		0x808u
+#define ATR0_AXI4_SLV0_TRSL_ADDR_UDW		0x80cu
+#define ATR0_AXI4_SLV0_TRSL_PARAM		0x810u
+#define ATR0_AXI4_TABLE_OFFSET			0x20
+#define  PCIE_TX_RX_INTERFACE			0x00000000u
+#define  PCIE_CONFIG_INTERFACE			0x00000001u
+
+#define ATR_ENTRY_SIZE				32
+
+#define EVENT_A_ATR_EVT_POST_ERR		0
+#define EVENT_A_ATR_EVT_FETCH_ERR		1
+#define EVENT_A_ATR_EVT_DISCARD_ERR		2
+#define EVENT_A_ATR_EVT_DOORBELL		3
+#define EVENT_P_ATR_EVT_POST_ERR		4
+#define EVENT_P_ATR_EVT_FETCH_ERR		5
+#define EVENT_P_ATR_EVT_DISCARD_ERR		6
+#define EVENT_P_ATR_EVT_DOORBELL		7
+#define EVENT_PM_MSI_INT_INTX			8
+#define EVENT_PM_MSI_INT_MSI			9
+#define EVENT_PM_MSI_INT_AER_EVT		10
+#define EVENT_PM_MSI_INT_EVENTS			11
+#define EVENT_PM_MSI_INT_SYS_ERR		12
+#define NUM_PLDA_EVENTS				13
+
+#define PM_MSI_TO_MASK_OFFSET			19
+
+struct plda_pcie;
+
+struct plda_msi {
+	struct mutex lock;		/* Protect used bitmap */
+	struct irq_domain *msi_domain;
+	struct irq_domain *dev_domain; /* inner_domain*/
+	u32 num_vectors;
+	u64 vector_phy;
+	DECLARE_BITMAP(used, PLDA_NUM_MSI_IRQS);
+};
+
+struct plda_pcie_ops {
+	int (*host_init)(struct plda_pcie *pcie);
+	void (*host_deinit)(struct plda_pcie *pcie);
+	u32 (*get_events)(struct plda_pcie *pcie);
+};
+
+struct plda_pcie {
+	struct pci_host_bridge *bridge;
+	void __iomem *bridge_addr;
+	void __iomem *config_base;
+	struct irq_domain *intx_domain;
+	struct irq_domain *event_domain;
+	struct device *dev;
+	raw_spinlock_t lock;
+	struct plda_msi msi;
+	const struct plda_pcie_ops *ops;
+	struct phy *phy;
+	int irq;
+	int msi_irq;
+	int intx_irq;
+	int num_events;
+};
+
+struct plda_evt {
+	const struct irq_domain_ops *domain_ops;
+	int (*request_evt_irq)(struct plda_pcie *pcie, int evt_irq, int event);
+	int intx_evt;
+	int msi_evt;
+};
+
+void plda_pcie_enable_msi(struct plda_pcie *port);
+void plda_pcie_setup_window(void __iomem *bridge_base_addr, u32 index,
+			    phys_addr_t axi_addr, phys_addr_t pci_addr,
+			    size_t size);
+int plda_pcie_setup_iomems(struct plda_pcie *port, struct pci_host_bridge *host_bridge);
+int plda_pcie_init_irq(struct plda_pcie *port, struct platform_device *pdev,
+		       struct plda_evt *evt);
+
+static inline void plda_set_default_msi(struct plda_msi *msi)
+{
+	msi->vector_phy = IMSI_ADDR;
+	msi->num_vectors = PLDA_NUM_MSI_IRQS;
+}
+#endif /* _PCIE_PLDA_H */
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 92+ messages in thread

* [PATCH v1 3/9] PCI: PLDA: Get PLDA common codes from Microchip PolarFire host
@ 2023-07-19 10:20   ` Minda Chen
  0 siblings, 0 replies; 92+ messages in thread
From: Minda Chen @ 2023-07-19 10:20 UTC (permalink / raw)
  To: Daire McNamara, Conor Dooley, Rob Herring, Krzysztof Kozlowski,
	Bjorn Helgaas, Lorenzo Pieralisi, Krzysztof Wilczyński,
	Emil Renner Berthing
  Cc: devicetree, linux-kernel, linux-riscv, linux-pci, Paul Walmsley,
	Palmer Dabbelt, Albert Ou, Philipp Zabel, Mason Huo, Leyfoon Tan,
	Kevin Xie, Minda Chen

Add PLDA PCIe controller driver codes, extract from Microchip
PolarFire PCIe host driver codes first.

The change includes:
- copy the IP register marcos.
- Add related data structures of PCIe host instance.
  mc_pcie --> plda_pcie (Get most of data members)
  mc_msi  --> plda_msi
  add plda_pcie_ops and plda_evt data structures.
- function rename list:
  mc_pcie_enable_msi       --> plda_pcie_enable_msi
  mc_pcie_setup_window     --> plda_pcie_setup_window
  mc_pcie_setup_windows    --> plda_pcie_setup_iomems
  mc_pcie_init_irq_domains --> plda_pcie_init_irq_domains
  mc_allocate_msi_domains  --> plda_allocate_msi_domains
  mc_init_interrupts       --> plda_pcie_init_irq
  msi interrupts related functions and irq domain
  (primary function is mc_handle_msi):
  mc_handle_msi            --> plda_handle_msi
  intx interrupts related functions and irq domain
  (primary function is mc_handle_intx):
  mc_handle_intx           --> plda_handle_intx
  event interrupts:
  mc_handle_event	   --> plda_handle_event
- For PolarFire implements non-plda local interrupt events, most of
  event interrupt process codes can not be re-used. PLDA implements
  new codes and irq domain ops like PolarFire.
  New event functions:
    plda_event_handler
    plda_pcie_event_map
    plda_ack_event_irq
    plda_mask_event_irq
    plda_unmask_event_irq
    plda_hwirq_to_mask
- plda_handle_event adds a new irqnum to event num mapping codes for
  PLDA local event except DMA engine interrupt events. The DMA engine
  interrupt events are implemented by vendors. So do not add these
  events. PolarFire PCIe uses get_events function pointer to get
  their events num.

Signed-off-by: Minda Chen <minda.chen@starfivetech.com>
---
 MAINTAINERS                                  |   8 +
 drivers/pci/controller/Kconfig               |   1 +
 drivers/pci/controller/Makefile              |   1 +
 drivers/pci/controller/plda/Kconfig          |  11 +
 drivers/pci/controller/plda/Makefile         |   2 +
 drivers/pci/controller/plda/pcie-plda-host.c | 554 +++++++++++++++++++
 drivers/pci/controller/plda/pcie-plda.h      | 173 ++++++
 7 files changed, 750 insertions(+)
 create mode 100644 drivers/pci/controller/plda/Kconfig
 create mode 100644 drivers/pci/controller/plda/Makefile
 create mode 100644 drivers/pci/controller/plda/pcie-plda-host.c
 create mode 100644 drivers/pci/controller/plda/pcie-plda.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 8e4f9d5dca55..3b3c4d8808ae 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -16170,6 +16170,14 @@ S:	Maintained
 F:	Documentation/devicetree/bindings/pci/cdns,*
 F:	drivers/pci/controller/cadence/
 
+PCI DRIVER FOR PLDA PCIE IP
+M:	Daire McNamara <daire.mcnamara@microchip.com>
+M:	Minda Chen <minda.chen@starfivetech.com>
+L:	linux-pci@vger.kernel.org
+S:	Maintained
+F:	Documentation/devicetree/bindings/pci/plda,*
+F:	drivers/pci/controller/plda/*plda*
+
 PCI DRIVER FOR FREESCALE LAYERSCAPE
 M:	Minghuan Lian <minghuan.Lian@nxp.com>
 M:	Mingkai Hu <mingkai.hu@nxp.com>
diff --git a/drivers/pci/controller/Kconfig b/drivers/pci/controller/Kconfig
index 8d49bad7f847..a106dabcb8dc 100644
--- a/drivers/pci/controller/Kconfig
+++ b/drivers/pci/controller/Kconfig
@@ -346,4 +346,5 @@ config PCIE_XILINX_CPM
 source "drivers/pci/controller/cadence/Kconfig"
 source "drivers/pci/controller/dwc/Kconfig"
 source "drivers/pci/controller/mobiveil/Kconfig"
+source "drivers/pci/controller/plda/Kconfig"
 endmenu
diff --git a/drivers/pci/controller/Makefile b/drivers/pci/controller/Makefile
index 37c8663de7fe..aa0a691ebcbc 100644
--- a/drivers/pci/controller/Makefile
+++ b/drivers/pci/controller/Makefile
@@ -43,6 +43,7 @@ obj-$(CONFIG_PCIE_MT7621) += pcie-mt7621.o
 # pcie-hisi.o quirks are needed even without CONFIG_PCIE_DW
 obj-y				+= dwc/
 obj-y				+= mobiveil/
+obj-y				+= plda/
 
 
 # The following drivers are for devices that use the generic ACPI
diff --git a/drivers/pci/controller/plda/Kconfig b/drivers/pci/controller/plda/Kconfig
new file mode 100644
index 000000000000..37e17326671b
--- /dev/null
+++ b/drivers/pci/controller/plda/Kconfig
@@ -0,0 +1,11 @@
+# SPDX-License-Identifier: GPL-2.0
+
+menu "PLDA PCIe controllers support"
+	depends on PCI
+
+config PCIE_PLDA_HOST
+	bool "PLDA PCIe host controller"
+	depends on OF && PCI_MSI
+	select IRQ_DOMAIN
+
+endmenu
diff --git a/drivers/pci/controller/plda/Makefile b/drivers/pci/controller/plda/Makefile
new file mode 100644
index 000000000000..79dbae655c2b
--- /dev/null
+++ b/drivers/pci/controller/plda/Makefile
@@ -0,0 +1,2 @@
+# SPDX-License-Identifier: GPL-2.0
+obj-$(CONFIG_PCIE_PLDA_HOST) += pcie-plda-host.o
diff --git a/drivers/pci/controller/plda/pcie-plda-host.c b/drivers/pci/controller/plda/pcie-plda-host.c
new file mode 100644
index 000000000000..ca720430721c
--- /dev/null
+++ b/drivers/pci/controller/plda/pcie-plda-host.c
@@ -0,0 +1,554 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * PLDA PCIe XpressRich host controller driver
+ *
+ * Copyright (C) 2023 Microchip Co. Ltd
+ *		      StarFive Co. Ltd.
+ *
+ * Author: Daire McNamara <daire.mcnamara@microchip.com>
+ * Author: Minda Chen <minda.chen@starfivetech.com>
+ */
+
+#include <linux/irqchip/chained_irq.h>
+#include <linux/irqdomain.h>
+#include <linux/msi.h>
+#include <linux/of_address.h>
+#include <linux/of_pci.h>
+#include <linux/pci_regs.h>
+#include <linux/pci-ecam.h>
+#include <linux/platform_device.h>
+
+#include "pcie-plda.h"
+
+void plda_pcie_enable_msi(struct plda_pcie *port)
+{
+	struct plda_msi *msi = &port->msi;
+	void __iomem *base = port->bridge_addr;
+	u32 cap_offset = MSI_CAP_CTRL_OFFSET;
+	u16 msg_ctrl = readw_relaxed(base + cap_offset + PCI_MSI_FLAGS);
+
+	msg_ctrl |= PCI_MSI_FLAGS_ENABLE;
+	msg_ctrl &= ~PCI_MSI_FLAGS_QMASK;
+	msg_ctrl |= MSI_MAX_Q_AVAIL;
+	msg_ctrl &= ~PCI_MSI_FLAGS_QSIZE;
+	msg_ctrl |= MSI_Q_SIZE;
+	msg_ctrl |= PCI_MSI_FLAGS_64BIT;
+
+	writew_relaxed(msg_ctrl, base + cap_offset + PCI_MSI_FLAGS);
+
+	writel_relaxed(lower_32_bits(msi->vector_phy),
+		       base + cap_offset + PCI_MSI_ADDRESS_LO);
+	writel_relaxed(upper_32_bits(msi->vector_phy),
+		       base + cap_offset + PCI_MSI_ADDRESS_HI);
+}
+EXPORT_SYMBOL_GPL(plda_pcie_enable_msi);
+
+static void plda_handle_msi(struct irq_desc *desc)
+{
+	struct plda_pcie *port = irq_desc_get_handler_data(desc);
+	struct irq_chip *chip = irq_desc_get_chip(desc);
+	struct device *dev = port->dev;
+	struct plda_msi *msi = &port->msi;
+	void __iomem *bridge_base_addr = port->bridge_addr;
+	unsigned long status;
+	u32 bit;
+	int ret;
+
+	chained_irq_enter(chip, desc);
+	status = readl_relaxed(bridge_base_addr + ISTATUS_LOCAL);
+	if (status & PM_MSI_INT_MSI_MASK) {
+		writel_relaxed(BIT(PM_MSI_INT_MSI_SHIFT), bridge_base_addr + ISTATUS_LOCAL);
+		status = readl_relaxed(bridge_base_addr + ISTATUS_MSI);
+		for_each_set_bit(bit, &status, msi->num_vectors) {
+			ret = generic_handle_domain_irq(msi->dev_domain, bit);
+			if (ret)
+				dev_err_ratelimited(dev, "bad MSI IRQ %d\n",
+						    bit);
+		}
+	}
+	chained_irq_exit(chip, desc);
+}
+
+static void plda_msi_bottom_irq_ack(struct irq_data *data)
+{
+	struct plda_pcie *port = irq_data_get_irq_chip_data(data);
+	void __iomem *bridge_base_addr = port->bridge_addr;
+	u32 bitpos = data->hwirq;
+
+	writel_relaxed(BIT(bitpos), bridge_base_addr + ISTATUS_MSI);
+}
+
+static void plda_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
+{
+	struct plda_pcie *port = irq_data_get_irq_chip_data(data);
+	phys_addr_t addr = port->msi.vector_phy;
+
+	msg->address_lo = lower_32_bits(addr);
+	msg->address_hi = upper_32_bits(addr);
+	msg->data = data->hwirq;
+
+	dev_dbg(port->dev, "msi#%x address_hi %#x address_lo %#x\n",
+		(int)data->hwirq, msg->address_hi, msg->address_lo);
+}
+
+static int plda_msi_set_affinity(struct irq_data *irq_data,
+				 const struct cpumask *mask, bool force)
+{
+	return -EINVAL;
+}
+
+static struct irq_chip plda_msi_bottom_irq_chip = {
+	.name = "PLDA MSI",
+	.irq_ack = plda_msi_bottom_irq_ack,
+	.irq_compose_msi_msg = plda_compose_msi_msg,
+	.irq_set_affinity = plda_msi_set_affinity,
+};
+
+static int plda_irq_msi_domain_alloc(struct irq_domain *domain, unsigned int virq,
+				     unsigned int nr_irqs, void *args)
+{
+	struct plda_pcie *port = domain->host_data;
+	struct plda_msi *msi = &port->msi;
+	void __iomem *bridge_base_addr = port->bridge_addr;
+	unsigned long bit;
+	u32 val;
+
+	mutex_lock(&msi->lock);
+	bit = find_first_zero_bit(msi->used, msi->num_vectors);
+	if (bit >= msi->num_vectors) {
+		mutex_unlock(&msi->lock);
+		return -ENOSPC;
+	}
+
+	set_bit(bit, msi->used);
+
+	irq_domain_set_info(domain, virq, bit, &plda_msi_bottom_irq_chip,
+			    domain->host_data, handle_edge_irq, NULL, NULL);
+
+	/* Enable MSI interrupts */
+	val = readl_relaxed(bridge_base_addr + IMASK_LOCAL);
+	val |= PM_MSI_INT_MSI_MASK;
+	writel_relaxed(val, bridge_base_addr + IMASK_LOCAL);
+
+	mutex_unlock(&msi->lock);
+
+	return 0;
+}
+
+static void plda_irq_msi_domain_free(struct irq_domain *domain, unsigned int virq,
+				     unsigned int nr_irqs)
+{
+	struct irq_data *d = irq_domain_get_irq_data(domain, virq);
+	struct plda_pcie *port = irq_data_get_irq_chip_data(d);
+	struct plda_msi *msi = &port->msi;
+
+	mutex_lock(&msi->lock);
+
+	if (test_bit(d->hwirq, msi->used))
+		__clear_bit(d->hwirq, msi->used);
+	else
+		dev_err(port->dev, "trying to free unused MSI%lu\n", d->hwirq);
+
+	mutex_unlock(&msi->lock);
+}
+
+static const struct irq_domain_ops msi_domain_ops = {
+	.alloc	= plda_irq_msi_domain_alloc,
+	.free	= plda_irq_msi_domain_free,
+};
+
+static struct irq_chip plda_msi_irq_chip = {
+	.name = "PLDA PCIe MSI",
+	.irq_ack = irq_chip_ack_parent,
+	.irq_mask = pci_msi_mask_irq,
+	.irq_unmask = pci_msi_unmask_irq,
+};
+
+static struct msi_domain_info plda_msi_domain_info = {
+	.flags = (MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
+		  MSI_FLAG_PCI_MSIX),
+	.chip = &plda_msi_irq_chip,
+};
+
+static int plda_allocate_msi_domains(struct plda_pcie *port)
+{
+	struct device *dev = port->dev;
+	struct fwnode_handle *fwnode = of_node_to_fwnode(dev->of_node);
+	struct plda_msi *msi = &port->msi;
+
+	mutex_init(&port->msi.lock);
+
+	msi->dev_domain = irq_domain_add_linear(NULL, msi->num_vectors,
+						&msi_domain_ops, port);
+	if (!msi->dev_domain) {
+		dev_err(dev, "failed to create IRQ domain\n");
+		return -ENOMEM;
+	}
+
+	msi->msi_domain = pci_msi_create_irq_domain(fwnode, &plda_msi_domain_info,
+						    msi->dev_domain);
+	if (!msi->msi_domain) {
+		dev_err(dev, "failed to create MSI domain\n");
+		irq_domain_remove(msi->dev_domain);
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
+static void plda_handle_intx(struct irq_desc *desc)
+{
+	struct plda_pcie *port = irq_desc_get_handler_data(desc);
+	struct irq_chip *chip = irq_desc_get_chip(desc);
+	struct device *dev = port->dev;
+	unsigned long status;
+	u32 bit;
+	int ret;
+
+	chained_irq_enter(chip, desc);
+	status = readl_relaxed(port->bridge_addr + ISTATUS_LOCAL);
+	if (status & PM_MSI_INT_INTX_MASK) {
+		status &= PM_MSI_INT_INTX_MASK;
+		status >>= PM_MSI_INT_INTX_SHIFT;
+		for_each_set_bit(bit, &status, PCI_NUM_INTX) {
+			ret = generic_handle_domain_irq(port->intx_domain, bit);
+			if (ret)
+				dev_err_ratelimited(dev, "bad INTx IRQ %d: %d\n",
+						    bit, ret);
+		}
+	}
+	chained_irq_exit(chip, desc);
+}
+
+static void plda_ack_intx_irq(struct irq_data *data)
+{
+	struct plda_pcie *port = irq_data_get_irq_chip_data(data);
+	void __iomem *bridge_base_addr = port->bridge_addr;
+	u32 mask = BIT(data->hwirq + PM_MSI_INT_INTX_SHIFT);
+
+	writel_relaxed(mask, bridge_base_addr + ISTATUS_LOCAL);
+}
+
+static void plda_mask_intx_irq(struct irq_data *data)
+{
+	struct plda_pcie *port = irq_data_get_irq_chip_data(data);
+	void __iomem *bridge_base_addr = port->bridge_addr;
+	unsigned long flags;
+	u32 mask = BIT(data->hwirq + PM_MSI_INT_INTX_SHIFT);
+	u32 val;
+
+	raw_spin_lock_irqsave(&port->lock, flags);
+	val = readl_relaxed(bridge_base_addr + IMASK_LOCAL);
+	val &= ~mask;
+	writel_relaxed(val, bridge_base_addr + IMASK_LOCAL);
+	raw_spin_unlock_irqrestore(&port->lock, flags);
+}
+
+static void plda_unmask_intx_irq(struct irq_data *data)
+{
+	struct plda_pcie *port = irq_data_get_irq_chip_data(data);
+	void __iomem *bridge_base_addr = port->bridge_addr;
+	unsigned long flags;
+	u32 mask = BIT(data->hwirq + PM_MSI_INT_INTX_SHIFT);
+	u32 val;
+
+	raw_spin_lock_irqsave(&port->lock, flags);
+	val = readl_relaxed(bridge_base_addr + IMASK_LOCAL);
+	val |= mask;
+	writel_relaxed(val, bridge_base_addr + IMASK_LOCAL);
+	raw_spin_unlock_irqrestore(&port->lock, flags);
+}
+
+static struct irq_chip plda_intx_irq_chip = {
+	.name = "PLDA PCIe INTx",
+	.irq_ack = plda_ack_intx_irq,
+	.irq_mask = plda_mask_intx_irq,
+	.irq_unmask = plda_unmask_intx_irq,
+};
+
+static int plda_pcie_intx_map(struct irq_domain *domain, unsigned int irq,
+			      irq_hw_number_t hwirq)
+{
+	irq_set_chip_and_handler(irq, &plda_intx_irq_chip, handle_level_irq);
+	irq_set_chip_data(irq, domain->host_data);
+
+	return 0;
+}
+
+static const struct irq_domain_ops intx_domain_ops = {
+	.map = plda_pcie_intx_map,
+};
+
+static void plda_handle_event(struct irq_desc *desc)
+{
+	struct plda_pcie *port = irq_desc_get_handler_data(desc);
+	struct irq_chip *chip = irq_desc_get_chip(desc);
+	unsigned long events = 0;
+	u32 bit, val, origin;
+
+	chained_irq_enter(chip, desc);
+
+	if (port->ops && port->ops->get_events) {
+		events = port->ops->get_events(port);
+	} else {
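+		/*
+		 * Default ISTATUS_LOCAL layout: bits 16..23 carry the ATR
+		 * events 0..7, bits 24..27 collapse into the single INTx
+		 * event, and bits 28..31 map to events 9..12.
+		 */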
+		val = readl_relaxed(port->bridge_addr + ISTATUS_LOCAL);
+		origin = val;
+		val = val >> A_ATR_EVT_POST_ERR_SHIFT;
+		events |= val & 0xff;
+		if (origin & PM_MSI_INT_INTX_MASK)
+			events |= BIT(EVENT_PM_MSI_INT_INTX);
+		val = (origin >> PM_MSI_INT_MSI_SHIFT) & 0xf;
+		events |= val << EVENT_PM_MSI_INT_MSI;
+	}
+
+	for_each_set_bit(bit, &events, port->num_events)
+		generic_handle_domain_irq(port->event_domain, bit);
+
+	chained_irq_exit(chip, desc);
+}
+
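+/* Translate an event hwirq to its ISTATUS_LOCAL/IMASK_LOCAL bit mask */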
+static u32 plda_hwirq_to_mask(int hwirq)
+{
+	u32 mask;
+
+	if (hwirq < EVENT_PM_MSI_INT_INTX)
+		mask = BIT(hwirq + A_ATR_EVT_POST_ERR_SHIFT);
+	else if (hwirq == EVENT_PM_MSI_INT_INTX)
+		mask = PM_MSI_INT_INTX_MASK;
+	else
+		mask = BIT(hwirq + PM_MSI_TO_MASK_OFFSET);
+
+	return mask;
+}
+
+static void plda_ack_event_irq(struct irq_data *data)
+{
+	struct plda_pcie *port = irq_data_get_irq_chip_data(data);
+
+	writel_relaxed(plda_hwirq_to_mask(data->hwirq),
+		       port->bridge_addr + ISTATUS_LOCAL);
+}
+
+static void plda_mask_event_irq(struct irq_data *data)
+{
+	struct plda_pcie *port = irq_data_get_irq_chip_data(data);
+	u32 mask, val;
+
+	mask = plda_hwirq_to_mask(data->hwirq);
+
+	raw_spin_lock(&port->lock);
+	val = readl_relaxed(port->bridge_addr + IMASK_LOCAL);
+	val &= ~mask;
+	writel_relaxed(val, port->bridge_addr + IMASK_LOCAL);
+	raw_spin_unlock(&port->lock);
+}
+
+static void plda_unmask_event_irq(struct irq_data *data)
+{
+	struct plda_pcie *port = irq_data_get_irq_chip_data(data);
+	u32 mask, val;
+
+	mask = plda_hwirq_to_mask(data->hwirq);
+
+	raw_spin_lock(&port->lock);
+	val = readl_relaxed(port->bridge_addr + IMASK_LOCAL);
+	val |= mask;
+	writel_relaxed(val, port->bridge_addr + IMASK_LOCAL);
+	raw_spin_unlock(&port->lock);
+}
+
+static struct irq_chip plda_event_irq_chip = {
+	.name = "PLDA PCIe EVENT",
+	.irq_ack = plda_ack_event_irq,
+	.irq_mask = plda_mask_event_irq,
+	.irq_unmask = plda_unmask_event_irq,
+};
+
+static int plda_pcie_event_map(struct irq_domain *domain, unsigned int irq,
+			       irq_hw_number_t hwirq)
+{
+	irq_set_chip_and_handler(irq, &plda_event_irq_chip, handle_level_irq);
+	irq_set_chip_data(irq, domain->host_data);
+
+	return 0;
+}
+
+static const struct irq_domain_ops event_domain_ops = {
+	.map = plda_pcie_event_map,
+};
+
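+/* Default per-event handler; the event irqchip callbacks do the real work */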
+static irqreturn_t plda_event_handler(int irq, void *dev_id)
+{
+	return IRQ_HANDLED;
+}
+
+static int plda_pcie_init_irq_domains(struct plda_pcie *port,
+				      const struct irq_domain_ops *domain_ops)
+{
+	struct device *dev = port->dev;
+	struct device_node *node = dev->of_node;
+	struct device_node *pcie_intc_node;
+
+	/* Setup INTx */
+	pcie_intc_node = of_get_next_child(node, NULL);
+	if (!pcie_intc_node) {
+		dev_err(dev, "failed to find PCIe Intc node\n");
+		return -EINVAL;
+	}
+
+	port->event_domain = irq_domain_add_linear(pcie_intc_node,
+						   port->num_events,
+						   domain_ops, port);
+	if (!port->event_domain) {
+		dev_err(dev, "failed to get event domain\n");
+		of_node_put(pcie_intc_node);
+		return -ENOMEM;
+	}
+
+	irq_domain_update_bus_token(port->event_domain, DOMAIN_BUS_NEXUS);
+
+	port->intx_domain = irq_domain_add_linear(pcie_intc_node, PCI_NUM_INTX,
+						  &intx_domain_ops, port);
+	if (!port->intx_domain) {
+		dev_err(dev, "failed to get an INTx IRQ domain\n");
+		of_node_put(pcie_intc_node);
+		return -ENOMEM;
+	}
+
+	irq_domain_update_bus_token(port->intx_domain, DOMAIN_BUS_WIRED);
+
+	of_node_put(pcie_intc_node);
+	raw_spin_lock_init(&port->lock);
+
+	return plda_allocate_msi_domains(port);
+}
+
+int plda_pcie_init_irq(struct plda_pcie *port, struct platform_device *pdev,
+		       struct plda_evt *evt)
+{
+	struct device *dev = port->dev;
+	const struct irq_domain_ops *ops;
+	int i;
+	int ret, err, event_irq, intx_irq, msi_irq;
+
+	if (!evt->domain_ops)
+		ops = &event_domain_ops;
+	else
+		ops = evt->domain_ops;
+
+	ret = plda_pcie_init_irq_domains(port, ops);
+	if (ret)
+		return ret;
+
+	port->irq = platform_get_irq(pdev, 0);
+	if (port->irq < 0) {
+		dev_err(dev, "failed to get IRQ %d\n", port->irq);
+		return port->irq;
+	}
+
+	for (i = 0; i < port->num_events; i++) {
+		event_irq = irq_create_mapping(port->event_domain, i);
+		if (!event_irq) {
+			dev_err(dev, "failed to map hwirq %d\n", i);
+			return -ENXIO;
+		}
+
+		if (evt->request_evt_irq)
+			err = evt->request_evt_irq(port, event_irq, i);
+		else
+			err = devm_request_irq(dev, event_irq,
+					       plda_event_handler,
+					       0, NULL, port);
+		if (err) {
+			dev_err(dev, "failed to request IRQ %d\n", event_irq);
+			return err;
+		}
+	}
+
+	intx_irq = irq_create_mapping(port->event_domain,
+				      evt->intx_evt);
+	if (!intx_irq) {
+		dev_err(dev, "failed to map INTx interrupt\n");
+		return -ENXIO;
+	}
+	port->intx_irq = intx_irq;
+
+	/* Plug the INTx chained handler */
+	irq_set_chained_handler_and_data(intx_irq, plda_handle_intx, port);
+
+	msi_irq = irq_create_mapping(port->event_domain,
+				     evt->msi_evt);
+	if (!msi_irq)
+		return -ENXIO;
+
+	port->msi_irq = msi_irq;
+	/* Plug the MSI chained handler */
+	irq_set_chained_handler_and_data(msi_irq, plda_handle_msi, port);
+
+	/* Plug the main event chained handler */
+	irq_set_chained_handler_and_data(port->irq, plda_handle_event, port);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(plda_pcie_init_irq);
+
+void plda_pcie_setup_window(void __iomem *bridge_base_addr, u32 index,
+			    phys_addr_t axi_addr, phys_addr_t pci_addr,
+			    size_t size)
+{
+	u32 atr_sz = ilog2(size) - 1;
+	u32 val;
+
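+	/* Entry 0 targets the config interface, all others the TX/RX interface */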
+	if (index == 0)
+		val = PCIE_CONFIG_INTERFACE;
+	else
+		val = PCIE_TX_RX_INTERFACE;
+
+	writel(val, bridge_base_addr + (index * ATR_ENTRY_SIZE) +
+	       ATR0_AXI4_SLV0_TRSL_PARAM);
+
+	val = lower_32_bits(axi_addr) | (atr_sz << ATR_SIZE_SHIFT) |
+			    ATR_IMPL_ENABLE;
+	writel(val, bridge_base_addr + (index * ATR_ENTRY_SIZE) +
+	       ATR0_AXI4_SLV0_SRCADDR_PARAM);
+
+	val = upper_32_bits(axi_addr);
+	writel(val, bridge_base_addr + (index * ATR_ENTRY_SIZE) +
+	       ATR0_AXI4_SLV0_SRC_ADDR);
+
+	val = lower_32_bits(pci_addr);
+	writel(val, bridge_base_addr + (index * ATR_ENTRY_SIZE) +
+	       ATR0_AXI4_SLV0_TRSL_ADDR_LSB);
+
+	val = upper_32_bits(pci_addr);
+	writel(val, bridge_base_addr + (index * ATR_ENTRY_SIZE) +
+	       ATR0_AXI4_SLV0_TRSL_ADDR_UDW);
+
+	val = readl(bridge_base_addr + ATR0_PCIE_WIN0_SRCADDR_PARAM);
+	val |= (ATR0_PCIE_ATR_SIZE << ATR0_PCIE_ATR_SIZE_SHIFT);
+	writel(val, bridge_base_addr + ATR0_PCIE_WIN0_SRCADDR_PARAM);
+	writel(0, bridge_base_addr + ATR0_PCIE_WIN0_SRC_ADDR);
+}
+EXPORT_SYMBOL_GPL(plda_pcie_setup_window);
+
+int plda_pcie_setup_iomems(struct plda_pcie *port, struct pci_host_bridge *bridge)
+{
+	void __iomem *bridge_base_addr = port->bridge_addr;
+	struct resource_entry *entry;
+	u64 pci_addr;
+	u32 index = 1;
+
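+	/* ATR entry 0 is reserved for config space, so MEM windows start at 1 */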
+	resource_list_for_each_entry(entry, &bridge->windows) {
+		if (resource_type(entry->res) == IORESOURCE_MEM) {
+			pci_addr = entry->res->start - entry->offset;
+			plda_pcie_setup_window(bridge_base_addr, index,
+					       entry->res->start, pci_addr,
+					       resource_size(entry->res));
+			index++;
+		}
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(plda_pcie_setup_iomems);
diff --git a/drivers/pci/controller/plda/pcie-plda.h b/drivers/pci/controller/plda/pcie-plda.h
new file mode 100644
index 000000000000..feb3a0d9ace5
--- /dev/null
+++ b/drivers/pci/controller/plda/pcie-plda.h
@@ -0,0 +1,173 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * PLDA XpressRICH PCIe host controller driver
+ */
+
+#ifndef _PCIE_PLDA_H
+#define _PCIE_PLDA_H
+
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/pci-epf.h>
+#include <linux/phy/phy.h>
+
+/* Number of MSI IRQs */
+#define PLDA_NUM_MSI_IRQS		32
+#define NUM_MSI_IRQS_CODED		5	/* log2(PLDA_NUM_MSI_IRQS) */
+
+/* PCIe Bridge Phy Regs */
+#define PCIE_PCI_IDS_DW1		0x9c
+
+/* PCIe Config space MSI capability structure */
+#define MSI_CAP_CTRL_OFFSET		0xe0
+#define  MSI_MAX_Q_AVAIL		(NUM_MSI_IRQS_CODED << 1)
+#define  MSI_Q_SIZE			(NUM_MSI_IRQS_CODED << 4)
+
+#define IMASK_LOCAL				0x180
+#define  DMA_END_ENGINE_0_MASK			0x00000000u
+#define  DMA_END_ENGINE_0_SHIFT			0
+#define  DMA_END_ENGINE_1_MASK			0x00000000u
+#define  DMA_END_ENGINE_1_SHIFT			1
+#define  DMA_ERROR_ENGINE_0_MASK		BIT(8)
+#define  DMA_ERROR_ENGINE_0_SHIFT		8
+#define  DMA_ERROR_ENGINE_1_MASK		BIT(9)
+#define  DMA_ERROR_ENGINE_1_SHIFT		9
+#define  DMA_END_MASK				GENMASK(7, 0)
+#define  DMA_ERR_MASK				GENMASK(15, 8)
+#define  DMA_ERR_SHIFT				8
+#define  A_ATR_EVT_POST_ERR_MASK		BIT(16)
+#define  A_ATR_EVT_POST_ERR_SHIFT		16
+#define  A_ATR_EVT_FETCH_ERR_MASK		BIT(17)
+#define  A_ATR_EVT_FETCH_ERR_SHIFT		17
+#define  A_ATR_EVT_DISCARD_ERR_MASK		BIT(18)
+#define  A_ATR_EVT_DISCARD_ERR_SHIFT		18
+#define  A_ATR_EVT_DOORBELL_MASK		BIT(19)
+#define  A_ATR_EVT_DOORBELL_SHIFT		19
+#define  P_ATR_EVT_POST_ERR_MASK		BIT(20)
+#define  P_ATR_EVT_POST_ERR_SHIFT		20
+#define  P_ATR_EVT_FETCH_ERR_MASK		BIT(21)
+#define  P_ATR_EVT_FETCH_ERR_SHIFT		21
+#define  P_ATR_EVT_DISCARD_ERR_MASK		BIT(22)
+#define  P_ATR_EVT_DISCARD_ERR_SHIFT		22
+#define  P_ATR_EVT_DOORBELL_MASK		BIT(23)
+#define  P_ATR_EVT_DOORBELL_SHIFT		23
+#define  PM_MSI_INT_INTA_MASK			BIT(24)
+#define  PM_MSI_INT_INTA_SHIFT			24
+#define  PM_MSI_INT_INTB_MASK			BIT(25)
+#define  PM_MSI_INT_INTB_SHIFT			25
+#define  PM_MSI_INT_INTC_MASK			BIT(26)
+#define  PM_MSI_INT_INTC_SHIFT			26
+#define  PM_MSI_INT_INTD_MASK			BIT(27)
+#define  PM_MSI_INT_INTD_SHIFT			27
+#define  PM_MSI_INT_INTX_MASK			GENMASK(27, 24)
+#define  PM_MSI_INT_INTX_SHIFT			24
+#define  PM_MSI_INT_MSI_MASK			BIT(28)
+#define  PM_MSI_INT_MSI_SHIFT			28
+#define  PM_MSI_INT_AER_EVT_MASK		BIT(29)
+#define  PM_MSI_INT_AER_EVT_SHIFT		29
+#define  PM_MSI_INT_EVENTS_MASK			BIT(30)
+#define  PM_MSI_INT_EVENTS_SHIFT		30
+#define  PM_MSI_INT_SYS_ERR_MASK		BIT(31)
+#define  PM_MSI_INT_SYS_ERR_SHIFT		31
+
+#define ISTATUS_LOCAL				0x184
+#define IMASK_HOST				0x188
+#define ISTATUS_HOST				0x18c
+#define IMSI_ADDR				0x190
+#define ISTATUS_MSI				0x194
+
+/* PCIe Master table init defines */
+#define ATR0_PCIE_WIN0_SRCADDR_PARAM		0x600u
+#define  ATR0_PCIE_ATR_SIZE			0x25
+#define  ATR0_PCIE_ATR_SIZE_SHIFT		1
+#define ATR0_PCIE_WIN0_SRC_ADDR			0x604u
+#define ATR0_PCIE_WIN0_TRSL_ADDR_LSB		0x608u
+#define ATR0_PCIE_WIN0_TRSL_ADDR_UDW		0x60cu
+#define ATR0_PCIE_WIN0_TRSL_PARAM		0x610u
+
+/* PCIe AXI slave table init defines */
+#define ATR0_AXI4_SLV0_SRCADDR_PARAM		0x800u
+#define  ATR_SIZE_SHIFT				1
+#define  ATR_IMPL_ENABLE			1
+#define ATR0_AXI4_SLV0_SRC_ADDR			0x804u
+#define ATR0_AXI4_SLV0_TRSL_ADDR_LSB		0x808u
+#define ATR0_AXI4_SLV0_TRSL_ADDR_UDW		0x80cu
+#define ATR0_AXI4_SLV0_TRSL_PARAM		0x810u
+#define ATR0_AXI4_TABLE_OFFSET			0x20
+#define  PCIE_TX_RX_INTERFACE			0x00000000u
+#define  PCIE_CONFIG_INTERFACE			0x00000001u
+
+#define ATR_ENTRY_SIZE				32
+
+#define EVENT_A_ATR_EVT_POST_ERR		0
+#define EVENT_A_ATR_EVT_FETCH_ERR		1
+#define EVENT_A_ATR_EVT_DISCARD_ERR		2
+#define EVENT_A_ATR_EVT_DOORBELL		3
+#define EVENT_P_ATR_EVT_POST_ERR		4
+#define EVENT_P_ATR_EVT_FETCH_ERR		5
+#define EVENT_P_ATR_EVT_DISCARD_ERR		6
+#define EVENT_P_ATR_EVT_DOORBELL		7
+#define EVENT_PM_MSI_INT_INTX			8
+#define EVENT_PM_MSI_INT_MSI			9
+#define EVENT_PM_MSI_INT_AER_EVT		10
+#define EVENT_PM_MSI_INT_EVENTS			11
+#define EVENT_PM_MSI_INT_SYS_ERR		12
+#define NUM_PLDA_EVENTS				13
+
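+/*
+ * Events 9..12 sit in ISTATUS_LOCAL bits 28..31, so the register bit for
+ * such an event is hwirq + 19 (e.g. EVENT_PM_MSI_INT_MSI (9) -> bit 28).
+ */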
+#define PM_MSI_TO_MASK_OFFSET			19
+
+struct plda_pcie;
+
+struct plda_msi {
+	struct mutex lock;		/* Protect used bitmap */
+	struct irq_domain *msi_domain;
+	struct irq_domain *dev_domain;	/* inner domain */
+	u32 num_vectors;
+	u64 vector_phy;
+	DECLARE_BITMAP(used, PLDA_NUM_MSI_IRQS);
+};
+
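+/* Platform hooks; get_events() overrides the default event decoding */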
+struct plda_pcie_ops {
+	int (*host_init)(struct plda_pcie *pcie);
+	void (*host_deinit)(struct plda_pcie *pcie);
+	u32 (*get_events)(struct plda_pcie *pcie);
+};
+
+struct plda_pcie {
+	struct pci_host_bridge *bridge;
+	void __iomem *bridge_addr;
+	void __iomem *config_base;
+	struct irq_domain *intx_domain;
+	struct irq_domain *event_domain;
+	struct device *dev;
+	raw_spinlock_t lock;
+	struct plda_msi msi;
+	const struct plda_pcie_ops *ops;
+	struct phy *phy;
+	int irq;
+	int msi_irq;
+	int intx_irq;
+	int num_events;
+};
+
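+/*
+ * Per-platform IRQ wiring: optional event domain ops and event IRQ request
+ * hook, plus the event numbers of the INTx and MSI summary interrupts.
+ */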
+struct plda_evt {
+	const struct irq_domain_ops *domain_ops;
+	int (*request_evt_irq)(struct plda_pcie *pcie, int evt_irq, int event);
+	int intx_evt;
+	int msi_evt;
+};
+
+void plda_pcie_enable_msi(struct plda_pcie *port);
+void plda_pcie_setup_window(void __iomem *bridge_base_addr, u32 index,
+			    phys_addr_t axi_addr, phys_addr_t pci_addr,
+			    size_t size);
+int plda_pcie_setup_iomems(struct plda_pcie *port, struct pci_host_bridge *host_bridge);
+int plda_pcie_init_irq(struct plda_pcie *port, struct platform_device *pdev,
+		       struct plda_evt *evt);
+
+static inline void plda_set_default_msi(struct plda_msi *msi)
+{
+	msi->vector_phy = IMSI_ADDR;
+	msi->num_vectors = PLDA_NUM_MSI_IRQS;
+}
+#endif /* _PCIE_PLDA_H */
-- 
2.17.1



* [PATCH v1 4/9] PCI: microchip: Move PCIe driver to PLDA directory
  2023-07-19 10:20 ` Minda Chen
@ 2023-07-19 10:20   ` Minda Chen
  -1 siblings, 0 replies; 92+ messages in thread
From: Minda Chen @ 2023-07-19 10:20 UTC (permalink / raw)
  To: Daire McNamara, Conor Dooley, Rob Herring, Krzysztof Kozlowski,
	Bjorn Helgaas, Lorenzo Pieralisi, Krzysztof Wilczyński,
	Emil Renner Berthing
  Cc: devicetree, linux-kernel, linux-riscv, linux-pci, Paul Walmsley,
	Palmer Dabbelt, Albert Ou, Philipp Zabel, Mason Huo, Leyfoon Tan,
	Kevin Xie, Minda Chen

Move the Microchip-specific platform code to the PLDA directory.
This covers the clock init, interrupt event handling and platform
init code.
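
As a reference for other SoC glue code, a minimal sketch of the
expected flow against the PLDA core is below (the example_init()
function and its name are hypothetical; it assumes plda->dev and
plda->bridge_addr were already set by the caller and only uses the
helpers exported by pcie-plda-host.c):

	static int example_init(struct plda_pcie *plda,
				struct platform_device *pdev)
	{
		/*
		 * NULL domain_ops/request_evt_irq fall back to the core's
		 * defaults (event_domain_ops and plda_event_handler).
		 */
		struct plda_evt evt = {
			.intx_evt = EVENT_PM_MSI_INT_INTX,
			.msi_evt = EVENT_PM_MSI_INT_MSI,
		};
		int ret;

		plda->num_events = NUM_PLDA_EVENTS;
		plda_set_default_msi(&plda->msi);

		ret = plda_pcie_init_irq(plda, pdev, &evt);
		if (ret)
			return ret;

		plda_pcie_enable_msi(plda);
		return 0;
	}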

Signed-off-by: Minda Chen <minda.chen@starfivetech.com>
---
 MAINTAINERS                                   |   4 +-
 drivers/pci/controller/Kconfig                |   8 -
 drivers/pci/controller/Makefile               |   1 -
 drivers/pci/controller/plda/Kconfig           |  10 +-
 drivers/pci/controller/plda/Makefile          |   1 +
 .../{ => plda}/pcie-microchip-host.c          | 594 ++----------------
 6 files changed, 55 insertions(+), 563 deletions(-)
 rename drivers/pci/controller/{ => plda}/pcie-microchip-host.c (50%)

diff --git a/MAINTAINERS b/MAINTAINERS
index 3b3c4d8808ae..f02618c2bdf5 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -16503,7 +16503,7 @@ M:	Daire McNamara <daire.mcnamara@microchip.com>
 L:	linux-pci@vger.kernel.org
 S:	Supported
 F:	Documentation/devicetree/bindings/pci/microchip*
-F:	drivers/pci/controller/*microchip*
+F:	drivers/pci/controller/plda/*microchip*
 
 PCIE DRIVER FOR QUALCOMM MSM
 M:	Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
@@ -18287,7 +18287,7 @@ F:	drivers/char/hw_random/mpfs-rng.c
 F:	drivers/clk/microchip/clk-mpfs*.c
 F:	drivers/i2c/busses/i2c-microchip-corei2c.c
 F:	drivers/mailbox/mailbox-mpfs.c
-F:	drivers/pci/controller/pcie-microchip-host.c
+F:	drivers/pci/controller/plda/pcie-microchip-host.c
 F:	drivers/pwm/pwm-microchip-core.c
 F:	drivers/reset/reset-mpfs.c
 F:	drivers/rtc/rtc-mpfs.c
diff --git a/drivers/pci/controller/Kconfig b/drivers/pci/controller/Kconfig
index a106dabcb8dc..107cdb69e15c 100644
--- a/drivers/pci/controller/Kconfig
+++ b/drivers/pci/controller/Kconfig
@@ -216,14 +216,6 @@ config PCIE_MT7621
 	help
 	  This selects a driver for the MediaTek MT7621 PCIe Controller.
 
-config PCIE_MICROCHIP_HOST
-	bool "Microchip AXI PCIe controller"
-	depends on PCI_MSI && OF
-	select PCI_HOST_COMMON
-	help
-	  Say Y here if you want kernel to support the Microchip AXI PCIe
-	  Host Bridge driver.
-
 config PCI_HYPERV_INTERFACE
 	tristate "Microsoft Hyper-V PCI Interface"
 	depends on ((X86 && X86_64) || ARM64) && HYPERV && PCI_MSI
diff --git a/drivers/pci/controller/Makefile b/drivers/pci/controller/Makefile
index aa0a691ebcbc..93236dc97b21 100644
--- a/drivers/pci/controller/Makefile
+++ b/drivers/pci/controller/Makefile
@@ -32,7 +32,6 @@ obj-$(CONFIG_PCIE_ROCKCHIP_EP) += pcie-rockchip-ep.o
 obj-$(CONFIG_PCIE_ROCKCHIP_HOST) += pcie-rockchip-host.o
 obj-$(CONFIG_PCIE_MEDIATEK) += pcie-mediatek.o
 obj-$(CONFIG_PCIE_MEDIATEK_GEN3) += pcie-mediatek-gen3.o
-obj-$(CONFIG_PCIE_MICROCHIP_HOST) += pcie-microchip-host.o
 obj-$(CONFIG_VMD) += vmd.o
 obj-$(CONFIG_PCIE_BRCMSTB) += pcie-brcmstb.o
 obj-$(CONFIG_PCI_LOONGSON) += pci-loongson.o
diff --git a/drivers/pci/controller/plda/Kconfig b/drivers/pci/controller/plda/Kconfig
index 37e17326671b..fb274976b84b 100644
--- a/drivers/pci/controller/plda/Kconfig
+++ b/drivers/pci/controller/plda/Kconfig
@@ -4,8 +4,16 @@ menu "PLDA PCIe controllers support"
 	depends on PCI
 
 config PCIE_PLDA_HOST
-	bool "PLDA PCIe host controller"
+	bool
 	depends on OF && PCI_MSI
 	select IRQ_DOMAIN
 
+config PCIE_MICROCHIP_HOST
+	bool "Microchip AXI PCIe controller"
+	select PCI_HOST_COMMON
+	select PCIE_PLDA_HOST
+	help
+	  Say Y here if you want the kernel to support the Microchip AXI PCIe
+	  Host Bridge driver.
+
 endmenu
diff --git a/drivers/pci/controller/plda/Makefile b/drivers/pci/controller/plda/Makefile
index 79dbae655c2b..4340ab007f44 100644
--- a/drivers/pci/controller/plda/Makefile
+++ b/drivers/pci/controller/plda/Makefile
@@ -1,2 +1,3 @@
 # SPDX-License-Identifier: GPL-2.0
 obj-$(CONFIG_PCIE_PLDA_HOST) += pcie-plda-host.o
+obj-$(CONFIG_PCIE_MICROCHIP_HOST) += pcie-microchip-host.o
diff --git a/drivers/pci/controller/pcie-microchip-host.c b/drivers/pci/controller/plda/pcie-microchip-host.c
similarity index 50%
rename from drivers/pci/controller/pcie-microchip-host.c
rename to drivers/pci/controller/plda/pcie-microchip-host.c
index 5e710e485464..4ff3d956c7f8 100644
--- a/drivers/pci/controller/pcie-microchip-host.c
+++ b/drivers/pci/controller/plda/pcie-microchip-host.c
@@ -17,11 +17,8 @@
 #include <linux/pci-ecam.h>
 #include <linux/platform_device.h>
 
-#include "../pci.h"
-
-/* Number of MSI IRQs */
-#define MC_NUM_MSI_IRQS				32
-#define MC_NUM_MSI_IRQS_CODED			5
+#include "../../pci.h"
+#include "pcie-plda.h"
 
 /* PCIe Bridge Phy and Controller Phy offsets */
 #define MC_PCIE1_BRIDGE_ADDR			0x00008000u
@@ -29,6 +26,7 @@
 
 #define MC_PCIE_BRIDGE_ADDR			(MC_PCIE1_BRIDGE_ADDR)
 #define MC_PCIE_CTRL_ADDR			(MC_PCIE1_CTRL_ADDR)
+#define MC_PCIE_CTRL_ADDR_OFFSET		0x00002000u
 
 /* PCIe Controller Phy Regs */
 #define SEC_ERROR_CNT				0x20
@@ -82,86 +80,6 @@
 #define  PCIE_EVENT_INT_ENB_SHIFT		16
 #define  NUM_PCIE_EVENTS			(3)
 
-/* PCIe Bridge Phy Regs */
-#define PCIE_PCI_IDS_DW1			0x9c
-
-/* PCIe Config space MSI capability structure */
-#define MC_MSI_CAP_CTRL_OFFSET			0xe0u
-#define  MC_MSI_MAX_Q_AVAIL			(MC_NUM_MSI_IRQS_CODED << 1)
-#define  MC_MSI_Q_SIZE				(MC_NUM_MSI_IRQS_CODED << 4)
-
-#define IMASK_LOCAL				0x180
-#define  DMA_END_ENGINE_0_MASK			0x00000000u
-#define  DMA_END_ENGINE_0_SHIFT			0
-#define  DMA_END_ENGINE_1_MASK			0x00000000u
-#define  DMA_END_ENGINE_1_SHIFT			1
-#define  DMA_ERROR_ENGINE_0_MASK		0x00000100u
-#define  DMA_ERROR_ENGINE_0_SHIFT		8
-#define  DMA_ERROR_ENGINE_1_MASK		0x00000200u
-#define  DMA_ERROR_ENGINE_1_SHIFT		9
-#define  A_ATR_EVT_POST_ERR_MASK		0x00010000u
-#define  A_ATR_EVT_POST_ERR_SHIFT		16
-#define  A_ATR_EVT_FETCH_ERR_MASK		0x00020000u
-#define  A_ATR_EVT_FETCH_ERR_SHIFT		17
-#define  A_ATR_EVT_DISCARD_ERR_MASK		0x00040000u
-#define  A_ATR_EVT_DISCARD_ERR_SHIFT		18
-#define  A_ATR_EVT_DOORBELL_MASK		0x00000000u
-#define  A_ATR_EVT_DOORBELL_SHIFT		19
-#define  P_ATR_EVT_POST_ERR_MASK		0x00100000u
-#define  P_ATR_EVT_POST_ERR_SHIFT		20
-#define  P_ATR_EVT_FETCH_ERR_MASK		0x00200000u
-#define  P_ATR_EVT_FETCH_ERR_SHIFT		21
-#define  P_ATR_EVT_DISCARD_ERR_MASK		0x00400000u
-#define  P_ATR_EVT_DISCARD_ERR_SHIFT		22
-#define  P_ATR_EVT_DOORBELL_MASK		0x00000000u
-#define  P_ATR_EVT_DOORBELL_SHIFT		23
-#define  PM_MSI_INT_INTA_MASK			0x01000000u
-#define  PM_MSI_INT_INTA_SHIFT			24
-#define  PM_MSI_INT_INTB_MASK			0x02000000u
-#define  PM_MSI_INT_INTB_SHIFT			25
-#define  PM_MSI_INT_INTC_MASK			0x04000000u
-#define  PM_MSI_INT_INTC_SHIFT			26
-#define  PM_MSI_INT_INTD_MASK			0x08000000u
-#define  PM_MSI_INT_INTD_SHIFT			27
-#define  PM_MSI_INT_INTX_MASK			0x0f000000u
-#define  PM_MSI_INT_INTX_SHIFT			24
-#define  PM_MSI_INT_MSI_MASK			0x10000000u
-#define  PM_MSI_INT_MSI_SHIFT			28
-#define  PM_MSI_INT_AER_EVT_MASK		0x20000000u
-#define  PM_MSI_INT_AER_EVT_SHIFT		29
-#define  PM_MSI_INT_EVENTS_MASK			0x40000000u
-#define  PM_MSI_INT_EVENTS_SHIFT		30
-#define  PM_MSI_INT_SYS_ERR_MASK		0x80000000u
-#define  PM_MSI_INT_SYS_ERR_SHIFT		31
-#define  NUM_LOCAL_EVENTS			15
-#define ISTATUS_LOCAL				0x184
-#define IMASK_HOST				0x188
-#define ISTATUS_HOST				0x18c
-#define MSI_ADDR				0x190
-#define ISTATUS_MSI				0x194
-
-/* PCIe Master table init defines */
-#define ATR0_PCIE_WIN0_SRCADDR_PARAM		0x600u
-#define  ATR0_PCIE_ATR_SIZE			0x25
-#define  ATR0_PCIE_ATR_SIZE_SHIFT		1
-#define ATR0_PCIE_WIN0_SRC_ADDR			0x604u
-#define ATR0_PCIE_WIN0_TRSL_ADDR_LSB		0x608u
-#define ATR0_PCIE_WIN0_TRSL_ADDR_UDW		0x60cu
-#define ATR0_PCIE_WIN0_TRSL_PARAM		0x610u
-
-/* PCIe AXI slave table init defines */
-#define ATR0_AXI4_SLV0_SRCADDR_PARAM		0x800u
-#define  ATR_SIZE_SHIFT				1
-#define  ATR_IMPL_ENABLE			1
-#define ATR0_AXI4_SLV0_SRC_ADDR			0x804u
-#define ATR0_AXI4_SLV0_TRSL_ADDR_LSB		0x808u
-#define ATR0_AXI4_SLV0_TRSL_ADDR_UDW		0x80cu
-#define ATR0_AXI4_SLV0_TRSL_PARAM		0x810u
-#define  PCIE_TX_RX_INTERFACE			0x00000000u
-#define  PCIE_CONFIG_INTERFACE			0x00000001u
-
-#define ATR_ENTRY_SIZE				32
-
 #define EVENT_PCIE_L2_EXIT			0
 #define EVENT_PCIE_HOTRST_EXIT			1
 #define EVENT_PCIE_DLUP_EXIT			2
@@ -205,7 +123,7 @@
 	[EVENT_LOCAL_ ## x] = { __stringify(x), s }
 
 #define PCIE_EVENT(x) \
-	.base = MC_PCIE_CTRL_ADDR, \
+	.base = MC_PCIE_CTRL_ADDR_OFFSET, \
 	.offset = PCIE_EVENT_INT, \
 	.mask_offset = PCIE_EVENT_INT, \
 	.mask_high = 1, \
@@ -213,7 +131,7 @@
 	.enb_mask = PCIE_EVENT_INT_ENB_MASK
 
 #define SEC_EVENT(x) \
-	.base = MC_PCIE_CTRL_ADDR, \
+	.base = MC_PCIE_CTRL_ADDR_OFFSET, \
 	.offset = SEC_ERROR_INT, \
 	.mask_offset = SEC_ERROR_INT_MASK, \
 	.mask = SEC_ERROR_INT_ ## x ## _INT, \
@@ -221,7 +139,7 @@
 	.enb_mask = 0
 
 #define DED_EVENT(x) \
-	.base = MC_PCIE_CTRL_ADDR, \
+	.base = MC_PCIE_CTRL_ADDR_OFFSET, \
 	.offset = DED_ERROR_INT, \
 	.mask_offset = DED_ERROR_INT_MASK, \
 	.mask_high = 1, \
@@ -229,7 +147,7 @@
 	.enb_mask = 0
 
 #define LOCAL_EVENT(x) \
-	.base = MC_PCIE_BRIDGE_ADDR, \
+	.base = 0, \
 	.offset = ISTATUS_LOCAL, \
 	.mask_offset = IMASK_LOCAL, \
 	.mask_high = 0, \
@@ -253,22 +171,9 @@ struct event_map {
 	u32 event_bit;
 };
 
-struct mc_msi {
-	struct mutex lock;		/* Protect used bitmap */
-	struct irq_domain *msi_domain;
-	struct irq_domain *dev_domain;
-	u32 num_vectors;
-	u64 vector_phy;
-	DECLARE_BITMAP(used, MC_NUM_MSI_IRQS);
-};
-
 struct mc_pcie {
+	struct plda_pcie plda;
 	void __iomem *axi_base_addr;
-	struct device *dev;
-	struct irq_domain *intx_domain;
-	struct irq_domain *event_domain;
-	raw_spinlock_t lock;
-	struct mc_msi msi;
 };
 
 struct cause {
@@ -382,275 +287,6 @@ static struct {
 
 static char poss_clks[][5] = { "fic0", "fic1", "fic2", "fic3" };
 
-static void mc_pcie_enable_msi(struct mc_pcie *port, void __iomem *base)
-{
-	struct mc_msi *msi = &port->msi;
-	u32 cap_offset = MC_MSI_CAP_CTRL_OFFSET;
-	u16 msg_ctrl = readw_relaxed(base + cap_offset + PCI_MSI_FLAGS);
-
-	msg_ctrl |= PCI_MSI_FLAGS_ENABLE;
-	msg_ctrl &= ~PCI_MSI_FLAGS_QMASK;
-	msg_ctrl |= MC_MSI_MAX_Q_AVAIL;
-	msg_ctrl &= ~PCI_MSI_FLAGS_QSIZE;
-	msg_ctrl |= MC_MSI_Q_SIZE;
-	msg_ctrl |= PCI_MSI_FLAGS_64BIT;
-
-	writew_relaxed(msg_ctrl, base + cap_offset + PCI_MSI_FLAGS);
-
-	writel_relaxed(lower_32_bits(msi->vector_phy),
-		       base + cap_offset + PCI_MSI_ADDRESS_LO);
-	writel_relaxed(upper_32_bits(msi->vector_phy),
-		       base + cap_offset + PCI_MSI_ADDRESS_HI);
-}
-
-static void mc_handle_msi(struct irq_desc *desc)
-{
-	struct mc_pcie *port = irq_desc_get_handler_data(desc);
-	struct irq_chip *chip = irq_desc_get_chip(desc);
-	struct device *dev = port->dev;
-	struct mc_msi *msi = &port->msi;
-	void __iomem *bridge_base_addr =
-		port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
-	unsigned long status;
-	u32 bit;
-	int ret;
-
-	chained_irq_enter(chip, desc);
-
-	status = readl_relaxed(bridge_base_addr + ISTATUS_LOCAL);
-	if (status & PM_MSI_INT_MSI_MASK) {
-		writel_relaxed(status & PM_MSI_INT_MSI_MASK, bridge_base_addr + ISTATUS_LOCAL);
-		status = readl_relaxed(bridge_base_addr + ISTATUS_MSI);
-		for_each_set_bit(bit, &status, msi->num_vectors) {
-			ret = generic_handle_domain_irq(msi->dev_domain, bit);
-			if (ret)
-				dev_err_ratelimited(dev, "bad MSI IRQ %d\n",
-						    bit);
-		}
-	}
-
-	chained_irq_exit(chip, desc);
-}
-
-static void mc_msi_bottom_irq_ack(struct irq_data *data)
-{
-	struct mc_pcie *port = irq_data_get_irq_chip_data(data);
-	void __iomem *bridge_base_addr =
-		port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
-	u32 bitpos = data->hwirq;
-
-	writel_relaxed(BIT(bitpos), bridge_base_addr + ISTATUS_MSI);
-}
-
-static void mc_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
-{
-	struct mc_pcie *port = irq_data_get_irq_chip_data(data);
-	phys_addr_t addr = port->msi.vector_phy;
-
-	msg->address_lo = lower_32_bits(addr);
-	msg->address_hi = upper_32_bits(addr);
-	msg->data = data->hwirq;
-
-	dev_dbg(port->dev, "msi#%x address_hi %#x address_lo %#x\n",
-		(int)data->hwirq, msg->address_hi, msg->address_lo);
-}
-
-static int mc_msi_set_affinity(struct irq_data *irq_data,
-			       const struct cpumask *mask, bool force)
-{
-	return -EINVAL;
-}
-
-static struct irq_chip mc_msi_bottom_irq_chip = {
-	.name = "Microchip MSI",
-	.irq_ack = mc_msi_bottom_irq_ack,
-	.irq_compose_msi_msg = mc_compose_msi_msg,
-	.irq_set_affinity = mc_msi_set_affinity,
-};
-
-static int mc_irq_msi_domain_alloc(struct irq_domain *domain, unsigned int virq,
-				   unsigned int nr_irqs, void *args)
-{
-	struct mc_pcie *port = domain->host_data;
-	struct mc_msi *msi = &port->msi;
-	void __iomem *bridge_base_addr =
-		port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
-	unsigned long bit;
-	u32 val;
-
-	mutex_lock(&msi->lock);
-	bit = find_first_zero_bit(msi->used, msi->num_vectors);
-	if (bit >= msi->num_vectors) {
-		mutex_unlock(&msi->lock);
-		return -ENOSPC;
-	}
-
-	set_bit(bit, msi->used);
-
-	irq_domain_set_info(domain, virq, bit, &mc_msi_bottom_irq_chip,
-			    domain->host_data, handle_edge_irq, NULL, NULL);
-
-	/* Enable MSI interrupts */
-	val = readl_relaxed(bridge_base_addr + IMASK_LOCAL);
-	val |= PM_MSI_INT_MSI_MASK;
-	writel_relaxed(val, bridge_base_addr + IMASK_LOCAL);
-
-	mutex_unlock(&msi->lock);
-
-	return 0;
-}
-
-static void mc_irq_msi_domain_free(struct irq_domain *domain, unsigned int virq,
-				   unsigned int nr_irqs)
-{
-	struct irq_data *d = irq_domain_get_irq_data(domain, virq);
-	struct mc_pcie *port = irq_data_get_irq_chip_data(d);
-	struct mc_msi *msi = &port->msi;
-
-	mutex_lock(&msi->lock);
-
-	if (test_bit(d->hwirq, msi->used))
-		__clear_bit(d->hwirq, msi->used);
-	else
-		dev_err(port->dev, "trying to free unused MSI%lu\n", d->hwirq);
-
-	mutex_unlock(&msi->lock);
-}
-
-static const struct irq_domain_ops msi_domain_ops = {
-	.alloc	= mc_irq_msi_domain_alloc,
-	.free	= mc_irq_msi_domain_free,
-};
-
-static struct irq_chip mc_msi_irq_chip = {
-	.name = "Microchip PCIe MSI",
-	.irq_ack = irq_chip_ack_parent,
-	.irq_mask = pci_msi_mask_irq,
-	.irq_unmask = pci_msi_unmask_irq,
-};
-
-static struct msi_domain_info mc_msi_domain_info = {
-	.flags = (MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
-		  MSI_FLAG_PCI_MSIX),
-	.chip = &mc_msi_irq_chip,
-};
-
-static int mc_allocate_msi_domains(struct mc_pcie *port)
-{
-	struct device *dev = port->dev;
-	struct fwnode_handle *fwnode = of_node_to_fwnode(dev->of_node);
-	struct mc_msi *msi = &port->msi;
-
-	mutex_init(&port->msi.lock);
-
-	msi->dev_domain = irq_domain_add_linear(NULL, msi->num_vectors,
-						&msi_domain_ops, port);
-	if (!msi->dev_domain) {
-		dev_err(dev, "failed to create IRQ domain\n");
-		return -ENOMEM;
-	}
-
-	msi->msi_domain = pci_msi_create_irq_domain(fwnode, &mc_msi_domain_info,
-						    msi->dev_domain);
-	if (!msi->msi_domain) {
-		dev_err(dev, "failed to create MSI domain\n");
-		irq_domain_remove(msi->dev_domain);
-		return -ENOMEM;
-	}
-
-	return 0;
-}
-
-static void mc_handle_intx(struct irq_desc *desc)
-{
-	struct mc_pcie *port = irq_desc_get_handler_data(desc);
-	struct irq_chip *chip = irq_desc_get_chip(desc);
-	struct device *dev = port->dev;
-	void __iomem *bridge_base_addr =
-		port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
-	unsigned long status;
-	u32 bit;
-	int ret;
-
-	chained_irq_enter(chip, desc);
-
-	status = readl_relaxed(bridge_base_addr + ISTATUS_LOCAL);
-	if (status & PM_MSI_INT_INTX_MASK) {
-		status &= PM_MSI_INT_INTX_MASK;
-		status >>= PM_MSI_INT_INTX_SHIFT;
-		for_each_set_bit(bit, &status, PCI_NUM_INTX) {
-			ret = generic_handle_domain_irq(port->intx_domain, bit);
-			if (ret)
-				dev_err_ratelimited(dev, "bad INTx IRQ %d\n",
-						    bit);
-		}
-	}
-
-	chained_irq_exit(chip, desc);
-}
-
-static void mc_ack_intx_irq(struct irq_data *data)
-{
-	struct mc_pcie *port = irq_data_get_irq_chip_data(data);
-	void __iomem *bridge_base_addr =
-		port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
-	u32 mask = BIT(data->hwirq + PM_MSI_INT_INTX_SHIFT);
-
-	writel_relaxed(mask, bridge_base_addr + ISTATUS_LOCAL);
-}
-
-static void mc_mask_intx_irq(struct irq_data *data)
-{
-	struct mc_pcie *port = irq_data_get_irq_chip_data(data);
-	void __iomem *bridge_base_addr =
-		port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
-	unsigned long flags;
-	u32 mask = BIT(data->hwirq + PM_MSI_INT_INTX_SHIFT);
-	u32 val;
-
-	raw_spin_lock_irqsave(&port->lock, flags);
-	val = readl_relaxed(bridge_base_addr + IMASK_LOCAL);
-	val &= ~mask;
-	writel_relaxed(val, bridge_base_addr + IMASK_LOCAL);
-	raw_spin_unlock_irqrestore(&port->lock, flags);
-}
-
-static void mc_unmask_intx_irq(struct irq_data *data)
-{
-	struct mc_pcie *port = irq_data_get_irq_chip_data(data);
-	void __iomem *bridge_base_addr =
-		port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
-	unsigned long flags;
-	u32 mask = BIT(data->hwirq + PM_MSI_INT_INTX_SHIFT);
-	u32 val;
-
-	raw_spin_lock_irqsave(&port->lock, flags);
-	val = readl_relaxed(bridge_base_addr + IMASK_LOCAL);
-	val |= mask;
-	writel_relaxed(val, bridge_base_addr + IMASK_LOCAL);
-	raw_spin_unlock_irqrestore(&port->lock, flags);
-}
-
-static struct irq_chip mc_intx_irq_chip = {
-	.name = "Microchip PCIe INTx",
-	.irq_ack = mc_ack_intx_irq,
-	.irq_mask = mc_mask_intx_irq,
-	.irq_unmask = mc_unmask_intx_irq,
-};
-
-static int mc_pcie_intx_map(struct irq_domain *domain, unsigned int irq,
-			    irq_hw_number_t hwirq)
-{
-	irq_set_chip_and_handler(irq, &mc_intx_irq_chip, handle_level_irq);
-	irq_set_chip_data(irq, domain->host_data);
-
-	return 0;
-}
-
-static const struct irq_domain_ops intx_domain_ops = {
-	.map = mc_pcie_intx_map,
-};
-
 static inline u32 reg_to_event(u32 reg, struct event_map field)
 {
 	return (reg & field.reg_mask) ? BIT(field.event_bit) : 0;
@@ -704,11 +340,10 @@ static u32 local_events(void __iomem *addr)
 	return val;
 }
 
-static u32 get_events(struct mc_pcie *port)
+static u32 get_events(struct plda_pcie *port)
 {
-	void __iomem *bridge_base_addr =
-		port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
-	void __iomem *ctrl_base_addr = port->axi_base_addr + MC_PCIE_CTRL_ADDR;
+	void __iomem *bridge_base_addr = port->bridge_addr;
+	void __iomem *ctrl_base_addr = port->bridge_addr + MC_PCIE_CTRL_ADDR_OFFSET;
 	u32 events = 0;
 
 	events |= pcie_events(ctrl_base_addr + PCIE_EVENT_INT);
@@ -721,7 +356,7 @@ static u32 get_events(struct mc_pcie *port)
 
 static irqreturn_t mc_event_handler(int irq, void *dev_id)
 {
-	struct mc_pcie *port = dev_id;
+	struct plda_pcie *port = dev_id;
 	struct device *dev = port->dev;
 	struct irq_data *data;
 
@@ -735,31 +370,14 @@ static irqreturn_t mc_event_handler(int irq, void *dev_id)
 	return IRQ_HANDLED;
 }
 
-static void mc_handle_event(struct irq_desc *desc)
-{
-	struct mc_pcie *port = irq_desc_get_handler_data(desc);
-	unsigned long events;
-	u32 bit;
-	struct irq_chip *chip = irq_desc_get_chip(desc);
-
-	chained_irq_enter(chip, desc);
-
-	events = get_events(port);
-
-	for_each_set_bit(bit, &events, NUM_EVENTS)
-		generic_handle_domain_irq(port->event_domain, bit);
-
-	chained_irq_exit(chip, desc);
-}
-
 static void mc_ack_event_irq(struct irq_data *data)
 {
-	struct mc_pcie *port = irq_data_get_irq_chip_data(data);
+	struct plda_pcie *port = irq_data_get_irq_chip_data(data);
 	u32 event = data->hwirq;
 	void __iomem *addr;
 	u32 mask;
 
-	addr = port->axi_base_addr + event_descs[event].base +
+	addr = port->bridge_addr + event_descs[event].base +
 		event_descs[event].offset;
 	mask = event_descs[event].mask;
 	mask |= event_descs[event].enb_mask;
@@ -769,13 +387,13 @@ static void mc_ack_event_irq(struct irq_data *data)
 
 static void mc_mask_event_irq(struct irq_data *data)
 {
-	struct mc_pcie *port = irq_data_get_irq_chip_data(data);
+	struct plda_pcie *port = irq_data_get_irq_chip_data(data);
 	u32 event = data->hwirq;
 	void __iomem *addr;
 	u32 mask;
 	u32 val;
 
-	addr = port->axi_base_addr + event_descs[event].base +
+	addr = port->bridge_addr + event_descs[event].base +
 		event_descs[event].mask_offset;
 	mask = event_descs[event].mask;
 	if (event_descs[event].enb_mask) {
@@ -799,13 +417,13 @@ static void mc_mask_event_irq(struct irq_data *data)
 
 static void mc_unmask_event_irq(struct irq_data *data)
 {
-	struct mc_pcie *port = irq_data_get_irq_chip_data(data);
+	struct plda_pcie *port = irq_data_get_irq_chip_data(data);
 	u32 event = data->hwirq;
 	void __iomem *addr;
 	u32 mask;
 	u32 val;
 
-	addr = port->axi_base_addr + event_descs[event].base +
+	addr = port->bridge_addr + event_descs[event].base +
 		event_descs[event].mask_offset;
 	mask = event_descs[event].mask;
 
@@ -887,123 +505,42 @@ static int mc_pcie_init_clks(struct device *dev)
 	return 0;
 }
 
-static int mc_pcie_init_irq_domains(struct mc_pcie *port)
-{
-	struct device *dev = port->dev;
-	struct device_node *node = dev->of_node;
-	struct device_node *pcie_intc_node;
-
-	/* Setup INTx */
-	pcie_intc_node = of_get_next_child(node, NULL);
-	if (!pcie_intc_node) {
-		dev_err(dev, "failed to find PCIe Intc node\n");
-		return -EINVAL;
-	}
-
-	port->event_domain = irq_domain_add_linear(pcie_intc_node, NUM_EVENTS,
-						   &event_domain_ops, port);
-	if (!port->event_domain) {
-		dev_err(dev, "failed to get event domain\n");
-		of_node_put(pcie_intc_node);
-		return -ENOMEM;
-	}
-
-	irq_domain_update_bus_token(port->event_domain, DOMAIN_BUS_NEXUS);
-
-	port->intx_domain = irq_domain_add_linear(pcie_intc_node, PCI_NUM_INTX,
-						  &intx_domain_ops, port);
-	if (!port->intx_domain) {
-		dev_err(dev, "failed to get an INTx IRQ domain\n");
-		of_node_put(pcie_intc_node);
-		return -ENOMEM;
-	}
-
-	irq_domain_update_bus_token(port->intx_domain, DOMAIN_BUS_WIRED);
-
-	of_node_put(pcie_intc_node);
-	raw_spin_lock_init(&port->lock);
-
-	return mc_allocate_msi_domains(port);
-}
-
-static void mc_pcie_setup_window(void __iomem *bridge_base_addr, u32 index,
-				 phys_addr_t axi_addr, phys_addr_t pci_addr,
-				 size_t size)
-{
-	u32 atr_sz = ilog2(size) - 1;
-	u32 val;
-
-	if (index == 0)
-		val = PCIE_CONFIG_INTERFACE;
-	else
-		val = PCIE_TX_RX_INTERFACE;
-
-	writel(val, bridge_base_addr + (index * ATR_ENTRY_SIZE) +
-	       ATR0_AXI4_SLV0_TRSL_PARAM);
-
-	val = lower_32_bits(axi_addr) | (atr_sz << ATR_SIZE_SHIFT) |
-			    ATR_IMPL_ENABLE;
-	writel(val, bridge_base_addr + (index * ATR_ENTRY_SIZE) +
-	       ATR0_AXI4_SLV0_SRCADDR_PARAM);
-
-	val = upper_32_bits(axi_addr);
-	writel(val, bridge_base_addr + (index * ATR_ENTRY_SIZE) +
-	       ATR0_AXI4_SLV0_SRC_ADDR);
-
-	val = lower_32_bits(pci_addr);
-	writel(val, bridge_base_addr + (index * ATR_ENTRY_SIZE) +
-	       ATR0_AXI4_SLV0_TRSL_ADDR_LSB);
-
-	val = upper_32_bits(pci_addr);
-	writel(val, bridge_base_addr + (index * ATR_ENTRY_SIZE) +
-	       ATR0_AXI4_SLV0_TRSL_ADDR_UDW);
-
-	val = readl(bridge_base_addr + ATR0_PCIE_WIN0_SRCADDR_PARAM);
-	val |= (ATR0_PCIE_ATR_SIZE << ATR0_PCIE_ATR_SIZE_SHIFT);
-	writel(val, bridge_base_addr + ATR0_PCIE_WIN0_SRCADDR_PARAM);
-	writel(0, bridge_base_addr + ATR0_PCIE_WIN0_SRC_ADDR);
-}
+static const struct plda_pcie_ops pcie_ops = {
+	.get_events = get_events,
+};
 
-static int mc_pcie_setup_windows(struct platform_device *pdev,
-				 struct mc_pcie *port)
+static int mc_request_evt_irq(struct plda_pcie *plda, int event_irq,
+			      int evt)
 {
-	void __iomem *bridge_base_addr =
-		port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
-	struct pci_host_bridge *bridge = platform_get_drvdata(pdev);
-	struct resource_entry *entry;
-	u64 pci_addr;
-	u32 index = 1;
-
-	resource_list_for_each_entry(entry, &bridge->windows) {
-		if (resource_type(entry->res) == IORESOURCE_MEM) {
-			pci_addr = entry->res->start - entry->offset;
-			mc_pcie_setup_window(bridge_base_addr, index,
-					     entry->res->start, pci_addr,
-					     resource_size(entry->res));
-			index++;
-		}
-	}
+	struct device *dev = plda->dev;
 
-	return 0;
+	return devm_request_irq(dev, event_irq, mc_event_handler,
+				0, event_cause[evt].sym, plda);
 }
 
 static int mc_platform_init(struct pci_config_window *cfg)
 {
 	struct device *dev = cfg->parent;
 	struct platform_device *pdev = to_platform_device(dev);
+	struct pci_host_bridge *bridge = platform_get_drvdata(pdev);
 	struct mc_pcie *port;
+	struct plda_pcie *plda;
 	void __iomem *bridge_base_addr;
 	void __iomem *ctrl_base_addr;
+	struct plda_evt evt = {&event_domain_ops, mc_request_evt_irq,
+			       EVENT_LOCAL_PM_MSI_INT_INTX,
+			       EVENT_LOCAL_PM_MSI_INT_MSI};
 	int ret;
-	int irq;
-	int i, intx_irq, msi_irq, event_irq;
 	u32 val;
-	int err;
 
 	port = devm_kzalloc(dev, sizeof(*port), GFP_KERNEL);
 	if (!port)
 		return -ENOMEM;
-	port->dev = dev;
+
+	plda = &port->plda;
+	plda->dev = dev;
+	plda->ops = &pcie_ops;
+	plda->num_events = NUM_EVENTS;
 
 	ret = mc_pcie_init_clks(dev);
 	if (ret) {
@@ -1017,57 +554,12 @@ static int mc_platform_init(struct pci_config_window *cfg)
 
 	bridge_base_addr = port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
 	ctrl_base_addr = port->axi_base_addr + MC_PCIE_CTRL_ADDR;
+	plda->bridge_addr = bridge_base_addr;
 
-	port->msi.vector_phy = MSI_ADDR;
-	port->msi.num_vectors = MC_NUM_MSI_IRQS;
-	ret = mc_pcie_init_irq_domains(port);
-	if (ret) {
-		dev_err(dev, "failed creating IRQ domains\n");
-		return ret;
-	}
-
-	irq = platform_get_irq(pdev, 0);
-	if (irq < 0)
-		return -ENODEV;
-
-	for (i = 0; i < NUM_EVENTS; i++) {
-		event_irq = irq_create_mapping(port->event_domain, i);
-		if (!event_irq) {
-			dev_err(dev, "failed to map hwirq %d\n", i);
-			return -ENXIO;
-		}
-
-		err = devm_request_irq(dev, event_irq, mc_event_handler,
-				       0, event_cause[i].sym, port);
-		if (err) {
-			dev_err(dev, "failed to request IRQ %d\n", event_irq);
-			return err;
-		}
-	}
-
-	intx_irq = irq_create_mapping(port->event_domain,
-				      EVENT_LOCAL_PM_MSI_INT_INTX);
-	if (!intx_irq) {
-		dev_err(dev, "failed to map INTx interrupt\n");
-		return -ENXIO;
-	}
-
-	/* Plug the INTx chained handler */
-	irq_set_chained_handler_and_data(intx_irq, mc_handle_intx, port);
-
-	msi_irq = irq_create_mapping(port->event_domain,
-				     EVENT_LOCAL_PM_MSI_INT_MSI);
-	if (!msi_irq)
-		return -ENXIO;
-
-	/* Plug the MSI chained handler */
-	irq_set_chained_handler_and_data(msi_irq, mc_handle_msi, port);
-
-	/* Plug the main event chained handler */
-	irq_set_chained_handler_and_data(irq, mc_handle_event, port);
-
+	plda_set_default_msi(&plda->msi);
+	ret = plda_pcie_init_irq(plda, pdev, &evt);
+	if (ret)
+		return ret;
+
 	/* Hardware doesn't setup MSI by default */
-	mc_pcie_enable_msi(port, cfg->win);
+	plda_pcie_enable_msi(plda);
 
 	val = readl_relaxed(bridge_base_addr + IMASK_LOCAL);
 	val |= PM_MSI_INT_INTX_MASK;
@@ -1100,10 +592,10 @@ static int mc_platform_init(struct pci_config_window *cfg)
 	writel_relaxed(GENMASK(31, 0), bridge_base_addr + ISTATUS_HOST);
 
 	/* Configure Address Translation Table 0 for PCIe config space */
-	mc_pcie_setup_window(bridge_base_addr, 0, cfg->res.start & 0xffffffff,
-			     cfg->res.start, resource_size(&cfg->res));
+	plda_pcie_setup_window(bridge_base_addr, 0, cfg->res.start & 0xffffffff,
+			       cfg->res.start, resource_size(&cfg->res));
 
-	return mc_pcie_setup_windows(pdev, port);
+	return plda_pcie_setup_iomems(&port->plda, bridge);
 }
 
 static const struct pci_ecam_ops mc_ecam_ops = {
-- 
2.17.1


-	.free	= mc_irq_msi_domain_free,
-};
-
-static struct irq_chip mc_msi_irq_chip = {
-	.name = "Microchip PCIe MSI",
-	.irq_ack = irq_chip_ack_parent,
-	.irq_mask = pci_msi_mask_irq,
-	.irq_unmask = pci_msi_unmask_irq,
-};
-
-static struct msi_domain_info mc_msi_domain_info = {
-	.flags = (MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
-		  MSI_FLAG_PCI_MSIX),
-	.chip = &mc_msi_irq_chip,
-};
-
-static int mc_allocate_msi_domains(struct mc_pcie *port)
-{
-	struct device *dev = port->dev;
-	struct fwnode_handle *fwnode = of_node_to_fwnode(dev->of_node);
-	struct mc_msi *msi = &port->msi;
-
-	mutex_init(&port->msi.lock);
-
-	msi->dev_domain = irq_domain_add_linear(NULL, msi->num_vectors,
-						&msi_domain_ops, port);
-	if (!msi->dev_domain) {
-		dev_err(dev, "failed to create IRQ domain\n");
-		return -ENOMEM;
-	}
-
-	msi->msi_domain = pci_msi_create_irq_domain(fwnode, &mc_msi_domain_info,
-						    msi->dev_domain);
-	if (!msi->msi_domain) {
-		dev_err(dev, "failed to create MSI domain\n");
-		irq_domain_remove(msi->dev_domain);
-		return -ENOMEM;
-	}
-
-	return 0;
-}
-
-static void mc_handle_intx(struct irq_desc *desc)
-{
-	struct mc_pcie *port = irq_desc_get_handler_data(desc);
-	struct irq_chip *chip = irq_desc_get_chip(desc);
-	struct device *dev = port->dev;
-	void __iomem *bridge_base_addr =
-		port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
-	unsigned long status;
-	u32 bit;
-	int ret;
-
-	chained_irq_enter(chip, desc);
-
-	status = readl_relaxed(bridge_base_addr + ISTATUS_LOCAL);
-	if (status & PM_MSI_INT_INTX_MASK) {
-		status &= PM_MSI_INT_INTX_MASK;
-		status >>= PM_MSI_INT_INTX_SHIFT;
-		for_each_set_bit(bit, &status, PCI_NUM_INTX) {
-			ret = generic_handle_domain_irq(port->intx_domain, bit);
-			if (ret)
-				dev_err_ratelimited(dev, "bad INTx IRQ %d\n",
-						    bit);
-		}
-	}
-
-	chained_irq_exit(chip, desc);
-}
-
-static void mc_ack_intx_irq(struct irq_data *data)
-{
-	struct mc_pcie *port = irq_data_get_irq_chip_data(data);
-	void __iomem *bridge_base_addr =
-		port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
-	u32 mask = BIT(data->hwirq + PM_MSI_INT_INTX_SHIFT);
-
-	writel_relaxed(mask, bridge_base_addr + ISTATUS_LOCAL);
-}
-
-static void mc_mask_intx_irq(struct irq_data *data)
-{
-	struct mc_pcie *port = irq_data_get_irq_chip_data(data);
-	void __iomem *bridge_base_addr =
-		port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
-	unsigned long flags;
-	u32 mask = BIT(data->hwirq + PM_MSI_INT_INTX_SHIFT);
-	u32 val;
-
-	raw_spin_lock_irqsave(&port->lock, flags);
-	val = readl_relaxed(bridge_base_addr + IMASK_LOCAL);
-	val &= ~mask;
-	writel_relaxed(val, bridge_base_addr + IMASK_LOCAL);
-	raw_spin_unlock_irqrestore(&port->lock, flags);
-}
-
-static void mc_unmask_intx_irq(struct irq_data *data)
-{
-	struct mc_pcie *port = irq_data_get_irq_chip_data(data);
-	void __iomem *bridge_base_addr =
-		port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
-	unsigned long flags;
-	u32 mask = BIT(data->hwirq + PM_MSI_INT_INTX_SHIFT);
-	u32 val;
-
-	raw_spin_lock_irqsave(&port->lock, flags);
-	val = readl_relaxed(bridge_base_addr + IMASK_LOCAL);
-	val |= mask;
-	writel_relaxed(val, bridge_base_addr + IMASK_LOCAL);
-	raw_spin_unlock_irqrestore(&port->lock, flags);
-}
-
-static struct irq_chip mc_intx_irq_chip = {
-	.name = "Microchip PCIe INTx",
-	.irq_ack = mc_ack_intx_irq,
-	.irq_mask = mc_mask_intx_irq,
-	.irq_unmask = mc_unmask_intx_irq,
-};
-
-static int mc_pcie_intx_map(struct irq_domain *domain, unsigned int irq,
-			    irq_hw_number_t hwirq)
-{
-	irq_set_chip_and_handler(irq, &mc_intx_irq_chip, handle_level_irq);
-	irq_set_chip_data(irq, domain->host_data);
-
-	return 0;
-}
-
-static const struct irq_domain_ops intx_domain_ops = {
-	.map = mc_pcie_intx_map,
-};
-
 static inline u32 reg_to_event(u32 reg, struct event_map field)
 {
 	return (reg & field.reg_mask) ? BIT(field.event_bit) : 0;
@@ -704,11 +340,10 @@ static u32 local_events(void __iomem *addr)
 	return val;
 }
 
-static u32 get_events(struct mc_pcie *port)
+static u32 get_events(struct plda_pcie *port)
 {
-	void __iomem *bridge_base_addr =
-		port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
-	void __iomem *ctrl_base_addr = port->axi_base_addr + MC_PCIE_CTRL_ADDR;
+	void __iomem *bridge_base_addr = port->bridge_addr;
+	void __iomem *ctrl_base_addr = port->bridge_addr + MC_PCIE_CTRL_ADDR_OFFSET;
 	u32 events = 0;
 
 	events |= pcie_events(ctrl_base_addr + PCIE_EVENT_INT);
@@ -721,7 +356,7 @@ static u32 get_events(struct mc_pcie *port)
 
 static irqreturn_t mc_event_handler(int irq, void *dev_id)
 {
-	struct mc_pcie *port = dev_id;
+	struct plda_pcie *port = dev_id;
 	struct device *dev = port->dev;
 	struct irq_data *data;
 
@@ -735,31 +370,14 @@ static irqreturn_t mc_event_handler(int irq, void *dev_id)
 	return IRQ_HANDLED;
 }
 
-static void mc_handle_event(struct irq_desc *desc)
-{
-	struct mc_pcie *port = irq_desc_get_handler_data(desc);
-	unsigned long events;
-	u32 bit;
-	struct irq_chip *chip = irq_desc_get_chip(desc);
-
-	chained_irq_enter(chip, desc);
-
-	events = get_events(port);
-
-	for_each_set_bit(bit, &events, NUM_EVENTS)
-		generic_handle_domain_irq(port->event_domain, bit);
-
-	chained_irq_exit(chip, desc);
-}
-
 static void mc_ack_event_irq(struct irq_data *data)
 {
-	struct mc_pcie *port = irq_data_get_irq_chip_data(data);
+	struct plda_pcie *port = irq_data_get_irq_chip_data(data);
 	u32 event = data->hwirq;
 	void __iomem *addr;
 	u32 mask;
 
-	addr = port->axi_base_addr + event_descs[event].base +
+	addr = port->bridge_addr + event_descs[event].base +
 		event_descs[event].offset;
 	mask = event_descs[event].mask;
 	mask |= event_descs[event].enb_mask;
@@ -769,13 +387,13 @@ static void mc_ack_event_irq(struct irq_data *data)
 
 static void mc_mask_event_irq(struct irq_data *data)
 {
-	struct mc_pcie *port = irq_data_get_irq_chip_data(data);
+	struct plda_pcie *port = irq_data_get_irq_chip_data(data);
 	u32 event = data->hwirq;
 	void __iomem *addr;
 	u32 mask;
 	u32 val;
 
-	addr = port->axi_base_addr + event_descs[event].base +
+	addr = port->bridge_addr + event_descs[event].base +
 		event_descs[event].mask_offset;
 	mask = event_descs[event].mask;
 	if (event_descs[event].enb_mask) {
@@ -799,13 +417,13 @@ static void mc_mask_event_irq(struct irq_data *data)
 
 static void mc_unmask_event_irq(struct irq_data *data)
 {
-	struct mc_pcie *port = irq_data_get_irq_chip_data(data);
+	struct plda_pcie *port = irq_data_get_irq_chip_data(data);
 	u32 event = data->hwirq;
 	void __iomem *addr;
 	u32 mask;
 	u32 val;
 
-	addr = port->axi_base_addr + event_descs[event].base +
+	addr = port->bridge_addr + event_descs[event].base +
 		event_descs[event].mask_offset;
 	mask = event_descs[event].mask;
 
@@ -887,123 +505,42 @@ static int mc_pcie_init_clks(struct device *dev)
 	return 0;
 }
 
-static int mc_pcie_init_irq_domains(struct mc_pcie *port)
-{
-	struct device *dev = port->dev;
-	struct device_node *node = dev->of_node;
-	struct device_node *pcie_intc_node;
-
-	/* Setup INTx */
-	pcie_intc_node = of_get_next_child(node, NULL);
-	if (!pcie_intc_node) {
-		dev_err(dev, "failed to find PCIe Intc node\n");
-		return -EINVAL;
-	}
-
-	port->event_domain = irq_domain_add_linear(pcie_intc_node, NUM_EVENTS,
-						   &event_domain_ops, port);
-	if (!port->event_domain) {
-		dev_err(dev, "failed to get event domain\n");
-		of_node_put(pcie_intc_node);
-		return -ENOMEM;
-	}
-
-	irq_domain_update_bus_token(port->event_domain, DOMAIN_BUS_NEXUS);
-
-	port->intx_domain = irq_domain_add_linear(pcie_intc_node, PCI_NUM_INTX,
-						  &intx_domain_ops, port);
-	if (!port->intx_domain) {
-		dev_err(dev, "failed to get an INTx IRQ domain\n");
-		of_node_put(pcie_intc_node);
-		return -ENOMEM;
-	}
-
-	irq_domain_update_bus_token(port->intx_domain, DOMAIN_BUS_WIRED);
-
-	of_node_put(pcie_intc_node);
-	raw_spin_lock_init(&port->lock);
-
-	return mc_allocate_msi_domains(port);
-}
-
-static void mc_pcie_setup_window(void __iomem *bridge_base_addr, u32 index,
-				 phys_addr_t axi_addr, phys_addr_t pci_addr,
-				 size_t size)
-{
-	u32 atr_sz = ilog2(size) - 1;
-	u32 val;
-
-	if (index == 0)
-		val = PCIE_CONFIG_INTERFACE;
-	else
-		val = PCIE_TX_RX_INTERFACE;
-
-	writel(val, bridge_base_addr + (index * ATR_ENTRY_SIZE) +
-	       ATR0_AXI4_SLV0_TRSL_PARAM);
-
-	val = lower_32_bits(axi_addr) | (atr_sz << ATR_SIZE_SHIFT) |
-			    ATR_IMPL_ENABLE;
-	writel(val, bridge_base_addr + (index * ATR_ENTRY_SIZE) +
-	       ATR0_AXI4_SLV0_SRCADDR_PARAM);
-
-	val = upper_32_bits(axi_addr);
-	writel(val, bridge_base_addr + (index * ATR_ENTRY_SIZE) +
-	       ATR0_AXI4_SLV0_SRC_ADDR);
-
-	val = lower_32_bits(pci_addr);
-	writel(val, bridge_base_addr + (index * ATR_ENTRY_SIZE) +
-	       ATR0_AXI4_SLV0_TRSL_ADDR_LSB);
-
-	val = upper_32_bits(pci_addr);
-	writel(val, bridge_base_addr + (index * ATR_ENTRY_SIZE) +
-	       ATR0_AXI4_SLV0_TRSL_ADDR_UDW);
-
-	val = readl(bridge_base_addr + ATR0_PCIE_WIN0_SRCADDR_PARAM);
-	val |= (ATR0_PCIE_ATR_SIZE << ATR0_PCIE_ATR_SIZE_SHIFT);
-	writel(val, bridge_base_addr + ATR0_PCIE_WIN0_SRCADDR_PARAM);
-	writel(0, bridge_base_addr + ATR0_PCIE_WIN0_SRC_ADDR);
-}
+static const struct plda_pcie_ops pcie_ops = {
+	.get_events = get_events,
+};
 
-static int mc_pcie_setup_windows(struct platform_device *pdev,
-				 struct mc_pcie *port)
+static int mc_request_evt_irq(struct plda_pcie *plda, int event_irq,
+			      int evt)
 {
-	void __iomem *bridge_base_addr =
-		port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
-	struct pci_host_bridge *bridge = platform_get_drvdata(pdev);
-	struct resource_entry *entry;
-	u64 pci_addr;
-	u32 index = 1;
-
-	resource_list_for_each_entry(entry, &bridge->windows) {
-		if (resource_type(entry->res) == IORESOURCE_MEM) {
-			pci_addr = entry->res->start - entry->offset;
-			mc_pcie_setup_window(bridge_base_addr, index,
-					     entry->res->start, pci_addr,
-					     resource_size(entry->res));
-			index++;
-		}
-	}
+	struct device *dev = plda->dev;
 
-	return 0;
+	return devm_request_irq(dev, event_irq, mc_event_handler,
+				0, event_cause[evt].sym, plda);
 }
 
 static int mc_platform_init(struct pci_config_window *cfg)
 {
 	struct device *dev = cfg->parent;
 	struct platform_device *pdev = to_platform_device(dev);
+	struct pci_host_bridge *bridge = platform_get_drvdata(pdev);
 	struct mc_pcie *port;
+	struct plda_pcie *plda;
 	void __iomem *bridge_base_addr;
 	void __iomem *ctrl_base_addr;
+	struct plda_evt evt = {&event_domain_ops, mc_request_evt_irq,
+			       EVENT_LOCAL_PM_MSI_INT_INTX,
+			       EVENT_LOCAL_PM_MSI_INT_MSI};
 	int ret;
-	int irq;
-	int i, intx_irq, msi_irq, event_irq;
 	u32 val;
-	int err;
 
 	port = devm_kzalloc(dev, sizeof(*port), GFP_KERNEL);
 	if (!port)
 		return -ENOMEM;
-	port->dev = dev;
+
+	plda = &port->plda;
+	plda->dev = dev;
+	plda->ops = &pcie_ops;
+	plda->num_events = NUM_EVENTS;
 
 	ret = mc_pcie_init_clks(dev);
 	if (ret) {
@@ -1017,57 +554,12 @@ static int mc_platform_init(struct pci_config_window *cfg)
 
 	bridge_base_addr = port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
 	ctrl_base_addr = port->axi_base_addr + MC_PCIE_CTRL_ADDR;
+	plda->bridge_addr = bridge_base_addr;
 
-	port->msi.vector_phy = MSI_ADDR;
-	port->msi.num_vectors = MC_NUM_MSI_IRQS;
-	ret = mc_pcie_init_irq_domains(port);
-	if (ret) {
-		dev_err(dev, "failed creating IRQ domains\n");
-		return ret;
-	}
-
-	irq = platform_get_irq(pdev, 0);
-	if (irq < 0)
-		return -ENODEV;
-
-	for (i = 0; i < NUM_EVENTS; i++) {
-		event_irq = irq_create_mapping(port->event_domain, i);
-		if (!event_irq) {
-			dev_err(dev, "failed to map hwirq %d\n", i);
-			return -ENXIO;
-		}
-
-		err = devm_request_irq(dev, event_irq, mc_event_handler,
-				       0, event_cause[i].sym, port);
-		if (err) {
-			dev_err(dev, "failed to request IRQ %d\n", event_irq);
-			return err;
-		}
-	}
-
-	intx_irq = irq_create_mapping(port->event_domain,
-				      EVENT_LOCAL_PM_MSI_INT_INTX);
-	if (!intx_irq) {
-		dev_err(dev, "failed to map INTx interrupt\n");
-		return -ENXIO;
-	}
-
-	/* Plug the INTx chained handler */
-	irq_set_chained_handler_and_data(intx_irq, mc_handle_intx, port);
-
-	msi_irq = irq_create_mapping(port->event_domain,
-				     EVENT_LOCAL_PM_MSI_INT_MSI);
-	if (!msi_irq)
-		return -ENXIO;
-
-	/* Plug the MSI chained handler */
-	irq_set_chained_handler_and_data(msi_irq, mc_handle_msi, port);
-
-	/* Plug the main event chained handler */
-	irq_set_chained_handler_and_data(irq, mc_handle_event, port);
-
+	plda_set_default_msi(&plda->msi);
+	ret = plda_pcie_init_irq(plda, pdev, &evt);
+	if (ret)
+		return ret;
+
 	/* Hardware doesn't setup MSI by default */
-	mc_pcie_enable_msi(port, cfg->win);
+	plda_pcie_enable_msi(plda);
 
 	val = readl_relaxed(bridge_base_addr + IMASK_LOCAL);
 	val |= PM_MSI_INT_INTX_MASK;
@@ -1100,10 +592,10 @@ static int mc_platform_init(struct pci_config_window *cfg)
 	writel_relaxed(GENMASK(31, 0), bridge_base_addr + ISTATUS_HOST);
 
 	/* Configure Address Translation Table 0 for PCIe config space */
-	mc_pcie_setup_window(bridge_base_addr, 0, cfg->res.start & 0xffffffff,
-			     cfg->res.start, resource_size(&cfg->res));
+	plda_pcie_setup_window(bridge_base_addr, 0, cfg->res.start & 0xffffffff,
+			       cfg->res.start, resource_size(&cfg->res));
 
-	return mc_pcie_setup_windows(pdev, port);
+	return plda_pcie_setup_iomems(&port->plda, bridge);
 }
 
 static const struct pci_ecam_ops mc_ecam_ops = {
-- 
2.17.1
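
For readers following the refactoring, here is a minimal sketch of the
contract a platform now has with the PLDA core: provide a get_events
callback through plda_pcie_ops, and describe the event wiring with a
plda_evt passed to plda_pcie_init_irq(). The plda_* symbols are the ones
introduced by this series; everything prefixed my_ is hypothetical and
only illustrates what the diff above exercises.

    static irqreturn_t my_event_handler(int irq, void *dev_id)
    {
    	/* Platform-specific handling of one demultiplexed event. */
    	return IRQ_HANDLED;
    }

    /* Mirrors mc_request_evt_irq() above: called once per event IRQ. */
    static int my_request_evt_irq(struct plda_pcie *plda, int event_irq,
    			      int evt)
    {
    	return devm_request_irq(plda->dev, event_irq, my_event_handler,
    				0, "my-pcie-event", plda);
    }

    /* Collapse the platform's status registers into a bitmap of events. */
    static u32 my_get_events(struct plda_pcie *port)
    {
    	return readl_relaxed(port->bridge_addr + ISTATUS_LOCAL);
    }

    static const struct plda_pcie_ops my_pcie_ops = {
    	.get_events = my_get_events,
    };

    static int my_setup_irqs(struct plda_pcie *plda,
    			 struct platform_device *pdev)
    {
    	/* NULL domain ops and request hook select the PLDA defaults. */
    	struct plda_evt evt = {NULL, my_request_evt_irq,
    			       EVENT_LOCAL_PM_MSI_INT_INTX,
    			       EVENT_LOCAL_PM_MSI_INT_MSI};

    	plda->ops = &my_pcie_ops;
    	plda_set_default_msi(&plda->msi);
    	return plda_pcie_init_irq(plda, pdev, &evt);
    }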



* [PATCH v1 5/9] dt-bindings: PLDA: Add PLDA XpressRICH PCIe host controller
  2023-07-19 10:20 ` Minda Chen
@ 2023-07-19 10:20   ` Minda Chen
  -1 siblings, 0 replies; 92+ messages in thread
From: Minda Chen @ 2023-07-19 10:20 UTC (permalink / raw)
  To: Daire McNamara, Conor Dooley, Rob Herring, Krzysztof Kozlowski,
	Bjorn Helgaas, Lorenzo Pieralisi, Krzysztof Wilczyński,
	Emil Renner Berthing
  Cc: devicetree, linux-kernel, linux-riscv, linux-pci, Paul Walmsley,
	Palmer Dabbelt, Albert Ou, Philipp Zabel, Mason Huo, Leyfoon Tan,
	Kevin Xie, Minda Chen

Add dt-bindings for the PLDA XpressRICH PCIe host controller. Both
the Microchip PolarFire SoC and the StarFive JH7110 SoC use the
PLDA XpressRICH PCIe host controller IP.

Signed-off-by: Minda Chen <minda.chen@starfivetech.com>
Reviewed-by: Hal Feng <hal.feng@starfivetech.com>
---
 .../pci/plda,xpressrich-pcie-host.yaml        | 66 +++++++++++++++++++
 1 file changed, 66 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/pci/plda,xpressrich-pcie-host.yaml

diff --git a/Documentation/devicetree/bindings/pci/plda,xpressrich-pcie-host.yaml b/Documentation/devicetree/bindings/pci/plda,xpressrich-pcie-host.yaml
new file mode 100644
index 000000000000..10a10862a078
--- /dev/null
+++ b/Documentation/devicetree/bindings/pci/plda,xpressrich-pcie-host.yaml
@@ -0,0 +1,66 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/pci/plda,xpressrich-pcie-host.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: PLDA XpressRICH PCIe host controller
+
+maintainers:
+  - Daire McNamara <daire.mcnamara@microchip.com>
+  - Minda Chen <minda.chen@starfivetech.com>
+
+allOf:
+  - $ref: /schemas/pci/pci-bus.yaml#
+  - $ref: plda,xpressrich-pcie-common.yaml#
+  - $ref: /schemas/interrupt-controller/msi-controller.yaml#
+
+properties:
+  compatible:
+    const: plda,xpressrich-pcie-host
+
+required:
+  - compatible
+  - reg
+  - reg-names
+  - "#interrupt-cells"
+  - interrupts
+  - interrupt-map-mask
+  - interrupt-map
+  - msi-controller
+
+unevaluatedProperties: false
+
+examples:
+  - |
+    soc {
+            #address-cells = <2>;
+            #size-cells = <2>;
+            pcie0: pcie@12000000 {
+                    compatible = "plda,xpressrich-pcie-host";
+                    reg = <0x0 0x12000000 0x0 0x00010000>,
+                          <0x0 0x43000000 0x0 0x00010000>;
+                    reg-names = "host", "cfg";
+                    ranges = <0x03000000 0x0 0x88000000 0x0 0x88000000 0x0 0x04000000>;
+                    device_type = "pci";
+                    #address-cells = <3>;
+                    #size-cells = <2>;
+                    #interrupt-cells = <1>;
+                    interrupts = <131>;
+                    interrupt-map-mask = <0x0 0x0 0x0 0x7>;
+                    interrupt-map = <0 0 0 1 &pcie_intc0 0>,
+                                    <0 0 0 2 &pcie_intc0 1>,
+                                    <0 0 0 3 &pcie_intc0 2>,
+                                    <0 0 0 4 &pcie_intc0 3>;
+                    interrupt-parent = <&plic0>;
+                    msi-parent = <&pcie0>;
+                    msi-controller;
+                    bus-range = <0x00 0x7f>;
+
+                    pcie_intc0: interrupt-controller {
+                        #address-cells = <0>;
+                        #interrupt-cells = <1>;
+                        interrupt-controller;
+                    };
+            };
+    };
-- 
2.17.1



* [PATCH v1 6/9] PCI: PLDA: Add host controller platform driver
  2023-07-19 10:20 ` Minda Chen
@ 2023-07-19 10:20   ` Minda Chen
  -1 siblings, 0 replies; 92+ messages in thread
From: Minda Chen @ 2023-07-19 10:20 UTC (permalink / raw)
  To: Daire McNamara, Conor Dooley, Rob Herring, Krzysztof Kozlowski,
	Bjorn Helgaas, Lorenzo Pieralisi, Krzysztof Wilczyński,
	Emil Renner Berthing
  Cc: devicetree, linux-kernel, linux-riscv, linux-pci, Paul Walmsley,
	Palmer Dabbelt, Albert Ou, Philipp Zabel, Mason Huo, Leyfoon Tan,
	Kevin Xie, Minda Chen

Add a platform driver for the PLDA XpressRICH PCIe host
controller IP, implemented along the same lines as the
DesignWare PCIe platform driver.

Signed-off-by: Minda Chen <minda.chen@starfivetech.com>
---
 drivers/pci/controller/plda/Kconfig          |   8 ++
 drivers/pci/controller/plda/Makefile         |   1 +
 drivers/pci/controller/plda/pcie-plda-host.c | 111 +++++++++++++++++++
 drivers/pci/controller/plda/pcie-plda-plat.c |  64 +++++++++++
 drivers/pci/controller/plda/pcie-plda.h      |   3 +
 5 files changed, 187 insertions(+)
 create mode 100644 drivers/pci/controller/plda/pcie-plda-plat.c

diff --git a/drivers/pci/controller/plda/Kconfig b/drivers/pci/controller/plda/Kconfig
index fb274976b84b..a3c790545843 100644
--- a/drivers/pci/controller/plda/Kconfig
+++ b/drivers/pci/controller/plda/Kconfig
@@ -8,6 +8,14 @@ config PCIE_PLDA_HOST
 	depends on OF && PCI_MSI
 	select IRQ_DOMAIN
 
+config PCIE_PLDA_PLAT_HOST
+	bool "PLDA PCIe platform host controller"
+	select PCIE_PLDA_HOST
+	help
+	  Say Y here if you want to support the PLDA PCIe platform controller in
+	  host mode. This PCIe controller may be embedded into many different
+	  vendors' SoCs.
+
 config PCIE_MICROCHIP_HOST
 	bool "Microchip AXI PCIe controller"
 	select PCI_HOST_COMMON
diff --git a/drivers/pci/controller/plda/Makefile b/drivers/pci/controller/plda/Makefile
index 4340ab007f44..2f16d9126535 100644
--- a/drivers/pci/controller/plda/Makefile
+++ b/drivers/pci/controller/plda/Makefile
@@ -1,3 +1,4 @@
 # SPDX-License-Identifier: GPL-2.0
 obj-$(CONFIG_PCIE_PLDA_HOST) += pcie-plda-host.o
+obj-$(CONFIG_PCIE_PLDA_PLAT_HOST) += pcie-plda-plat.o
 obj-$(CONFIG_PCIE_MICROCHIP_HOST) += pcie-microchip-host.o
diff --git a/drivers/pci/controller/plda/pcie-plda-host.c b/drivers/pci/controller/plda/pcie-plda-host.c
index ca720430721c..e4baa63f6a5b 100644
--- a/drivers/pci/controller/plda/pcie-plda-host.c
+++ b/drivers/pci/controller/plda/pcie-plda-host.c
@@ -20,6 +20,15 @@
 
 #include "pcie-plda.h"
 
+void __iomem *plda_pcie_map_bus(struct pci_bus *bus, unsigned int devfn,
+				int where)
+{
+	struct plda_pcie *pcie = bus->sysdata;
+
+	return pcie->config_base + PCIE_ECAM_OFFSET(bus->number, devfn, where);
+}
+EXPORT_SYMBOL_GPL(plda_pcie_map_bus);
+
 void plda_pcie_enable_msi(struct plda_pcie *port)
 {
 	struct plda_msi *msi = &port->msi;
@@ -552,3 +561,105 @@ int plda_pcie_setup_iomems(struct plda_pcie *port, struct pci_host_bridge *bridg
 	return 0;
 }
 EXPORT_SYMBOL_GPL(plda_pcie_setup_iomems);
+
+static void plda_pcie_irq_domain_deinit(struct plda_pcie *pcie)
+{
+	irq_set_chained_handler_and_data(pcie->irq, NULL, NULL);
+	irq_set_chained_handler_and_data(pcie->msi_irq, NULL, NULL);
+	irq_set_chained_handler_and_data(pcie->intx_irq, NULL, NULL);
+
+	irq_domain_remove(pcie->msi.msi_domain);
+	irq_domain_remove(pcie->msi.dev_domain);
+
+	irq_domain_remove(pcie->intx_domain);
+	irq_domain_remove(pcie->event_domain);
+}
+
+int plda_pcie_host_init(struct plda_pcie *pcie, struct pci_ops *ops)
+{
+	struct resource *cfg_res;
+	struct device *dev = pcie->dev;
+	int ret;
+	struct pci_host_bridge *bridge;
+	struct platform_device *pdev = to_platform_device(dev);
+	struct plda_evt evt = {NULL, NULL, EVENT_PM_MSI_INT_INTX,
+			       EVENT_PM_MSI_INT_MSI};
+
+	pcie->bridge_addr =
+		devm_platform_ioremap_resource_byname(pdev, "host");
+
+	if (IS_ERR(pcie->bridge_addr))
+		return dev_err_probe(dev, PTR_ERR(pcie->bridge_addr),
+			"Failed to map reg memory\n");
+
+	cfg_res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "cfg");
+	if (!cfg_res)
+		return dev_err_probe(dev, -ENODEV,
+			"Failed to get config memory\n");
+
+	pcie->config_base = devm_ioremap_resource(dev, cfg_res);
+	if (IS_ERR(pcie->config_base))
+		return dev_err_probe(dev, PTR_ERR(pcie->config_base),
+			"Failed to map config memory\n");
+
+	pcie->phy = devm_phy_optional_get(dev, NULL);
+	if (IS_ERR(pcie->phy))
+		return dev_err_probe(dev, PTR_ERR(pcie->phy),
+			"Failed to get pcie phy\n");
+
+	bridge = devm_pci_alloc_host_bridge(dev, 0);
+	if (!bridge)
+		return dev_err_probe(dev, -ENOMEM,
+			"Failed to alloc bridge\n");
+
+	pcie->bridge = bridge;
+
+	if (pcie->ops && pcie->ops->host_init) {
+		ret = pcie->ops->host_init(pcie);
+		if (ret)
+			return ret;
+	}
+
+	plda_pcie_setup_window(pcie->bridge_addr, 0, cfg_res->start, 0,
+			       resource_size(cfg_res));
+	plda_pcie_setup_iomems(pcie, bridge);
+	plda_set_default_msi(&pcie->msi);
+	ret = plda_pcie_init_irq(pcie, pdev, &evt);
+	if (ret)
+		goto err_host;
+
+	/* Set default bus ops */
+	bridge->ops = ops;
+	bridge->sysdata = pcie;
+
+	plda_pcie_enable_msi(pcie);
+
+	ret = pci_host_probe(bridge);
+	if (ret < 0) {
+		dev_err(dev, "Failed to probe pci host: %d\n", ret);
+		goto err_probe;
+	}
+
+	return ret;
+
+err_probe:
+	plda_pcie_irq_domain_deinit(pcie);
+err_host:
+	if (pcie->ops && pcie->ops->host_deinit)
+		pcie->ops->host_deinit(pcie);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(plda_pcie_host_init);
+
+void plda_pcie_host_deinit(struct plda_pcie *pcie)
+{
+	pci_stop_root_bus(pcie->bridge->bus);
+	pci_remove_root_bus(pcie->bridge->bus);
+
+	plda_pcie_irq_domain_deinit(pcie);
+
+	if (pcie->ops && pcie->ops->host_deinit)
+		pcie->ops->host_deinit(pcie);
+}
+EXPORT_SYMBOL_GPL(plda_pcie_host_deinit);
diff --git a/drivers/pci/controller/plda/pcie-plda-plat.c b/drivers/pci/controller/plda/pcie-plda-plat.c
new file mode 100644
index 000000000000..270aceb57c7a
--- /dev/null
+++ b/drivers/pci/controller/plda/pcie-plda-plat.c
@@ -0,0 +1,64 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * PLDA XpressRICH PCIe platform driver
+ *
+ * Authors: Minda Chen <minda.chen@starfivetech.com>
+ */
+
+#include <linux/clk.h>
+#include <linux/delay.h>
+#include <linux/gpio.h>
+#include <linux/interrupt.h>
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/of_device.h>
+#include <linux/pci.h>
+#include <linux/platform_device.h>
+#include <linux/resource.h>
+#include <linux/types.h>
+
+#include "pcie-plda.h"
+
+static struct pci_ops plda_default_ops = {
+	.map_bus	= plda_pcie_map_bus,
+	.read		= pci_generic_config_read,
+	.write		= pci_generic_config_write,
+};
+
+static int plda_plat_pcie_probe(struct platform_device *pdev)
+{
+	struct device *dev = &pdev->dev;
+	struct plda_pcie *pci;
+	int ret;
+
+	pci = devm_kzalloc(dev, sizeof(*pci), GFP_KERNEL);
+	if (!pci)
+		return -ENOMEM;
+
+	pci->dev = dev;
+
+	ret = plda_pcie_host_init(pci, &plda_default_ops);
+	if (ret) {
+		dev_err(dev, "Failed to initialize host\n");
+		return ret;
+	}
+
+	platform_set_drvdata(pdev, pci);
+
+	return ret;
+}
+
+static const struct of_device_id plda_plat_pcie_of_match[] = {
+	{ .compatible = "plda,xpressrich-pcie-host"},
+	{ /* sentinel */ }
+};
+
+static struct platform_driver plda_plat_pcie_driver = {
+	.driver = {
+		.name	= "plda-xpressrich-pcie",
+		.of_match_table = plda_plat_pcie_of_match,
+		.suppress_bind_attrs = true,
+	},
+	.probe = plda_plat_pcie_probe,
+};
+builtin_platform_driver(plda_plat_pcie_driver);
diff --git a/drivers/pci/controller/plda/pcie-plda.h b/drivers/pci/controller/plda/pcie-plda.h
index feb3a0d9ace5..8785f885ddb1 100644
--- a/drivers/pci/controller/plda/pcie-plda.h
+++ b/drivers/pci/controller/plda/pcie-plda.h
@@ -157,6 +157,7 @@ struct plda_evt {
 	int msi_evt;
 };
 
+void __iomem *plda_pcie_map_bus(struct pci_bus *bus, unsigned int devfn, int where);
 void plda_pcie_enable_msi(struct plda_pcie *port);
 void plda_pcie_setup_window(void __iomem *bridge_base_addr, u32 index,
 			    phys_addr_t axi_addr, phys_addr_t pci_addr,
@@ -164,6 +165,8 @@ void plda_pcie_setup_window(void __iomem *bridge_base_addr, u32 index,
 int plda_pcie_setup_iomems(struct plda_pcie *port, struct pci_host_bridge *host_bridge);
 int plda_pcie_init_irq(struct plda_pcie *port, struct platform_device *pdev,
 		       struct plda_evt *evt);
+int plda_pcie_host_init(struct plda_pcie *pcie, struct pci_ops *ops);
+void plda_pcie_host_deinit(struct plda_pcie *pcie);
 
 static inline void plda_set_default_msi(struct plda_msi *msi)
 {
-- 
2.17.1
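
As a usage note, a minimal sketch of how a SoC-specific driver would
sit on top of plda_pcie_host_init() (patch 8 does exactly this for the
JH7110). The my_* names are hypothetical; only the plda_* and
pci_generic_* symbols come from this patch and the PCI core.

    static struct pci_ops my_pci_ops = {
    	.map_bus	= plda_pcie_map_bus,
    	.read		= pci_generic_config_read,
    	.write		= pci_generic_config_write,
    };

    static int my_host_init(struct plda_pcie *pcie)
    {
    	/* SoC-specific clock, reset and PHY bring-up would go here. */
    	return 0;
    }

    static void my_host_deinit(struct plda_pcie *pcie)
    {
    	/* Undo whatever my_host_init() set up. */
    }

    static const struct plda_pcie_ops my_soc_ops = {
    	.host_init	= my_host_init,
    	.host_deinit	= my_host_deinit,
    };

    static int my_soc_pcie_probe(struct platform_device *pdev)
    {
    	struct plda_pcie *pci;

    	pci = devm_kzalloc(&pdev->dev, sizeof(*pci), GFP_KERNEL);
    	if (!pci)
    		return -ENOMEM;

    	pci->dev = &pdev->dev;
    	pci->ops = &my_soc_ops;
    	platform_set_drvdata(pdev, pci);

    	return plda_pcie_host_init(pci, &my_pci_ops);
    }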



* [PATCH v1 7/9] dt-bindings: PCI: Add StarFive JH7110 PCIe controller
  2023-07-19 10:20 ` Minda Chen
@ 2023-07-19 10:20   ` Minda Chen
  -1 siblings, 0 replies; 92+ messages in thread
From: Minda Chen @ 2023-07-19 10:20 UTC (permalink / raw)
  To: Daire McNamara, Conor Dooley, Rob Herring, Krzysztof Kozlowski,
	Bjorn Helgaas, Lorenzo Pieralisi, Krzysztof Wilczyński,
	Emil Renner Berthing
  Cc: devicetree, linux-kernel, linux-riscv, linux-pci, Paul Walmsley,
	Palmer Dabbelt, Albert Ou, Philipp Zabel, Mason Huo, Leyfoon Tan,
	Kevin Xie, Minda Chen

Add dt-bindings for the StarFive JH7110 SoC PCIe controller. The
JH7110 uses the PLDA XpressRICH PCIe host controller IP.

Signed-off-by: Minda Chen <minda.chen@starfivetech.com>
Reviewed-by: Hal Feng <hal.feng@starfivetech.com>
---
 .../bindings/pci/starfive,jh7110-pcie.yaml    | 138 ++++++++++++++++++
 1 file changed, 138 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/pci/starfive,jh7110-pcie.yaml

diff --git a/Documentation/devicetree/bindings/pci/starfive,jh7110-pcie.yaml b/Documentation/devicetree/bindings/pci/starfive,jh7110-pcie.yaml
new file mode 100644
index 000000000000..467286df557d
--- /dev/null
+++ b/Documentation/devicetree/bindings/pci/starfive,jh7110-pcie.yaml
@@ -0,0 +1,138 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/pci/starfive,jh7110-pcie.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: StarFive JH7110 PCIe host controller
+
+maintainers:
+  - Minda Chen <minda.chen@starfivetech.com>
+
+allOf:
+  - $ref: /schemas/pci/pci-bus.yaml#
+  - $ref: plda,xpressrich-pcie-common.yaml#
+  - $ref: /schemas/interrupt-controller/msi-controller.yaml#
+  - $ref: /schemas/gpio/gpio-consumer-common.yaml#
+
+properties:
+  compatible:
+    const: starfive,jh7110-pcie
+
+  clocks:
+    items:
+      - description: NOC bus clock
+      - description: Transport layer clock
+      - description: AXI MST0 clock
+      - description: APB clock
+
+  clock-names:
+    items:
+      - const: noc
+      - const: tl
+      - const: axi_mst0
+      - const: apb
+
+  resets:
+    items:
+      - description: AXI MST0 reset
+      - description: AXI SLAVE0 reset
+      - description: AXI SLAVE reset
+      - description: PCIE BRIDGE reset
+      - description: PCIE CORE reset
+      - description: PCIE APB reset
+
+  reset-names:
+    items:
+      - const: mst0
+      - const: slv0
+      - const: slv
+      - const: brg
+      - const: core
+      - const: apb
+
+  starfive,stg-syscon:
+    $ref: /schemas/types.yaml#/definitions/phandle-array
+    items:
+      - items:
+          - description: phandle to System Register Controller stg_syscon node.
+          - description: register0 offset of STG_SYSCONSAIF__SYSCFG register for PCIe.
+          - description: register1 offset of STG_SYSCONSAIF__SYSCFG register for PCIe.
+          - description: register2 offset of STG_SYSCONSAIF__SYSCFG register for PCIe.
+          - description: register3 offset of STG_SYSCONSAIF__SYSCFG register for PCIe.
+    description:
+      The phandle to the System Register Controller syscon node and the
+      offsets of the STG_SYSCONSAIF__SYSCFG registers used by PCIe. There
+      are 4 register offsets in total.
+
+  phys:
+    description:
+      The PHY attached to the PCIe controller.
+    maxItems: 1
+
+required:
+  - compatible
+  - reg
+  - reg-names
+  - "#interrupt-cells"
+  - interrupts
+  - interrupt-map-mask
+  - interrupt-map
+  - clocks
+  - resets
+  - starfive,stg-syscon
+  - msi-controller
+  - reset-gpios
+
+unevaluatedProperties: false
+
+examples:
+  - |
+    #include <dt-bindings/gpio/gpio.h>
+    soc {
+        #address-cells = <2>;
+        #size-cells = <2>;
+
+        pcie0: pcie@2b000000 {
+            compatible = "starfive,jh7110-pcie";
+            reg = <0x0 0x2b000000 0x0 0x1000000>,
+                  <0x9 0x40000000 0x0 0x10000000>;
+            reg-names = "host", "cfg";
+            #address-cells = <3>;
+            #size-cells = <2>;
+            #interrupt-cells = <1>;
+            device_type = "pci";
+            ranges = <0x82000000  0x0 0x30000000  0x0 0x30000000 0x0 0x08000000>,
+                     <0xc3000000  0x9 0x00000000  0x9 0x00000000 0x0 0x40000000>;
+            starfive,stg-syscon = <&stg_syscon 0xc0 0xc4 0x130 0x1b8>;
+            bus-range = <0x0 0xff>;
+            interrupt-parent = <&plic>;
+            interrupts = <56>;
+            interrupt-map-mask = <0x0 0x0 0x0 0x7>;
+            interrupt-map = <0x0 0x0 0x0 0x1 &pcie_intc0 0x1>,
+                            <0x0 0x0 0x0 0x2 &pcie_intc0 0x2>,
+                            <0x0 0x0 0x0 0x3 &pcie_intc0 0x3>,
+                            <0x0 0x0 0x0 0x4 &pcie_intc0 0x4>;
+            msi-parent = <&pcie0>;
+            msi-controller;
+            clocks = <&syscrg 86>,
+                     <&stgcrg 10>,
+                     <&stgcrg 8>,
+                     <&stgcrg 9>;
+            clock-names = "noc", "tl", "axi_mst0", "apb";
+            resets = <&stgcrg 11>,
+                     <&stgcrg 12>,
+                     <&stgcrg 13>,
+                     <&stgcrg 14>,
+                     <&stgcrg 15>,
+                     <&stgcrg 16>;
+            reset-gpios = <&gpios 26 GPIO_ACTIVE_LOW>;
+            phys = <&pciephy0>;
+
+            pcie_intc0: interrupt-controller {
+                #address-cells = <0>;
+                #interrupt-cells = <1>;
+                interrupt-controller;
+            };
+        };
+    };
-- 
2.17.1
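
For illustration, a sketch of how a driver might parse the
starfive,stg-syscon property described above. This is an assumption
about the consuming code (the actual parsing lives in patch 8), and the
assignment order of the four offsets to the struct fields is
illustrative only.

    static int my_parse_stg_syscon(struct starfive_jh7110_pcie *pcie,
    			       struct device_node *np)
    {
    	struct of_phandle_args args;
    	int ret;

    	/* One phandle followed by exactly four register offsets. */
    	ret = of_parse_phandle_with_fixed_args(np, "starfive,stg-syscon",
    					       4, 0, &args);
    	if (ret)
    		return ret;

    	pcie->reg_syscon = syscon_node_to_regmap(args.np);
    	of_node_put(args.np);
    	if (IS_ERR(pcie->reg_syscon))
    		return PTR_ERR(pcie->reg_syscon);

    	/* Offset-to-field mapping is hypothetical here. */
    	pcie->stg_arfun  = args.args[0];
    	pcie->stg_awfun  = args.args[1];
    	pcie->stg_rp_nep = args.args[2];
    	pcie->stg_lnksta = args.args[3];

    	return 0;
    }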



* [PATCH v1 8/9] PCI: PLDA: starfive: Add JH7110 PCIe controller
  2023-07-19 10:20 ` Minda Chen
@ 2023-07-19 10:20   ` Minda Chen
  -1 siblings, 0 replies; 92+ messages in thread
From: Minda Chen @ 2023-07-19 10:20 UTC (permalink / raw)
  To: Daire McNamara, Conor Dooley, Rob Herring, Krzysztof Kozlowski,
	Bjorn Helgaas, Lorenzo Pieralisi, Krzysztof Wilczyński,
	Emil Renner Berthing
  Cc: devicetree, linux-kernel, linux-riscv, linux-pci, Paul Walmsley,
	Palmer Dabbelt, Albert Ou, Philipp Zabel, Mason Huo, Leyfoon Tan,
	Kevin Xie, Minda Chen

Add the StarFive JH7110 SoC PCIe controller platform
driver.

Signed-off-by: Minda Chen <minda.chen@starfivetech.com>
Reviewed-by: Mason Huo <mason.huo@starfivetech.com>
---
 MAINTAINERS                                 |   7 +
 drivers/pci/controller/plda/Kconfig         |   8 +
 drivers/pci/controller/plda/Makefile        |   1 +
 drivers/pci/controller/plda/pcie-plda.h     |  58 ++-
 drivers/pci/controller/plda/pcie-starfive.c | 415 ++++++++++++++++++++
 5 files changed, 487 insertions(+), 2 deletions(-)
 create mode 100644 drivers/pci/controller/plda/pcie-starfive.c

diff --git a/MAINTAINERS b/MAINTAINERS
index f02618c2bdf5..b88a54a24ae5 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -20356,6 +20356,13 @@ S:	Supported
 F:	Documentation/devicetree/bindings/watchdog/starfive*
 F:	drivers/watchdog/starfive-wdt.c
 
+STARFIVE JH71x0 PCIE DRIVER
+M:	Minda Chen <minda.chen@starfivetech.com>
+L:	linux-pci@vger.kernel.org
+S:	Supported
+F:	Documentation/devicetree/bindings/pci/starfive*
+F:	drivers/pci/controller/plda/pcie-starfive.c
+
 STATIC BRANCH/CALL
 M:	Peter Zijlstra <peterz@infradead.org>
 M:	Josh Poimboeuf <jpoimboe@kernel.org>
diff --git a/drivers/pci/controller/plda/Kconfig b/drivers/pci/controller/plda/Kconfig
index a3c790545843..eaf72954da9f 100644
--- a/drivers/pci/controller/plda/Kconfig
+++ b/drivers/pci/controller/plda/Kconfig
@@ -24,4 +24,12 @@ config PCIE_MICROCHIP_HOST
 	  Say Y here if you want kernel to support the Microchip AXI PCIe
 	  Host Bridge driver.
 
+config PCIE_STARFIVE_HOST
+	tristate "StarFive PCIe host controller"
+	select PCIE_PLDA_HOST
+	help
+	  Say Y here if you want to support the StarFive PCIe controller
+	  in host mode. The StarFive PCIe controller uses the PLDA
+	  PCIe core.
+
 endmenu
diff --git a/drivers/pci/controller/plda/Makefile b/drivers/pci/controller/plda/Makefile
index 2f16d9126535..a6089a873d18 100644
--- a/drivers/pci/controller/plda/Makefile
+++ b/drivers/pci/controller/plda/Makefile
@@ -2,3 +2,4 @@
 obj-$(CONFIG_PCIE_PLDA_HOST) += pcie-plda-host.o
 obj-$(CONFIG_PCIE_PLDA_PLAT_HOST) += pcie-plda-plat.o
 obj-$(CONFIG_PCIE_MICROCHIP_HOST) += pcie-microchip-host.o
+obj-$(CONFIG_PCIE_STARFIVE_HOST) += pcie-starfive.o
diff --git a/drivers/pci/controller/plda/pcie-plda.h b/drivers/pci/controller/plda/pcie-plda.h
index 8785f885ddb1..ef0d2aa7ccea 100644
--- a/drivers/pci/controller/plda/pcie-plda.h
+++ b/drivers/pci/controller/plda/pcie-plda.h
@@ -15,13 +15,19 @@
 #define PLDA_NUM_MSI_IRQS		32
 #define NUM_MSI_IRQS_CODED		5
 
-/* PCIe Bridge Phy Regs */
+/* PCIe Bridge Regs */
+#define GEN_SETTINGS			0x80
+#define  RP_ENABLE			1
 #define PCIE_PCI_IDS_DW1		0x9c
-
+#define  IDS_CLASS_CODE_SHIFT		16
+#define PCI_MISC			0xB4
+#define  PHY_FUNCTION_DIS		BIT(15)
 /* PCIe Config space MSI capability structure */
 #define MSI_CAP_CTRL_OFFSET		0xe0
 #define  MSI_MAX_Q_AVAIL		(NUM_MSI_IRQS_CODED << 1)
 #define  MSI_Q_SIZE			(NUM_MSI_IRQS_CODED << 4)
+#define PCIE_WINROM			0xfc
+#define  PREF_MEM_WIN_64_SUPPORT		BIT(3)
 
 #define IMASK_LOCAL				0x180
 #define  DMA_END_ENGINE_0_MASK			0x00000000u
@@ -75,6 +81,8 @@
 #define ISTATUS_HOST				0x18c
 #define IMSI_ADDR				0x190
 #define ISTATUS_MSI				0x194
+#define PMSG_SUPPORT_RX				0x3F0
+#define  PMSG_LTR_SUPPORT			BIT(2)
 
 /* PCIe Master table init defines */
 #define ATR0_PCIE_WIN0_SRCADDR_PARAM		0x600u
@@ -173,4 +181,50 @@ static inline void plda_set_default_msi(struct plda_msi *msi)
 	msi->vector_phy = IMSI_ADDR;
 	msi->num_vectors = PLDA_NUM_MSI_IRQS;
 }
+
+static inline void plda_pcie_enable_root_port(struct plda_pcie *plda)
+{
+	u32 value;
+
+	value = readl_relaxed(plda->bridge_addr + GEN_SETTINGS);
+	value |= RP_ENABLE;
+	writel_relaxed(value, plda->bridge_addr + GEN_SETTINGS);
+}
+
+static inline void plda_pcie_set_standard_class(struct plda_pcie *plda)
+{
+	u32 value;
+
+	value = readl_relaxed(plda->bridge_addr + PCIE_PCI_IDS_DW1);
+	value &= 0xff;
+	value |= (PCI_CLASS_BRIDGE_PCI << IDS_CLASS_CODE_SHIFT);
+	writel_relaxed(value, plda->bridge_addr + PCIE_PCI_IDS_DW1);
+}
+
+static inline void plda_pcie_set_pref_win_64bit(struct plda_pcie *plda)
+{
+	u32 value;
+
+	value = readl_relaxed(plda->bridge_addr + PCIE_WINROM);
+	value |= PREF_MEM_WIN_64_SUPPORT;
+	writel_relaxed(value, plda->bridge_addr + PCIE_WINROM);
+}
+
+static inline void plda_pcie_disable_ltr(struct plda_pcie *plda)
+{
+	u32 value;
+
+	value = readl_relaxed(plda->bridge_addr + PMSG_SUPPORT_RX);
+	value &= ~PMSG_LTR_SUPPORT;
+	writel_relaxed(value, plda->bridge_addr + PMSG_SUPPORT_RX);
+}
+
+static inline void plda_pcie_disable_func(struct plda_pcie *plda)
+{
+	u32 value;
+
+	value = readl_relaxed(plda->bridge_addr + PCI_MISC);
+	value |= PHY_FUNCTION_DIS;
+	writel_relaxed(value, plda->bridge_addr + PCI_MISC);
+}
 #endif /* _PCIE_PLDA_H */
diff --git a/drivers/pci/controller/plda/pcie-starfive.c b/drivers/pci/controller/plda/pcie-starfive.c
new file mode 100644
index 000000000000..816aa77a311d
--- /dev/null
+++ b/drivers/pci/controller/plda/pcie-starfive.c
@@ -0,0 +1,415 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * PCIe host controller driver for StarFive JH7110 SoC.
+ *
+ * Copyright (C) 2023 StarFive Technology Co., Ltd.
+ */
+
+#include <linux/clk.h>
+#include <linux/delay.h>
+#include <linux/gpio/consumer.h>
+#include <linux/interrupt.h>
+#include <linux/mfd/syscon.h>
+#include <linux/module.h>
+#include <linux/of_address.h>
+#include <linux/of_irq.h>
+#include <linux/of_pci.h>
+#include <linux/pci.h>
+#include <linux/phy/phy.h>
+#include <linux/platform_device.h>
+#include <linux/pm_runtime.h>
+#include <linux/regmap.h>
+#include <linux/reset.h>
+#include "../../pci.h"
+
+#include "pcie-plda.h"
+
+#define DATA_LINK_ACTIVE		BIT(5)
+#define LINK_SPEED_GEN2			BIT(12)
+#define PCIE_FUNC_NUM			4
+#define PHY_FUNC_SHIFT			9
+
+/* system control */
+#define STG_SYSCON_K_RP_NEP			BIT(8)
+#define STG_SYSCON_AXI4_SLVL_ARFUNC_MASK	GENMASK(22, 8)
+#define STG_SYSCON_AXI4_SLVL_ARFUNC_SHIFT	8
+#define STG_SYSCON_AXI4_SLVL_AWFUNC_MASK	GENMASK(14, 0)
+#define STG_SYSCON_CLKREQ			BIT(22)
+#define STG_SYSCON_CKREF_SRC_SHIFT		18
+#define STG_SYSCON_CKREF_SRC_MASK		GENMASK(19, 18)
+
+struct starfive_jh7110_pcie {
+	struct plda_pcie	plda;
+	struct reset_control *resets;
+	struct clk_bulk_data *clks;
+	struct regmap *reg_syscon;
+	struct gpio_desc *power_gpio;
+	struct gpio_desc *reset_gpio;
+
+	u32 stg_arfun;
+	u32 stg_awfun;
+	u32 stg_rp_nep;
+	u32 stg_lnksta;
+
+	int num_clks;
+};
+
+/*
+ * The bridge's BAR0/1 should be hidden during enumeration to
+ * avoid sizing and resource allocation by the PCI core.
+ */
+static bool starfive_pcie_hide_rc_bar(struct pci_bus *bus, unsigned int devfn,
+				      int offset)
+{
+	if (pci_is_root_bus(bus) && !devfn &&
+	    (offset == PCI_BASE_ADDRESS_0 || offset == PCI_BASE_ADDRESS_1))
+		return true;
+
+	return false;
+}
+
+static int starfive_pcie_config_write(struct pci_bus *bus, unsigned int devfn,
+				      int where, int size, u32 value)
+{
+	if (starfive_pcie_hide_rc_bar(bus, devfn, where))
+		return PCIBIOS_BAD_REGISTER_NUMBER;
+
+	return pci_generic_config_write(bus, devfn, where, size, value);
+}
+
+static int starfive_pcie_parse_dt(struct starfive_jh7110_pcie *pcie, struct device *dev)
+{
+	unsigned int args[4];
+
+	pcie->num_clks = devm_clk_bulk_get_all(dev, &pcie->clks);
+	if (pcie->num_clks < 0)
+		return dev_err_probe(dev, pcie->num_clks,
+				     "Failed to get pcie clocks\n");
+
+	pcie->resets = devm_reset_control_array_get_exclusive(dev);
+	if (IS_ERR(pcie->resets))
+		return dev_err_probe(dev, PTR_ERR(pcie->resets),
+				     "Failed to get pcie resets\n");
+
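+	/*
+	 * The four "starfive,stg-syscon" phandle args are STG syscon
+	 * register offsets, consumed below as arfun, awfun, rp_nep
+	 * and lnksta.
+	 */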
+	pcie->reg_syscon =
+		syscon_regmap_lookup_by_phandle_args(dev->of_node,
+						     "starfive,stg-syscon", 4, args);
+
+	if (IS_ERR(pcie->reg_syscon))
+		return dev_err_probe(dev, PTR_ERR(pcie->reg_syscon),
+			"Failed to parse starfive,stg-syscon\n");
+
+	pcie->stg_arfun = args[0];
+	pcie->stg_awfun = args[1];
+	pcie->stg_rp_nep = args[2];
+	pcie->stg_lnksta = args[3];
+
+	pcie->reset_gpio = devm_gpiod_get_optional(dev, "reset", GPIOD_OUT_HIGH);
+	if (IS_ERR(pcie->reset_gpio))
+		return dev_err_probe(dev, PTR_ERR(pcie->reset_gpio),
+				     "Failed to get reset gpio\n");
+
+	pcie->power_gpio = devm_gpiod_get_optional(dev, "enable", GPIOD_OUT_LOW);
+	if (IS_ERR(pcie->power_gpio))
+		return dev_err_probe(dev, PTR_ERR(pcie->power_gpio),
+				     "Failed to get power gpio\n");
+
+	return 0;
+}
+
+static struct pci_ops starfive_pcie_ops = {
+	.map_bus	= plda_pcie_map_bus,
+	.read		= pci_generic_config_read,
+	.write		= starfive_pcie_config_write,
+};
+
+static int starfive_pcie_clk_rst_init(struct starfive_jh7110_pcie *pcie)
+{
+	int ret;
+	struct device *dev = pcie->plda.dev;
+
+	ret = clk_bulk_prepare_enable(pcie->num_clks, pcie->clks);
+	if (ret) {
+		dev_err(dev, "Failed to enable clocks\n");
+		return ret;
+	}
+
+	ret = reset_control_deassert(pcie->resets);
+	if (ret) {
+		clk_bulk_disable_unprepare(pcie->num_clks, pcie->clks);
+		dev_err(dev, "Failed to deassert resets\n");
+	}
+
+	return ret;
+}
+
+static void starfive_pcie_clk_rst_deinit(struct starfive_jh7110_pcie *pcie)
+{
+	reset_control_assert(pcie->resets);
+	clk_bulk_disable_unprepare(pcie->num_clks, pcie->clks);
+}
+
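+/* Return 1 if the data link trained within the timeout, 0 otherwise */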
+static int starfive_pcie_is_link_up(struct starfive_jh7110_pcie *pcie)
+{
+	struct device *dev = pcie->plda.dev;
+	int ret;
+	u32 stg_reg_val;
+
+	/* 100ms timeout value should be enough for Gen1/2 training */
+	ret = regmap_read_poll_timeout(pcie->reg_syscon,
+				       pcie->stg_lnksta,
+				       stg_reg_val,
+				       stg_reg_val & DATA_LINK_ACTIVE,
+				       10 * 1000, 100 * 1000);
+
+	/* If the link is down (no device in slot), then exit. */
+	if (ret == -ETIMEDOUT) {
+		dev_info(dev, "Port link down, exit.\n");
+		return 0;
+	} else if (ret == 0) {
+		dev_info(dev, "Port link up.\n");
+		return 1;
+	}
+
+	return 0;
+}
+
+static int starfive_pcie_enable_phy(struct device *dev, struct plda_pcie *pcie)
+{
+	int ret;
+
+	if (!pcie->phy)
+		return 0;
+
+	ret = phy_init(pcie->phy);
+	if (ret)
+		return dev_err_probe(dev, ret,
+			"failed to initialize pcie phy\n");
+
+	ret = phy_set_mode(pcie->phy, PHY_MODE_PCIE);
+	if (ret) {
+		dev_err(dev, "failed to set pcie mode\n");
+		goto err_phy_on;
+	}
+
+	ret = phy_power_on(pcie->phy);
+	if (ret) {
+		dev_err(dev, "failed to power on pcie phy\n");
+		goto err_phy_on;
+	}
+
+	return 0;
+
+err_phy_on:
+	phy_exit(pcie->phy);
+	return ret;
+}
+
+static void starfive_pcie_disable_phy(struct plda_pcie *pcie)
+{
+	phy_power_off(pcie->phy);
+	phy_exit(pcie->phy);
+}
+
+static void starfive_pcie_host_deinit(struct plda_pcie *plda)
+{
+	struct starfive_jh7110_pcie *pcie =
+		container_of(plda, struct starfive_jh7110_pcie, plda);
+
+	starfive_pcie_clk_rst_deinit(pcie);
+	if (pcie->power_gpio)
+		gpiod_set_value_cansleep(pcie->power_gpio, 0);
+	starfive_pcie_disable_phy(plda);
+}
+
+static int starfive_pcie_host_init(struct plda_pcie *plda)
+{
+	int i;
+	struct starfive_jh7110_pcie *pcie =
+		container_of(plda, struct starfive_jh7110_pcie, plda);
+	struct device *dev = plda->dev;
+	int ret;
+
+	ret = starfive_pcie_enable_phy(dev, plda);
+	if (ret)
+		return ret;
+
+	regmap_update_bits(pcie->reg_syscon, pcie->stg_rp_nep,
+			   STG_SYSCON_K_RP_NEP, STG_SYSCON_K_RP_NEP);
+
+	regmap_update_bits(pcie->reg_syscon, pcie->stg_awfun,
+			   STG_SYSCON_CKREF_SRC_MASK,
+			   2 << STG_SYSCON_CKREF_SRC_SHIFT);
+
+	regmap_update_bits(pcie->reg_syscon, pcie->stg_awfun,
+			   STG_SYSCON_CLKREQ, STG_SYSCON_CLKREQ);
+
+	ret = starfive_pcie_clk_rst_init(pcie);
+	if (ret)
+		return ret;
+
+	if (pcie->power_gpio)
+		gpiod_set_value_cansleep(pcie->power_gpio, 1);
+
+	gpiod_set_value_cansleep(pcie->reset_gpio, 1);
+
+	/* Disable physical functions except #0 */
+	for (i = 1; i < PCIE_FUNC_NUM; i++) {
+		regmap_update_bits(pcie->reg_syscon,
+				   pcie->stg_arfun,
+				   STG_SYSCON_AXI4_SLVL_ARFUNC_MASK,
+				   (i << PHY_FUNC_SHIFT) <<
+				   STG_SYSCON_AXI4_SLVL_ARFUNC_SHIFT);
+		regmap_update_bits(pcie->reg_syscon,
+				   pcie->stg_awfun,
+				   STG_SYSCON_AXI4_SLVL_AWFUNC_MASK,
+				   i << PHY_FUNC_SHIFT);
+
+		plda_pcie_disable_func(plda);
+	}
+
+	regmap_update_bits(pcie->reg_syscon, pcie->stg_arfun,
+			   STG_SYSCON_AXI4_SLVL_ARFUNC_MASK, 0);
+	regmap_update_bits(pcie->reg_syscon, pcie->stg_awfun,
+			   STG_SYSCON_AXI4_SLVL_AWFUNC_MASK, 0);
+
+	/* Enable root port */
+	plda_pcie_enable_root_port(plda);
+
+	/* PCIe PCI Standard Configuration Identification Settings. */
+	plda_pcie_set_standard_class(plda);
+
+	/*
+	 * LTR message forwarding in PCIe Message Reception is enabled by
+	 * the core by default, but the forwarding ID and address also
+	 * need to be configured. If LTR message forwarding is left
+	 * enabled without a legal forwarding address, the kernel gets
+	 * stuck after this driver probes. As a workaround, disable LTR
+	 * message forwarding support in PCIe Message Reception.
+	 */
+	plda_pcie_disable_ltr(plda);
+
+	/* Prefetchable memory window 64-bit addressing support */
+	plda_pcie_set_pref_win_64bit(plda);
+
+	/*
+	 * Ensure that PERST has been asserted for at least 100 ms;
+	 * sleep for 300 ms to add margin.
+	 */
+	msleep(300);
+	gpiod_set_value_cansleep(pcie->reset_gpio, 0);
+
+	if (!starfive_pcie_is_link_up(pcie))
+		return -EIO;
+
+	return 0;
+}
+
+static const struct plda_pcie_ops pcie_ops = {
+	.host_init = starfive_pcie_host_init,
+	.host_deinit = starfive_pcie_host_deinit,
+};
+
+static int starfive_pcie_probe(struct platform_device *pdev)
+{
+	struct device *dev = &pdev->dev;
+	struct starfive_jh7110_pcie *pcie;
+	struct plda_pcie *plda;
+	int ret;
+
+	pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL);
+	if (!pcie)
+		return -ENOMEM;
+
+	plda = &pcie->plda;
+	plda->dev = dev;
+
+	ret = starfive_pcie_parse_dt(pcie, dev);
+	if (ret)
+		return ret;
+
+	plda->ops = &pcie_ops;
+	plda->num_events = NUM_PLDA_EVENTS;
+	ret = plda_pcie_host_init(&pcie->plda, &starfive_pcie_ops);
+	if (ret)
+		return ret;
+
+	pm_runtime_enable(&pdev->dev);
+	pm_runtime_get_sync(&pdev->dev);
+	platform_set_drvdata(pdev, pcie);
+
+	return 0;
+}
+
+static void starfive_pcie_remove(struct platform_device *pdev)
+{
+	struct starfive_jh7110_pcie *pcie = platform_get_drvdata(pdev);
+
+	plda_pcie_host_deinit(&pcie->plda);
+	platform_set_drvdata(pdev, NULL);
+}
+
+#ifdef CONFIG_PM_SLEEP
+static int __maybe_unused starfive_pcie_suspend_noirq(struct device *dev)
+{
+	struct starfive_jh7110_pcie *pcie = dev_get_drvdata(dev);
+
+	if (!pcie)
+		return 0;
+
+	clk_bulk_disable_unprepare(pcie->num_clks, pcie->clks);
+	starfive_pcie_disable_phy(&pcie->plda);
+
+	return 0;
+}
+
+static int __maybe_unused starfive_pcie_resume_noirq(struct device *dev)
+{
+	struct starfive_jh7110_pcie *pcie = dev_get_drvdata(dev);
+	int ret;
+
+	if (!pcie)
+		return 0;
+
+	ret = starfive_pcie_enable_phy(dev, &pcie->plda);
+	if (ret)
+		return ret;
+
+	ret = clk_bulk_prepare_enable(pcie->num_clks, pcie->clks);
+	if (ret) {
+		dev_err(dev, "Failed to enable clocks\n");
+		starfive_pcie_disable_phy(&pcie->plda);
+		return ret;
+	}
+
+	return ret;
+}
+
+static const struct dev_pm_ops starfive_pcie_pm_ops = {
+	SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(starfive_pcie_suspend_noirq,
+				      starfive_pcie_resume_noirq)
+};
+#endif
+
+static const struct of_device_id starfive_pcie_of_match[] = {
+	{ .compatible = "starfive,jh7110-pcie"},
+	{ /* sentinel */ }
+};
+MODULE_DEVICE_TABLE(of, starfive_pcie_of_match);
+
+static struct platform_driver starfive_pcie_driver = {
+	.driver = {
+		.name = "pcie-starfive",
+		.of_match_table = of_match_ptr(starfive_pcie_of_match),
+#ifdef CONFIG_PM_SLEEP
+		.pm = &starfive_pcie_pm_ops,
+#endif
+	},
+	.probe = starfive_pcie_probe,
+	.remove_new = starfive_pcie_remove,
+};
+module_platform_driver(starfive_pcie_driver);
+
+MODULE_DESCRIPTION("StarFive JH7110 PCIe host driver");
+MODULE_LICENSE("GPL v2");
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 92+ messages in thread

* [PATCH v1 9/9] riscv: dts: starfive: add PCIe dts configuration for JH7110
  2023-07-19 10:20 ` Minda Chen
@ 2023-07-19 10:20   ` Minda Chen
  -1 siblings, 0 replies; 92+ messages in thread
From: Minda Chen @ 2023-07-19 10:20 UTC (permalink / raw)
  To: Daire McNamara, Conor Dooley, Rob Herring, Krzysztof Kozlowski,
	Bjorn Helgaas, Lorenzo Pieralisi, Krzysztof Wilczyński,
	Emil Renner Berthing
  Cc: devicetree, linux-kernel, linux-riscv, linux-pci, Paul Walmsley,
	Palmer Dabbelt, Albert Ou, Philipp Zabel, Mason Huo, Leyfoon Tan,
	Kevin Xie, Minda Chen

Add PCIe dts configuration for the JH7110 SoC platform.

Signed-off-by: Minda Chen <minda.chen@starfivetech.com>
Reviewed-by: Hal Feng <hal.feng@starfivetech.com>
---
 .../jh7110-starfive-visionfive-2.dtsi         | 44 ++++++++++
 arch/riscv/boot/dts/starfive/jh7110.dtsi      | 88 +++++++++++++++++++
 2 files changed, 132 insertions(+)

diff --git a/arch/riscv/boot/dts/starfive/jh7110-starfive-visionfive-2.dtsi b/arch/riscv/boot/dts/starfive/jh7110-starfive-visionfive-2.dtsi
index de0f40a8be93..70deb4a3cb56 100644
--- a/arch/riscv/boot/dts/starfive/jh7110-starfive-visionfive-2.dtsi
+++ b/arch/riscv/boot/dts/starfive/jh7110-starfive-visionfive-2.dtsi
@@ -208,6 +208,36 @@
 		};
 	};
 
+	pcie0_pins: pcie0-0 {
+		pcie-pins {
+			pinmux = <GPIOMUX(32, GPOUT_HIGH,
+					      GPOEN_ENABLE,
+					      GPI_NONE)>,
+				 <GPIOMUX(27, GPOUT_HIGH,
+					      GPOEN_ENABLE,
+					      GPI_NONE)>;
+			drive-strength = <2>;
+			input-enable;
+			input-schmitt-disable;
+			slew-rate = <0>;
+		};
+	};
+
+	pcie1_pins: pcie1-0 {
+		pcie-pins {
+			pinmux = <GPIOMUX(21, GPOUT_HIGH,
+					      GPOEN_ENABLE,
+					      GPI_NONE)>,
+				 <GPIOMUX(29, GPOUT_HIGH,
+					      GPOEN_ENABLE,
+					      GPI_NONE)>;
+			drive-strength = <2>;
+			input-enable;
+			input-schmitt-disable;
+			slew-rate = <0>;
+		};
+	};
+
 	uart0_pins: uart0-0 {
 		tx-pins {
 			pinmux = <GPIOMUX(5, GPOUT_SYS_UART0_TX,
@@ -233,6 +263,20 @@
 	};
 };
 
+&pcie0 {
+	pinctrl-names = "default";
+	pinctrl-0 = <&pcie0_pins>;
+	reset-gpios = <&sysgpio 26 GPIO_ACTIVE_LOW>;
+	status = "okay";
+};
+
+&pcie1 {
+	pinctrl-names = "default";
+	pinctrl-0 = <&pcie1_pins>;
+	reset-gpios = <&sysgpio 28 GPIO_ACTIVE_LOW>;
+	status = "okay";
+};
+
 &uart0 {
 	pinctrl-names = "default";
 	pinctrl-0 = <&uart0_pins>;
diff --git a/arch/riscv/boot/dts/starfive/jh7110.dtsi b/arch/riscv/boot/dts/starfive/jh7110.dtsi
index 02354e642c44..6e1a73d9c1f4 100644
--- a/arch/riscv/boot/dts/starfive/jh7110.dtsi
+++ b/arch/riscv/boot/dts/starfive/jh7110.dtsi
@@ -629,5 +629,93 @@
 			#reset-cells = <1>;
 			power-domains = <&pwrc JH7110_PD_VOUT>;
 		};
+
+		pcie0: pcie@2b000000 {
+			compatible = "starfive,jh7110-pcie";
+			reg = <0x0 0x2b000000 0x0 0x100000>,
+			      <0x9 0x40000000 0x0 0x1000000>;
+			reg-names = "host", "cfg";
+			#address-cells = <3>;
+			#size-cells = <2>;
+			#interrupt-cells = <1>;
+			ranges = <0x82000000  0x0 0x30000000  0x0 0x30000000 0x0 0x08000000>,
+				 <0xc3000000  0x9 0x00000000  0x9 0x00000000 0x0 0x40000000>;
+			interrupts = <56>;
+			interrupt-parent = <&plic>;
+			interrupt-map-mask = <0x0 0x0 0x0 0x7>;
+			interrupt-map = <0x0 0x0 0x0 0x1 &pcie_intc0 0x1>,
+					<0x0 0x0 0x0 0x2 &pcie_intc0 0x2>,
+					<0x0 0x0 0x0 0x3 &pcie_intc0 0x3>,
+					<0x0 0x0 0x0 0x4 &pcie_intc0 0x4>;
+			msi-parent = <&pcie0>;
+			msi-controller;
+			device_type = "pci";
+			starfive,stg-syscon = <&stg_syscon 0xc0 0xc4 0x130 0x1b8>;
+			bus-range = <0x0 0xff>;
+			clocks = <&syscrg JH7110_SYSCLK_NOC_BUS_STG_AXI>,
+				 <&stgcrg JH7110_STGCLK_PCIE0_TL>,
+				 <&stgcrg JH7110_STGCLK_PCIE0_AXI_MST0>,
+				 <&stgcrg JH7110_STGCLK_PCIE0_APB>;
+			clock-names = "noc", "tl", "axi_mst0", "apb";
+			resets = <&stgcrg JH7110_STGRST_PCIE0_AXI_MST0>,
+				 <&stgcrg JH7110_STGRST_PCIE0_AXI_SLV0>,
+				 <&stgcrg JH7110_STGRST_PCIE0_AXI_SLV>,
+				 <&stgcrg JH7110_STGRST_PCIE0_BRG>,
+				 <&stgcrg JH7110_STGRST_PCIE0_CORE>,
+				 <&stgcrg JH7110_STGRST_PCIE0_APB>;
+			reset-names = "mst0", "slv0", "slv", "brg",
+				      "core", "apb";
+			status = "disabled";
+
+			pcie_intc0: interrupt-controller {
+				#address-cells = <0>;
+				#interrupt-cells = <1>;
+				interrupt-controller;
+			};
+		};
+
+		pcie1: pcie@2c000000 {
+			compatible = "starfive,jh7110-pcie";
+			reg = <0x0 0x2c000000 0x0 0x1000000>,
+			      <0x9 0xc0000000 0x0 0x10000000>;
+			reg-names = "host", "cfg";
+			#address-cells = <3>;
+			#size-cells = <2>;
+			#interrupt-cells = <1>;
+			ranges = <0x82000000  0x0 0x38000000  0x0 0x38000000 0x0 0x08000000>,
+				 <0xc3000000  0x9 0x80000000  0x9 0x80000000 0x0 0x40000000>;
+			interrupts = <57>;
+			interrupt-parent = <&plic>;
+			interrupt-map-mask = <0x0 0x0 0x0 0x7>;
+			interrupt-map = <0x0 0x0 0x0 0x1 &pcie_intc1 0x1>,
+					<0x0 0x0 0x0 0x2 &pcie_intc1 0x2>,
+					<0x0 0x0 0x0 0x3 &pcie_intc1 0x3>,
+					<0x0 0x0 0x0 0x4 &pcie_intc1 0x4>;
+			msi-parent = <&pcie1>;
+			msi-controller;
+			device_type = "pci";
+			starfive,stg-syscon = <&stg_syscon 0x270 0x274 0x2e0 0x368>;
+			bus-range = <0x0 0xff>;
+			clocks = <&syscrg JH7110_SYSCLK_NOC_BUS_STG_AXI>,
+				 <&stgcrg JH7110_STGCLK_PCIE1_TL>,
+				 <&stgcrg JH7110_STGCLK_PCIE1_AXI_MST0>,
+				 <&stgcrg JH7110_STGCLK_PCIE1_APB>;
+			clock-names = "noc", "tl", "axi_mst0", "apb";
+			resets = <&stgcrg JH7110_STGRST_PCIE1_AXI_MST0>,
+				 <&stgcrg JH7110_STGRST_PCIE1_AXI_SLV0>,
+				 <&stgcrg JH7110_STGRST_PCIE1_AXI_SLV>,
+				 <&stgcrg JH7110_STGRST_PCIE1_BRG>,
+				 <&stgcrg JH7110_STGRST_PCIE1_CORE>,
+				 <&stgcrg JH7110_STGRST_PCIE1_APB>;
+			reset-names = "mst0", "slv0", "slv", "brg",
+				      "core", "apb";
+			status = "disabled";
+
+			pcie_intc1: interrupt-controller {
+				#address-cells = <0>;
+				#interrupt-cells = <1>;
+				interrupt-controller;
+			};
+		};
 	};
 };
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 92+ messages in thread

* Re: [PATCH v1 1/9] dt-bindings: PCI: Add PLDA XpressRICH PCIe host common properties
  2023-07-19 10:20   ` Minda Chen
@ 2023-07-19 10:52     ` Krzysztof Kozlowski
  -1 siblings, 0 replies; 92+ messages in thread
From: Krzysztof Kozlowski @ 2023-07-19 10:52 UTC (permalink / raw)
  To: Minda Chen, Daire McNamara, Conor Dooley, Rob Herring,
	Krzysztof Kozlowski, Bjorn Helgaas, Lorenzo Pieralisi,
	Krzysztof Wilczyński, Emil Renner Berthing
  Cc: devicetree, linux-kernel, linux-riscv, linux-pci, Paul Walmsley,
	Palmer Dabbelt, Albert Ou, Philipp Zabel, Mason Huo, Leyfoon Tan,
	Kevin Xie

On 19/07/2023 12:20, Minda Chen wrote:
> Add PLDA XpressRICH PCIe host common properties dt-binding doc.
> Microchip PolarFire PCIe host using PLDA IP.
> Extract properties from Microchip PolarFire PCIe host.
> 
> Signed-off-by: Minda Chen <minda.chen@starfivetech.com>
> Reviewed-by: Hal Feng <hal.feng@starfivetech.com>
> ---
>  .../pci/plda,xpressrich-pcie-common.yaml      | 72 +++++++++++++++++++
>  1 file changed, 72 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/pci/plda,xpressrich-pcie-common.yaml

How is this related to the existing plda,xpressrich3-axi?

> 
> diff --git a/Documentation/devicetree/bindings/pci/plda,xpressrich-pcie-common.yaml b/Documentation/devicetree/bindings/pci/plda,xpressrich-pcie-common.yaml
> new file mode 100644
> index 000000000000..3627a846c5d1
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/pci/plda,xpressrich-pcie-common.yaml
> @@ -0,0 +1,72 @@
> +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
> +%YAML 1.2
> +---
> +$id: http://devicetree.org/schemas/pci/plda,xpressrich-pcie-common.yaml#
> +$schema: http://devicetree.org/meta-schemas/core.yaml#
> +
> +title: PLDA XpressRICH PCIe host common properties
> +
> +maintainers:
> +  - Daire McNamara <daire.mcnamara@microchip.com>
> +  - Minda Chen <minda.chen@starfivetech.com>
> +
> +description:
> +  Generic PLDA XpressRICH PCIe host common properties.
> +
> +select: false

This should not be needed.

> +
> +properties:
> +  reg:
> +    description:
> +      At least host IP register set and configuration space are

"At least" does not fit here since you do not allow anything else.

> +      required for normal controller work.
> +    maxItems: 2
> +
> +  reg-names:
> +    oneOf:
> +      - items:
> +          - const: cfg
> +          - const: apb
> +      - items:
> +          - const: host
> +          - const: cfg

Maybe keep similar order, so cfg followed by host?
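
Something like this (untested sketch), with the reg entries reordered
to match:

  reg-names:
    oneOf:
      - items:
          - const: cfg
          - const: apb
      - items:
          - const: cfg
          - const: host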

Best regards,
Krzysztof


^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [PATCH v1 2/9] dt-bindings: PCI: microchip: Remove the PLDA common properties
  2023-07-19 10:20   ` Minda Chen
@ 2023-07-19 10:53     ` Krzysztof Kozlowski
  -1 siblings, 0 replies; 92+ messages in thread
From: Krzysztof Kozlowski @ 2023-07-19 10:53 UTC (permalink / raw)
  To: Minda Chen, Daire McNamara, Conor Dooley, Rob Herring,
	Krzysztof Kozlowski, Bjorn Helgaas, Lorenzo Pieralisi,
	Krzysztof Wilczyński, Emil Renner Berthing
  Cc: devicetree, linux-kernel, linux-riscv, linux-pci, Paul Walmsley,
	Palmer Dabbelt, Albert Ou, Philipp Zabel, Mason Huo, Leyfoon Tan,
	Kevin Xie

On 19/07/2023 12:20, Minda Chen wrote:
> Add plda,xpressrich-pcie-common.yaml reference and
> remove the PLDA XpressRICH PCIe host common properties.
> 
> Signed-off-by: Minda Chen <minda.chen@starfivetech.com>

This should be squashed with previous patch.

> ---
>  .../bindings/pci/microchip,pcie-host.yaml     | 45 +------------------
>  1 file changed, 1 insertion(+), 44 deletions(-)
> 


Best regards,
Krzysztof


^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [PATCH v1 5/9] dt-bindings: PLDA: Add PLDA XpressRICH PCIe host controller
  2023-07-19 10:20   ` Minda Chen
@ 2023-07-19 10:55     ` Krzysztof Kozlowski
  -1 siblings, 0 replies; 92+ messages in thread
From: Krzysztof Kozlowski @ 2023-07-19 10:55 UTC (permalink / raw)
  To: Minda Chen, Daire McNamara, Conor Dooley, Rob Herring,
	Krzysztof Kozlowski, Bjorn Helgaas, Lorenzo Pieralisi,
	Krzysztof Wilczyński, Emil Renner Berthing
  Cc: devicetree, linux-kernel, linux-riscv, linux-pci, Paul Walmsley,
	Palmer Dabbelt, Albert Ou, Philipp Zabel, Mason Huo, Leyfoon Tan,
	Kevin Xie

On 19/07/2023 12:20, Minda Chen wrote:
> Add PLDA XpressRICH host controller dt-bindings. Both Microchip
> PolarFire SoC and StarFive JH7110 SoC are using PLDA XpressRICH
> PCIe host controller IP.
> 
> Signed-off-by: Minda Chen <minda.chen@starfivetech.com>
> Reviewed-by: Hal Feng <hal.feng@starfivetech.com>
> ---
>  .../pci/plda,xpressrich-pcie-host.yaml        | 66 +++++++++++++++++++
>  1 file changed, 66 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/pci/plda,xpressrich-pcie-host.yaml
> 
> diff --git a/Documentation/devicetree/bindings/pci/plda,xpressrich-pcie-host.yaml b/Documentation/devicetree/bindings/pci/plda,xpressrich-pcie-host.yaml
> new file mode 100644
> index 000000000000..10a10862a078
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/pci/plda,xpressrich-pcie-host.yaml
> @@ -0,0 +1,66 @@
> +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
> +%YAML 1.2
> +---
> +$id: http://devicetree.org/schemas/pci/plda,xpressrich-pcie-host.yaml#
> +$schema: http://devicetree.org/meta-schemas/core.yaml#
> +
> +title: PLDA XpressRICH PCIe host controller
> +
> +maintainers:
> +  - Daire McNamara <daire.mcnamara@microchip.com>
> +  - Minda Chen <minda.chen@starfivetech.com>
> +
> +allOf:
> +  - $ref: /schemas/pci/pci-bus.yaml#
> +  - $ref: plda,xpressrich-pcie-common.yaml#
> +  - $ref: /schemas/interrupt-controller/msi-controller.yaml#
> +
> +properties:
> +  compatible:
> +    const: plda,xpressrich-pcie-host
> +
> +required:
> +  - compatible
> +  - reg
> +  - reg-names
> +  - "#interrupt-cells"
> +  - interrupts
> +  - interrupt-map-mask
> +  - interrupt-map
> +  - msi-controller

Your common schema should require the properties which it defines. Here
you should require only the new or differing properties.
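
For instance (rough sketch), plda,xpressrich-pcie-common.yaml would
carry what it defines:

  required:
    - reg
    - reg-names
    - "#interrupt-cells"
    - interrupts
    - interrupt-map-mask
    - interrupt-map

and this file would then only need:

  required:
    - compatible
    - msi-controller
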

> +
> +unevaluatedProperties: false
> +
> +examples:
> +  - |
> +    soc {
> +            #address-cells = <2>;

Use 4 spaces for example indentation.

> +            #size-cells = <2>;
> +            pcie0: pcie@12000000 {


Best regards,
Krzysztof


^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [PATCH v1 7/9] dt-bindings: PCI: Add StarFive JH7110 PCIe controller
  2023-07-19 10:20   ` Minda Chen
@ 2023-07-19 10:56     ` Krzysztof Kozlowski
  -1 siblings, 0 replies; 92+ messages in thread
From: Krzysztof Kozlowski @ 2023-07-19 10:56 UTC (permalink / raw)
  To: Minda Chen, Daire McNamara, Conor Dooley, Rob Herring,
	Krzysztof Kozlowski, Bjorn Helgaas, Lorenzo Pieralisi,
	Krzysztof Wilczyński, Emil Renner Berthing
  Cc: devicetree, linux-kernel, linux-riscv, linux-pci, Paul Walmsley,
	Palmer Dabbelt, Albert Ou, Philipp Zabel, Mason Huo, Leyfoon Tan,
	Kevin Xie

On 19/07/2023 12:20, Minda Chen wrote:
> Add StarFive JH7110 SoC PCIe controller dt-bindings.
> JH7110 using PLDA XpressRICH PCIe host controller IP.
> 
> Signed-off-by: Minda Chen <minda.chen@starfivetech.com>
> Reviewed-by: Hal Feng <hal.feng@starfivetech.com>
> ---


...

> +
> +required:
> +  - compatible
> +  - reg
> +  - reg-names
> +  - "#interrupt-cells"
> +  - interrupts
> +  - interrupt-map-mask
> +  - interrupt-map
> +  - clocks
> +  - resets
> +  - starfive,stg-syscon
> +  - msi-controller
> +  - reset-gpios

Same concern as previous binding patch.
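
Presumably only the StarFive-specific entries would remain required
here, something like:

  required:
    - compatible
    - clocks
    - resets
    - starfive,stg-syscon
    - reset-gpios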

Best regards,
Krzysztof


^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [PATCH v1 0/9] Refactoring Microchip PolarFire PCIe driver
  2023-07-19 10:20 ` Minda Chen
@ 2023-07-19 15:26   ` Bjorn Helgaas
  -1 siblings, 0 replies; 92+ messages in thread
From: Bjorn Helgaas @ 2023-07-19 15:26 UTC (permalink / raw)
  To: Minda Chen
  Cc: Daire McNamara, Conor Dooley, Rob Herring, Krzysztof Kozlowski,
	Bjorn Helgaas, Lorenzo Pieralisi, Krzysztof Wilczyński,
	Emil Renner Berthing, devicetree, linux-kernel, linux-riscv,
	linux-pci, Paul Walmsley, Palmer Dabbelt, Albert Ou,
	Philipp Zabel, Mason Huo, Leyfoon Tan, Kevin Xie

On Wed, Jul 19, 2023 at 06:20:48PM +0800, Minda Chen wrote:
> This patchset final purpose is add PCIe driver for StarFive JH7110 SoC.
>   dt-bindings: PCI: Add PLDA XpressRICH PCIe host common properties
> JH7110 using PLDA XpressRICH PCIe IP. Microchip PolarFire Using the
> same IP and have commit their codes, which are mixed with PLDA
> controller codes and Microchip platform codes.

I guess this actually adds TWO drivers: PCIE_PLDA_PLAT_HOST (claims
"plda,xpressrich-pcie-host" devices) and PCIE_STARFIVE_HOST (claims
"starfive,jh7110-pcie" devices), right?

> For re-use the PLDA controller codes, I request refactoring microchip
> codes, move PLDA common codes to PLDA files.
> Desigware and Cadence is good example for refactoring codes.
> 
> So first step is extract the PLDA common codes from microchip, and
> refactoring the microchip codes.(patch1 - 4)
> Then add the PLDA platform codes. (patch5, 6)
> At last, add Starfive codes. (patch7 - 9)
> 
> This patchset is base on v6.5-rc1

Doesn't quite apply cleanly for me:

  10:10:15 ~/linux (main)$ git checkout -b wip/minda-starfive-v1 v6.5-rc1
  Switched to a new branch 'wip/minda-starfive-v1'
  10:10:33 ~/linux (wip/minda-starfive-v1)$ git am m/20230719_minda_chen_refactoring_microchip_polarfire_pcie_driver.mbx
  Applying: dt-bindings: PCI: Add PLDA XpressRICH PCIe host common properties
  Applying: dt-bindings: PCI: microchip: Remove the PLDA common properties
  Applying: PCI: PLDA: Get PLDA common codes from Microchip PolarFire host
  Applying: PCI: microchip: Move PCIe driver to PLDA directory
  Applying: dt-bindings: PLDA: Add PLDA XpressRICH PCIe host controller
  Applying: PCI: PLDA: Add host conroller platform driver
  Applying: dt-bindings: PCI: Add StarFive JH7110 PCIe controller
  Applying: PCI: PLDA: starfive: Add JH7110 PCIe controller
  Applying: riscv: dts: starfive: add PCIe dts configuration for JH7110
  error: patch failed: arch/riscv/boot/dts/starfive/jh7110.dtsi:629
  error: arch/riscv/boot/dts/starfive/jh7110.dtsi: patch does not apply
  Patch failed at 0009 riscv: dts: starfive: add PCIe dts configuration for JH7110

>   dt-bindings: PCI: Add PLDA XpressRICH PCIe host common properties
>   dt-bindings: PCI: microchip: Remove the PLDA common properties
>   PCI: PLDA: Get PLDA common codes from Microchip PolarFire host
>   PCI: microchip: Move PCIe driver to PLDA directory
>   dt-bindings: PLDA: Add PLDA XpressRICH PCIe host controller
>   PCI: PLDA: Add host conroller platform driver

"controller"

>   dt-bindings: PCI: Add StarFive JH7110 PCIe controller
>   PCI: PLDA: starfive: Add JH7110 PCIe controller
>   riscv: dts: starfive: add PCIe dts configuration for JH7110

Use "PCI: plda: " prefix for PLDA things that are shared across
multiple drivers.

Use "PCI: starfive: " prefix for starfive-specific things.

This is the same as how drivers/pci/controller/dwc/* looks.

Bjorn

^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [PATCH v1 8/9] PCI: PLDA: starfive: Add JH7110 PCIe controller
  2023-07-19 10:20   ` Minda Chen
@ 2023-07-19 16:48     ` Bjorn Helgaas
  -1 siblings, 0 replies; 92+ messages in thread
From: Bjorn Helgaas @ 2023-07-19 16:48 UTC (permalink / raw)
  To: Minda Chen
  Cc: Daire McNamara, Conor Dooley, Rob Herring, Krzysztof Kozlowski,
	Bjorn Helgaas, Lorenzo Pieralisi, Krzysztof Wilczyński,
	Emil Renner Berthing, devicetree, linux-kernel, linux-riscv,
	linux-pci, Paul Walmsley, Palmer Dabbelt, Albert Ou,
	Philipp Zabel, Mason Huo, Leyfoon Tan, Kevin Xie

On Wed, Jul 19, 2023 at 06:20:56PM +0800, Minda Chen wrote:
> Add StarFive JH7110 SoC PCIe controller platform
> driver codes.

Rewrap all the commit logs to fill 75 columns or so.

>  #define PCIE_PCI_IDS_DW1		0x9c
> -
> +#define  IDS_CLASS_CODE_SHIFT		16
> +#define PCI_MISC			0xB4

Surrounding code uses lower-case hex.  Make it all match.
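
i.e., lower-case both:

  #define PCIE_PCI_IDS_DW1		0x9c
  #define PCI_MISC			0xb4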

> +#define STG_SYSCON_AXI4_SLVL_ARFUNC_MASK	GENMASK(22, 8)
> +#define STG_SYSCON_AXI4_SLVL_ARFUNC_SHIFT	8

When practical, use FIELD_GET() and FIELD_PREP() to avoid the need for
*_SHIFT macros.
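
For example (untested sketch; assumes "arfunc" holds the field value):

  #include <linux/bitfield.h>

  /* compose the register value, no *_SHIFT macro needed */
  val = FIELD_PREP(STG_SYSCON_AXI4_SLVL_ARFUNC_MASK, arfunc);

  /* and read the field back out */
  arfunc = FIELD_GET(STG_SYSCON_AXI4_SLVL_ARFUNC_MASK, val);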

> +struct starfive_jh7110_pcie {
> +	struct plda_pcie	plda;
> +	struct reset_control *resets;
> +	struct clk_bulk_data *clks;
> +	struct regmap *reg_syscon;
> +	struct gpio_desc *power_gpio;
> +	struct gpio_desc *reset_gpio;
> +
> +	u32 stg_arfun;
> +	u32 stg_awfun;
> +	u32 stg_rp_nep;
> +	u32 stg_lnksta;
> +
> +	int num_clks;

If you indent one member with tabs, e.g., "struct plda_pcie        plda",
they should all be indented to match.

> + * The BAR0/1 of bridge should be hidden during enumeration to
> + * avoid the sizing and resource allocation by PCIe core.
> + */
> +static bool starfive_pcie_hide_rc_bar(struct pci_bus *bus, unsigned int  devfn,
> +				      int offset)
> +{
> +	if (pci_is_root_bus(bus) && !devfn &&
> +	    (offset == PCI_BASE_ADDRESS_0 || offset == PCI_BASE_ADDRESS_1))
> +		return true;
> +
> +	return false;
> +}
> +
> +int starfive_pcie_config_write(struct pci_bus *bus, unsigned int devfn,
> +			       int where, int size, u32 value)
> +{
> +	if (starfive_pcie_hide_rc_bar(bus, devfn, where))
> +		return PCIBIOS_BAD_REGISTER_NUMBER;

I think you are trying to present BARs 0 & 1 as unimplemented.  Such BARs
are hardwired to zero, so you should make them behave that way (both
read and write).  Many callers of config accessors don't check the
return value, so I don't think it's reliable to just return
PCIBIOS_BAD_REGISTER_NUMBER.
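
Something like this instead (untested sketch; assumes the generic
pci_generic_config_*() accessors back these ops):

  static int starfive_pcie_config_read(struct pci_bus *bus, unsigned int devfn,
  				     int where, int size, u32 *value)
  {
  	/* BARs 0/1 are "unimplemented": reads return zero */
  	if (starfive_pcie_hide_rc_bar(bus, devfn, where)) {
  		*value = 0;
  		return PCIBIOS_SUCCESSFUL;
  	}

  	return pci_generic_config_read(bus, devfn, where, size, value);
  }

  static int starfive_pcie_config_write(struct pci_bus *bus, unsigned int devfn,
  				      int where, int size, u32 value)
  {
  	/* BARs 0/1 are "unimplemented": writes are silently dropped */
  	if (starfive_pcie_hide_rc_bar(bus, devfn, where))
  		return PCIBIOS_SUCCESSFUL;

  	return pci_generic_config_write(bus, devfn, where, size, value);
  }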

> +static int starfive_pcie_is_link_up(struct starfive_jh7110_pcie *pcie)
> +{
> +	struct device *dev = pcie->plda.dev;
> +	int ret;
> +	u32 stg_reg_val;
> +
> +	/* 100ms timeout value should be enough for Gen1/2 training */
> +	ret = regmap_read_poll_timeout(pcie->reg_syscon,
> +				       pcie->stg_lnksta,
> +				       stg_reg_val,
> +				       stg_reg_val & DATA_LINK_ACTIVE,
> +				       10 * 1000, 100 * 1000);
> +
> +	/* If the link is down (no device in slot), then exit. */
> +	if (ret == -ETIMEDOUT) {
> +		dev_info(dev, "Port link down, exit.\n");
> +		return 0;
> +	} else if (ret == 0) {
> +		dev_info(dev, "Port link up.\n");
> +		return 1;
> +	}

Please copy the naming and style of the "*_pcie_link_up()" functions
in other drivers.  These are boolean functions with no side effects,
including no timeouts.

Some drivers have "*wait_for_link()" functions if polling is needed.
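
Roughly (untested sketch derived from the quoted poll):

  static bool starfive_pcie_link_up(struct starfive_jh7110_pcie *pcie)
  {
  	u32 val;

  	/* single read, no side effects, no timeout */
  	if (regmap_read(pcie->reg_syscon, pcie->stg_lnksta, &val))
  		return false;

  	return !!(val & DATA_LINK_ACTIVE);
  }

  static int starfive_pcie_wait_for_link(struct starfive_jh7110_pcie *pcie)
  {
  	u32 val;

  	/* 100ms should be enough for Gen1/2 training */
  	return regmap_read_poll_timeout(pcie->reg_syscon, pcie->stg_lnksta,
  					val, val & DATA_LINK_ACTIVE,
  					10 * 1000, 100 * 1000);
  }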

> +		return dev_err_probe(dev, ret,
> +			"failed to initialize pcie phy\n");

Driver messages should match (all capitalized or none capitalized).

> +	/* Enable root port */

Superfluous comment, since the function name says the same.

> +	plda_pcie_enable_root_port(plda);

> +	/* Ensure that PERST has been asserted for at least 100 ms */
> +	msleep(300);
> +	gpiod_set_value_cansleep(pcie->reset_gpio, 0);

At least 100 ms, but you sleep *300* ms?  This is probably related to
https://lore.kernel.org/r/20230718155515.GA483233@bhelgaas

Please include a comment with the source of the delay value.  I assume
it's T_PVPERL and T_PERST-CLK from the PCIe CEM spec.  This way we can
someday share those #defines across drivers.
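
E.g. (the #define name here is made up, nothing shared yet):

  /* T_PVPERL: power stable to PERST# inactive, from the PCIe CEM spec */
  #define PCIE_T_PVPERL_MS	100

  	/* PERST# must stay asserted for at least T_PVPERL after power up */
  	msleep(PCIE_T_PVPERL_MS);
  	gpiod_set_value_cansleep(pcie->reset_gpio, 0);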

> +#ifdef CONFIG_PM_SLEEP
> +static int __maybe_unused starfive_pcie_suspend_noirq(struct device *dev)

I think you can dispense with some of these #ifdefs and the
__maybe_unused as in
https://lore.kernel.org/all/20220720224829.GA1667002@bhelgaas/
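
i.e., something like (untested):

  static const struct dev_pm_ops starfive_pcie_pm_ops = {
  	NOIRQ_SYSTEM_SLEEP_PM_OPS(starfive_pcie_suspend_noirq,
  				  starfive_pcie_resume_noirq)
  };

  	/* in the platform_driver: compiles out when CONFIG_PM_SLEEP=n */
  	.driver = {
  		.pm = pm_sleep_ptr(&starfive_pcie_pm_ops),
  	},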

> +{
> +	struct starfive_jh7110_pcie *pcie = dev_get_drvdata(dev);
> +
> +	if (!pcie)
> +		return 0;

How can this happen?  If we're only detecting memory corruption, it's
not worth it.

Bjorn

^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [PATCH v1 0/9] Refactoring Microchip PolarFire PCIe driver
  2023-07-19 10:20 ` Minda Chen
@ 2023-07-19 16:58   ` Conor Dooley
  -1 siblings, 0 replies; 92+ messages in thread
From: Conor Dooley @ 2023-07-19 16:58 UTC (permalink / raw)
  To: Minda Chen
  Cc: Daire McNamara, Rob Herring, Krzysztof Kozlowski, Bjorn Helgaas,
	Lorenzo Pieralisi, Krzysztof Wilczyński,
	Emil Renner Berthing, devicetree, linux-kernel, linux-riscv,
	linux-pci, Paul Walmsley, Palmer Dabbelt, Albert Ou,
	Philipp Zabel, Mason Huo, Leyfoon Tan, Kevin Xie

Hey Minda,

On Wed, Jul 19, 2023 at 06:20:48PM +0800, Minda Chen wrote:
> This patchset final purpose is add PCIe driver for StarFive JH7110 SoC.
> JH7110 using PLDA XpressRICH PCIe IP. Microchip PolarFire Using the
> same IP and have commit their codes, which are mixed with PLDA
> controller codes and Microchip platform codes.
> 
> For re-use the PLDA controller codes, I request refactoring microchip
> codes, move PLDA common codes to PLDA files.
> Desigware and Cadence is good example for refactoring codes.
> 
> So first step is extract the PLDA common codes from microchip, and
> refactoring the microchip codes.(patch1 - 4)
> Then add the PLDA platform codes. (patch5, 6)
> At last, add Starfive codes. (patch7 - 9)

Thanks for sending this, I'll try to have a look through it tomorrow, or
if not, early next week. As pointed out off-list, the gist of what you
have here looked good to myself and Daire.

> This patchset is base on v6.5-rc1
> 
> patch1 is add PLDA XpressRICH PCIe host common properties dt-binding
>        docs, most are extracted from microchip,pcie-host.yaml
> patch2 is add plda,xpressrich-pcie-common.yaml(patch1 file) reference
>        and remove the PLDA common properties.
> patch3 is extracting the PLDA common codes from microchip Polarfire PCIe
>        codes. The change list in the commit message.
> patch4 is move microchip driver to PLDA directory and remove the PLDA
>        common codes.
> patch5 is add PLDA Xpressrich platform driver dt-binding doc.
> patch6 is PLDA Xpressrich platform driver.
> patch7 is add StarFive JH7110 PCIe dt-binding doc.
> patch8 is add StarFive JH7110 Soc PCIe platform codes.
> patch9 is StarFive JH7110 device tree configuration.
> 
> I have noticed that Daire have changed microchip's codes.
> https://patchwork.kernel.org/project/linux-pci/cover/20230630154859.2049521-1-daire.mcnamara@microchip.com/

I'll go and ping this, it's been a few weeks with no movement :)

Thanks,
Conor.


^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [PATCH v1 5/9] dt-bindings: PLDA: Add PLDA XpressRICH PCIe host controller
  2023-07-19 10:20   ` Minda Chen
@ 2023-07-19 22:29     ` Rob Herring
  -1 siblings, 0 replies; 92+ messages in thread
From: Rob Herring @ 2023-07-19 22:29 UTC (permalink / raw)
  To: Minda Chen
  Cc: Daire McNamara, Conor Dooley, Krzysztof Kozlowski, Bjorn Helgaas,
	Lorenzo Pieralisi, Krzysztof Wilczyński,
	Emil Renner Berthing, devicetree, linux-kernel, linux-riscv,
	linux-pci, Paul Walmsley, Palmer Dabbelt, Albert Ou,
	Philipp Zabel, Mason Huo, Leyfoon Tan, Kevin Xie

On Wed, Jul 19, 2023 at 06:20:53PM +0800, Minda Chen wrote:
> Add PLDA XpressRICH host controller dt-bindings. Both Microchip
> PolarFire SoC and StarFive JH7110 SoC are using PLDA XpressRICH
> PCIe host controller IP.
> 
> Signed-off-by: Minda Chen <minda.chen@starfivetech.com>
> Reviewed-by: Hal Feng <hal.feng@starfivetech.com>
> ---
>  .../pci/plda,xpressrich-pcie-host.yaml        | 66 +++++++++++++++++++
>  1 file changed, 66 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/pci/plda,xpressrich-pcie-host.yaml
> 
> diff --git a/Documentation/devicetree/bindings/pci/plda,xpressrich-pcie-host.yaml b/Documentation/devicetree/bindings/pci/plda,xpressrich-pcie-host.yaml
> new file mode 100644
> index 000000000000..10a10862a078
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/pci/plda,xpressrich-pcie-host.yaml
> @@ -0,0 +1,66 @@
> +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
> +%YAML 1.2
> +---
> +$id: http://devicetree.org/schemas/pci/plda,xpressrich-pcie-host.yaml#
> +$schema: http://devicetree.org/meta-schemas/core.yaml#
> +
> +title: PLDA XpressRICH PCIe host controller
> +
> +maintainers:
> +  - Daire McNamara <daire.mcnamara@microchip.com>
> +  - Minda Chen <minda.chen@starfivetech.com>
> +
> +allOf:
> +  - $ref: /schemas/pci/pci-bus.yaml#
> +  - $ref: plda,xpressrich-pcie-common.yaml#
> +  - $ref: /schemas/interrupt-controller/msi-controller.yaml#
> +
> +properties:
> +  compatible:
> +    const: plda,xpressrich-pcie-host

What h/w is this in? I don't see why this is needed. You have the common 
schema for the IP block and then the Microchip and Starfive schemas for 
the 2 implementations.

Rob

^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [PATCH v1 1/9] dt-bindings: PCI: Add PLDA XpressRICH PCIe host common properties
  2023-07-19 10:20   ` Minda Chen
@ 2023-07-19 22:31     ` Rob Herring
  -1 siblings, 0 replies; 92+ messages in thread
From: Rob Herring @ 2023-07-19 22:31 UTC (permalink / raw)
  To: Minda Chen
  Cc: Daire McNamara, Conor Dooley, Krzysztof Kozlowski, Bjorn Helgaas,
	Lorenzo Pieralisi, Krzysztof Wilczyński,
	Emil Renner Berthing, devicetree, linux-kernel, linux-riscv,
	linux-pci, Paul Walmsley, Palmer Dabbelt, Albert Ou,
	Philipp Zabel, Mason Huo, Leyfoon Tan, Kevin Xie

On Wed, Jul 19, 2023 at 06:20:49PM +0800, Minda Chen wrote:
> Add PLDA XpressRICH PCIe host common properties dt-binding doc.
> Microchip PolarFire PCIe host using PLDA IP.
> Extract properties from Microchip PolarFire PCIe host.
> 
> Signed-off-by: Minda Chen <minda.chen@starfivetech.com>
> Reviewed-by: Hal Feng <hal.feng@starfivetech.com>
> ---
>  .../pci/plda,xpressrich-pcie-common.yaml      | 72 +++++++++++++++++++
>  1 file changed, 72 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/pci/plda,xpressrich-pcie-common.yaml
> 
> diff --git a/Documentation/devicetree/bindings/pci/plda,xpressrich-pcie-common.yaml b/Documentation/devicetree/bindings/pci/plda,xpressrich-pcie-common.yaml
> new file mode 100644
> index 000000000000..3627a846c5d1
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/pci/plda,xpressrich-pcie-common.yaml
> @@ -0,0 +1,72 @@
> +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
> +%YAML 1.2
> +---
> +$id: http://devicetree.org/schemas/pci/plda,xpressrich-pcie-common.yaml#
> +$schema: http://devicetree.org/meta-schemas/core.yaml#
> +
> +title: PLDA XpressRICH PCIe host common properties
> +
> +maintainers:
> +  - Daire McNamara <daire.mcnamara@microchip.com>
> +  - Minda Chen <minda.chen@starfivetech.com>
> +
> +description:
> +  Generic PLDA XpressRICH PCIe host common properties.
> +
> +select: false
> +
> +properties:
> +  reg:
> +    description:
> +      At least host IP register set and configuration space are
> +      required for normal controller work.
> +    maxItems: 2
> +
> +  reg-names:
> +    oneOf:
> +      - items:
> +          - const: cfg
> +          - const: apb
> +      - items:
> +          - const: host
> +          - const: cfg

This didn't exist before. Where's the reasoning?

There's no reason for 'cfg' to be in different spots and little reason 
to have different names for the host/apb space.

Rob


^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [PATCH v1 0/9] Refactoring Microchip PolarFire PCIe driver
  2023-07-19 15:26   ` Bjorn Helgaas
@ 2023-07-20  2:15     ` Minda Chen
  -1 siblings, 0 replies; 92+ messages in thread
From: Minda Chen @ 2023-07-20  2:15 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: Daire McNamara, Conor Dooley, Rob Herring, Krzysztof Kozlowski,
	Bjorn Helgaas, Lorenzo Pieralisi, Krzysztof Wilczyński,
	Emil Renner Berthing, devicetree, linux-kernel, linux-riscv,
	linux-pci, Paul Walmsley, Palmer Dabbelt, Albert Ou,
	Philipp Zabel, Mason Huo, Leyfoon Tan, Kevin Xie



On 2023/7/19 23:26, Bjorn Helgaas wrote:
> On Wed, Jul 19, 2023 at 06:20:48PM +0800, Minda Chen wrote:
>> This patchset final purpose is add PCIe driver for StarFive JH7110 SoC.
>>   dt-bindings: PCI: Add PLDA XpressRICH PCIe host common properties
>> JH7110 using PLDA XpressRICH PCIe IP. Microchip PolarFire Using the
>> same IP and have commit their codes, which are mixed with PLDA
>> controller codes and Microchip platform codes.
> 
> I guess this actually adds TWO drivers: PCIE_PLDA_PLAT_HOST (claims
> "plda,xpressrich-pcie-host" devices) and PCIE_STARFIVE_HOST (claims
> "starfive,jh7110-pcie" devices), right?
> 
Yes, plda,xpressrich-pcie-host is the generic IP controller driver, done like designware/cadence/mobiveil (pcie-(ip)-plat.c).
But I can't test it, and I am not sure whether it is needed. If it is not required, I will delete it.
>> For re-use the PLDA controller codes, I request refactoring microchip
>> codes, move PLDA common codes to PLDA files.
>> Desigware and Cadence is good example for refactoring codes.
>> 
>> So first step is extract the PLDA common codes from microchip, and
>> refactoring the microchip codes.(patch1 - 4)
>> Then add the PLDA platform codes. (patch5, 6)
>> At last, add Starfive codes. (patch7 - 9)
>> 
>> This patchset is base on v6.5-rc1
> 
> Doesn't quite apply cleanly for me:
> 
I am sorry, the driver needs the STG clock and syscon drivers, which haven't been merged to mainline yet.
It is mainly the dts (patch 9) that is rejected; this patch series must be applied first (I forgot to add this link in the cover letter):
https://patchwork.kernel.org/project/linux-riscv/cover/20230712092007.31013-1-xingyu.wu@starfivetech.com/
and this syscon patch 
https://patchwork.kernel.org/project/linux-riscv/patch/20230717023040.78860-7-xingyu.wu@starfivetech.com/
>   10:10:15 ~/linux (main)$ git checkout -b wip/minda-starfive-v1 v6.5-rc1
>   Switched to a new branch 'wip/minda-starfive-v1'
>   10:10:33 ~/linux (wip/minda-starfive-v1)$ git am m/20230719_minda_chen_refactoring_microchip_polarfire_pcie_driver.mbx
>   Applying: dt-bindings: PCI: Add PLDA XpressRICH PCIe host common properties
>   Applying: dt-bindings: PCI: microchip: Remove the PLDA common properties
>   Applying: PCI: PLDA: Get PLDA common codes from Microchip PolarFire host
>   Applying: PCI: microchip: Move PCIe driver to PLDA directory
>   Applying: dt-bindings: PLDA: Add PLDA XpressRICH PCIe host controller
>   Applying: PCI: PLDA: Add host conroller platform driver
>   Applying: dt-bindings: PCI: Add StarFive JH7110 PCIe controller
>   Applying: PCI: PLDA: starfive: Add JH7110 PCIe controller
>   Applying: riscv: dts: starfive: add PCIe dts configuration for JH7110
>   error: patch failed: arch/riscv/boot/dts/starfive/jh7110.dtsi:629
>   error: arch/riscv/boot/dts/starfive/jh7110.dtsi: patch does not apply
>   Patch failed at 0009 riscv: dts: starfive: add PCIe dts configuration for JH7110
> 
>>   dt-bindings: PCI: Add PLDA XpressRICH PCIe host common properties
>>   dt-bindings: PCI: microchip: Remove the PLDA common properties
>>   PCI: PLDA: Get PLDA common codes from Microchip PolarFire host
>>   PCI: microchip: Move PCIe driver to PLDA directory
>>   dt-bindings: PLDA: Add PLDA XpressRICH PCIe host controller
>>   PCI: PLDA: Add host conroller platform driver
> 
> "controller"
ok

>>   dt-bindings: PCI: Add StarFive JH7110 PCIe controller
>>   PCI: PLDA: starfive: Add JH7110 PCIe controller
>>   riscv: dts: starfive: add PCIe dts configuration for JH7110
> 
> Use "PCI: plda: " prefix for PLDA things that are shared across
> multiple drivers.
> 
> Use "PCI: starfive: " prefix for starfive-specific things.
> 
> This is the same as how drivers/pci/controller/dwc/* looks.
> 
ok, thanks.
> Bjorn

^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [PATCH v1 1/9] dt-bindings: PCI: Add PLDA XpressRICH PCIe host common properties
  2023-07-19 22:31     ` Rob Herring
@ 2023-07-20  6:47       ` Minda Chen
  -1 siblings, 0 replies; 92+ messages in thread
From: Minda Chen @ 2023-07-20  6:47 UTC (permalink / raw)
  To: Rob Herring
  Cc: Daire McNamara, Conor Dooley, Krzysztof Kozlowski, Bjorn Helgaas,
	Lorenzo Pieralisi, Krzysztof Wilczyński,
	Emil Renner Berthing, devicetree, linux-kernel, linux-riscv,
	linux-pci, Paul Walmsley, Palmer Dabbelt, Albert Ou,
	Philipp Zabel, Mason Huo, Leyfoon Tan, Kevin Xie



On 2023/7/20 6:31, Rob Herring wrote:
> On Wed, Jul 19, 2023 at 06:20:49PM +0800, Minda Chen wrote:
>> Add PLDA XpressRICH PCIe host common properties dt-binding doc.
>> Microchip PolarFire PCIe host using PLDA IP.
>> Extract properties from Microchip PolarFire PCIe host.
>> 
>> Signed-off-by: Minda Chen <minda.chen@starfivetech.com>
>> Reviewed-by: Hal Feng <hal.feng@starfivetech.com>
>> ---
>>  .../pci/plda,xpressrich-pcie-common.yaml      | 72 +++++++++++++++++++
>>  1 file changed, 72 insertions(+)
>>  create mode 100644 Documentation/devicetree/bindings/pci/plda,xpressrich-pcie-common.yaml
>> 
>> diff --git a/Documentation/devicetree/bindings/pci/plda,xpressrich-pcie-common.yaml b/Documentation/devicetree/bindings/pci/plda,xpressrich-pcie-common.yaml
>> new file mode 100644
>> index 000000000000..3627a846c5d1
>> --- /dev/null
>> +++ b/Documentation/devicetree/bindings/pci/plda,xpressrich-pcie-common.yaml
>> @@ -0,0 +1,72 @@
>> +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
>> +%YAML 1.2
>> +---
>> +$id: http://devicetree.org/schemas/pci/plda,xpressrich-pcie-common.yaml#
>> +$schema: http://devicetree.org/meta-schemas/core.yaml#
>> +
>> +title: PLDA XpressRICH PCIe host common properties
>> +
>> +maintainers:
>> +  - Daire McNamara <daire.mcnamara@microchip.com>
>> +  - Minda Chen <minda.chen@starfivetech.com>
>> +
>> +description:
>> +  Generic PLDA XpressRICH PCIe host common properties.
>> +
>> +select: false
>> +
>> +properties:
>> +  reg:
>> +    description:
>> +      At least host IP register set and configuration space are
>> +      required for normal controller work.
>> +    maxItems: 2
>> +
>> +  reg-names:
>> +    oneOf:
>> +      - items:
>> +          - const: cfg
>> +          - const: apb
>> +      - items:
>> +          - const: host
>> +          - const: cfg
> 
> This didn't exist before. Where's the reasoning?
> 
> There's no reason for 'cfg' to be in different spots and little reason 
> to have different names for the host/apb space.
> 
> Rob
> 
OK, I will follow the cfg and apb naming.

^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [PATCH v1 1/9] dt-bindings: PCI: Add PLDA XpressRICH PCIe host common properties
  2023-07-19 10:52     ` Krzysztof Kozlowski
@ 2023-07-20  6:59       ` Minda Chen
  -1 siblings, 0 replies; 92+ messages in thread
From: Minda Chen @ 2023-07-20  6:59 UTC (permalink / raw)
  To: Krzysztof Kozlowski, Daire McNamara, Conor Dooley, Rob Herring,
	Krzysztof Kozlowski, Bjorn Helgaas, Lorenzo Pieralisi,
	Krzysztof Wilczyński, Emil Renner Berthing
  Cc: devicetree, linux-kernel, linux-riscv, linux-pci, Paul Walmsley,
	Palmer Dabbelt, Albert Ou, Philipp Zabel, Mason Huo, Leyfoon Tan,
	Kevin Xie



On 2023/7/19 18:52, Krzysztof Kozlowski wrote:
> On 19/07/2023 12:20, Minda Chen wrote:
>> Add PLDA XpressRICH PCIe host common properties dt-binding doc.
>> Microchip PolarFire PCIe host using PLDA IP.
>> Extract properties from Microchip PolarFire PCIe host.
>> 
>> Signed-off-by: Minda Chen <minda.chen@starfivetech.com>
>> Reviewed-by: Hal Feng <hal.feng@starfivetech.com>
>> ---
>>  .../pci/plda,xpressrich-pcie-common.yaml      | 72 +++++++++++++++++++
>>  1 file changed, 72 insertions(+)
>>  create mode 100644 Documentation/devicetree/bindings/pci/plda,xpressrich-pcie-common.yaml
> 
> How is it related with existing plda,xpressrich3-axi?
> 
Yes, I just found plda,xpressrich3-axi. It is the same IP, used in the Arm Juno SoC, but there it is firmware-initialized, while the Microchip and StarFive instances are not.
Maybe I can rename this file to plda,xpressrich3-axi-common.yaml.
>> 
>> diff --git a/Documentation/devicetree/bindings/pci/plda,xpressrich-pcie-common.yaml b/Documentation/devicetree/bindings/pci/plda,xpressrich-pcie-common.yaml
>> new file mode 100644
>> index 000000000000..3627a846c5d1
>> --- /dev/null
>> +++ b/Documentation/devicetree/bindings/pci/plda,xpressrich-pcie-common.yaml
>> @@ -0,0 +1,72 @@
>> +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
>> +%YAML 1.2
>> +---
>> +$id: http://devicetree.org/schemas/pci/plda,xpressrich-pcie-common.yaml#
>> +$schema: http://devicetree.org/meta-schemas/core.yaml#
>> +
>> +title: PLDA XpressRICH PCIe host common properties
>> +
>> +maintainers:
>> +  - Daire McNamara <daire.mcnamara@microchip.com>
>> +  - Minda Chen <minda.chen@starfivetech.com>
>> +
>> +description:
>> +  Generic PLDA XpressRICH PCIe host common properties.
>> +
>> +select: false
> 
> This should not be needed.
> 
ok
>> +
>> +properties:
>> +  reg:
>> +    description:
>> +      At least host IP register set and configuration space are
> 
> "At least" does not fit here since you do not allow anything else.
> 
I will delete "At least"
>> +      required for normal controller work.
>> +    maxItems: 2
>> +
>> +  reg-names:
>> +    oneOf:
>> +      - items:
>> +          - const: cfg
>> +          - const: apb
>> +      - items:
>> +          - const: host
>> +          - const: cfg
> 
> Maybe keep similar order, so cfg followed by host?
> 
I will follow cfg and apb.
> Best regards,
> Krzysztof
> 

^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [PATCH v1 5/9] dt-bindings: PLDA: Add PLDA XpressRICH PCIe host controller
  2023-07-19 22:29     ` Rob Herring
@ 2023-07-20  7:02       ` Minda Chen
  -1 siblings, 0 replies; 92+ messages in thread
From: Minda Chen @ 2023-07-20  7:02 UTC (permalink / raw)
  To: Rob Herring
  Cc: Daire McNamara, Conor Dooley, Krzysztof Kozlowski, Bjorn Helgaas,
	Lorenzo Pieralisi, Krzysztof Wilczyński,
	Emil Renner Berthing, devicetree, linux-kernel, linux-riscv,
	linux-pci, Paul Walmsley, Palmer Dabbelt, Albert Ou,
	Philipp Zabel, Mason Huo, Leyfoon Tan, Kevin Xie



On 2023/7/20 6:29, Rob Herring wrote:
> On Wed, Jul 19, 2023 at 06:20:53PM +0800, Minda Chen wrote:
>> Add PLDA XpressRICH host controller dt-bindings. Both Microchip
>> PolarFire SoC and StarFive JH7110 SoC are using PLDA XpressRICH
>> PCIe host controller IP.
>> 
>> Signed-off-by: Minda Chen <minda.chen@starfivetech.com>
>> Reviewed-by: Hal Feng <hal.feng@starfivetech.com>
>> ---
>>  .../pci/plda,xpressrich-pcie-host.yaml        | 66 +++++++++++++++++++
>>  1 file changed, 66 insertions(+)
>>  create mode 100644 Documentation/devicetree/bindings/pci/plda,xpressrich-pcie-host.yaml
>> 
>> diff --git a/Documentation/devicetree/bindings/pci/plda,xpressrich-pcie-host.yaml b/Documentation/devicetree/bindings/pci/plda,xpressrich-pcie-host.yaml
>> new file mode 100644
>> index 000000000000..10a10862a078
>> --- /dev/null
>> +++ b/Documentation/devicetree/bindings/pci/plda,xpressrich-pcie-host.yaml
>> @@ -0,0 +1,66 @@
>> +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
>> +%YAML 1.2
>> +---
>> +$id: http://devicetree.org/schemas/pci/plda,xpressrich-pcie-host.yaml#
>> +$schema: http://devicetree.org/meta-schemas/core.yaml#
>> +
>> +title: PLDA XpressRICH PCIe host controller
>> +
>> +maintainers:
>> +  - Daire McNamara <daire.mcnamara@microchip.com>
>> +  - Minda Chen <minda.chen@starfivetech.com>
>> +
>> +allOf:
>> +  - $ref: /schemas/pci/pci-bus.yaml#
>> +  - $ref: plda,xpressrich-pcie-common.yaml#
>> +  - $ref: /schemas/interrupt-controller/msi-controller.yaml#
>> +
>> +properties:
>> +  compatible:
>> +    const: plda,xpressrich-pcie-host
> 
> What h/w is this in? I don't see why this is needed. You have the common 
> schema for the IP block and then the Microchip and Starfive schemas for 
> the 2 implementations.
> 
> Rob
OK. I will delete this patch and pcie-plda-plat.c

^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [PATCH v1 8/9] PCI: PLDA: starfive: Add JH7110 PCIe controller
  2023-07-19 16:48     ` Bjorn Helgaas
@ 2023-07-20 10:11       ` Kevin Xie
  -1 siblings, 0 replies; 92+ messages in thread
From: Kevin Xie @ 2023-07-20 10:11 UTC (permalink / raw)
  To: Bjorn Helgaas, Minda Chen
  Cc: Daire McNamara, Conor Dooley, Rob Herring, Krzysztof Kozlowski,
	Bjorn Helgaas, Lorenzo Pieralisi, Krzysztof Wilczyński,
	Emil Renner Berthing, devicetree, linux-kernel, linux-riscv,
	linux-pci, Paul Walmsley, Palmer Dabbelt, Albert Ou,
	Philipp Zabel, Mason Huo, Leyfoon Tan



On 2023/7/20 0:48, Bjorn Helgaas wrote:
> On Wed, Jul 19, 2023 at 06:20:56PM +0800, Minda Chen wrote:
>> Add StarFive JH7110 SoC PCIe controller platform
>> driver codes.
> 
> Rewrap all the commit logs to fill 75 columns or so.
> 

OK.

>>  #define PCIE_PCI_IDS_DW1		0x9c
>> -
>> +#define  IDS_CLASS_CODE_SHIFT		16
>> +#define PCI_MISC			0xB4
> 
> Surrounding code uses lower-case hex.  Make it all match.
> 

OK, I will make it all match.

>> +#define STG_SYSCON_AXI4_SLVL_ARFUNC_MASK	GENMASK(22, 8)
>> +#define STG_SYSCON_AXI4_SLVL_ARFUNC_SHIFT	8
> 
> When practical, use FIELD_GET() and FIELD_PREP() to avoid the need for
> *_SHIFT macros.
> 

Got it.
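
As an illustration of the FIELD_GET()/FIELD_PREP() pattern, using the mask from the quoted diff (the regmap call is only an assumed usage site, not the driver's actual code):

  #include <linux/bitfield.h>

  #define STG_SYSCON_AXI4_SLVL_ARFUNC_MASK	GENMASK(22, 8)

  	/* pack the field value; no separate *_SHIFT macro is needed */
  	regmap_update_bits(pcie->reg_syscon, pcie->stg_arfun,
  			   STG_SYSCON_AXI4_SLVL_ARFUNC_MASK,
  			   FIELD_PREP(STG_SYSCON_AXI4_SLVL_ARFUNC_MASK, val));

  	/* and extract it again */
  	val = FIELD_GET(STG_SYSCON_AXI4_SLVL_ARFUNC_MASK, reg);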

>> +struct starfive_jh7110_pcie {
>> +	struct plda_pcie	plda;
>> +	struct reset_control *resets;
>> +	struct clk_bulk_data *clks;
>> +	struct regmap *reg_syscon;
>> +	struct gpio_desc *power_gpio;
>> +	struct gpio_desc *reset_gpio;
>> +
>> +	u32 stg_arfun;
>> +	u32 stg_awfun;
>> +	u32 stg_rp_nep;
>> +	u32 stg_lnksta;
>> +
>> +	int num_clks;
> 
> If you indent one member with tabs, e.g., "struct plda_pcie        plda",
> they should all be indented to match.
> 

OK, I will indent that member with white space.

>> + * The BAR0/1 of bridge should be hidden during enumeration to
>> + * avoid the sizing and resource allocation by PCIe core.
>> + */
>> +static bool starfive_pcie_hide_rc_bar(struct pci_bus *bus, unsigned int  devfn,
>> +				      int offset)
>> +{
>> +	if (pci_is_root_bus(bus) && !devfn &&
>> +	    (offset == PCI_BASE_ADDRESS_0 || offset == PCI_BASE_ADDRESS_1))
>> +		return true;
>> +
>> +	return false;
>> +}
>> +
>> +int starfive_pcie_config_write(struct pci_bus *bus, unsigned int devfn,
>> +			       int where, int size, u32 value)
>> +{
>> +	if (starfive_pcie_hide_rc_bar(bus, devfn, where))
>> +		return PCIBIOS_BAD_REGISTER_NUMBER;
> 
> I think you are trying to present BARs 0 & 1 as unimplemented.  Such BARs
> are hardwired to zero, so you should make them behave that way (both
> read and write).  Many callers of config accessors don't check the
> return value, so I don't think it's reliable to just return
> PCIBIOS_BAD_REGISTER_NUMBER.
> 

This is a hardware defect: we did not hardwire those BARs to zero,
and they are still configurable by software.
We have to add this filter function as a workaround.
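
A minimal sketch of the read-as-zero behaviour Bjorn asks for, reusing starfive_pcie_hide_rc_bar() and assuming the driver otherwise falls through to the generic config accessors:

  static int starfive_pcie_config_read(struct pci_bus *bus, unsigned int devfn,
  				     int where, int size, u32 *value)
  {
  	/* the hidden BARs read back as zero, as a hardwired BAR would */
  	if (starfive_pcie_hide_rc_bar(bus, devfn, where)) {
  		*value = 0;
  		return PCIBIOS_SUCCESSFUL;
  	}

  	return pci_generic_config_read(bus, devfn, where, size, value);
  }

  static int starfive_pcie_config_write(struct pci_bus *bus, unsigned int devfn,
  				      int where, int size, u32 value)
  {
  	/* writes to the hidden BARs are silently discarded */
  	if (starfive_pcie_hide_rc_bar(bus, devfn, where))
  		return PCIBIOS_SUCCESSFUL;

  	return pci_generic_config_write(bus, devfn, where, size, value);
  }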

>> +static int starfive_pcie_is_link_up(struct starfive_jh7110_pcie *pcie)
>> +{
>> +	struct device *dev = pcie->plda.dev;
>> +	int ret;
>> +	u32 stg_reg_val;
>> +
>> +	/* 100ms timeout value should be enough for Gen1/2 training */
>> +	ret = regmap_read_poll_timeout(pcie->reg_syscon,
>> +				       pcie->stg_lnksta,
>> +				       stg_reg_val,
>> +				       stg_reg_val & DATA_LINK_ACTIVE,
>> +				       10 * 1000, 100 * 1000);
>> +
>> +	/* If the link is down (no device in slot), then exit. */
>> +	if (ret == -ETIMEDOUT) {
>> +		dev_info(dev, "Port link down, exit.\n");
>> +		return 0;
>> +	} else if (ret == 0) {
>> +		dev_info(dev, "Port link up.\n");
>> +		return 1;
>> +	}
> 
> Please copy the naming and style of the "*_pcie_link_up()" functions
> in other drivers.  These are boolean functions with no side effects,
> including no timeouts.
> 
> Some drivers have "*wait_for_link()" functions if polling is needed.
> 

OK, I will refer to other drivers in this part.
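
A sketch of the split Bjorn describes, with names and register usage assumed from the quoted code:

  /* side-effect-free predicate, as in other *_pcie_link_up() helpers */
  static bool starfive_pcie_link_up(struct starfive_jh7110_pcie *pcie)
  {
  	u32 val;

  	regmap_read(pcie->reg_syscon, pcie->stg_lnksta, &val);

  	return !!(val & DATA_LINK_ACTIVE);
  }

  /* separate polling helper, used only where waiting is actually needed */
  static int starfive_pcie_wait_for_link(struct starfive_jh7110_pcie *pcie)
  {
  	u32 val;

  	/* 100ms should be enough for Gen1/2 training */
  	return regmap_read_poll_timeout(pcie->reg_syscon, pcie->stg_lnksta,
  					val, val & DATA_LINK_ACTIVE,
  					10 * 1000, 100 * 1000);
  }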

>> +		return dev_err_probe(dev, ret,
>> +			"failed to initialize pcie phy\n");
> 
> Driver messages should match (all capitalized or none capitalized).
> 

OK, I will make them all match.

>> +	/* Enable root port */
> 
> Superfluous comment, since the function name says the same.
> 

I will delete this comment.

>> +	plda_pcie_enable_root_port(plda);
> 
>> +	/* Ensure that PERST has been asserted for at least 100 ms */
>> +	msleep(300);
>> +	gpiod_set_value_cansleep(pcie->reset_gpio, 0);
> 
> At least 100 ms, but you sleep *300* ms?  This is probably related to
> https://lore.kernel.org/r/20230718155515.GA483233@bhelgaas
> 
> Please include a comment with the source of the delay value.  I assume
> it's T_PVPERL and T_PERST-CLK from the PCIe CEM spec.  This way we can
> someday share those #defines across drivers.
> 

Yes, the delay value here is T_PVPERL from PCIe CEM spec r2.0 (Table 2-4).
At first we set a 100ms delay according to section 2.2 of the spec:
"After there has been time (TPVPERL) for the power and clock to become stable,
PERST# is deasserted high and the PCI Express functions can start up."

However, in compatibility testing with several NVMe SSDs, we found that
the Lenovo Thinklife ST8000 NVMe cannot get ready within 100ms;
it actually needs almost 200ms.
Thus, we increased the T_PVPERL value to 300ms for better device compatibility.

We will use a macro to define T_PVPERL and add a comment giving its source.
If the 300ms compatibility delay is not reasonable, we can revert it to 100ms.
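
A sketch of what that macro could look like; the name PCIE_T_PVPERL_MS is an assumption here, chosen so it could later be shared across drivers:

  /* T_PVPERL: power stable to PERST# inactive, PCIe CEM r2.0, section 2.2 */
  #define PCIE_T_PVPERL_MS	100

  	msleep(PCIE_T_PVPERL_MS);
  	gpiod_set_value_cansleep(pcie->reset_gpio, 0);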

>> +#ifdef CONFIG_PM_SLEEP
>> +static int __maybe_unused starfive_pcie_suspend_noirq(struct device *dev)
> 
> I think you can dispense with some of these #ifdefs and the
> __maybe_unused as in
> https://lore.kernel.org/all/20220720224829.GA1667002@bhelgaas/
> 

Thanks, I will refer to your patch.
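
For illustration, the #ifdef-free style from the linked discussion; the function bodies are assumed placeholders, only the wiring matters. The compiler always sees the callbacks, and dead-code elimination drops them when CONFIG_PM_SLEEP is off:

  static int starfive_pcie_suspend_noirq(struct device *dev)
  {
  	struct starfive_jh7110_pcie *pcie = dev_get_drvdata(dev);

  	clk_bulk_disable_unprepare(pcie->num_clks, pcie->clks);

  	return 0;
  }

  static int starfive_pcie_resume_noirq(struct device *dev)
  {
  	struct starfive_jh7110_pcie *pcie = dev_get_drvdata(dev);

  	return clk_bulk_prepare_enable(pcie->num_clks, pcie->clks);
  }

  static const struct dev_pm_ops starfive_pcie_pm_ops = {
  	NOIRQ_SYSTEM_SLEEP_PM_OPS(starfive_pcie_suspend_noirq,
  				  starfive_pcie_resume_noirq)
  };

  	/* in the platform_driver: pm_sleep_ptr() replaces the #ifdef */
  	.driver = {
  		.pm = pm_sleep_ptr(&starfive_pcie_pm_ops),
  	},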

>> +{
>> +	struct starfive_jh7110_pcie *pcie = dev_get_drvdata(dev);
>> +
>> +	if (!pcie)
>> +		return 0;
> 
> How can this happen?  If we're only detecting memory corruption, it's
> not worth it.
> 
> Bjorn

OK, I will delete this condition.

^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [PATCH v1 4/9] PCI: microchip: Move PCIe driver to PLDA directory
  2023-07-19 10:20   ` Minda Chen
@ 2023-07-20 11:07     ` Conor Dooley
  -1 siblings, 0 replies; 92+ messages in thread
From: Conor Dooley @ 2023-07-20 11:07 UTC (permalink / raw)
  To: Minda Chen
  Cc: Daire McNamara, Conor Dooley, Rob Herring, Krzysztof Kozlowski,
	Bjorn Helgaas, Lorenzo Pieralisi, Krzysztof Wilczyński,
	Emil Renner Berthing, devicetree, linux-kernel, linux-riscv,
	linux-pci, Paul Walmsley, Palmer Dabbelt, Albert Ou,
	Philipp Zabel, Mason Huo, Leyfoon Tan, Kevin Xie

[-- Attachment #1: Type: text/plain, Size: 4453 bytes --]

Hey Minda,

On Wed, Jul 19, 2023 at 06:20:52PM +0800, Minda Chen wrote:
> Move Microchip specific platform codes to PLDA directory.
> Including clock init, interrupt event handle and platform
> init codes.
> 
> Signed-off-by: Minda Chen <minda.chen@starfivetech.com>
> ---
>  MAINTAINERS                                   |   4 +-
>  drivers/pci/controller/Kconfig                |   8 -
>  drivers/pci/controller/Makefile               |   1 -
>  drivers/pci/controller/plda/Kconfig           |  10 +-
>  drivers/pci/controller/plda/Makefile          |   1 +
>  .../{ => plda}/pcie-microchip-host.c          | 594 ++----------------

I think that you should squash this with the previous patch, rather than
duplicate the code and then de-duplicate it, so that it is more obvious
what is being extracted.

>  6 files changed, 55 insertions(+), 563 deletions(-)
>  rename drivers/pci/controller/{ => plda}/pcie-microchip-host.c (50%)
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 3b3c4d8808ae..f02618c2bdf5 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -16503,7 +16503,7 @@ M:	Daire McNamara <daire.mcnamara@microchip.com>
>  L:	linux-pci@vger.kernel.org
>  S:	Supported
>  F:	Documentation/devicetree/bindings/pci/microchip*
> -F:	drivers/pci/controller/*microchip*
> +F:	drivers/pci/controller/plda/*microchip*
>  
>  PCIE DRIVER FOR QUALCOMM MSM
>  M:	Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
> @@ -18287,7 +18287,7 @@ F:	drivers/char/hw_random/mpfs-rng.c
>  F:	drivers/clk/microchip/clk-mpfs*.c
>  F:	drivers/i2c/busses/i2c-microchip-corei2c.c
>  F:	drivers/mailbox/mailbox-mpfs.c
> -F:	drivers/pci/controller/pcie-microchip-host.c
> +F:	drivers/pci/controller/plda/pcie-microchip-host.c
>  F:	drivers/pwm/pwm-microchip-core.c
>  F:	drivers/reset/reset-mpfs.c
>  F:	drivers/rtc/rtc-mpfs.c
> diff --git a/drivers/pci/controller/Kconfig b/drivers/pci/controller/Kconfig
> index a106dabcb8dc..107cdb69e15c 100644
> --- a/drivers/pci/controller/Kconfig
> +++ b/drivers/pci/controller/Kconfig
> @@ -216,14 +216,6 @@ config PCIE_MT7621
>  	help
>  	  This selects a driver for the MediaTek MT7621 PCIe Controller.
>  
> -config PCIE_MICROCHIP_HOST
> -	bool "Microchip AXI PCIe controller"
> -	depends on PCI_MSI && OF
> -	select PCI_HOST_COMMON
> -	help
> -	  Say Y here if you want kernel to support the Microchip AXI PCIe
> -	  Host Bridge driver.
> -
>  config PCI_HYPERV_INTERFACE
>  	tristate "Microsoft Hyper-V PCI Interface"
>  	depends on ((X86 && X86_64) || ARM64) && HYPERV && PCI_MSI
> diff --git a/drivers/pci/controller/Makefile b/drivers/pci/controller/Makefile
> index aa0a691ebcbc..93236dc97b21 100644
> --- a/drivers/pci/controller/Makefile
> +++ b/drivers/pci/controller/Makefile
> @@ -32,7 +32,6 @@ obj-$(CONFIG_PCIE_ROCKCHIP_EP) += pcie-rockchip-ep.o
>  obj-$(CONFIG_PCIE_ROCKCHIP_HOST) += pcie-rockchip-host.o
>  obj-$(CONFIG_PCIE_MEDIATEK) += pcie-mediatek.o
>  obj-$(CONFIG_PCIE_MEDIATEK_GEN3) += pcie-mediatek-gen3.o
> -obj-$(CONFIG_PCIE_MICROCHIP_HOST) += pcie-microchip-host.o
>  obj-$(CONFIG_VMD) += vmd.o
>  obj-$(CONFIG_PCIE_BRCMSTB) += pcie-brcmstb.o
>  obj-$(CONFIG_PCI_LOONGSON) += pci-loongson.o
> diff --git a/drivers/pci/controller/plda/Kconfig b/drivers/pci/controller/plda/Kconfig
> index 37e17326671b..fb274976b84b 100644
> --- a/drivers/pci/controller/plda/Kconfig
> +++ b/drivers/pci/controller/plda/Kconfig
> @@ -4,8 +4,16 @@ menu "PLDA PCIe controllers support"
>  	depends on PCI
>  
>  config PCIE_PLDA_HOST
> -	bool "PLDA PCIe host controller"
> +	bool
>  	depends on OF && PCI_MSI

Is this okay w.r.t. unmet dependencies? IOW, if PCI_MSI is disabled,
PCIE_MICROCHIP_HOST can still select PCIE_PLDA_HOST, no?
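
One conventional way to avoid that (select ignores the dependencies of the selected symbol) is to leave the hidden symbol free of dependencies and put the depends lines on the visible symbols that select it; a sketch using the symbols from this series:

  config PCIE_PLDA_HOST
  	bool
  	select IRQ_DOMAIN

  config PCIE_MICROCHIP_HOST
  	bool "Microchip AXI PCIe controller"
  	depends on PCI_MSI && OF
  	select PCI_HOST_COMMON
  	select PCIE_PLDA_HOST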

>  	select IRQ_DOMAIN
>  
> +config PCIE_MICROCHIP_HOST
> +	bool "Microchip AXI PCIe controller"
> +	select PCI_HOST_COMMON
> +	select PCIE_PLDA_HOST
> +	help
> +	  Say Y here if you want kernel to support the Microchip AXI PCIe
> +	  Host Bridge driver.
> +
>  endmenu
> diff --git a/drivers/pci/controller/plda/Makefile b/drivers/pci/controller/plda/Makefile
> index 79dbae655c2b..4340ab007f44 100644
> --- a/drivers/pci/controller/plda/Makefile
> +++ b/drivers/pci/controller/plda/Makefile
> @@ -1,2 +1,3 @@
>  # SPDX-License-Identifier: GPL-2.0
>  obj-$(CONFIG_PCIE_PLDA_HOST) += pcie-plda-host.o
> +obj-$(CONFIG_PCIE_MICROCHIP_HOST) += pcie-microchip-host.o

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 228 bytes --]

^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [PATCH v1 8/9] PCI: PLDA: starfive: Add JH7110 PCIe controller
  2023-07-19 10:20   ` Minda Chen
@ 2023-07-20 11:14     ` Conor Dooley
  -1 siblings, 0 replies; 92+ messages in thread
From: Conor Dooley @ 2023-07-20 11:14 UTC (permalink / raw)
  To: Minda Chen
  Cc: Daire McNamara, Conor Dooley, Rob Herring, Krzysztof Kozlowski,
	Bjorn Helgaas, Lorenzo Pieralisi, Krzysztof Wilczyński,
	Emil Renner Berthing, devicetree, linux-kernel, linux-riscv,
	linux-pci, Paul Walmsley, Palmer Dabbelt, Albert Ou,
	Philipp Zabel, Mason Huo, Leyfoon Tan, Kevin Xie

[-- Attachment #1: Type: text/plain, Size: 2082 bytes --]

On Wed, Jul 19, 2023 at 06:20:56PM +0800, Minda Chen wrote:
> Add StarFive JH7110 SoC PCIe controller platform
> driver codes.
> 
> Signed-off-by: Minda Chen <minda.chen@starfivetech.com>
> Reviewed-by: Mason Huo <mason.huo@starfivetech.com>
> ---
>  MAINTAINERS                                 |   7 +
>  drivers/pci/controller/plda/Kconfig         |   8 +
>  drivers/pci/controller/plda/Makefile        |   1 +
>  drivers/pci/controller/plda/pcie-plda.h     |  58 ++-
>  drivers/pci/controller/plda/pcie-starfive.c | 415 ++++++++++++++++++++
>  5 files changed, 487 insertions(+), 2 deletions(-)
>  create mode 100644 drivers/pci/controller/plda/pcie-starfive.c
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index f02618c2bdf5..b88a54a24ae5 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -20356,6 +20356,13 @@ S:	Supported
>  F:	Documentation/devicetree/bindings/watchdog/starfive*
>  F:	drivers/watchdog/starfive-wdt.c
>  
> +STARFIVE JH71x0 PCIE DRIVER
> +M:	Minda Chen <minda.chen@starfivetech.com>
> +L:	linux-pci@vger.kernel.org
> +S:	Supported
> +F:	Documentation/devicetree/bindings/pci/starfive*
> +F:	drivers/pci/controller/plda/pcie-starfive.c
> +
>  STATIC BRANCH/CALL
>  M:	Peter Zijlstra <peterz@infradead.org>
>  M:	Josh Poimboeuf <jpoimboe@kernel.org>
> diff --git a/drivers/pci/controller/plda/Kconfig b/drivers/pci/controller/plda/Kconfig
> index a3c790545843..eaf72954da9f 100644
> --- a/drivers/pci/controller/plda/Kconfig
> +++ b/drivers/pci/controller/plda/Kconfig
> @@ -24,4 +24,12 @@ config PCIE_MICROCHIP_HOST
>  	  Say Y here if you want kernel to support the Microchip AXI PCIe
>  	  Host Bridge driver.
>  
> +config PCIE_STARFIVE_HOST
> +	tristate "StarFive PCIe host controller"
> +	select PCIE_PLDA_HOST

Ditto here, I think this suffers from the same issue, although it's
probably only really randconfigs that'll trigger it.

> +	help
> +	  Say Y here if you want to support the StarFive PCIe controller
> +	  in host mode. StarFive PCIe controller uses PLDA PCIe
> +	  core.
> +
>  endmenu

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 228 bytes --]

^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [PATCH v1 0/9] Refactoring Microchip PolarFire PCIe driver
  2023-07-20  2:15     ` Minda Chen
@ 2023-07-20 12:12       ` Conor Dooley
  -1 siblings, 0 replies; 92+ messages in thread
From: Conor Dooley @ 2023-07-20 12:12 UTC (permalink / raw)
  To: Minda Chen
  Cc: Bjorn Helgaas, Daire McNamara, Conor Dooley, Rob Herring,
	Krzysztof Kozlowski, Bjorn Helgaas, Lorenzo Pieralisi,
	Krzysztof Wilczyński, Emil Renner Berthing, devicetree,
	linux-kernel, linux-riscv, linux-pci, Paul Walmsley,
	Palmer Dabbelt, Albert Ou, Philipp Zabel, Mason Huo, Leyfoon Tan,
	Kevin Xie

[-- Attachment #1: Type: text/plain, Size: 926 bytes --]

On Thu, Jul 20, 2023 at 10:15:51AM +0800, Minda Chen wrote:
> On 2023/7/19 23:26, Bjorn Helgaas wrote:
> > On Wed, Jul 19, 2023 at 06:20:48PM +0800, Minda Chen wrote:

> >> This patchset is based on v6.5-rc1
> > 
> > Doesn't quite apply cleanly for me:
> > 
> I am sorry. The driver needs the STG clock and syscon drivers, which haven't been merged to mainline yet.
> Mainly the dts (patch 9) is rejected when applying; the series below must be applied first. (I forgot to add these links in the cover letter.)
> https://patchwork.kernel.org/project/linux-riscv/cover/20230712092007.31013-1-xingyu.wu@starfivetech.com/
> and this syscon patch 
> https://patchwork.kernel.org/project/linux-riscv/patch/20230717023040.78860-7-xingyu.wu@starfivetech.com/

You could detach the dts patch from the series & send it independently once
everything it depends on is in place. I'm going to pick up both of the
patches you've linked for v6.6 in the next day or two.

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 228 bytes --]

^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [PATCH v1 4/9] PCI: microchip: Move PCIe driver to PLDA directory
  2023-07-19 10:20   ` Minda Chen
@ 2023-07-20 12:26     ` Conor Dooley
  -1 siblings, 0 replies; 92+ messages in thread
From: Conor Dooley @ 2023-07-20 12:26 UTC (permalink / raw)
  To: Minda Chen
  Cc: Daire McNamara, Conor Dooley, Rob Herring, Krzysztof Kozlowski,
	Bjorn Helgaas, Lorenzo Pieralisi, Krzysztof Wilczyński,
	Emil Renner Berthing, devicetree, linux-kernel, linux-riscv,
	linux-pci, Paul Walmsley, Palmer Dabbelt, Albert Ou,
	Philipp Zabel, Mason Huo, Leyfoon Tan, Kevin Xie

[-- Attachment #1: Type: text/plain, Size: 509 bytes --]

Hey Minda,

On Wed, Jul 19, 2023 at 06:20:52PM +0800, Minda Chen wrote:
> Move Microchip specific platform codes to PLDA directory.
> Including clock init, interrupt event handle and platform
> init codes.
> 
> Signed-off-by: Minda Chen <minda.chen@starfivetech.com>

Something else that I noticed, looking at what is not in the diff here:
is everything under the "/* PCIe Controller Phy Regs */" comment
that remains in the microchip driver not also common to the plda IP?

Thanks,
Conor.

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 228 bytes --]

^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [PATCH v1 8/9] PCI: PLDA: starfive: Add JH7110 PCIe controller
  2023-07-20 10:11       ` Kevin Xie
@ 2023-07-20 16:15         ` Bjorn Helgaas
  -1 siblings, 0 replies; 92+ messages in thread
From: Bjorn Helgaas @ 2023-07-20 16:15 UTC (permalink / raw)
  To: Kevin Xie
  Cc: Minda Chen, Daire McNamara, Conor Dooley, Rob Herring,
	Krzysztof Kozlowski, Bjorn Helgaas, Lorenzo Pieralisi,
	Krzysztof Wilczyński, Emil Renner Berthing, devicetree,
	linux-kernel, linux-riscv, linux-pci, Paul Walmsley,
	Palmer Dabbelt, Albert Ou, Philipp Zabel, Mason Huo, Leyfoon Tan

On Thu, Jul 20, 2023 at 06:11:59PM +0800, Kevin Xie wrote:
> On 2023/7/20 0:48, Bjorn Helgaas wrote:
> > On Wed, Jul 19, 2023 at 06:20:56PM +0800, Minda Chen wrote:
> >> Add StarFive JH7110 SoC PCIe controller platform
> >> driver codes.

> >> + * The BAR0/1 of bridge should be hidden during enumeration to
> >> + * avoid the sizing and resource allocation by PCIe core.
> >> + */
> >> +static bool starfive_pcie_hide_rc_bar(struct pci_bus *bus, unsigned int  devfn,
> >> +				      int offset)
> >> +{
> >> +	if (pci_is_root_bus(bus) && !devfn &&
> >> +	    (offset == PCI_BASE_ADDRESS_0 || offset == PCI_BASE_ADDRESS_1))
> >> +		return true;
> >> +
> >> +	return false;
> >> +}
> >> +
> >> +int starfive_pcie_config_write(struct pci_bus *bus, unsigned int devfn,
> >> +			       int where, int size, u32 value)
> >> +{
> >> +	if (starfive_pcie_hide_rc_bar(bus, devfn, where))
> >> +		return PCIBIOS_BAD_REGISTER_NUMBER;
> > 
> > I think you are trying to present BARs 0 & 1 as unimplemented.  Such BARs
> > are hardwired to zero, so you should make them behave that way (both
> > read and write).  Many callers of config accessors don't check the
> > return value, so I don't think it's reliable to just return
> > PCIBIOS_BAD_REGISTER_NUMBER.
> 
> This is a hardware defect: we did not hardwire those BARs to
> zero, and they are still configurable by software.  We have to add
> this filter function as a workaround.

Yes.  My point is that this only affects the write path, and the read
probably does not read 0 as it should.  This means lspci will show the
wrong thing, and the PCI core will try to size the BAR when it doesn't
need to.  I haven't looked at the BAR sizing code; it might even come
up with a bogus size and address, when it *should* just conclude the
BAR doesn't exist at all.

> >> +	/* Ensure that PERST has been asserted for at least 100 ms */
> >> +	msleep(300);
> >> +	gpiod_set_value_cansleep(pcie->reset_gpio, 0);
> > 
> > At least 100 ms, but you sleep *300* ms?  This is probably related to
> > https://lore.kernel.org/r/20230718155515.GA483233@bhelgaas
> > 
> > Please include a comment with the source of the delay value.  I assume
> > it's T_PVPERL and T_PERST-CLK from the PCIe CEM spec.  This way we can
> > someday share those #defines across drivers.
> 
> Yes, the delay value here is T_PVPERL from PCIe CEM spec r2.0 (Table
> 2-4).  At first we set a 100ms delay according to section 2.2
> of the spec: "After there has been time (TPVPERL) for the power and
> clock to become stable, PERST# is deasserted high and the PCI
> Express functions can start up."
> 
> However, in compatibility testing with several NVMe SSDs, we
> found that the Lenovo Thinklife ST8000 NVMe cannot get ready within
> 100ms; it actually needs almost 200ms.  Thus, we increased the
> T_PVPERL value to 300ms for better device compatibility.
>
> We will use a macro to define T_PVPERL and add a comment giving its
> source.  If the 300ms compatibility delay is not reasonable, we can
> revert it to 100ms.

Thanks for this valuable information!  This NVMe issue potentially
affects many similar drivers, and we may need a more generic fix so
this device works well with all of them.

T_PVPERL is defined to start when power is stable.  Do you have a way
to accurately determine that point?  I'm guessing this:

  gpiod_set_value_cansleep(pcie->power_gpio, 1)

turns the power on?  But of course that doesn't mean it is instantly
stable.  Maybe your testing is telling you that your driver should
have a hardware-specific 200ms delay to wait for power to become
stable, followed by the standard 100ms for T_PVPERL?
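
A sketch of that sequencing; the 200ms settle time is only the figure from the NVMe testing above, and PCIE_T_PVPERL_MS is the assumed 100ms macro, neither is from the driver as posted:

  	gpiod_set_value_cansleep(pcie->power_gpio, 1);
  	/* board/device specific: give the rails time to become stable */
  	msleep(200);
  	/* then the standard T_PVPERL before deasserting PERST# */
  	msleep(PCIE_T_PVPERL_MS);
  	gpiod_set_value_cansleep(pcie->reset_gpio, 0);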

Bjorn

^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [PATCH v1 8/9] PCI: PLDA: starfive: Add JH7110 PCIe controller
  2023-07-20 11:14     ` Conor Dooley
@ 2023-07-21  1:03       ` Minda Chen
  -1 siblings, 0 replies; 92+ messages in thread
From: Minda Chen @ 2023-07-21  1:03 UTC (permalink / raw)
  To: Conor Dooley
  Cc: Daire McNamara, Conor Dooley, Rob Herring, Krzysztof Kozlowski,
	Bjorn Helgaas, Lorenzo Pieralisi, Krzysztof Wilczyński,
	Emil Renner Berthing, devicetree, linux-kernel, linux-riscv,
	linux-pci, Paul Walmsley, Palmer Dabbelt, Albert Ou,
	Philipp Zabel, Mason Huo, Leyfoon Tan, Kevin Xie



On 2023/7/20 19:14, Conor Dooley wrote:
> On Wed, Jul 19, 2023 at 06:20:56PM +0800, Minda Chen wrote:
>> Add StarFive JH7110 SoC PCIe controller platform
>> driver codes.
>> 
>> Signed-off-by: Minda Chen <minda.chen@starfivetech.com>
>> Reviewed-by: Mason Huo <mason.huo@starfivetech.com>
>> ---
>>  MAINTAINERS                                 |   7 +
>>  drivers/pci/controller/plda/Kconfig         |   8 +
>>  drivers/pci/controller/plda/Makefile        |   1 +
>>  drivers/pci/controller/plda/pcie-plda.h     |  58 ++-
>>  drivers/pci/controller/plda/pcie-starfive.c | 415 ++++++++++++++++++++
>>  5 files changed, 487 insertions(+), 2 deletions(-)
>>  create mode 100644 drivers/pci/controller/plda/pcie-starfive.c
>> 
>> diff --git a/MAINTAINERS b/MAINTAINERS
>> index f02618c2bdf5..b88a54a24ae5 100644
>> --- a/MAINTAINERS
>> +++ b/MAINTAINERS
>> @@ -20356,6 +20356,13 @@ S:	Supported
>>  F:	Documentation/devicetree/bindings/watchdog/starfive*
>>  F:	drivers/watchdog/starfive-wdt.c
>>  
>> +STARFIVE JH71x0 PCIE DRIVER
>> +M:	Minda Chen <minda.chen@starfivetech.com>
>> +L:	linux-pci@vger.kernel.org
>> +S:	Supported
>> +F:	Documentation/devicetree/bindings/pci/starfive*
>> +F:	drivers/pci/controller/plda/pcie-starfive.c
>> +
>>  STATIC BRANCH/CALL
>>  M:	Peter Zijlstra <peterz@infradead.org>
>>  M:	Josh Poimboeuf <jpoimboe@kernel.org>
>> diff --git a/drivers/pci/controller/plda/Kconfig b/drivers/pci/controller/plda/Kconfig
>> index a3c790545843..eaf72954da9f 100644
>> --- a/drivers/pci/controller/plda/Kconfig
>> +++ b/drivers/pci/controller/plda/Kconfig
>> @@ -24,4 +24,12 @@ config PCIE_MICROCHIP_HOST
>>  	  Say Y here if you want kernel to support the Microchip AXI PCIe
>>  	  Host Bridge driver.
>>  
>> +config PCIE_STARFIVE_HOST
>> +	tristate "StarFive PCIe host controller"
>> +	select PCIE_PLDA_HOST
> 
> Ditto here, I think this suffers from the same issue, although it's
> probably only really randconfigs that'll trigger it.
> 
OK, thanks. I will change it in the same way as the Microchip one.
>> +	help
>> +	  Say Y here if you want to support the StarFive PCIe controller
>> +	  in host mode. StarFive PCIe controller uses PLDA PCIe
>> +	  core.
>> +
>>  endmenu

^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [PATCH v1 4/9] PCI: microchip: Move PCIe driver to PLDA directory
  2023-07-20 12:26     ` Conor Dooley
@ 2023-07-21  1:12       ` Minda Chen
  -1 siblings, 0 replies; 92+ messages in thread
From: Minda Chen @ 2023-07-21  1:12 UTC (permalink / raw)
  To: Conor Dooley
  Cc: Daire McNamara, Conor Dooley, Rob Herring, Krzysztof Kozlowski,
	Bjorn Helgaas, Lorenzo Pieralisi, Krzysztof Wilczyński,
	Emil Renner Berthing, devicetree, linux-kernel, linux-riscv,
	linux-pci, Paul Walmsley, Palmer Dabbelt, Albert Ou,
	Philipp Zabel, Mason Huo, Leyfoon Tan, Kevin Xie



On 2023/7/20 20:26, Conor Dooley wrote:
> Hey Minda,
> 
> On Wed, Jul 19, 2023 at 06:20:52PM +0800, Minda Chen wrote:
>> Move Microchip specific platform codes to PLDA directory.
>> Including clock init, interrupt event handle and platform
>> init codes.
>> 
>> Signed-off-by: Minda Chen <minda.chen@starfivetech.com>
> 
> Something else that I noticed, looking at what is not in the diff here,
> but is everything under the "/* PCIe Controller Phy Regs */" comment
> that remains in the microchip driver not also common to the plda IP?
> 
> Thanks,
> Conor.
I changed it to "PCIe Controller Reg" in patch 8 ... it does not look like the PHY registers.
Maybe I should not change it. And I will squash this patch with patch 3.

^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [PATCH v1 0/9] Refactoring Microchip PolarFire PCIe driver
  2023-07-20 12:12       ` Conor Dooley
@ 2023-07-21  9:34         ` Minda Chen
  -1 siblings, 0 replies; 92+ messages in thread
From: Minda Chen @ 2023-07-21  9:34 UTC (permalink / raw)
  To: Conor Dooley
  Cc: Bjorn Helgaas, Daire McNamara, Conor Dooley, Rob Herring,
	Krzysztof Kozlowski, Bjorn Helgaas, Lorenzo Pieralisi,
	Krzysztof Wilczyński, Emil Renner Berthing, devicetree,
	linux-kernel, linux-riscv, linux-pci, Paul Walmsley,
	Palmer Dabbelt, Albert Ou, Philipp Zabel, Mason Huo, Leyfoon Tan,
	Kevin Xie



On 2023/7/20 20:12, Conor Dooley wrote:
> On Thu, Jul 20, 2023 at 10:15:51AM +0800, Minda Chen wrote:
>> On 2023/7/19 23:26, Bjorn Helgaas wrote:
>> > On Wed, Jul 19, 2023 at 06:20:48PM +0800, Minda Chen wrote:
> 
>> >> This patchset is based on v6.5-rc1
>> > 
>> > Doesn't quite apply cleanly for me:
>> > 
>> I am sorry. The driver needs the STG clock and syscon drivers, which haven't been merged to mainline yet.
>> Mainly the dts (patch 9) is rejected when applying; the series below must be applied first. (I forgot to add these links in the cover letter.)
>> https://patchwork.kernel.org/project/linux-riscv/cover/20230712092007.31013-1-xingyu.wu@starfivetech.com/
>> and this syscon patch 
>> https://patchwork.kernel.org/project/linux-riscv/patch/20230717023040.78860-7-xingyu.wu@starfivetech.com/
> 
> You could detach the dts patch from the series & send it independently once
> everything it depends on is in place. I'm going to pick up both of the
> patches you've linked for v6.6 in the next day or two.
Thanks very much. Yes, I have considered removing the dts patch, but I think the maintainers
will also want to review the dts file, so I added it....

^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [PATCH v1 0/9] Refactoring Microchip PolarFire PCIe driver
  2023-07-20 12:12       ` Conor Dooley
@ 2023-07-21  9:55         ` Minda Chen
  -1 siblings, 0 replies; 92+ messages in thread
From: Minda Chen @ 2023-07-21  9:55 UTC (permalink / raw)
  To: Conor Dooley
  Cc: Bjorn Helgaas, Daire McNamara, Conor Dooley, Rob Herring,
	Krzysztof Kozlowski, Bjorn Helgaas, Lorenzo Pieralisi,
	Krzysztof Wilczyński, Emil Renner Berthing, devicetree,
	linux-kernel, linux-riscv, linux-pci, Paul Walmsley,
	Palmer Dabbelt, Albert Ou, Philipp Zabel, Mason Huo, Leyfoon Tan,
	Kevin Xie



On 2023/7/20 20:12, Conor Dooley wrote:
> On Thu, Jul 20, 2023 at 10:15:51AM +0800, Minda Chen wrote:
>> On 2023/7/19 23:26, Bjorn Helgaas wrote:
>> > On Wed, Jul 19, 2023 at 06:20:48PM +0800, Minda Chen wrote:
> 
>> >> This patchset is based on v6.5-rc1
>> > 
>> > Doesn't quite apply cleanly for me:
>> > 
>> I am sorry. The driver needs the STG clock and syscon drivers, which haven't been merged to mainline yet.
>> Mainly the dts (patch 9) is rejected when applying; the series below must be applied first. (I forgot to add these links in the cover letter.)
>> https://patchwork.kernel.org/project/linux-riscv/cover/20230712092007.31013-1-xingyu.wu@starfivetech.com/
>> and this syscon patch 
>> https://patchwork.kernel.org/project/linux-riscv/patch/20230717023040.78860-7-xingyu.wu@starfivetech.com/
> 
> You could detach the dts patch from the series & send it independently once
> everything it depends on is in place. I'm going to pick up both of the
> patches you've linked for v6.6 in the next day or two.
Thanks very much. I have considered removing the dts patch, but I think the maintainers
will also review the dts patch, so I included it.....

^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [PATCH v1 8/9] PCI: PLDA: starfive: Add JH7110 PCIe controller
  2023-07-20 16:15         ` Bjorn Helgaas
@ 2023-07-24 10:48           ` Kevin Xie
  -1 siblings, 0 replies; 92+ messages in thread
From: Kevin Xie @ 2023-07-24 10:48 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: Minda Chen, Daire McNamara, Conor Dooley, Rob Herring,
	Krzysztof Kozlowski, Bjorn Helgaas, Lorenzo Pieralisi,
	Krzysztof Wilczyński, Emil Renner Berthing, devicetree,
	linux-kernel, linux-riscv, linux-pci, Paul Walmsley,
	Palmer Dabbelt, Albert Ou, Philipp Zabel, Mason Huo, Leyfoon Tan



On 2023/7/21 0:15, Bjorn Helgaas wrote:
> On Thu, Jul 20, 2023 at 06:11:59PM +0800, Kevin Xie wrote:
>> On 2023/7/20 0:48, Bjorn Helgaas wrote:
>> > On Wed, Jul 19, 2023 at 06:20:56PM +0800, Minda Chen wrote:
>> >> Add StarFive JH7110 SoC PCIe controller platform
>> >> driver codes.
> 
>> >> + * The BAR0/1 of bridge should be hidden during enumeration to
>> >> + * avoid the sizing and resource allocation by PCIe core.
>> >> + */
>> >> +static bool starfive_pcie_hide_rc_bar(struct pci_bus *bus, unsigned int  devfn,
>> >> +				      int offset)
>> >> +{
>> >> +	if (pci_is_root_bus(bus) && !devfn &&
>> >> +	    (offset == PCI_BASE_ADDRESS_0 || offset == PCI_BASE_ADDRESS_1))
>> >> +		return true;
>> >> +
>> >> +	return false;
>> >> +}
>> >> +
>> >> +int starfive_pcie_config_write(struct pci_bus *bus, unsigned int devfn,
>> >> +			       int where, int size, u32 value)
>> >> +{
>> >> +	if (starfive_pcie_hide_rc_bar(bus, devfn, where))
>> >> +		return PCIBIOS_BAD_REGISTER_NUMBER;
>> > 
>> > I think you are trying to present BARs 0 & 1 as unimplemented.  Such BARs
>> > are hardwired to zero, so you should make them behave that way (both
>> > read and write).  Many callers of config accessors don't check the
>> > return value, so I don't think it's reliable to just return
>> > PCIBIOS_BAD_REGISTER_NUMBER.
>> 
>> This is a hardware defect: we did not hardwire those BARs to
>> zero, and they are still configurable by software.  We have to add
>> this filter function as a workaround.
> 
> Yes.  My point is that this only affects the write path, and the read
> probably does not read 0 as it should.  This means lspci will show the
> wrong thing, and the PCI core will try to size the BAR when it doesn't
> need to.  I haven't looked at the BAR sizing code; it might even come
> up with a bogus size and address, when it *should* just conclude the
> BAR doesn't exist at all.
> 

Got it, I will try to hide those BARs in both read and write operations.
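
Something like the sketch below is what I have in mind: short-circuit the
read path as well, so the hidden BARs read back as zero.  This is only a
rough sketch; the function name and the use of pci_generic_config_read()
as the fallback are my assumptions for illustration:

static int starfive_pcie_config_read(struct pci_bus *bus, unsigned int devfn,
				     int where, int size, u32 *value)
{
	/* Hidden BARs read back as zero, as if hardwired to 0 */
	if (starfive_pcie_hide_rc_bar(bus, devfn, where)) {
		*value = 0;
		return PCIBIOS_SUCCESSFUL;
	}

	return pci_generic_config_read(bus, devfn, where, size, value);
}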

>> >> +	/* Ensure that PERST has been asserted for at least 100 ms */
>> >> +	msleep(300);
>> >> +	gpiod_set_value_cansleep(pcie->reset_gpio, 0);
>> > 
>> > At least 100 ms, but you sleep *300* ms?  This is probably related to
>> > https://lore.kernel.org/r/20230718155515.GA483233@bhelgaas
>> > 
>> > Please include a comment with the source of the delay value.  I assume
>> > it's T_PVPERL and T_PERST-CLK from the PCIe CEM spec.  This way we can
>> > someday share those #defines across drivers.
>> 
>> Yes, the delay value here is T_PVPERL from PCIe CEM spec r2.0 (Table
>> 2-4).  Initially we set a 100ms delay according to section 2.2
>> of the spec: "After there has been time (TPVPERL) for the power and
>> clock to become stable, PERST# is deasserted high and the PCI
>> Express functions can start up."
>> 
>> However, in the compatibility testing with several NVMe SSDs, we
>> found that the Lenovo Thinklife ST8000 NVMe cannot get ready in 100ms;
>> it actually needs almost 200ms.  Thus, we increased the T_PVPERL
>> value to 300ms for better device compatibility.
>>
>> We will use a macro to define T_PVPERL and add a comment about its
>> source.  If the compatibility delay of 300ms is not
>> reasonable, we can revert it to 100ms.
> 
> Thanks for this valuable information!  This NVMe issue potentially
> affects many similar drivers, and we may need a more generic fix so
> this device works well with all of them.
> 
> T_PVPERL is defined to start when power is stable.  Do you have a way
> to accurately determine that point?  I'm guessing this:
> 
>   gpiod_set_value_cansleep(pcie->power_gpio, 1)
> 
> turns the power on?  But of course that doesn't mean it is instantly
> stable.  Maybe your testing is telling you that your driver should
> have a hardware-specific 200ms delay to wait for power to become
> stable, followed by the standard 100ms for T_PVPERL?
> 

You are right, we did not take the power stabilization time into account.
T_PVPERL is enough for the Lenovo Thinklife ST8000 NVMe SSD to get ready;
the extra time comes from the power circuit of a PCIe-to-M.2 adapter,
which was used to verify M.2 SSDs with our EVB at an early stage.

As the Thinklife NVMe SSD may be a discontinued product,
and the onboard power circuit of the VisionFive 2 has no such problem,
we decided to revert the sleep time to 100ms.

We will add a comment giving the source of T_PVPERL until your #define in pci.h is accepted.
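
A rough sketch of the plan, with the macro name only a placeholder until a
shared definition lands in pci.h:

/* T_PVPERL: power stable to PERST# inactive, PCIe CEM r2.0 sec 2.2, Table 2-4 */
#define PCIE_T_PVPERL_MS	100

	/* power and refclk are assumed stable at this point */
	gpiod_set_value_cansleep(pcie->reset_gpio, 1);	/* assert PERST# */
	msleep(PCIE_T_PVPERL_MS);
	gpiod_set_value_cansleep(pcie->reset_gpio, 0);	/* deassert PERST# */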

> Bjorn

^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [PATCH v1 8/9] PCI: PLDA: starfive: Add JH7110 PCIe controller
  2023-07-24 10:48           ` Kevin Xie
@ 2023-07-25 20:46             ` Bjorn Helgaas
  -1 siblings, 0 replies; 92+ messages in thread
From: Bjorn Helgaas @ 2023-07-25 20:46 UTC (permalink / raw)
  To: Kevin Xie
  Cc: Minda Chen, Daire McNamara, Conor Dooley, Rob Herring,
	Krzysztof Kozlowski, Bjorn Helgaas, Lorenzo Pieralisi,
	Krzysztof Wilczyński, Emil Renner Berthing, devicetree,
	linux-kernel, linux-riscv, linux-pci, Paul Walmsley,
	Palmer Dabbelt, Albert Ou, Philipp Zabel, Mason Huo, Leyfoon Tan

On Mon, Jul 24, 2023 at 06:48:47PM +0800, Kevin Xie wrote:
> On 2023/7/21 0:15, Bjorn Helgaas wrote:
> > On Thu, Jul 20, 2023 at 06:11:59PM +0800, Kevin Xie wrote:
> >> On 2023/7/20 0:48, Bjorn Helgaas wrote:
> >> > On Wed, Jul 19, 2023 at 06:20:56PM +0800, Minda Chen wrote:
> >> >> Add StarFive JH7110 SoC PCIe controller platform
> >> >> driver codes.

> >> However, in the compatibility testing with several NVMe SSDs, we
> >> found that the Lenovo Thinklife ST8000 NVMe cannot get ready in 100ms;
> >> it actually needs almost 200ms.  Thus, we increased the T_PVPERL
> >> value to 300ms for better device compatibility.
> > ...
> > 
> > Thanks for this valuable information!  This NVMe issue potentially
> > affects many similar drivers, and we may need a more generic fix so
> > this device works well with all of them.
> > 
> > T_PVPERL is defined to start when power is stable.  Do you have a way
> > to accurately determine that point?  I'm guessing this:
> > 
> >   gpiod_set_value_cansleep(pcie->power_gpio, 1)
> > 
> > turns the power on?  But of course that doesn't mean it is instantly
> > stable.  Maybe your testing is telling you that your driver should
> > have a hardware-specific 200ms delay to wait for power to become
> > stable, followed by the standard 100ms for T_PVPERL?
> 
> > You are right, we did not take the power stabilization time into account.
> > T_PVPERL is enough for the Lenovo Thinklife ST8000 NVMe SSD to get ready;
> > the extra time comes from the power circuit of a PCIe-to-M.2 adapter,
> > which was used to verify M.2 SSDs with our EVB at an early stage.

Hmm.  That sounds potentially interesting.  I assume you're talking
about something like this: https://www.amazon.com/dp/B07JKH5VTL

I'm not familiar with the timing requirements for something like this.
There is a PCIe M.2 spec with some timing requirements, but I don't
know whether or how software is supposed to manage this.  There is a
T_PVPGL (power valid to PERST# inactive) parameter, but it's
implementation specific, so I don't know what the point of that is.
And I don't see a way for software to even detect the presence of such
an adapter.

But I assume some end users will use adapters like this and expect it
to "just work," so it would be nice if it did.

> As the Thinklife NVMe SSD may be a discontinued product, and the onboard
> power circuit of the VisionFive 2 has no such problem, we decided to
> revert the sleep time to 100ms.

Even though the product may be end-of-life, people will probably still
try to use it, and I would like it to work.  Otherwise we end up with
frustrated users and problem reports that are hard to resolve.  But I
don't know where to go here.

Bjorn

^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [PATCH v1 8/9] PCI: PLDA: starfive: Add JH7110 PCIe controller
  2023-07-25 20:46             ` Bjorn Helgaas
@ 2023-07-27 21:40               ` Bjorn Helgaas
  -1 siblings, 0 replies; 92+ messages in thread
From: Bjorn Helgaas @ 2023-07-27 21:40 UTC (permalink / raw)
  To: Kevin Xie
  Cc: Minda Chen, Daire McNamara, Conor Dooley, Rob Herring,
	Krzysztof Kozlowski, Bjorn Helgaas, Lorenzo Pieralisi,
	Krzysztof Wilczyński, Emil Renner Berthing, devicetree,
	linux-kernel, linux-riscv, linux-pci, Paul Walmsley,
	Palmer Dabbelt, Albert Ou, Philipp Zabel, Mason Huo, Leyfoon Tan,
	Mika Westerberg, Maciej W. Rozycki

[+cc Mika, Maciej since they've worked on similar delays recently]

On Tue, Jul 25, 2023 at 03:46:35PM -0500, Bjorn Helgaas wrote:
> On Mon, Jul 24, 2023 at 06:48:47PM +0800, Kevin Xie wrote:
> > On 2023/7/21 0:15, Bjorn Helgaas wrote:
> > > On Thu, Jul 20, 2023 at 06:11:59PM +0800, Kevin Xie wrote:
> > >> On 2023/7/20 0:48, Bjorn Helgaas wrote:
> > >> > On Wed, Jul 19, 2023 at 06:20:56PM +0800, Minda Chen wrote:
> > >> >> Add StarFive JH7110 SoC PCIe controller platform
> > >> >> driver codes.
> 
> > >> However, in the compatibility testing with several NVMe SSDs, we
> > >> found that the Lenovo Thinklife ST8000 NVMe cannot get ready in 100ms;
> > >> it actually needs almost 200ms.  Thus, we increased the T_PVPERL
> > >> value to 300ms for better device compatibility.
> > > ...
> > > 
> > > Thanks for this valuable information!  This NVMe issue potentially
> > > affects many similar drivers, and we may need a more generic fix so
> > > this device works well with all of them.
> > > 
> > > T_PVPERL is defined to start when power is stable.  Do you have a way
> > > to accurately determine that point?  I'm guessing this:
> > > 
> > >   gpiod_set_value_cansleep(pcie->power_gpio, 1)
> > > 
> > > turns the power on?  But of course that doesn't mean it is instantly
> > > stable.  Maybe your testing is telling you that your driver should
> > > have a hardware-specific 200ms delay to wait for power to become
> > > stable, followed by the standard 100ms for T_PVPERL?
> > 
> > You are right, we did not take the power stabilization time into account.
> > T_PVPERL is enough for the Lenovo Thinklife ST8000 NVMe SSD to get ready;
> > the extra time comes from the power circuit of a PCIe-to-M.2 adapter,
> > which was used to verify M.2 SSDs with our EVB at an early stage.
> 
> Hmm.  That sounds potentially interesting.  I assume you're talking
> about something like this: https://www.amazon.com/dp/B07JKH5VTL
> 
> I'm not familiar with the timing requirements for something like this.
> There is a PCIe M.2 spec with some timing requirements, but I don't
> know whether or how software is supposed to manage this.  There is a
> T_PVPGL (power valid to PERST# inactive) parameter, but it's
> implementation specific, so I don't know what the point of that is.
> And I don't see a way for software to even detect the presence of such
> an adapter.

I intended to ask about this on the PCI-SIG forum, but after reading
this thread [1], I don't think we would learn anything.  The question
was:

  The M.2 device has 5 voltage rails generated from the 3.3V input
  supply voltage
  -------------------------------------------
  This is re. Table 17 in PCI Express M.2 Specification Revision 1.1
  Power Valid* to PERST# input inactive : Implementation specific;
  recommended 50 ms

  What exactly does this mean ?

  The Note says

    *Power Valid when all the voltage supply rails have reached their
    respective Vmin.

  Does this mean that the 50ms to PERSTn is counted from the instant
  when all *5 voltage rails* on the M.2 device have become "good" ?

and the answer was:

  You wrote;
  Does this mean that the 50ms to PERSTn is counted from the instant
  when all 5 voltage rails on the M.2 device have become "good" ?

  Reply:
  This means that counting the recommended 50 ms begins from the time
  when the power rails coming to the device/module, from the host, are
  stable *at the device connector*.

  As for the time it takes voltages derived inside the device from any
  of the host power rails (e.g., 3.3V rail) to become stable, that is
  part of the 50ms the host should wait before de-asserting PERST#, in
  order to ensure that most devices will be ready by then.

  Strictly speaking, nothing disastrous happens if a host violates the
  50ms. If it de-asserts too soon, the device may not be ready, but
  most hosts will try again. If the host de-asserts too late, the
  device has even more time to stabilize. This is why the WG felt that
  an exact minimum number for Tpvpgl was not valid in practice, and
  we made it a recommendation.

Since T_PVPGL is implementation-specific, we can't really base
anything in software on the 50ms recommendation.  It sounds to me like
they are counting on software to retry config reads when enumerating.

I guess the delays we *can* observe are:

  100ms T_PVPERL "Power stable to PERST# inactive" (CEM 2.9.2)
  100ms software delay between reset and config request (Base 6.6.1)

The PCI core doesn't know how to assert PERST#, so the T_PVPERL delay
definitely has to be in the host controller driver.

The PCI core observes the second 100ms delay after a reset in
pci_bridge_wait_for_secondary_bus().  But this 100ms delay does not
happen during initial enumeration.  I think the assumption of the PCI
core is that when the host controller driver calls pci_host_probe(),
we can issue config requests immediately.

So I think that to be safe, we probably need to do both of those 100ms
delays in the host controller driver.  Maybe there's some hope of
supporting the latter one in the PCI core someday, but that's not
today.
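
Concretely, a sketch of what that might look like in a host controller
probe path (illustrative only, not from any posted patch):

	/* T_PVPERL: power stable to PERST# inactive (CEM r2.0 sec 2.2) */
	msleep(100);
	gpiod_set_value_cansleep(pcie->reset_gpio, 0);	/* deassert PERST# */

	/* PCIe r6.0 sec 6.6.1: wait 100ms before the first config request */
	msleep(100);

	return pci_host_probe(bridge);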

Bjorn

[1] https://forum.pcisig.com/viewtopic.php?f=74&t=1037

^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [PATCH v1 8/9] PCI: PLDA: starfive: Add JH7110 PCIe controller
  2023-07-27 21:40               ` Bjorn Helgaas
@ 2023-07-31  5:52                 ` Kevin Xie
  -1 siblings, 0 replies; 92+ messages in thread
From: Kevin Xie @ 2023-07-31  5:52 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: Minda Chen, Daire McNamara, Conor Dooley, Rob Herring,
	Krzysztof Kozlowski, Bjorn Helgaas, Lorenzo Pieralisi,
	Krzysztof Wilczyński, Emil Renner Berthing, devicetree,
	linux-kernel, linux-riscv, linux-pci, Paul Walmsley,
	Palmer Dabbelt, Albert Ou, Philipp Zabel, Mason Huo, Leyfoon Tan,
	Mika Westerberg, Maciej W. Rozycki



On 2023/7/28 5:40, Bjorn Helgaas wrote:
> [+cc Mika, Maciej since they've worked on similar delays recently]
> 
> On Tue, Jul 25, 2023 at 03:46:35PM -0500, Bjorn Helgaas wrote:
>> On Mon, Jul 24, 2023 at 06:48:47PM +0800, Kevin Xie wrote:
>> > On 2023/7/21 0:15, Bjorn Helgaas wrote:
>> > > On Thu, Jul 20, 2023 at 06:11:59PM +0800, Kevin Xie wrote:
>> > >> On 2023/7/20 0:48, Bjorn Helgaas wrote:
>> > >> > On Wed, Jul 19, 2023 at 06:20:56PM +0800, Minda Chen wrote:
>> > >> >> Add StarFive JH7110 SoC PCIe controller platform
>> > >> >> driver codes.
>> 
>> > >> However, in the compatibility testing with several NVMe SSDs, we
>> > >> found that the Lenovo Thinklife ST8000 NVMe cannot get ready in 100ms;
>> > >> it actually needs almost 200ms.  Thus, we increased the T_PVPERL
>> > >> value to 300ms for better device compatibility.
>> > > ...
>> > > 
>> > > Thanks for this valuable information!  This NVMe issue potentially
>> > > affects many similar drivers, and we may need a more generic fix so
>> > > this device works well with all of them.
>> > > 
>> > > T_PVPERL is defined to start when power is stable.  Do you have a way
>> > > to accurately determine that point?  I'm guessing this:
>> > > 
>> > >   gpiod_set_value_cansleep(pcie->power_gpio, 1)
>> > > 
>> > > turns the power on?  But of course that doesn't mean it is instantly
>> > > stable.  Maybe your testing is telling you that your driver should
>> > > have a hardware-specific 200ms delay to wait for power to become
>> > > stable, followed by the standard 100ms for T_PVPERL?
>> > 
>> > You are right, we did not take the power stabilization time into account.
>> > T_PVPERL is enough for the Lenovo Thinklife ST8000 NVMe SSD to get ready;
>> > the extra time comes from the power circuit of a PCIe-to-M.2 adapter,
>> > which was used to verify M.2 SSDs with our EVB at an early stage.
>> 
>> Hmm.  That sounds potentially interesting.  I assume you're talking
>> about something like this: https://www.amazon.com/dp/B07JKH5VTL
>> 
>> I'm not familiar with the timing requirements for something like this.
>> There is a PCIe M.2 spec with some timing requirements, but I don't
>> know whether or how software is supposed to manage this.  There is a
>> T_PVPGL (power valid to PERST# inactive) parameter, but it's
>> implementation specific, so I don't know what the point of that is.
>> And I don't see a way for software to even detect the presence of such
>> an adapter.
> 
> I intended to ask about this on the PCI-SIG forum, but after reading
> this thread [1], I don't think we would learn anything.  The question
> was:
> 
>   The M.2 device has 5 voltage rails generated from the 3.3V input
>   supply voltage
>   -------------------------------------------
>   This is re. Table 17 in PCI Express M.2 Specification Revision 1.1
>   Power Valid* to PERST# input inactive : Implementation specific;
>   recommended 50 ms
> 
>   What exactly does this mean ?
> 
>   The Note says
> 
>     *Power Valid when all the voltage supply rails have reached their
>     respective Vmin.
> 
>   Does this mean that the 50ms to PERSTn is counted from the instant
>   when all *5 voltage rails* on the M.2 device have become "good" ?
> 
> and the answer was:
> 
>   You wrote;
>   Does this mean that the 50ms to PERSTn is counted from the instant
>   when all 5 voltage rails on the M.2 device have become "good" ?
> 
>   Reply:
>   This means that counting the recommended 50 ms begins from the time
>   when the power rails coming to the device/module, from the host, are
>   stable *at the device connector*.
> 
>   As for the time it takes voltages derived inside the device from any
>   of the host power rails (e.g., 3.3V rail) to become stable, that is
>   part of the 50ms the host should wait before de-asserting PERST#, in
>   order to ensure that most devices will be ready by then.
> 
>   Strictly speaking, nothing disastrous happens if a host violates the
>   50ms. If it de-asserts too soon, the device may not be ready, but
>   most hosts will try again. If the host de-asserts too late, the
>   device has even more time to stabilize. This is why the WG felt that
>   an exact minimum number for Tpvpgl was not valid in practice, and
>   we made it a recommendation.
> 
> Since T_PVPGL is implementation-specific, we can't really base
> anything in software on the 50ms recommendation.  It sounds to me like
> they are counting on software to retry config reads when enumerating.
> 
> I guess the delays we *can* observe are:
> 
>   100ms T_PVPERL "Power stable to PERST# inactive" (CEM 2.9.2)
>   100ms software delay between reset and config request (Base 6.6.1)
> 

Referring to Figure 2-10 in the CEM spec r2.0, I guess these two delays are T2 & T4?
In PATCH v2 [4/4], T2 is the msleep(100) for T_PVPERL,
and T4 is handled by starfive_pcie_host_wait_for_link().

I am sorry for the late feedback; we have kept testing since last week.
Several NVMe SSDs have been verified with this patch, and they work fine.

It is a pity that we lost the Thinklife NVMe SSD mentioned before,
because it belonged to a departing employee.
We bought two new SSDs of the same model for testing;
the issue cannot be reproduced, and all of them work fine with the v1 & v2 patches.

> The PCI core doesn't know how to assert PERST#, so the T_PVPERL delay
> definitely has to be in the host controller driver.
> 
> The PCI core observes the second 100ms delay after a reset in
> pci_bridge_wait_for_secondary_bus().  But this 100ms delay does not
> happen during initial enumeration.  I think the assumption of the PCI
> core is that when the host controller driver calls pci_host_probe(),
> we can issue config requests immediately.
> 
> So I think that to be safe, we probably need to do both of those 100ms
> delays in the host controller driver.  Maybe there's some hope of
> supporting the latter one in the PCI core someday, but that's not
> today.
> 
> Bjorn
> 
> [1] https://forum.pcisig.com/viewtopic.php?f=74&t=1037

^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [PATCH v1 8/9] PCI: PLDA: starfive: Add JH7110 PCIe controller
  2023-07-31  5:52                 ` Kevin Xie
@ 2023-07-31 23:12                   ` Bjorn Helgaas
  -1 siblings, 0 replies; 92+ messages in thread
From: Bjorn Helgaas @ 2023-07-31 23:12 UTC (permalink / raw)
  To: Kevin Xie
  Cc: Minda Chen, Daire McNamara, Conor Dooley, Rob Herring,
	Krzysztof Kozlowski, Bjorn Helgaas, Lorenzo Pieralisi,
	Krzysztof Wilczyński, Emil Renner Berthing, devicetree,
	linux-kernel, linux-riscv, linux-pci, Paul Walmsley,
	Palmer Dabbelt, Albert Ou, Philipp Zabel, Mason Huo, Leyfoon Tan,
	Mika Westerberg, Maciej W. Rozycki, Pali Rohár,
	Marek Behún

[+cc Pali, Marek because I used f76b36d40bee ("PCI: aardvark: Fix link
training") as an example]

On Mon, Jul 31, 2023 at 01:52:35PM +0800, Kevin Xie wrote:
> On 2023/7/28 5:40, Bjorn Helgaas wrote:
> > On Tue, Jul 25, 2023 at 03:46:35PM -0500, Bjorn Helgaas wrote:
> >> On Mon, Jul 24, 2023 at 06:48:47PM +0800, Kevin Xie wrote:
> >> > On 2023/7/21 0:15, Bjorn Helgaas wrote:
> >> > > On Thu, Jul 20, 2023 at 06:11:59PM +0800, Kevin Xie wrote:
> >> > >> On 2023/7/20 0:48, Bjorn Helgaas wrote:
> >> > >> > On Wed, Jul 19, 2023 at 06:20:56PM +0800, Minda Chen wrote:
> >> > >> >> Add StarFive JH7110 SoC PCIe controller platform
> >> > >> >> driver codes.
> >> 
> >> > >> However, in the compatibility testing with several NVMe SSDs, we
> >> > >> found that the Lenovo Thinklife ST8000 NVMe cannot get ready in 100ms;
> >> > >> it actually needs almost 200ms.  Thus, we increased the T_PVPERL
> >> > >> value to 300ms for better device compatibility.
> >> > > ...
> >> > > 
> >> > > Thanks for this valuable information!  This NVMe issue potentially
> >> > > affects many similar drivers, and we may need a more generic fix so
> >> > > this device works well with all of them.
> >> > > 
> >> > > T_PVPERL is defined to start when power is stable.  Do you have a way
> >> > > to accurately determine that point?  I'm guessing this:
> >> > > 
> >> > >   gpiod_set_value_cansleep(pcie->power_gpio, 1)
> >> > > 
> >> > > turns the power on?  But of course that doesn't mean it is instantly
> >> > > stable.  Maybe your testing is telling you that your driver should
> >> > > have a hardware-specific 200ms delay to wait for power to become
> >> > > stable, followed by the standard 100ms for T_PVPERL?
> >> > 
> >> > You are right, we did not take the power stabilization time into account.
> >> > T_PVPERL is enough for the Lenovo Thinklife ST8000 NVMe SSD to get ready;
> >> > the extra time comes from the power circuit of a PCIe-to-M.2 adapter,
> >> > which was used to verify M.2 SSDs with our EVB at an early stage.
> >> 
> >> Hmm.  That sounds potentially interesting.  I assume you're talking
> >> about something like this: https://www.amazon.com/dp/B07JKH5VTL
> >> 
> >> I'm not familiar with the timing requirements for something like this.
> >> There is a PCIe M.2 spec with some timing requirements, but I don't
> >> know whether or how software is supposed to manage this.  There is a
> >> T_PVPGL (power valid to PERST# inactive) parameter, but it's
> >> implementation specific, so I don't know what the point of that is.
> >> And I don't see a way for software to even detect the presence of such
> >> an adapter.
> > 
> > I intended to ask about this on the PCI-SIG forum, but after reading
> > this thread [1], I don't think we would learn anything.  The question
> > was:
> > 
> >   The M.2 device has 5 voltage rails generated from the 3.3V input
> >   supply voltage
> >   -------------------------------------------
> >   This is re. Table 17 in PCI Express M.2 Specification Revision 1.1
> >   Power Valid* to PERST# input inactive : Implementation specific;
> >   recommended 50 ms
> > 
> >   What exactly does this mean ?
> > 
> >   The Note says
> > 
> >     *Power Valid when all the voltage supply rails have reached their
> >     respective Vmin.
> > 
> >   Does this mean that the 50ms to PERSTn is counted from the instant
> >   when all *5 voltage rails* on the M.2 device have become "good" ?
> > 
> > and the answer was:
> > 
> >   You wrote;
> >   Does this mean that the 50ms to PERSTn is counted from the instant
> >   when all 5 voltage rails on the M.2 device have become "good" ?
> > 
> >   Reply:
> >   This means that counting the recommended 50 ms begins from the time
> >   when the power rails coming to the device/module, from the host, are
> >   stable *at the device connector*.
> > 
> >   As for the time it takes voltages derived inside the device from any
> >   of the host power rails (e.g., 3.3V rail) to become stable, that is
> >   part of the 50ms the host should wait before de-asserting PERST#, in
> >   order to ensure that most devices will be ready by then.
> > 
> >   Strictly speaking, nothing disastrous happens if a host violates the
> >   50ms. If it de-asserts too soon, the device may not be ready, but
> >   most hosts will try again. If the host de-asserts too late, the
> >   device has even more time to stabilize. This is why the WG felt that
> >   an exact minimum number for Tpvpgl was not valid in practice, and
> >   we made it a recommendation.
> > 
> > Since T_PVPGL is implementation-specific, we can't really base
> > anything in software on the 50ms recommendation.  It sounds to me like
> > they are counting on software to retry config reads when enumerating.
> > 
> > I guess the delays we *can* observe are:
> > 
> >   100ms T_PVPERL "Power stable to PERST# inactive" (CEM 2.9.2)
> >   100ms software delay between reset and config request (Base 6.6.1)
> 
> Referring to Figure 2-10 in the CEM spec r2.0, I guess these two delays are T2 & T4?
> In PATCH v2 [4/4], T2 is the msleep(100) for T_PVPERL,
> and T4 is handled by starfive_pcie_host_wait_for_link().

Yes, I think "T2" is T_PVPERL.  The CEM r2.0 Figure 2-10 note is
"2. Minimum time from power rails within specified tolerance to
PERST# inactive (T_PVPERL)."

As far as T4 ("Minimum PERST# inactive to PCI Express link out of
electrical idle"), I don't see a name or a value for that parameter,
and I don't think it is the delay required by PCIe r6.0, sec 6.6.1.

The delay required by sec 6.6.1 is a minimum of 100ms following exit
from reset or, for fast links, 100ms after link training completes.

The comment at the call of advk_pcie_wait_for_link() [2] says it is
the delay required by sec 6.6.1, but that doesn't seem right to me.

For one thing, I don't think 6.6.1 says anything about "link up" being
the end of a delay.  So if we want to do the delay required by 6.6.1,
"wait_for_link()" doesn't seem like quite the right name.

For another, all the *_wait_for_link() functions can return success
after 0ms, 90ms, 180ms, etc.  They're unlikely to return after 0ms,
but 90ms is quite possible.  If we avoided the 0ms return and
LINK_WAIT_USLEEP_MIN were 100ms instead of 90ms, that should be enough
for slow links, where we need 100ms following "exit from reset."

But it's still not enough for fast links where we need 100ms "after
link training completes" because we don't know when training
completed.  If training completed 89ms into *_wait_for_link(), we only
delay 1ms after that.
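
For reference, the shared pattern looks roughly like this (paraphrasing
advk_pcie_wait_for_link() and the LINK_WAIT_* constants from
pci-aardvark.c):

	for (retries = 0; retries < LINK_WAIT_MAX_RETRIES; retries++) {
		if (advk_pcie_link_up(pcie))
			return 0;	/* 0ms, ~90ms, ~180ms, ... after entry */
		usleep_range(LINK_WAIT_USLEEP_MIN, LINK_WAIT_USLEEP_MAX);
	}
	return -ETIMEDOUT;

Nothing in it tells us how long ago training actually completed.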

> > The PCI core doesn't know how to assert PERST#, so the T_PVPERL delay
> > definitely has to be in the host controller driver.
> > 
> > The PCI core observes the second 100ms delay after a reset in
> > pci_bridge_wait_for_secondary_bus().  But this 100ms delay does not
> > happen during initial enumeration.  I think the assumption of the PCI
> > core is that when the host controller driver calls pci_host_probe(),
> > we can issue config requests immediately.
> > 
> > So I think that to be safe, we probably need to do both of those 100ms
> > delays in the host controller driver.  Maybe there's some hope of
> > supporting the latter one in the PCI core someday, but that's not
> > today.
> > 
> > Bjorn
> > 
> > [1] https://forum.pcisig.com/viewtopic.php?f=74&t=1037

[2] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/pci/controller/pci-aardvark.c?id=v6.4#n433

^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [PATCH v1 8/9] PCI: PLDA: starfive: Add JH7110 PCIe controller
@ 2023-07-31 23:12                   ` Bjorn Helgaas
  0 siblings, 0 replies; 92+ messages in thread
From: Bjorn Helgaas @ 2023-07-31 23:12 UTC (permalink / raw)
  To: Kevin Xie
  Cc: Minda Chen, Daire McNamara, Conor Dooley, Rob Herring,
	Krzysztof Kozlowski, Bjorn Helgaas, Lorenzo Pieralisi,
	Krzysztof Wilczyński, Emil Renner Berthing, devicetree,
	linux-kernel, linux-riscv, linux-pci, Paul Walmsley,
	Palmer Dabbelt, Albert Ou, Philipp Zabel, Mason Huo, Leyfoon Tan,
	Mika Westerberg, Maciej W. Rozycki, Pali Rohár,
	Marek Behún

[+cc Pali, Marek because I used f76b36d40bee ("PCI: aardvark: Fix link
training") as an example]

On Mon, Jul 31, 2023 at 01:52:35PM +0800, Kevin Xie wrote:
> On 2023/7/28 5:40, Bjorn Helgaas wrote:
> > On Tue, Jul 25, 2023 at 03:46:35PM -0500, Bjorn Helgaas wrote:
> >> On Mon, Jul 24, 2023 at 06:48:47PM +0800, Kevin Xie wrote:
> >> > On 2023/7/21 0:15, Bjorn Helgaas wrote:
> >> > > On Thu, Jul 20, 2023 at 06:11:59PM +0800, Kevin Xie wrote:
> >> > >> On 2023/7/20 0:48, Bjorn Helgaas wrote:
> >> > >> > On Wed, Jul 19, 2023 at 06:20:56PM +0800, Minda Chen wrote:
> >> > >> >> Add StarFive JH7110 SoC PCIe controller platform
> >> > >> >> driver codes.
> >> 
> >> > >> However, in the compatibility testing with several NVMe SSD, we
> >> > >> found that Lenovo Thinklife ST8000 NVMe can not get ready in 100ms,
> >> > >> and it actually needs almost 200ms.  Thus, we increased the T_PVPERL
> >> > >> value to 300ms for the better device compatibility.
> >> > > ...
> >> > > 
> >> > > Thanks for this valuable information!  This NVMe issue potentially
> >> > > affects many similar drivers, and we may need a more generic fix so
> >> > > this device works well with all of them.
> >> > > 
> >> > > T_PVPERL is defined to start when power is stable.  Do you have a way
> >> > > to accurately determine that point?  I'm guessing this:
> >> > > 
> >> > >   gpiod_set_value_cansleep(pcie->power_gpio, 1)
> >> > > 
> >> > > turns the power on?  But of course that doesn't mean it is instantly
> >> > > stable.  Maybe your testing is telling you that your driver should
> >> > > have a hardware-specific 200ms delay to wait for power to become
> >> > > stable, followed by the standard 100ms for T_PVPERL?
> >> > 
> >> > You are right, we did not take the power stable cost into account.
> >> > T_PVPERL is enough for Lenovo Thinklife ST8000 NVMe SSD to get ready,
> >> > and the extra cost is from the power circuit of a PCIe to M.2 connector,
> >> > which is used to verify M.2 SSD with our EVB at early stage.
> >> 
> >> Hmm.  That sounds potentially interesting.  I assume you're talking
> >> about something like this: https://www.amazon.com/dp/B07JKH5VTL
> >> 
> >> I'm not familiar with the timing requirements for something like this.
> >> There is a PCIe M.2 spec with some timing requirements, but I don't
> >> know whether or how software is supposed to manage this.  There is a
> >> T_PVPGL (power valid to PERST# inactive) parameter, but it's
> >> implementation specific, so I don't know what the point of that is.
> >> And I don't see a way for software to even detect the presence of such
> >> an adapter.
> > 
> > I intended to ask about this on the PCI-SIG forum, but after reading
> > this thread [1], I don't think we would learn anything.  The question
> > was:
> > 
> >   The M.2 device has 5 voltage rails generated from the 3.3V input
> >   supply voltage
> >   -------------------------------------------
> >   This is re. Table 17 in PCI Express M.2 Specification Revision 1.1
> >   Power Valid* to PERST# input inactive : Implementation specific;
> >   recommended 50 ms
> > 
> >   What exactly does this mean ?
> > 
> >   The Note says
> > 
> >     *Power Valid when all the voltage supply rails have reached their
> >     respective Vmin.
> > 
> >   Does this mean that the 50ms to PERSTn is counted from the instant
> >   when all *5 voltage rails* on the M.2 device have become "good" ?
> > 
> > and the answer was:
> > 
> >   You wrote;
> >   Does this mean that the 50ms to PERSTn is counted from the instant
> >   when all 5 voltage rails on the M.2 device have become "good" ?
> > 
> >   Reply:
> >   This means that counting the recommended 50 ms begins from the time
> >   when the power rails coming to the device/module, from the host, are
> >   stable *at the device connector*.
> > 
> >   As for the time it takes voltages derived inside the device from any
> >   of the host power rails (e.g., 3.3V rail) to become stable, that is
> >   part of the 50ms the host should wait before de-asserting PERST#, in
> >   order ensure that most devices will be ready by then.
> > 
> >   Strictly speaking, nothing disastrous happens if a host violates the
> >   50ms. If it de-asserts too soon, the device may not be ready, but
> >   most hosts will try again. If the host de-asserts too late, the
> >   device has even more time to stabilize. This is why the WG felt that
> >   an exact minimum number for >>Tpvpgl, was not valid in practice, and
> >   we made it a recommendation.
> > 
> > Since T_PVPGL is implementation-specific, we can't really base
> > anything in software on the 50ms recommendation.  It sounds to me like
> > they are counting on software to retry config reads when enumerating.
> > 
> > I guess the delays we *can* observe are:
> > 
> >   100ms T_PVPERL "Power stable to PERST# inactive" (CEM 2.9.2)
> >   100ms software delay between reset and config request (Base 6.6.1)
> 
> Refer to Figure2-10 in CEM Spec V2.0, I guess this two delays are T2 & T4?
> In the PATCH v2[4/4], T2 is the msleep(100) for T_PVPERL,
> and T4 is done by starfive_pcie_host_wait_for_link().

Yes, I think "T2" is T_PVPERL.  The CEM r2.0 Figure 2-10 note is
"2. Minimum time from power rails within specified tolerance to
PERST# inactive (T_PVPERL)."

As far as T4 ("Minimum PERST# inactive to PCI Express link out of
electrical idle"), I don't see a name or a value for that parameter,
and I don't think it is the delay required by PCIe r6.0, sec 6.6.1.

The delay required by sec 6.6.1 is a minimum of 100ms following exit
from reset or, for fast links, 100ms after link training completes.

The comment at the call of advk_pcie_wait_for_link() [2] says it is
the delay required by sec 6.6.1, but that doesn't seem right to me.

For one thing, I don't think 6.6.1 says anything about "link up" being
the end of a delay.  So if we want to do the delay required by 6.6.1,
"wait_for_link()" doesn't seem like quite the right name.

For another, all the *_wait_for_link() functions can return success
after 0ms, 90ms, 180ms, etc.  They're unlikely to return after 0ms,
but 90ms is quite possible.  If we avoided the 0ms return and
LINK_WAIT_USLEEP_MIN were 100ms instead of 90ms, that should be enough
for slow links, where we need 100ms following "exit from reset."

But it's still not enough for fast links where we need 100ms "after
link training completes" because we don't know when training
completed.  If training completed 89ms into *_wait_for_link(), we only
delay 1ms after that.
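
To make that concrete, here is a rough sketch (untested; my_pcie and
my_pcie_link_up() are made-up stand-ins for whatever link-status check
a given driver has, and the LINK_WAIT_* macros are the usual
driver-local constants) of a wait that guarantees the full 100ms after
training:

static int my_pcie_wait_link_and_delay(struct my_pcie *pcie)
{
        int i;

        for (i = 0; i < LINK_WAIT_MAX_RETRIES; i++) {
                if (my_pcie_link_up(pcie)) {
                        /*
                         * Start the sec 6.6.1 100ms only *after* the
                         * link is seen up, so fast links get the full
                         * delay after training completes.
                         */
                        msleep(100);
                        return 0;
                }
                usleep_range(LINK_WAIT_USLEEP_MIN, LINK_WAIT_USLEEP_MAX);
        }

        return -ETIMEDOUT;
}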

> > The PCI core doesn't know how to assert PERST#, so the T_PVPERL delay
> > definitely has to be in the host controller driver.
> > 
> > The PCI core observes the second 100ms delay after a reset in
> > pci_bridge_wait_for_secondary_bus().  But this 100ms delay does not
> > happen during initial enumeration.  I think the assumption of the PCI
> > core is that when the host controller driver calls pci_host_probe(),
> > we can issue config requests immediately.
> > 
> > So I think that to be safe, we probably need to do both of those 100ms
> > delays in the host controller driver.  Maybe there's some hope of
> > supporting the latter one in the PCI core someday, but that's not
> > today.
> > 
> > Bjorn
> > 
> > [1] https://forum.pcisig.com/viewtopic.php?f=74&t=1037

[2] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/pci/controller/pci-aardvark.c?id=v6.4#n433

^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [PATCH v1 8/9] PCI: PLDA: starfive: Add JH7110 PCIe controller
  2023-07-31 23:12                   ` Bjorn Helgaas
@ 2023-08-01  7:05                     ` Pali Rohár
  -1 siblings, 0 replies; 92+ messages in thread
From: Pali Rohár @ 2023-08-01  7:05 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: Kevin Xie, Minda Chen, Daire McNamara, Conor Dooley, Rob Herring,
	Krzysztof Kozlowski, Bjorn Helgaas, Lorenzo Pieralisi,
	Krzysztof Wilczyński, Emil Renner Berthing, devicetree,
	linux-kernel, linux-riscv, linux-pci, Paul Walmsley,
	Palmer Dabbelt, Albert Ou, Philipp Zabel, Mason Huo, Leyfoon Tan,
	Mika Westerberg, Maciej W. Rozycki, Marek Behún

On Monday 31 July 2023 18:12:23 Bjorn Helgaas wrote:
> [+cc Pali, Marek because I used f76b36d40bee ("PCI: aardvark: Fix link
> training") as an example]
> 
> On Mon, Jul 31, 2023 at 01:52:35PM +0800, Kevin Xie wrote:
> > On 2023/7/28 5:40, Bjorn Helgaas wrote:
> > > On Tue, Jul 25, 2023 at 03:46:35PM -0500, Bjorn Helgaas wrote:
> > >> On Mon, Jul 24, 2023 at 06:48:47PM +0800, Kevin Xie wrote:
> > >> > On 2023/7/21 0:15, Bjorn Helgaas wrote:
> > >> > > On Thu, Jul 20, 2023 at 06:11:59PM +0800, Kevin Xie wrote:
> > >> > >> On 2023/7/20 0:48, Bjorn Helgaas wrote:
> > >> > >> > On Wed, Jul 19, 2023 at 06:20:56PM +0800, Minda Chen wrote:
> > >> > >> >> Add StarFive JH7110 SoC PCIe controller platform
> > >> > >> >> driver codes.
> > >> 
> > >> > >> However, in the compatibility testing with several NVMe SSD, we
> > >> > >> found that Lenovo Thinklife ST8000 NVMe can not get ready in 100ms,
> > >> > >> and it actually needs almost 200ms.  Thus, we increased the T_PVPERL
> > >> > >> value to 300ms for the better device compatibility.
> > >> > > ...
> > >> > > 
> > >> > > Thanks for this valuable information!  This NVMe issue potentially
> > >> > > affects many similar drivers, and we may need a more generic fix so
> > >> > > this device works well with all of them.
> > >> > > 
> > >> > > T_PVPERL is defined to start when power is stable.  Do you have a way
> > >> > > to accurately determine that point?  I'm guessing this:
> > >> > > 
> > >> > >   gpiod_set_value_cansleep(pcie->power_gpio, 1)
> > >> > > 
> > >> > > turns the power on?  But of course that doesn't mean it is instantly
> > >> > > stable.  Maybe your testing is telling you that your driver should
> > >> > > have a hardware-specific 200ms delay to wait for power to become
> > >> > > stable, followed by the standard 100ms for T_PVPERL?
> > >> > 
> > >> > You are right, we did not take the power stable cost into account.
> > >> > T_PVPERL is enough for Lenovo Thinklife ST8000 NVMe SSD to get ready,
> > >> > and the extra cost is from the power circuit of a PCIe to M.2 connector,
> > >> > which is used to verify M.2 SSD with our EVB at early stage.
> > >> 
> > >> Hmm.  That sounds potentially interesting.  I assume you're talking
> > >> about something like this: https://www.amazon.com/dp/B07JKH5VTL
> > >> 
> > >> I'm not familiar with the timing requirements for something like this.
> > >> There is a PCIe M.2 spec with some timing requirements, but I don't
> > >> know whether or how software is supposed to manage this.  There is a
> > >> T_PVPGL (power valid to PERST# inactive) parameter, but it's
> > >> implementation specific, so I don't know what the point of that is.
> > >> And I don't see a way for software to even detect the presence of such
> > >> an adapter.
> > > 
> > > I intended to ask about this on the PCI-SIG forum, but after reading
> > > this thread [1], I don't think we would learn anything.  The question
> > > was:
> > > 
> > >   The M.2 device has 5 voltage rails generated from the 3.3V input
> > >   supply voltage
> > >   -------------------------------------------
> > >   This is re. Table 17 in PCI Express M.2 Specification Revision 1.1
> > >   Power Valid* to PERST# input inactive : Implementation specific;
> > >   recommended 50 ms
> > > 
> > >   What exactly does this mean ?
> > > 
> > >   The Note says
> > > 
> > >     *Power Valid when all the voltage supply rails have reached their
> > >     respective Vmin.
> > > 
> > >   Does this mean that the 50ms to PERSTn is counted from the instant
> > >   when all *5 voltage rails* on the M.2 device have become "good" ?
> > > 
> > > and the answer was:
> > > 
> > >   You wrote;
> > >   Does this mean that the 50ms to PERSTn is counted from the instant
> > >   when all 5 voltage rails on the M.2 device have become "good" ?
> > > 
> > >   Reply:
> > >   This means that counting the recommended 50 ms begins from the time
> > >   when the power rails coming to the device/module, from the host, are
> > >   stable *at the device connector*.
> > > 
> > >   As for the time it takes voltages derived inside the device from any
> > >   of the host power rails (e.g., 3.3V rail) to become stable, that is
> > >   part of the 50ms the host should wait before de-asserting PERST#, in
> > >   order ensure that most devices will be ready by then.
> > > 
> > >   Strictly speaking, nothing disastrous happens if a host violates the
> > >   50ms. If it de-asserts too soon, the device may not be ready, but
> > >   most hosts will try again. If the host de-asserts too late, the
> > >   device has even more time to stabilize. This is why the WG felt that
> > >   an exact minimum number for >>Tpvpgl, was not valid in practice, and
> > >   we made it a recommendation.
> > > 
> > > Since T_PVPGL is implementation-specific, we can't really base
> > > anything in software on the 50ms recommendation.  It sounds to me like
> > > they are counting on software to retry config reads when enumerating.
> > > 
> > > I guess the delays we *can* observe are:
> > > 
> > >   100ms T_PVPERL "Power stable to PERST# inactive" (CEM 2.9.2)
> > >   100ms software delay between reset and config request (Base 6.6.1)
> > 
> > Refer to Figure2-10 in CEM Spec V2.0, I guess this two delays are T2 & T4?
> > In the PATCH v2[4/4], T2 is the msleep(100) for T_PVPERL,
> > and T4 is done by starfive_pcie_host_wait_for_link().
> 
> Yes, I think "T2" is T_PVPERL.  The CEM r2.0 Figure 2-10 note is
> "2. Minimum time from power rails within specified tolerance to
> PERST# inactive (T_PVPERL)."
> 
> As far as T4 ("Minimum PERST# inactive to PCI Express link out of
> electrical idle"), I don't see a name or a value for that parameter,
> and I don't think it is the delay required by PCIe r6.0, sec 6.6.1.
> 
> The delay required by sec 6.6.1 is a minimum of 100ms following exit
> from reset or, for fast links, 100ms after link training completes.
> 
> The comment at the call of advk_pcie_wait_for_link() [2] says it is
> the delay required by sec 6.6.1, but that doesn't seem right to me.
> 
> For one thing, I don't think 6.6.1 says anything about "link up" being
> the end of a delay.  So if we want to do the delay required by 6.6.1,
> "wait_for_link()" doesn't seem like quite the right name.
> 
> For another, all the *_wait_for_link() functions can return success
> after 0ms, 90ms, 180ms, etc.  They're unlikely to return after 0ms,
> but 90ms is quite possible.  If we avoided the 0ms return and
> LINK_WAIT_USLEEP_MIN were 100ms instead of 90ms, that should be enough
> for slow links, where we need 100ms following "exit from reset."
> 
> But it's still not enough for fast links where we need 100ms "after
> link training completes" because we don't know when training
> completed.  If training completed 89ms into *_wait_for_link(), we only
> delay 1ms after that.

Please look into the discussion "How long should be PCIe card in Warm Reset
state?", including the external references, where there are more interesting
details:
https://lore.kernel.org/linux-pci/20210310110535.zh4pnn4vpmvzwl5q@pali/

As for waiting for the link, this should be done asynchronously...
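
One rough way to sketch that (hypothetical names, untested): let probe
complete, poll the link from a delayed work item, and rescan the bus
once it trains:

static void my_pcie_link_work(struct work_struct *work)
{
        struct my_pcie *pcie = container_of(work, struct my_pcie,
                                            link_work.work);

        if (!my_pcie_link_up(pcie)) {   /* made-up status check */
                schedule_delayed_work(&pcie->link_work,
                                      msecs_to_jiffies(10));
                return;
        }

        msleep(100);    /* Base Spec r6.0 sec 6.6.1 delay */
        pci_lock_rescan_remove();
        pci_rescan_bus(pcie->root_bus); /* root_bus: hypothetical member */
        pci_unlock_rescan_remove();
}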

> > > The PCI core doesn't know how to assert PERST#, so the T_PVPERL delay
> > > definitely has to be in the host controller driver.
> > > 
> > > The PCI core observes the second 100ms delay after a reset in
> > > pci_bridge_wait_for_secondary_bus().  But this 100ms delay does not
> > > happen during initial enumeration.  I think the assumption of the PCI
> > > core is that when the host controller driver calls pci_host_probe(),
> > > we can issue config requests immediately.
> > > 
> > > So I think that to be safe, we probably need to do both of those 100ms
> > > delays in the host controller driver.  Maybe there's some hope of
> > > supporting the latter one in the PCI core someday, but that's not
> > > today.
> > > 
> > > Bjorn
> > > 
> > > [1] https://forum.pcisig.com/viewtopic.php?f=74&t=1037
> 
> [2] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/pci/controller/pci-aardvark.c?id=v6.4#n433

^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [PATCH v1 8/9] PCI: PLDA: starfive: Add JH7110 PCIe controller
  2023-07-31 23:12                   ` Bjorn Helgaas
@ 2023-08-01  7:05                     ` Kevin Xie
  -1 siblings, 0 replies; 92+ messages in thread
From: Kevin Xie @ 2023-08-01  7:05 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: Minda Chen, Daire McNamara, Conor Dooley, Rob Herring,
	Krzysztof Kozlowski, Bjorn Helgaas, Lorenzo Pieralisi,
	Krzysztof Wilczyński, Emil Renner Berthing, devicetree,
	linux-kernel, linux-riscv, linux-pci, Paul Walmsley,
	Palmer Dabbelt, Albert Ou, Philipp Zabel, Mason Huo, Leyfoon Tan,
	Mika Westerberg, Maciej W. Rozycki, Pali Rohár,
	Marek Behún



On 2023/8/1 7:12, Bjorn Helgaas wrote:
> [+cc Pali, Marek because I used f76b36d40bee ("PCI: aardvark: Fix link
> training") as an example]
> 
> On Mon, Jul 31, 2023 at 01:52:35PM +0800, Kevin Xie wrote:
>> On 2023/7/28 5:40, Bjorn Helgaas wrote:
>> > On Tue, Jul 25, 2023 at 03:46:35PM -0500, Bjorn Helgaas wrote:
>> >> On Mon, Jul 24, 2023 at 06:48:47PM +0800, Kevin Xie wrote:
>> >> > On 2023/7/21 0:15, Bjorn Helgaas wrote:
>> >> > > On Thu, Jul 20, 2023 at 06:11:59PM +0800, Kevin Xie wrote:
>> >> > >> On 2023/7/20 0:48, Bjorn Helgaas wrote:
>> >> > >> > On Wed, Jul 19, 2023 at 06:20:56PM +0800, Minda Chen wrote:
>> >> > >> >> Add StarFive JH7110 SoC PCIe controller platform
>> >> > >> >> driver codes.
>> >> 
>> >> > >> However, in the compatibility testing with several NVMe SSD, we
>> >> > >> found that Lenovo Thinklife ST8000 NVMe can not get ready in 100ms,
>> >> > >> and it actually needs almost 200ms.  Thus, we increased the T_PVPERL
>> >> > >> value to 300ms for the better device compatibility.
>> >> > > ...
>> >> > > 
>> >> > > Thanks for this valuable information!  This NVMe issue potentially
>> >> > > affects many similar drivers, and we may need a more generic fix so
>> >> > > this device works well with all of them.
>> >> > > 
>> >> > > T_PVPERL is defined to start when power is stable.  Do you have a way
>> >> > > to accurately determine that point?  I'm guessing this:
>> >> > > 
>> >> > >   gpiod_set_value_cansleep(pcie->power_gpio, 1)
>> >> > > 
>> >> > > turns the power on?  But of course that doesn't mean it is instantly
>> >> > > stable.  Maybe your testing is telling you that your driver should
>> >> > > have a hardware-specific 200ms delay to wait for power to become
>> >> > > stable, followed by the standard 100ms for T_PVPERL?
>> >> > 
>> >> > You are right, we did not take the power stable cost into account.
>> >> > T_PVPERL is enough for Lenovo Thinklife ST8000 NVMe SSD to get ready,
>> >> > and the extra cost is from the power circuit of a PCIe to M.2 connector,
>> >> > which is used to verify M.2 SSD with our EVB at early stage.
>> >> 
>> >> Hmm.  That sounds potentially interesting.  I assume you're talking
>> >> about something like this: https://www.amazon.com/dp/B07JKH5VTL
>> >> 
>> >> I'm not familiar with the timing requirements for something like this.
>> >> There is a PCIe M.2 spec with some timing requirements, but I don't
>> >> know whether or how software is supposed to manage this.  There is a
>> >> T_PVPGL (power valid to PERST# inactive) parameter, but it's
>> >> implementation specific, so I don't know what the point of that is.
>> >> And I don't see a way for software to even detect the presence of such
>> >> an adapter.
>> > 
>> > I intended to ask about this on the PCI-SIG forum, but after reading
>> > this thread [1], I don't think we would learn anything.  The question
>> > was:
>> > 
>> >   The M.2 device has 5 voltage rails generated from the 3.3V input
>> >   supply voltage
>> >   -------------------------------------------
>> >   This is re. Table 17 in PCI Express M.2 Specification Revision 1.1
>> >   Power Valid* to PERST# input inactive : Implementation specific;
>> >   recommended 50 ms
>> > 
>> >   What exactly does this mean ?
>> > 
>> >   The Note says
>> > 
>> >     *Power Valid when all the voltage supply rails have reached their
>> >     respective Vmin.
>> > 
>> >   Does this mean that the 50ms to PERSTn is counted from the instant
>> >   when all *5 voltage rails* on the M.2 device have become "good" ?
>> > 
>> > and the answer was:
>> > 
>> >   You wrote;
>> >   Does this mean that the 50ms to PERSTn is counted from the instant
>> >   when all 5 voltage rails on the M.2 device have become "good" ?
>> > 
>> >   Reply:
>> >   This means that counting the recommended 50 ms begins from the time
>> >   when the power rails coming to the device/module, from the host, are
>> >   stable *at the device connector*.
>> > 
>> >   As for the time it takes voltages derived inside the device from any
>> >   of the host power rails (e.g., 3.3V rail) to become stable, that is
>> >   part of the 50ms the host should wait before de-asserting PERST#, in
>> >   order ensure that most devices will be ready by then.
>> > 
>> >   Strictly speaking, nothing disastrous happens if a host violates the
>> >   50ms. If it de-asserts too soon, the device may not be ready, but
>> >   most hosts will try again. If the host de-asserts too late, the
>> >   device has even more time to stabilize. This is why the WG felt that
>> >   an exact minimum number for >>Tpvpgl, was not valid in practice, and
>> >   we made it a recommendation.
>> > 
>> > Since T_PVPGL is implementation-specific, we can't really base
>> > anything in software on the 50ms recommendation.  It sounds to me like
>> > they are counting on software to retry config reads when enumerating.
>> > 
>> > I guess the delays we *can* observe are:
>> > 
>> >   100ms T_PVPERL "Power stable to PERST# inactive" (CEM 2.9.2)
>> >   100ms software delay between reset and config request (Base 6.6.1)
>> 
>> Refer to Figure2-10 in CEM Spec V2.0, I guess this two delays are T2 & T4?
>> In the PATCH v2[4/4], T2 is the msleep(100) for T_PVPERL,
>> and T4 is done by starfive_pcie_host_wait_for_link().
> 
> Yes, I think "T2" is T_PVPERL.  The CEM r2.0 Figure 2-10 note is
> "2. Minimum time from power rails within specified tolerance to
> PERST# inactive (T_PVPERL)."
> 
> As far as T4 ("Minimum PERST# inactive to PCI Express link out of
> electrical idle"), I don't see a name or a value for that parameter,
> and I don't think it is the delay required by PCIe r6.0, sec 6.6.1.
> 
> The delay required by sec 6.6.1 is a minimum of 100ms following exit
> from reset or, for fast links, 100ms after link training completes.
> 
> The comment at the call of advk_pcie_wait_for_link() [2] says it is
> the delay required by sec 6.6.1, but that doesn't seem right to me.
> 
> For one thing, I don't think 6.6.1 says anything about "link up" being
> the end of a delay.  So if we want to do the delay required by 6.6.1,
> "wait_for_link()" doesn't seem like quite the right name.
> 
> For another, all the *_wait_for_link() functions can return success
> after 0ms, 90ms, 180ms, etc.  They're unlikely to return after 0ms,
> but 90ms is quite possible.  If we avoided the 0ms return and
> LINK_WAIT_USLEEP_MIN were 100ms instead of 90ms, that should be enough
> for slow links, where we need 100ms following "exit from reset."
> 
> But it's still not enough for fast links where we need 100ms "after
> link training completes" because we don't know when training
> completed.  If training completed 89ms into *_wait_for_link(), we only
> delay 1ms after that.
> 

That's the point; we will add an extra 100ms after PERST# de-assert
in patch v3, per Base Spec r6.0, sec 6.6.1:
        msleep(100);
        gpiod_set_value_cansleep(pcie->reset_gpio, 0);

+       /*
+        * Per PCIe base spec r6.0, sec 6.6.1, the system must wait a
+        * minimum of 100 ms following exit from a Conventional Reset
+        * before sending a Configuration Request to the device.
+        */
+       msleep(100);
+
        if (starfive_pcie_host_wait_for_link(pcie))
                return -EIO;
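
Following the reasoning above, for links faster than 5.0 GT/s the spec
counts the 100ms from the end of link training, so a further delay may
still be needed once the link is up; a minimal sketch on top of the
same snippet:

        if (starfive_pcie_host_wait_for_link(pcie))
                return -EIO;
+
+       /* Links > 5.0 GT/s: 100 ms counted from end of link training. */
+       msleep(100);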

>> > The PCI core doesn't know how to assert PERST#, so the T_PVPERL delay
>> > definitely has to be in the host controller driver.
>> > 
>> > The PCI core observes the second 100ms delay after a reset in
>> > pci_bridge_wait_for_secondary_bus().  But this 100ms delay does not
>> > happen during initial enumeration.  I think the assumption of the PCI
>> > core is that when the host controller driver calls pci_host_probe(),
>> > we can issue config requests immediately.
>> > 
>> > So I think that to be safe, we probably need to do both of those 100ms
>> > delays in the host controller driver.  Maybe there's some hope of
>> > supporting the latter one in the PCI core someday, but that's not
>> > today.
>> > 
>> > Bjorn
>> > 
>> > [1] https://forum.pcisig.com/viewtopic.php?f=74&t=1037
> 
> [2] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/pci/controller/pci-aardvark.c?id=v6.4#n433

^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [PATCH v1 8/9] PCI: PLDA: starfive: Add JH7110 PCIe controller
  2023-08-01  7:05                     ` Kevin Xie
@ 2023-08-01  7:14                       ` Pali Rohár
  -1 siblings, 0 replies; 92+ messages in thread
From: Pali Rohár @ 2023-08-01  7:14 UTC (permalink / raw)
  To: Kevin Xie
  Cc: Bjorn Helgaas, Minda Chen, Daire McNamara, Conor Dooley,
	Rob Herring, Krzysztof Kozlowski, Bjorn Helgaas,
	Lorenzo Pieralisi, Krzysztof Wilczyński,
	Emil Renner Berthing, devicetree, linux-kernel, linux-riscv,
	linux-pci, Paul Walmsley, Palmer Dabbelt, Albert Ou,
	Philipp Zabel, Mason Huo, Leyfoon Tan, Mika Westerberg,
	Maciej W. Rozycki, Marek Behún

On Tuesday 01 August 2023 15:05:46 Kevin Xie wrote:
> 
> 
> On 2023/8/1 7:12, Bjorn Helgaas wrote:
> > [+cc Pali, Marek because I used f76b36d40bee ("PCI: aardvark: Fix link
> > training") as an example]
> > 
> > On Mon, Jul 31, 2023 at 01:52:35PM +0800, Kevin Xie wrote:
> >> On 2023/7/28 5:40, Bjorn Helgaas wrote:
> >> > On Tue, Jul 25, 2023 at 03:46:35PM -0500, Bjorn Helgaas wrote:
> >> >> On Mon, Jul 24, 2023 at 06:48:47PM +0800, Kevin Xie wrote:
> >> >> > On 2023/7/21 0:15, Bjorn Helgaas wrote:
> >> >> > > On Thu, Jul 20, 2023 at 06:11:59PM +0800, Kevin Xie wrote:
> >> >> > >> On 2023/7/20 0:48, Bjorn Helgaas wrote:
> >> >> > >> > On Wed, Jul 19, 2023 at 06:20:56PM +0800, Minda Chen wrote:
> >> >> > >> >> Add StarFive JH7110 SoC PCIe controller platform
> >> >> > >> >> driver codes.
> >> >> 
> >> >> > >> However, in the compatibility testing with several NVMe SSD, we
> >> >> > >> found that Lenovo Thinklife ST8000 NVMe can not get ready in 100ms,
> >> >> > >> and it actually needs almost 200ms.  Thus, we increased the T_PVPERL
> >> >> > >> value to 300ms for the better device compatibility.
> >> >> > > ...
> >> >> > > 
> >> >> > > Thanks for this valuable information!  This NVMe issue potentially
> >> >> > > affects many similar drivers, and we may need a more generic fix so
> >> >> > > this device works well with all of them.
> >> >> > > 
> >> >> > > T_PVPERL is defined to start when power is stable.  Do you have a way
> >> >> > > to accurately determine that point?  I'm guessing this:
> >> >> > > 
> >> >> > >   gpiod_set_value_cansleep(pcie->power_gpio, 1)
> >> >> > > 
> >> >> > > turns the power on?  But of course that doesn't mean it is instantly
> >> >> > > stable.  Maybe your testing is telling you that your driver should
> >> >> > > have a hardware-specific 200ms delay to wait for power to become
> >> >> > > stable, followed by the standard 100ms for T_PVPERL?
> >> >> > 
> >> >> > You are right, we did not take the power stable cost into account.
> >> >> > T_PVPERL is enough for Lenovo Thinklife ST8000 NVMe SSD to get ready,
> >> >> > and the extra cost is from the power circuit of a PCIe to M.2 connector,
> >> >> > which is used to verify M.2 SSD with our EVB at early stage.
> >> >> 
> >> >> Hmm.  That sounds potentially interesting.  I assume you're talking
> >> >> about something like this: https://www.amazon.com/dp/B07JKH5VTL
> >> >> 
> >> >> I'm not familiar with the timing requirements for something like this.
> >> >> There is a PCIe M.2 spec with some timing requirements, but I don't
> >> >> know whether or how software is supposed to manage this.  There is a
> >> >> T_PVPGL (power valid to PERST# inactive) parameter, but it's
> >> >> implementation specific, so I don't know what the point of that is.
> >> >> And I don't see a way for software to even detect the presence of such
> >> >> an adapter.
> >> > 
> >> > I intended to ask about this on the PCI-SIG forum, but after reading
> >> > this thread [1], I don't think we would learn anything.  The question
> >> > was:
> >> > 
> >> >   The M.2 device has 5 voltage rails generated from the 3.3V input
> >> >   supply voltage
> >> >   -------------------------------------------
> >> >   This is re. Table 17 in PCI Express M.2 Specification Revision 1.1
> >> >   Power Valid* to PERST# input inactive : Implementation specific;
> >> >   recommended 50 ms
> >> > 
> >> >   What exactly does this mean ?
> >> > 
> >> >   The Note says
> >> > 
> >> >     *Power Valid when all the voltage supply rails have reached their
> >> >     respective Vmin.
> >> > 
> >> >   Does this mean that the 50ms to PERSTn is counted from the instant
> >> >   when all *5 voltage rails* on the M.2 device have become "good" ?
> >> > 
> >> > and the answer was:
> >> > 
> >> >   You wrote;
> >> >   Does this mean that the 50ms to PERSTn is counted from the instant
> >> >   when all 5 voltage rails on the M.2 device have become "good" ?
> >> > 
> >> >   Reply:
> >> >   This means that counting the recommended 50 ms begins from the time
> >> >   when the power rails coming to the device/module, from the host, are
> >> >   stable *at the device connector*.
> >> > 
> >> >   As for the time it takes voltages derived inside the device from any
> >> >   of the host power rails (e.g., 3.3V rail) to become stable, that is
> >> >   part of the 50ms the host should wait before de-asserting PERST#, in
> >> >   order ensure that most devices will be ready by then.
> >> > 
> >> >   Strictly speaking, nothing disastrous happens if a host violates the
> >> >   50ms. If it de-asserts too soon, the device may not be ready, but
> >> >   most hosts will try again. If the host de-asserts too late, the
> >> >   device has even more time to stabilize. This is why the WG felt that
> >> >   an exact minimum number for >>Tpvpgl, was not valid in practice, and
> >> >   we made it a recommendation.
> >> > 
> >> > Since T_PVPGL is implementation-specific, we can't really base
> >> > anything in software on the 50ms recommendation.  It sounds to me like
> >> > they are counting on software to retry config reads when enumerating.
> >> > 
> >> > I guess the delays we *can* observe are:
> >> > 
> >> >   100ms T_PVPERL "Power stable to PERST# inactive" (CEM 2.9.2)
> >> >   100ms software delay between reset and config request (Base 6.6.1)
> >> 
> >> Refer to Figure2-10 in CEM Spec V2.0, I guess this two delays are T2 & T4?
> >> In the PATCH v2[4/4], T2 is the msleep(100) for T_PVPERL,
> >> and T4 is done by starfive_pcie_host_wait_for_link().
> > 
> > Yes, I think "T2" is T_PVPERL.  The CEM r2.0 Figure 2-10 note is
> > "2. Minimum time from power rails within specified tolerance to
> > PERST# inactive (T_PVPERL)."
> > 
> > As far as T4 ("Minimum PERST# inactive to PCI Express link out of
> > electrical idle"), I don't see a name or a value for that parameter,
> > and I don't think it is the delay required by PCIe r6.0, sec 6.6.1.
> > 
> > The delay required by sec 6.6.1 is a minimum of 100ms following exit
> > from reset or, for fast links, 100ms after link training completes.
> > 
> > The comment at the call of advk_pcie_wait_for_link() [2] says it is
> > the delay required by sec 6.6.1, but that doesn't seem right to me.
> > 
> > For one thing, I don't think 6.6.1 says anything about "link up" being
> > the end of a delay.  So if we want to do the delay required by 6.6.1,
> > "wait_for_link()" doesn't seem like quite the right name.
> > 
> > For another, all the *_wait_for_link() functions can return success
> > after 0ms, 90ms, 180ms, etc.  They're unlikely to return after 0ms,
> > but 90ms is quite possible.  If we avoided the 0ms return and
> > LINK_WAIT_USLEEP_MIN were 100ms instead of 90ms, that should be enough
> > for slow links, where we need 100ms following "exit from reset."
> > 
> > But it's still not enough for fast links where we need 100ms "after
> > link training completes" because we don't know when training
> > completed.  If training completed 89ms into *_wait_for_link(), we only
> > delay 1ms after that.
> > 
> 
> That's the point; we will add an extra 100ms after PERST# de-assert
> in patch v3, per Base Spec r6.0, sec 6.6.1:
>         msleep(100);
>         gpiod_set_value_cansleep(pcie->reset_gpio, 0);
> 
> +       /*
> +        * Per PCIe base spec r6.0, sec 6.6.1, the system must wait a
> +        * minimum of 100 ms following exit from a Conventional Reset
> +        * before sending a Configuration Request to the device.
> +        */
> +       msleep(100);
> +
>         if (starfive_pcie_host_wait_for_link(pcie))
>                 return -EIO;
> 

Maybe this information can be useful here:
https://lore.kernel.org/linux-pci/20211022183808.jdeo7vntnagqkg7g@pali/

> >> > The PCI core doesn't know how to assert PERST#, so the T_PVPERL delay
> >> > definitely has to be in the host controller driver.
> >> > 
> >> > The PCI core observes the second 100ms delay after a reset in
> >> > pci_bridge_wait_for_secondary_bus().  But this 100ms delay does not
> >> > happen during initial enumeration.  I think the assumption of the PCI
> >> > core is that when the host controller driver calls pci_host_probe(),
> >> > we can issue config requests immediately.
> >> > 
> >> > So I think that to be safe, we probably need to do both of those 100ms
> >> > delays in the host controller driver.  Maybe there's some hope of
> >> > supporting the latter one in the PCI core someday, but that's not
> >> > today.
> >> > 
> >> > Bjorn
> >> > 
> >> > [1] https://forum.pcisig.com/viewtopic.php?f=74&t=1037
> > 
> > [2] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/pci/controller/pci-aardvark.c?id=v6.4#n433

^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [PATCH v1 8/9] PCI: PLDA: starfive: Add JH7110 PCIe controller
@ 2023-08-01  7:14                       ` Pali Rohár
  0 siblings, 0 replies; 92+ messages in thread
From: Pali Rohár @ 2023-08-01  7:14 UTC (permalink / raw)
  To: Kevin Xie
  Cc: Bjorn Helgaas, Minda Chen, Daire McNamara, Conor Dooley,
	Rob Herring, Krzysztof Kozlowski, Bjorn Helgaas,
	Lorenzo Pieralisi, Krzysztof Wilczyński,
	Emil Renner Berthing, devicetree, linux-kernel, linux-riscv,
	linux-pci, Paul Walmsley, Palmer Dabbelt, Albert Ou,
	Philipp Zabel, Mason Huo, Leyfoon Tan, Mika Westerberg,
	Maciej W. Rozycki, Marek Behún

On Tuesday 01 August 2023 15:05:46 Kevin Xie wrote:
> 
> 
> On 2023/8/1 7:12, Bjorn Helgaas wrote:
> > [+cc Pali, Marek because I used f76b36d40bee ("PCI: aardvark: Fix link
> > training") as an example]
> > 
> > On Mon, Jul 31, 2023 at 01:52:35PM +0800, Kevin Xie wrote:
> >> On 2023/7/28 5:40, Bjorn Helgaas wrote:
> >> > On Tue, Jul 25, 2023 at 03:46:35PM -0500, Bjorn Helgaas wrote:
> >> >> On Mon, Jul 24, 2023 at 06:48:47PM +0800, Kevin Xie wrote:
> >> >> > On 2023/7/21 0:15, Bjorn Helgaas wrote:
> >> >> > > On Thu, Jul 20, 2023 at 06:11:59PM +0800, Kevin Xie wrote:
> >> >> > >> On 2023/7/20 0:48, Bjorn Helgaas wrote:
> >> >> > >> > On Wed, Jul 19, 2023 at 06:20:56PM +0800, Minda Chen wrote:
> >> >> > >> >> Add StarFive JH7110 SoC PCIe controller platform
> >> >> > >> >> driver codes.
> >> >> 
> >> >> > >> However, in the compatibility testing with several NVMe SSD, we
> >> >> > >> found that Lenovo Thinklife ST8000 NVMe can not get ready in 100ms,
> >> >> > >> and it actually needs almost 200ms.  Thus, we increased the T_PVPERL
> >> >> > >> value to 300ms for the better device compatibility.
> >> >> > > ...
> >> >> > > 
> >> >> > > Thanks for this valuable information!  This NVMe issue potentially
> >> >> > > affects many similar drivers, and we may need a more generic fix so
> >> >> > > this device works well with all of them.
> >> >> > > 
> >> >> > > T_PVPERL is defined to start when power is stable.  Do you have a way
> >> >> > > to accurately determine that point?  I'm guessing this:
> >> >> > > 
> >> >> > >   gpiod_set_value_cansleep(pcie->power_gpio, 1)
> >> >> > > 
> >> >> > > turns the power on?  But of course that doesn't mean it is instantly
> >> >> > > stable.  Maybe your testing is telling you that your driver should
> >> >> > > have a hardware-specific 200ms delay to wait for power to become
> >> >> > > stable, followed by the standard 100ms for T_PVPERL?
> >> >> > 
> >> >> > You are right, we did not take the power stable cost into account.
> >> >> > T_PVPERL is enough for Lenovo Thinklife ST8000 NVMe SSD to get ready,
> >> >> > and the extra cost is from the power circuit of a PCIe to M.2 connector,
> >> >> > which is used to verify M.2 SSD with our EVB at early stage.
> >> >> 
> >> >> Hmm.  That sounds potentially interesting.  I assume you're talking
> >> >> about something like this: https://www.amazon.com/dp/B07JKH5VTL
> >> >> 
> >> >> I'm not familiar with the timing requirements for something like this.
> >> >> There is a PCIe M.2 spec with some timing requirements, but I don't
> >> >> know whether or how software is supposed to manage this.  There is a
> >> >> T_PVPGL (power valid to PERST# inactive) parameter, but it's
> >> >> implementation specific, so I don't know what the point of that is.
> >> >> And I don't see a way for software to even detect the presence of such
> >> >> an adapter.
> >> > 
> >> > I intended to ask about this on the PCI-SIG forum, but after reading
> >> > this thread [1], I don't think we would learn anything.  The question
> >> > was:
> >> > 
> >> >   The M.2 device has 5 voltage rails generated from the 3.3V input
> >> >   supply voltage
> >> >   -------------------------------------------
> >> >   This is re. Table 17 in PCI Express M.2 Specification Revision 1.1
> >> >   Power Valid* to PERST# input inactive : Implementation specific;
> >> >   recommended 50 ms
> >> > 
> >> >   What exactly does this mean ?
> >> > 
> >> >   The Note says
> >> > 
> >> >     *Power Valid when all the voltage supply rails have reached their
> >> >     respective Vmin.
> >> > 
> >> >   Does this mean that the 50ms to PERSTn is counted from the instant
> >> >   when all *5 voltage rails* on the M.2 device have become "good" ?
> >> > 
> >> > and the answer was:
> >> > 
> >> >   You wrote;
> >> >   Does this mean that the 50ms to PERSTn is counted from the instant
> >> >   when all 5 voltage rails on the M.2 device have become "good" ?
> >> > 
> >> >   Reply:
> >> >   This means that counting the recommended 50 ms begins from the time
> >> >   when the power rails coming to the device/module, from the host, are
> >> >   stable *at the device connector*.
> >> > 
> >> >   As for the time it takes voltages derived inside the device from any
> >> >   of the host power rails (e.g., 3.3V rail) to become stable, that is
> >> >   part of the 50ms the host should wait before de-asserting PERST#, in
> >> >   order ensure that most devices will be ready by then.
> >> > 
> >> >   Strictly speaking, nothing disastrous happens if a host violates the
> >> >   50ms. If it de-asserts too soon, the device may not be ready, but
> >> >   most hosts will try again. If the host de-asserts too late, the
> >> >   device has even more time to stabilize. This is why the WG felt that
> >> >   an exact minimum number for >>Tpvpgl, was not valid in practice, and
> >> >   we made it a recommendation.
> >> > 
> >> > Since T_PVPGL is implementation-specific, we can't really base
> >> > anything in software on the 50ms recommendation.  It sounds to me like
> >> > they are counting on software to retry config reads when enumerating.
> >> > 
> >> > I guess the delays we *can* observe are:
> >> > 
> >> >   100ms T_PVPERL "Power stable to PERST# inactive" (CEM 2.9.2)
> >> >   100ms software delay between reset and config request (Base 6.6.1)
> >> 
> >> Refer to Figure2-10 in CEM Spec V2.0, I guess this two delays are T2 & T4?
> >> In the PATCH v2[4/4], T2 is the msleep(100) for T_PVPERL,
> >> and T4 is done by starfive_pcie_host_wait_for_link().
> > 
> > Yes, I think "T2" is T_PVPERL.  The CEM r2.0 Figure 2-10 note is
> > "2. Minimum time from power rails within specified tolerance to
> > PERST# inactive (T_PVPERL)."
> > 
> > As far as T4 ("Minimum PERST# inactive to PCI Express link out of
> > electrical idle"), I don't see a name or a value for that parameter,
> > and I don't think it is the delay required by PCIe r6.0, sec 6.6.1.
> > 
> > The delay required by sec 6.6.1 is a minimum of 100ms following exit
> > from reset or, for fast links, 100ms after link training completes.
> > 
> > The comment at the call of advk_pcie_wait_for_link() [2] says it is
> > the delay required by sec 6.6.1, but that doesn't seem right to me.
> > 
> > For one thing, I don't think 6.6.1 says anything about "link up" being
> > the end of a delay.  So if we want to do the delay required by 6.6.1,
> > "wait_for_link()" doesn't seem like quite the right name.
> > 
> > For another, all the *_wait_for_link() functions can return success
> > after 0ms, 90ms, 180ms, etc.  They're unlikely to return after 0ms,
> > but 90ms is quite possible.  If we avoided the 0ms return and
> > LINK_WAIT_USLEEP_MIN were 100ms instead of 90ms, that should be enough
> > for slow links, where we need 100ms following "exit from reset."
> > 
> > But it's still not enough for fast links where we need 100ms "after
> > link training completes" because we don't know when training
> > completed.  If training completed 89ms into *_wait_for_link(), we only
> > delay 1ms after that.
> > 
> 
> That's the point: we will add an extra 100 ms after PERST# de-assert
> in patch v3, per Base Spec r6.0, sec 6.6.1:
>         msleep(100);
>         gpiod_set_value_cansleep(pcie->reset_gpio, 0);
> 
> +       /* Per PCIe base spec r6.0, sec 6.6.1, the system must wait a
> +        * minimum of 100 ms following exit from a Conventional Reset
> +        * before sending a Configuration Request to the device. */
> +       msleep(100);
> +
>         if (starfive_pcie_host_wait_for_link(pcie))
>                 return -EIO;
> 

Maybe this information can be useful here:
https://lore.kernel.org/linux-pci/20211022183808.jdeo7vntnagqkg7g@pali/
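
To make the timing concrete, the sequence under discussion amounts to
the sketch below (illustrative only, not the actual driver code: the
helper and field names follow the quoted StarFive patch, and the GPIO
polarity of 1 = PERST# asserted is an assumption):

    static void starfive_pcie_reset_sequence(struct starfive_pcie *pcie)
    {
            /* T_PVPERL (CEM r2.0, sec 2.9.2): power stable to PERST# inactive */
            gpiod_set_value_cansleep(pcie->reset_gpio, 1);  /* assert PERST# */
            msleep(100);                                    /* >= 100 ms */
            gpiod_set_value_cansleep(pcie->reset_gpio, 0);  /* de-assert PERST# */

            /*
             * Base spec r6.0, sec 6.6.1: wait at least 100 ms after exit
             * from a Conventional Reset before sending the first config
             * request (fast links count from the end of link training).
             */
            msleep(100);
    }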

> >> > The PCI core doesn't know how to assert PERST#, so the T_PVPERL delay
> >> > definitely has to be in the host controller driver.
> >> > 
> >> > The PCI core observes the second 100ms delay after a reset in
> >> > pci_bridge_wait_for_secondary_bus().  But this 100ms delay does not
> >> > happen during initial enumeration.  I think the assumption of the PCI
> >> > core is that when the host controller driver calls pci_host_probe(),
> >> > we can issue config requests immediately.
> >> > 
> >> > So I think that to be safe, we probably need to do both of those 100ms
> >> > delays in the host controller driver.  Maybe there's some hope of
> >> > supporting the latter one in the PCI core someday, but that's not
> >> > today.
> >> > 
> >> > Bjorn
> >> > 
> >> > [1] https://forum.pcisig.com/viewtopic.php?f=74&t=1037
> > 
> > [2] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/pci/controller/pci-aardvark.c?id=v6.4#n433


* Re: [PATCH v1 8/9] PCI: PLDA: starfive: Add JH7110 PCIe controller
  2023-08-01  7:14                       ` Pali Rohár
@ 2023-08-02 17:14                         ` Bjorn Helgaas
  -1 siblings, 0 replies; 92+ messages in thread
From: Bjorn Helgaas @ 2023-08-02 17:14 UTC (permalink / raw)
  To: Pali Rohár
  Cc: Kevin Xie, Minda Chen, Daire McNamara, Conor Dooley, Rob Herring,
	Krzysztof Kozlowski, Bjorn Helgaas, Lorenzo Pieralisi,
	Krzysztof Wilczyński, Emil Renner Berthing, devicetree,
	linux-kernel, linux-riscv, linux-pci, Paul Walmsley,
	Palmer Dabbelt, Albert Ou, Philipp Zabel, Mason Huo, Leyfoon Tan,
	Mika Westerberg, Maciej W. Rozycki, Marek Behún

On Tue, Aug 01, 2023 at 09:14:53AM +0200, Pali Rohár wrote:
> On Tuesday 01 August 2023 15:05:46 Kevin Xie wrote:
> > On 2023/8/1 7:12, Bjorn Helgaas wrote:
> > ...

> > That's the point: we will add an extra 100 ms after PERST# de-assert
> > in patch v3, per Base Spec r6.0, sec 6.6.1:
> >         msleep(100);
> >         gpiod_set_value_cansleep(pcie->reset_gpio, 0);
> > 
> > +       /* Per PCIe base spec r6.0, sec 6.6.1, the system must wait a
> > +        * minimum of 100 ms following exit from a Conventional Reset
> > +        * before sending a Configuration Request to the device. */
> > +       msleep(100);
> > +
> >         if (starfive_pcie_host_wait_for_link(pcie))
> >                 return -EIO;
> 
> Maybe this information can be useful here:
> https://lore.kernel.org/linux-pci/20211022183808.jdeo7vntnagqkg7g@pali/

Yes, thank you!  That is a great summary!


* Re: [PATCH v1 8/9] PCI: PLDA: starfive: Add JH7110 PCIe controller
  2023-08-01  7:05                     ` Kevin Xie
@ 2023-08-02 17:18                       ` Bjorn Helgaas
  -1 siblings, 0 replies; 92+ messages in thread
From: Bjorn Helgaas @ 2023-08-02 17:18 UTC (permalink / raw)
  To: Kevin Xie
  Cc: Minda Chen, Daire McNamara, Conor Dooley, Rob Herring,
	Krzysztof Kozlowski, Bjorn Helgaas, Lorenzo Pieralisi,
	Krzysztof Wilczyński, Emil Renner Berthing, devicetree,
	linux-kernel, linux-riscv, linux-pci, Paul Walmsley,
	Palmer Dabbelt, Albert Ou, Philipp Zabel, Mason Huo, Leyfoon Tan,
	Mika Westerberg, Maciej W. Rozycki, Pali Rohár,
	Marek Behún

On Tue, Aug 01, 2023 at 03:05:46PM +0800, Kevin Xie wrote:
> On 2023/8/1 7:12, Bjorn Helgaas wrote:
> ...

> > The delay required by sec 6.6.1 is a minimum of 100ms following exit
> > from reset or, for fast links, 100ms after link training completes.
> > 
> > The comment at the call of advk_pcie_wait_for_link() [2] says it is
> > the delay required by sec 6.6.1, but that doesn't seem right to me.
> > 
> > For one thing, I don't think 6.6.1 says anything about "link up" being
> > the end of a delay.  So if we want to do the delay required by 6.6.1,
> > "wait_for_link()" doesn't seem like quite the right name.
> > 
> > For another, all the *_wait_for_link() functions can return success
> > after 0ms, 90ms, 180ms, etc.  They're unlikely to return after 0ms,
> > but 90ms is quite possible.  If we avoided the 0ms return and
> > LINK_WAIT_USLEEP_MIN were 100ms instead of 90ms, that should be enough
> > for slow links, where we need 100ms following "exit from reset."
> > 
> > But it's still not enough for fast links where we need 100ms "after
> > link training completes" because we don't know when training
> > completed.  If training completed 89ms into *_wait_for_link(), we only
> > delay 1ms after that.
> 
> That's the point: we will add an extra 100 ms after PERST# de-assert
> in patch v3, per Base Spec r6.0, sec 6.6.1:
>         msleep(100);
>         gpiod_set_value_cansleep(pcie->reset_gpio, 0);
> 
> +       /* Per PCIe base spec r6.0, sec 6.6.1, the system must wait a
> +        * minimum of 100 ms following exit from a Conventional Reset
> +        * before sending a Configuration Request to the device. */
> +       msleep(100);
> +
>         if (starfive_pcie_host_wait_for_link(pcie))
>                 return -EIO;

For fast links (links that support > 5.0 GT/s), the 100ms starts
*after* link training completes.  The above looks OK if starfive only
supports slow links, but then I'm not sure why we would need
starfive_pcie_host_wait_for_link().
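
A hedged sketch of what honoring both cases would look like (the
link-speed field and the training-complete helper are illustrative
assumptions, not an existing API):

    /* Illustrative delay per PCIe r6.0, sec 6.6.1 */
    if (pcie->link_speed <= PCIE_SPEED_5_0GT) {
            /* slow link: 100 ms from exit from Conventional Reset */
            msleep(100);
    } else {
            /* fast link: 100 ms measured *after* training completes */
            starfive_pcie_wait_training_done(pcie); /* hypothetical helper */
            msleep(100);
    }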

Bjorn


* Re: [PATCH v1 8/9] PCI: PLDA: starfive: Add JH7110 PCIe controller
  2023-08-02 17:18                       ` Bjorn Helgaas
@ 2023-08-03  2:23                         ` Kevin Xie
  -1 siblings, 0 replies; 92+ messages in thread
From: Kevin Xie @ 2023-08-03  2:23 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: Minda Chen, Daire McNamara, Conor Dooley, Rob Herring,
	Krzysztof Kozlowski, Bjorn Helgaas, Lorenzo Pieralisi,
	Krzysztof Wilczyński, Emil Renner Berthing, devicetree,
	linux-kernel, linux-riscv, linux-pci, Paul Walmsley,
	Palmer Dabbelt, Albert Ou, Philipp Zabel, Mason Huo, Leyfoon Tan,
	Mika Westerberg, Maciej W. Rozycki, Pali Rohár,
	Marek Behún



On 2023/8/3 1:18, Bjorn Helgaas wrote:
> On Tue, Aug 01, 2023 at 03:05:46PM +0800, Kevin Xie wrote:
>> On 2023/8/1 7:12, Bjorn Helgaas wrote:
>> ...
> 
>> > The delay required by sec 6.6.1 is a minimum of 100ms following exit
>> > from reset or, for fast links, 100ms after link training completes.
>> > 
>> > The comment at the call of advk_pcie_wait_for_link() [2] says it is
>> > the delay required by sec 6.6.1, but that doesn't seem right to me.
>> > 
>> > For one thing, I don't think 6.6.1 says anything about "link up" being
>> > the end of a delay.  So if we want to do the delay required by 6.6.1,
>> > "wait_for_link()" doesn't seem like quite the right name.
>> > 
>> > For another, all the *_wait_for_link() functions can return success
>> > after 0ms, 90ms, 180ms, etc.  They're unlikely to return after 0ms,
>> > but 90ms is quite possible.  If we avoided the 0ms return and
>> > LINK_WAIT_USLEEP_MIN were 100ms instead of 90ms, that should be enough
>> > for slow links, where we need 100ms following "exit from reset."
>> > 
>> > But it's still not enough for fast links where we need 100ms "after
>> > link training completes" because we don't know when training
>> > completed.  If training completed 89ms into *_wait_for_link(), we only
>> > delay 1ms after that.
>> 
>> That's the point: we will add an extra 100 ms after PERST# de-assert
>> in patch v3, per Base Spec r6.0, sec 6.6.1:
>>         msleep(100);
>>         gpiod_set_value_cansleep(pcie->reset_gpio, 0);
>> 
>> +       /* Per PCIe base spec r6.0, sec 6.6.1, the system must wait a
>> +        * minimum of 100 ms following exit from a Conventional Reset
>> +        * before sending a Configuration Request to the device. */
>> +       msleep(100);
>> +
>>         if (starfive_pcie_host_wait_for_link(pcie))
>>                 return -EIO;
> 
> For fast links (links that support > 5.0 GT/s), the 100ms starts
> *after* link training completes.  The above looks OK if starfive only
> supports slow links, but then I'm not sure why we would need
> starfive_pcie_host_wait_for_link().
> 
Yes, the maximum speed of JH7110 PCIe is 5.0 GT/s (Gen2x1).

About starfive_pcie_host_wait_for_link():
The JH7110 SoC has only one root port per PCIe controller (2 in total),
and they do not support hot-plug yet.
Thus, we added starfive_pcie_host_wait_for_link() to poll whether the slot is empty.
If nothing is there, we exit the probe() of this controller, and it never
reaches pci_host_probe() either.
This may not be very standard logic; should we remove it or rewrite it in a better way?
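
For reference, the poll has the usual *_wait_for_link() shape (a sketch
with illustrative constants, where starfive_pcie_link_up() stands in
for the driver's link-status register read):

    #define LINK_WAIT_MAX_RETRIES   10
    #define LINK_WAIT_USLEEP_MIN    90000   /* 90 ms */
    #define LINK_WAIT_USLEEP_MAX    100000  /* 100 ms */

    static int starfive_pcie_host_wait_for_link(struct starfive_pcie *pcie)
    {
            int retries;

            /* may return after 0 ms, 90 ms, 180 ms, ... as noted earlier */
            for (retries = 0; retries < LINK_WAIT_MAX_RETRIES; retries++) {
                    if (starfive_pcie_link_up(pcie))
                            return 0;
                    usleep_range(LINK_WAIT_USLEEP_MIN, LINK_WAIT_USLEEP_MAX);
            }

            return -ETIMEDOUT;
    }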

> Bjorn


* Re: [PATCH v1 8/9] PCI: PLDA: starfive: Add JH7110 PCIe controller
  2023-08-03  2:23                         ` Kevin Xie
@ 2023-08-03  6:58                           ` Pali Rohár
  -1 siblings, 0 replies; 92+ messages in thread
From: Pali Rohár @ 2023-08-03  6:58 UTC (permalink / raw)
  To: Kevin Xie
  Cc: Bjorn Helgaas, Minda Chen, Daire McNamara, Conor Dooley,
	Rob Herring, Krzysztof Kozlowski, Bjorn Helgaas,
	Lorenzo Pieralisi, Krzysztof Wilczyński,
	Emil Renner Berthing, devicetree, linux-kernel, linux-riscv,
	linux-pci, Paul Walmsley, Palmer Dabbelt, Albert Ou,
	Philipp Zabel, Mason Huo, Leyfoon Tan, Mika Westerberg,
	Maciej W. Rozycki, Marek Behún

On Thursday 03 August 2023 10:23:47 Kevin Xie wrote:
> On 2023/8/3 1:18, Bjorn Helgaas wrote:
> > On Tue, Aug 01, 2023 at 03:05:46PM +0800, Kevin Xie wrote:
> >> On 2023/8/1 7:12, Bjorn Helgaas wrote:
> >> ...
> > 
> >> > The delay required by sec 6.6.1 is a minimum of 100ms following exit
> >> > from reset or, for fast links, 100ms after link training completes.
> >> > 
> >> > The comment at the call of advk_pcie_wait_for_link() [2] says it is
> >> > the delay required by sec 6.6.1, but that doesn't seem right to me.
> >> > 
> >> > For one thing, I don't think 6.6.1 says anything about "link up" being
> >> > the end of a delay.  So if we want to do the delay required by 6.6.1,
> >> > "wait_for_link()" doesn't seem like quite the right name.
> >> > 
> >> > For another, all the *_wait_for_link() functions can return success
> >> > after 0ms, 90ms, 180ms, etc.  They're unlikely to return after 0ms,
> >> > but 90ms is quite possible.  If we avoided the 0ms return and
> >> > LINK_WAIT_USLEEP_MIN were 100ms instead of 90ms, that should be enough
> >> > for slow links, where we need 100ms following "exit from reset."
> >> > 
> >> > But it's still not enough for fast links where we need 100ms "after
> >> > link training completes" because we don't know when training
> >> > completed.  If training completed 89ms into *_wait_for_link(), we only
> >> > delay 1ms after that.
> >> 
> >> That's the point: we will add an extra 100 ms after PERST# de-assert
> >> in patch v3, per Base Spec r6.0, sec 6.6.1:
> >>         msleep(100);
> >>         gpiod_set_value_cansleep(pcie->reset_gpio, 0);
> >> 
> >> +       /* Per PCIe base spec r6.0, sec 6.6.1, the system must wait a
> >> +        * minimum of 100 ms following exit from a Conventional Reset
> >> +        * before sending a Configuration Request to the device. */
> >> +       msleep(100);
> >> +
> >>         if (starfive_pcie_host_wait_for_link(pcie))
> >>                 return -EIO;
> > 
> > For fast links (links that support > 5.0 GT/s), the 100ms starts
> > *after* link training completes.  The above looks OK if starfive only
> > supports slow links, but then I'm not sure why we would need
> > starfive_pcie_host_wait_for_link().
> > 
> Yes, the maximum speed of JH7110 PCIe is 5.0 GT/s (Gen2x1).
> 
> About starfive_pcie_host_wait_for_link():
> The JH7110 SoC has only one root port per PCIe controller (2 in total),
> and they do not support hot-plug yet.

Beware that even if the HW does not support hotplug, an endpoint PCIe
card may still drop the link and later bring it back up (for example,
if the FW in the card crashes or the card wants to do an internal
reset, etc.; this is very common for wifi cards). So drivers for
non-hotplug controllers still have to handle hotplug events generated
by link up/down changes.

It also means that if an endpoint PCIe card is not detected at probe
time, it may be detected later, so using this check to completely stop
registering the controller is not a good idea. Note that userspace can
tell the kernel (via sysfs) to rescan all PCIe buses and try to
discover new PCIe devices.

> Thus, we added starfive_pcie_host_wait_for_link() to poll whether the slot is empty.
> If nothing is there, we exit the probe() of this controller, and it never
> reaches pci_host_probe() either.
> This may not be very standard logic; should we remove it or rewrite it in a better way?
> 
> > Bjorn

I would rather remove this starfive_pcie_host_wait_for_link() logic.

A better option would be to teach the PCI core code to wait for the
link before trying to read vendor/device IDs, as I described in my old
proposal.
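
The idea, roughly (a sketch only, not actual PCI core code; the
function name here is hypothetical):

    /* Retry the first config read until the device behind the link responds. */
    static bool pci_bus_wait_for_device(struct pci_bus *bus, int devfn,
                                        int timeout_ms)
    {
            u32 id;

            do {
                    if (!pci_bus_read_config_dword(bus, devfn, PCI_VENDOR_ID,
                                                   &id) && id != 0xffffffff)
                            return true;    /* vendor ID read succeeded */
                    msleep(10);
                    timeout_ms -= 10;
            } while (timeout_ms > 0);

            return false;                   /* no device answered in time */
    }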


* Re: [PATCH v1 8/9] PCI: PLDA: starfive: Add JH7110 PCIe controller
  2023-08-03  6:58                           ` Pali Rohár
@ 2023-08-03  7:43                             ` Kevin Xie
  -1 siblings, 0 replies; 92+ messages in thread
From: Kevin Xie @ 2023-08-03  7:43 UTC (permalink / raw)
  To: Pali Rohár
  Cc: Bjorn Helgaas, Minda Chen, Daire McNamara, Conor Dooley,
	Rob Herring, Krzysztof Kozlowski, Bjorn Helgaas,
	Lorenzo Pieralisi, Krzysztof Wilczyński,
	Emil Renner Berthing, devicetree, linux-kernel, linux-riscv,
	linux-pci, Paul Walmsley, Palmer Dabbelt, Albert Ou,
	Philipp Zabel, Mason Huo, Leyfoon Tan, Mika Westerberg,
	Maciej W. Rozycki, Marek Behún



On 2023/8/3 14:58, Pali Rohár wrote:
> On Thursday 03 August 2023 10:23:47 Kevin Xie wrote:
>> On 2023/8/3 1:18, Bjorn Helgaas wrote:
>> > On Tue, Aug 01, 2023 at 03:05:46PM +0800, Kevin Xie wrote:
>> >> On 2023/8/1 7:12, Bjorn Helgaas wrote:
>> >> ...
>> > 
>> >> > The delay required by sec 6.6.1 is a minimum of 100ms following exit
>> >> > from reset or, for fast links, 100ms after link training completes.
>> >> > 
>> >> > The comment at the call of advk_pcie_wait_for_link() [2] says it is
>> >> > the delay required by sec 6.6.1, but that doesn't seem right to me.
>> >> > 
>> >> > For one thing, I don't think 6.6.1 says anything about "link up" being
>> >> > the end of a delay.  So if we want to do the delay required by 6.6.1,
>> >> > "wait_for_link()" doesn't seem like quite the right name.
>> >> > 
>> >> > For another, all the *_wait_for_link() functions can return success
>> >> > after 0ms, 90ms, 180ms, etc.  They're unlikely to return after 0ms,
>> >> > but 90ms is quite possible.  If we avoided the 0ms return and
>> >> > LINK_WAIT_USLEEP_MIN were 100ms instead of 90ms, that should be enough
>> >> > for slow links, where we need 100ms following "exit from reset."
>> >> > 
>> >> > But it's still not enough for fast links where we need 100ms "after
>> >> > link training completes" because we don't know when training
>> >> > completed.  If training completed 89ms into *_wait_for_link(), we only
>> >> > delay 1ms after that.
>> >> 
>> >> That's the point: we will add an extra 100 ms after PERST# de-assert
>> >> in patch v3, per Base Spec r6.0, sec 6.6.1:
>> >>         msleep(100);
>> >>         gpiod_set_value_cansleep(pcie->reset_gpio, 0);
>> >> 
>> >> +       /* Per PCIe base spec r6.0, sec 6.6.1, the system must wait a
>> >> +        * minimum of 100 ms following exit from a Conventional Reset
>> >> +        * before sending a Configuration Request to the device. */
>> >> +       msleep(100);
>> >> +
>> >>         if (starfive_pcie_host_wait_for_link(pcie))
>> >>                 return -EIO;
>> > 
>> > For fast links (links that support > 5.0 GT/s), the 100ms starts
>> > *after* link training completes.  The above looks OK if starfive only
>> > supports slow links, but then I'm not sure why we would need
>> > starfive_pcie_host_wait_for_link().
>> > 
>> Yes, the maximum speed of JH7110 PCIe is 5.0 GT/s (Gen2x1).
>> 
>> About starfive_pcie_host_wait_for_link():
>> The JH7110 SoC has only one root port per PCIe controller (2 in total),
>> and they do not support hot-plug yet.
> 
> Beware that even if the HW does not support hotplug, an endpoint PCIe
> card may still drop the link and later bring it back up (for example,
> if the FW in the card crashes or the card wants to do an internal
> reset, etc.; this is very common for wifi cards). So drivers for
> non-hotplug controllers still have to handle hotplug events generated
> by link up/down changes.
> 
> It also means that if an endpoint PCIe card is not detected at probe
> time, it may be detected later, so using this check to completely stop
> registering the controller is not a good idea. Note that userspace can
> tell the kernel (via sysfs) to rescan all PCIe buses and try to
> discover new PCIe devices.
> 

Yes, we should not ignore this situation.

>> Thus, we added starfive_pcie_host_wait_for_link() to poll whether the slot is empty.
>> If nothing is there, we exit the probe() of this controller, and it never
>> reaches pci_host_probe() either.
>> This may not be very standard logic; should we remove it or rewrite it in a better way?
>> 
>> > Bjorn
> 
> I would rather remove this starfive_pcie_host_wait_for_link() logic.
> 
> A better option would be to teach the PCI core code to wait for the
> link before trying to read vendor/device IDs, as I described in my old
> proposal.

Yes, the proposal would prevent us from getting the timing wrong.
However, as things stand, we have to do the waiting in the host
controller driver for now.
We will keep the wait for the link, but not return an error when the
link is down, such as:
    if (starfive_pcie_host_wait_for_link(pcie))
            dev_info(dev, "port link down\n");
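
With that change, probe() continues into pci_host_probe() even when the
slot is empty, and a later sysfs-triggered rescan can still discover a
device once the link comes up.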


end of thread

Thread overview: 92+ messages
2023-07-19 10:20 [PATCH v1 0/9] Refactoring Microchip PolarFire PCIe driver Minda Chen
2023-07-19 10:20 ` [PATCH v1 1/9] dt-bindings: PCI: Add PLDA XpressRICH PCIe host common properties Minda Chen
2023-07-19 10:52   ` Krzysztof Kozlowski
2023-07-20  6:59     ` Minda Chen
2023-07-19 22:31   ` Rob Herring
2023-07-20  6:47     ` Minda Chen
2023-07-19 10:20 ` [PATCH v1 2/9] dt-bindings: PCI: microchip: Remove the PLDA " Minda Chen
2023-07-19 10:53   ` Krzysztof Kozlowski
2023-07-19 10:20 ` [PATCH v1 3/9] PCI: PLDA: Get PLDA common codes from Microchip PolarFire host Minda Chen
2023-07-19 10:20 ` [PATCH v1 4/9] PCI: microchip: Move PCIe driver to PLDA directory Minda Chen
2023-07-20 11:07   ` Conor Dooley
2023-07-20 12:26   ` Conor Dooley
2023-07-21  1:12     ` Minda Chen
2023-07-19 10:20 ` [PATCH v1 5/9] dt-bindings: PLDA: Add PLDA XpressRICH PCIe host controller Minda Chen
2023-07-19 10:55   ` Krzysztof Kozlowski
2023-07-19 22:29   ` Rob Herring
2023-07-20  7:02     ` Minda Chen
2023-07-19 10:20 ` [PATCH v1 6/9] PCI: PLDA: Add host conroller platform driver Minda Chen
2023-07-19 10:20 ` [PATCH v1 7/9] dt-bindings: PCI: Add StarFive JH7110 PCIe controller Minda Chen
2023-07-19 10:56   ` Krzysztof Kozlowski
2023-07-19 10:20 ` [PATCH v1 8/9] PCI: PLDA: starfive: Add " Minda Chen
2023-07-19 16:48   ` Bjorn Helgaas
2023-07-20 10:11     ` Kevin Xie
2023-07-20 16:15       ` Bjorn Helgaas
2023-07-24 10:48         ` Kevin Xie
2023-07-25 20:46           ` Bjorn Helgaas
2023-07-27 21:40             ` Bjorn Helgaas
2023-07-31  5:52               ` Kevin Xie
2023-07-31 23:12                 ` Bjorn Helgaas
2023-08-01  7:05                   ` Pali Rohár
2023-08-01  7:05                   ` Kevin Xie
2023-08-01  7:14                     ` Pali Rohár
2023-08-02 17:14                       ` Bjorn Helgaas
2023-08-02 17:18                     ` Bjorn Helgaas
2023-08-03  2:23                       ` Kevin Xie
2023-08-03  6:58                         ` Pali Rohár
2023-08-03  7:43                           ` Kevin Xie
2023-07-20 11:14   ` Conor Dooley
2023-07-21  1:03     ` Minda Chen
2023-07-19 10:20 ` [PATCH v1 9/9] riscv: dts: starfive: add PCIe dts configuration for JH7110 Minda Chen
2023-07-19 15:26 ` [PATCH v1 0/9] Refactoring Microchip PolarFire PCIe driver Bjorn Helgaas
2023-07-20  2:15   ` Minda Chen
2023-07-20 12:12     ` Conor Dooley
2023-07-21  9:34       ` Minda Chen
2023-07-21  9:55       ` Minda Chen
2023-07-19 16:58 ` Conor Dooley
