* Using the generic host PCIe driver
@ 2017-02-27 16:14 ` Mason
From: Mason @ 2017-02-27 16:14 UTC (permalink / raw)
  To: linux-pci, Linux ARM
  Cc: Bjorn Helgaas, Will Deacon, David Daney, Rob Herring,
	Thierry Reding, Phuong Nguyen, Thibaud Cornic

On 17/02/2017, Bjorn Helgaas wrote:

> I don't know anything about your hardware or environment, but I highly
> encourage you to use [...] generic DT (drivers/pci/host/pci-host-generic.c)
> instead of writing a custom host controller driver.
> 
> The native drivers in drivers/pci/host are a huge maintenance hassle
> for no real benefit.
>
> You're right that the programming model of the host bridge itself is
> not specified by PCI specs, so it's impossible to have a generic
> driver that can manage it completely by itself.
> 
> If you have firmware that initializes the bridge and tells the OS what
> the windows are (bus numbers, memory space, I/O space) and where the
> PCIe-specified ECAM space is, a generic driver can take it from there.

I am currently trying the approach suggested by Bjorn.

This is what my DT node currently looks like:

		pcie@50000000 {
			compatible = "pci-host-ecam-generic";
			reg = <0x50000000 0x200000>;
			device_type = "pci";
			bus-range = <0>, <1>;
			#size-cells = <2>;
			#address-cells = <3>;
			#interrupt-cells = <1>;
			/* BUS_ADDRESS(3)  CPU_PHYSICAL(1)  SIZE(2) */
			ranges = <0x02000000 0x0 0xa0000000  0xa0000000  0x0 0x00100000>;
		};

(I am aware that interrupt specs are missing, but I hope this
is not a problem at an early stage.)

I.e. the configuration space is at 0x5000_0000
(so bus_0 at 0x5000_0000, and bus_1 at 0x5010_0000)
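
For illustration, this is the standard ECAM address decoding I am
assuming here (the ECAM_* names are mine, not from an existing driver):

/* ECAM: each bus gets 1 MB of config space (32 dev x 8 fn x 4 KB) */
#define ECAM_BASE	0x50000000
#define ECAM_ADDR(bus, dev, fn, reg) \
	(ECAM_BASE + ((bus) << 20) + ((dev) << 15) + ((fn) << 12) + (reg))

e.g. ECAM_ADDR(1, 0, 0, 0) == 0x50100000, matching bus_1 above.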

And I defined a 32-bit non-prefetchable memory space at 0xa0000000

Question 1:
Is there any reason to map CPU address 0xa0000000 anywhere else than
PCI bus address 0xa0000000?


Now, this host controller is revision 1, and as such contains several
"interesting" bugs. How can I work around them in software?


Bug #1

The controller reports an incorrect class. It is a bridge, but it
reports class = 0x048000

AFAICT, tegra had a similar problem, and the solution seems to be
to define a DECLARE_PCI_FIXUP_EARLY hook:

/* Root complex reports incorrect device class */
static void tango_pcie_fixup_class(struct pci_dev *dev)
{
	dev->class = PCI_CLASS_BRIDGE_PCI << 8;
}
DECLARE_PCI_FIXUP_EARLY(0x1105, 0x8758, tango_pcie_fixup_class);

Is that the correct way to work around bug #1?


Bug #2

Bus 0 cannot be enumerated, because it reports garbage data for
devices and functions other than 0, i.e. only 0/0/0 works.

How do I work around that issue?


I have a googol more questions, but I'll stick to these for the time being.

Regards.


* Re: Using the generic host PCIe driver
  2017-02-27 16:14 ` Mason
@ 2017-02-27 16:44   ` Bjorn Helgaas
From: Bjorn Helgaas @ 2017-02-27 16:44 UTC (permalink / raw)
  To: Mason
  Cc: Rob Herring, Phuong Nguyen, David Daney, linux-pci,
	Thibaud Cornic, Will Deacon, Thierry Reding, Linux ARM

On Mon, Feb 27, 2017 at 05:14:15PM +0100, Mason wrote:
> On 17/02/2017, Bjorn Helgaas wrote:
> 
> > I don't know anything about your hardware or environment, but I highly
> > encourage you to use [...] generic DT (drivers/pci/host/pci-host-generic.c)
> > instead of writing a custom host controller driver.
> > 
> > The native drivers in drivers/pci/host are a huge maintenance hassle
> > for no real benefit.
> >
> > You're right that the programming model of the host bridge itself is
> > not specified by PCI specs, so it's impossible to have a generic
> > driver that can manage it completely by itself.
> > 
> > If you have firmware that initializes the bridge and tells the OS what
> > the windows are (bus numbers, memory space, I/O space) and where the
> > PCIe-specified ECAM space is, a generic driver can take it from there.
> 
> I am currently trying the approach suggested by Bjorn.
> 
> This is what my DT node currently looks like:
> 
> 		pcie@50000000 {
> 			compatible = "pci-host-ecam-generic";
> 			reg = <0x50000000 0x200000>;
> 			device_type = "pci";
> 			bus-range = <0>, <1>;
> 			#size-cells = <2>;
> 			#address-cells = <3>;
> 			#interrupt-cells = <1>;
> 			/* BUS_ADDRESS(3)  CPU_PHYSICAL(1)  SIZE(2) */
> 			ranges = <0x02000000 0x0 0xa0000000  0xa0000000  0x0 0x00100000>;
> 		};
> 
> (I am aware that interrupt specs are missing, but I hope this
> is not a problem at an early stage.)
> 
> I.e. the configuration space is at 0x5000_0000
> (so bus_0 at 0x5000_0000, and bus_1 at 0x5010_0000)
> 
> And I defined a 32-bit non-prefetchable memory space at 0xa0000000
> 
> Question 1:
> Is there any reason to map CPU address 0xa0000000 anywhere else than
> PCI bus address 0xa0000000?

Not really.  Large systems with multiple host bridges might map things
differently so they can use the same 32-bit bus addresses on several
root buses.  But if you only have one host bridge, that's not
necessary.

> Now, this host controller is revision 1, and as such contains several
> "interesting" bugs. How can I work around them in software?
> 
> 
> Bug #1
> 
> The controller reports an incorrect class. It is a bridge, but it
> reports class = 0x048000
> 
> AFAICT, tegra had a similar problem, and the solution seems to be
> to define a DECLARE_PCI_FIXUP_EARLY hook:
> 
> /* Root complex reports incorrect device class */
> static void tango_pcie_fixup_class(struct pci_dev *dev)
> {
> 	dev->class = PCI_CLASS_BRIDGE_PCI << 8;
> }
> DECLARE_PCI_FIXUP_EARLY(0x1105, 0x8758, tango_pcie_fixup_class);
> 
> Is that the correct way to work around bug #1?

That seems fine, at least for now.

> Bug #2
> 
> Bus 0 cannot be enumerated, because it reports garbage data for
> devices and functions other than 0, i.e. only 0/0/0 works.
> 
> How do I work around that issue?

There are several drivers that provide their own "ECAM-like" config
accessors.  Look at "struct pci_ecam_ops" structures, e.g.,
hisi_pcie_ops, pci_thunder_ecam_ops, xgene_v1_pcie_ecam_ops, etc.

You can also work around Bug #1 in a custom accessor instead of a
quirk.
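
A minimal, untested sketch of what that could look like, modeled on
the pci_ecam_ops users above (the tango_* names are placeholders, and
I'm guessing at the details of your two bugs):

#include <linux/pci.h>
#include <linux/pci-ecam.h>

static void __iomem *tango_map_bus(struct pci_bus *bus,
				   unsigned int devfn, int where)
{
	/*
	 * Bug #2: only 00:00.0 responds sanely on the root bus.
	 * Returning NULL makes the generic accessors report
	 * PCIBIOS_DEVICE_NOT_FOUND, so the garbage slots are skipped.
	 */
	if (bus->number == 0 && devfn != 0)
		return NULL;
	return pci_ecam_map_bus(bus, devfn, where);
}

static int tango_config_read(struct pci_bus *bus, unsigned int devfn,
			     int where, int size, u32 *val)
{
	int ret = pci_generic_config_read(bus, devfn, where, size, val);

	/* Bug #1: root port reports class 0x048000; fake a bridge */
	if (ret == PCIBIOS_SUCCESSFUL && bus->number == 0 && devfn == 0 &&
	    where == PCI_CLASS_REVISION && size == 4)
		*val = (PCI_CLASS_BRIDGE_PCI << 16) | (*val & 0xff);
	return ret;
}

static struct pci_ecam_ops tango_ecam_ops = {
	.bus_shift	= 20,
	.pci_ops	= {
		.map_bus	= tango_map_bus,
		.read		= tango_config_read,
		.write		= pci_generic_config_write,
	},
};

(Hooking these ops up would still need a small driver or a DT
compatible string of its own, as discussed above.)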

Bjorn


* Re: Using the generic host PCIe driver
  2017-02-27 16:44   ` Bjorn Helgaas
@ 2017-02-27 17:02     ` Mason
From: Mason @ 2017-02-27 17:02 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: linux-pci, Linux ARM, Will Deacon, David Daney, Rob Herring,
	Thierry Reding, Phuong Nguyen, Thibaud Cornic

On 27/02/2017 17:44, Bjorn Helgaas wrote:

> On Mon, Feb 27, 2017 at 05:14:15PM +0100, Mason wrote:
>
>> Bug #2
>>
>> Bus 0 cannot be enumerated, because it reports garbage data for
>> devices and functions other than 0, i.e. only 0/0/0 works.
>>
>> How do I work around that issue?
> 
> There are several drivers that provide their own "ECAM-like" config
> accessors.  Look at "struct pci_ecam_ops" structures, e.g.,
> hisi_pcie_ops, pci_thunder_ecam_ops, xgene_v1_pcie_ecam_ops, etc.

If I understand correctly, then I do need to write my own driver,
since I need specific quirks to work around some issues?

I'm slightly confused because you originally said "The native drivers
in drivers/pci/host are a huge maintenance hassle for no real benefit."

But I do need to write one, correct?

> You can also work around Bug #1 in a custom accessor instead of a
> quirk.

By checking for the device ID and vendor ID, and returning the
expected class code, instead of the contents of the reg?

Do you consider this a better solution?

Regards.


* Re: Using the generic host PCIe driver
  2017-02-27 17:02     ` Mason
@ 2017-02-27 18:35       ` Bjorn Helgaas
From: Bjorn Helgaas @ 2017-02-27 18:35 UTC (permalink / raw)
  To: Mason
  Cc: Rob Herring, Phuong Nguyen, David Daney, linux-pci,
	Thibaud Cornic, Will Deacon, Thierry Reding, Linux ARM

On Mon, Feb 27, 2017 at 06:02:36PM +0100, Mason wrote:
> On 27/02/2017 17:44, Bjorn Helgaas wrote:
> 
> > On Mon, Feb 27, 2017 at 05:14:15PM +0100, Mason wrote:
> >
> >> Bug #2
> >>
> >> Bus 0 cannot be enumerated, because it reports garbage data for
> >> devices and functions other than 0, i.e. only 0/0/0 works.
> >>
> >> How do I work around that issue?
> > 
> > There are several drivers that provide their own "ECAM-like" config
> > accessors.  Look at "struct pci_ecam_ops" structures, e.g.,
> > hisi_pcie_ops, pci_thunder_ecam_ops, xgene_v1_pcie_ecam_ops, etc.
> 
> If I understand correctly, then I do need to write my own driver,
> since I need specific quirks to work around some issues?
> 
> I'm slightly confused because you originally said "The native drivers
> in drivers/pci/host are a huge maintenance hassle for no real benefit."
> 
> But I do need to write one, correct?

When I said the native drivers provide no real benefit, I meant that
they do not provide any value-add functionality beyond what a generic
driver like drivers/acpi/pci_root.c already does.

Obviously there are many different host bridges and they have
different programming models, so there has to be bridge-specific
support *somewhere*.  The question is whether that's in firmware, in
Linux, or both.  For ACPI systems, it's all in firmware.

For systems with well-behaved hardware, i.e., hardware that supports
PCIe and ECAM without warts, firmware can initialize the bridge and
tell the OS about it via DT, and the drivers/pci/host/pci-host-generic.c
driver can do everything else.

For systems that aren't so well-behaved, we'll need either a full
native driver that knows how to program bridge window CSRs, set up
interrupts, etc., or a simpler native driver that papers over warts
like ECAM that doesn't work quite according to spec.

It sounds like your system falls into the latter category.

> > You can also work around Bug #1 in a custom accessor instead of a
> > quirk.
> 
> By checking for the device ID and vendor ID, and returning the
> expected class code, instead of the contents of the reg?
> 
> Do you consider this a better solution?

It's not a big deal either way.  Doing it in the accessor has the
advantage that you have to have the accessor anyway.  Doing it in a
quirk means you need to figure out where to put the quirk.  It isn't
really generic (so drivers/pci/quirks.c seems overly generic), it
isn't really arch-specific (so arch/* seems maybe *too* specific),
it's really just bridge-specific.

Bjorn


* Re: Using the generic host PCIe driver
  2017-02-27 18:35       ` Bjorn Helgaas
@ 2017-03-01 15:18         ` Mason
From: Mason @ 2017-03-01 15:18 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: linux-pci, Linux ARM, Will Deacon, David Daney, Rob Herring,
	Thierry Reding, Phuong Nguyen, Thibaud Cornic

On 27/02/2017 19:35, Bjorn Helgaas wrote:

> When I said the native drivers provide no real benefit, I meant that
> they do not provide any value-add functionality beyond what a generic
> driver like drivers/acpi/pci_root.c already does.
> 
> Obviously there are many different host bridges and they have
> different programming models, so there has to be bridge-specific
> support *somewhere*.  The question is whether that's in firmware, in
> Linux, or both.  For ACPI systems, it's all in firmware.
> 
> For systems with well-behaved hardware, i.e., hardware that supports
> PCIe and ECAM without warts, firmware can initialize the bridge and
> tell the OS about it via DT, and the drivers/pci/host/pci-host-generic.c
> driver can do everything else.
> 
> For systems that aren't so well-behaved, we'll need either a full
> native driver that knows how to program bridge window CSRs, set up
> interrupts, etc., or a simpler native driver that papers over warts
> like ECAM that doesn't work quite according to spec.
> 
> It sounds like your system falls into the latter category.

Hello Bjorn,

Having worked around 3 HW bugs, I've got things starting to look
slightly more "normal". Here is my current boot log, with a few
questions added inline:

[    0.197669] PCI: CLS 0 bytes, default 64

Is it an error for Cache Line Size to be 0 here?

[    0.652356] OF: PCI: host bridge /soc/pcie@50000000 ranges:
[    0.652380] OF: PCI:   No bus range found for /soc/pcie@50000000, using [bus 00-ff]
[    0.652407] OF: PCI: Parsing ranges property...
[    0.652494] OF: PCI:   MEM 0xa0000000..0xa03fffff -> 0xa0000000
[    0.655744] pci-host-generic 50000000.pcie: ECAM at [mem 0x50000000-0x5fffffff] for [bus 00-ff]
[    0.656097] pci-host-generic 50000000.pcie: PCI host bridge to bus 0000:00
[    0.656145] pci_bus 0000:00: root bus resource [bus 00-ff]
[    0.656168] pci_bus 0000:00: root bus resource [mem 0xa0000000-0xa03fffff]
[    0.656191] pci_bus 0000:00: scanning bus
[    0.656257] pci 0000:00:00.0: [1105:8758] type 01 class 0x048000
[    0.656314] pci 0000:00:00.0: calling tango_pcie_fixup_class+0x0/0x10
[    0.656358] pci 0000:00:00.0: reg 0x10: [mem 0x00000000-0x00ffffff 64bit]
[    0.656400] pci 0000:00:00.0: calling pci_fixup_ide_bases+0x0/0x40
[    0.656451] pci 0000:00:00.0: supports D1 D2
[    0.656468] pci 0000:00:00.0: PME# supported from D0 D1 D2 D3hot
[    0.656486] pci 0000:00:00.0: PME# disabled
[    0.656657] pci_bus 0000:00: fixups for bus
[    0.656686] PCI: bus0: Fast back to back transfers disabled
[    0.656707] pci 0000:00:00.0: scanning [bus 00-00] behind bridge, pass 0
[    0.656725] pci 0000:00:00.0: bridge configuration invalid ([bus 00-00]), reconfiguring
[    0.656753] pci 0000:00:00.0: scanning [bus 00-00] behind bridge, pass 1
[    0.656845] pci_bus 0000:01: scanning bus
[    0.656911] pci 0000:01:00.0: [1912:0014] type 00 class 0x0c0330
[    0.656968] pci 0000:01:00.0: reg 0x10: [mem 0x00000000-0x00001fff 64bit]
[    0.657065] pci 0000:01:00.0: calling pci_fixup_ide_bases+0x0/0x40
[    0.657192] pci 0000:01:00.0: PME# supported from D0 D3hot D3cold
[    0.657213] pci 0000:01:00.0: PME# disabled
[    0.657495] pci_bus 0000:01: fixups for bus
[    0.657521] PCI: bus1: Fast back to back transfers disabled
[    0.657538] pci_bus 0000:01: bus scan returning with max=01
[    0.657556] pci_bus 0000:01: busn_res: [bus 01-ff] end is updated to 01
[    0.657575] pci_bus 0000:00: bus scan returning with max=01
[    0.657593] pci 0000:00:00.0: fixup irq: got 0
[    0.657608] pci 0000:00:00.0: assigning IRQ 00
[    0.657651] pci 0000:01:00.0: fixup irq: got 20
[    0.657667] pci 0000:01:00.0: assigning IRQ 20

This revision of the controller does not support legacy interrupt mode,
only MSI. I looked at the bindings for MSI:

https://www.kernel.org/doc/Documentation/devicetree/bindings/pci/pci-msi.txt
https://www.kernel.org/doc/Documentation/devicetree/bindings/interrupt-controller/msi.txt

But it is not clear to me whether I need to write a specific driver
for the MSI controller, or whether there is some kind of generic
support. If the latter, what are the required properties?
A "doorbell" address? Anything else?

[    0.657711] pci 0000:00:00.0: BAR 0: no space for [mem size 0x01000000 64bit]
[    0.657731] pci 0000:00:00.0: BAR 0: failed to assign [mem size 0x01000000 64bit]
[    0.657755] pci 0000:00:00.0: BAR 8: assigned [mem 0xa0000000-0xa00fffff]
[    0.657776] pci 0000:01:00.0: BAR 0: assigned [mem 0xa0000000-0xa0001fff 64bit]

These 4 statements sound fishy.

[    0.657813] pci 0000:00:00.0: PCI bridge to [bus 01]
[    0.657831] pci 0000:00:00.0:   bridge window [mem 0xa0000000-0xa00fffff]
[    0.657904] pcieport 0000:00:00.0: enabling device (0140 -> 0142)
[    0.657931] pcieport 0000:00:00.0: enabling bus mastering
[    0.658058] pci 0000:01:00.0: calling quirk_usb_early_handoff+0x0/0x790
[    0.658088] pci 0000:01:00.0: enabling device (0140 -> 0142)
[    0.663235] pci 0000:01:00.0: xHCI HW not ready after 5 sec (HC bug?) status = 0x1e7fffd0
[    0.679283] pci 0000:01:00.0: xHCI HW did not halt within 16000 usec status = 0x1e7fffd0

The PCIe card is a USB3 adapter. I suppose it's not working
because MSI is not properly configured.

# /usr/sbin/lspci -v
00:00.0 PCI bridge: Sigma Designs, Inc. Device 8758 (rev 01) (prog-if 00 [Normal decode])
        Flags: bus master, fast devsel, latency 0
        Memory at <unassigned> (64-bit, non-prefetchable)
        Bus: primary=00, secondary=01, subordinate=01, sec-latency=0
        I/O behind bridge: 00000000-00000fff
        Memory behind bridge: a0000000-a00fffff
        Prefetchable memory behind bridge: 00000000-000fffff
        Capabilities: [50] MSI: Enable- Count=1/4 Maskable- 64bit+
        Capabilities: [78] Power Management version 3
        Capabilities: [80] Express Root Port (Slot-), MSI 03
        Capabilities: [100] Virtual Channel
        Capabilities: [800] Advanced Error Reporting
        Kernel driver in use: pcieport

01:00.0 USB controller: Renesas Technology Corp. uPD720201 USB 3.0 Host Controller (rev 03) (prog-if 30 [XHCI])
        Flags: fast devsel, IRQ 20
        Memory at a0000000 (64-bit, non-prefetchable) [size=8K]
        Capabilities: [50] Power Management version 3
        Capabilities: [70] MSI: Enable- Count=1/8 Maskable- 64bit+
        Capabilities: [90] MSI-X: Enable- Count=8 Masked-
        Capabilities: [a0] Express Endpoint, MSI 00
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [150] Latency Tolerance Reporting


What does "Capabilities: [50] MSI: Enable- Count=1/4 Maskable- 64bit+" mean?

http://man7.org/linux/man-pages/man8/lspci.8.html



For reference only, below is the extra verbose output.

# /usr/sbin/lspci -vvv
00:00.0 PCI bridge: Sigma Designs, Inc. Device 8758 (rev 01) (prog-if 00 [Normal decode])
        Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr+ Stepping- SERR+ FastB2B- DisINTx-
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Latency: 0, Cache Line Size: 64 bytes
        Region 0: Memory at <unassigned> (64-bit, non-prefetchable)
        Bus: primary=00, secondary=01, subordinate=01, sec-latency=0
        I/O behind bridge: 00000000-00000fff
        Memory behind bridge: a0000000-a00fffff
        Prefetchable memory behind bridge: 00000000-000fffff
        Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR-
        BridgeCtl: Parity+ SERR- NoISA- VGA- MAbort- >Reset- FastB2B-
                PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
        Capabilities: [50] MSI: Enable- Count=1/4 Maskable- 64bit+
                Address: 0000000000000000  Data: 0000
        Capabilities: [78] Power Management version 3
                Flags: PMEClk- DSI- D1+ D2+ AuxCurrent=0mA PME(D0+,D1+,D2+,D3hot+,D3cold-)
                Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=3 PME-
        Capabilities: [80] Express (v2) Root Port (Slot-), MSI 03
                DevCap: MaxPayload 256 bytes, PhantFunc 0
                        ExtTag- RBE+
                DevCtl: Report errors: Correctable- Non-Fatal- Fatal- Unsupported-
                        RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop+
                        MaxPayload 128 bytes, MaxReadReq 512 bytes
                DevSta: CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr- TransPend+
                LnkCap: Port #1, Speed 5GT/s, Width x1, ASPM L0s L1, Exit Latency L0s <2us, L1 <4us
                        ClockPM- Surprise- LLActRep- BwNot+
                LnkCtl: ASPM Disabled; RCB 128 bytes Disabled- CommClk-
                        ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
                LnkSta: Speed 5GT/s, Width x1, TrErr- Train- SlotClk- DLActive- BWMgmt- ABWMgmt-
                RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna- CRSVisible-
                RootCap: CRSVisible-
                RootSta: PME ReqID 0000, PMEStatus- PMEPending-
                DevCap2: Completion Timeout: Range B, TimeoutDis-, LTR-, OBFF Not Supported ARIFwd-
                DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-, LTR-, OBFF Disabled ARIFwd-
                LnkCtl2: Target Link Speed: 5GT/s, EnterCompliance- SpeedDis-
                         Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
                         Compliance De-emphasis: -6dB
                LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete-, EqualizationPhase1-
                         EqualizationPhase2-, EqualizationPhase3-, LinkEqualizationRequest-
        Capabilities: [100 v1] Virtual Channel
                Caps:   LPEVC=0 RefClk=100ns PATEntryBits=1
                Arb:    Fixed- WRR32- WRR64- WRR128-
                Ctrl:   ArbSelect=Fixed
                Status: InProgress-
                VC0:    Caps:   PATOffset=00 MaxTimeSlots=1 RejSnoopTrans-
                        Arb:    Fixed- WRR32- WRR64- WRR128- TWRR128- WRR256-
                        Ctrl:   Enable+ ID=0 ArbSelect=Fixed TC/VC=ff
                        Status: NegoPending- InProgress-
        Capabilities: [800 v1] Advanced Error Reporting
                UESta:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
                UEMsk:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
                UESvrt: DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
                CESta:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr-
                CEMsk:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+
                AERCap: First Error Pointer: 00, GenCap- CGenEn- ChkCap- ChkEn-
        Kernel driver in use: pcieport


01:00.0 USB controller: Renesas Technology Corp. uPD720201 USB 3.0 Host Controller (rev 03) (prog-if 30 [XHCI])
        Control: I/O- Mem+ BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr+ Stepping- SERR+ FastB2B- DisINTx-
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Interrupt: pin A routed to IRQ 20
        Region 0: Memory at a0000000 (64-bit, non-prefetchable) [size=8K]
        Capabilities: [50] Power Management version 3
                Flags: PMEClk- DSI- D1- D2- AuxCurrent=375mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
                Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
        Capabilities: [70] MSI: Enable- Count=1/8 Maskable- 64bit+
                Address: 0000000000000000  Data: 0000
        Capabilities: [90] MSI-X: Enable- Count=8 Masked-
                Vector table: BAR=0 offset=00001000
                PBA: BAR=0 offset=00001080
        Capabilities: [a0] Express (v2) Endpoint, MSI 00
                DevCap: MaxPayload 128 bytes, PhantFunc 0, Latency L0s unlimited, L1 unlimited
                        ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset-
                DevCtl: Report errors: Correctable- Non-Fatal- Fatal- Unsupported-
                        RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop+
                        MaxPayload 128 bytes, MaxReadReq 512 bytes
                DevSta: CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr+ TransPend-
                LnkCap: Port #0, Speed 5GT/s, Width x1, ASPM L0s L1, Exit Latency L0s <4us, L1 unlimited
                        ClockPM+ Surprise- LLActRep- BwNot-
                LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- CommClk-
                        ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
                LnkSta: Speed 5GT/s, Width x1, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
                DevCap2: Completion Timeout: Not Supported, TimeoutDis+, LTR+, OBFF Not Supported
                DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-, LTR-, OBFF Disabled
                LnkCtl2: Target Link Speed: 5GT/s, EnterCompliance- SpeedDis-
                         Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
                         Compliance De-emphasis: -6dB
                LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete-, EqualizationPhase1-
                         EqualizationPhase2-, EqualizationPhase3-, LinkEqualizationRequest-
        Capabilities: [100 v1] Advanced Error Reporting
                UESta:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
                UEMsk:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
                UESvrt: DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
                CESta:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr-
                CEMsk:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+
                AERCap: First Error Pointer: 00, GenCap- CGenEn- ChkCap- ChkEn-
        Capabilities: [150 v1] Latency Tolerance Reporting
                Max snoop latency: 0ns
                Max no snoop latency: 0ns



Regards.


* Re: Using the generic host PCIe driver
  2017-03-01 15:18         ` Mason
@ 2017-03-01 16:18           ` Bjorn Helgaas
From: Bjorn Helgaas @ 2017-03-01 16:18 UTC (permalink / raw)
  To: Mason
  Cc: Rob Herring, Phuong Nguyen, David Daney, Marc Zyngier, linux-pci,
	Thibaud Cornic, Will Deacon, Thierry Reding, Linux ARM

[+cc Marc for MSI]

On Wed, Mar 01, 2017 at 04:18:51PM +0100, Mason wrote:
> On 27/02/2017 19:35, Bjorn Helgaas wrote:
> 
> > When I said the native drivers provide no real benefit, I meant that
> > they do not provide any value-add functionality beyond what a generic
> > driver like drivers/acpi/pci_root.c already does.
> > 
> > Obviously there are many different host bridges and they have
> > different programming models, so there has to be bridge-specific
> > support *somewhere*.  The question is whether that's in firmware, in
> > Linux, or both.  For ACPI systems, it's all in firmware.
> > 
> > For systems with well-behaved hardware, i.e., hardware that supports
> > PCIe and ECAM without warts, firmware can initialize the bridge and
> > tell the OS about it via DT, and the drivers/pci/host/pci-host-generic.c
> > driver can do everything else.
> > 
> > For systems that aren't so well-behaved, we'll need either a full
> > native driver that knows how to program bridge window CSRs, set up
> > interrupts, etc., or a simpler native driver that papers over warts
> > like ECAM that doesn't work quite according to spec.
> > 
> > It sounds like your system falls into the latter category.
> 
> Hello Bjorn,
> 
> Having worked around 3 HW bugs, I've got things starting to look
> slightly more "normal". Here is my current boot log, with a few
> questions added inline:

Sounds like you're making good progress!

> [    0.197669] PCI: CLS 0 bytes, default 64
> 
> Is it an error for Cache Line Size to be 0 here?

Not a problem.  I think your host bridge is to PCIe, and Cache Line
Size is not relevant for PCIe.  We should clean this up in the PCI
core someday.

> [    0.652356] OF: PCI: host bridge /soc/pcie@50000000 ranges:
> [    0.652380] OF: PCI:   No bus range found for /soc/pcie@50000000, using [bus 00-ff]
> [    0.652407] OF: PCI: Parsing ranges property...
> [    0.652494] OF: PCI:   MEM 0xa0000000..0xa03fffff -> 0xa0000000
> [    0.655744] pci-host-generic 50000000.pcie: ECAM at [mem 0x50000000-0x5fffffff] for [bus 00-ff]
> [    0.656097] pci-host-generic 50000000.pcie: PCI host bridge to bus 0000:00
> [    0.656145] pci_bus 0000:00: root bus resource [bus 00-ff]
> [    0.656168] pci_bus 0000:00: root bus resource [mem 0xa0000000-0xa03fffff]
> [    0.656191] pci_bus 0000:00: scanning bus
> [    0.656257] pci 0000:00:00.0: [1105:8758] type 01 class 0x048000
> [    0.656314] pci 0000:00:00.0: calling tango_pcie_fixup_class+0x0/0x10
> [    0.656358] pci 0000:00:00.0: reg 0x10: [mem 0x00000000-0x00ffffff 64bit]
> [    0.656400] pci 0000:00:00.0: calling pci_fixup_ide_bases+0x0/0x40
> [    0.656451] pci 0000:00:00.0: supports D1 D2
> [    0.656468] pci 0000:00:00.0: PME# supported from D0 D1 D2 D3hot
> [    0.656486] pci 0000:00:00.0: PME# disabled
> [    0.656657] pci_bus 0000:00: fixups for bus
> [    0.656686] PCI: bus0: Fast back to back transfers disabled

FWIW, back-to-back transfers is also irrelevant on PCIe.  Another
useless historical artifact.

> [    0.656707] pci 0000:00:00.0: scanning [bus 00-00] behind bridge, pass 0
> [    0.656725] pci 0000:00:00.0: bridge configuration invalid ([bus 00-00]), reconfiguring
> [    0.656753] pci 0000:00:00.0: scanning [bus 00-00] behind bridge, pass 1
> [    0.656845] pci_bus 0000:01: scanning bus
> [    0.656911] pci 0000:01:00.0: [1912:0014] type 00 class 0x0c0330
> [    0.656968] pci 0000:01:00.0: reg 0x10: [mem 0x00000000-0x00001fff 64bit]
> [    0.657065] pci 0000:01:00.0: calling pci_fixup_ide_bases+0x0/0x40
> [    0.657192] pci 0000:01:00.0: PME# supported from D0 D3hot D3cold
> [    0.657213] pci 0000:01:00.0: PME# disabled
> [    0.657495] pci_bus 0000:01: fixups for bus
> [    0.657521] PCI: bus1: Fast back to back transfers disabled
> [    0.657538] pci_bus 0000:01: bus scan returning with max=01
> [    0.657556] pci_bus 0000:01: busn_res: [bus 01-ff] end is updated to 01
> [    0.657575] pci_bus 0000:00: bus scan returning with max=01
> [    0.657593] pci 0000:00:00.0: fixup irq: got 0
> [    0.657608] pci 0000:00:00.0: assigning IRQ 00
> [    0.657651] pci 0000:01:00.0: fixup irq: got 20
> [    0.657667] pci 0000:01:00.0: assigning IRQ 20
> 
> This revision of the controller does not support legacy interrupt mode,
> only MSI. I looked at the bindings for MSI:
> 
> https://www.kernel.org/doc/Documentation/devicetree/bindings/pci/pci-msi.txt
> https://www.kernel.org/doc/Documentation/devicetree/bindings/interrupt-controller/msi.txt
> 
> But it is not clear to me whether I need to write a specific driver
> for the MSI controller, or whether there is some kind of generic
> support. If the latter, what are the required properties?
> A "doorbell" address? Anything else?

I added Marc in case he has advice here.  My only advice would be to look
at other drivers and see how they did it.  I'm pretty sure MSI isn't
going to work unless your platform has some way to set the MSI
addresses, whether this is some arch-specific thing or something in
the host bridge.
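
FWIW, the controller-specific piece usually boils down to an irq_chip
whose irq_compose_msi_msg callback tells the core which address/data
pair to program into each endpoint's MSI capability, with the generic
code (pci_msi_create_irq_domain and friends) doing the rest.  A purely
hypothetical sketch, with a made-up doorbell register:

#include <linux/irq.h>
#include <linux/msi.h>

#define MSI_DOORBELL	0x90000000	/* assumed address, not real */

static void tango_compose_msi_msg(struct irq_data *d, struct msi_msg *msg)
{
	msg->address_lo = MSI_DOORBELL;
	msg->address_hi = 0;
	msg->data = d->hwirq;	/* controller demultiplexes on this */
}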

> [    0.657711] pci 0000:00:00.0: BAR 0: no space for [mem size 0x01000000 64bit]
> [    0.657731] pci 0000:00:00.0: BAR 0: failed to assign [mem size 0x01000000 64bit]
> [    0.657755] pci 0000:00:00.0: BAR 8: assigned [mem 0xa0000000-0xa00fffff]
> [    0.657776] pci 0000:01:00.0: BAR 0: assigned [mem 0xa0000000-0xa0001fff 64bit]
> 
> These 4 statements sound fishy.

00:00.0 is a PCI-to-PCI bridge.  "BAR 8" is its memory window (as shown
below).  01:00.0 is below the bridge and is using part of the window.  That
part is normal.

00:00.0 also has a BAR of its own.  That's perfectly legal but slightly
unusual.  The device will still work fine as a generic PCI-to-PCI bridge
even though we didn't assign the BAR.

The BAR would contain device-specific stuff: maybe performance monitoring
or management interfaces.  Those things won't work because we didn't assign
space.  But even if we did assign space, they would require a special
driver to make them work, since they're device-specific and the PCI core
knows nothing about them.

Bottom line is that you can ignore the 00:00.0 BAR 0 assignment
failure.  It has nothing to do with getting other devices below the
bridge to work.

> [    0.657813] pci 0000:00:00.0: PCI bridge to [bus 01]
> [    0.657831] pci 0000:00:00.0:   bridge window [mem 0xa0000000-0xa00fffff]
> [    0.657904] pcieport 0000:00:00.0: enabling device (0140 -> 0142)
> [    0.657931] pcieport 0000:00:00.0: enabling bus mastering
> [    0.658058] pci 0000:01:00.0: calling quirk_usb_early_handoff+0x0/0x790
> [    0.658088] pci 0000:01:00.0: enabling device (0140 -> 0142)
> [    0.663235] pci 0000:01:00.0: xHCI HW not ready after 5 sec (HC bug?) status = 0x1e7fffd0
> [    0.679283] pci 0000:01:00.0: xHCI HW did not halt within 16000 usec status = 0x1e7fffd0
> 
> The PCIe card is a USB3 adapter. I suppose it's not working
> because MSI is not properly configured.

Probably *some* sort of IRQ problem, whether it's INTx or MSI, I don't
know.

> # /usr/sbin/lspci -v
> 00:00.0 PCI bridge: Sigma Designs, Inc. Device 8758 (rev 01) (prog-if 00 [Normal decode])
>         Flags: bus master, fast devsel, latency 0
>         Memory at <unassigned> (64-bit, non-prefetchable)
>         Bus: primary=00, secondary=01, subordinate=01, sec-latency=0
>         I/O behind bridge: 00000000-00000fff
>         Memory behind bridge: a0000000-a00fffff
>         Prefetchable memory behind bridge: 00000000-000fffff
>         Capabilities: [50] MSI: Enable- Count=1/4 Maskable- 64bit+
>         Capabilities: [78] Power Management version 3
>         Capabilities: [80] Express Root Port (Slot-), MSI 03
>         Capabilities: [100] Virtual Channel
>         Capabilities: [800] Advanced Error Reporting
>         Kernel driver in use: pcieport
> 
> 01:00.0 USB controller: Renesas Technology Corp. uPD720201 USB 3.0 Host Controller (rev 03) (prog-if 30 [XHCI])
>         Flags: fast devsel, IRQ 20
>         Memory at a0000000 (64-bit, non-prefetchable) [size=8K]
>         Capabilities: [50] Power Management version 3
>         Capabilities: [70] MSI: Enable- Count=1/8 Maskable- 64bit+
>         Capabilities: [90] MSI-X: Enable- Count=8 Masked-
>         Capabilities: [a0] Express Endpoint, MSI 00
>         Capabilities: [100] Advanced Error Reporting
>         Capabilities: [150] Latency Tolerance Reporting
> 
> 
> What does "Capabilities: [50] MSI: Enable- Count=1/4 Maskable- 64bit+" mean?

If you have a copy of the PCI spec, you can match these up with bits
in the MSI Capability (PCI r3.0, sec 6.8.1.3).  Otherwise, take a look
at include/uapi/linux/pci_regs.h, where PCI_MSI_FLAGS_ENABLE, etc.,
are for the same bits.
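
For instance, here is a minimal sketch of reading those same bits in
code, using only the standard config accessors and the pci_regs.h
names:

	u16 flags;
	int pos = pci_find_capability(dev, PCI_CAP_ID_MSI);	/* 0x50 here */

	if (pos) {
		pci_read_config_word(dev, pos + PCI_MSI_FLAGS, &flags);
		/* "Enable-"   : PCI_MSI_FLAGS_ENABLE is clear               */
		/* "Count=1/4" : 2^QSIZE vectors enabled of 2^QMASK capable  */
		/* "Maskable-" : PCI_MSI_FLAGS_MASKBIT is clear              */
		/* "64bit+"    : PCI_MSI_FLAGS_64BIT is set                  */
	}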

The "[50]" part is the offset in config space of the capability
structure.  Since this is for the bridge (a Root Port in this case),
it's for PCIe interrupts like AER, power management, hotplug, etc.
This is unrelated to interrupts from devices below the bridge.

Bjorn

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Using the generic host PCIe driver
  2017-03-01 16:18           ` Bjorn Helgaas
@ 2017-03-01 16:36             ` Marc Zyngier
  -1 siblings, 0 replies; 54+ messages in thread
From: Marc Zyngier @ 2017-03-01 16:36 UTC (permalink / raw)
  To: Bjorn Helgaas, Mason
  Cc: linux-pci, Linux ARM, Will Deacon, David Daney, Rob Herring,
	Thierry Reding, Phuong Nguyen, Thibaud Cornic

On 01/03/17 16:18, Bjorn Helgaas wrote:
> [+cc Marc for MSI]
> 
> On Wed, Mar 01, 2017 at 04:18:51PM +0100, Mason wrote:
>> On 27/02/2017 19:35, Bjorn Helgaas wrote:
>>
>>> When I said the native drivers provide no real benefit, I meant that
>>> they do not provide any value-add functionality beyond what a generic
>>> driver like drivers/acpi/pci_root.c already does.
>>>
>>> Obviously there are many different host bridges and they have
>>> different programming models, so there has to be bridge-specific
>>> support *somewhere*.  The question is whether that's in firmware, in
>>> Linux, or both.  For ACPI systems, it's all in firmware.
>>>
>>> Systems with well-behaved hardware, i.e., it supports PCIe and ECAM
>>> without warts, firmware can initialize the bridge and tell the OS
>>> about it via DT, and the drivers/pci/pci-host-generic.c driver can do
>>> everything else.
>>>
>>> For systems that aren't so well-behaved, we'll need either a full
>>> native driver that knows how to program bridge window CSRs, set up
>>> interrupts, etc., or a simpler native driver that papers over warts
>>> like ECAM that doesn't work quite according to spec.
>>>
>>> It sounds like your system falls into the latter category.
>>
>> Hello Bjorn,
>>
>> Having worked around 3 HW bugs, things are starting to look
>> slightly more "normal". Here is my current boot log:
>> (I've added a few questions inline.)
> 
> Sounds like you're making good progress!
> 
>> [    0.197669] PCI: CLS 0 bytes, default 64
>>
>> Is it an error for Cache Line Size to be 0 here?
> 
> Not a problem.  I think your host bridge is to PCIe, and Cache Line
> Size is not relevant for PCIe.  We should clean this up in the PCI
> core someday.
> 
>> [    0.652356] OF: PCI: host bridge /soc/pcie@50000000 ranges:
>> [    0.652380] OF: PCI:   No bus range found for /soc/pcie@50000000, using [bus 00-ff]
>> [    0.652407] OF: PCI: Parsing ranges property...
>> [    0.652494] OF: PCI:   MEM 0xa0000000..0xa03fffff -> 0xa0000000
>> [    0.655744] pci-host-generic 50000000.pcie: ECAM at [mem 0x50000000-0x5fffffff] for [bus 00-ff]
>> [    0.656097] pci-host-generic 50000000.pcie: PCI host bridge to bus 0000:00
>> [    0.656145] pci_bus 0000:00: root bus resource [bus 00-ff]
>> [    0.656168] pci_bus 0000:00: root bus resource [mem 0xa0000000-0xa03fffff]
>> [    0.656191] pci_bus 0000:00: scanning bus
>> [    0.656257] pci 0000:00:00.0: [1105:8758] type 01 class 0x048000
>> [    0.656314] pci 0000:00:00.0: calling tango_pcie_fixup_class+0x0/0x10
>> [    0.656358] pci 0000:00:00.0: reg 0x10: [mem 0x00000000-0x00ffffff 64bit]
>> [    0.656400] pci 0000:00:00.0: calling pci_fixup_ide_bases+0x0/0x40
>> [    0.656451] pci 0000:00:00.0: supports D1 D2
>> [    0.656468] pci 0000:00:00.0: PME# supported from D0 D1 D2 D3hot
>> [    0.656486] pci 0000:00:00.0: PME# disabled
>> [    0.656657] pci_bus 0000:00: fixups for bus
>> [    0.656686] PCI: bus0: Fast back to back transfers disabled
> 
> FWIW, back-to-back transfers is also irrelevant on PCIe.  Another
> useless historical artifact.
> 
>> [    0.656707] pci 0000:00:00.0: scanning [bus 00-00] behind bridge, pass 0
>> [    0.656725] pci 0000:00:00.0: bridge configuration invalid ([bus 00-00]), reconfiguring
>> [    0.656753] pci 0000:00:00.0: scanning [bus 00-00] behind bridge, pass 1
>> [    0.656845] pci_bus 0000:01: scanning bus
>> [    0.656911] pci 0000:01:00.0: [1912:0014] type 00 class 0x0c0330
>> [    0.656968] pci 0000:01:00.0: reg 0x10: [mem 0x00000000-0x00001fff 64bit]
>> [    0.657065] pci 0000:01:00.0: calling pci_fixup_ide_bases+0x0/0x40
>> [    0.657192] pci 0000:01:00.0: PME# supported from D0 D3hot D3cold
>> [    0.657213] pci 0000:01:00.0: PME# disabled
>> [    0.657495] pci_bus 0000:01: fixups for bus
>> [    0.657521] PCI: bus1: Fast back to back transfers disabled
>> [    0.657538] pci_bus 0000:01: bus scan returning with max=01
>> [    0.657556] pci_bus 0000:01: busn_res: [bus 01-ff] end is updated to 01
>> [    0.657575] pci_bus 0000:00: bus scan returning with max=01
>> [    0.657593] pci 0000:00:00.0: fixup irq: got 0
>> [    0.657608] pci 0000:00:00.0: assigning IRQ 00
>> [    0.657651] pci 0000:01:00.0: fixup irq: got 20
>> [    0.657667] pci 0000:01:00.0: assigning IRQ 20
>>
>> This revision of the controller does not support legacy interrupt mode,
>> only MSI. I looked at the bindings for MSI:
>>
>> https://www.kernel.org/doc/Documentation/devicetree/bindings/pci/pci-msi.txt
>> https://www.kernel.org/doc/Documentation/devicetree/bindings/interrupt-controller/msi.txt
>>
>> But it is not clear to me if I need to write a specific driver
>> for the MSI controller, or if there is some kind of generic
>> support? If the latter, what are the required properties?
>> A "door-bell" address? Anything else?
> 
> I added Marc in case he has advice here.  My only advice would be to look
> at other drivers and see how they did it.  I'm pretty sure MSI isn't
> going to work unless your platform has some way to set the MSI
> addresses, whether this is some arch-specific thing or something in
> the host bridge.

Thanks Bjorn.

Mason: while the kernel has generic support for dealing with MSI, there
is no standardization at the interrupt controller level, so you do have
to write your own driver and wire it into the rest of the framework.

I suggest you look at things like drivers/pci/host/pcie-altera-msi.c,
which has an extremely simple implementation. You can use this as a
starting point for your own driver.
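
The part that actually makes MSIs land somewhere is composing the
message: a doorbell address plus a per-vector payload.  A minimal
sketch in the style of altera_compose_msi_msg(), where struct
tango_msi and its doorbell field are hypothetical placeholders:

static void tango_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
{
	struct tango_msi *msi = irq_data_get_irq_chip_data(data);

	msg->address_lo = lower_32_bits(msi->doorbell);
	msg->address_hi = upper_32_bits(msi->doorbell);
	msg->data = data->hwirq;	/* one payload per allocated vector */
}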

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Using the generic host PCIe driver
  2017-03-01 16:18           ` Bjorn Helgaas
@ 2017-03-01 18:05             ` Mason
  -1 siblings, 0 replies; 54+ messages in thread
From: Mason @ 2017-03-01 18:05 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: linux-pci, Linux ARM, Will Deacon, David Daney, Rob Herring,
	Thierry Reding, Phuong Nguyen, Thibaud Cornic, Marc Zyngier

On 01/03/2017 17:18, Bjorn Helgaas wrote:

> On Wed, Mar 01, 2017 at 04:18:51PM +0100, Mason wrote:
>
>> [    0.657711] pci 0000:00:00.0: BAR 0: no space for [mem size 0x01000000 64bit]
>> [    0.657731] pci 0000:00:00.0: BAR 0: failed to assign [mem size 0x01000000 64bit]
>> [    0.657755] pci 0000:00:00.0: BAR 8: assigned [mem 0xa0000000-0xa00fffff]
>> [    0.657776] pci 0000:01:00.0: BAR 0: assigned [mem 0xa0000000-0xa0001fff 64bit]
>>
>> These 4 statements sound fishy.
> 
> 00:00.0 is a PCI-to-PCI bridge.  "BAR 8" is its memory window (as shown
> below).  01:00.0 is below the bridge and is using part of the window.  That
> part is normal.
> 
> 00:00.0 also has a BAR of its own.  That's perfectly legal but slightly
> unusual.  The device will still work fine as a generic PCI-to-PCI bridge
> even though we didn't assign the BAR.
> 
> The BAR would contain device-specific stuff: maybe performance monitoring
> or management interfaces.  Those things won't work because we didn't assign
> space.  But even if we did assign space, they would require a special
> driver to make them work, since they're device-specific and the PCI core
> knows nothing about them.
> 
> Bottom line is that you can ignore the 00:00.0 BAR 0 assignment
> failure.  It has nothing to do with getting other devices below the
> bridge to work.

Another thing I don't understand... According to this reference:
http://elinux.org/Device_Tree_Usage#PCI_Address_Translation

The "ranges" prop starts with:

    phys.hi  cell: npt000ss bbbbbbbb dddddfff rrrrrrrr
    phys.mid cell: hhhhhhhh hhhhhhhh hhhhhhhh hhhhhhhh
    phys.low cell: llllllll llllllll llllllll llllllll

So I thought it might be possible to specify bbbbbbbb = 0x01
to mean "I want to assign memory only to bus 1, don't assign
any memory to bus 0". Am I mistaken?

The kernel panics when I use

	ranges = <0x02010000 0x0 0x90000000  0x90000000  0x0 0x00100000>;
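
(Decoding that phys.hi with the layout above: 0x02010000 has ss = 0b10,
i.e. 32-bit memory space, and bbbbbbbb = 0x01, i.e. bus 1; the device,
function and register bits are all zero.)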

[    1.118503] pci 0000:00:00.0: BAR 0: no space for [mem size 0x01000000 64bit]
[    1.125774] pci 0000:00:00.0: BAR 0: failed to assign [mem size 0x01000000 64bit]
[    1.133393] pci 0000:00:00.0: BAR 8: assigned [mem 0x90000000-0x900fffff]
[    1.140315] pci 0000:01:00.0: BAR 0: assigned [mem 0x90000000-0x90001fff 64bit]
[    1.147771] pci 0000:00:00.0: PCI bridge to [bus 01]
[    1.152857] pci 0000:00:00.0:   bridge window [mem 0x90000000-0x900fffff]
[    1.159830] pcieport 0000:00:00.0: enabling device (0140 -> 0142)
[    1.166062] pcieport 0000:00:00.0: enabling bus mastering
[    1.171730] pci 0000:01:00.0: calling quirk_usb_early_handoff+0x0/0x790
[    1.178486] pci 0000:01:00.0: enabling device (0140 -> 0142)
[    1.184298] Unable to handle kernel paging request at virtual address d08671c4
[    1.191652] pgd = c0004000
[    1.194465] [d08671c4] *pgd=8f804811, *pte=00000000, *ppte=00000000
[    1.200881] Internal error: Oops: 7 [#1] PREEMPT SMP ARM
[    1.206302] Modules linked in:
[    1.209458] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.9.7-1-rc2 #125
[    1.216101] Hardware name: Sigma Tango DT
[    1.220213] task: cf82c9c0 task.stack: cf838000
[    1.224851] PC is at quirk_usb_early_handoff+0x3e8/0x790
[    1.230277] LR is at ioremap_page_range+0xf8/0x1a8
[    1.235175] pc : [<c039fe8c>]    lr : [<c02d0a10>]    psr: 000e0013
[    1.235175] sp : cf839d78  ip : 00000000  fp : cf839e38
[    1.246886] r10: c10248a0  r9 : 00000000  r8 : d08671c4
[    1.252220] r7 : d084e000  r6 : 00002000  r5 : 000c0300  r4 : cfb4e800
[    1.258864] r3 : 000191c4  r2 : 00000000  r1 : 90001e13  r0 : d084e000
[    1.265509] Flags: nzcv  IRQs on  FIQs on  Mode SVC_32  ISA ARM  Segment none
[    1.272764] Control: 10c5387d  Table: 8fa2c04a  DAC: 00000051
[    1.278622] Process swapper/0 (pid: 1, stack limit = 0xcf838210)
[    1.284742] Stack: (0xcf839d78 to 0xcf83a000)
[    1.289204] 9d60:                                                       c058f578 c058b180
[    1.297510] 9d80: cfb46300 cf839d98 c0350218 c05adccc cfb4e800 c05adcdc cf838000 00000000
[    1.305816] 9da0: 00000000 c10248a0 cf839e38 c030bfa4 cf9cd480 c034e69c cf867270 00000000
[    1.314122] 9dc0: cfb4e800 cfa3e414 cfa3e400 cf839e30 cf9cd480 00000000 cf906010 c02fa484
[    1.322428] 9de0: cfb4e800 cfa3e414 cfa3e400 c02fa538 cfb4ec00 cfa3e814 cfa3e800 c02fa56c
[    1.330734] 9e00: cfa3e80c cfa3e80c cfa3e800 c031387c cf839e30 cf9ee9b0 c05178c8 c10101d8
[    1.339040] 9e20: cfa15cc0 00000000 cf906000 c058cd2c cf839e30 cf839e30 50000000 5fffffff
[    1.347345] 9e40: cfdf7764 00000200 00000000 00000000 00000000 00000000 c1057de8 cf906010
[    1.355651] 9e60: c1010208 cf906044 c1010208 00000000 00000007 00000000 cfffcec0 c0351624
[    1.363957] 9e80: c1056fb0 cf906010 cf906044 c03500c0 cf906010 c1010208 cf906044 c10177d0
[    1.372262] 9ea0: 00000073 c0350214 00000000 c1010208 c0350150 c034e5e8 cf80545c cf8a60b4
[    1.380568] 9ec0: c1010208 cf9b8d80 00000000 c034f72c c058cd84 c0616a94 c0633cb0 c1010208
[    1.388874] 9ee0: c0616a94 c0633cb0 c0628834 c0350770 ffffe000 c0616a94 c0633cb0 c0101834
[    1.397179] 9f00: c104a354 c100a5c8 00000000 c0220830 00000000 cf87cf00 00000000 c1009370
[    1.405485] 9f20: cfffceee c050fa08 00000073 c0132aec c059a1c4 c05da4a4 00000000 00000006
[    1.413790] 9f40: 00000006 c05723fc c1009358 c1024880 c1024880 c1024880 c0633cb0 c0628834
[    1.422096] 9f60: 00000073 00000007 c062883c c0600db4 00000006 00000006 00000000 c06005ac
[    1.430401] 9f80: a4b68a61 00000000 c049fafc 00000000 00000000 00000000 00000000 00000000
[    1.438706] 9fa0: 00000000 c049fb04 00000000 c01077b8 00000000 00000000 00000000 00000000
[    1.447011] 9fc0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[    1.455316] 9fe0: 00000000 00000000 00000000 00000000 00000013 00000000 c47be6fa ff2306fe
[    1.463640] [<c039fe8c>] (quirk_usb_early_handoff) from [<c030bfa4>] (pci_do_fixups+0xc8/0x158)
[    1.472480] [<c030bfa4>] (pci_do_fixups) from [<c02fa484>] (pci_bus_add_device+0x18/0x90)
[    1.480790] [<c02fa484>] (pci_bus_add_device) from [<c02fa538>] (pci_bus_add_devices+0x3c/0x80)
[    1.489622] [<c02fa538>] (pci_bus_add_devices) from [<c02fa56c>] (pci_bus_add_devices+0x70/0x80)
[    1.498544] [<c02fa56c>] (pci_bus_add_devices) from [<c031387c>] (pci_host_common_probe+0xfc/0x324)
[    1.507732] [<c031387c>] (pci_host_common_probe) from [<c0351624>] (platform_drv_probe+0x34/0x7c)
[    1.516742] [<c0351624>] (platform_drv_probe) from [<c03500c0>] (really_probe+0x1c4/0x254)
[    1.525137] [<c03500c0>] (really_probe) from [<c0350214>] (__driver_attach+0xc4/0xc8)
[    1.533095] [<c0350214>] (__driver_attach) from [<c034e5e8>] (bus_for_each_dev+0x68/0x9c)
[    1.541402] [<c034e5e8>] (bus_for_each_dev) from [<c034f72c>] (bus_add_driver+0x1a0/0x218)
[    1.549797] [<c034f72c>] (bus_add_driver) from [<c0350770>] (driver_register+0x78/0xf8)
[    1.557931] [<c0350770>] (driver_register) from [<c0101834>] (do_one_initcall+0x44/0x174)
[    1.566247] [<c0101834>] (do_one_initcall) from [<c0600db4>] (kernel_init_freeable+0x154/0x1e4)
[    1.575082] [<c0600db4>] (kernel_init_freeable) from [<c049fb04>] (kernel_init+0x8/0x10c)
[    1.583393] [<c049fb04>] (kernel_init) from [<c01077b8>] (ret_from_fork+0x14/0x3c)
[    1.591090] Code: e3500000 e0833100 0affffcb e0878003 (e5982000) 
[    1.597333] ---[ end trace bbc44517edfb9c6a ]---
[    1.602076] Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b
[    1.602076] 
[    1.611435] CPU1: stopping
[    1.614241] CPU: 1 PID: 0 Comm: swapper/1 Tainted: G      D         4.9.7-1-rc2 #125
[    1.622107] Hardware name: Sigma Tango DT
[    1.626233] [<c010ed94>] (unwind_backtrace) from [<c010ae24>] (show_stack+0x10/0x14)
[    1.634106] [<c010ae24>] (show_stack) from [<c02cecc0>] (dump_stack+0x78/0x8c)
[    1.641454] [<c02cecc0>] (dump_stack) from [<c010dc10>] (handle_IPI+0x198/0x1ac)
[    1.648975] [<c010dc10>] (handle_IPI) from [<c01014a4>] (gic_handle_irq+0x88/0x8c)
[    1.656670] [<c01014a4>] (gic_handle_irq) from [<c010b90c>] (__irq_svc+0x6c/0xa8)
[    1.664274] Exception stack(0xcf859f98 to 0xcf859fe0)
[    1.669433] 9f80:                                                       00000001 00000000
[    1.677739] 9fa0: 0000168e c0114620 cf858000 c1002fe4 c1003048 00000002 c100ba2e 413fc090
[    1.686045] 9fc0: 00000000 00000000 00000001 cf859fe8 c0108220 c0108224 60000013 ffffffff
[    1.694352] [<c010b90c>] (__irq_svc) from [<c0108224>] (arch_cpu_idle+0x38/0x3c)
[    1.701878] [<c0108224>] (arch_cpu_idle) from [<c0151f4c>] (cpu_startup_entry+0xcc/0x144)
[    1.710187] [<c0151f4c>] (cpu_startup_entry) from [<8010154c>] (0x8010154c)
[    1.717272] ---[ end Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b

Am I doing something stupid? (Very likely)

Regards.

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Using the generic host PCIe driver
  2017-03-01 18:05             ` Mason
@ 2017-03-01 21:57               ` Bjorn Helgaas
  -1 siblings, 0 replies; 54+ messages in thread
From: Bjorn Helgaas @ 2017-03-01 21:57 UTC (permalink / raw)
  To: Mason
  Cc: Rob Herring, Phuong Nguyen, David Daney, Marc Zyngier, linux-pci,
	Thibaud Cornic, Will Deacon, Thierry Reding, Linux ARM

On Wed, Mar 01, 2017 at 07:05:26PM +0100, Mason wrote:
> On 01/03/2017 17:18, Bjorn Helgaas wrote:
> 
> > On Wed, Mar 01, 2017 at 04:18:51PM +0100, Mason wrote:
> >
> >> [    0.657711] pci 0000:00:00.0: BAR 0: no space for [mem size 0x01000000 64bit]
> >> [    0.657731] pci 0000:00:00.0: BAR 0: failed to assign [mem size 0x01000000 64bit]
> >> [    0.657755] pci 0000:00:00.0: BAR 8: assigned [mem 0xa0000000-0xa00fffff]
> >> [    0.657776] pci 0000:01:00.0: BAR 0: assigned [mem 0xa0000000-0xa0001fff 64bit]
> >>
> >> These 4 statements sound fishy.
> > 
> > 00:00.0 is a PCI-to-PCI bridge.  "BAR 8" is its memory window (as shown
> > below).  01:00.0 is below the bridge and is using part of the window.  That
> > part is normal.
> > 
> > 00:00.0 also has a BAR of its own.  That's perfectly legal but slightly
> > unusual.  The device will still work fine as a generic PCI-to-PCI bridge
> > even though we didn't assign the BAR.
> > 
> > The BAR would contain device-specific stuff: maybe performance monitoring
> > or management interfaces.  Those things won't work because we didn't assign
> > space.  But even if we did assign space, they would require a special
> > driver to make them work, since they're device-specific and the PCI core
> > knows nothing about them.
> > 
> > Bottom line is that you can ignore the 00:00.0 BAR 0 assignment
> > failure.  It has nothing to do with getting other devices below the
> > bridge to work.
> 
> Another thing I don't understand... According to this reference:
> http://elinux.org/Device_Tree_Usage#PCI_Address_Translation
> 
> The "ranges" prop starts with:
> 
>     phys.hi  cell: npt000ss bbbbbbbb dddddfff rrrrrrrr
>     phys.mid cell: hhhhhhhh hhhhhhhh hhhhhhhh hhhhhhhh
>     phys.low cell: llllllll llllllll llllllll llllllll
> 
> So I thought it might be possible to specify bbbbbbbb = 0x01
> to mean "I want to assign memory only to bus 1, don't assign
> any memory to bus 0". Am I mistaken?

I don't really understand the "bbbbbbbb" field in that range.  The
bridge has a "bus-range" property that defines the PCI bus numbers
below the bridge.  As I understand it, the first number in bus-range
is the root bus (the bus immediately below the host bridge).  Is
"bbbbbbbb" supposed to be identical to that root bus number?  If
so, it seems redundant, so why does the "bbbbbbbb" field even exist?

If "bbbbbbbb" can be different from the root bus number, I don't know
what it means.  The host bridge translates CPU physical addresses to
PCI bus address on the root bus (the PCI bus immediately below the
bridge).  Once on the PCI side, the correspondence of memory addresses
to bus numbers is controlled completely by PCI-to-PCI bridges,
independent of what DT contains.

The way to control what bus addresses are available on bus 1 is to:

  1) Use DT to define the addresses available on bus 0 (the root bus) and
  2) Program the memory windows of 00:00.0 (the PCI-to-PCI bridge
     between bus 0 and bus 1).

In other words, it's impossible to assign memory only on bus 1.  The
memory on bus 1 is a subset of what's available on bus 0.
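
Concretely, that means leaving the bus-number bits of phys.hi at zero
and letting the bridge windows do the rest; with your window, that
would be something like:

	ranges = <0x02000000 0x0 0x90000000  0x90000000  0x0 0x00100000>;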

> The kernel panics when I use
> 
> 	ranges = <0x02010000 0x0 0x90000000  0x90000000  0x0 0x00100000>;
> 
> [    1.118503] pci 0000:00:00.0: BAR 0: no space for [mem size 0x01000000 64bit]
> [    1.125774] pci 0000:00:00.0: BAR 0: failed to assign [mem size 0x01000000 64bit]
> [    1.133393] pci 0000:00:00.0: BAR 8: assigned [mem 0x90000000-0x900fffff]
> [    1.140315] pci 0000:01:00.0: BAR 0: assigned [mem 0x90000000-0x90001fff 64bit]
> [    1.147771] pci 0000:00:00.0: PCI bridge to [bus 01]
> [    1.152857] pci 0000:00:00.0:   bridge window [mem 0x90000000-0x900fffff]
> [    1.159830] pcieport 0000:00:00.0: enabling device (0140 -> 0142)
> [    1.166062] pcieport 0000:00:00.0: enabling bus mastering
> [    1.171730] pci 0000:01:00.0: calling quirk_usb_early_handoff+0x0/0x790
> [    1.178486] pci 0000:01:00.0: enabling device (0140 -> 0142)
> [    1.184298] Unable to handle kernel paging request at virtual address d08671c4
> [    1.191652] pgd = c0004000
> [    1.194465] [d08671c4] *pgd=8f804811, *pte=00000000, *ppte=00000000
> [    1.200881] Internal error: Oops: 7 [#1] PREEMPT SMP ARM
> [    1.206302] Modules linked in:
> [    1.209458] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.9.7-1-rc2 #125
> [    1.216101] Hardware name: Sigma Tango DT
> [    1.220213] task: cf82c9c0 task.stack: cf838000
> [    1.224851] PC is at quirk_usb_early_handoff+0x3e8/0x790
> [    1.230277] LR is at ioremap_page_range+0xf8/0x1a8
> [    1.235175] pc : [<c039fe8c>]    lr : [<c02d0a10>]    psr: 000e0013
> [    1.235175] sp : cf839d78  ip : 00000000  fp : cf839e38
> [    1.246886] r10: c10248a0  r9 : 00000000  r8 : d08671c4
> [    1.252220] r7 : d084e000  r6 : 00002000  r5 : 000c0300  r4 : cfb4e800
> [    1.258864] r3 : 000191c4  r2 : 00000000  r1 : 90001e13  r0 : d084e000
> [    1.265509] Flags: nzcv  IRQs on  FIQs on  Mode SVC_32  ISA ARM  Segment none
> [    1.272764] Control: 10c5387d  Table: 8fa2c04a  DAC: 00000051
> [    1.278622] Process swapper/0 (pid: 1, stack limit = 0xcf838210)
> [    1.284742] Stack: (0xcf839d78 to 0xcf83a000)
> [    1.289204] 9d60:                                                       c058f578 c058b180
> [    1.297510] 9d80: cfb46300 cf839d98 c0350218 c05adccc cfb4e800 c05adcdc cf838000 00000000
> [    1.305816] 9da0: 00000000 c10248a0 cf839e38 c030bfa4 cf9cd480 c034e69c cf867270 00000000
> [    1.314122] 9dc0: cfb4e800 cfa3e414 cfa3e400 cf839e30 cf9cd480 00000000 cf906010 c02fa484
> [    1.322428] 9de0: cfb4e800 cfa3e414 cfa3e400 c02fa538 cfb4ec00 cfa3e814 cfa3e800 c02fa56c
> [    1.330734] 9e00: cfa3e80c cfa3e80c cfa3e800 c031387c cf839e30 cf9ee9b0 c05178c8 c10101d8
> [    1.339040] 9e20: cfa15cc0 00000000 cf906000 c058cd2c cf839e30 cf839e30 50000000 5fffffff
> [    1.347345] 9e40: cfdf7764 00000200 00000000 00000000 00000000 00000000 c1057de8 cf906010
> [    1.355651] 9e60: c1010208 cf906044 c1010208 00000000 00000007 00000000 cfffcec0 c0351624
> [    1.363957] 9e80: c1056fb0 cf906010 cf906044 c03500c0 cf906010 c1010208 cf906044 c10177d0
> [    1.372262] 9ea0: 00000073 c0350214 00000000 c1010208 c0350150 c034e5e8 cf80545c cf8a60b4
> [    1.380568] 9ec0: c1010208 cf9b8d80 00000000 c034f72c c058cd84 c0616a94 c0633cb0 c1010208
> [    1.388874] 9ee0: c0616a94 c0633cb0 c0628834 c0350770 ffffe000 c0616a94 c0633cb0 c0101834
> [    1.397179] 9f00: c104a354 c100a5c8 00000000 c0220830 00000000 cf87cf00 00000000 c1009370
> [    1.405485] 9f20: cfffceee c050fa08 00000073 c0132aec c059a1c4 c05da4a4 00000000 00000006
> [    1.413790] 9f40: 00000006 c05723fc c1009358 c1024880 c1024880 c1024880 c0633cb0 c0628834
> [    1.422096] 9f60: 00000073 00000007 c062883c c0600db4 00000006 00000006 00000000 c06005ac
> [    1.430401] 9f80: a4b68a61 00000000 c049fafc 00000000 00000000 00000000 00000000 00000000
> [    1.438706] 9fa0: 00000000 c049fb04 00000000 c01077b8 00000000 00000000 00000000 00000000
> [    1.447011] 9fc0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
> [    1.455316] 9fe0: 00000000 00000000 00000000 00000000 00000013 00000000 c47be6fa ff2306fe
> [    1.463640] [<c039fe8c>] (quirk_usb_early_handoff) from [<c030bfa4>] (pci_do_fixups+0xc8/0x158)
> [    1.472480] [<c030bfa4>] (pci_do_fixups) from [<c02fa484>] (pci_bus_add_device+0x18/0x90)
> [    1.480790] [<c02fa484>] (pci_bus_add_device) from [<c02fa538>] (pci_bus_add_devices+0x3c/0x80)
> [    1.489622] [<c02fa538>] (pci_bus_add_devices) from [<c02fa56c>] (pci_bus_add_devices+0x70/0x80)
> [    1.498544] [<c02fa56c>] (pci_bus_add_devices) from [<c031387c>] (pci_host_common_probe+0xfc/0x324)
> [    1.507732] [<c031387c>] (pci_host_common_probe) from [<c0351624>] (platform_drv_probe+0x34/0x7c)
> [    1.516742] [<c0351624>] (platform_drv_probe) from [<c03500c0>] (really_probe+0x1c4/0x254)
> [    1.525137] [<c03500c0>] (really_probe) from [<c0350214>] (__driver_attach+0xc4/0xc8)
> [    1.533095] [<c0350214>] (__driver_attach) from [<c034e5e8>] (bus_for_each_dev+0x68/0x9c)
> [    1.541402] [<c034e5e8>] (bus_for_each_dev) from [<c034f72c>] (bus_add_driver+0x1a0/0x218)
> [    1.549797] [<c034f72c>] (bus_add_driver) from [<c0350770>] (driver_register+0x78/0xf8)
> [    1.557931] [<c0350770>] (driver_register) from [<c0101834>] (do_one_initcall+0x44/0x174)
> [    1.566247] [<c0101834>] (do_one_initcall) from [<c0600db4>] (kernel_init_freeable+0x154/0x1e4)
> [    1.575082] [<c0600db4>] (kernel_init_freeable) from [<c049fb04>] (kernel_init+0x8/0x10c)
> [    1.583393] [<c049fb04>] (kernel_init) from [<c01077b8>] (ret_from_fork+0x14/0x3c)
> [    1.591090] Code: e3500000 e0833100 0affffcb e0878003 (e5982000) 
> [    1.597333] ---[ end trace bbc44517edfb9c6a ]---
> [    1.602076] Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b
> [    1.602076] 
> [    1.611435] CPU1: stopping
> [    1.614241] CPU: 1 PID: 0 Comm: swapper/1 Tainted: G      D         4.9.7-1-rc2 #125
> [    1.622107] Hardware name: Sigma Tango DT
> [    1.626233] [<c010ed94>] (unwind_backtrace) from [<c010ae24>] (show_stack+0x10/0x14)
> [    1.634106] [<c010ae24>] (show_stack) from [<c02cecc0>] (dump_stack+0x78/0x8c)
> [    1.641454] [<c02cecc0>] (dump_stack) from [<c010dc10>] (handle_IPI+0x198/0x1ac)
> [    1.648975] [<c010dc10>] (handle_IPI) from [<c01014a4>] (gic_handle_irq+0x88/0x8c)
> [    1.656670] [<c01014a4>] (gic_handle_irq) from [<c010b90c>] (__irq_svc+0x6c/0xa8)
> [    1.664274] Exception stack(0xcf859f98 to 0xcf859fe0)
> [    1.669433] 9f80:                                                       00000001 00000000
> [    1.677739] 9fa0: 0000168e c0114620 cf858000 c1002fe4 c1003048 00000002 c100ba2e 413fc090
> [    1.686045] 9fc0: 00000000 00000000 00000001 cf859fe8 c0108220 c0108224 60000013 ffffffff
> [    1.694352] [<c010b90c>] (__irq_svc) from [<c0108224>] (arch_cpu_idle+0x38/0x3c)
> [    1.701878] [<c0108224>] (arch_cpu_idle) from [<c0151f4c>] (cpu_startup_entry+0xcc/0x144)
> [    1.710187] [<c0151f4c>] (cpu_startup_entry) from [<8010154c>] (0x8010154c)
> [    1.717272] ---[ end Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b
> 
> Am I doing something stupid? (Very likely)
> 
> Regards.

^ permalink raw reply	[flat|nested] 54+ messages in thread

> [    1.516742] [<c0351624>] (platform_drv_probe) from [<c03500c0>] (really_probe+0x1c4/0x254)
> [    1.525137] [<c03500c0>] (really_probe) from [<c0350214>] (__driver_attach+0xc4/0xc8)
> [    1.533095] [<c0350214>] (__driver_attach) from [<c034e5e8>] (bus_for_each_dev+0x68/0x9c)
> [    1.541402] [<c034e5e8>] (bus_for_each_dev) from [<c034f72c>] (bus_add_driver+0x1a0/0x218)
> [    1.549797] [<c034f72c>] (bus_add_driver) from [<c0350770>] (driver_register+0x78/0xf8)
> [    1.557931] [<c0350770>] (driver_register) from [<c0101834>] (do_one_initcall+0x44/0x174)
> [    1.566247] [<c0101834>] (do_one_initcall) from [<c0600db4>] (kernel_init_freeable+0x154/0x1e4)
> [    1.575082] [<c0600db4>] (kernel_init_freeable) from [<c049fb04>] (kernel_init+0x8/0x10c)
> [    1.583393] [<c049fb04>] (kernel_init) from [<c01077b8>] (ret_from_fork+0x14/0x3c)
> [    1.591090] Code: e3500000 e0833100 0affffcb e0878003 (e5982000) 
> [    1.597333] ---[ end trace bbc44517edfb9c6a ]---
> [    1.602076] Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b
> [    1.602076] 
> [    1.611435] CPU1: stopping
> [    1.614241] CPU: 1 PID: 0 Comm: swapper/1 Tainted: G      D         4.9.7-1-rc2 #125
> [    1.622107] Hardware name: Sigma Tango DT
> [    1.626233] [<c010ed94>] (unwind_backtrace) from [<c010ae24>] (show_stack+0x10/0x14)
> [    1.634106] [<c010ae24>] (show_stack) from [<c02cecc0>] (dump_stack+0x78/0x8c)
> [    1.641454] [<c02cecc0>] (dump_stack) from [<c010dc10>] (handle_IPI+0x198/0x1ac)
> [    1.648975] [<c010dc10>] (handle_IPI) from [<c01014a4>] (gic_handle_irq+0x88/0x8c)
> [    1.656670] [<c01014a4>] (gic_handle_irq) from [<c010b90c>] (__irq_svc+0x6c/0xa8)
> [    1.664274] Exception stack(0xcf859f98 to 0xcf859fe0)
> [    1.669433] 9f80:                                                       00000001 00000000
> [    1.677739] 9fa0: 0000168e c0114620 cf858000 c1002fe4 c1003048 00000002 c100ba2e 413fc090
> [    1.686045] 9fc0: 00000000 00000000 00000001 cf859fe8 c0108220 c0108224 60000013 ffffffff
> [    1.694352] [<c010b90c>] (__irq_svc) from [<c0108224>] (arch_cpu_idle+0x38/0x3c)
> [    1.701878] [<c0108224>] (arch_cpu_idle) from [<c0151f4c>] (cpu_startup_entry+0xcc/0x144)
> [    1.710187] [<c0151f4c>] (cpu_startup_entry) from [<8010154c>] (0x8010154c)
> [    1.717272] ---[ end Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b
> 
> Am I doing something stupid? (Very likely)
> 
> Regards.

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Using the generic host PCIe driver
  2017-03-01 16:36             ` Marc Zyngier
@ 2017-03-03 11:26               ` Mason
  -1 siblings, 0 replies; 54+ messages in thread
From: Mason @ 2017-03-03 11:26 UTC (permalink / raw)
  To: Marc Zyngier, Bjorn Helgaas
  Cc: linux-pci, Linux ARM, Will Deacon, David Daney, Rob Herring,
	Thierry Reding, Phuong Nguyen, Thibaud Cornic

On 01/03/2017 17:36, Marc Zyngier wrote:

> Mason: while the kernel has generic support for dealing with MSI, there
> is no standardization at the interrupt controller level, so you do have
> to write your own driver, and wire it into the rest of the framework.
> 
> I suggest you look at things like drivers/pci/host/pcie-altera-msi.c,
> which has an extremely simple implementation. You can use this as a
> starting point for your own driver.

Thanks Marc,

I'll have a close look at the Altera driver.

I'm having a hard time understanding 3 different kinds of interrupts:

  1. MSI (message-signalled interrupts)
  2. legacy interrupts
  3. custom interrupts

I mostly understand MSI. When a device needs attention from the CPU,
it sends a specific packet over the PCIe data link to a specific
PCI bus address, which the PCIe controller knows to interpret as
an interrupt request, so it raises the appropriate signal to
interrupt the CPU.

Legacy interrupts are the old-style PCI interrupts, when a device
expects to have an actual physical interrupt line to the PCI
controller, correct? The controller then forwards the interrupt
request to the CPU, as with MSIs.

Custom interrupts, I'm not so sure about. Here's the list:

system_error : indicates that the interrupt is triggered by a system error, signaled by the lower layers.
dma_rd_int : interrupt is triggered by DMA read availability.
dma_wr_int : interrupt is triggered by DMA write availability.
cpl_ur : interrupt is triggered by unsupported completion request.
cpl_crs : interrupt is triggered by configuration request retry status.
cpl_ca : interrupt is triggered by completer abort event.
cpl_timeout : interrupt is triggered by a completion timeout event.
pci_intx : one of selected legacy interrupts INTx is triggered. 

Notes:

It appears that legacy interrupts are supported through this
custom register.

I think I might be able to ignore DMA for the time being.

cpl_* interrupts look somewhat standard, yet the PCI framework
cannot know they exist, since they are in some random MMIO register.
I'm confused about these and system_error. I guess I can ignore
them at first, or just print a message when they trigger, to try
to figure out what to do with them.
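
A first-pass handler could be as dumb as this (sketch only; the status
register offset and the write-1-to-clear behavior are guesses about
our HW):

	#include <linux/interrupt.h>
	#include <linux/io.h>

	#define PCIE_INT_STATUS	0x00	/* guessed offset of the custom register */

	/* Ack whatever error bits fired and log them for later analysis */
	static irqreturn_t pcie_err_isr(int irq, void *dev_id)
	{
		void __iomem *base = dev_id;
		u32 status = readl(base + PCIE_INT_STATUS);

		if (!status)
			return IRQ_NONE;
		writel(status, base + PCIE_INT_STATUS);	/* assuming W1C */
		pr_warn("PCIe error irq: status=%08x\n", status);
		return IRQ_HANDLED;
	}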


I suppose the interrupt controller I'm supposed to write needs
to handle all 3 types of interrupts?

Regards.

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Using the generic host PCIe driver
  2017-03-01 21:57               ` Bjorn Helgaas
@ 2017-03-03 12:44                 ` Mason
  -1 siblings, 0 replies; 54+ messages in thread
From: Mason @ 2017-03-03 12:44 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: linux-pci, Linux ARM, Will Deacon, David Daney, Rob Herring,
	Thierry Reding, Phuong Nguyen, Thibaud Cornic, Marc Zyngier

On 01/03/2017 22:57, Bjorn Helgaas wrote:

> On Wed, Mar 01, 2017 at 07:05:26PM +0100, Mason wrote:
> 
>> Another thing I don't understand... According to this reference:
>> http://elinux.org/Device_Tree_Usage#PCI_Address_Translation
>>
>> The "ranges" prop starts with:
>>
>>     phys.hi  cell: npt000ss bbbbbbbb dddddfff rrrrrrrr
>>     phys.mid cell: hhhhhhhh hhhhhhhh hhhhhhhh hhhhhhhh
>>     phys.low cell: llllllll llllllll llllllll llllllll
>>
>> So I thought it might be possible to specify bbbbbbbb = 0x01
>> to mean "I want to assign memory only to bus 1, don't assign
>> any memory to bus 0". Am I mistaken?
> 
> I don't really understand the "bbbbbbbb" field in that range.  The
> bridge has a "bus-range" property that defines the PCI bus numbers
> below the bridge.  As I understand it, the first number in bus-range
> is the root bus (the bus immediately below the host bridge).  Is
> "bbbbbbbb" supposed to be identical to that root bus number?  If
> so, it seems redundant, so why does the "bbbbbbbb" field even exist?
> 
> If "bbbbbbbb" can be different from the root bus number, I don't know
> what it means.  The host bridge translates CPU physical addresses to
> PCI bus address on the root bus (the PCI bus immediately below the
> bridge).  Once on the PCI side, the correspondence of memory addresses
> to bus numbers is controlled completely by PCI-to-PCI bridges,
> independent of what DT contains.

The answer (probably) lies in
PCI Bus Binding to: IEEE Std 1275-1994
Standard for Boot (Initialization Configuration) Firmware, Revision 2.1
[What does "to" mean in "PCI Bus Binding to"? Is it a typo?]

> 12. Use of the "ranges" property
> 
> The "ranges" property of Open Firmware represents how address
> transformation is done across bus bridges. The "ranges" property
> conveys this information for PCI, but the use of the property is not
> as straightforward as on some other busses.
> 
> In particular, the phys.hi fields of the child address spaces in the
> "ranges" property for PCI does not contain the same information as
> "reg" property entries within PCI nodes. The only information that is
> present in "ranges" phys.hi entries are the non-relocatable,
> prefetchable and the PCI address space bits for which the entry
> applies. I.e., only the n, p and ss bits are present; the bbbbbbbb,
> ddddd, fff and rrrrrrrr fields are 0.
> 
> When an address is to be mapped through a PCI bus bridge node, the
> phys.hi value of the address to be mapped and the child field of a
> "ranges" entry should be masked so that only the ss bits are
> compared. I.e., the only portion of phys.hi that should participate
> in the range determination is the address space indicator (the ss bits).

So my bbbbbbbb = 0x01 attempt was nonsensical (as I expected).
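
In code, the matching rule quoted above amounts to comparing nothing
but the ss bits of phys.hi (sketch):

	#include <stdbool.h>
	#include <stdint.h>

	#define SS_MASK	UINT32_C(0x03000000)	/* ss bits (25:24) of phys.hi */

	/* Only the address-space indicator participates in range matching */
	static bool ranges_entry_matches(uint32_t addr_hi, uint32_t child_hi)
	{
		return (addr_hi & SS_MASK) == (child_hi & SS_MASK);
	}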

For now, I have "hidden" the root's BAR0 from the system with:

	if (bus->number == 0 && where == PCI_BASE_ADDRESS_0) {
		*val = 0;
		return PCIBIOS_SUCCESSFUL;
	}
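
(For context, that check sits in a custom config-read op which falls
through to the stock ECAM accessor for everything else; roughly:)

	static int tango_config_read(struct pci_bus *bus, unsigned int devfn,
				     int where, int size, u32 *val)
	{
		/* Pretend the Root Port has no BAR0 */
		if (bus->number == 0 && where == PCI_BASE_ADDRESS_0) {
			*val = 0;
			return PCIBIOS_SUCCESSFUL;
		}
		/* Everything else goes through the generic ECAM path */
		return pci_generic_config_read(bus, devfn, where, size, val);
	}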

Regards.

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Using the generic host PCIe driver
  2017-03-03 12:44                 ` Mason
@ 2017-03-03 15:46                   ` Bjorn Helgaas
  -1 siblings, 0 replies; 54+ messages in thread
From: Bjorn Helgaas @ 2017-03-03 15:46 UTC (permalink / raw)
  To: Mason
  Cc: Rob Herring, Phuong Nguyen, David Daney, Marc Zyngier, linux-pci,
	Thibaud Cornic, Will Deacon, Thierry Reding, Linux ARM

On Fri, Mar 03, 2017 at 01:44:54PM +0100, Mason wrote:
> For now, I have "hidden" the root's BAR0 from the system with:
> 
> 	if (bus->number == 0 && where == PCI_BASE_ADDRESS_0) {
> 		*val = 0;
> 		return PCIBIOS_SUCCESSFUL;
> 	}

I'm scratching my head about this a little.  Here's what your dmesg
log contained originally:

  pci 0000:00:00.0: [1105:8758] type 01 class 0x048000
  pci 0000:00:00.0: reg 0x10: [mem 0x00000000-0x00ffffff 64bit]
  pci 0000:00:00.0: BAR 0: no space for [mem size 0x01000000 64bit]
  pci 0000:00:00.0: BAR 0: failed to assign [mem size 0x01000000 64bit]
  pci 0000:00:00.0: PCI bridge to [bus 01]
  pcieport 0000:00:00.0: enabling device (0140 -> 0142)

This device is a bridge (a Root Port, per your lspci output).  With a
BAR, which is legal but unusual.  We couldn't assign space for the
BAR, which means we can't use whatever vendor-specific functionality
it provides.

What's puzzling me is that pcieport was able to enable the device and
turn on PCI_COMMAND_MEMORY (the 0x2 bit).  It seems like this should
have failed because pci_enable_resources() checks to see that all the
BARs have been assigned.
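
For reference, the check I mean is roughly this (simplified sketch of
the loop in pci_enable_resources()):

	#include <linux/pci.h>

	/* Simplified sketch: an assigned BAR lives in a parent resource
	 * tree; refuse to enable the device otherwise */
	static int check_mem_bars_assigned(struct pci_dev *dev)
	{
		int i;

		for (i = 0; i < PCI_NUM_RESOURCES; i++) {
			struct resource *r = &dev->resource[i];

			if ((r->flags & IORESOURCE_MEM) && !r->parent) {
				dev_err(&dev->dev, "BAR %d not assigned\n", i);
				return -EINVAL;
			}
		}
		return 0;	/* safe to set PCI_COMMAND_MEMORY */
	}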

Since this is a bridge, we really *have* to turn on PCI_COMMAND_MEMORY
in order for the bridge to forward memory transactions to its
secondary bus (bus 01).  But we can't safely enable PCI_COMMAND_MEMORY
unless all its memory BARs are assigned.

So it's not safe to hide BAR0 from the PCI core.  That makes Linux
think the BAR doesn't exist, but of course it still exists in the
hardware itself, and it will respond at whatever address it happens to
contain.  In this case, that address happens to be zero, and the host
bridge does not advertise a window that maps to bus address zero, so
you probably won't see a conflict right away, but it's a latent issue
that may come back to bite you some day.

The easiest fix would be for you to increase the host bridge memory
window size.  It's currently [mem 0xa0000000-0xa03fffff], which is
only 4MB, which is *tiny*.  You need 16MB just to contain the bridge
BAR, plus at least 8K for the USB controller.
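
In DT terms that's just a larger size cell in the "ranges" entry,
e.g. to grow the window from 4MB to 256MB (sketch, keeping your
0xa0000000 base):

	ranges = <0x02000000 0x0 0xa0000000  0xa0000000  0x0 0x10000000>;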

If you can't make space for a bigger window, it's possible the Root
Port has some device-specific way to disable BAR0 in hardware, e.g.,
some register your firmware or an early Linux quirk could write.  That
would be much safer than enabling an unassigned BAR.
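
Such a quirk could look something like this; the control-register
base and bit are pure invention, only the fixup mechanism is real:

	#include <linux/io.h>
	#include <linux/pci.h>

	/* Hypothetical: assumes the controller exposes a "BAR0 disable" bit */
	static void tango_hide_bar0(struct pci_dev *dev)
	{
		void __iomem *base = ioremap(0x50400000, 0x100);	/* made-up ctrl regs */

		if (base) {
			writel(0x1, base + 0x40);	/* made-up disable bit */
			iounmap(base);
		}
	}
	DECLARE_PCI_FIXUP_EARLY(0x1105, 0x0024, tango_hide_bar0);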

Bjorn

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Using the generic host PCIe driver
  2017-03-03 11:26               ` Mason
@ 2017-03-03 16:41                 ` Marc Zyngier
  -1 siblings, 0 replies; 54+ messages in thread
From: Marc Zyngier @ 2017-03-03 16:41 UTC (permalink / raw)
  To: Mason
  Cc: Bjorn Helgaas, linux-pci, Linux ARM, Will Deacon, David Daney,
	Rob Herring, Thierry Reding, Phuong Nguyen, Thibaud Cornic

On Fri, Mar 03 2017 at 11:26:27 am GMT, Mason <slash.tmp@free.fr> wrote:
> On 01/03/2017 17:36, Marc Zyngier wrote:
>
>> Mason: while the kernel has generic support for dealing with MSI, there
>> is no standardization at the interrupt controller level, so you do have
>> to write your own driver, and wire it into the rest of the framework.
>> 
>> I suggest you look at things like drivers/pci/host/pcie-altera-msi.c,
>> which has an extremely simple implementation. You can use this as a
>> starting point for your own driver.
>
> Thanks Marc,
>
> I'll have a close look at the Altera driver.
>
> I'm having a hard time understanding 3 different kinds of interrupts:
>
>   1. MSI (message-signalled interrupts)
>   2. legacy interrupts
>   3. custom interrupts

[...]

> I suppose the interrupt controller I'm supposed to write needs
> to handle all 3 types of interrupts?

That's entirely up to you. INTx is the bare minimum. MSI is what people
actually need. The rest has more to do with configuring your host
controller, but only you know about it (and I'm not really interested in
the gory details of how this particular HW works).

I mentioned the Altera driver because it is a very simple example of an
MSI controller driver that uses the generic MSI domains. It doesn't care
about INTx, nor host controller management interrupts (that's handled
separately).
Thanks,

        M.
-- 
Jazz is not dead, it just smell funny.

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Using the generic host PCIe driver
  2017-03-03 16:41                 ` Marc Zyngier
@ 2017-03-03 16:53                   ` Mason
  -1 siblings, 0 replies; 54+ messages in thread
From: Mason @ 2017-03-03 16:53 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Bjorn Helgaas, linux-pci, Linux ARM, Will Deacon, David Daney,
	Rob Herring, Thierry Reding, Phuong Nguyen, Thibaud Cornic

On 03/03/2017 17:41, Marc Zyngier wrote:
> On Fri, Mar 03 2017 at 11:26:27 am GMT, Mason <slash.tmp@free.fr> wrote:
>> On 01/03/2017 17:36, Marc Zyngier wrote:
>>
>>> Mason: while the kernel has generic support for dealing with MSI, there
>>> is no standardization at the interrupt controller level, so you do have
>>> to write your own driver, and wire it into the rest of the framework.
>>>
>>> I suggest you look at things like drivers/pci/host/pcie-altera-msi.c,
>>> which has an extremely simple implementation. You can use this as a
>>> starting point for your own driver.
>>
>> Thanks Marc,
>>
>> I'll have a close look at the Altera driver.
>>
>> I'm having a hard time understanding 3 different kinds of interrupts:
>>
>>   1. MSI (message-signalled interrupts)
>>   2. legacy interrupts
>>   3. custom interrupts
> 
> [...]
> 
>> I suppose the interrupt controller I'm supposed to write needs
>> to handle all 3 types of interrupts?
> 
> That's entirely up to you. INTx is the bare minimum.

That's going to be a problem. Rev 1 of the PCIe controller does not
support legacy interrupts at all.

> MSI is what people
> actually need. The rest has more to do with configuring your host
> controller, but only you know about it (and I'm not really interested in
> the gory details of how this particular HW works).

I was under the impression that some of the error interrupts might be
required for proper PCIe functionality.

> I mentioned the Altera driver because it is a very simple example of an
> MSI controller driver that uses the generic MSI domains. It doesn't care
> about INTx, nor host controller management interrupts (that's handled
> separately).

OK, MSI support is all I need to start with, so I'll try my best to
decipher the cryptic intc API, without melting my remaining neuron.

Regards.

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Using the generic host PCIe driver
  2017-03-03 16:53                   ` Mason
@ 2017-03-03 17:08                     ` Marc Zyngier
  -1 siblings, 0 replies; 54+ messages in thread
From: Marc Zyngier @ 2017-03-03 17:08 UTC (permalink / raw)
  To: Mason
  Cc: Bjorn Helgaas, linux-pci, Linux ARM, Will Deacon, David Daney,
	Rob Herring, Thierry Reding, Phuong Nguyen, Thibaud Cornic

On Fri, Mar 03 2017 at  4:53:58 pm GMT, Mason <slash.tmp@free.fr> wrote:
> On 03/03/2017 17:41, Marc Zyngier wrote:
>> On Fri, Mar 03 2017 at 11:26:27 am GMT, Mason <slash.tmp@free.fr> wrote:
>>> On 01/03/2017 17:36, Marc Zyngier wrote:
>>>
>>>> Mason: while the kernel has generic support for dealing with MSI, there
>>>> is no standardization at the interrupt controller level, so you do have
>>>> to write your own driver, and wire it into the rest of the framework.
>>>>
>>>> I suggest you look at things like drivers/pci/host/pcie-altera-msi.c,
>>>> which has an extremely simple implementation. You can use this as a
>>>> starting point for your own driver.
>>>
>>> Thanks Marc,
>>>
>>> I'll have a close look at the Altera driver.
>>>
>>> I'm having a hard time understanding 3 different kinds of interrupts:
>>>
>>>   1. MSI (message-signalled interrupts)
>>>   2. legacy interrupts
>>>   3. custom interrupts
>> 
>> [...]
>> 
>>> I suppose the interrupt controller I'm supposed to write needs
>>> to handle all 3 types of interrupts?
>> 
>> That's entirely up to you. INTx is the bare minimum.
>
> That's going to be a problem. Rev 1 of the PCIe controller does not
> support legacy interrupts at all.

Well, let's hope that you never run out of MSIs, and that you don't face
a PCI device that insists (for better or worse) on using INTx.

>
>> MSI is what people
>> actually need. The rest has more to do with configuring your host
>> controller, but only you know about it (and I'm not really interested in
>> the gory details of how this particular HW works).
>
> I was under the impression that some of the error interrupts might be
> required for proper PCIe functionality.

Maybe, but that's not something the PCIe *device* will ever have to care
about. That's the host controller's business.

>> I mentioned the Altera driver because it is a very simple example of an
>> MSI controller driver that uses the generic MSI domains. It doesn't care
>> about INTx, nor host controller management interrupts (that's handled
>> separately).
>
> OK, MSI support is all I need to start with, so I'll try my best to
> decipher the cryptic intc API, without melting my remaining neuron.

What's cryptic for someone is usually crystal clear for someone else. We
deal with it.

     M.
-- 
Jazz is not dead, it just smell funny.

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Using the generic host PCIe driver
  2017-03-03 15:46                   ` Bjorn Helgaas
@ 2017-03-03 17:18                     ` Mason
  -1 siblings, 0 replies; 54+ messages in thread
From: Mason @ 2017-03-03 17:18 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: linux-pci, Linux ARM, Will Deacon, David Daney, Rob Herring,
	Thierry Reding, Phuong Nguyen, Thibaud Cornic, Marc Zyngier

On 03/03/2017 16:46, Bjorn Helgaas wrote:

> On Fri, Mar 03, 2017 at 01:44:54PM +0100, Mason wrote:
>
>> For now, I have "hidden" the root's BAR0 from the system with:
>>
>> 	if (bus->number == 0 && where == PCI_BASE_ADDRESS_0) {
>> 		*val = 0;
>> 		return PCIBIOS_SUCCESSFUL;
>> 	}
> 
> I'm scratching my head about this a little.  Here's what your dmesg
> log contained originally:
> 
>   pci 0000:00:00.0: [1105:8758] type 01 class 0x048000
>   pci 0000:00:00.0: reg 0x10: [mem 0x00000000-0x00ffffff 64bit]
>   pci 0000:00:00.0: BAR 0: no space for [mem size 0x01000000 64bit]
>   pci 0000:00:00.0: BAR 0: failed to assign [mem size 0x01000000 64bit]
>   pci 0000:00:00.0: PCI bridge to [bus 01]
>   pcieport 0000:00:00.0: enabling device (0140 -> 0142)
> 
> This device is a bridge (a Root Port, per your lspci output).  With a
> BAR, which is legal but unusual.  We couldn't assign space for the
> BAR, which means we can't use whatever vendor-specific functionality
> it provides.

I had several chats with the HW designer. I'll try to explain, only as
far as I could understand ;-)

We used to make devices, before implementing a root. Since at least
one BAR is required (?) for a device, it was decided to have one BAR
for the root, for symmetry.

In fact, I thought I could ignore that BAR, but it is apparently NOT
the case, as MSIs are supposed to be sent *within* the BAR of the root.
So I have removed my kludge hiding the BAR.

The weird twist is that the BAR advertises a 64-bit memory zone,
but we will, in fact, map MMIO registers behind it. So all the
RAM Linux assigns to the area is wasted, IIUC.

Did the above make sense?

[    0.986762] OF: PCI: host bridge /soc/pcie@50000000 ranges:
[    0.992478] OF: PCI:   No bus range found for /soc/pcie@50000000, using [bus 00-ff]
[    1.000279] OF: PCI: Parsing ranges property...
[    1.004938] OF: PCI:   MEM 0x90000000..0x9fffffff -> 0x90000000
[    1.014047] pci-host-generic 50000000.pcie: ECAM at [mem 0x50000000-0x5fffffff] for [bus 00-ff]
[    1.023088] pci-host-generic 50000000.pcie: PCI host bridge to bus 0000:00
[    1.030112] pci_bus 0000:00: root bus resource [bus 00-ff]
[    1.035729] pci_bus 0000:00: root bus resource [mem 0x90000000-0x9fffffff]
[    1.042737] pci_bus 0000:00: scanning bus
[    1.046895] pci 0000:00:00.0: [1105:0024] type 01 class 0x048000
[    1.053050] pci 0000:00:00.0: calling tango_pcie_fixup_class+0x0/0x10
[    1.059639] pci 0000:00:00.0: reg 0x10: [mem 0x00000000-0x00ffffff 64bit]
[    1.066583] pci 0000:00:00.0: calling pci_fixup_ide_bases+0x0/0x40
[    1.072929] pci 0000:00:00.0: supports D1 D2
[    1.077318] pci 0000:00:00.0: PME# supported from D0 D1 D2 D3hot
[    1.083452] pci 0000:00:00.0: PME# disabled
[    1.087901] pci_bus 0000:00: fixups for bus
[    1.092212] PCI: bus0: Fast back to back transfers disabled
[    1.097913] pci 0000:00:00.0: scanning [bus 00-00] behind bridge, pass 0
[    1.104746] pci 0000:00:00.0: bridge configuration invalid ([bus 00-00]), reconfiguring
[    1.112893] pci 0000:00:00.0: scanning [bus 00-00] behind bridge, pass 1
[    1.119800] pci_bus 0000:01: scanning bus
[    1.123972] pci 0000:01:00.0: [1912:0014] type 00 class 0x0c0330
[    1.130144] pci 0000:01:00.0: reg 0x10: [mem 0x00000000-0x00001fff 64bit]
[    1.137147] pci 0000:01:00.0: calling pci_fixup_ide_bases+0x0/0x40
[    1.143577] pci 0000:01:00.0: PME# supported from D0 D3hot D3cold
[    1.149801] pci 0000:01:00.0: PME# disabled
[    1.154364] pci_bus 0000:01: fixups for bus
[    1.158671] PCI: bus1: Fast back to back transfers disabled
[    1.164368] pci_bus 0000:01: bus scan returning with max=01
[    1.170067] pci_bus 0000:01: busn_res: [bus 01-ff] end is updated to 01
[    1.176814] pci_bus 0000:00: bus scan returning with max=01
[    1.182511] pci 0000:00:00.0: fixup irq: got 0
[    1.187072] pci 0000:00:00.0: assigning IRQ 00
[    1.191658] pci 0000:01:00.0: fixup irq: got 20
[    1.196307] pci 0000:01:00.0: assigning IRQ 20
[    1.200892] pci 0000:00:00.0: BAR 0: assigned [mem 0x90000000-0x90ffffff 64bit]
[    1.208344] pci 0000:00:00.0: BAR 8: assigned [mem 0x91000000-0x910fffff]
[    1.215267] pci 0000:01:00.0: BAR 0: assigned [mem 0x91000000-0x91001fff 64bit]
[    1.222726] pci 0000:00:00.0: PCI bridge to [bus 01]
[    1.227812] pci 0000:00:00.0:   bridge window [mem 0x91000000-0x910fffff]
[    1.234781] pcieport 0000:00:00.0: enabling device (0140 -> 0142)
[    1.241013] pcieport 0000:00:00.0: enabling bus mastering
[    1.246634] pci 0000:01:00.0: calling quirk_usb_early_handoff+0x0/0x790
[    1.253389] pci 0000:01:00.0: enabling device (0140 -> 0142)
[    1.275215] pci 0000:01:00.0: xHCI HW did not halt within 16000 usec status = 0xd2f4167e

# /usr/sbin/lspci -v
00:00.0 PCI bridge: Sigma Designs, Inc. Device 0024 (rev 01) (prog-if 00 [Normal decode])
        Flags: bus master, fast devsel, latency 0
        Memory at 90000000 (64-bit, non-prefetchable) [size=16M]
        Bus: primary=00, secondary=01, subordinate=01, sec-latency=0
        I/O behind bridge: 00000000-00000fff
        Memory behind bridge: 91000000-910fffff
        Prefetchable memory behind bridge: 00000000-000fffff
        Capabilities: [50] MSI: Enable- Count=1/4 Maskable- 64bit+
        Capabilities: [78] Power Management version 3
        Capabilities: [80] Express Root Port (Slot-), MSI 03
        Capabilities: [100] Virtual Channel
        Capabilities: [800] Advanced Error Reporting
        Kernel driver in use: pcieport

01:00.0 USB controller: Renesas Technology Corp. uPD720201 USB 3.0 Host Controller (rev 03) (prog-if 30 [XHCI])
        Flags: fast devsel, IRQ 20
        Memory at 91000000 (64-bit, non-prefetchable) [size=8K]
        Capabilities: [50] Power Management version 3
        Capabilities: [70] MSI: Enable- Count=1/8 Maskable- 64bit+
        Capabilities: [90] MSI-X: Enable- Count=8 Masked-
        Capabilities: [a0] Express Endpoint, MSI 00
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [150] Latency Tolerance Reporting


> What's puzzling me is that pcieport was able to enable the device and
> turn on PCI_COMMAND_MEMORY (the 0x2 bit).  It seems like this should
> have failed because pci_enable_resources() checks to see that all the
> BARs have been assigned.
> 
> Since this is a bridge, we really *have* to turn on PCI_COMMAND_MEMORY
> in order for the bridge to forward memory transactions to its
> secondary bus (bus 01).  But we can't safely enable PCI_COMMAND_MEMORY
> unless all its memory BARs are assigned.
> 
> So it's not safe to hide BAR0 from the PCI core.  That makes Linux
> think the BAR doesn't exist, but of course it still exists in the
> hardware itself, and it will respond at whatever address it happens to
> contain.  In this case, that address happens to be zero, and the host
> bridge does not advertise a window that maps to bus address zero, so
> you probably won't see a conflict right away, but it's a latent issue
> that may come back to bite you some day.

OK, no more hiding! :-)

> The easiest fix would be for you to increase the host bridge memory
> window size.  It's currently [mem 0xa0000000-0xa03fffff], which is
> only 4MB, which is *tiny*.  You need 16MB just to contain the bridge
> BAR, plus at least 8K for the USB controller.
> 
> If you can't make space for a bigger window, it's possible the Root
> Port has some device-specific way to disable BAR0 in hardware, e.g.,
> some register your firmware or an early Linux quirk could write.  That
> would be much safer than enabling an unassigned BAR.

Apparently, disabling BAR0 is not an option.

Regards.

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Using the generic host PCIe driver
  2017-03-03 17:18                     ` Mason
@ 2017-03-03 20:04                       ` Bjorn Helgaas
  -1 siblings, 0 replies; 54+ messages in thread
From: Bjorn Helgaas @ 2017-03-03 20:04 UTC (permalink / raw)
  To: Mason
  Cc: Rob Herring, Phuong Nguyen, David Daney, Marc Zyngier, linux-pci,
	Thibaud Cornic, Will Deacon, Thierry Reding, Linux ARM

On Fri, Mar 03, 2017 at 06:18:02PM +0100, Mason wrote:
> On 03/03/2017 16:46, Bjorn Helgaas wrote:
> > On Fri, Mar 03, 2017 at 01:44:54PM +0100, Mason wrote:
> >
> >> For now, I have "hidden" the root's BAR0 from the system with:
> >>
> >> 	if (bus->number == 0 && where == PCI_BASE_ADDRESS_0) {
> >> 		*val = 0;
> >> 		return PCIBIOS_SUCCESSFUL;
> >> 	}
> > 
> > I'm scratching my head about this a little.  Here's what your dmesg
> > log contained originally:
> > 
> >   pci 0000:00:00.0: [1105:8758] type 01 class 0x048000
> >   pci 0000:00:00.0: reg 0x10: [mem 0x00000000-0x00ffffff 64bit]
> >   pci 0000:00:00.0: BAR 0: no space for [mem size 0x01000000 64bit]
> >   pci 0000:00:00.0: BAR 0: failed to assign [mem size 0x01000000 64bit]
> >   pci 0000:00:00.0: PCI bridge to [bus 01]
> >   pcieport 0000:00:00.0: enabling device (0140 -> 0142)
> > 
> > This device is a bridge (a Root Port, per your lspci output).  With a
> > BAR, which is legal but unusual.  We couldn't assign space for the
> > BAR, which means we can't use whatever vendor-specific functionality
> > it provides.
> 
> I had several chats with the HW designer. I'll try to explain, only as
> far as I could understand ;-)
> 
> We used to make devices, before implementing a root. Since at least
> one BAR is required (?) for a device, it was decided to have one BAR
> for the root, for symmetry.

I'm not aware of a spec requirement for any BARs.  It's conceivable
that one could build a device that only uses config space.  And of
course, most bridges have windows but no BARs.  But that doesn't
matter; the hardware is what it is and we have to deal with it.

> In fact, I thought I could ignore that BAR, but it is apparently NOT
> the case, as MSIs are supposed to be sent *within* the BAR of the root.

I don't know much about this piece of the MSI puzzle, but maybe Marc
can enlighten us.  If this Root Port is the target of MSIs and the
Root Port turns them into some sort of interrupt on the CPU side, I
can see how this might make sense.

I think it's unusual for the PCI core to assign the MSI target using a
BAR, though.  I think this means you'll have to implement your
arch_setup_msi_irq() or .irq_compose_msi_msg() method such that it 
looks up that BAR value, since you won't know it at build-time.
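
Something along these lines, perhaps (hypothetical sketch; the stashed
root_port pointer and the doorbell offset inside BAR0 are assumptions):

	#include <linux/irq.h>
	#include <linux/msi.h>
	#include <linux/pci.h>

	#define MSI_DOORBELL_OFFSET	0x800	/* made-up offset inside BAR0 */

	static struct pci_dev *root_port;	/* stashed by the host driver */

	/* Build the MSI message from BAR0 as assigned at enumeration time */
	static void tango_compose_msi_msg(struct irq_data *d, struct msi_msg *msg)
	{
		phys_addr_t doorbell = pci_resource_start(root_port, 0)
				     + MSI_DOORBELL_OFFSET;

		msg->address_lo = lower_32_bits(doorbell);
		msg->address_hi = upper_32_bits(doorbell);
		msg->data = d->hwirq;
	}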

> The weird twist is that the BAR advertizes a 64-bit memory zone,
> but we will, in fact, map MMIO registers behind it. So all the
> RAM Linux assigns to the area is wasted, IIUC.

I'm not sure what this means.  You have this:

> OF: PCI:   MEM 0x90000000..0x9fffffff -> 0x90000000
> pci_bus 0000:00: root bus resource [mem 0x90000000-0x9fffffff]

This [mem 0x90000000-0x9fffffff] host bridge window means there can't
be RAM in that region.  CPU accesses to 0x90000000-0x9fffffff have to
be claimed by the host bridge and forwarded to PCI.

Linux doesn't "assign system RAM" anywhere; we just learn somehow
where that RAM is.  Linux *does* assign BARs of PCI devices, and they
have to be inside the host bridge windows(s).

The BARs may contain registers, RAM, frame buffers, etc., that live
on the PCI device.  It's totally up to the device what it is.

> OF: PCI: host bridge /soc/pcie@50000000 ranges:
> OF: PCI:   No bus range found for /soc/pcie@50000000, using [bus 00-ff]

Tangent: the lack of a bus range is a defect in your DTS.

> [    1.042737] pci_bus 0000:00: scanning bus
> [    1.046895] pci 0000:00:00.0: [1105:0024] type 01 class 0x048000
> [    1.053050] pci 0000:00:00.0: calling tango_pcie_fixup_class+0x0/0x10
> [    1.059639] pci 0000:00:00.0: reg 0x10: [mem 0x00000000-0x00ffffff 64bit]
> [    1.066583] pci 0000:00:00.0: calling pci_fixup_ide_bases+0x0/0x40
> [    1.072929] pci 0000:00:00.0: supports D1 D2
> [    1.077318] pci 0000:00:00.0: PME# supported from D0 D1 D2 D3hot
> [    1.083452] pci 0000:00:00.0: PME# disabled
> [    1.087901] pci_bus 0000:00: fixups for bus
> [    1.092212] PCI: bus0: Fast back to back transfers disabled
> [    1.097913] pci 0000:00:00.0: scanning [bus 00-00] behind bridge, pass 0
> [    1.104746] pci 0000:00:00.0: bridge configuration invalid ([bus 00-00]), reconfiguring
> [    1.112893] pci 0000:00:00.0: scanning [bus 00-00] behind bridge, pass 1
> [    1.119800] pci_bus 0000:01: scanning bus
> [    1.123972] pci 0000:01:00.0: [1912:0014] type 00 class 0x0c0330
> [    1.130144] pci 0000:01:00.0: reg 0x10: [mem 0x00000000-0x00001fff 64bit]
> [    1.137147] pci 0000:01:00.0: calling pci_fixup_ide_bases+0x0/0x40
> [    1.143577] pci 0000:01:00.0: PME# supported from D0 D3hot D3cold
> [    1.149801] pci 0000:01:00.0: PME# disabled
> [    1.154364] pci_bus 0000:01: fixups for bus
> [    1.158671] PCI: bus1: Fast back to back transfers disabled
> [    1.164368] pci_bus 0000:01: bus scan returning with max=01
> [    1.170067] pci_bus 0000:01: busn_res: [bus 01-ff] end is updated to 01
> [    1.176814] pci_bus 0000:00: bus scan returning with max=01
> [    1.182511] pci 0000:00:00.0: fixup irq: got 0
> [    1.187072] pci 0000:00:00.0: assigning IRQ 00
> [    1.191658] pci 0000:01:00.0: fixup irq: got 20
> [    1.196307] pci 0000:01:00.0: assigning IRQ 20
> [    1.200892] pci 0000:00:00.0: BAR 0: assigned [mem 0x90000000-0x90ffffff 64bit]
> [    1.208344] pci 0000:00:00.0: BAR 8: assigned [mem 0x91000000-0x910fffff]
> [    1.215267] pci 0000:01:00.0: BAR 0: assigned [mem 0x91000000-0x91001fff 64bit]
> [    1.222726] pci 0000:00:00.0: PCI bridge to [bus 01]
> [    1.227812] pci 0000:00:00.0:   bridge window [mem 0x91000000-0x910fffff]
> [    1.234781] pcieport 0000:00:00.0: enabling device (0140 -> 0142)
> [    1.241013] pcieport 0000:00:00.0: enabling bus mastering
> [    1.246634] pci 0000:01:00.0: calling quirk_usb_early_handoff+0x0/0x790
> [    1.253389] pci 0000:01:00.0: enabling device (0140 -> 0142)
> [    1.275215] pci 0000:01:00.0: xHCI HW did not halt within 16000 usec status = 0xd2f4167e
> 
> # /usr/sbin/lspci -v
> 00:00.0 PCI bridge: Sigma Designs, Inc. Device 0024 (rev 01) (prog-if 00 [Normal decode])
>         Flags: bus master, fast devsel, latency 0
>         Memory at 90000000 (64-bit, non-prefetchable) [size=16M]
>         Bus: primary=00, secondary=01, subordinate=01, sec-latency=0
>         I/O behind bridge: 00000000-00000fff

Something's wrong with this.  You have no I/O windows through the host
bridge, which implies that you can't generate PCI I/O transactions, so
this I/O window should be disabled.  This might be an lspci issue;
what does "lspci -xxx" show?

>         Memory behind bridge: 91000000-910fffff
>         Prefetchable memory behind bridge: 00000000-000fffff

This prefetchable memory window is bogus, too.  It should probably be
disabled.  If the bridge doesn't support a prefetchable window, the
base and limit should be hardwired to zero.  If it supports a window
but it's disabled, the limit should be less than the base.  For
example, on my system I see this for a bridge with the window
disabled:

  # setpci -s00:1c.0 PREF_MEMORY_BASE
  fff1
  # setpci -s00:1c.0 PREF_MEMORY_LIMIT
  0001
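
If those registers are writable on your part, you could try parking
the window in that disabled state yourself, e.g.:

  # setpci -s00:00.0 PREF_MEMORY_BASE=fff0
  # setpci -s00:00.0 PREF_MEMORY_LIMIT=0000

(limit below base means "disabled"; if the values don't stick, the
registers are likely broken in hardware)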

Bjorn

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Using the generic host PCIe driver
  2017-03-03 20:04                       ` Bjorn Helgaas
@ 2017-03-03 23:23                         ` Mason
  -1 siblings, 0 replies; 54+ messages in thread
From: Mason @ 2017-03-03 23:23 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: linux-pci, Linux ARM, Will Deacon, David Daney, Rob Herring,
	Thierry Reding, Phuong Nguyen, Thibaud Cornic, Marc Zyngier

On 03/03/2017 21:04, Bjorn Helgaas wrote:
> On Fri, Mar 03, 2017 at 06:18:02PM +0100, Mason wrote:
>> On 03/03/2017 16:46, Bjorn Helgaas wrote:
>>> On Fri, Mar 03, 2017 at 01:44:54PM +0100, Mason wrote:
>>>
>>>> For now, I have "hidden" the root's BAR0 from the system with:
>>>>
>>>> 	if (bus->number == 0 && where == PCI_BASE_ADDRESS_0) {
>>>> 		*val = 0;
>>>> 		return PCIBIOS_SUCCESSFUL;
>>>> 	}
>>>
>>> I'm scratching my head about this a little.  Here's what your dmesg
>>> log contained originally:
>>>
>>>   pci 0000:00:00.0: [1105:8758] type 01 class 0x048000
>>>   pci 0000:00:00.0: reg 0x10: [mem 0x00000000-0x00ffffff 64bit]
>>>   pci 0000:00:00.0: BAR 0: no space for [mem size 0x01000000 64bit]
>>>   pci 0000:00:00.0: BAR 0: failed to assign [mem size 0x01000000 64bit]
>>>   pci 0000:00:00.0: PCI bridge to [bus 01]
>>>   pcieport 0000:00:00.0: enabling device (0140 -> 0142)
>>>
>>> This device is a bridge (a Root Port, per your lspci output).  With a
>>> BAR, which is legal but unusual.  We couldn't assign space for the
>>> BAR, which means we can't use whatever vendor-specific functionality
>>> it provides.
>>
>> I had several chats with the HW designer. I'll try to explain, only as
>> far as I could understand ;-)
>>
>> We used to make devices, before implementing a root. Since at least
>> one BAR is required (?) for a device, it was decided to have one BAR
>> for the root, for symmetry.
> 
> I'm not aware of a spec requirement for any BARs.  It's conceivable
> that one could build a device that only uses config space.  And of
> course, most bridges have windows but no BARs.  But that doesn't
> matter; the hardware is what it is and we have to deal with it.

I appreciate the compassion. RMK considered the DMA HW too screwy
to bother supporting ;-)

>> In fact, I thought I could ignore that BAR, but it is apparently NOT
>> the case, as MSIs are supposed to be sent *within* the BAR of the root.
> 
> I don't know much about this piece of the MSI puzzle, but maybe Marc
> can enlighten us.  If this Root Port is the target of MSIs and the
> Root Port turns them into some sort of interrupt on the CPU side, I
> can see how this might make sense.
> 
> I think it's unusual for the PCI core to assign the MSI target using a
> BAR, though.  I think this means you'll have to implement your
> arch_setup_msi_irq() or .irq_compose_msi_msg() method such that it 
> looks up that BAR value, since you won't know it at build-time.

I'll hack the Altera driver to fit my purpose.

>> The weird twist is that the BAR advertises a 64-bit memory zone,
>> but we will, in fact, map MMIO registers behind it. So all the
>> RAM Linux assigns to the area is wasted, IIUC.
> 
> I'm not sure what this means.  You have this:
> 
>> OF: PCI:   MEM 0x90000000..0x9fffffff -> 0x90000000

This means I've put 256 MB of system RAM aside for PCIe devices.
This memory is no longer available for Linux "stuff".

>> pci_bus 0000:00: root bus resource [mem 0x90000000-0x9fffffff]

I suppose this is the PCI bus address. As we've discussed,
I used the identity to map bus <-> CPU addresses.

> This [mem 0x90000000-0x9fffffff] host bridge window means there can't
> be RAM in that region.  CPU accesses to 0x90000000-0x9fffffff have to
> be claimed by the host bridge and forwarded to PCI.
> 
> Linux doesn't "assign system RAM" anywhere; we just learn somehow
> where that RAM is.  Linux *does* assign BARs of PCI devices, and they
> have to be inside the host bridge window(s).

I'm confused, I thought I had understood that part...
I thought the binding required me to specify (in the "ranges"
property) a non-prefetchable zone of system RAM, and this
memory is then "handed out" by Linux to different devices.
Or do I just need to specify some address range that's not
necessarily backed with actual RAM?

> The BARs may contain registers, RAM, frame buffers, etc., that live
> on the PCI device.  It's totally up to the device what it is.
> 
>> OF: PCI: host bridge /soc/pcie@50000000 ranges:
>> OF: PCI:   No bus range found for /soc/pcie@50000000, using [bus 00-ff]
> 
> Tangent: the lack of a bus range is a defect in your DTS.

What range should I specify? 0-1? 0-2? 0-255?

>> [    1.042737] pci_bus 0000:00: scanning bus
>> [    1.046895] pci 0000:00:00.0: [1105:0024] type 01 class 0x048000
>> [    1.053050] pci 0000:00:00.0: calling tango_pcie_fixup_class+0x0/0x10
>> [    1.059639] pci 0000:00:00.0: reg 0x10: [mem 0x00000000-0x00ffffff 64bit]
>> [    1.066583] pci 0000:00:00.0: calling pci_fixup_ide_bases+0x0/0x40
>> [    1.072929] pci 0000:00:00.0: supports D1 D2
>> [    1.077318] pci 0000:00:00.0: PME# supported from D0 D1 D2 D3hot
>> [    1.083452] pci 0000:00:00.0: PME# disabled
>> [    1.087901] pci_bus 0000:00: fixups for bus
>> [    1.092212] PCI: bus0: Fast back to back transfers disabled
>> [    1.097913] pci 0000:00:00.0: scanning [bus 00-00] behind bridge, pass 0
>> [    1.104746] pci 0000:00:00.0: bridge configuration invalid ([bus 00-00]), reconfiguring
>> [    1.112893] pci 0000:00:00.0: scanning [bus 00-00] behind bridge, pass 1
>> [    1.119800] pci_bus 0000:01: scanning bus
>> [    1.123972] pci 0000:01:00.0: [1912:0014] type 00 class 0x0c0330
>> [    1.130144] pci 0000:01:00.0: reg 0x10: [mem 0x00000000-0x00001fff 64bit]
>> [    1.137147] pci 0000:01:00.0: calling pci_fixup_ide_bases+0x0/0x40
>> [    1.143577] pci 0000:01:00.0: PME# supported from D0 D3hot D3cold
>> [    1.149801] pci 0000:01:00.0: PME# disabled
>> [    1.154364] pci_bus 0000:01: fixups for bus
>> [    1.158671] PCI: bus1: Fast back to back transfers disabled
>> [    1.164368] pci_bus 0000:01: bus scan returning with max=01
>> [    1.170067] pci_bus 0000:01: busn_res: [bus 01-ff] end is updated to 01
>> [    1.176814] pci_bus 0000:00: bus scan returning with max=01
>> [    1.182511] pci 0000:00:00.0: fixup irq: got 0
>> [    1.187072] pci 0000:00:00.0: assigning IRQ 00
>> [    1.191658] pci 0000:01:00.0: fixup irq: got 20
>> [    1.196307] pci 0000:01:00.0: assigning IRQ 20
>> [    1.200892] pci 0000:00:00.0: BAR 0: assigned [mem 0x90000000-0x90ffffff 64bit]
>> [    1.208344] pci 0000:00:00.0: BAR 8: assigned [mem 0x91000000-0x910fffff]
>> [    1.215267] pci 0000:01:00.0: BAR 0: assigned [mem 0x91000000-0x91001fff 64bit]
>> [    1.222726] pci 0000:00:00.0: PCI bridge to [bus 01]
>> [    1.227812] pci 0000:00:00.0:   bridge window [mem 0x91000000-0x910fffff]
>> [    1.234781] pcieport 0000:00:00.0: enabling device (0140 -> 0142)
>> [    1.241013] pcieport 0000:00:00.0: enabling bus mastering
>> [    1.246634] pci 0000:01:00.0: calling quirk_usb_early_handoff+0x0/0x790
>> [    1.253389] pci 0000:01:00.0: enabling device (0140 -> 0142)
>> [    1.275215] pci 0000:01:00.0: xHCI HW did not halt within 16000 usec status = 0xd2f4167e
>>
>> # /usr/sbin/lspci -v
>> 00:00.0 PCI bridge: Sigma Designs, Inc. Device 0024 (rev 01) (prog-if 00 [Normal decode])
>>         Flags: bus master, fast devsel, latency 0
>>         Memory at 90000000 (64-bit, non-prefetchable) [size=16M]
>>         Bus: primary=00, secondary=01, subordinate=01, sec-latency=0
>>         I/O behind bridge: 00000000-00000fff
> 
> Something's wrong with this.  You have no I/O windows through the host
> bridge, which implies that you can't generate PCI I/O transactions, so
> this I/O window should be disabled.  This might be an lspci issue;
> what does "lspci -xxx" show?

I'll look on Monday. But I know that this revision of the controller
does not support any I/O areas. I don't know why Linux sees this.
Might be a bug in the controller, or a missing init in my code.
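
If it turns out to be a controller bug, I suppose I could extend the
config-read hack quoted above to make the I/O window look disabled
(limit below base), along these lines:

	/* Hypothetical: fake a disabled I/O window on the root port */
	if (bus->number == 0 && where == PCI_IO_BASE) {
		*val = 0x000000f0;	/* I/O base 0xf0 > limit 0x00 */
		return PCIBIOS_SUCCESSFUL;
	}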

>>         Memory behind bridge: 91000000-910fffff
>>         Prefetchable memory behind bridge: 00000000-000fffff
> 
> This prefetchable memory window is bogus, too.  It should probably be
> disabled.  If the bridge doesn't support a prefetchable window, the
> base and limit should be hardwired to zero.  If it supports a window
> but it's disabled, the limit should be less than the base.  For
> example, on my system I see this for a bridge with the window
> disabled:
> 
>   # setpci -s00:1c.0 PREF_MEMORY_BASE
>   fff1
>   # setpci -s00:1c.0 PREF_MEMORY_LIMIT
>   0001

OK, one more thing for me to check. Thanks for your thoroughness.

Regards.

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Using the generic host PCIe driver
  2017-03-03 23:23                         ` Mason
@ 2017-03-04  9:35                           ` Ard Biesheuvel
  -1 siblings, 0 replies; 54+ messages in thread
From: Ard Biesheuvel @ 2017-03-04  9:35 UTC (permalink / raw)
  To: Mason
  Cc: Bjorn Helgaas, Rob Herring, Phuong Nguyen, David Daney,
	Marc Zyngier, linux-pci, Thibaud Cornic, Will Deacon,
	Thierry Reding, Linux ARM

On 3 March 2017 at 23:23, Mason <slash.tmp@free.fr> wrote:
> On 03/03/2017 21:04, Bjorn Helgaas wrote:
>> On Fri, Mar 03, 2017 at 06:18:02PM +0100, Mason wrote:
>>> On 03/03/2017 16:46, Bjorn Helgaas wrote:
>>>> On Fri, Mar 03, 2017 at 01:44:54PM +0100, Mason wrote:
>>>>
>>>>> For now, I have "hidden" the root's BAR0 from the system with:
>>>>>
>>>>>    if (bus->number == 0 && where == PCI_BASE_ADDRESS_0) {
>>>>>            *val = 0;
>>>>>            return PCIBIOS_SUCCESSFUL;
>>>>>    }
>>>>
>>>> I'm scratching my head about this a little.  Here's what your dmesg
>>>> log contained originally:
>>>>
>>>>   pci 0000:00:00.0: [1105:8758] type 01 class 0x048000
>>>>   pci 0000:00:00.0: reg 0x10: [mem 0x00000000-0x00ffffff 64bit]
>>>>   pci 0000:00:00.0: BAR 0: no space for [mem size 0x01000000 64bit]
>>>>   pci 0000:00:00.0: BAR 0: failed to assign [mem size 0x01000000 64bit]
>>>>   pci 0000:00:00.0: PCI bridge to [bus 01]
>>>>   pcieport 0000:00:00.0: enabling device (0140 -> 0142)
>>>>
>>>> This device is a bridge (a Root Port, per your lspci output).  With a
>>>> BAR, which is legal but unusual.  We couldn't assign space for the
>>>> BAR, which means we can't use whatever vendor-specific functionality
>>>> it provides.
>>>
>>> I had several chats with the HW designer. I'll try to explain, only as
>>> far as I could understand ;-)
>>>
>>> We used to make devices, before implementing a root. Since at least
>>> one BAR is required (?) for a device, it was decided to have one BAR
>>> for the root, for symmetry.
>>
>> I'm not aware of a spec requirement for any BARs.  It's conceivable
>> that one could build a device that only uses config space.  And of
>> course, most bridges have windows but no BARs.  But that doesn't
>> matter; the hardware is what it is and we have to deal with it.
>
> I appreciate the compassion. RMK considered the DMA HW too screwy
> to bother supporting ;-)
>
>>> In fact, I thought I could ignore that BAR, but it is apparently NOT
>>> the case, as MSIs are supposed to be sent *within* the BAR of the root.
>>
>> I don't know much about this piece of the MSI puzzle, but maybe Marc
>> can enlighten us.  If this Root Port is the target of MSIs and the
>> Root Port turns them into some sort of interrupt on the CPU side, I
>> can see how this might make sense.
>>
>> I think it's unusual for the PCI core to assign the MSI target using a
>> BAR, though.  I think this means you'll have to implement your
>> arch_setup_msi_irq() or .irq_compose_msi_msg() method such that it
>> looks up that BAR value, since you won't know it at build-time.
>
> I'll hack the Altera driver to fit my purpose.
>
>>> The weird twist is that the BAR advertises a 64-bit memory zone,
>>> but we will, in fact, map MMIO registers behind it. So all the
>>> RAM Linux assigns to the area is wasted, IIUC.
>>
>> I'm not sure what this means.  You have this:
>>
>>> OF: PCI:   MEM 0x90000000..0x9fffffff -> 0x90000000
>
> This means I've put 256 MB of system RAM aside for PCIe devices.
> This memory is no longer available for Linux "stuff".
>

No it doesn't. It is a physical memory *range* that is assigned to the
PCI host bridge. Any memory accesses by the CPU to that window will be
forwarded to the PCI bus by the host bridge. From the kernel driver's
POV, this range is a given, but your host bridge h/w may involve some
configuration to make the host bridge 'listen' to this range. This is
h/w specific, and as Bjorn pointed out, usually configured by the
firmware so that the kernel driver does not require any knowledge of
those internals.

>>> pci_bus 0000:00: root bus resource [mem 0x90000000-0x9fffffff]
>
> I suppose this is the PCI bus address. As we've discussed,
> I used the identity to map bus <-> CPU addresses.
>

Yes, that is fine

>> This [mem 0x90000000-0x9fffffff] host bridge window means there can't
>> be RAM in that region.  CPU accesses to 0x90000000-0x9fffffff have to
>> be claimed by the host bridge and forwarded to PCI.
>>
>> Linux doesn't "assign system RAM" anywhere; we just learn somehow
>> where that RAM is.  Linux *does* assign BARs of PCI devices, and they
>> have to be inside the host bridge window(s).
>
> I'm confused, I thought I had understood that part...
> I thought the binding required me to specify (in the "ranges"
> property) a non-prefetchable zone of system RAM, and this
> memory is then "handed out" by Linux to different devices.
> Or do I just need to specify some address range that's not
> necessarily backed with actual RAM?
>

Yes. Each PCI device advertises its need for memory windows via its
BARs, but the actual placement of those windows inside the host
bridge's memory range is configured dynamically: usually by the
firmware on PCs, but on ARM/arm64 systems this is done from scratch
by the kernel. The *purpose* of those memory windows is device
specific, but whatever is behind it lives on the PCI device. So this
is *not* system RAM.
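
To make it concrete: an endpoint driver never hardcodes that address,
it simply maps whatever the core assigned to its BAR, e.g.:

	/* sketch: map BAR 0 at whatever address the core assigned */
	void __iomem *regs = pci_iomap(pdev, 0, 0);
	if (!regs)
		return -ENOMEM;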

>> The BARs may contain registers, RAM, frame buffers, etc., that live
>> on the PCI device.  It's totally up to the device what it is.
>>
>>> OF: PCI: host bridge /soc/pcie@50000000 ranges:
>>> OF: PCI:   No bus range found for /soc/pcie@50000000, using [bus 00-ff]
>>
>> Tangent: the lack of a bus range is a defect in your DTS.
>
> What range should I specify? 0-1? 0-2? 0-255?
>
>>> [    1.042737] pci_bus 0000:00: scanning bus
>>> [    1.046895] pci 0000:00:00.0: [1105:0024] type 01 class 0x048000
>>> [    1.053050] pci 0000:00:00.0: calling tango_pcie_fixup_class+0x0/0x10
>>> [    1.059639] pci 0000:00:00.0: reg 0x10: [mem 0x00000000-0x00ffffff 64bit]
>>> [    1.066583] pci 0000:00:00.0: calling pci_fixup_ide_bases+0x0/0x40
>>> [    1.072929] pci 0000:00:00.0: supports D1 D2
>>> [    1.077318] pci 0000:00:00.0: PME# supported from D0 D1 D2 D3hot
>>> [    1.083452] pci 0000:00:00.0: PME# disabled
>>> [    1.087901] pci_bus 0000:00: fixups for bus
>>> [    1.092212] PCI: bus0: Fast back to back transfers disabled
>>> [    1.097913] pci 0000:00:00.0: scanning [bus 00-00] behind bridge, pass 0
>>> [    1.104746] pci 0000:00:00.0: bridge configuration invalid ([bus 00-00]), reconfiguring
>>> [    1.112893] pci 0000:00:00.0: scanning [bus 00-00] behind bridge, pass 1
>>> [    1.119800] pci_bus 0000:01: scanning bus
>>> [    1.123972] pci 0000:01:00.0: [1912:0014] type 00 class 0x0c0330
>>> [    1.130144] pci 0000:01:00.0: reg 0x10: [mem 0x00000000-0x00001fff 64bit]
>>> [    1.137147] pci 0000:01:00.0: calling pci_fixup_ide_bases+0x0/0x40
>>> [    1.143577] pci 0000:01:00.0: PME# supported from D0 D3hot D3cold
>>> [    1.149801] pci 0000:01:00.0: PME# disabled
>>> [    1.154364] pci_bus 0000:01: fixups for bus
>>> [    1.158671] PCI: bus1: Fast back to back transfers disabled
>>> [    1.164368] pci_bus 0000:01: bus scan returning with max=01
>>> [    1.170067] pci_bus 0000:01: busn_res: [bus 01-ff] end is updated to 01
>>> [    1.176814] pci_bus 0000:00: bus scan returning with max=01
>>> [    1.182511] pci 0000:00:00.0: fixup irq: got 0
>>> [    1.187072] pci 0000:00:00.0: assigning IRQ 00
>>> [    1.191658] pci 0000:01:00.0: fixup irq: got 20
>>> [    1.196307] pci 0000:01:00.0: assigning IRQ 20
>>> [    1.200892] pci 0000:00:00.0: BAR 0: assigned [mem 0x90000000-0x90ffffff 64bit]
>>> [    1.208344] pci 0000:00:00.0: BAR 8: assigned [mem 0x91000000-0x910fffff]
>>> [    1.215267] pci 0000:01:00.0: BAR 0: assigned [mem 0x91000000-0x91001fff 64bit]
>>> [    1.222726] pci 0000:00:00.0: PCI bridge to [bus 01]
>>> [    1.227812] pci 0000:00:00.0:   bridge window [mem 0x91000000-0x910fffff]
>>> [    1.234781] pcieport 0000:00:00.0: enabling device (0140 -> 0142)
>>> [    1.241013] pcieport 0000:00:00.0: enabling bus mastering
>>> [    1.246634] pci 0000:01:00.0: calling quirk_usb_early_handoff+0x0/0x790
>>> [    1.253389] pci 0000:01:00.0: enabling device (0140 -> 0142)
>>> [    1.275215] pci 0000:01:00.0: xHCI HW did not halt within 16000 usec status = 0xd2f4167e
>>>
>>> # /usr/sbin/lspci -v
>>> 00:00.0 PCI bridge: Sigma Designs, Inc. Device 0024 (rev 01) (prog-if 00 [Normal decode])
>>>         Flags: bus master, fast devsel, latency 0
>>>         Memory at 90000000 (64-bit, non-prefetchable) [size=16M]
>>>         Bus: primary=00, secondary=01, subordinate=01, sec-latency=0
>>>         I/O behind bridge: 00000000-00000fff
>>
>> Something's wrong with this.  You have no I/O windows through the host
>> bridge, which implies that you can't generate PCI I/O transactions, so
>> this I/O window should be disabled.  This might be an lspci issue;
>> what does "lspci -xxx" show?
>
> I'll look on Monday. But I know that this revision of the controller
> does not support any I/O areas. I don't know why Linux sees this.
> Might be a bug in the controller, or a missing init in my code.
>
>>>         Memory behind bridge: 91000000-910fffff
>>>         Prefetchable memory behind bridge: 00000000-000fffff
>>
>> This prefetchable memory window is bogus, too.  It should probably be
>> disabled.  If the bridge doesn't support a prefetchable window, the
>> base and limit should be hardwired to zero.  If it supports a window
>> but it's disabled, the limit should be less than the base.  For
>> example, on my system I see this for a bridge with the window
>> disabled:
>>
>>   # setpci -s00:1c.0 PREF_MEMORY_BASE
>>   fff1
>>   # setpci -s00:1c.0 PREF_MEMORY_LIMIT
>>   0001
>
> OK, one more thing for me to check. Thanks for your thoroughness.
>
> Regards.
>

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Using the generic host PCIe driver
  2017-03-03 20:04                       ` Bjorn Helgaas
@ 2017-03-04 10:50                         ` Marc Zyngier
  -1 siblings, 0 replies; 54+ messages in thread
From: Marc Zyngier @ 2017-03-04 10:50 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: Mason, linux-pci, Linux ARM, Will Deacon, David Daney,
	Rob Herring, Thierry Reding, Phuong Nguyen, Thibaud Cornic

On Fri, Mar 03 2017 at  8:04:07 pm GMT, Bjorn Helgaas <helgaas@kernel.org> wrote:
> On Fri, Mar 03, 2017 at 06:18:02PM +0100, Mason wrote:
>> In fact, I thought I could ignore that BAR, but it is apparently NOT
>> the case, as MSIs are supposed to be sent *within* the BAR of the root.
>
> I don't know much about this piece of the MSI puzzle, but maybe Marc
> can enlighten us.  If this Root Port is the target of MSIs and the
> Root Port turns them into some sort of interrupt on the CPU side, I
> can see how this might make sense.

There is a whole range of PCIe RCs that need to be programmed with the
doorbell address. It can be any address, but the kernel has to make sure
it is not something you will ever DMA to, as the RC is unlikely to
forward the transaction after having matched it.

> I think it's unusual for the PCI core to assign the MSI target using a
> BAR, though.  I think this means you'll have to implement your
> arch_setup_msi_irq() or .irq_compose_msi_msg() method such that it 
> looks up that BAR value, since you won't know it at build-time.

A common trick is to set the doorbell address to a well known value,
such as the base address for the PCIe RC itself. Of course, that only
works if the RC doesn't forward writes to the doorbell. Otherwise, any
RAM address will do, provided that it is not something we'd expect to
DMA to.
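
In driver terms that usually boils down to something like this (both
names invented for the example):

	/*
	 * Point the doorbell at the RC's own MMIO base: writes that
	 * match it are terminated at the RC, never forwarded, and it
	 * can never be a DMA target.
	 */
	pcie->msi_doorbell = pcie->mmio_base_phys;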

Thanks,

        M.
-- 
Jazz is not dead, it just smell funny.

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Using the generic host PCIe driver
  2017-03-04  9:35                           ` Ard Biesheuvel
@ 2017-03-04 10:56                             ` Mason
  -1 siblings, 0 replies; 54+ messages in thread
From: Mason @ 2017-03-04 10:56 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Bjorn Helgaas, Rob Herring, Phuong Nguyen, David Daney,
	Marc Zyngier, linux-pci, Thibaud Cornic, Will Deacon,
	Thierry Reding, Linux ARM

On 04/03/2017 10:35, Ard Biesheuvel wrote:
> On 3 March 2017 at 23:23, Mason <slash.tmp@free.fr> wrote:
>> On 03/03/2017 21:04, Bjorn Helgaas wrote:
>>> On Fri, Mar 03, 2017 at 06:18:02PM +0100, Mason wrote:
>>>> On 03/03/2017 16:46, Bjorn Helgaas wrote:
>>>>> On Fri, Mar 03, 2017 at 01:44:54PM +0100, Mason wrote:
>>>>>
>>>>>> For now, I have "hidden" the root's BAR0 from the system with:
>>>>>>
>>>>>>    if (bus->number == 0 && where == PCI_BASE_ADDRESS_0) {
>>>>>>            *val = 0;
>>>>>>            return PCIBIOS_SUCCESSFUL;
>>>>>>    }
>>>>>
>>>>> I'm scratching my head about this a little.  Here's what your dmesg
>>>>> log contained originally:
>>>>>
>>>>>   pci 0000:00:00.0: [1105:8758] type 01 class 0x048000
>>>>>   pci 0000:00:00.0: reg 0x10: [mem 0x00000000-0x00ffffff 64bit]
>>>>>   pci 0000:00:00.0: BAR 0: no space for [mem size 0x01000000 64bit]
>>>>>   pci 0000:00:00.0: BAR 0: failed to assign [mem size 0x01000000 64bit]
>>>>>   pci 0000:00:00.0: PCI bridge to [bus 01]
>>>>>   pcieport 0000:00:00.0: enabling device (0140 -> 0142)
>>>>>
>>>>> This device is a bridge (a Root Port, per your lspci output).  With a
>>>>> BAR, which is legal but unusual.  We couldn't assign space for the
>>>>> BAR, which means we can't use whatever vendor-specific functionality
>>>>> it provides.
>>>>
>>>> I had several chats with the HW designer. I'll try to explain, only as
>>>> far as I could understand ;-)
>>>>
>>>> We used to make devices, before implementing a root. Since at least
>>>> one BAR is required (?) for a device, it was decided to have one BAR
>>>> for the root, for symmetry.
>>>
>>> I'm not aware of a spec requirement for any BARs.  It's conceivable
>>> that one could build a device that only uses config space.  And of
>>> course, most bridges have windows but no BARs.  But that doesn't
>>> matter; the hardware is what it is and we have to deal with it.
>>
>> I appreciate the compassion. RMK considered the DMA HW too screwy
>> to bother supporting ;-)
>>
>>>> In fact, I thought I could ignore that BAR, but it is apparently NOT
>>>> the case, as MSIs are supposed to be sent *within* the BAR of the root.
>>>
>>> I don't know much about this piece of the MSI puzzle, but maybe Marc
>>> can enlighten us.  If this Root Port is the target of MSIs and the
>>> Root Port turns them into some sort of interrupt on the CPU side, I
>>> can see how this might make sense.
>>>
>>> I think it's unusual for the PCI core to assign the MSI target using a
>>> BAR, though.  I think this means you'll have to implement your
>>> arch_setup_msi_irq() or .irq_compose_msi_msg() method such that it
>>> looks up that BAR value, since you won't know it at build-time.
>>
>> I'll hack the Altera driver to fit my purpose.
>>
>>>> The weird twist is that the BAR advertises a 64-bit memory zone,
>>>> but we will, in fact, map MMIO registers behind it. So all the
>>>> RAM Linux assigns to the area is wasted, IIUC.
>>>
>>> I'm not sure what this means.  You have this:
>>>
>>>> OF: PCI:   MEM 0x90000000..0x9fffffff -> 0x90000000
>>
>> This means I've put 256 MB of system RAM aside for PCIe devices.
>> This memory is no longer available for Linux "stuff".
>>
> 
> No it doesn't. It is a physical memory *range* that is assigned to the
> PCI host bridge. Any memory accesses by the CPU to that window will be
> forwarded to the PCI bus by the host bridge. From the kernel driver's
> POV, this range is a given, but your host bridge h/w may involve some
> configuration to make the host bridge 'listen' to this range. This is
> h/w specific, and as Bjorn pointed out, usually configured by the
> firmware so that the kernel driver does not require any knowledge of
> those internals.
> 
>>>> pci_bus 0000:00: root bus resource [mem 0x90000000-0x9fffffff]
>>
>> I suppose this is the PCI bus address. As we've discussed,
>> I used the identity to map bus <-> CPU addresses.
>>
> 
> Yes, that is fine
> 
>>> This [mem 0x90000000-0x9fffffff] host bridge window means there can't
>>> be RAM in that region.  CPU accesses to 0x90000000-0x9fffffff have to
>>> be claimed by the host bridge and forwarded to PCI.
>>>
>>> Linux doesn't "assign system RAM" anywhere; we just learn somehow
>>> where that RAM is.  Linux *does* assign BARs of PCI devices, and they
>>> have to be inside the host bridge window(s).
>>
>> I'm confused, I thought I had understood that part...
>> I thought the binding required me to specify (in the "ranges"
>> property) a non-prefetchable zone of system RAM, and this
>> memory is then "handed out" by Linux to different devices.
>> Or do I just need to specify some address range that's not
>> necessarily backed with actual RAM?
>>
> 
> Yes. Each PCI device advertises its need of memory windows via its
> BARs, but the actual placement of those windows inside the host
> bridge's memory range is configured dynamically, usually by the
> firmware (on PCs) but on ARM/arm64 systems, this is done from scratch
> by the kernel. The *purpose* of those memory windows is device
> specific, but whatever is behind it lives on the PCI device. So this
> is *not* system RAM.

Hello Ard,

It appears I have misunderstood something fundamental.

The binding for generic PCI support
http://lxr.free-electrons.com/source/Documentation/devicetree/bindings/pci/host-generic-pci.txt
requires two address-type specs
(please correct me if I'm wrong)
1) in the "reg" prop, the address of the configuration space (CPU physical)
2) in the "ranges" prop, at least a non-prefetchable area
http://elinux.org/Device_Tree_Usage#PCI_Address_Translation

In my 32-bit system, there are 2GB of RAM at [0x8000_0000,0x10000_0000[
There are MMIO registers at [0, 16MB[ and also other stuff higher
Suppose there is nothing mapped at [0x7000_0000, 0x8000_0000[

Can I provide that range to the PCI subsystem?
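
For illustration, assuming the bridge really does forward that window
(only the datasheet can confirm this), the node I have in mind would
look roughly like this (all values hypothetical):

		pcie@50000000 {
			compatible = "pci-host-ecam-generic";
			reg = <0x50000000 0x200000>;
			device_type = "pci";
			#size-cells = <2>;
			#address-cells = <3>;
			#interrupt-cells = <1>;
			/* 256 MB MEM window, bus address == CPU address */
			ranges = <0x02000000 0x0 0x70000000  0x70000000  0x0 0x10000000>;
		};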

Regards.

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Using the generic host PCIe driver
  2017-03-04 10:56                             ` Mason
@ 2017-03-04 11:45                               ` Ard Biesheuvel
  -1 siblings, 0 replies; 54+ messages in thread
From: Ard Biesheuvel @ 2017-03-04 11:45 UTC (permalink / raw)
  To: Mason
  Cc: Bjorn Helgaas, Rob Herring, Phuong Nguyen, David Daney,
	Marc Zyngier, linux-pci, Thibaud Cornic, Will Deacon,
	Thierry Reding, Linux ARM

On 4 March 2017 at 10:56, Mason <slash.tmp@free.fr> wrote:
> On 04/03/2017 10:35, Ard Biesheuvel wrote:
>> On 3 March 2017 at 23:23, Mason <slash.tmp@free.fr> wrote:
>>> On 03/03/2017 21:04, Bjorn Helgaas wrote:
>>>> On Fri, Mar 03, 2017 at 06:18:02PM +0100, Mason wrote:
>>>>> On 03/03/2017 16:46, Bjorn Helgaas wrote:
>>>>>> On Fri, Mar 03, 2017 at 01:44:54PM +0100, Mason wrote:
>>>>>>
>>>>>>> For now, I have "hidden" the root's BAR0 from the system with:
>>>>>>>
>>>>>>>    if (bus->number == 0 && where == PCI_BASE_ADDRESS_0) {
>>>>>>>            *val = 0;
>>>>>>>            return PCIBIOS_SUCCESSFUL;
>>>>>>>    }
>>>>>>
>>>>>> I'm scratching my head about this a little.  Here's what your dmesg
>>>>>> log contained originally:
>>>>>>
>>>>>>   pci 0000:00:00.0: [1105:8758] type 01 class 0x048000
>>>>>>   pci 0000:00:00.0: reg 0x10: [mem 0x00000000-0x00ffffff 64bit]
>>>>>>   pci 0000:00:00.0: BAR 0: no space for [mem size 0x01000000 64bit]
>>>>>>   pci 0000:00:00.0: BAR 0: failed to assign [mem size 0x01000000 64bit]
>>>>>>   pci 0000:00:00.0: PCI bridge to [bus 01]
>>>>>>   pcieport 0000:00:00.0: enabling device (0140 -> 0142)
>>>>>>
>>>>>> This device is a bridge (a Root Port, per your lspci output).  With a
>>>>>> BAR, which is legal but unusual.  We couldn't assign space for the
>>>>>> BAR, which means we can't use whatever vendor-specific functionality
>>>>>> it provides.
>>>>>
>>>>> I had several chats with the HW designer. I'll try to explain, only as
>>>>> far as I could understand ;-)
>>>>>
>>>>> We used to make devices, before implementing a root. Since at least
>>>>> one BAR is required (?) for a device, it was decided to have one BAR
>>>>> for the root, for symmetry.
>>>>
>>>> I'm not aware of a spec requirement for any BARs.  It's conceivable
>>>> that one could build a device that only uses config space.  And of
>>>> course, most bridges have windows but no BARs.  But that doesn't
>>>> matter; the hardware is what it is and we have to deal with it.
>>>
>>> I appreciate the compassion. RMK considered the DMA HW too screwy
>>> to bother supporting ;-)
>>>
>>>>> In fact, I thought I could ignore that BAR, but it is apparently NOT
>>>>> the case, as MSIs are supposed to be sent *within* the BAR of the root.
>>>>
>>>> I don't know much about this piece of the MSI puzzle, but maybe Marc
>>>> can enlighten us.  If this Root Port is the target of MSIs and the
>>>> Root Port turns them into some sort of interrupt on the CPU side, I
>>>> can see how this might make sense.
>>>>
>>>> I think it's unusual for the PCI core to assign the MSI target using a
>>>> BAR, though.  I think this means you'll have to implement your
>>>> arch_setup_msi_irq() or .irq_compose_msi_msg() method such that it
>>>> looks up that BAR value, since you won't know it at build-time.
>>>
>>> I'll hack the Altera driver to fit my purpose.
>>>
>>>>> The weird twist is that the BAR advertises a 64-bit memory zone,
>>>>> but we will, in fact, map MMIO registers behind it. So all the
>>>>> RAM Linux assigns to the area is wasted, IIUC.
>>>>
>>>> I'm not sure what this means.  You have this:
>>>>
>>>>> OF: PCI:   MEM 0x90000000..0x9fffffff -> 0x90000000
>>>
>>> This means I've put 256 MB of system RAM aside for PCIe devices.
>>> This memory is no longer available for Linux "stuff".
>>>
>>
>> No it doesn't. It is a physical memory *range* that is assigned to the
>> PCI host bridge. Any memory accesses by the CPU to that window will be
>> forwarded to the PCI bus by the host bridge. From the kernel driver's
>> POV, this range is a given, but your host bridge h/w may involve some
>> configuration to make the host bridge 'listen' to this range. This is
>> h/w specific, and as Bjorn pointed out, usually configured by the
>> firmware so that the kernel driver does not require any knowledge of
>> those internals.
>>
>>>>> pci_bus 0000:00: root bus resource [mem 0x90000000-0x9fffffff]
>>>
>>> I suppose this is the PCI bus address. As we've discussed,
>>> I used the identity to map bus <-> CPU addresses.
>>>
>>
>> Yes, that is fine
>>
>>>> This [mem 0x90000000-0x9fffffff] host bridge window means there can't
>>>> be RAM in that region.  CPU accesses to 0x90000000-0x9fffffff have to
>>>> be claimed by the host bridge and forwarded to PCI.
>>>>
>>>> Linux doesn't "assign system RAM" anywhere; we just learn somehow
>>>> where that RAM is.  Linux *does* assign BARs of PCI devices, and they
>>>> have to be inside the host bridge window(s).
>>>
>>> I'm confused, I thought I had understood that part...
>>> I thought the binding required me to specify (in the "ranges"
>>> property) a non-prefetchable zone of system RAM, and this
>>> memory is then "handed out" by Linux to different devices.
>>> Or do I just need to specify some address range that's not
>>> necessarily backed with actual RAM?
>>>
>>
>> Yes. Each PCI device advertises its need of memory windows via its
>> BARs, but the actual placement of those windows inside the host
>> bridge's memory range is configured dynamically, usually by the
>> firmware (on PCs) but on ARM/arm64 systems, this is done from scratch
>> by the kernel. The *purpose* of those memory windows is device
>> specific, but whatever is behind it lives on the PCI device. So this
>> is *not* system RAM.
>
> Hello Ard,
>
> It appears I have misunderstood something fundamental.
>
> The binding for generic PCI support
> http://lxr.free-electrons.com/source/Documentation/devicetree/bindings/pci/host-generic-pci.txt
> requires two address-type specs
> (please correct me if I'm wrong)
> 1) in the "reg" prop, the address of the configuration space (CPU physical)
> 2) in the "ranges" prop, at least a non-prefetchable area
> http://elinux.org/Device_Tree_Usage#PCI_Address_Translation
>
> In my 32-bit system, there are 2GB of RAM at [0x8000_0000,0x10000_0000[
> There are MMIO registers at [0, 16MB[ and also other stuff higher
> Suppose there is nothing mapped at [0x7000_0000, 0x8000_0000[
>
> Can I provide that range to the PCI subsystem?

Well, it obviously needs to be a range that is not otherwise occupied.
But it is SoC specific where the forwarded MEM region(s) are, and
whether they are configurable or not. IOW, you can ask *us* all you
want about these details, but only the H/W designer can answer this
for you.

The DT node that describes the host bridge should simply describe
which MMIO regions are used by the device. This is no different from
any other MMIO peripheral.

As for the bus ranges: this also depends on the h/w, as far as I know,
and has a direct relation with the size of the PCI configuration space
(1 MB per bus for ECAM, IIRC?). On 32-bit systems, supporting that many
buses may be costly in terms of 32-bit addressable space, given that
the PCIe config space is typically below 4 GB. But it all depends on
the h/w implementation.
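
(To make the arithmetic concrete: ECAM gives each function 4 KB of
config space, and a bus holds up to 32 devices x 8 functions, i.e.
1 MB per bus, so 256 buses consume 256 MB. A sketch of the standard
decoding, not any particular driver:

	static void __iomem *ecam_addr(void __iomem *base, unsigned int bus,
				       unsigned int devfn, int where)
	{
		/* devfn packs device << 3 | function */
		return base + (bus << 20) + (devfn << 12) + where;
	}

A bus-range limited to <0 3> would therefore need only 4 MB.)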

-- 
Ard.

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Using the generic host PCIe driver
  2017-03-04 11:45                               ` Ard Biesheuvel
@ 2017-03-04 13:07                                 ` Mason
  -1 siblings, 0 replies; 54+ messages in thread
From: Mason @ 2017-03-04 13:07 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Bjorn Helgaas, Rob Herring, Phuong Nguyen, David Daney,
	Marc Zyngier, linux-pci, Thibaud Cornic, Will Deacon,
	Thierry Reding, Linux ARM

On 04/03/2017 12:45, Ard Biesheuvel wrote:
> On 4 March 2017 at 10:56, Mason <slash.tmp@free.fr> wrote:
>> On 04/03/2017 10:35, Ard Biesheuvel wrote:
>>> On 3 March 2017 at 23:23, Mason <slash.tmp@free.fr> wrote:
>>>> On 03/03/2017 21:04, Bjorn Helgaas wrote:
>>>>> On Fri, Mar 03, 2017 at 06:18:02PM +0100, Mason wrote:
>>>>>> On 03/03/2017 16:46, Bjorn Helgaas wrote:
>>>>>>> On Fri, Mar 03, 2017 at 01:44:54PM +0100, Mason wrote:
>>>>>>>
>>>>>>>> For now, I have "hidden" the root's BAR0 from the system with:
>>>>>>>>
>>>>>>>>    if (bus->number == 0 && where == PCI_BASE_ADDRESS_0) {
>>>>>>>>            *val = 0;
>>>>>>>>            return PCIBIOS_SUCCESSFUL;
>>>>>>>>    }
>>>>>>>
>>>>>>> I'm scratching my head about this a little.  Here's what your dmesg
>>>>>>> log contained originally:
>>>>>>>
>>>>>>>   pci 0000:00:00.0: [1105:8758] type 01 class 0x048000
>>>>>>>   pci 0000:00:00.0: reg 0x10: [mem 0x00000000-0x00ffffff 64bit]
>>>>>>>   pci 0000:00:00.0: BAR 0: no space for [mem size 0x01000000 64bit]
>>>>>>>   pci 0000:00:00.0: BAR 0: failed to assign [mem size 0x01000000 64bit]
>>>>>>>   pci 0000:00:00.0: PCI bridge to [bus 01]
>>>>>>>   pcieport 0000:00:00.0: enabling device (0140 -> 0142)
>>>>>>>
>>>>>>> This device is a bridge (a Root Port, per your lspci output).  With a
>>>>>>> BAR, which is legal but unusual.  We couldn't assign space for the
>>>>>>> BAR, which means we can't use whatever vendor-specific functionality
>>>>>>> it provides.
>>>>>>
>>>>>> I had several chats with the HW designer. I'll try to explain, only as
>>>>>> far as I could understand ;-)
>>>>>>
>>>>>> We used to make devices, before implementing a root. Since at least
>>>>>> one BAR is required (?) for a device, it was decided to have one BAR
>>>>>> for the root, for symmetry.
>>>>>
>>>>> I'm not aware of a spec requirement for any BARs.  It's conceivable
>>>>> that one could build a device that only uses config space.  And of
>>>>> course, most bridges have windows but no BARs.  But that doesn't
>>>>> matter; the hardware is what it is and we have to deal with it.
>>>>
>>>> I appreciate the compassion. RMK considered the DMA HW too screwy
>>>> to bother supporting ;-)
>>>>
>>>>>> In fact, I thought I could ignore that BAR, but it is apparently NOT
>>>>>> the case, as MSIs are supposed to be sent *within* the BAR of the root.
>>>>>
>>>>> I don't know much about this piece of the MSI puzzle, but maybe Marc
>>>>> can enlighten us.  If this Root Port is the target of MSIs and the
>>>>> Root Port turns them into some sort of interrupt on the CPU side, I
>>>>> can see how this might make sense.
>>>>>
>>>>> I think it's unusual for the PCI core to assign the MSI target using a
>>>>> BAR, though.  I think this means you'll have to implement your
>>>>> arch_setup_msi_irq() or .irq_compose_msi_msg() method such that it
>>>>> looks up that BAR value, since you won't know it at build-time.
>>>>
>>>> I'll hack the Altera driver to fit my purpose.
>>>>
>>>>>> The weird twist is that the BAR advertises a 64-bit memory zone,
>>>>>> but we will, in fact, map MMIO registers behind it. So all the
>>>>>> RAM Linux assigns to the area is wasted, IIUC.
>>>>>
>>>>> I'm not sure what this means.  You have this:
>>>>>
>>>>>> OF: PCI:   MEM 0x90000000..0x9fffffff -> 0x90000000
>>>>
>>>> This means I've put 256 MB of system RAM aside for PCIe devices.
>>>> This memory is no longer available for Linux "stuff".
>>>>
>>>
>>> No it doesn't. It is a physical memory *range* that is assigned to the
>>> PCI host bridge. Any memory accesses by the CPU to that window will be
>>> forwarded to the PCI bus by the host bridge. From the kernel driver's
>>> POV, this range is a given, but your host bridge h/w may involve some
>>> configuration to make the host bridge 'listen' to this range. This is
>>> h/w specific, and as Bjorn pointed out, usually configured by the
>>> firmware so that the kernel driver does not require any knowledge of
>>> those internals.
>>>
>>>>>> pci_bus 0000:00: root bus resource [mem 0x90000000-0x9fffffff]
>>>>
>>>> I suppose this is the PCI bus address. As we've discussed,
>>>> I used the identity to map bus <-> CPU addresses.
>>>>
>>>
>>> Yes, that is fine
>>>
>>>>> This [mem 0x90000000-0x9fffffff] host bridge window means there can't
>>>>> be RAM in that region.  CPU accesses to 0x90000000-0x9fffffff have to
>>>>> be claimed by the host bridge and forwarded to PCI.
>>>>>
>>>>> Linux doesn't "assign system RAM" anywhere; we just learn somehow
>>>>> where that RAM is.  Linux *does* assign BARs of PCI devices, and they
>>>>> have to be inside the host bridge window(s).
>>>>
>>>> I'm confused, I thought I had understood that part...
>>>> I thought the binding required me to specify (in the "ranges"
>>>> property) a non-prefetchable zone of system RAM, and this
>>>> memory is then "handed out" by Linux to different devices.
>>>> Or do I just need to specify some address range that's not
>>>> necessarily backed with actual RAM?
>>>>
>>>
>>> Yes. Each PCI device advertises its need of memory windows via its
>>> BARs, but the actual placement of those windows inside the host
>>> bridge's memory range is configured dynamically, usually by the
>>> firmware (on PCs) but on ARM/arm64 systems, this is done from scratch
>>> by the kernel. The *purpose* of those memory windows is device
>>> specific, but whatever is behind it lives on the PCI device. So this
>>> is *not* system RAM.
>>
>> Hello Ard,
>>
>> It appears I have misunderstood something fundamental.
>>
>> The binding for generic PCI support
>> http://lxr.free-electrons.com/source/Documentation/devicetree/bindings/pci/host-generic-pci.txt
>> requires two address-type specs
>> (please correct me if I'm wrong)
>> 1) in the "reg" prop, the address of the configuration space (CPU physical)
>> 2) in the "ranges" prop, at least a non-prefetchable area
>> http://elinux.org/Device_Tree_Usage#PCI_Address_Translation
>>
>> In my 32-bit system, there are 2GB of RAM at [0x8000_0000,0x10000_0000[
>> There are MMIO registers at [0, 16MB[ and also other stuff higher
>> Suppose there is nothing mapped at [0x7000_0000, 0x8000_0000[
>>
>> Can I provide that range to the PCI subsystem?
> 
> Well, it obviously needs to be a range that is not otherwise occupied.
> But it is SoC specific where the forwarded MEM region(s) are, and
> whether they are configurable or not.

My problem is that I don't understand bus addresses vs physical
addresses: where and when each is used, and how. Devices themselves put
bus addresses in PCIe protocol messages, I assume? When does it matter
what physical address maps to a bus address? And when and where does
this mapping take place? (In the RC HW, in the RC driver, elsewhere?)

I suppose some devices do actually need access to *real* *actual* memory
for stuff like DMA. I suppose they must use system memory for that.
Does the generic PCI(e) framework set up this memory?

> IOW, you can ask *us* all you
> want about these details, but only the H/W designer can answer this
> for you.

My biggest problem is that, in order to get useful answers, one must
ask specific questions. And my understanding of PCI is still too
limited to ask good questions.

My current understanding is that I must find a large area in the memory
map where there is NOTHING (no RAM, no registers). Then I can specify
this area in the "ranges" prop of my DT node, to be used as a
non-prefetchable memory address range.

> The DT node that describes the host bridge should simply describe
> which MMIO regions are used by the device. This is no different from
>> any other MMIO peripheral.

In my limited experience, the DT node for PCI is, by far, the most
complex node I've had to write.

>> As for the bus ranges: this also depends on the h/w, as far as I know,
>> and has a direct relation with the size of the PCI configuration space
>> (1 MB per bus for ECAM, IIRC?). On 32-bit systems, supporting that many
> buses may be costly in terms of 32-bit addressable space, given that
> the PCIe config space is typically below 4 GB. But it all depends on
> the h/w implementation.

That I know. The HW designer has confirmed reserving 256 MB of address
space for the configuration space. In hindsight, this was probably a
waste of address space. Supporting 4 buses seems amply sufficient.
Am I wrong?

I suppose wasting 256 MB of address space is not an issue on 64-bit
systems, though.

Regards.

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Using the generic host PCIe driver
  2017-03-04 13:07                                 ` Mason
@ 2017-03-04 13:49                                   ` Ard Biesheuvel
  -1 siblings, 0 replies; 54+ messages in thread
From: Ard Biesheuvel @ 2017-03-04 13:49 UTC (permalink / raw)
  To: Mason
  Cc: Bjorn Helgaas, Rob Herring, Phuong Nguyen, David Daney,
	Marc Zyngier, linux-pci, Thibaud Cornic, Will Deacon,
	Thierry Reding, Linux ARM

On 4 March 2017 at 13:07, Mason <slash.tmp@free.fr> wrote:
> On 04/03/2017 12:45, Ard Biesheuvel wrote:
>> On 4 March 2017 at 10:56, Mason <slash.tmp@free.fr> wrote:
[...]
>>> In my 32-bit system, there are 2GB of RAM at [0x8000_0000,0x10000_0000[
>>> There are MMIO registers at [0, 16MB[ and also other stuff higher
>>> Suppose there is nothing mapped at [0x7000_0000, 0x8000_0000[
>>>
>>> Can I provide that range to the PCI subsystem?
>>
>> Well, it obviously needs to be a range that is not otherwise occupied.
>> But it is SoC specific where the forwarded MEM region(s) are, and
>> whether they are configurable or not.
>
> My problem is that I don't understand bus addresses vs physical
> addresses: where and when each is used, and how. Devices themselves put
> bus addresses in PCIe protocol messages, I assume? When does it matter
> what physical address maps to a bus address? And when and where does
> this mapping take place? (In the RC HW, in the RC driver, elsewhere?)
>

This is mostly for DMA: there is no 'mapping' that takes place; it
simply means that the CPU physical address may differ from the
address used by a PCI bus master to refer to the same location.

For instance, there are arm64 SoCs that map the physical RAM way above
the 4 GB limit. In this case, it may make sense to program the PCI
host controller in such a way that it applies an offset so that at
least the first 4 GB of RAM are 32-bit addressable by PCI devices
(which may not be capable of 64-bit addressing).

The implication is that the memory address used when programming a
PCI device to perform bus-master DMA is different from the physical
address used by the host.
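
To sketch what this means for a driver author (the device register
below is made up, but the flow is just the generic DMA API):

	dma_addr_t bus_addr = dma_map_single(dev, buf, len, DMA_TO_DEVICE);

	if (dma_mapping_error(dev, bus_addr))
		return -ENOMEM;
	/* program the *bus* address into the (hypothetical) device register */
	writel(lower_32_bits(bus_addr), regs + DEV_DMA_ADDR_LO);

The DMA API applies whatever CPU-to-bus offset the platform declares,
so a driver never hands the device a raw CPU physical address.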

> I suppose some devices do actually need access to *real* *actual* memory
> for stuff like DMA. I suppose they must use system memory for that.
> Does the generic PCI(e) framework set up this memory?
>

You don't need to 'set up' this memory in the general case. Things
are different in the presence of IOMMUs, but let's disregard that
for now.

>> IOW, you can ask *us* all you
>> want about these details, but only the H/W designer can answer this
>> for you.
>
> My biggest problem is that, in order to get useful answers, one must
> ask specific questions. And my understanding of PCI is still too
> limited to ask good questions.
>
> My current understanding is that I must find a large area in the memory
> map where there is NOTHING (no RAM, no registers). Then I can specify
> this area in the "ranges" prop of my DT node, to be used as a
> non-prefetchable memory address range.
>

'Finding' a memory area suggests that you could pick a range at random
and put that in the DT. This is *not* the case.

The PCIe controller hardware needs to know that it needs to decode
that range, i.e., it needs to forward memory accesses that hit this
window. You need to figure out how this is configured on the h/w that
you are using.
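
Schematically, and with entirely made-up register names (only the
vendor documentation can give the real ones), the boot firmware or
host driver does something like:

	/* hypothetical outbound window setup in the RC's MMIO block */
	writel(0x70000000, rc_base + OB_WIN0_CPU_ADDR);	/* CPU-side base */
	writel(0x70000000, rc_base + OB_WIN0_PCI_ADDR);	/* bus-side base */
	writel(0x10000000, rc_base + OB_WIN0_SIZE);	/* 256 MB */
	writel(OB_WIN_EN | OB_TYPE_MEM, rc_base + OB_WIN0_CTRL);

Only when such a window actually decodes the range does it make sense
to advertise that range in the DT "ranges" property.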

>> The DT node that describes the host bridge should simply describe
>> which MMIO regions are used by the device. This is no different from
>> any other MMIO peripheral.
>
> In my limited experience, the DT node for PCI is, by far, the most
> complex node I've had to write.
>

Yes, but that is not the point. My point is that the information you
put in the DT should reflect *reality* in one way or the other. Every
value you put there should match the current configuration of the h/w
IP block.

>> As for the bus ranges: this also depends on the h/w, as far as I know,
>> and has a direct relation with the size of the PCI configuration space
>> (1 MB per bus for ECAM, IIRC?). On 32-bit systems, supporting that many
>> buses may be costly in terms of 32-bit addressable space, given that
>> the PCIe config space is typically below 4 GB. But it all depends on
>> the h/w implementation.
>
> That I know. The HW designer has confirmed reserving 256 MB of address
> space for the configuration space. In hindsight, this was probably a
> waste of address space. Supporting 4 buses seems amply sufficient.
> Am I wrong?
>

PCIe puts every device on its own bus, so it is good to have some headroom, IMO.

> I suppose wasting 256 MB of address space is not an issue on 64-bit
> systems, though.
>

Hardly

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Using the generic host PCIe driver
  2017-03-04 13:49                                   ` Ard Biesheuvel
@ 2017-03-04 14:33                                     ` Mason
  -1 siblings, 0 replies; 54+ messages in thread
From: Mason @ 2017-03-04 14:33 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Bjorn Helgaas, Rob Herring, Phuong Nguyen, David Daney,
	Marc Zyngier, linux-pci, Thibaud Cornic, Will Deacon,
	Thierry Reding, Linux ARM

On 04/03/2017 14:49, Ard Biesheuvel wrote:

> On 4 March 2017 at 13:07, Mason wrote:
>
>> My current understanding is that I must find a large area in the memory
>> map where there is NOTHING (no RAM, no registers). Then I can specify
>> this area in the "ranges" prop of my DT node, to be used as a
>> non-prefetchable memory address range.
> 
> 'Finding' a memory area suggests that you could pick a range at random
> and put that in the DT. This is *not* the case.
> 
> The PCIe controller hardware needs to know that it needs to decode
> that range, i.e., it needs to forward memory accesses that hit this
> window. You need to figure out how this is configured on the h/w that
> you are using.

My confusion level is at 11 :-)

I'll sleep on it, then take a fresh look at the PCIe controller
register map. I know there is a way to configure mappings in
the RC BAR0, e.g. the MSI doorbell is in MMIO space, and devices
need to write there to request an interrupt. But I thought all
the range stuff was configured at run-time by the PCI framework
itself, using standard registers.

I still need to investigate "I/O and prefetchable mem behind bridge",
as pointed out by Bjorn.

>>> The DT node that describes the host bridge should simply describe
>>> which MMIO regions are used by the device. This is no different from
>>> any other MMIO peripheral.
>>
>> In my limited experience, the DT node for PCI is, by far, the most
>> complex node I've had to write.
> 
> Yes, but that is not the point. My point is that the information you
> put in the DT should reflect *reality* in one way or the other. Every
> value you put there should match the current configuration of the h/w
> IP block.

The HW designers are never sure how SW will use the block,
so they often make everything under the sun SW-configurable.
For example, the RC BAR0 is actually split into 8 "regions"
which can map to arbitrary areas in the physical address space.

So I don't think there is an actual "current configuration of
the h/w IP block".

Regards.

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Using the generic host PCIe driver
  2017-03-03 20:04                       ` Bjorn Helgaas
@ 2017-03-06 16:12                         ` Mason
  -1 siblings, 0 replies; 54+ messages in thread
From: Mason @ 2017-03-06 16:12 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: linux-pci, Linux ARM, Ard Biesheuvel, Robin Murphy, Marc Zyngier,
	Rob Herring, Phuong Nguyen, Thibaud Cornic, David Laight

On 03/03/2017 21:04, Bjorn Helgaas wrote:

> On Fri, Mar 03, 2017 at 06:18:02PM +0100, Mason wrote:
>
>> # /usr/sbin/lspci -v
>> 00:00.0 PCI bridge: Sigma Designs, Inc. Device 0024 (rev 01) (prog-if 00 [Normal decode])
>>         Flags: bus master, fast devsel, latency 0
>>         Memory at 90000000 (64-bit, non-prefetchable) [size=16M]
>>         Bus: primary=00, secondary=01, subordinate=01, sec-latency=0
>>         I/O behind bridge: 00000000-00000fff
> 
> Something's wrong with this.  You have no I/O windows through the host
bridge, which implies that you can't generate PCI I/O transactions, so
> this I/O window should be disabled.  This might be an lspci issue;
> what does "lspci -xxx" show?
> 
>>         Memory behind bridge: 91000000-910fffff
>>         Prefetchable memory behind bridge: 00000000-000fffff
> 
> This prefetchable memory window is bogus, too.  It should probably be
> disabled.  If the bridge doesn't support a prefetchable window, the
> base and limit should be hardwired to zero.  If it supports a window
> but it's disabled, the limit should be less than the base.  For
> example, on my system I see this for a bridge with the window
> disabled:
> 
>   # setpci -s00:1c.0 PREF_MEMORY_BASE
>   fff1
>   # setpci -s00:1c.0 PREF_MEMORY_LIMIT
>   0001

MAJOR UPDATE: As pointed out by Ard, my DT was hopelessly wrong for
the non-prefetchable memory region (in the ranges prop).

In fact, my platform *multiplexes* config and MEM spaces.

In other words, there are *two* overlapping 256 MB windows at CPU
address 0x50000000. A register in MMIO space allows software to
select either config space or MEM space.

Current DT node:

		pcie@50000000 {
			compatible = "pci-host-ecam-generic";
			reg = <0x50000000 0x10000000>;
			device_type = "pci";
			#size-cells = <2>;
			#address-cells = <3>;
			#interrupt-cells = <1>;
			ranges = <0x02000000 0x0 0x0  0x50000000  0x0 0x10000000>;
		};


Ard pointed out that Linux does not support such a setup.

[    0.994011] OF: PCI: host bridge /soc/pcie@50000000 ranges:
[    0.999721] OF: PCI: Parsing ranges property...
[    1.004386] OF: PCI:   MEM 0x50000000..0x5fffffff -> 0x00000000
[    1.010471] pci-host-generic 50000000.pcie:
		can't claim ECAM area [mem 0x50000000-0x5fffffff]:
		address conflict with /soc/pcie@50000000 [mem 0x50000000-0x5fffffff]
[    1.025265] pci-host-generic: probe of 50000000.pcie failed with error -16

IIUC, there may be concurrent accesses to config space and MEM space?

I'm wondering what my options are for this controller at this point :-(
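
The only workaround I can think of is to serialize config accesses
around the mux. A sketch, with a made-up mux register; note that it
does nothing about concurrent MEM accesses from other CPUs, which is
exactly the problem raised below:

	static DEFINE_SPINLOCK(mux_lock);

	static int mux_config_read(struct pci_bus *bus, unsigned int devfn,
				   int where, int size, u32 *val)
	{
		unsigned long flags;
		int ret;

		spin_lock_irqsave(&mux_lock, flags);
		writel(1, mux_reg);	/* hypothetical: select config space */
		ret = pci_generic_config_read(bus, devfn, where, size, val);
		writel(0, mux_reg);	/* switch back to MEM space */
		spin_unlock_irqrestore(&mux_lock, flags);
		return ret;
	}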

In a separate (related) thread ("Panic in quirk_usb_early_handoff"),
David Laight wrote:

> So to do a config space access you have to use a pair of IPIs
> to stop the other cpus doing any PCIe data accesses while the
> MMIO bit makes the accesses all point to config space.
> (After taking a lock to get access to the MMIO register.)
> 
> Or has someone a better idea?
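
For reference, a heavy-handed version of David's IPI idea could be built
on stop_machine(), which parks every other CPU with interrupts disabled
while the callback runs. A sketch, reusing the hypothetical mux names
from above:

#include <linux/io.h>
#include <linux/stop_machine.h>

struct muxed_cfg_access {
	void __iomem *mux_reg;
	void __iomem *cfg_addr;
	u32 val;
};

/* Runs while all other CPUs spin with interrupts off. */
static int do_muxed_cfg_read(void *data)
{
	struct muxed_cfg_access *a = data;

	writel(MUX_SEL_CONFIG, a->mux_reg);
	a->val = readl(a->cfg_addr);
	writel(MUX_SEL_MEM, a->mux_reg);
	return 0;
}

static u32 cfg_read_stopped(void __iomem *mux_reg, void __iomem *cfg_addr)
{
	struct muxed_cfg_access a = { .mux_reg = mux_reg, .cfg_addr = cfg_addr };

	stop_machine(do_muxed_cfg_read, &a, NULL);
	return a.val;
}

This is enormously expensive per access, and it still does nothing about
bus-master DMA hitting MEM space, so it is only even conceivable because
config accesses are rare after enumeration.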

Regards.

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Using the generic host PCIe driver
  2017-03-06 16:12                         ` Mason
@ 2017-03-06 16:57                           ` Mason
  -1 siblings, 0 replies; 54+ messages in thread
From: Mason @ 2017-03-06 16:57 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: linux-pci, Linux ARM, Ard Biesheuvel, Robin Murphy, Marc Zyngier,
	Rob Herring, Phuong Nguyen, Thibaud Cornic, David Laight

On 06/03/2017 17:12, Mason wrote:

> On 03/03/2017 21:04, Bjorn Helgaas wrote:
> 
>> On Fri, Mar 03, 2017 at 06:18:02PM +0100, Mason wrote:
>>
>>> # /usr/sbin/lspci -v
>>> 00:00.0 PCI bridge: Sigma Designs, Inc. Device 0024 (rev 01) (prog-if 00 [Normal decode])
>>>         Flags: bus master, fast devsel, latency 0
>>>         Memory at 90000000 (64-bit, non-prefetchable) [size=16M]
>>>         Bus: primary=00, secondary=01, subordinate=01, sec-latency=0
>>>         I/O behind bridge: 00000000-00000fff
>>
>> Something's wrong with this.  You have no I/O windows through the host
>> bridge, which implies that you can't generate PCI I/O transactions, so
>> this I/O window should be disabled.  This might be an lspci issue;
>> what does "lspci -xxx" show?
>>
>>>         Memory behind bridge: 91000000-910fffff
>>>         Prefetchable memory behind bridge: 00000000-000fffff
>>
>> This prefetchable memory window is bogus, too.  It should probably be
>> disabled.  If the bridge doesn't support a prefetchable window, the
>> base and limit should be hardwired to zero.  If it supports a window
>> but it's disabled, the limit should be less than the base.  For
>> example, on my system I see this for a bridge with the window
>> disabled:
>>
>>   # setpci -s00:1c.0 PREF_MEMORY_BASE
>>   fff1
>>   # setpci -s00:1c.0 PREF_MEMORY_LIMIT
>>   0001
> 
> MAJOR UPDATE: As pointed out by Ard, my DT was hopelessly wrong for
> the non-prefetchable memory region (in the ranges prop).
> 
> In fact, my platform *multiplexes* config and MEM spaces.
> 
> In other words, there are *two* overlapping 256 MB windows at CPU
> address 0x50000000. A register in MMIO space allows software to
> select either config space or MEM space.

I artificially cut each window in half (to 128 MB).

		pcie@50000000 {
			compatible = "sigma,foo";
			reg = <0x50000000 0x8000000>;
			device_type = "pci";
			bus-range = <0x0 0x7f>;
			#size-cells = <2>;
			#address-cells = <3>;
			#interrupt-cells = <1>;
			ranges = <0x02000000 0x0 0x8000000  0x58000000  0x0 0x8000000>;
		};

And my config space accessors set/reset the config_space bit on entry/exit.
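
Roughly, the accessors bracket the generic ECAM helpers with the mux
flip; tango_mux_base and the bit value are placeholders for whatever the
real driver uses, but pci_generic_config_read/write and pci_ecam_map_bus
are the stock kernel helpers:

#include <linux/io.h>
#include <linux/pci.h>
#include <linux/pci-ecam.h>

static void __iomem *tango_mux_base;	/* hypothetical MMIO mux register */

static int tango_config_read(struct pci_bus *bus, unsigned int devfn,
			     int where, int size, u32 *val)
{
	int ret;

	writel(1, tango_mux_base);		/* shared window -> config */
	ret = pci_generic_config_read(bus, devfn, where, size, val);
	writel(0, tango_mux_base);		/* shared window -> MEM */
	return ret;
}

static int tango_config_write(struct pci_bus *bus, unsigned int devfn,
			      int where, int size, u32 val)
{
	int ret;

	writel(1, tango_mux_base);
	ret = pci_generic_config_write(bus, devfn, where, size, val);
	writel(0, tango_mux_base);
	return ret;
}

static struct pci_ops tango_pci_ops = {
	.map_bus	= pci_ecam_map_bus,
	.read		= tango_config_read,
	.write		= tango_config_write,
};

The PCI core serializes these through pci_lock, so two config accesses
cannot race each other -- but nothing here stops a concurrent MEM access
through the same window, per the discussion above.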

[    0.986807] OF: PCI: host bridge /soc/pcie@50000000 ranges:
[    0.992524] OF: PCI: Parsing ranges property...
[    0.997185] OF: PCI:   MEM 0x58000000..0x5fffffff -> 0x08000000
[    1.004774] pci_tango 50000000.pcie: ECAM at [mem 0x50000000-0x57ffffff] for [bus 00-7f]
[    1.013256] pci_tango 50000000.pcie: PCI host bridge to bus 0000:00
[    1.019668] pci_bus 0000:00: root bus resource [bus 00-7f]
[    1.025285] pci_bus 0000:00: root bus resource [mem 0x58000000-0x5fffffff] (bus address [0x08000000-0x0fffffff])
[    1.035613] pci_bus 0000:00: scanning bus
[    1.039766] pci 0000:00:00.0: [1105:0024] type 01 class 0x048000
[    1.045918] pci 0000:00:00.0: calling tango_pcie_fixup_class+0x0/0x10
[    1.052506] pci 0000:00:00.0: reg 0x10: [mem 0x00000000-0x00ffffff 64bit]
[    1.059452] pci 0000:00:00.0: calling pci_fixup_ide_bases+0x0/0x40
[    1.065800] pci 0000:00:00.0: supports D1 D2
[    1.070188] pci 0000:00:00.0: PME# supported from D0 D1 D2 D3hot
[    1.076322] pci 0000:00:00.0: PME# disabled
[    1.080834] pci_bus 0000:00: fixups for bus
[    1.085142] PCI: bus0: Fast back to back transfers disabled
[    1.090843] pci 0000:00:00.0: scanning [bus 00-00] behind bridge, pass 0
[    1.097676] pci 0000:00:00.0: bridge configuration invalid ([bus 00-00]), reconfiguring
[    1.105822] pci 0000:00:00.0: scanning [bus 00-00] behind bridge, pass 1
[    1.112772] pci_bus 0000:01: busn_res: can not insert [bus 01-ff] under [bus 00-7f] (conflicts with (null) [bus 00-7f])

	I don't understand the above warning. (Presumably the rescan wants
	to grow the child bus range to [bus 01-ff], but bus-range = <0x0 0x7f>
	caps the root resource at 7f, so the insertion is rejected; the range
	is trimmed back to [bus 01] once the scan completes, per the
	"end is updated to 01" line below.)

[    1.123718] pci_bus 0000:01: scanning bus
[    1.127887] pci 0000:01:00.0: [1912:0014] type 00 class 0x0c0330
[    1.134066] pci 0000:01:00.0: reg 0x10: [mem 0x00000000-0x00001fff 64bit]
[    1.141071] pci 0000:01:00.0: calling pci_fixup_ide_bases+0x0/0x40
[    1.147496] pci 0000:01:00.0: PME# supported from D0 D3hot D3cold
[    1.153722] pci 0000:01:00.0: PME# disabled
[    1.158335] pci_bus 0000:01: fixups for bus
[    1.162643] PCI: bus1: Fast back to back transfers disabled
[    1.168341] pci_bus 0000:01: bus scan returning with max=01
[    1.174039] pci_bus 0000:01: busn_res: [bus 01-ff] end is updated to 01
[    1.180786] pci_bus 0000:00: bus scan returning with max=01
[    1.186484] pci 0000:00:00.0: fixup irq: got 0
[    1.191045] pci 0000:00:00.0: assigning IRQ 00
[    1.195631] pci 0000:01:00.0: fixup irq: got 20
[    1.200279] pci 0000:01:00.0: assigning IRQ 20
[    1.204868] pci 0000:00:00.0: BAR 0: assigned [mem 0x58000000-0x58ffffff 64bit]
[    1.212321] pci 0000:00:00.0: BAR 8: assigned [mem 0x59000000-0x590fffff]
[    1.219245] pci 0000:01:00.0: BAR 0: assigned [mem 0x59000000-0x59001fff 64bit]
[    1.226702] pci 0000:00:00.0: PCI bridge to [bus 01]
[    1.231789] pci 0000:00:00.0:   bridge window [mem 0x59000000-0x590fffff]
[    1.238758] pcieport 0000:00:00.0: enabling device (0140 -> 0142)
[    1.244989] pcieport 0000:00:00.0: enabling bus mastering
[    1.250672] pci 0000:01:00.0: calling quirk_usb_early_handoff+0x0/0x7e0
[    1.257430] pci 0000:01:00.0: enabling device (0140 -> 0142)
[    1.263226] quirk_usb_handoff_xhci: ioremap(0x59000000, 8192)
[    1.269109] xhci_find_next_ext_cap: offset=0x500
[    1.273844] val = 0x1000401

This looks like a non-random value for XHCI_HCC_EXT_CAPS, but I'll have
to check the code and the standard tomorrow.
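
For what it's worth, 0x1000401 does decode sensibly under the xHCI
extended-capability layout (bits 7:0 = capability ID, bits 15:8 = next
pointer in dwords, per section 7 of the xHCI spec). A quick standalone
sanity check, not a statement about this controller:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint32_t val = 0x1000401;
	unsigned id = val & 0xff;		/* capability ID */
	unsigned next = ((val >> 8) & 0xff) * 4;	/* next ptr, in bytes */

	printf("cap ID = 0x%02x\n", id);	/* 0x01 = USB Legacy Support */
	printf("next   = %u bytes\n", next);	/* 0x04 dwords = 16 bytes */
	return 0;
}

Capability ID 0x01 is USB Legacy Support, which is what
quirk_usb_handoff_xhci goes looking for, so the value is at least
self-consistent.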

# /usr/sbin/lspci -v
00:00.0 PCI bridge: Sigma Designs, Inc. Device 0024 (rev 01) (prog-if 00 [Normal decode])
        Flags: bus master, fast devsel, latency 0
        Memory at 58000000 (64-bit, non-prefetchable) [size=16M]
        Bus: primary=00, secondary=01, subordinate=01, sec-latency=0
        I/O behind bridge: 00000000-00000fff
        Memory behind bridge: 09000000-090fffff
        Prefetchable memory behind bridge: 00000000-000fffff
        Capabilities: [50] MSI: Enable- Count=1/4 Maskable- 64bit+
        Capabilities: [78] Power Management version 3
        Capabilities: [80] Express Root Port (Slot-), MSI 03
        Capabilities: [100] Virtual Channel
        Capabilities: [800] Advanced Error Reporting
        Kernel driver in use: pcieport

01:00.0 USB controller: Renesas Technology Corp. uPD720201 USB 3.0 Host Controller (rev 03) (prog-if 30 [XHCI])
        Flags: fast devsel, IRQ 20
        Memory at 59000000 (64-bit, non-prefetchable) [size=8K]
        Capabilities: [50] Power Management version 3
        Capabilities: [70] MSI: Enable- Count=1/8 Maskable- 64bit+
        Capabilities: [90] MSI-X: Enable- Count=8 Masked-
        Capabilities: [a0] Express Endpoint, MSI 00
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [150] Latency Tolerance Reporting

Hmmm, I still get the I/O and prefetchable mem behind bridge lines...
(I thought they'd disappear once I fixed the mem space bug.)

# /usr/sbin/lspci -xxx
00:00.0 PCI bridge: Sigma Designs, Inc. Device 0024 (rev 01)
00: 05 11 24 00 46 01 10 00 01 00 80 04 10 00 01 00
10: 04 00 00 08 00 00 00 00 00 01 01 00 00 00 00 00
20: 00 09 00 09 00 00 00 00 00 00 00 00 00 00 00 00
30: 00 00 00 00 50 00 00 00 00 00 00 00 00 00 01 00
40: 00 00 00 00 60 61 15 02 00 00 00 00 00 00 00 00
50: 05 78 84 00 00 00 00 00 00 00 00 00 00 00 00 00
60: 00 00 00 00 00 00 00 00 11 78 00 00 00 00 00 00
70: 00 00 00 00 00 00 00 00 01 80 03 7e 08 60 00 64
80: 10 00 42 06 01 80 00 00 10 28 20 00 12 5c 21 01
90: 08 00 12 00 00 00 00 00 00 00 00 00 00 00 00 00
a0: 00 00 00 00 02 00 00 00 00 00 00 00 00 00 00 00
b0: 02 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
c0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
d0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
e0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

01:00.0 USB controller: Renesas Technology Corp. uPD720201 USB 3.0 Host Controller (rev 03)
00: 12 19 14 00 42 01 10 00 03 30 03 0c 10 00 00 00
10: 04 00 00 09 00 00 00 00 00 00 00 00 00 00 00 00
20: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
30: 00 00 00 00 50 00 00 00 00 00 00 00 14 01 00 00
40: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
50: 01 70 c3 c9 08 00 00 00 00 00 00 00 00 00 00 00
60: 30 20 00 00 00 00 00 00 00 00 00 00 09 18 20 00
70: 05 90 86 00 00 00 00 00 00 00 00 00 00 00 00 00
80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
90: 11 a0 07 00 00 10 00 00 80 10 00 00 00 00 00 00
a0: 10 00 02 00 c0 8f 00 00 10 28 10 00 12 ec 07 00
b0: 00 00 12 10 00 00 00 00 00 00 00 00 00 00 00 00
c0: 00 00 00 00 10 08 00 00 00 00 00 00 00 00 00 00
d0: 02 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
e0: 00 00 00 00 00 00 55 00 00 00 01 05 10 22 c2 00
f0: 00 05 00 00 00 00 00 80 00 00 00 00 00 00 00 00

Regards.

^ permalink raw reply	[flat|nested] 54+ messages in thread

end of thread, other threads:[~2017-03-06 16:57 UTC | newest]

Thread overview: 54+ messages
2017-02-27 16:14 Using the generic host PCIe driver Mason
2017-02-27 16:44 ` Bjorn Helgaas
2017-02-27 17:02   ` Mason
2017-02-27 18:35     ` Bjorn Helgaas
2017-03-01 15:18       ` Mason
2017-03-01 16:18         ` Bjorn Helgaas
2017-03-01 16:36           ` Marc Zyngier
2017-03-03 11:26             ` Mason
2017-03-03 16:41               ` Marc Zyngier
2017-03-03 16:53                 ` Mason
2017-03-03 17:08                   ` Marc Zyngier
2017-03-01 18:05           ` Mason
2017-03-01 21:57             ` Bjorn Helgaas
2017-03-03 12:44               ` Mason
2017-03-03 15:46                 ` Bjorn Helgaas
2017-03-03 17:18                   ` Mason
2017-03-03 20:04                     ` Bjorn Helgaas
2017-03-03 23:23                       ` Mason
2017-03-04  9:35                         ` Ard Biesheuvel
2017-03-04 10:56                           ` Mason
2017-03-04 11:45                             ` Ard Biesheuvel
2017-03-04 13:07                               ` Mason
2017-03-04 13:49                                 ` Ard Biesheuvel
2017-03-04 14:33                                   ` Mason
2017-03-04 10:50                       ` Marc Zyngier
2017-03-06 16:12                       ` Mason
2017-03-06 16:57                         ` Mason
