linux-kernel.vger.kernel.org archive mirror
* Purpose of pci_remap_iospace
@ 2016-07-12  6:57 Bharat Kumar Gogada
  2016-07-12  8:31 ` Arnd Bergmann
  0 siblings, 1 reply; 22+ messages in thread
From: Bharat Kumar Gogada @ 2016-07-12  6:57 UTC (permalink / raw)
  To: linux-pci, linux-kernel
  Cc: Arnd Bergmann, Bjorn Helgaas, Liviu.Dudau, nofooter, thomas.petazzoni

Hi,

I have a query.

Can anyone explain the purpose of the pci_remap_iospace() function in a root port driver?

What is its dependency on the architecture?

Here is my understanding: the above API takes the PCIe I/O resource and the CPU address it is to be
mapped at from the ranges property, and remaps it into virtual address space.

So my question is: who uses these virtual addresses?

When an End Point requests I/O BARs, doesn't it get them from the above resource range (the first
parameter of the API) and do an ioremap() to access this region?

But why is the root complex driver mapping this address region?

Please correct me if my understanding is wrong.

Regards,
Bharat






* Re: Purpose of pci_remap_iospace
  2016-07-12  6:57 Purpose of pci_remap_iospace Bharat Kumar Gogada
@ 2016-07-12  8:31 ` Arnd Bergmann
  2016-07-12  8:40   ` Bharat Kumar Gogada
  2016-07-13  8:11   ` Bharat Kumar Gogada
  0 siblings, 2 replies; 22+ messages in thread
From: Arnd Bergmann @ 2016-07-12  8:31 UTC (permalink / raw)
  To: Bharat Kumar Gogada
  Cc: linux-pci, linux-kernel, Bjorn Helgaas, Liviu.Dudau, nofooter,
	thomas.petazzoni

On Tuesday, July 12, 2016 6:57:10 AM CEST Bharat Kumar Gogada wrote:
> Hi,
> 
> I have a query.
> 
> Can any once explain the purpose of pci_remap_iospace function in root port driver.
> 
> What is its dependency with architecture ?
> 
> Here is my understanding, the above API takes PCIe IO resource and its to be mapped CPU address from
> ranges property and remaps into virtual address space.
> 
> So my question is who uses this virtual addresses ?

The inb()/outb() functions declared in asm/io.h

> When End Point requests for IO BARs doesn't it get
> from the above resource range (first parameter of API) and
> do ioremap to access this region ?

Device drivers generally do not ioremap() the I/O BARs but they
use inb()/outb() directly. They can also call pci_iomap() and
do ioread8()/iowrite8() on the pointer returned from that function,
but generally the call to pci_iomap() then returns a pointer into
the virtual address that is already mapped.
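
For illustration, a minimal sketch of both styles from the endpoint driver's side
(the BAR number and register offset here are made up, error handling trimmed):

	/* legacy style: port accessors on the port number from the I/O BAR */
	unsigned long port = pci_resource_start(pdev, 4);
	u8 val = inb(port + 0x10);
	outb(0xff, port + 0x10);

	/* pci_iomap() style: same accesses through the returned cookie */
	void __iomem *regs = pci_iomap(pdev, 4, 0);
	val = ioread8(regs + 0x10);
	iowrite8(0xff, regs + 0x10);
	pci_iounmap(pdev, regs);

On a host where I/O space is memory mapped, both styles end up as plain loads/stores
into the virtual window that pci_remap_iospace() established.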
 
> But why root complex driver is mapping this address region ?

The PCI core does not know that the I/O space is memory mapped.
On x86 and a few others, I/O space is not memory mapped but requires
the use of special CPU instructions.
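
To make the connection explicit: on arm64 (and other asm-generic users) the port
accessors are just fixed offsets into a reserved virtual area, roughly as in
include/asm-generic/io.h:

	/* PCI_IOBASE is a fixed virtual address range reserved for port I/O */
	static inline u8 inb(unsigned long addr)
	{
		return readb(PCI_IOBASE + addr);
	}

pci_remap_iospace() is what creates the mapping from PCI_IOBASE + port to the host
bridge's I/O window in CPU physical address space; without it, inb()/outb() on that
port range would fault.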

	Arnd


* RE: Purpose of pci_remap_iospace
  2016-07-12  8:31 ` Arnd Bergmann
@ 2016-07-12  8:40   ` Bharat Kumar Gogada
  2016-07-13  8:11   ` Bharat Kumar Gogada
  1 sibling, 0 replies; 22+ messages in thread
From: Bharat Kumar Gogada @ 2016-07-12  8:40 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: linux-pci, linux-kernel, Bjorn Helgaas, Liviu.Dudau, nofooter,
	thomas.petazzoni

> Subject: Re: Purpose of pci_remap_iospace
>
> On Tuesday, July 12, 2016 6:57:10 AM CEST Bharat Kumar Gogada wrote:
> > Hi,
> >
> > I have a query.
> >
> > Can any once explain the purpose of pci_remap_iospace function in root
> port driver.
> >
> > What is its dependency with architecture ?
> >
> > Here is my understanding, the above API takes PCIe IO resource and its
> > to be mapped CPU address from ranges property and remaps into virtual
> address space.
> >
> > So my question is who uses this virtual addresses ?
>
> The inb()/outb() functions declared in asm/io.h
>
> > When End Point requests for IO BARs doesn't it get from the above
> > resource range (first parameter of API) and do ioremap to access this
> > region ?
>
> Device drivers generally do not ioremap() the I/O BARs but they use
> inb()/outb() directly. They can also call pci_iomap() and do
> ioread8()/iowrite8() on the pointer returned from that function, but
> generally the call to pci_iomap() then returns a pointer into the virtual
> address that is already mapped.
>
> > But why root complex driver is mapping this address region ?
>
> The PCI core does not know that the I/O space is memory mapped.
> On x86 and a few others, I/O space is not memory mapped but requires the
> use of special CPU instructions.
>
Thanks, Arnd.




* RE: Purpose of pci_remap_iospace
  2016-07-12  8:31 ` Arnd Bergmann
  2016-07-12  8:40   ` Bharat Kumar Gogada
@ 2016-07-13  8:11   ` Bharat Kumar Gogada
  2016-07-13  8:30     ` Arnd Bergmann
  2016-07-13 13:24     ` Liviu.Dudau
  1 sibling, 2 replies; 22+ messages in thread
From: Bharat Kumar Gogada @ 2016-07-13  8:11 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: linux-pci, linux-kernel, Bjorn Helgaas, Liviu.Dudau, nofooter,
	thomas.petazzoni

> Subject: Re: Purpose of pci_remap_iospace
>
> On Tuesday, July 12, 2016 6:57:10 AM CEST Bharat Kumar Gogada wrote:
> > Hi,
> >
> > I have a query.
> >
> > Can any once explain the purpose of pci_remap_iospace function in root
> port driver.
> >
> > What is its dependency with architecture ?
> >
> > Here is my understanding, the above API takes PCIe IO resource and its
> > to be mapped CPU address from ranges property and remaps into virtual
> address space.
> >
> > So my question is who uses this virtual addresses ?
>
> The inb()/outb() functions declared in asm/io.h
>
> > When End Point requests for IO BARs doesn't it get from the above
> > resource range (first parameter of API) and do ioremap to access this
> > region ?
>
> Device drivers generally do not ioremap() the I/O BARs but they use
> inb()/outb() directly. They can also call pci_iomap() and do
> ioread8()/iowrite8() on the pointer returned from that function, but
> generally the call to pci_iomap() then returns a pointer into the virtual
> address that is already mapped.
>
> > But why root complex driver is mapping this address region ?
>
> The PCI core does not know that the I/O space is memory mapped.
> On x86 and a few others, I/O space is not memory mapped but requires the
> use of special CPU instructions.
>
Thanks, Arnd.

I'm facing an issue while testing I/O BARs on our SoC.

I added the following ranges to our device tree:
ranges = <0x01000000 0x00000000 0x00000000 0x00000000 0xe0000000 0 0x00100000   //io
             0x02000000 0x00000000 0xe0100000 0x00000000 0xe0100000 0 0x0ef00000>;   //non-prefetchable memory

And I'm using the above API to map the resource and the CPU physical address in my driver.

Kernel Boot log:
[    2.345294] nwl-pcie fd0e0000.pcie: Link is UP
[    2.345339] PCI host bridge /amba/pcie@fd0e0000 ranges:
[    2.345356]   No bus range found for /amba/pcie@fd0e0000, using [bus 00-ff]
[    2.345382]    IO 0xe0000000..0xe00fffff -> 0x00000000
[    2.345401]   MEM 0xe0100000..0xeeffffff -> 0xe0100000
[    2.345498] nwl-pcie fd0e0000.pcie: PCI host bridge to bus 0000:00
[    2.345517] pci_bus 0000:00: root bus resource [bus 00-ff]
[    2.345533] pci_bus 0000:00: root bus resource [io  0x0000-0xfffff]
[    2.345550] pci_bus 0000:00: root bus resource [mem 0xe0100000-0xeeffffff]
[    2.345770] pci 0000:00:00.0: cannot attach to SMMU, is it on the same bus?
[    2.345786] iommu: Adding device 0000:00:00.0 to group 1
[    2.346142] pci 0000:01:00.0: cannot attach to SMMU, is it on the same bus?
[    2.346158] iommu: Adding device 0000:01:00.0 to group 1
[    2.346213] pci 0000:00:00.0: BAR 8: assigned [mem 0xe0100000-0xe02fffff]
[    2.346234] pci 0000:01:00.0: BAR 0: assigned [mem 0xe0100000-0xe01fffff 64bit]
[    2.346268] pci 0000:01:00.0: BAR 2: assigned [mem 0xe0200000-0xe02fffff 64bit]
[    2.346300] pci 0000:01:00.0: BAR 4: no space for [io  size 0x0040]
[    2.346316] pci 0000:01:00.0: BAR 4: failed to assign [io  size 0x0040]
[    2.346333] pci 0000:00:00.0: PCI bridge to [bus 01-0c]
[    2.346350] pci 0000:00:00.0:   bridge window [mem 0xe0100000-0xe02fffff]

IO assignment fails.

On End Point:
01:00.0 Memory controller: Xilinx Corporation Device a024
        Subsystem: Xilinx Corporation Device 0007
        Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Interrupt: pin A routed to IRQ 224
        Region 0: Memory at e0100000 (64-bit, non-prefetchable) [disabled] [size=1M]
        Region 2: Memory at e0200000 (64-bit, non-prefetchable) [disabled] [size=1M]
        Region 4: I/O ports at <unassigned> [disabled]

When I tested the same End Point on an x86 machine an I/O address is assigned, but it is a port-mapped I/O address.

So my doubt is: why are the memory-mapped I/O addresses not assigned to the EP on the SoC?

Do we need to have port-mapped addresses on the SoC as well for PCI I/O BARs?

Please let me know if I'm doing something wrong or missing something.

Thanks & Regards,
Bharat





* Re: Purpose of pci_remap_iospace
  2016-07-13  8:11   ` Bharat Kumar Gogada
@ 2016-07-13  8:30     ` Arnd Bergmann
  2016-07-13 12:30       ` Bharat Kumar Gogada
  2016-07-13 13:24     ` Liviu.Dudau
  1 sibling, 1 reply; 22+ messages in thread
From: Arnd Bergmann @ 2016-07-13  8:30 UTC (permalink / raw)
  To: Bharat Kumar Gogada
  Cc: linux-pci, linux-kernel, Bjorn Helgaas, Liviu.Dudau, nofooter,
	thomas.petazzoni

On Wednesday, July 13, 2016 8:11:56 AM CEST Bharat Kumar Gogada wrote:
> > Subject: Re: Purpose of pci_remap_iospace
> >
> > On Tuesday, July 12, 2016 6:57:10 AM CEST Bharat Kumar Gogada wrote:
> > > Hi,
> > >
> > > I have a query.
> > >
> > > Can any once explain the purpose of pci_remap_iospace function in root
> > port driver.
> > >
> > > What is its dependency with architecture ?
> > >
> > > Here is my understanding, the above API takes PCIe IO resource and its
> > > to be mapped CPU address from ranges property and remaps into virtual
> > address space.
> > >
> > > So my question is who uses this virtual addresses ?
> >
> > The inb()/outb() functions declared in asm/io.h
> >
> > > When End Point requests for IO BARs doesn't it get from the above
> > > resource range (first parameter of API) and do ioremap to access this
> > > region ?
> >
> > Device drivers generally do not ioremap() the I/O BARs but they use
> > inb()/outb() directly. They can also call pci_iomap() and do
> > ioread8()/iowrite8() on the pointer returned from that function, but
> > generally the call to pci_iomap() then returns a pointer into the virtual
> > address that is already mapped.
> >
> > > But why root complex driver is mapping this address region ?
> >
> > The PCI core does not know that the I/O space is memory mapped.
> > On x86 and a few others, I/O space is not memory mapped but requires the
> > use of special CPU instructions.
> >
> Thanks Arnd.
> 
> I'm facing issue in testing IO bars on our SoC.
> 
> I added following ranges in our device tree :
> ranges = <0x01000000 0x00000000 0x00000000 0x00000000 0xe0000000 0 0x00100000   //io
>              0x02000000 0x00000000 0xe0100000 0x00000000 0xe0100000 0 0x0ef00000>;   //non prefetchabe memory
> 
> And I'm using above API to map the res and cpu physical address in my driver.

I notice you have 1MB of I/O space here

> Kernel Boot log:
> [    2.345294] nwl-pcie fd0e0000.pcie: Link is UP
> [    2.345339] PCI host bridge /amba/pcie@fd0e0000 ranges:
> [    2.345356]   No bus range found for /amba/pcie@fd0e0000, using [bus 00-ff]
> [    2.345382]    IO 0xe0000000..0xe00fffff -> 0x00000000
> [    2.345401]   MEM 0xe0100000..0xeeffffff -> 0xe0100000
> [    2.345498] nwl-pcie fd0e0000.pcie: PCI host bridge to bus 0000:00
> [    2.345517] pci_bus 0000:00: root bus resource [bus 00-ff]
> [    2.345533] pci_bus 0000:00: root bus resource [io  0x0000-0xfffff]

and all of it gets mapped by the PCI core. Usually you only have 64K of I/O
space per host bridge, and the PCI core should perhaps not try to map
all of it, though I don't think this is actually your problem here.

> [    2.345550] pci_bus 0000:00: root bus resource [mem 0xe0100000-0xeeffffff]
> [    2.345770] pci 0000:00:00.0: cannot attach to SMMU, is it on the same bus?
> [    2.345786] iommu: Adding device 0000:00:00.0 to group 1
> [    2.346142] pci 0000:01:00.0: cannot attach to SMMU, is it on the same bus?
> [    2.346158] iommu: Adding device 0000:01:00.0 to group 1
> [    2.346213] pci 0000:00:00.0: BAR 8: assigned [mem 0xe0100000-0xe02fffff]
> [    2.346234] pci 0000:01:00.0: BAR 0: assigned [mem 0xe0100000-0xe01fffff 64bit]
> [    2.346268] pci 0000:01:00.0: BAR 2: assigned [mem 0xe0200000-0xe02fffff 64bit]
> [    2.346300] pci 0000:01:00.0: BAR 4: no space for [io  size 0x0040]
> [    2.346316] pci 0000:01:00.0: BAR 4: failed to assign [io  size 0x0040]
> [    2.346333] pci 0000:00:00.0: PCI bridge to [bus 01-0c]
> [    2.346350] pci 0000:00:00.0:   bridge window [mem 0xe0100000-0xe02fffff]
> 
> IO assignment fails.

I would guess that the I/O space is not registered correctly. Is this
drivers/pci/host/pcie-xilinx.c ? We have had problems with this in the
past, since almost nobody uses I/O space and it requires several
steps to all be done correctly.

The line "  IO 0xe0000000..0xe00fffff -> 0x00000000" from your log actually
comes from the driver parsing the DT, and that seems to be correct.

Can you add a printk to pci_add_resource_offset() to show which resources
actually get added and what the offset is? Also, please show the contents
of /proc/ioports and /proc/iomem.
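
For example, a one-line debugging sketch near the top of pci_add_resource_offset()
in drivers/pci/bus.c (the exact message format is up to you):

	pr_info("pci_add_resource_offset: res %pR offset %#llx\n",
		res, (unsigned long long)offset);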

	Arnd


* RE: Purpose of pci_remap_iospace
  2016-07-13  8:30     ` Arnd Bergmann
@ 2016-07-13 12:30       ` Bharat Kumar Gogada
  2016-07-13 13:28         ` Arnd Bergmann
  2016-07-13 13:46         ` Lorenzo Pieralisi
  0 siblings, 2 replies; 22+ messages in thread
From: Bharat Kumar Gogada @ 2016-07-13 12:30 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: linux-pci, linux-kernel, Bjorn Helgaas, Liviu.Dudau, nofooter,
	thomas.petazzoni

 > On Wednesday, July 13, 2016 8:11:56 AM CEST Bharat Kumar Gogada wrote:
> > > Subject: Re: Purpose of pci_remap_iospace
> > >
> > > On Tuesday, July 12, 2016 6:57:10 AM CEST Bharat Kumar Gogada wrote:
> > > > Hi,
> > > >
> > > > I have a query.
> > > >
> > > > Can any once explain the purpose of pci_remap_iospace function in
> root
> > > port driver.
> > > >
> > > > What is its dependency with architecture ?
> > > >
> > > > Here is my understanding, the above API takes PCIe IO resource and its
> > > > to be mapped CPU address from ranges property and remaps into
> virtual
> > > address space.
> > > >
> > > > So my question is who uses this virtual addresses ?
> > >
> > > The inb()/outb() functions declared in asm/io.h
> > >
> > > > When End Point requests for IO BARs doesn't it get from the above
> > > > resource range (first parameter of API) and do ioremap to access this
> > > > region ?
> > >
> > > Device drivers generally do not ioremap() the I/O BARs but they use
> > > inb()/outb() directly. They can also call pci_iomap() and do
> > > ioread8()/iowrite8() on the pointer returned from that function, but
> > > generally the call to pci_iomap() then returns a pointer into the virtual
> > > address that is already mapped.
> > >
> > > > But why root complex driver is mapping this address region ?
> > >
> > > The PCI core does not know that the I/O space is memory mapped.
> > > On x86 and a few others, I/O space is not memory mapped but requires
> the
> > > use of special CPU instructions.
> > >
> > Thanks Arnd.
> >
> > I'm facing issue in testing IO bars on our SoC.
> >
> > I added following ranges in our device tree :
> > ranges = <0x01000000 0x00000000 0x00000000 0x00000000 0xe0000000 0
> 0x00100000   //io
> >              0x02000000 0x00000000 0xe0100000 0x00000000 0xe0100000 0
> 0x0ef00000>;   //non prefetchabe memory
> >
> > And I'm using above API to map the res and cpu physical address in my
> driver.
>
> I notice you have 1MB of I/O space here
>
> > Kernel Boot log:
> > [    2.345294] nwl-pcie fd0e0000.pcie: Link is UP
> > [    2.345339] PCI host bridge /amba/pcie@fd0e0000 ranges:
> > [    2.345356]   No bus range found for /amba/pcie@fd0e0000, using [bus
> 00-ff]
> > [    2.345382]    IO 0xe0000000..0xe00fffff -> 0x00000000
> > [    2.345401]   MEM 0xe0100000..0xeeffffff -> 0xe0100000
> > [    2.345498] nwl-pcie fd0e0000.pcie: PCI host bridge to bus 0000:00
> > [    2.345517] pci_bus 0000:00: root bus resource [bus 00-ff]
> > [    2.345533] pci_bus 0000:00: root bus resource [io  0x0000-0xfffff]
>
> and all of it gets mapped by the PCI core. Usually you only have 64K of I/O
> space per host bridge, and the PCI core should perhaps not try to map
> all of it, though I don't think this is actually your problem here.
>
> > [    2.345550] pci_bus 0000:00: root bus resource [mem 0xe0100000-
> 0xeeffffff]
> > [    2.345770] pci 0000:00:00.0: cannot attach to SMMU, is it on the same
> bus?
> > [    2.345786] iommu: Adding device 0000:00:00.0 to group 1
> > [    2.346142] pci 0000:01:00.0: cannot attach to SMMU, is it on the same
> bus?
> > [    2.346158] iommu: Adding device 0000:01:00.0 to group 1
> > [    2.346213] pci 0000:00:00.0: BAR 8: assigned [mem 0xe0100000-
> 0xe02fffff]
> > [    2.346234] pci 0000:01:00.0: BAR 0: assigned [mem 0xe0100000-0xe01fffff
> 64bit]
> > [    2.346268] pci 0000:01:00.0: BAR 2: assigned [mem 0xe0200000-0xe02fffff
> 64bit]
> > [    2.346300] pci 0000:01:00.0: BAR 4: no space for [io  size 0x0040]
> > [    2.346316] pci 0000:01:00.0: BAR 4: failed to assign [io  size 0x0040]
> > [    2.346333] pci 0000:00:00.0: PCI bridge to [bus 01-0c]
> > [    2.346350] pci 0000:00:00.0:   bridge window [mem 0xe0100000-
> 0xe02fffff]
> >
> > IO assignment fails.
>
> I would guess that the I/O space is not registered correctly. Is this
> drivers/pci/host/pcie-xilinx.c ? We have had problems with this in the
> past, since almost nobody uses I/O space and it requires several
> steps to all be done correctly.
>
Thanks, Arnd.

We are testing using drivers/pci/host/pcie-xilinx-nwl.c.

Here is the code I added to the driver's probe:
..
        /* parse the ranges property into a resource list */
        err = of_pci_get_host_bridge_resources(node, 0, 0xff, &res, &iobase);
        if (err) {
                pr_err("Getting bridge resources failed\n");
                return err;
        }

        /* remap only the I/O window into virtual address space */
        resource_list_for_each_entry(window, &res) {
                struct resource *res = window->res;
                u64 restype = resource_type(res);

                switch (restype) {
                case IORESOURCE_IO:
                        err = pci_remap_iospace(res, iobase);
                        if (err)
                                pr_info("FAILED TO IOREMAP RESOURCE\n");
                        break;
                default:
                        /* bus and memory windows also land here, hence the
                         * "invalid resource" lines in the log below */
                        dev_err(pcie->dev, "invalid resource %pR\n", res);
                }
        }

Other than the above code I haven't made any other change to the driver.

Here is the boot log with the printk added:
[    2.308680] nwl-pcie fd0e0000.pcie: Link is UP
[    2.308724] PCI host bridge /amba/pcie@fd0e0000 ranges:
[    2.308741]   No bus range found for /amba/pcie@fd0e0000, using [bus 00-ff]
[    2.308755] in pci_add_resource_offset res->start 0   offset 0
[    2.308774]    IO 0xe0000000..0xe00fffff -> 0x00000000
[    2.308795] in pci_add_resource_offset res->start 0   offset 0
[    2.308805]   MEM 0xe0100000..0xeeffffff -> 0xe0100000
[    2.308824] in pci_add_resource_offset res->start e0100000    offset 0
[    2.308834] nwl-pcie fd0e0000.pcie: invalid resource [bus 00-ff]
[    2.308870] nwl-pcie fd0e0000.pcie: invalid resource [mem 0xe0100000-0xeeffffff]
[    2.308979] nwl-pcie fd0e0000.pcie: PCI host bridge to bus 0000:00
[    2.308998] pci_bus 0000:00: root bus resource [bus 00-ff]
[    2.309014] pci_bus 0000:00: root bus resource [io  0x0000-0xfffff]
[    2.309030] pci_bus 0000:00: root bus resource [mem 0xe0100000-0xeeffffff]
[    2.309253] pci 0000:00:00.0: cannot attach to SMMU, is it on the same bus?
[    2.309269] iommu: Adding device 0000:00:00.0 to group 1
[    2.309625] pci 0000:01:00.0: cannot attach to SMMU, is it on the same bus?
[    2.309641] iommu: Adding device 0000:01:00.0 to group 1
[    2.309697] pci 0000:00:00.0: BAR 8: assigned [mem 0xe0100000-0xe02fffff]
[    2.309718] pci 0000:01:00.0: BAR 0: assigned [mem 0xe0100000-0xe01fffff 64bit]
[    2.309752] pci 0000:01:00.0: BAR 2: assigned [mem 0xe0200000-0xe02fffff 64bit]
[    2.309784] pci 0000:01:00.0: BAR 4: no space for [io  size 0x0040]
[    2.309800] pci 0000:01:00.0: BAR 4: failed to assign [io  size 0x0040]
[    2.309816] pci 0000:00:00.0: PCI bridge to [bus 01-0c]
[    2.309833] pci 0000:00:00.0:   bridge window [mem 0xe0100000-0xe02fffff]

Here is the output of ioports and iomem:

root@:~# cat /proc/iomem
00000000-7fffffff : System RAM
  00080000-00a76fff : Kernel code
  01c72000-01d4bfff : Kernel data
fd0c0000-fd0c1fff : /amba/ahci@fd0c0000
fd0e0000-fd0e0fff : breg
fd480000-fd480fff : pcireg
ff000000-ff000fff : xuartps
ff010000-ff010fff : xuartps
ff020000-ff020fff : /amba/i2c@ff020000
ff030000-ff030fff : /amba/i2c@ff030000
ff070000-ff070fff : /amba/can@ff070000
ff0a0000-ff0a0fff : /amba/gpio@ff0a0000
ff0f0000-ff0f0fff : /amba/spi@ff0f0000
ff170000-ff170fff : mmc0
ffa60000-ffa600ff : /amba/rtc@ffa60000
8000000000-8000ffffff : cfg
root@:~# cat /proc/ioports
root@:~#

/proc/ioports is empty.

Thanks & Regards,
Bharat




* Re: Purpose of pci_remap_iospace
  2016-07-13  8:11   ` Bharat Kumar Gogada
  2016-07-13  8:30     ` Arnd Bergmann
@ 2016-07-13 13:24     ` Liviu.Dudau
  1 sibling, 0 replies; 22+ messages in thread
From: Liviu.Dudau @ 2016-07-13 13:24 UTC (permalink / raw)
  To: Bharat Kumar Gogada
  Cc: Arnd Bergmann, linux-pci, linux-kernel, Bjorn Helgaas, nofooter,
	thomas.petazzoni

On Wed, Jul 13, 2016 at 08:11:56AM +0000, Bharat Kumar Gogada wrote:
> > Subject: Re: Purpose of pci_remap_iospace
> >
> > On Tuesday, July 12, 2016 6:57:10 AM CEST Bharat Kumar Gogada wrote:
> > > Hi,
> > >
> > > I have a query.
> > >
> > > Can any once explain the purpose of pci_remap_iospace function in root
> > port driver.
> > >
> > > What is its dependency with architecture ?
> > >
> > > Here is my understanding, the above API takes PCIe IO resource and its
> > > to be mapped CPU address from ranges property and remaps into virtual
> > address space.
> > >
> > > So my question is who uses this virtual addresses ?
> >
> > The inb()/outb() functions declared in asm/io.h
> >
> > > When End Point requests for IO BARs doesn't it get from the above
> > > resource range (first parameter of API) and do ioremap to access this
> > > region ?
> >
> > Device drivers generally do not ioremap() the I/O BARs but they use
> > inb()/outb() directly. They can also call pci_iomap() and do
> > ioread8()/iowrite8() on the pointer returned from that function, but
> > generally the call to pci_iomap() then returns a pointer into the virtual
> > address that is already mapped.
> >
> > > But why root complex driver is mapping this address region ?
> >
> > The PCI core does not know that the I/O space is memory mapped.
> > On x86 and a few others, I/O space is not memory mapped but requires the
> > use of special CPU instructions.
> >
> Thanks Arnd.
> 
> I'm facing issue in testing IO bars on our SoC.
> 
> I added following ranges in our device tree :
> ranges = <0x01000000 0x00000000 0x00000000 0x00000000 0xe0000000 0 0x00100000   //io
>              0x02000000 0x00000000 0xe0100000 0x00000000 0xe0100000 0 0x0ef00000>;   //non prefetchabe memory
> 
> And I'm using above API to map the res and cpu physical address in my driver.
> 
> Kernel Boot log:
> [    2.345294] nwl-pcie fd0e0000.pcie: Link is UP
> [    2.345339] PCI host bridge /amba/pcie@fd0e0000 ranges:
> [    2.345356]   No bus range found for /amba/pcie@fd0e0000, using [bus 00-ff]
> [    2.345382]    IO 0xe0000000..0xe00fffff -> 0x00000000
> [    2.345401]   MEM 0xe0100000..0xeeffffff -> 0xe0100000
> [    2.345498] nwl-pcie fd0e0000.pcie: PCI host bridge to bus 0000:00
> [    2.345517] pci_bus 0000:00: root bus resource [bus 00-ff]
> [    2.345533] pci_bus 0000:00: root bus resource [io  0x0000-0xfffff]
> [    2.345550] pci_bus 0000:00: root bus resource [mem 0xe0100000-0xeeffffff]
> [    2.345770] pci 0000:00:00.0: cannot attach to SMMU, is it on the same bus?
> [    2.345786] iommu: Adding device 0000:00:00.0 to group 1
> [    2.346142] pci 0000:01:00.0: cannot attach to SMMU, is it on the same bus?
> [    2.346158] iommu: Adding device 0000:01:00.0 to group 1
> [    2.346213] pci 0000:00:00.0: BAR 8: assigned [mem 0xe0100000-0xe02fffff]
> [    2.346234] pci 0000:01:00.0: BAR 0: assigned [mem 0xe0100000-0xe01fffff 64bit]
> [    2.346268] pci 0000:01:00.0: BAR 2: assigned [mem 0xe0200000-0xe02fffff 64bit]
> [    2.346300] pci 0000:01:00.0: BAR 4: no space for [io  size 0x0040]

Can you try to print the value of ret in pci_assign_resource() when it prints the above message?

I would try debugging that function and the __pci_assign_resource() function to figure out
where it fails. Maybe it is due to the I/O region being 1MB?
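
For example, something along these lines in the failure path of pci_assign_resource()
(just a debugging sketch; the exact variable names and placement depend on your kernel
version):

	if (ret < 0)
		dev_info(&dev->dev, "BAR %d: assign failed, ret = %d\n",
			 resno, ret);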

Best regards,
Liviu

> [    2.346316] pci 0000:01:00.0: BAR 4: failed to assign [io  size 0x0040]
> [    2.346333] pci 0000:00:00.0: PCI bridge to [bus 01-0c]
> [    2.346350] pci 0000:00:00.0:   bridge window [mem 0xe0100000-0xe02fffff]
> 
> IO assignment fails.
> 
> On End Point:
> 01:00.0 Memory controller: Xilinx Corporation Device a024
>         Subsystem: Xilinx Corporation Device 0007
>         Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
>         Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
>         Interrupt: pin A routed to IRQ 224
>         Region 0: Memory at e0100000 (64-bit, non-prefetchable) [disabled] [size=1M]
>         Region 2: Memory at e0200000 (64-bit, non-prefetchable) [disabled] [size=1M]
>         Region 4: I/O ports at <unassigned> [disabled]
> 
> When I tested on x86 machine the same End Point I/O address is assigned, but it is a IO port mapped address.
> 
> So my doubt is why the memory mapped IO addresses are not assigned to EP on SoC ?
> 
> Do we need to have port mapped addresses on SoC also for PCI IO bars ?
> 
> Please let me know If I'm doing something wrong or missing something.
> 
> Thanks & Regards,
> Bharat
> 
> 

-- 
====================
| I would like to |
| fix the world,  |
| but they're not |
| giving me the   |
 \ source code!  /
  ---------------
    ¯\_(ツ)_/¯


* Re: Purpose of pci_remap_iospace
  2016-07-13 12:30       ` Bharat Kumar Gogada
@ 2016-07-13 13:28         ` Arnd Bergmann
  2016-07-13 15:16           ` Bharat Kumar Gogada
  2016-07-13 13:46         ` Lorenzo Pieralisi
  1 sibling, 1 reply; 22+ messages in thread
From: Arnd Bergmann @ 2016-07-13 13:28 UTC (permalink / raw)
  To: Bharat Kumar Gogada
  Cc: linux-pci, linux-kernel, Bjorn Helgaas, Liviu.Dudau, nofooter,
	thomas.petazzoni

On Wednesday, July 13, 2016 12:30:44 PM CEST Bharat Kumar Gogada wrote:
>  > On Wednesday, July 13, 2016 8:11:56 AM CEST Bharat Kumar Gogada wrote:
> > > > Subject: Re: Purpose of pci_remap_iospace
> >
> > I notice you have 1MB of I/O space here
> >
> > > Kernel Boot log:
> > > [    2.345294] nwl-pcie fd0e0000.pcie: Link is UP
> > > [    2.345339] PCI host bridge /amba/pcie@fd0e0000 ranges:
> > > [    2.345356]   No bus range found for /amba/pcie@fd0e0000, using [bus
> > 00-ff]
> > > [    2.345382]    IO 0xe0000000..0xe00fffff -> 0x00000000
> > > [    2.345401]   MEM 0xe0100000..0xeeffffff -> 0xe0100000
> > > [    2.345498] nwl-pcie fd0e0000.pcie: PCI host bridge to bus 0000:00
> > > [    2.345517] pci_bus 0000:00: root bus resource [bus 00-ff]
> > > [    2.345533] pci_bus 0000:00: root bus resource [io  0x0000-0xfffff]
> >
> > and all of it gets mapped by the PCI core. Usually you only have 64K of I/O
> > space per host bridge, and the PCI core should perhaps not try to map
> > all of it, though I don't think this is actually your problem here.
> >
> > > [    2.345550] pci_bus 0000:00: root bus resource [mem 0xe0100000-
> > 0xeeffffff]
> > > [    2.345770] pci 0000:00:00.0: cannot attach to SMMU, is it on the same
> > bus?
> > > [    2.345786] iommu: Adding device 0000:00:00.0 to group 1
> > > [    2.346142] pci 0000:01:00.0: cannot attach to SMMU, is it on the same
> > bus?
> > > [    2.346158] iommu: Adding device 0000:01:00.0 to group 1
> > > [    2.346213] pci 0000:00:00.0: BAR 8: assigned [mem 0xe0100000-
> > 0xe02fffff]
> > > [    2.346234] pci 0000:01:00.0: BAR 0: assigned [mem 0xe0100000-0xe01fffff
> > 64bit]
> > > [    2.346268] pci 0000:01:00.0: BAR 2: assigned [mem 0xe0200000-0xe02fffff
> > 64bit]
> > > [    2.346300] pci 0000:01:00.0: BAR 4: no space for [io  size 0x0040]
> > > [    2.346316] pci 0000:01:00.0: BAR 4: failed to assign [io  size 0x0040]
> > > [    2.346333] pci 0000:00:00.0: PCI bridge to [bus 01-0c]
> > > [    2.346350] pci 0000:00:00.0:   bridge window [mem 0xe0100000-
> > 0xe02fffff]
> > >
> > > IO assignment fails.
> >
> > I would guess that the I/O space is not registered correctly. Is this
> > drivers/pci/host/pcie-xilinx.c ? We have had problems with this in the
> > past, since almost nobody uses I/O space and it requires several
> > steps to all be done correctly.
> >
> Thanks Arnd.
> 
> we are testing using drivers/pci/host/pcie-xilinx-nwl.c.

According to Documentation/devicetree/bindings/pci/xilinx-nwl-pcie.txt,
this hardware does not support I/O space.

Is this on ARM or microblaze?

> Here is the code I added to driver in probe:
> ..
> err = of_pci_get_host_bridge_resources(node, 0, 0xff, &res, &iobase);
>         if (err) {
>                 pr_err("Getting bridge resources failed\n");
>                 return err;
>         }
> resource_list_for_each_entry(window, &res) {            //code for io resource
>                 struct resource *res = window->res;
>                 u64 restype = resource_type(res);
> 
>                 switch (restype) {
>                 case IORESOURCE_IO:
>                         err = pci_remap_iospace(res, iobase);
>                         if(err)
>                                 pr_info("FAILED TO IPREMAP RESOURCE\n");
>                         break;
>                 default:
>                         dev_err(pcie->dev, "invalid resource %pR\n", res);
> 
>                 }
>         }
> 
> Other than above code I haven't done any change in driver.
> 
> Here is the printk added boot log:
> [    2.308680] nwl-pcie fd0e0000.pcie: Link is UP
> [    2.308724] PCI host bridge /amba/pcie@fd0e0000 ranges:
> [    2.308741]   No bus range found for /amba/pcie@fd0e0000, using [bus 00-ff]
> [    2.308755] in pci_add_resource_offset res->start 0   offset 0
> [    2.308774]    IO 0xe0000000..0xe00fffff -> 0x00000000
> [    2.308795] in pci_add_resource_offset res->start 0   offset 0
> [    2.308805]   MEM 0xe0100000..0xeeffffff -> 0xe0100000
> [    2.308824] in pci_add_resource_offset res->start e0100000    offset 0
> [    2.308834] nwl-pcie fd0e0000.pcie: invalid resource [bus 00-ff]
> [    2.308870] nwl-pcie fd0e0000.pcie: invalid resource [mem 0xe0100000-0xeeffffff]
> [    2.308979] nwl-pcie fd0e0000.pcie: PCI host bridge to bus 0000:00
> [    2.308998] pci_bus 0000:00: root bus resource [bus 00-ff]
> [    2.309014] pci_bus 0000:00: root bus resource [io  0x0000-0xfffff]
> [    2.309030] pci_bus 0000:00: root bus resource [mem 0xe0100000-0xeeffffff]
> [    2.309253] pci 0000:00:00.0: cannot attach to SMMU, is it on the same bus?
> [    2.309269] iommu: Adding device 0000:00:00.0 to group 1
> [    2.309625] pci 0000:01:00.0: cannot attach to SMMU, is it on the same bus?
> [    2.309641] iommu: Adding device 0000:01:00.0 to group 1
> [    2.309697] pci 0000:00:00.0: BAR 8: assigned [mem 0xe0100000-0xe02fffff]
> [    2.309718] pci 0000:01:00.0: BAR 0: assigned [mem 0xe0100000-0xe01fffff 64bit]
> [    2.309752] pci 0000:01:00.0: BAR 2: assigned [mem 0xe0200000-0xe02fffff 64bit]
> [    2.309784] pci 0000:01:00.0: BAR 4: no space for [io  size 0x0040]
> [    2.309800] pci 0000:01:00.0: BAR 4: failed to assign [io  size 0x0040]
> [    2.309816] pci 0000:00:00.0: PCI bridge to [bus 01-0c]
> [    2.309833] pci 0000:00:00.0:   bridge window [mem 0xe0100000-0xe02fffff]
> 
> Here is the output of ioports and iomem:
> 
> root@:~# cat /proc/iomem
> 00000000-7fffffff : System RAM
>   00080000-00a76fff : Kernel code
>   01c72000-01d4bfff : Kernel data
> fd0c0000-fd0c1fff : /amba/ahci@fd0c0000
> fd0e0000-fd0e0fff : breg
> fd480000-fd480fff : pcireg
> ff000000-ff000fff : xuartps
> ff010000-ff010fff : xuartps
> ff020000-ff020fff : /amba/i2c@ff020000
> ff030000-ff030fff : /amba/i2c@ff030000
> ff070000-ff070fff : /amba/can@ff070000
> ff0a0000-ff0a0fff : /amba/gpio@ff0a0000
> ff0f0000-ff0f0fff : /amba/spi@ff0f0000
> ff170000-ff170fff : mmc0
> ffa60000-ffa600ff : /amba/rtc@ffa60000
> 8000000000-8000ffffff : cfg
> root@:~# cat /proc/ioports
> root@:~#
> 
> /proc/ioports is empty.
> 

This has neither the PCI memory nor the I/O resource; it looks like you never
call pci_add_resource_offset() to start with, or maybe it fails for some reason.

	Arnd


* Re: Purpose of pci_remap_iospace
  2016-07-13 12:30       ` Bharat Kumar Gogada
  2016-07-13 13:28         ` Arnd Bergmann
@ 2016-07-13 13:46         ` Lorenzo Pieralisi
  2016-07-14  6:03           ` Bharat Kumar Gogada
  2016-07-14 13:32           ` Bharat Kumar Gogada
  1 sibling, 2 replies; 22+ messages in thread
From: Lorenzo Pieralisi @ 2016-07-13 13:46 UTC (permalink / raw)
  To: Bharat Kumar Gogada
  Cc: Arnd Bergmann, linux-pci, linux-kernel, Bjorn Helgaas,
	Liviu.Dudau, nofooter, thomas.petazzoni

On Wed, Jul 13, 2016 at 12:30:44PM +0000, Bharat Kumar Gogada wrote:

[...]

> err = of_pci_get_host_bridge_resources(node, 0, 0xff, &res, &iobase);
>         if (err) {
>                 pr_err("Getting bridge resources failed\n");
>                 return err;
>         }
> resource_list_for_each_entry(window, &res) {            //code for io resource
>                 struct resource *res = window->res;
>                 u64 restype = resource_type(res);
> 
>                 switch (restype) {
>                 case IORESOURCE_IO:
>                         err = pci_remap_iospace(res, iobase);
>                         if(err)
>                                 pr_info("FAILED TO IPREMAP RESOURCE\n");
>                         break;
>                 default:
>                         dev_err(pcie->dev, "invalid resource %pR\n", res);
> 
>                 }
>         }
> 
> Other than above code I haven't done any change in driver.
> 
> Here is the printk added boot log:
> [    2.308680] nwl-pcie fd0e0000.pcie: Link is UP
> [    2.308724] PCI host bridge /amba/pcie@fd0e0000 ranges:
> [    2.308741]   No bus range found for /amba/pcie@fd0e0000, using [bus 00-ff]
> [    2.308755] in pci_add_resource_offset res->start 0   offset 0
> [    2.308774]    IO 0xe0000000..0xe00fffff -> 0x00000000
> [    2.308795] in pci_add_resource_offset res->start 0   offset 0
> [    2.308805]   MEM 0xe0100000..0xeeffffff -> 0xe0100000
> [    2.308824] in pci_add_resource_offset res->start e0100000    offset 0
> [    2.308834] nwl-pcie fd0e0000.pcie: invalid resource [bus 00-ff]
> [    2.308870] nwl-pcie fd0e0000.pcie: invalid resource [mem 0xe0100000-0xeeffffff]
> [    2.308979] nwl-pcie fd0e0000.pcie: PCI host bridge to bus 0000:00
> [    2.308998] pci_bus 0000:00: root bus resource [bus 00-ff]
> [    2.309014] pci_bus 0000:00: root bus resource [io  0x0000-0xfffff]
> [    2.309030] pci_bus 0000:00: root bus resource [mem 0xe0100000-0xeeffffff]
> [    2.309253] pci 0000:00:00.0: cannot attach to SMMU, is it on the same bus?
> [    2.309269] iommu: Adding device 0000:00:00.0 to group 1
> [    2.309625] pci 0000:01:00.0: cannot attach to SMMU, is it on the same bus?
> [    2.309641] iommu: Adding device 0000:01:00.0 to group 1
> [    2.309697] pci 0000:00:00.0: BAR 8: assigned [mem 0xe0100000-0xe02fffff]

Here is your PCI bridge mem space window assignment. I do not see
an I/O window assignment, which makes me think that I/O cycles and
the corresponding I/O window are not enabled through the bridge; that's
the reason you can't assign I/O space to the endpoint: it has no
parent I/O window enabled, IIUC.

You can add some debug info to pci_bridge_check_ranges(), in
particular to the reading of the PCI_IO_BASE register, to confirm
what I am saying above, thanks. See the sketch below.
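
For example (a sketch only; pci_bridge_check_ranges() already reads PCI_IO_BASE
into a local variable, so it is mostly a matter of printing what it sees):

	pci_read_config_word(bridge, PCI_IO_BASE, &io);
	pr_info("%s: %s PCI_IO_BASE reads %#06x\n",
		__func__, pci_name(bridge), io);
	/* if this stays 0 even after the probe write that follows, the root
	 * port does not implement an I/O window and no I/O BAR behind it
	 * can ever be assigned */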

Lorenzo

> [    2.309718] pci 0000:01:00.0: BAR 0: assigned [mem 0xe0100000-0xe01fffff 64bit]
> [    2.309752] pci 0000:01:00.0: BAR 2: assigned [mem 0xe0200000-0xe02fffff 64bit]
> [    2.309784] pci 0000:01:00.0: BAR 4: no space for [io  size 0x0040]
> [    2.309800] pci 0000:01:00.0: BAR 4: failed to assign [io  size 0x0040]
> [    2.309816] pci 0000:00:00.0: PCI bridge to [bus 01-0c]
> [    2.309833] pci 0000:00:00.0:   bridge window [mem 0xe0100000-0xe02fffff]
> 
> Here is the output of ioports and iomem:
> 
> root@:~# cat /proc/iomem
> 00000000-7fffffff : System RAM
>   00080000-00a76fff : Kernel code
>   01c72000-01d4bfff : Kernel data
> fd0c0000-fd0c1fff : /amba/ahci@fd0c0000
> fd0e0000-fd0e0fff : breg
> fd480000-fd480fff : pcireg
> ff000000-ff000fff : xuartps
> ff010000-ff010fff : xuartps
> ff020000-ff020fff : /amba/i2c@ff020000
> ff030000-ff030fff : /amba/i2c@ff030000
> ff070000-ff070fff : /amba/can@ff070000
> ff0a0000-ff0a0fff : /amba/gpio@ff0a0000
> ff0f0000-ff0f0fff : /amba/spi@ff0f0000
> ff170000-ff170fff : mmc0
> ffa60000-ffa600ff : /amba/rtc@ffa60000
> 8000000000-8000ffffff : cfg
> root@:~# cat /proc/ioports
> root@:~#
> 
> /proc/ioports is empty.
> 
> Thanks & Regards,
> Bharat
> 
> 
> 


* RE: Purpose of pci_remap_iospace
  2016-07-13 13:28         ` Arnd Bergmann
@ 2016-07-13 15:16           ` Bharat Kumar Gogada
  2016-07-13 15:28             ` Arnd Bergmann
  2016-07-13 16:13             ` Lorenzo Pieralisi
  0 siblings, 2 replies; 22+ messages in thread
From: Bharat Kumar Gogada @ 2016-07-13 15:16 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: linux-pci, linux-kernel, Bjorn Helgaas, Liviu.Dudau, nofooter,
	thomas.petazzoni

> Subject: Re: Purpose of pci_remap_iospace
>
> On Wednesday, July 13, 2016 12:30:44 PM CEST Bharat Kumar Gogada wrote:
> >  > On Wednesday, July 13, 2016 8:11:56 AM CEST Bharat Kumar Gogada
> wrote:
> > > > > Subject: Re: Purpose of pci_remap_iospace
> > >
> > > I notice you have 1MB of I/O space here
> > >
> > > > Kernel Boot log:
> > > > [    2.345294] nwl-pcie fd0e0000.pcie: Link is UP
> > > > [    2.345339] PCI host bridge /amba/pcie@fd0e0000 ranges:
> > > > [    2.345356]   No bus range found for /amba/pcie@fd0e0000, using
> [bus
> > > 00-ff]
> > > > [    2.345382]    IO 0xe0000000..0xe00fffff -> 0x00000000
> > > > [    2.345401]   MEM 0xe0100000..0xeeffffff -> 0xe0100000
> > > > [    2.345498] nwl-pcie fd0e0000.pcie: PCI host bridge to bus 0000:00
> > > > [    2.345517] pci_bus 0000:00: root bus resource [bus 00-ff]
> > > > [    2.345533] pci_bus 0000:00: root bus resource [io  0x0000-0xfffff]
> > >
> > > and all of it gets mapped by the PCI core. Usually you only have 64K
> > > of I/O space per host bridge, and the PCI core should perhaps not
> > > try to map all of it, though I don't think this is actually your problem here.
> > >
> > > > [    2.345550] pci_bus 0000:00: root bus resource [mem 0xe0100000-
> > > 0xeeffffff]
> > > > [    2.345770] pci 0000:00:00.0: cannot attach to SMMU, is it on the same
> > > bus?
> > > > [    2.345786] iommu: Adding device 0000:00:00.0 to group 1
> > > > [    2.346142] pci 0000:01:00.0: cannot attach to SMMU, is it on the same
> > > bus?
> > > > [    2.346158] iommu: Adding device 0000:01:00.0 to group 1
> > > > [    2.346213] pci 0000:00:00.0: BAR 8: assigned [mem 0xe0100000-
> > > 0xe02fffff]
> > > > [    2.346234] pci 0000:01:00.0: BAR 0: assigned [mem 0xe0100000-
> 0xe01fffff
> > > 64bit]
> > > > [    2.346268] pci 0000:01:00.0: BAR 2: assigned [mem 0xe0200000-
> 0xe02fffff
> > > 64bit]
> > > > [    2.346300] pci 0000:01:00.0: BAR 4: no space for [io  size 0x0040]
> > > > [    2.346316] pci 0000:01:00.0: BAR 4: failed to assign [io  size 0x0040]
> > > > [    2.346333] pci 0000:00:00.0: PCI bridge to [bus 01-0c]
> > > > [    2.346350] pci 0000:00:00.0:   bridge window [mem 0xe0100000-
> > > 0xe02fffff]
> > > >
> > > > IO assignment fails.
> > >
> > > I would guess that the I/O space is not registered correctly. Is
> > > this drivers/pci/host/pcie-xilinx.c ? We have had problems with this
> > > in the past, since almost nobody uses I/O space and it requires
> > > several steps to all be done correctly.
> > >
> > Thanks Arnd.
> >
> > we are testing using drivers/pci/host/pcie-xilinx-nwl.c.
>
> According to Documentation/devicetree/bindings/pci/xilinx-nwl-pcie.txt,
> this hardware does not support I/O space.

We received a newer IP version with IO support, so we are trying to test this feature.
>
> Is this on ARM or microblaze?

It is ARM 64-bit.

> This has neither the PCI memory nor the I/O resource, it looks like you never
> call pci_add_resource_offset() to start with, or maybe it fails for some
> reason.

I see that the above API is used in ARM drivers; do we need to do it on ARM64 also?

Regards,
Bharat





* Re: Purpose of pci_remap_iospace
  2016-07-13 15:16           ` Bharat Kumar Gogada
@ 2016-07-13 15:28             ` Arnd Bergmann
  2016-07-13 15:42               ` Liviu.Dudau
  2016-07-13 16:13             ` Lorenzo Pieralisi
  1 sibling, 1 reply; 22+ messages in thread
From: Arnd Bergmann @ 2016-07-13 15:28 UTC (permalink / raw)
  To: Bharat Kumar Gogada
  Cc: linux-pci, linux-kernel, Bjorn Helgaas, Liviu.Dudau, nofooter,
	thomas.petazzoni

On Wednesday, July 13, 2016 3:16:21 PM CEST Bharat Kumar Gogada wrote:
> 
> > This has neither the PCI memory nor the I/O resource, it looks like you never
> > call pci_add_resource_offset() to start with, or maybe it fails for some
> > reason.
> 
> I see that above API is used in ARM drivers, do we need to do it in ARM64 also ?
> 

Yes, all architectures need it.

	Arnd


* Re: Purpose of pci_remap_iospace
  2016-07-13 15:28             ` Arnd Bergmann
@ 2016-07-13 15:42               ` Liviu.Dudau
  0 siblings, 0 replies; 22+ messages in thread
From: Liviu.Dudau @ 2016-07-13 15:42 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: Bharat Kumar Gogada, linux-pci, linux-kernel, Bjorn Helgaas,
	nofooter, thomas.petazzoni

On Wed, Jul 13, 2016 at 05:28:47PM +0200, Arnd Bergmann wrote:
> On Wednesday, July 13, 2016 3:16:21 PM CEST Bharat Kumar Gogada wrote:
> > 
> > > This has neither the PCI memory nor the I/O resource, it looks like you never
> > > call pci_add_resource_offset() to start with, or maybe it fails for some
> > > reason.
> > 
> > I see that above API is used in ARM drivers, do we need to do it in ARM64 also ?
> > 
> 
> Yes, all architectures need it.

of_pci_get_host_bridge_resources() calls it for him.
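
For reference, the relevant line inside the for_each_of_pci_range() loop of
of_pci_get_host_bridge_resources() (drivers/of/of_pci.c, roughly):

	pci_add_resource_offset(resources, res, res->start - range.pci_addr);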

Liviu

> 
> 	Arnd
> 

-- 
====================
| I would like to |
| fix the world,  |
| but they're not |
| giving me the   |
 \ source code!  /
  ---------------
    ¯\_(ツ)_/¯


* Re: Purpose of pci_remap_iospace
  2016-07-13 15:16           ` Bharat Kumar Gogada
  2016-07-13 15:28             ` Arnd Bergmann
@ 2016-07-13 16:13             ` Lorenzo Pieralisi
  1 sibling, 0 replies; 22+ messages in thread
From: Lorenzo Pieralisi @ 2016-07-13 16:13 UTC (permalink / raw)
  To: Bharat Kumar Gogada
  Cc: Arnd Bergmann, linux-pci, linux-kernel, Bjorn Helgaas,
	Liviu.Dudau, nofooter, thomas.petazzoni

On Wed, Jul 13, 2016 at 03:16:21PM +0000, Bharat Kumar Gogada wrote:
> > Subject: Re: Purpose of pci_remap_iospace
> >
> > On Wednesday, July 13, 2016 12:30:44 PM CEST Bharat Kumar Gogada wrote:
> > >  > On Wednesday, July 13, 2016 8:11:56 AM CEST Bharat Kumar Gogada
> > wrote:
> > > > > > Subject: Re: Purpose of pci_remap_iospace
> > > >
> > > > I notice you have 1MB of I/O space here
> > > >
> > > > > Kernel Boot log:
> > > > > [    2.345294] nwl-pcie fd0e0000.pcie: Link is UP
> > > > > [    2.345339] PCI host bridge /amba/pcie@fd0e0000 ranges:
> > > > > [    2.345356]   No bus range found for /amba/pcie@fd0e0000, using
> > [bus
> > > > 00-ff]
> > > > > [    2.345382]    IO 0xe0000000..0xe00fffff -> 0x00000000
> > > > > [    2.345401]   MEM 0xe0100000..0xeeffffff -> 0xe0100000
> > > > > [    2.345498] nwl-pcie fd0e0000.pcie: PCI host bridge to bus 0000:00
> > > > > [    2.345517] pci_bus 0000:00: root bus resource [bus 00-ff]
> > > > > [    2.345533] pci_bus 0000:00: root bus resource [io  0x0000-0xfffff]
> > > >
> > > > and all of it gets mapped by the PCI core. Usually you only have 64K
> > > > of I/O space per host bridge, and the PCI core should perhaps not
> > > > try to map all of it, though I don't think this is actually your problem here.
> > > >
> > > > > [    2.345550] pci_bus 0000:00: root bus resource [mem 0xe0100000-
> > > > 0xeeffffff]
> > > > > [    2.345770] pci 0000:00:00.0: cannot attach to SMMU, is it on the same
> > > > bus?
> > > > > [    2.345786] iommu: Adding device 0000:00:00.0 to group 1
> > > > > [    2.346142] pci 0000:01:00.0: cannot attach to SMMU, is it on the same
> > > > bus?
> > > > > [    2.346158] iommu: Adding device 0000:01:00.0 to group 1
> > > > > [    2.346213] pci 0000:00:00.0: BAR 8: assigned [mem 0xe0100000-
> > > > 0xe02fffff]
> > > > > [    2.346234] pci 0000:01:00.0: BAR 0: assigned [mem 0xe0100000-
> > 0xe01fffff
> > > > 64bit]
> > > > > [    2.346268] pci 0000:01:00.0: BAR 2: assigned [mem 0xe0200000-
> > 0xe02fffff
> > > > 64bit]
> > > > > [    2.346300] pci 0000:01:00.0: BAR 4: no space for [io  size 0x0040]
> > > > > [    2.346316] pci 0000:01:00.0: BAR 4: failed to assign [io  size 0x0040]
> > > > > [    2.346333] pci 0000:00:00.0: PCI bridge to [bus 01-0c]
> > > > > [    2.346350] pci 0000:00:00.0:   bridge window [mem 0xe0100000-
> > > > 0xe02fffff]
> > > > >
> > > > > IO assignment fails.
> > > >
> > > > I would guess that the I/O space is not registered correctly. Is
> > > > this drivers/pci/host/pcie-xilinx.c ? We have had problems with this
> > > > in the past, since almost nobody uses I/O space and it requires
> > > > several steps to all be done correctly.
> > > >
> > > Thanks Arnd.
> > >
> > > we are testing using drivers/pci/host/pcie-xilinx-nwl.c.
> >
> > According to Documentation/devicetree/bindings/pci/xilinx-nwl-pcie.txt,
> > this hardware does not support I/O space.
> 
> We received a newer IP version with IO support, so we are trying to test this feature.
> >
> > Is this on ARM or microblaze?
> 
> It is ARM 64-bit.
> 
> > This has neither the PCI memory nor the I/O resource, it looks like you never
> > call pci_add_resource_offset() to start with, or maybe it fails for some
> > reason.
> 
> I see that above API is used in ARM drivers, do we need to do it in
> ARM64 also ?

It is called from of_pci_get_host_bridge_resources(); since you
are using that API there is nothing more you have to do. The problem
with the resources missing from /proc/iomem and /proc/ioports is that you
do not request the host bridge apertures in your host controller
driver; see drivers/pci/host/pci-host-common.c (devm_request_resource())
for how to do it, along the lines of the sketch below.
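
Roughly like this, following what pci-host-common.c does (a sketch, not the exact
code; "dev" here would be your pcie->dev and "window"/"res"/"iobase" the variables
you already have in your loop):

	resource_list_for_each_entry(window, &res) {
		struct resource *r = window->res;

		switch (resource_type(r)) {
		case IORESOURCE_IO:
			err = pci_remap_iospace(r, iobase);
			if (err)
				break;
			/* makes the I/O window show up in /proc/ioports */
			err = devm_request_resource(dev, &ioport_resource, r);
			break;
		case IORESOURCE_MEM:
			/* makes the memory window show up in /proc/iomem */
			err = devm_request_resource(dev, &iomem_resource, r);
			break;
		}
	}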

And as I said previously in this thread, none of this is related to
your I/O BAR assignment failures, IMHO.

Lorenzo


* RE: Purpose of pci_remap_iospace
  2016-07-13 13:46         ` Lorenzo Pieralisi
@ 2016-07-14  6:03           ` Bharat Kumar Gogada
  2016-07-14 13:32           ` Bharat Kumar Gogada
  1 sibling, 0 replies; 22+ messages in thread
From: Bharat Kumar Gogada @ 2016-07-14  6:03 UTC (permalink / raw)
  To: Lorenzo Pieralisi
  Cc: Arnd Bergmann, linux-pci, linux-kernel, Bjorn Helgaas,
	Liviu.Dudau, nofooter, thomas.petazzoni

> Subject: Re: Purpose of pci_remap_iospace
>
> On Wed, Jul 13, 2016 at 12:30:44PM +0000, Bharat Kumar Gogada wrote:
>
> [...]
>
> > err = of_pci_get_host_bridge_resources(node, 0, 0xff, &res, &iobase);
> >         if (err) {
> >                 pr_err("Getting bridge resources failed\n");
> >                 return err;
> >         }
> > resource_list_for_each_entry(window, &res) {            //code for io resource
> >                 struct resource *res = window->res;
> >                 u64 restype = resource_type(res);
> >
> >                 switch (restype) {
> >                 case IORESOURCE_IO:
> >                         err = pci_remap_iospace(res, iobase);
> >                         if(err)
> >                                 pr_info("FAILED TO IPREMAP RESOURCE\n");
> >                         break;
> >                 default:
> >                         dev_err(pcie->dev, "invalid resource %pR\n",
> > res);
> >
> >                 }
> >         }
> >
> > Other than above code I haven't done any change in driver.
> >
> Here is your PCI bridge mem space window assignment. I do not see an IO
> window assignment which makes me think that IO cycles and relative IO
> window is not enabled through the bridge, that's the reason you can't assign
> IO space to the endpoint, because it has no parent IO window enabled IIUC.
>

We sorted this out by enabling the I/O base/limit and upper-16-bit registers in the bridge for 32-bit decode.
However, the I/O address being assigned to the EP is different from what I provide in the device tree.

Device tree property:
ranges = <0x01000000 0x00000000 0x00000000 0x00000000 0xe0000000 0 0x00010000   //io
                      0x02000000 0x00000000 0xe0100000 0x00000000 0xe0100000 0 0x0ef00000>; //non-prefetchable memory

Here is the boot log:
[    2.312504] nwl-pcie fd0e0000.pcie: Link is UP
[    2.312548] PCI host bridge /amba/pcie@fd0e0000 ranges:
[    2.312565]   No bus range found for /amba/pcie@fd0e0000, using [bus 00-ff]
[    2.312591]    IO 0xe0000000..0xe000ffff -> 0x00000000
[    2.312610]   MEM 0xe0100000..0xeeffffff -> 0xe0100000
[    2.312711] nwl-pcie fd0e0000.pcie: PCI host bridge to bus 0000:00
[    2.312729] pci_bus 0000:00: root bus resource [bus 00-ff]
[    2.312745] pci_bus 0000:00: root bus resource [io  0x0000-0xffff]
[    2.312761] pci_bus 0000:00: root bus resource [mem 0xe0100000-0xeeffffff]
[    2.312993] pci 0000:00:00.0: cannot attach to SMMU, is it on the same bus?
[    2.313009] iommu: Adding device 0000:00:00.0 to group 1
[    2.313363] pci 0000:01:00.0: cannot attach to SMMU, is it on the same bus?
[    2.313379] iommu: Adding device 0000:01:00.0 to group 1
[    2.313434] pci 0000:00:00.0: BAR 8: assigned [mem 0xe0100000-0xe02fffff]
[    2.313452] pci 0000:00:00.0: BAR 7: assigned [io  0x1000-0x1fff]
[    2.313469] pci 0000:00:00.0: BAR 6: assigned [mem 0xe0300000-0xe03007ff pref]
[    2.313495] pci 0000:01:00.0: BAR 0: assigned [mem 0xe0100000-0xe01fffff 64bit]
[    2.313529] pci 0000:01:00.0: BAR 2: assigned [mem 0xe0200000-0xe02fffff 64bit]
[    2.313561] pci 0000:01:00.0: BAR 4: assigned [io  0x1000-0x103f]
[    2.313581] pci 0000:00:00.0: PCI bridge to [bus 01-0c]
[    2.313597] pci 0000:00:00.0:   bridge window [io  0x1000-0x1fff]
[    2.313614] pci 0000:00:00.0:   bridge window [mem 0xe0100000-0xe02fffff]

If we are mapping our I/O space at 0xe0000000 with a 64K size, why is the kernel showing 0x1000-0x1fff, which is only 4K?

lspci of the bridge:
00:00.0 PCI bridge: Xilinx Corporation Device a024 (prog-if 00 [Normal decode])
        Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Interrupt: pin A routed to IRQ 224
        Bus: primary=00, secondary=01, subordinate=0c, sec-latency=0
        I/O behind bridge: 00001000-00001fff
        Memory behind bridge: e0100000-e02fffff

lspci of the EP:
01:00.0 Memory controller: Xilinx Corporation Device d024
        Subsystem: Xilinx Corporation Device 0007
        Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Interrupt: pin A routed to IRQ 224
        Region 0: Memory at e0100000 (64-bit, non-prefetchable) [disabled] [size=1M]
        Region 2: Memory at e0200000 (64-bit, non-prefetchable) [disabled] [size=1M]
        Region 4: I/O ports at 1000 [disabled] [size=64]

I've yet to try the other API you pointed out (devm_request_resource()).

Thanks & Regards,
Bharat




* RE: Purpose of pci_remap_iospace
  2016-07-13 13:46         ` Lorenzo Pieralisi
  2016-07-14  6:03           ` Bharat Kumar Gogada
@ 2016-07-14 13:32           ` Bharat Kumar Gogada
  2016-07-14 14:56             ` Lorenzo Pieralisi
  1 sibling, 1 reply; 22+ messages in thread
From: Bharat Kumar Gogada @ 2016-07-14 13:32 UTC (permalink / raw)
  To: Lorenzo Pieralisi
  Cc: Arnd Bergmann, linux-pci, linux-kernel, Bjorn Helgaas,
	Liviu.Dudau, nofooter, thomas.petazzoni

> > Subject: Re: Purpose of pci_remap_iospace
> >
> > On Wed, Jul 13, 2016 at 12:30:44PM +0000, Bharat Kumar Gogada wrote:
> >
> > [...]
> >
> > > err = of_pci_get_host_bridge_resources(node, 0, 0xff, &res, &iobase);
> > >         if (err) {
> > >                 pr_err("Getting bridge resources failed\n");
> > >                 return err;
> > >         }
> > > resource_list_for_each_entry(window, &res) {            //code for io
> resource
> > >                 struct resource *res = window->res;
> > >                 u64 restype = resource_type(res);
> > >
> > >                 switch (restype) {
> > >                 case IORESOURCE_IO:
> > >                         err = pci_remap_iospace(res, iobase);
> > >                         if(err)
> > >                                 pr_info("FAILED TO IPREMAP RESOURCE\n");
> > >                         break;
> > >                 default:
> > >                         dev_err(pcie->dev, "invalid resource %pR\n",
> > > res);
> > >
> > >                 }
> > >         }
> > >
> > > Other than above code I haven't done any change in driver.
> > >
> > Here is your PCI bridge mem space window assignment. I do not see an
> > IO window assignment which makes me think that IO cycles and relative
> > IO window is not enabled through the bridge, that's the reason you
> > can't assign IO space to the endpoint, because it has no parent IO window
> enabled IIUC.
> >
>
> We sorted this out, enabled the IO base limit / upper 16bit registers in the
> bridge for 32 bit decode.
> However my IO address being assigned to EP is different than what I provide
> in device tree.
>

Hi Lorenzo,

I missed something in my device tree now I corrected it.

ranges = <0x01000000 0x00000000 0xe0000000 0x00000000 0xe0000000 0 0x00010000   //io
                     0x02000000 0x00000000 0xe0100000 0x00000000 0xe0100000 0 0x0ef00000>; //non prefetchabe memory

[    2.389498] nwl-pcie fd0e0000.pcie: Link is UP
[    2.389541] PCI host bridge /amba/pcie@fd0e0000 ranges:
[    2.389558]   No bus range found for /amba/pcie@fd0e0000, using [bus 00-ff]
[    2.389583]    IO 0xe0000000..0xe000ffff -> 0xe0000000
[    2.389624]   MEM 0xe0100000..0xeeffffff -> 0xe0100000
[    2.389803] nwl-pcie fd0e0000.pcie: PCI host bridge to bus 0000:00
[    2.389822] pci_bus 0000:00: root bus resource [bus 00-ff]
[    2.389839] pci_bus 0000:00: root bus resource [io  0x0000-0xffff] (bus address [0xe0000000-0xe000ffff])
[    2.389863] pci_bus 0000:00: root bus resource [mem 0xe0100000-0xeeffffff]
[    2.390094] pci 0000:00:00.0: cannot attach to SMMU, is it on the same bus?
[    2.390110] iommu: Adding device 0000:00:00.0 to group 1
[    2.390274] pci 0000:01:00.0: reg 0x20: initial BAR value 0x00000000 invalid
[    2.390481] pci 0000:01:00.0: cannot attach to SMMU, is it on the same bus?
[    2.390496] iommu: Adding device 0000:01:00.0 to group 1
[    2.390533] in pci_bridge_check_ranges io 101
[    2.390545] in pci_bridge_check_ranges io 2 101
[    2.390575] pci 0000:00:00.0: BAR 8: assigned [mem 0xe0100000-0xe02fffff]
[    2.390592] pci 0000:00:00.0: BAR 7: assigned [io  0x1000-0x1fff]
[    2.390609] pci 0000:00:00.0: BAR 6: assigned [mem 0xe0300000-0xe03007ff pref]
[    2.390636] pci 0000:01:00.0: BAR 0: assigned [mem 0xe0100000-0xe01fffff 64bit]
[    2.390669] pci 0000:01:00.0: BAR 2: assigned [mem 0xe0200000-0xe02fffff 64bit]
[    2.390702] pci 0000:01:00.0: BAR 4: assigned [io  0x1000-0x103f]
[    2.390721] pci 0000:00:00.0: PCI bridge to [bus 01-0c]
[    2.390785] pci 0000:00:00.0:   bridge window [io  0x1000-0x1fff]
[    2.390823] pci 0000:00:00.0:   bridge window [mem 0xe0100000-0xe02fffff]

Lspci on bridge:
00:00.0 PCI bridge: Xilinx Corporation Device a024 (prog-if 00 [Normal decode])
        Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Interrupt: pin A routed to IRQ 224
        Bus: primary=00, secondary=01, subordinate=0c, sec-latency=0
        I/O behind bridge: e0001000-e0001fff
        Memory behind bridge: e0100000-e02fffff

Here my IO space is showing as only 4k, but what I'm providing is 64k? (In the above boot log the IO space length is also 4k.)

Lspci on EP:
01:00.0 Memory controller: Xilinx Corporation Device d024
        Subsystem: Xilinx Corporation Device 0007
        Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Interrupt: pin A routed to IRQ 224
        Region 0: Memory at e0100000 (64-bit, non-prefetchable) [disabled] [size=1M]
        Region 2: Memory at e0200000 (64-bit, non-prefetchable) [disabled] [size=1M]
        Region 4: I/O ports at 1000 [disabled] [size=64]

On the EP, where is it getting this 0x1000 address from? Shouldn't it be within the I/O behind bridge range?


Thanks & Regards,
Bharat




^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: Purpose of pci_remap_iospace
  2016-07-14 13:32           ` Bharat Kumar Gogada
@ 2016-07-14 14:56             ` Lorenzo Pieralisi
  2016-07-14 15:05               ` Bharat Kumar Gogada
                                 ` (2 more replies)
  0 siblings, 3 replies; 22+ messages in thread
From: Lorenzo Pieralisi @ 2016-07-14 14:56 UTC (permalink / raw)
  To: Bharat Kumar Gogada
  Cc: Arnd Bergmann, linux-pci, linux-kernel, Bjorn Helgaas,
	Liviu.Dudau, nofooter, thomas.petazzoni

On Thu, Jul 14, 2016 at 01:32:13PM +0000, Bharat Kumar Gogada wrote:

[...]

> Hi Lorenzo,
> 
> I missed something in my device tree now I corrected it.
> 
> ranges = <0x01000000 0x00000000 0xe0000000 0x00000000 0xe0000000 0 0x00010000   //io

You have not missed anything, you changed the PCI bus address at
which your host bridge responds to IO space and it must match
your configuration. At what PCI bus address your host bridge
maps IO space ?

>                      0x02000000 0x00000000 0xe0100000 0x00000000 0xe0100000 0 0x0ef00000>; //non prefetchabe memory
> 
> [    2.389498] nwl-pcie fd0e0000.pcie: Link is UP
> [    2.389541] PCI host bridge /amba/pcie@fd0e0000 ranges:
> [    2.389558]   No bus range found for /amba/pcie@fd0e0000, using [bus 00-ff]
> [    2.389583]    IO 0xe0000000..0xe000ffff -> 0xe0000000
> [    2.389624]   MEM 0xe0100000..0xeeffffff -> 0xe0100000
> [    2.389803] nwl-pcie fd0e0000.pcie: PCI host bridge to bus 0000:00
> [    2.389822] pci_bus 0000:00: root bus resource [bus 00-ff]
> [    2.389839] pci_bus 0000:00: root bus resource [io  0x0000-0xffff] (bus address [0xe0000000-0xe000ffff])
> [    2.389863] pci_bus 0000:00: root bus resource [mem 0xe0100000-0xeeffffff]
> [    2.390094] pci 0000:00:00.0: cannot attach to SMMU, is it on the same bus?
> [    2.390110] iommu: Adding device 0000:00:00.0 to group 1
> [    2.390274] pci 0000:01:00.0: reg 0x20: initial BAR value 0x00000000 invalid
> [    2.390481] pci 0000:01:00.0: cannot attach to SMMU, is it on the same bus?
> [    2.390496] iommu: Adding device 0000:01:00.0 to group 1
> [    2.390533] in pci_bridge_check_ranges io 101
> [    2.390545] in pci_bridge_check_ranges io 2 101
> [    2.390575] pci 0000:00:00.0: BAR 8: assigned [mem 0xe0100000-0xe02fffff]
> [    2.390592] pci 0000:00:00.0: BAR 7: assigned [io  0x1000-0x1fff]
> [    2.390609] pci 0000:00:00.0: BAR 6: assigned [mem 0xe0300000-0xe03007ff pref]
> [    2.390636] pci 0000:01:00.0: BAR 0: assigned [mem 0xe0100000-0xe01fffff 64bit]
> [    2.390669] pci 0000:01:00.0: BAR 2: assigned [mem 0xe0200000-0xe02fffff 64bit]
> [    2.390702] pci 0000:01:00.0: BAR 4: assigned [io  0x1000-0x103f]
> [    2.390721] pci 0000:00:00.0: PCI bridge to [bus 01-0c]
> [    2.390785] pci 0000:00:00.0:   bridge window [io  0x1000-0x1fff]
> [    2.390823] pci 0000:00:00.0:   bridge window [mem 0xe0100000-0xe02fffff]
> 
> Lspci on bridge:
> 00:00.0 PCI bridge: Xilinx Corporation Device a024 (prog-if 00 [Normal decode])
>         Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
>         Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
>         Interrupt: pin A routed to IRQ 224
>         Bus: primary=00, secondary=01, subordinate=0c, sec-latency=0
>         I/O behind bridge: e0001000-e0001fff
>         Memory behind bridge: e0100000-e02fffff
> 
> Here my IO space is showing as only 4k, but what I'm providing is 64k? (In the above boot log the IO space length is also 4k.)
> 
> Lspci on EP:
> 01:00.0 Memory controller: Xilinx Corporation Device d024
>         Subsystem: Xilinx Corporation Device 0007
>         Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
>         Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
>         Interrupt: pin A routed to IRQ 224
>         Region 0: Memory at e0100000 (64-bit, non-prefetchable) [disabled] [size=1M]
>         Region 2: Memory at e0200000 (64-bit, non-prefetchable) [disabled] [size=1M]
>         Region 4: I/O ports at 1000 [disabled] [size=64]
> 
> On the EP, where is it getting this 0x1000 address from? Shouldn't it
> be within the I/O behind bridge range?


The CPU physical address in the DT range for the PCI IO range is the
address at which your host bridge responds to PCI IO space cycles
(through memory mapped accesses, to emulate x86 IO port behaviour).

The PCI bus address in the range is the address to which your
host bridge converts the incoming CPU physical address when it
drives the PCI bus transactions.

Is your host bridge programmed with its address decoder
set-up according to what I say above (and your DT bindings) ?

If yes, on to the virtual address space.

On ARM, for IO space, we map the CPU physical address I
mention above to a chunk of virtual address space allocated
for PCI IO space; that's what pci_remap_iospace() is meant
for.

That physical address is mapped to a fixed virtual address range
(starting with PCI_IOBASE).

The value you see in the IO BAR above is an offset into that chunk
of virtual addresses, so that when you do e.g. inb(offset) in a driver,
the code behind it translates that access into a memory mapped access
within the virtual address space allocated to PCI IO space (which you
previously mapped through pci_remap_iospace()).

The allocated offset starts from 0x1000, since that's the
value of PCIBIOS_MIN_IO, which the resource assignment code
uses to preserve the range [0..PCIBIOS_MIN_IO] so that it
is not allocated to devices/bridges (ie legacy ISA space).
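
(Purely for illustration, a minimal endpoint-driver fragment showing where that offset ends up being used — the function and region name are made up, not from this thread:)

#include <linux/pci.h>
#include <linux/io.h>

/* Hypothetical endpoint fragment: BAR 4 is the IO BAR assigned above. */
static int my_ep_setup_io(struct pci_dev *pdev)
{
        unsigned long io = pci_resource_start(pdev, 4);   /* e.g. 0x1000 */
        u8 val;

        if (pci_request_region(pdev, 4, "my_ep_io"))
                return -EBUSY;

        /*
         * inb()/outb() take the port "offset"; on arm64 they become
         * memory mapped accesses at PCI_IOBASE + offset, i.e. inside
         * the virtual window that pci_remap_iospace() set up.
         */
        val = inb(io);
        outb(val | 0x1, io);

        return 0;
}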

Does that help? Your set-up _seems_ correct; what I am worried
about is fiddling with the DT PCI bus address that is used
to drive PCI IO cycles. That depends on the values programmed into
your host bridge address decoder, and those must match the DT ranges.

Lorenzo

> 
> 
> Thanks & Regards,
> Bharat
> 
> 
> 
> This email and any attachments are intended for the sole use of the named recipient(s) and contain(s) confidential information that may be proprietary, privileged or copyrighted under applicable law. If you are not the intended recipient, do not read, copy, or forward this email message or any attachments. Delete this email message and any attachments immediately.
> 

^ permalink raw reply	[flat|nested] 22+ messages in thread

* RE: Purpose of pci_remap_iospace
  2016-07-14 14:56             ` Lorenzo Pieralisi
@ 2016-07-14 15:05               ` Bharat Kumar Gogada
  2016-07-14 15:20                 ` Lorenzo Pieralisi
  2016-07-14 15:12               ` Arnd Bergmann
  2016-07-15  5:21               ` Bharat Kumar Gogada
  2 siblings, 1 reply; 22+ messages in thread
From: Bharat Kumar Gogada @ 2016-07-14 15:05 UTC (permalink / raw)
  To: Lorenzo Pieralisi
  Cc: Arnd Bergmann, linux-pci, linux-kernel, Bjorn Helgaas,
	Liviu.Dudau, nofooter, thomas.petazzoni

> On Thu, Jul 14, 2016 at 01:32:13PM +0000, Bharat Kumar Gogada wrote:
>
> [...]
>
> > Hi Lorenzo,
> >
> > I missed something in my device tree now I corrected it.
> >
> > ranges = <0x01000000 0x00000000 0xe0000000 0x00000000 0xe0000000 0
> 0x00010000   //io
>
> You have not missed anything, you changed the PCI bus address at which
> your host bridge responds to IO space and it must match your configuration.
> At what PCI bus address your host bridge maps IO space ?
>
> >                      0x02000000 0x00000000 0xe0100000 0x00000000
> > 0xe0100000 0 0x0ef00000>; //non prefetchabe memory
> >
> > [    2.389498] nwl-pcie fd0e0000.pcie: Link is UP
> > [    2.389541] PCI host bridge /amba/pcie@fd0e0000 ranges:
> > [    2.389558]   No bus range found for /amba/pcie@fd0e0000, using [bus
> 00-ff]
> > [    2.389583]    IO 0xe0000000..0xe000ffff -> 0xe0000000
> > [    2.389624]   MEM 0xe0100000..0xeeffffff -> 0xe0100000
> > [    2.389803] nwl-pcie fd0e0000.pcie: PCI host bridge to bus 0000:00
> > [    2.389822] pci_bus 0000:00: root bus resource [bus 00-ff]
> > [    2.389839] pci_bus 0000:00: root bus resource [io  0x0000-0xffff] (bus
> address [0xe0000000-0xe000ffff])
> > [    2.389863] pci_bus 0000:00: root bus resource [mem 0xe0100000-
> 0xeeffffff]
> > [    2.390094] pci 0000:00:00.0: cannot attach to SMMU, is it on the same
> bus?
> > [    2.390110] iommu: Adding device 0000:00:00.0 to group 1
> > [    2.390274] pci 0000:01:00.0: reg 0x20: initial BAR value 0x00000000 invalid
> > [    2.390481] pci 0000:01:00.0: cannot attach to SMMU, is it on the same
> bus?
> > [    2.390496] iommu: Adding device 0000:01:00.0 to group 1
> > [    2.390533] in pci_bridge_check_ranges io 101
> > [    2.390545] in pci_bridge_check_ranges io 2 101
> > [    2.390575] pci 0000:00:00.0: BAR 8: assigned [mem 0xe0100000-
> 0xe02fffff]
> > [    2.390592] pci 0000:00:00.0: BAR 7: assigned [io  0x1000-0x1fff]
> > [    2.390609] pci 0000:00:00.0: BAR 6: assigned [mem 0xe0300000-
> 0xe03007ff pref]
> > [    2.390636] pci 0000:01:00.0: BAR 0: assigned [mem 0xe0100000-0xe01fffff
> 64bit]
> > [    2.390669] pci 0000:01:00.0: BAR 2: assigned [mem 0xe0200000-0xe02fffff
> 64bit]
> > [    2.390702] pci 0000:01:00.0: BAR 4: assigned [io  0x1000-0x103f]
> > [    2.390721] pci 0000:00:00.0: PCI bridge to [bus 01-0c]
> > [    2.390785] pci 0000:00:00.0:   bridge window [io  0x1000-0x1fff]
> > [    2.390823] pci 0000:00:00.0:   bridge window [mem 0xe0100000-
> 0xe02fffff]
> >
Thanks a lot Lorenzo for your kind and clear explanation, I will dig through the hardware and correct my device tree.

From the above log, why is the IO space allocated as only 4k even though I'm allocating 64k through the device tree?

Regards,
Bharat




^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: Purpose of pci_remap_iospace
  2016-07-14 14:56             ` Lorenzo Pieralisi
  2016-07-14 15:05               ` Bharat Kumar Gogada
@ 2016-07-14 15:12               ` Arnd Bergmann
  2016-07-14 15:27                 ` Lorenzo Pieralisi
  2016-07-15  5:21               ` Bharat Kumar Gogada
  2 siblings, 1 reply; 22+ messages in thread
From: Arnd Bergmann @ 2016-07-14 15:12 UTC (permalink / raw)
  To: Lorenzo Pieralisi
  Cc: Bharat Kumar Gogada, linux-pci, linux-kernel, Bjorn Helgaas,
	Liviu.Dudau, nofooter, thomas.petazzoni

On Thursday, July 14, 2016 3:56:24 PM CEST Lorenzo Pieralisi wrote:
> On Thu, Jul 14, 2016 at 01:32:13PM +0000, Bharat Kumar Gogada wrote:
> 
> [...]
> 
> > Hi Lorenzo,
> > 
> > I missed something in my device tree now I corrected it.
> > 
> > ranges = <0x01000000 0x00000000 0xe0000000 0x00000000 0xe0000000 0 0x00010000   //io
> 
> You have not missed anything, you changed the PCI bus address at
> which your host bridge responds to IO space and it must match
> your configuration.

I'd always recommend mapping the I/O space to PCI address zero, but
evidently the hardware is not configured that way here.

	Arnd

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: Purpose of pci_remap_iospace
  2016-07-14 15:05               ` Bharat Kumar Gogada
@ 2016-07-14 15:20                 ` Lorenzo Pieralisi
  0 siblings, 0 replies; 22+ messages in thread
From: Lorenzo Pieralisi @ 2016-07-14 15:20 UTC (permalink / raw)
  To: Bharat Kumar Gogada
  Cc: Arnd Bergmann, linux-pci, linux-kernel, Bjorn Helgaas,
	Liviu.Dudau, nofooter, thomas.petazzoni

On Thu, Jul 14, 2016 at 03:05:40PM +0000, Bharat Kumar Gogada wrote:

[...]

> > On Thu, Jul 14, 2016 at 01:32:13PM +0000, Bharat Kumar Gogada wrote:
> > > ranges = <0x01000000 0x00000000 0xe0000000 0x00000000 0xe0000000 0
> > 0x00010000   //io
> >
> > You have not missed anything, you changed the PCI bus address at which
> > your host bridge responds to IO space and it must match your configuration.
> > At what PCI bus address your host bridge maps IO space ?
> >
> > >                      0x02000000 0x00000000 0xe0100000 0x00000000
> > > 0xe0100000 0 0x0ef00000>; //non prefetchabe memory
> > >
> > > [    2.389498] nwl-pcie fd0e0000.pcie: Link is UP
> > > [    2.389541] PCI host bridge /amba/pcie@fd0e0000 ranges:
> > > [    2.389558]   No bus range found for /amba/pcie@fd0e0000, using [bus
> > 00-ff]
> > > [    2.389583]    IO 0xe0000000..0xe000ffff -> 0xe0000000
> > > [    2.389624]   MEM 0xe0100000..0xeeffffff -> 0xe0100000
> > > [    2.389803] nwl-pcie fd0e0000.pcie: PCI host bridge to bus 0000:00
> > > [    2.389822] pci_bus 0000:00: root bus resource [bus 00-ff]
> > > [    2.389839] pci_bus 0000:00: root bus resource [io  0x0000-0xffff] (bus
> > address [0xe0000000-0xe000ffff])
> > > [    2.389863] pci_bus 0000:00: root bus resource [mem 0xe0100000-
> > 0xeeffffff]
> > > [    2.390094] pci 0000:00:00.0: cannot attach to SMMU, is it on the same
> > bus?
> > > [    2.390110] iommu: Adding device 0000:00:00.0 to group 1
> > > [    2.390274] pci 0000:01:00.0: reg 0x20: initial BAR value 0x00000000 invalid
> > > [    2.390481] pci 0000:01:00.0: cannot attach to SMMU, is it on the same
> > bus?
> > > [    2.390496] iommu: Adding device 0000:01:00.0 to group 1
> > > [    2.390533] in pci_bridge_check_ranges io 101
> > > [    2.390545] in pci_bridge_check_ranges io 2 101
> > > [    2.390575] pci 0000:00:00.0: BAR 8: assigned [mem 0xe0100000-
> > 0xe02fffff]
> > > [    2.390592] pci 0000:00:00.0: BAR 7: assigned [io  0x1000-0x1fff]
> > > [    2.390609] pci 0000:00:00.0: BAR 6: assigned [mem 0xe0300000-
> > 0xe03007ff pref]
> > > [    2.390636] pci 0000:01:00.0: BAR 0: assigned [mem 0xe0100000-0xe01fffff
> > 64bit]
> > > [    2.390669] pci 0000:01:00.0: BAR 2: assigned [mem 0xe0200000-0xe02fffff
> > 64bit]
> > > [    2.390702] pci 0000:01:00.0: BAR 4: assigned [io  0x1000-0x103f]
> > > [    2.390721] pci 0000:00:00.0: PCI bridge to [bus 01-0c]
> > > [    2.390785] pci 0000:00:00.0:   bridge window [io  0x1000-0x1fff]
> > > [    2.390823] pci 0000:00:00.0:   bridge window [mem 0xe0100000-
> > 0xe02fffff]
> > >
> Thanks a lot Loenzo for your kind and clear explanation, I will dig
> through hardware and correct my device tree.
> 
> From above log why IO space is allocated as only 4k even though I'm
> allocating 64k through device tree ?

You are not allocating anything in the device tree; you are just
defining the physical memory window at which your PCI host bridge
address decoders "map" PCI IO cycles.

PCI core code, while assigning resources, sizes the PCI bridge
IO window BAR by sizing the downstream PCI devices' BARs:

See:

pbus_size_io()

PCI core won't allocate an IO window to your PCI bridge window BARs
bigger than what's necessary (according to downstream devices), keeping
alignment in mind.
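
(A rough, userspace-only illustration of that sizing, assuming the usual 4 KB granularity of PCI-to-PCI bridge IO windows:)

#include <stdio.h>

int main(void)
{
        unsigned long ep_io   = 64;      /* endpoint Region 4: I/O, size=64  */
        unsigned long granule = 0x1000;  /* bridge IO window granularity     */
        unsigned long window  = (ep_io + granule - 1) & ~(granule - 1);

        /* prints 0x1000 -> matches the [io 0x1000-0x1fff] window above */
        printf("bridge IO window size: 0x%lx\n", window);
        return 0;
}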

Is that clear ?

> This email and any attachments are intended for the sole use of the named recipient(s) and contain(s) confidential information that may be proprietary, privileged or copyrighted under applicable law. If you are not the intended recipient, do not read, copy, or forward this email message or any attachments. Delete this email message and any attachments immediately.

This disclaimer should disappear if you want to discuss patches on
public mailing lists.

Thanks,
Lorenzo

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: Purpose of pci_remap_iospace
  2016-07-14 15:12               ` Arnd Bergmann
@ 2016-07-14 15:27                 ` Lorenzo Pieralisi
  0 siblings, 0 replies; 22+ messages in thread
From: Lorenzo Pieralisi @ 2016-07-14 15:27 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: Bharat Kumar Gogada, linux-pci, linux-kernel, Bjorn Helgaas,
	Liviu.Dudau, nofooter, thomas.petazzoni

On Thu, Jul 14, 2016 at 05:12:01PM +0200, Arnd Bergmann wrote:
> On Thursday, July 14, 2016 3:56:24 PM CEST Lorenzo Pieralisi wrote:
> > On Thu, Jul 14, 2016 at 01:32:13PM +0000, Bharat Kumar Gogada wrote:
> > 
> > [...]
> > 
> > > Hi Lorenzo,
> > > 
> > > I missed something in my device tree now I corrected it.
> > > 
> > > ranges = <0x01000000 0x00000000 0xe0000000 0x00000000 0xe0000000 0 0x00010000   //io
> > 
> > You have not missed anything, you changed the PCI bus address at
> > which your host bridge responds to IO space and it must match
> > your configuration.
> 
> I'd always recommend mapping the I/O space to PCI address zero, but
> evidently the hardware is not configured that way here.

+1 and it is a message that must be heeded by Xilinx folks before
merging the host controller changes and respective DT bindings/dts.

Lorenzo

^ permalink raw reply	[flat|nested] 22+ messages in thread

* RE: Purpose of pci_remap_iospace
  2016-07-14 14:56             ` Lorenzo Pieralisi
  2016-07-14 15:05               ` Bharat Kumar Gogada
  2016-07-14 15:12               ` Arnd Bergmann
@ 2016-07-15  5:21               ` Bharat Kumar Gogada
  2016-07-15  6:55                 ` Arnd Bergmann
  2 siblings, 1 reply; 22+ messages in thread
From: Bharat Kumar Gogada @ 2016-07-15  5:21 UTC (permalink / raw)
  To: Lorenzo Pieralisi
  Cc: Arnd Bergmann, linux-pci, linux-kernel, Bjorn Helgaas,
	Liviu.Dudau, nofooter, thomas.petazzoni

> On Thu, Jul 14, 2016 at 01:32:13PM +0000, Bharat Kumar Gogada wrote:
>
> [...]
>
> > Hi Lorenzo,
> >
> > I missed something in my device tree now I corrected it.
> >
> > ranges = <0x01000000 0x00000000 0xe0000000 0x00000000 0xe0000000 0
> 0x00010000   //io
>
> You have not missed anything, you changed the PCI bus address at which
> your host bridge responds to IO space and it must match your configuration.
> At what PCI bus address your host bridge maps IO space ?
>
Our host bridge does not have a dedicated address space mapped for IO transactions.
Generating IO transactions requires some register read and write operations
in the bridge logic.

So the above PCI address does not come into the picture; is there an alternate way to handle IO
BARs with our kind of hardware architecture?

Thanks & Regards,
Bharat



^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: Purpose of pci_remap_iospace
  2016-07-15  5:21               ` Bharat Kumar Gogada
@ 2016-07-15  6:55                 ` Arnd Bergmann
  0 siblings, 0 replies; 22+ messages in thread
From: Arnd Bergmann @ 2016-07-15  6:55 UTC (permalink / raw)
  To: Bharat Kumar Gogada
  Cc: Lorenzo Pieralisi, linux-pci, linux-kernel, Bjorn Helgaas,
	Liviu.Dudau, nofooter, thomas.petazzoni, Rongrong Zou,
	Rongrong Zou, linux-arm-kernel

On Friday, July 15, 2016 5:21:20 AM CEST Bharat Kumar Gogada wrote:
> > On Thu, Jul 14, 2016 at 01:32:13PM +0000, Bharat Kumar Gogada wrote:
> >
> > [...]
> >
> > > Hi Lorenzo,
> > >
> > > I missed something in my device tree now I corrected it.
> > >
> > > ranges = <0x01000000 0x00000000 0xe0000000 0x00000000 0xe0000000 0
> > 0x00010000   //io
> >
> > You have not missed anything, you changed the PCI bus address at which
> > your host bridge responds to IO space and it must match your configuration.
> > At what PCI bus address your host bridge maps IO space ?
> >
> Our host bridge does not have a dedicated address space mapped for IO transactions.
> Generating IO transactions requires some register read and write operations
> in the bridge logic.
> 
> So the above PCI address does not come into the picture; is there an alternate way to handle IO
> BARs with our kind of hardware architecture?

Hisilicon has a similar thing on one of their LPC bridges, and
Rongrong Zou has implemented something for it in the past, but I
think it never got merged.

https://lkml.org/lkml/2015/12/29/154 has one version of his
proposal, not sure if that was the latest one or if something
newer exists.
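
(For what it's worth, a purely hypothetical sketch of such "indirect" IO accessors — the register offsets and command values below are invented for illustration and are not taken from the Hisilicon patch or any real controller:)

#include <linux/io.h>

#define IND_IO_ADDR  0x100   /* hypothetical: target IO port address  */
#define IND_IO_DATA  0x104   /* hypothetical: data in/out             */
#define IND_IO_CMD   0x108   /* hypothetical: 1 = read, 2 = write     */

static u8 indirect_inb(void __iomem *regs, unsigned long port)
{
        writel(port, regs + IND_IO_ADDR);
        writel(1, regs + IND_IO_CMD);           /* trigger an IO read cycle  */
        return readl(regs + IND_IO_DATA) & 0xff;
}

static void indirect_outb(void __iomem *regs, u8 val, unsigned long port)
{
        writel(port, regs + IND_IO_ADDR);
        writel(val, regs + IND_IO_DATA);
        writel(2, regs + IND_IO_CMD);           /* trigger an IO write cycle */
}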

	Arnd

^ permalink raw reply	[flat|nested] 22+ messages in thread

end of thread, other threads:[~2016-07-15  6:56 UTC | newest]

Thread overview: 22+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-07-12  6:57 Purpose of pci_remap_iospace Bharat Kumar Gogada
2016-07-12  8:31 ` Arnd Bergmann
2016-07-12  8:40   ` Bharat Kumar Gogada
2016-07-13  8:11   ` Bharat Kumar Gogada
2016-07-13  8:30     ` Arnd Bergmann
2016-07-13 12:30       ` Bharat Kumar Gogada
2016-07-13 13:28         ` Arnd Bergmann
2016-07-13 15:16           ` Bharat Kumar Gogada
2016-07-13 15:28             ` Arnd Bergmann
2016-07-13 15:42               ` Liviu.Dudau
2016-07-13 16:13             ` Lorenzo Pieralisi
2016-07-13 13:46         ` Lorenzo Pieralisi
2016-07-14  6:03           ` Bharat Kumar Gogada
2016-07-14 13:32           ` Bharat Kumar Gogada
2016-07-14 14:56             ` Lorenzo Pieralisi
2016-07-14 15:05               ` Bharat Kumar Gogada
2016-07-14 15:20                 ` Lorenzo Pieralisi
2016-07-14 15:12               ` Arnd Bergmann
2016-07-14 15:27                 ` Lorenzo Pieralisi
2016-07-15  5:21               ` Bharat Kumar Gogada
2016-07-15  6:55                 ` Arnd Bergmann
2016-07-13 13:24     ` Liviu.Dudau

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).