From: Arnd Bergmann
To: Bharat Kumar Gogada
Cc: "linux-pci@vger.kernel.org", "linux-kernel@vger.kernel.org",
 Bjorn Helgaas, "Liviu.Dudau@arm.com", nofooter,
 "thomas.petazzoni@free-electrons.com"
Subject: Re: Purpose of pci_remap_iospace
Date: Wed, 13 Jul 2016 10:30:57 +0200
Message-ID: <4235946.u1vYRsOpTR@wuerfel>
In-Reply-To: <8520D5D51A55D047800579B094147198258B8952@XAP-PVEXMBX01.xlnx.xilinx.com>
References: <8520D5D51A55D047800579B094147198258B85DC@XAP-PVEXMBX01.xlnx.xilinx.com>
 <3927657.6zNCtCntSU@wuerfel>
 <8520D5D51A55D047800579B094147198258B8952@XAP-PVEXMBX01.xlnx.xilinx.com>

On Wednesday, July 13, 2016 8:11:56 AM CEST Bharat Kumar Gogada wrote:
> > Subject: Re: Purpose of pci_remap_iospace
> >
> > On Tuesday, July 12, 2016 6:57:10 AM CEST Bharat Kumar Gogada wrote:
> > > Hi,
> > >
> > > I have a query.
> > >
> > > Can anyone explain the purpose of the pci_remap_iospace() function in a
> > > root port driver?
> > >
> > > What is its dependency on the architecture?
> > >
> > > Here is my understanding: the above API takes the PCIe I/O resource and
> > > its to-be-mapped CPU address from the ranges property and remaps them
> > > into the virtual address space.
> > >
> > > So my question is, who uses these virtual addresses?
> >
> > The inb()/outb() functions declared in asm/io.h.
> >
> > > When an End Point requests I/O BARs, doesn't it get them from the above
> > > resource range (first parameter of the API) and do ioremap() to access
> > > this region?
> >
> > Device drivers generally do not ioremap() the I/O BARs but use
> > inb()/outb() directly. They can also call pci_iomap() and do
> > ioread8()/iowrite8() on the pointer returned from that function, but
> > generally the call to pci_iomap() then returns a pointer into the
> > virtual address range that is already mapped.
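As a rough illustration (a sketch only, not taken from any real driver;
the probe function and BAR number are made up), the driver side would
look something like this:

    #include <linux/pci.h>
    #include <linux/io.h>

    static int example_probe(struct pci_dev *pdev, const struct pci_device_id *id)
    {
            void __iomem *regs;
            u32 val;

            if (pci_enable_device(pdev))
                    return -ENODEV;

            /* assume BAR 4 is an I/O BAR; pci_iomap() handles I/O and MMIO BARs alike */
            regs = pci_iomap(pdev, 4, 0);
            if (!regs)
                    return -ENOMEM;

            val = ioread32(regs);           /* no explicit ioremap() needed */
            iowrite32(val, regs + 0x04);

            pci_iounmap(pdev, regs);
            return 0;
    }

The same ioread32()/iowrite32() calls work whether the architecture
implements I/O space with special instructions (as on x86) or through a
memory-mapped window set up by pci_remap_iospace().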
> >
> > > But why is the root complex driver mapping this address region?
> >
> > The PCI core does not know that the I/O space is memory mapped.
> > On x86 and a few others, I/O space is not memory mapped but requires
> > the use of special CPU instructions.
>
> Thanks Arnd.
>
> I'm facing an issue in testing I/O BARs on our SoC.
>
> I added the following ranges in our device tree:
> ranges = <0x01000000 0x00000000 0x00000000 0x00000000 0xe0000000 0 0x00100000   //io
>           0x02000000 0x00000000 0xe0100000 0x00000000 0xe0100000 0 0x0ef00000>; //non-prefetchable memory
>
> And I'm using the above API to map the res and CPU physical address in my driver.

I notice you have 1MB of I/O space here

> Kernel boot log:
> [ 2.345294] nwl-pcie fd0e0000.pcie: Link is UP
> [ 2.345339] PCI host bridge /amba/pcie@fd0e0000 ranges:
> [ 2.345356] No bus range found for /amba/pcie@fd0e0000, using [bus 00-ff]
> [ 2.345382] IO 0xe0000000..0xe00fffff -> 0x00000000
> [ 2.345401] MEM 0xe0100000..0xeeffffff -> 0xe0100000
> [ 2.345498] nwl-pcie fd0e0000.pcie: PCI host bridge to bus 0000:00
> [ 2.345517] pci_bus 0000:00: root bus resource [bus 00-ff]
> [ 2.345533] pci_bus 0000:00: root bus resource [io 0x0000-0xfffff]

and all of it gets mapped by the PCI core. Usually you only have 64K of
I/O space per host bridge, and the PCI core should perhaps not try to map
all of it, though I don't think this is actually your problem here.

> [ 2.345550] pci_bus 0000:00: root bus resource [mem 0xe0100000-0xeeffffff]
> [ 2.345770] pci 0000:00:00.0: cannot attach to SMMU, is it on the same bus?
> [ 2.345786] iommu: Adding device 0000:00:00.0 to group 1
> [ 2.346142] pci 0000:01:00.0: cannot attach to SMMU, is it on the same bus?
> [ 2.346158] iommu: Adding device 0000:01:00.0 to group 1
> [ 2.346213] pci 0000:00:00.0: BAR 8: assigned [mem 0xe0100000-0xe02fffff]
> [ 2.346234] pci 0000:01:00.0: BAR 0: assigned [mem 0xe0100000-0xe01fffff 64bit]
> [ 2.346268] pci 0000:01:00.0: BAR 2: assigned [mem 0xe0200000-0xe02fffff 64bit]
> [ 2.346300] pci 0000:01:00.0: BAR 4: no space for [io size 0x0040]
> [ 2.346316] pci 0000:01:00.0: BAR 4: failed to assign [io size 0x0040]
> [ 2.346333] pci 0000:00:00.0: PCI bridge to [bus 01-0c]
> [ 2.346350] pci 0000:00:00.0: bridge window [mem 0xe0100000-0xe02fffff]
>
> I/O assignment fails.

I would guess that the I/O space is not registered correctly. Is this
drivers/pci/host/pcie-xilinx.c? We have had problems with this in the
past, since almost nobody uses I/O space and it requires several steps
to all be done correctly.

The line " IO 0xe0000000..0xe00fffff -> 0x00000000" from your log
actually comes from the driver parsing the DT, and that seems to be
correct. Can you add a printk to pci_add_resource_offset() to show which
resources actually get added and what the offset is?

Also, please show the contents of /proc/ioports and /proc/iomem.

	Arnd
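P.S. The kind of printk I have in mind would be a one-liner near the top
of pci_add_resource_offset() in drivers/pci/bus.c, roughly like this
(untested, purely for debugging):

    printk(KERN_INFO "%s: res %pR offset %#llx\n",
           __func__, res, (unsigned long long)offset);

That should tell us whether the 1MB I/O window and its offset ever make
it into the resource list that the PCI core scans.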