From: Rahul Singh <Rahul.Singh@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel <xen-devel@lists.xenproject.org>,
	Bertrand Marquis <Bertrand.Marquis@arm.com>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v1 06/14] xen/arm: Add support for PCI ecam operations
Date: Wed, 15 Sep 2021 16:38:45 +0000	[thread overview]
Message-ID: <8B84E0D6-70A7-4C8F-ACBE-D1773F6C8FAD@arm.com> (raw)
In-Reply-To: <alpine.DEB.2.21.2109141543090.21985@sstabellini-ThinkPad-T480s>

Hi Stefano,

> On 15 Sep 2021, at 12:06 am, Stefano Stabellini <sstabellini@kernel.org> wrote:
> 
> On Tue, 14 Sep 2021, Rahul Singh wrote:
>>>> +        return NULL;
>>>> +
>>>> +    busn -= cfg->busn_start;
>>>> +    base = cfg->win + (busn << cfg->ops->bus_shift);
>>>> +
>>>> +    return base + (PCI_DEVFN(sbdf_t.dev, sbdf_t.fn) << devfn_shift) + where;
>>>> +}
>>> 
>>> I understand that the arm32 part is not implemented and not part of this
>>> series, that's fine. However if the plan is that arm32 will dynamically
>>> map each bus individually, then I imagine this function will have an
>>> ioremap in the arm32 version. Which means that we also need an
>>> unmap_bus call in struct pci_ops. I understand that pci_ecam_unmap_bus
>>> would be a NOP today for arm64, but I think it makes sense to have it if
>>> we want the API to be generic.
>> 
>> As per my understanding, we don't need pci_ecam_unmap_bus(..), as I don't see any use case for unmapping
>> the bus dynamically. We can add support for per-bus mapping for arm32 once we implement the arm32 part.
>> Let me know your view on this.
> 
> In the patch titled "xen/arm: PCI host bridge discovery within XEN on
> ARM" there is the following in-code comment:
> 
> * On 64-bit systems, we do a single ioremap for the whole config space
> * since we have enough virtual address range available.  On 32-bit, we
> * ioremap the config space for each bus individually.
> *
> * As of now only 64-bit is supported 32-bit is not supported.
> 
> 
> So I take it that on arm32 we don't have enough virtual address range
> available, therefore we cannot ioremap the whole range. Instead, we'll
> have to ioremap the config space of each bus individually.

Yes, you are right; my understanding is the same.
> 
> I assumed that the idea was to call ioremap and iounmap dynamically,
> otherwise the total amount of virtual address range required would still
> be the same.

As per my understanding, for 32-bit we need per-bus mapping because we don't have enough virtual address
space in one contiguous chunk, but we can find virtual address space in separate chunks.

I am not sure we need to map/unmap the virtual address space on each read/write call.
I just checked the Linux code [1], and there too the mapping is done once, not on each read/write call.

[1]  https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/pci/ecam.c?id=ea4aae05974334e9837d86ff1cb716bad36b3ca8#n76

> I also thought that the dynamic mapping function, the one
> which would end up calling ioremap on arm32, would be pci_ecam_map_bus.
> If so, then we are missing the corresponding unmapping function, the one
> that would call iounmap on arm32 and do nothing on arm64, called before
> returning from pci_generic_config_read and pci_generic_config_write.
> 
> As always, I am not asking for the arm32 implementation, but if we are
> introducing internal APIs, it would be good that they are consistent.
> 
> 
> 
>>>> @@ -50,10 +51,41 @@ struct pci_host_bridge {
>>>>    u8 bus_start;                    /* Bus start of this bridge. */
>>>>    u8 bus_end;                      /* Bus end of this bridge. */
>>>>    void *sysdata;                   /* Pointer to the config space window*/
>>>> +    const struct pci_ops *ops;
>>>> };
>>>> 
>>>> +struct pci_ops {
>>>> +    void __iomem *(*map_bus)(struct pci_host_bridge *bridge, uint32_t sbdf,
>>>> +                             uint32_t offset);
>>>> +    int (*read)(struct pci_host_bridge *bridge, uint32_t sbdf,
>>>> +                uint32_t reg, uint32_t len, uint32_t *value);
>>>> +    int (*write)(struct pci_host_bridge *bridge, uint32_t sbdf,
>>>> +                 uint32_t reg, uint32_t len, uint32_t value);
>>>> +};
>>>> +
>>>> +/*
>>>> + * struct to hold pci ops and bus shift of the config window
>>>> + * for a PCI controller.
>>>> + */
>>>> +struct pci_ecam_ops {
>>>> +    unsigned int            bus_shift;
>>>> +    struct pci_ops          pci_ops;
>>>> +    int (*init)(struct pci_config_window *);
>>> 
>>> Is this last member of the struct needed/used?
>> 
>> Yes, it will be used by some vendor-specific boards (e.g. N1SDP). Please check the link below:
>> https://git.linaro.org/landing-teams/working/arm/n1sdp-pcie-quirk.git/commit/?id=04b7e76d0fe6481a803f58e54e008a1489d713a5
> 
> OK. I don't doubt that there might be bridge-specific initializations,
> but we already have things like:
> 
> DT_DEVICE_START(pci_gen, "PCI HOST ZYNQMP", DEVICE_PCI)
> .dt_match = gen_pci_dt_match,
> .init = gen_pci_dt_init,
> DT_DEVICE_END
> 
> The question is do we actually need two different vendor-specific init
> functions? One is driven by device tree, e.g. ZynqMP is calling
> gen_pci_dt_init. The other one is pci_ecam_ops->init, which is called
> from the following chain:
> 
> DT_DEVICE_START -> gen_pci_dt_init -> pci_host_common_probe ->
> gen_pci_init -> pci_generic_ecam_ops.init
> 
> What's the difference between gen_pci_dt_init and pci_ecam_ops.init in
> terms of purpose?

Yes, I agree with you that we don't need two init functions. I will modify the code as below:
DT_DEVICE_START ->pci_host_common_probe -> gen_pci_init -> pci_generic_ecam_ops.init

Regards,
Rahul
> 
> I am happy to have pci_ecam_ops.init if it serves a different purpose
> from DT_DEVICE_START.init. In that case we might want to add an in-code
> comment so that an engineer would know where to add vendor-specific code
> (whether to DT_DEVICE_START.init or to pci_ecam_ops.init).

