From: Julien Grall <julien@xen.org>
To: "Roger Pau Monné" <roger.pau@citrix.com>
Cc: Rahul Singh <rahul.singh@arm.com>,
	Bertrand.Marquis@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	xen-devel@lists.xenproject.org, nd@arm.com,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [RFC PATCH v1 1/4] arm/pci: PCI setup and PCI host bridge discovery within XEN on ARM.
Date: Fri, 24 Jul 2020 16:46:10 +0100
Message-ID: <69e478a9-7eee-4407-e811-2308dff19b79@xen.org>
In-Reply-To: <20200724152905.GM7191@Air-de-Roger>



On 24/07/2020 16:29, Roger Pau Monné wrote:
> On Fri, Jul 24, 2020 at 04:15:47PM +0100, Julien Grall wrote:
>>
>>
>> On 24/07/2020 15:44, Roger Pau Monné wrote:
>>>> diff --git a/xen/arch/arm/pci/Makefile b/xen/arch/arm/pci/Makefile
>>>> new file mode 100644
>>>> index 0000000000..358508b787
>>>> --- /dev/null
>>>> +++ b/xen/arch/arm/pci/Makefile
>>>> @@ -0,0 +1,4 @@
>>>> +obj-y += pci.o
>>>> +obj-y += pci-host-generic.o
>>>> +obj-y += pci-host-common.o
>>>> +obj-y += pci-access.o
>>>
>>> The Kconfig option mentions the support being explicitly for ARM64,
>>> would it be better to place the code in arch/arm/arm64 then?
>> I don't believe any of the code in this series is very arm64 specific. I
>> guess it was just only tested on arm64. So I would rather keep that under
>> arm/pci.
> 
> Ack. Could the Kconfig be adjusted to not depend on ARM_64? Just
> stating it's only been tested on Arm64 would be enough IMO.

We already have an option to select PCI (see CONFIG_HAS_PCI). So I would 
prefer to reuse it (possibly renamed to CONFIG_PCI) rather than invent 
an Arm-specific one.

Regarding the dependency, it will depend on whether it is possible to 
make it build easily on Arm32. If not, then we will need to keep the 
ARM_64 dependency.
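
Something along these lines would do, I think. This is a rough sketch 
only -- the exact symbols and placement are assumptions on my part, not 
what the series does -- reusing the existing HAS_PCI option in 
xen/arch/arm/Kconfig and keeping the Arm64 restriction until an Arm32 
build has actually been tried:

    # Rough sketch only -- symbols and placement are assumptions:
    # reuse the existing HAS_PCI option rather than adding a new
    # Arm-specific one, and keep it Arm64-only for now.
    config ARM_64
            def_bool y
            depends on 64BIT
            select HAS_PCI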

> 
>>>> +
>>>> +    struct pci_host_bridge *bridge = pci_find_host_bridge(sbdf.seg, sbdf.bus);
>>>> +
>>>> +    if ( unlikely(!bridge) )
>>>> +    {
>>>> +        printk(XENLOG_ERR "Unable to find bridge for "PRI_pci"\n",
>>>> +                sbdf.seg, sbdf.bus, sbdf.dev, sbdf.fn);
>>>
>>> I had a patch to add a custom modifier to out printf format in
>>> order to handle pci_sbdf_t natively:
>>>
>>> https://patchew.org/Xen/20190822065132.48200-1-roger.pau@citrix.com/
>>>
>>> It missed maintainers Acks and was never committed. Since you are
>>> doing a bunch of work here, and likely adding a lot of SBDF related
>>> prints, feel free to import the modifier (%pp) and use in your code
>>> (do not attempt to switch existing users, or it's likely to get
>>> stuck again).
>>
>> I forgot about this patch :/. It would be good to revive it. Which acks are
>> you missing?
> 
> I only had an Ack from Jan, so I was missing Intel and AMD Acks, which
> would now only be Intel since AMD has been absorbed by the x86
> maintainers.

Ok. So, it should be easier to get it acked now :).
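
For reference, the quoted print would then become something along these 
lines. This is a sketch only, assuming the %pp modifier consumes a 
pointer to the pci_sbdf_t as in the linked series:

    /* Sketch only: the quoted print rewritten with the %pp modifier
     * from the linked series, assuming it takes a pci_sbdf_t pointer. */
    printk(XENLOG_ERR "Unable to find bridge for %pp\n", &sbdf);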

> 
>> [...]
>>
>>>> +static bool __init dt_pci_parse_bus_range(struct dt_device_node *dev,
>>>> +        struct pci_config_window *cfg)
>>>> +{
>>>> +    const __be32 *cells;
>>>
>>> It's my impression that while based on Linux this is not a verbatim
>>> copy of a Linux file, and tries to adhere with the Xen coding style.
>>> If so please use uint32_t here.
>>
>> uint32_t would be incorrect because this is a 32-bit value always in big
>> endian. I don't think we have other typedef to show it is a 32-bit BE value,
>> so __be32 is the best choice.
> 
> Oh, OK, so this is done to explicitly denote the endianness of a value
> on the type itself.

That's correct. On Linux, they use sparse to check that BE and LE 
fields are not mixed together. We don't have that in Xen, but at least 
this makes it more obvious what we are using.
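
To make it concrete, the kind of parsing we are talking about looks 
roughly like this. A sketch only: the pci_config_window field names are 
assumed for illustration and are not taken from the series.

    /*
     * Sketch only (not the code under review): read the standard
     * "bus-range" property, whose cells are big-endian 32-bit values,
     * hence the __be32 type and the be32_to_cpu() conversions.
     * The busn_start/busn_end field names are assumed for illustration.
     */
    static bool __init dt_pci_parse_bus_range(struct dt_device_node *dev,
                                              struct pci_config_window *cfg)
    {
        const __be32 *cells;
        uint32_t len;

        cells = dt_get_property(dev, "bus-range", &len);
        if ( !cells || len < 2 * sizeof(*cells) )
            return false;

        cfg->busn_start = be32_to_cpu(cells[0]);
        cfg->busn_end = be32_to_cpu(cells[1]);

        return true;
    }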

> 
>> [...]
>>
>>>> +
>>>> +    if ( acpi_disabled )
>>>> +        dt_pci_init();
>>>> +    else
>>>> +        acpi_pci_init();
>>>
>>> Isn't there an enum or something that tells you whether the system
>>> description is coming from ACPI or from DT?
>>>
>>> This if .. else seems fragile.
>>>
>>
>> This is the common way we do it on Arm.... I would welcome any improvement,
>> but I don't think this should be part of this work.
> 
> Ack. In any case I think that for ACPI, PCI init will get called by
> acpi_mmcfg_init as part of acpi_boot_init, so I'm not sure there's
> much point in having something about ACPI added here, as it seems this
> will be DT only?

acpi_boot_init() does not exist on Arm. Looking at x86, 
acpi_mmcfg_init() is not even called from that function.

In general, I would prefer if each subsystem took care of its own 
initialization. This makes it easier to figure out the difference 
between ACPI and DT. FWIW, this is in line with the majority of the Arm 
code.
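
I.e. the pattern referred to above, mirroring the quoted hunk (only the 
two init calls come from the patch; the surrounding function is assumed 
here as an illustration):

    /*
     * Each subsystem dispatches on acpi_disabled itself rather than
     * relying on a central ACPI table walk to drive its init.
     */
    static int __init pci_init(void)
    {
        if ( acpi_disabled )
            dt_pci_init();      /* system described via device tree */
        else
            acpi_pci_init();    /* system described via ACPI */

        return 0;
    }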

Cheers,

-- 
Julien Grall


Thread overview: 48+ messages
2020-07-23 15:40 [RFC PATCH v1 0/4] PCI devices passthrough on Arm Rahul Singh
2020-07-23 15:40 ` [RFC PATCH v1 1/4] arm/pci: PCI setup and PCI host bridge discovery within XEN on ARM Rahul Singh
2020-07-23 23:38   ` Stefano Stabellini
2020-07-24  7:03     ` Oleksandr Andrushchenko
2020-07-24  8:05       ` Julien Grall
2020-07-24 17:47         ` Stefano Stabellini
2020-07-27 15:27         ` Rahul Singh
2020-07-27 15:20       ` Rahul Singh
2020-07-24  8:44     ` Julien Grall
2020-07-24 17:41       ` Stefano Stabellini
2020-07-24 18:21         ` Julien Grall
2020-07-24 18:32           ` Stefano Stabellini
2020-07-24 19:24             ` Julien Grall
2020-07-24 23:46               ` Stefano Stabellini
2020-07-25  9:59                 ` Julien Grall
2020-07-27 11:06                   ` Roger Pau Monné
2020-07-28  0:06                     ` Stefano Stabellini
2020-07-28  8:33                       ` Roger Pau Monné
2020-07-28 18:33                         ` Stefano Stabellini
2020-07-26  7:01                 ` Jan Beulich
2020-07-27 13:27     ` Rahul Singh
2020-07-24  8:23   ` Julien Grall
2020-07-27 15:29     ` Rahul Singh
2020-07-24 14:44   ` Roger Pau Monné
2020-07-24 15:15     ` Julien Grall
2020-07-24 15:29       ` Roger Pau Monné
2020-07-24 15:42         ` Roger Pau Monné
2020-07-24 15:46         ` Julien Grall [this message]
2020-07-24 16:01       ` Jan Beulich
2020-07-24 16:54         ` Julien Grall
2020-07-27 10:34           ` Roger Pau Monné
2020-07-28  8:06     ` Rahul Singh
2020-07-28  8:21       ` Roger Pau Monné
2020-07-23 15:40 ` [RFC PATCH v1 2/4] xen/arm: Discovering PCI devices and add the PCI devices in XEN Rahul Singh
2020-07-23 20:44   ` Stefano Stabellini
2020-07-24  7:14     ` Oleksandr Andrushchenko
2020-07-24  8:19       ` Julien Grall
2020-07-27 16:10       ` Rahul Singh
2020-07-24 14:49     ` Roger Pau Monné
2020-07-27  8:40     ` Rahul Singh
2020-07-23 15:40 ` [RFC PATCH v1 3/4] xen/arm: Enable the existing x86 virtual PCI support for ARM Rahul Singh
2020-07-23 23:39   ` Stefano Stabellini
2020-07-24 15:08   ` Roger Pau Monné
2020-07-23 15:40 ` [RFC PATCH v1 4/4] arm/libxl: Emulated PCI device tree node in libxl Rahul Singh
2020-07-23 23:39   ` Stefano Stabellini
2020-07-24  7:55     ` Oleksandr Andrushchenko
2020-07-24  9:11     ` Julien Grall
2020-07-27 13:40     ` Rahul Singh
