From: Rahul Singh <Rahul.Singh@arm.com>
To: Oleksandr Andrushchenko <andr2000@gmail.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	"oleksandr_tyshchenko@epam.com" <oleksandr_tyshchenko@epam.com>,
	"volodymyr_babchuk@epam.com" <volodymyr_babchuk@epam.com>,
	"Artem_Mygaiev@epam.com" <Artem_Mygaiev@epam.com>,
	"roger.pau@citrix.com" <roger.pau@citrix.com>,
	"jbeulich@suse.com" <jbeulich@suse.com>,
	"andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
	"george.dunlap@citrix.com" <george.dunlap@citrix.com>,
	"paul@xen.org" <paul@xen.org>,
	Bertrand Marquis <Bertrand.Marquis@arm.com>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Subject: Re: [PATCH v4 09/11] xen/arm: Setup MMIO range trap handlers for hardware domain
Date: Wed, 6 Oct 2021 10:36:11 +0000
Message-ID: <E9A4ABB2-F5D1-4468-8DBC-5FA95163101D@arm.com>
In-Reply-To: <20211004141151.132231-10-andr2000@gmail.com>

Hi Oleksandr,

> On 4 Oct 2021, at 3:11 pm, Oleksandr Andrushchenko <andr2000@gmail.com> wrote:
> 
> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
> 
> In order for vPCI to work, it needs to maintain the guest's and hardware
> domain's views of the configuration space. For example, the BARs and
> COMMAND registers require emulation for guests, and the guest view of
> those registers needs to stay in sync with the real contents of the
> relevant registers. For that, the ECAM address space also needs to be
> trapped for the hardware domain, so we need to implement PCI host
> bridge-specific callbacks to properly set up MMIO handlers for those
> ranges, depending on the particular host bridge implementation.
> 
> Signed-off-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

Reviewed-by: Rahul Singh <rahul.singh@arm.com>
Tested-by: Rahul Singh <rahul.singh@arm.com>

Regards,
Rahul

> ---
> Since v3:
> - fixed comment formatting
> Since v2:
> - removed unneeded assignment (count = 0)
> - removed unneeded header inclusion
> - update commit message
> Since v1:
> - Dynamically calculate the number of MMIO handlers required for vPCI
>   and update the total number accordingly
> - s/clb/cb
> - Do not introduce a new callback for MMIO handler setup
> ---
> xen/arch/arm/domain.c              |  2 ++
> xen/arch/arm/pci/pci-host-common.c | 28 ++++++++++++++++++++++++
> xen/arch/arm/vpci.c                | 34 ++++++++++++++++++++++++++++++
> xen/arch/arm/vpci.h                |  6 ++++++
> xen/include/asm-arm/pci.h          |  5 +++++
> 5 files changed, 75 insertions(+)
> 
> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> index 79012bf77757..fa6fcc5e467c 100644
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -733,6 +733,8 @@ int arch_domain_create(struct domain *d,
>     if ( (rc = domain_vgic_register(d, &count)) != 0 )
>         goto fail;
> 
> +    count += domain_vpci_get_num_mmio_handlers(d);
> +
>     if ( (rc = domain_io_init(d, count + MAX_IO_HANDLER)) != 0 )
>         goto fail;
> 
> diff --git a/xen/arch/arm/pci/pci-host-common.c b/xen/arch/arm/pci/pci-host-common.c
> index 592c01aae5bb..1eb4daa87365 100644
> --- a/xen/arch/arm/pci/pci-host-common.c
> +++ b/xen/arch/arm/pci/pci-host-common.c
> @@ -292,6 +292,34 @@ struct dt_device_node *pci_find_host_bridge_node(struct device *dev)
>     }
>     return bridge->dt_node;
> }
> +
> +int pci_host_iterate_bridges(struct domain *d,
> +                             int (*cb)(struct domain *d,
> +                                       struct pci_host_bridge *bridge))
> +{
> +    struct pci_host_bridge *bridge;
> +    int err;
> +
> +    list_for_each_entry( bridge, &pci_host_bridges, node )
> +    {
> +        err = cb(d, bridge);
> +        if ( err )
> +            return err;
> +    }
> +    return 0;
> +}
> +
> +int pci_host_get_num_bridges(void)
> +{
> +    struct pci_host_bridge *bridge;
> +    int count = 0;
> +
> +    list_for_each_entry( bridge, &pci_host_bridges, node )
> +        count++;
> +
> +    return count;
> +}
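
As an aside for anyone following this outside the Xen tree: the two helpers
above are the usual "walk the list, stop at the first callback error"
pattern. A self-contained sketch of the same idea follows; the types and
names (fake_bridge, iterate_bridges, setup_one_bridge) are simplified
stand-ins for illustration only, not the real Xen structures.

    /*
     * Standalone illustration of the iterate-with-callback pattern used by
     * pci_host_iterate_bridges() above.  Simplified stand-in types only.
     */
    #include <stdio.h>

    struct fake_bridge {
        const char *name;
        struct fake_bridge *next;
    };

    /* Walk all bridges, stopping at the first callback that returns non-zero. */
    static int iterate_bridges(struct fake_bridge *head,
                               int (*cb)(struct fake_bridge *bridge))
    {
        struct fake_bridge *b;
        int err;

        for ( b = head; b; b = b->next )
        {
            err = cb(b);
            if ( err )
                return err;
        }

        return 0;
    }

    static int setup_one_bridge(struct fake_bridge *bridge)
    {
        printf("registering traps for %s\n", bridge->name);
        return 0; /* 0 means "keep going", matching the Xen callback convention */
    }

    int main(void)
    {
        struct fake_bridge b1 = { "pcie@10000000", NULL };
        struct fake_bridge b0 = { "pcie@40000000", &b1 };

        return iterate_bridges(&b0, setup_one_bridge);
    }
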
> +
> /*
>  * Local variables:
>  * mode: C
> diff --git a/xen/arch/arm/vpci.c b/xen/arch/arm/vpci.c
> index 76c12b92814f..6e179cd3010b 100644
> --- a/xen/arch/arm/vpci.c
> +++ b/xen/arch/arm/vpci.c
> @@ -80,17 +80,51 @@ static const struct mmio_handler_ops vpci_mmio_handler = {
>     .write = vpci_mmio_write,
> };
> 
> +static int vpci_setup_mmio_handler(struct domain *d,
> +                                   struct pci_host_bridge *bridge)
> +{
> +    struct pci_config_window *cfg = bridge->cfg;
> +
> +    register_mmio_handler(d, &vpci_mmio_handler,
> +                          cfg->phys_addr, cfg->size, NULL);
> +    return 0;
> +}
> +
> int domain_vpci_init(struct domain *d)
> {
>     if ( !has_vpci(d) )
>         return 0;
> 
> +    if ( is_hardware_domain(d) )
> +        return pci_host_iterate_bridges(d, vpci_setup_mmio_handler);
> +
> +    /* Guest domains use what is programmed in their device tree. */
>     register_mmio_handler(d, &vpci_mmio_handler,
>                           GUEST_VPCI_ECAM_BASE, GUEST_VPCI_ECAM_SIZE, NULL);
> 
>     return 0;
> }
> 
> +int domain_vpci_get_num_mmio_handlers(struct domain *d)
> +{
> +    int count;
> +
> +    if ( is_hardware_domain(d) )
> +        /* One handler for each PCI host bridge's configuration space. */
> +        count = pci_host_get_num_bridges();
> +    else
> +        /*
> +         * VPCI_MSIX_MEM_NUM handlers for the MSI-X tables of each PCI
> +         * device being passed through. The maximum number of supported
> +         * devices is 32, as the virtual bus topology emulates the devices
> +         * as embedded endpoints.
> +         * +1 for a single emulated host bridge's configuration space.
> +         */
> +        count = VPCI_MSIX_MEM_NUM * 32 + 1;
> +
> +    return count;
> +}
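
To spell out the guest-side arithmetic: assuming VPCI_MSIX_MEM_NUM is 2 (one
MMIO region for the MSI-X table and one for the PBA of each device), this
reserves 2 * 32 + 1 = 65 handlers per guest. A trivial standalone check, with
the constants redefined locally as assumptions since the real definitions
live in the Xen headers:

    #include <stdio.h>

    /* Local stand-ins for the constants referenced above (assumed values). */
    #define VPCI_MSIX_MEM_NUM  2   /* assumed: MSI-X table + PBA regions per device */
    #define MAX_VIRT_DEVICES   32  /* emulated endpoints on the virtual bus */

    int main(void)
    {
        /* Mirrors the guest branch of domain_vpci_get_num_mmio_handlers(). */
        int count = VPCI_MSIX_MEM_NUM * MAX_VIRT_DEVICES + 1;

        printf("vPCI MMIO handlers reserved for a guest: %d\n", count); /* 65 */
        return 0;
    }
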
> +
> /*
>  * Local variables:
>  * mode: C
> diff --git a/xen/arch/arm/vpci.h b/xen/arch/arm/vpci.h
> index d8a7b0e3e802..27a2b069abd2 100644
> --- a/xen/arch/arm/vpci.h
> +++ b/xen/arch/arm/vpci.h
> @@ -17,11 +17,17 @@
> 
> #ifdef CONFIG_HAS_VPCI
> int domain_vpci_init(struct domain *d);
> +int domain_vpci_get_num_mmio_handlers(struct domain *d);
> #else
> static inline int domain_vpci_init(struct domain *d)
> {
>     return 0;
> }
> +
> +static inline int domain_vpci_get_num_mmio_handlers(struct domain *d)
> +{
> +    return 0;
> +}
> #endif
> 
> #endif /* __ARCH_ARM_VPCI_H__ */
> diff --git a/xen/include/asm-arm/pci.h b/xen/include/asm-arm/pci.h
> index ea87ec6006fc..a62d8bc60086 100644
> --- a/xen/include/asm-arm/pci.h
> +++ b/xen/include/asm-arm/pci.h
> @@ -108,6 +108,11 @@ static always_inline bool is_pci_passthrough_enabled(void)
> 
> void arch_pci_init_pdev(struct pci_dev *pdev);
> 
> +int pci_host_iterate_bridges(struct domain *d,
> +                             int (*cb)(struct domain *d,
> +                                       struct pci_host_bridge *bridge));
> +int pci_host_get_num_bridges(void);
> +
> #else   /*!CONFIG_HAS_PCI*/
> 
> struct arch_pci_dev { };
> -- 
> 2.25.1
> 



Thread overview: 27+ messages
2021-10-04 14:11 [PATCH v4 00/11] PCI devices passthrough on Arm, part 2 Oleksandr Andrushchenko
2021-10-04 14:11 ` [PATCH v4 01/11] xen/arm: Fix dev_is_dt macro definition Oleksandr Andrushchenko
2021-10-05  0:57   ` Stefano Stabellini
2021-10-04 14:11 ` [PATCH v4 02/11] xen/arm: Add new device type for PCI Oleksandr Andrushchenko
2021-10-06 10:15   ` Rahul Singh
2021-10-04 14:11 ` [PATCH v4 03/11] xen/arm: Introduce pci_find_host_bridge_node helper Oleksandr Andrushchenko
2021-10-06 10:24   ` Rahul Singh
2021-10-04 14:11 ` [PATCH v4 04/11] xen/device-tree: Make dt_find_node_by_phandle global Oleksandr Andrushchenko
2021-10-06 10:32   ` Rahul Singh
2021-10-04 14:11 ` [PATCH v4 05/11] xen/arm: Mark device as PCI while creating one Oleksandr Andrushchenko
2021-10-06 10:33   ` Rahul Singh
2021-10-04 14:11 ` [PATCH v4 06/11] xen/domain: Call pci_release_devices() when releasing domain resources Oleksandr Andrushchenko
2021-10-06 10:35   ` Rahul Singh
2021-10-04 14:11 ` [PATCH v4 07/11] libxl: Allow removing PCI devices for all types of domains Oleksandr Andrushchenko
2021-10-06 10:25   ` Rahul Singh
2021-10-04 14:11 ` [PATCH v4 08/11] libxl: Only map legacy PCI IRQs if they are supported Oleksandr Andrushchenko
2021-10-06 10:39   ` Rahul Singh
2021-10-04 14:11 ` [PATCH v4 09/11] xen/arm: Setup MMIO range trap handlers for hardware domain Oleksandr Andrushchenko
2021-10-06 10:36   ` Rahul Singh [this message]
2021-10-04 14:11 ` [PATCH v4 10/11] xen/arm: Do not map PCI ECAM and MMIO space to Domain-0's p2m Oleksandr Andrushchenko
2021-10-05  1:24   ` Stefano Stabellini
2021-10-05  4:32     ` Oleksandr Andrushchenko
2021-10-05 21:43       ` Stefano Stabellini
2021-10-06  4:55         ` Oleksandr Andrushchenko
2021-10-06 21:33           ` Stefano Stabellini
2021-10-04 14:11 ` [PATCH v4 11/11] xen/arm: Process pending vPCI map/unmap operations Oleksandr Andrushchenko
2021-10-06 10:24   ` Rahul Singh
