From: "Roger Pau Monné" <roger.pau@citrix.com>
To: Oleksandr Andrushchenko <andr2000@gmail.com>
Cc: <xen-devel@lists.xenproject.org>, <julien@xen.org>,
	<sstabellini@kernel.org>, <oleksandr_tyshchenko@epam.com>,
	<volodymyr_babchuk@epam.com>, <Artem_Mygaiev@epam.com>,
	<jbeulich@suse.com>, <andrew.cooper3@citrix.com>,
	<george.dunlap@citrix.com>, <paul@xen.org>,
	<bertrand.marquis@arm.com>, <rahul.singh@arm.com>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Subject: Re: [PATCH v5 07/14] vpci/header: handle p2m range sets per BAR
Date: Wed, 12 Jan 2022 16:15:24 +0100	[thread overview]
Message-ID: <Yd7wjP8WLWQxzLbq@Air-de-Roger> (raw)
In-Reply-To: <20211125110251.2877218-8-andr2000@gmail.com>

On Thu, Nov 25, 2021 at 01:02:44PM +0200, Oleksandr Andrushchenko wrote:
> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
> 
> Instead of handling a single range set that contains all the memory
> regions of all the BARs and ROM, have one range set per BAR.
> As the range sets are now created when a PCI device is added and destroyed
> when it is removed, make them named and accounted.
> 
> Note that rangesets were chosen here despite there being only up to
> 3 separate ranges in each set (typically just 1): a rangeset per BAR
> was chosen for ease of implementation and re-usability of existing code.
> 
> This is in preparation of making non-identity mappings in p2m for the
> MMIOs/ROM.

I think we don't want to support ROM for guests (at least initially),
so no need to mention it here.

> 
> Signed-off-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
> 
> ---
> Since v4:
> - use named range sets for BARs (Jan)
> - changes required by the new locking scheme
> - updated commit message (Jan)
> Since v3:
> - re-work vpci_cancel_pending accordingly to the per-BAR handling
> - s/num_mem_ranges/map_pending and s/uint8_t/bool
> - ASSERT(bar->mem) in modify_bars
> - create and destroy the rangesets on add/remove
> ---
>  xen/drivers/vpci/header.c | 190 +++++++++++++++++++++++++++-----------
>  xen/drivers/vpci/vpci.c   |  30 +++++-
>  xen/include/xen/vpci.h    |   3 +-
>  3 files changed, 166 insertions(+), 57 deletions(-)
> 
> diff --git a/xen/drivers/vpci/header.c b/xen/drivers/vpci/header.c
> index 8880d34ebf8e..cc49aa68886f 100644
> --- a/xen/drivers/vpci/header.c
> +++ b/xen/drivers/vpci/header.c
> @@ -137,45 +137,86 @@ bool vpci_process_pending(struct vcpu *v)
>          return false;
>  
>      spin_lock(&pdev->vpci_lock);
> -    if ( !pdev->vpci_cancel_pending && v->vpci.mem )
> +    if ( !pdev->vpci )
> +    {
> +        spin_unlock(&pdev->vpci_lock);
> +        return false;
> +    }
> +
> +    if ( !pdev->vpci_cancel_pending && v->vpci.map_pending )
>      {
>          struct map_data data = {
>              .d = v->domain,
>              .map = v->vpci.cmd & PCI_COMMAND_MEMORY,
>          };
> -        int rc = rangeset_consume_ranges(v->vpci.mem, map_range, &data);
> +        struct vpci_header *header = &pdev->vpci->header;
> +        unsigned int i;
>  
> -        if ( rc == -ERESTART )
> +        for ( i = 0; i < ARRAY_SIZE(header->bars); i++ )
>          {
> -            spin_unlock(&pdev->vpci_lock);
> -            return true;
> -        }
> +            struct vpci_bar *bar = &header->bars[i];
> +            int rc;
> +

You should check that bar->mem != NULL here; there's no need to allocate
a rangeset for non-mappable BARs.
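
Something along these lines would do (just a sketch, assuming bar->mem
is left NULL when no rangeset was allocated for a non-mappable BAR):

if ( !bar->mem )
    continue;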

> +            if ( rangeset_is_empty(bar->mem) )
> +                continue;
> +
> +            rc = rangeset_consume_ranges(bar->mem, map_range, &data);
> +
> +            if ( rc == -ERESTART )
> +            {
> +                spin_unlock(&pdev->vpci_lock);
> +                return true;
> +            }
>  
> -        if ( pdev->vpci )
>              /* Disable memory decoding unconditionally on failure. */
> -            modify_decoding(pdev,
> -                            rc ? v->vpci.cmd & ~PCI_COMMAND_MEMORY : v->vpci.cmd,
> +            modify_decoding(pdev, rc ? v->vpci.cmd & ~PCI_COMMAND_MEMORY : v->vpci.cmd,

The above seems to be an unrelated change, and also exceeds the max
line length.

>                              !rc && v->vpci.rom_only);
>  
> -        if ( rc )
> -        {
> -            /*
> -             * FIXME: in case of failure remove the device from the domain.
> -             * Note that there might still be leftover mappings. While this is
> -             * safe for Dom0, for DomUs the domain needs to be killed in order
> -             * to avoid leaking stale p2m mappings on failure.
> -             */
> -            if ( is_hardware_domain(v->domain) )
> -                vpci_remove_device_locked(pdev);
> -            else
> -                domain_crash(v->domain);
> +            if ( rc )
> +            {
> +                /*
> +                 * FIXME: in case of failure remove the device from the domain.
> +                 * Note that there might still be leftover mappings. While this is
> +                 * safe for Dom0, for DomUs the domain needs to be killed in order
> +                 * to avoid leaking stale p2m mappings on failure.
> +                 */
> +                if ( is_hardware_domain(v->domain) )
> +                    vpci_remove_device_locked(pdev);
> +                else
> +                    domain_crash(v->domain);
> +
> +                break;
> +            }
>          }
> +
> +        v->vpci.map_pending = false;
>      }
>      spin_unlock(&pdev->vpci_lock);
>  
>      return false;
>  }
>  
> +static void vpci_bar_remove_ranges(const struct pci_dev *pdev)
> +{
> +    struct vpci_header *header = &pdev->vpci->header;
> +    unsigned int i;
> +    int rc;
> +
> +    for ( i = 0; i < ARRAY_SIZE(header->bars); i++ )
> +    {
> +        struct vpci_bar *bar = &header->bars[i];
> +
> +        if ( rangeset_is_empty(bar->mem) )
> +            continue;
> +
> +        rc = rangeset_remove_range(bar->mem, 0, ~0ULL);

Might be interesting to introduce a rangeset_reset function that
removes all ranges. That would never fail, and thus there would be no
need to check for rc.

Also I think the current rangeset_remove_range should never fail when
removing all ranges, as there's nothing to allocate. Hence you can add
an ASSERT_UNREACHABLE below.
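
I.e. something like the below (only a sketch, assuming
rangeset_remove_range() indeed cannot fail when asked to remove the
whole [0, ~0ULL] range):

rc = rangeset_remove_range(bar->mem, 0, ~0ULL);
if ( rc )
{
    ASSERT_UNREACHABLE();
    printk(XENLOG_ERR
           "%pd %pp failed to remove range set for BAR: %d\n",
           pdev->domain, &pdev->sbdf, rc);
}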

> +        if ( rc )
> +            printk(XENLOG_ERR
> +                   "%pd %pp failed to remove range set for BAR: %d\n",
> +                   pdev->domain, &pdev->sbdf, rc);
> +    }
> +}
> +
>  void vpci_cancel_pending_locked(struct pci_dev *pdev)
>  {
>      struct vcpu *v;
> @@ -185,23 +226,33 @@ void vpci_cancel_pending_locked(struct pci_dev *pdev)
>      /* Cancel any pending work now on all vCPUs. */
>      for_each_vcpu( pdev->domain, v )
>      {
> -        if ( v->vpci.mem && (v->vpci.pdev == pdev) )
> +        if ( v->vpci.map_pending && (v->vpci.pdev == pdev) )
>          {
> -            rangeset_destroy(v->vpci.mem);
> -            v->vpci.mem = NULL;
> +            vpci_bar_remove_ranges(pdev);
> +            v->vpci.map_pending = false;
>          }
>      }
>  }
>  
>  static int __init apply_map(struct domain *d, const struct pci_dev *pdev,
> -                            struct rangeset *mem, uint16_t cmd)
> +                            uint16_t cmd)
>  {
>      struct map_data data = { .d = d, .map = true };
> -    int rc;
> +    struct vpci_header *header = &pdev->vpci->header;
> +    int rc = 0;
> +    unsigned int i;
> +
> +    for ( i = 0; i < ARRAY_SIZE(header->bars); i++ )
> +    {
> +        struct vpci_bar *bar = &header->bars[i];
>  
> -    while ( (rc = rangeset_consume_ranges(mem, map_range, &data)) == -ERESTART )
> -        process_pending_softirqs();
> -    rangeset_destroy(mem);
> +        if ( rangeset_is_empty(bar->mem) )
> +            continue;
> +
> +        while ( (rc = rangeset_consume_ranges(bar->mem, map_range,
> +                                              &data)) == -ERESTART )
> +            process_pending_softirqs();
> +    }
>      if ( !rc )
>          modify_decoding(pdev, cmd, false);
>  
> @@ -209,7 +260,7 @@ static int __init apply_map(struct domain *d, const struct pci_dev *pdev,
>  }
>  
>  static void defer_map(struct domain *d, struct pci_dev *pdev,
> -                      struct rangeset *mem, uint16_t cmd, bool rom_only)
> +                      uint16_t cmd, bool rom_only)
>  {
>      struct vcpu *curr = current;
>  
> @@ -220,7 +271,7 @@ static void defer_map(struct domain *d, struct pci_dev *pdev,
>       * started for the same device if the domain is not well-behaved.
>       */
>      curr->vpci.pdev = pdev;
> -    curr->vpci.mem = mem;
> +    curr->vpci.map_pending = true;
>      curr->vpci.cmd = cmd;
>      curr->vpci.rom_only = rom_only;
>      /*
> @@ -234,42 +285,40 @@ static void defer_map(struct domain *d, struct pci_dev *pdev,
>  static int modify_bars(const struct pci_dev *pdev, uint16_t cmd, bool rom_only)
>  {
>      struct vpci_header *header = &pdev->vpci->header;
> -    struct rangeset *mem = rangeset_new(NULL, NULL, 0);
>      struct pci_dev *tmp, *dev = NULL;
>      const struct vpci_msix *msix = pdev->vpci->msix;
> -    unsigned int i;
> +    unsigned int i, j;
>      int rc;
> -
> -    if ( !mem )
> -        return -ENOMEM;
> +    bool map_pending;
>  
>      /*
> -     * Create a rangeset that represents the current device BARs memory region
> +     * Create a rangeset per BAR that represents the current device memory region
>       * and compare it against all the currently active BAR memory regions. If
>       * an overlap is found, subtract it from the region to be mapped/unmapped.
>       *
> -     * First fill the rangeset with all the BARs of this device or with the ROM
> +     * First fill the rangesets with all the BARs of this device or with the ROM
                                        ^ 'all' doesn't apply anymore.
>       * BAR only, depending on whether the guest is toggling the memory decode
>       * bit of the command register, or the enable bit of the ROM BAR register.
>       */
>      for ( i = 0; i < ARRAY_SIZE(header->bars); i++ )
>      {
> -        const struct vpci_bar *bar = &header->bars[i];
> +        struct vpci_bar *bar = &header->bars[i];
>          unsigned long start = PFN_DOWN(bar->addr);
>          unsigned long end = PFN_DOWN(bar->addr + bar->size - 1);
>  
> +        ASSERT(bar->mem);
> +
>          if ( !MAPPABLE_BAR(bar) ||
>               (rom_only ? bar->type != VPCI_BAR_ROM
>                         : (bar->type == VPCI_BAR_ROM && !header->rom_enabled)) )
>              continue;
>  
> -        rc = rangeset_add_range(mem, start, end);
> +        rc = rangeset_add_range(bar->mem, start, end);
>          if ( rc )
>          {
>              printk(XENLOG_G_WARNING "Failed to add [%lx, %lx]: %d\n",
>                     start, end, rc);
> -            rangeset_destroy(mem);
> -            return rc;
> +            goto fail;
>          }


I think you also need to check that BARs from the same device don't
overlap each other. This wasn't needed before because all BARs shared
the same rangeset. It's not uncommon for BARs of the same device to
share a page.

So you would need something like the following added to the loop:

/* Check for overlap with the already setup BAR ranges. */
for ( j = 0; j < i; j++ )
    rangeset_remove_range(header->bars[j].mem, start, end);

>      }
>  
> @@ -280,14 +329,21 @@ static int modify_bars(const struct pci_dev *pdev, uint16_t cmd, bool rom_only)
>          unsigned long end = PFN_DOWN(vmsix_table_addr(pdev->vpci, i) +
>                                       vmsix_table_size(pdev->vpci, i) - 1);
>  
> -        rc = rangeset_remove_range(mem, start, end);
> -        if ( rc )
> +        for ( j = 0; j < ARRAY_SIZE(header->bars); j++ )
>          {
> -            printk(XENLOG_G_WARNING
> -                   "Failed to remove MSIX table [%lx, %lx]: %d\n",
> -                   start, end, rc);
> -            rangeset_destroy(mem);
> -            return rc;
> +            const struct vpci_bar *bar = &header->bars[j];
> +
> +            if ( rangeset_is_empty(bar->mem) )
> +                continue;
> +
> +            rc = rangeset_remove_range(bar->mem, start, end);
> +            if ( rc )
> +            {
> +                printk(XENLOG_G_WARNING
> +                       "Failed to remove MSIX table [%lx, %lx]: %d\n",
> +                       start, end, rc);
> +                goto fail;
> +            }
>          }
>      }
>  
> @@ -325,7 +381,8 @@ static int modify_bars(const struct pci_dev *pdev, uint16_t cmd, bool rom_only)
>              unsigned long start = PFN_DOWN(bar->addr);
>              unsigned long end = PFN_DOWN(bar->addr + bar->size - 1);
>  
> -            if ( !bar->enabled || !rangeset_overlaps_range(mem, start, end) ||
> +            if ( !bar->enabled ||
> +                 !rangeset_overlaps_range(bar->mem, start, end) ||
>                   /*
>                    * If only the ROM enable bit is toggled check against other
>                    * BARs in the same device for overlaps, but not against the
> @@ -334,14 +391,13 @@ static int modify_bars(const struct pci_dev *pdev, uint16_t cmd, bool rom_only)
>                   (rom_only && tmp == pdev && bar->type == VPCI_BAR_ROM) )
>                  continue;
>  
> -            rc = rangeset_remove_range(mem, start, end);
> +            rc = rangeset_remove_range(bar->mem, start, end);
>              if ( rc )
>              {
>                  spin_unlock(&tmp->vpci_lock);
>                  printk(XENLOG_G_WARNING "Failed to remove [%lx, %lx]: %d\n",
>                         start, end, rc);
> -                rangeset_destroy(mem);
> -                return rc;
> +                goto fail;
>              }
>          }
>          spin_unlock(&tmp->vpci_lock);
> @@ -360,12 +416,36 @@ static int modify_bars(const struct pci_dev *pdev, uint16_t cmd, bool rom_only)
>           * will always be to establish mappings and process all the BARs.
>           */
>          ASSERT((cmd & PCI_COMMAND_MEMORY) && !rom_only);
> -        return apply_map(pdev->domain, pdev, mem, cmd);
> +        return apply_map(pdev->domain, pdev, cmd);
>      }
>  
> -    defer_map(dev->domain, dev, mem, cmd, rom_only);
> +    /* Find out whether any memory ranges remain after MSI and overlaps. */
> +    map_pending = false;
> +    for ( i = 0; i < ARRAY_SIZE(header->bars); i++ )
> +        if ( !rangeset_is_empty(header->bars[i].mem) )
> +        {
> +            map_pending = true;
> +            break;
> +        }
> +
> +    /*
> +     * There are cases when a PCI device, a root port for example, has
> +     * neither memory space nor IO. In this case the PCI command register
> +     * write would be skipped, leaving the underlying PCI device
> +     * non-functional, so:
> +     *   - if there are no regions write the command register now
> +     *   - if there are regions then defer the work and write later on

I would just say:

/* If there's no mapping work write the command register now. */

> +     */
> +    if ( !map_pending )
> +        pci_conf_write16(pdev->sbdf, PCI_COMMAND, cmd);
> +    else
> +        defer_map(dev->domain, dev, cmd, rom_only);
>  
>      return 0;
> +
> +fail:
> +    /* Destroy all the ranges we may have added. */
> +    vpci_bar_remove_ranges(pdev);
> +    return rc;
>  }
>  
>  static void cmd_write(const struct pci_dev *pdev, unsigned int reg,
> diff --git a/xen/drivers/vpci/vpci.c b/xen/drivers/vpci/vpci.c
> index a9e9e8ec438c..98b12a61be6f 100644
> --- a/xen/drivers/vpci/vpci.c
> +++ b/xen/drivers/vpci/vpci.c
> @@ -52,11 +52,16 @@ static void vpci_remove_device_handlers_locked(struct pci_dev *pdev)
>  
>  void vpci_remove_device_locked(struct pci_dev *pdev)
>  {
> +    struct vpci_header *header = &pdev->vpci->header;
> +    unsigned int i;
> +
>      ASSERT(spin_is_locked(&pdev->vpci_lock));
>  
>      pdev->vpci_cancel_pending = true;
>      vpci_remove_device_handlers_locked(pdev);
>      vpci_cancel_pending_locked(pdev);
> +    for ( i = 0; i < ARRAY_SIZE(header->bars); i++ )
> +        rangeset_destroy(header->bars[i].mem);
>      xfree(pdev->vpci->msix);
>      xfree(pdev->vpci->msi);
>      xfree(pdev->vpci);
> @@ -92,6 +97,8 @@ static int run_vpci_init(struct pci_dev *pdev)
>  int vpci_add_handlers(struct pci_dev *pdev)
>  {
>      struct vpci *vpci;
> +    struct vpci_header *header;
> +    unsigned int i;
>      int rc;
>  
>      if ( !has_vpci(pdev->domain) )
> @@ -108,11 +115,32 @@ int vpci_add_handlers(struct pci_dev *pdev)
>      pdev->vpci = vpci;
>      INIT_LIST_HEAD(&pdev->vpci->handlers);
>  
> +    header = &pdev->vpci->header;
> +    for ( i = 0; i < ARRAY_SIZE(header->bars); i++ )
> +    {
> +        struct vpci_bar *bar = &header->bars[i];
> +        char str[32];
> +
> +        snprintf(str, sizeof(str), "%pp:BAR%d", &pdev->sbdf, i);
> +        bar->mem = rangeset_new(pdev->domain, str, RANGESETF_no_print);
> +        if ( !bar->mem )
> +        {
> +            rc = -ENOMEM;
> +            goto fail;
> +        }
> +    }

You just need the ranges for the VPCI_BAR_MEM32, VPCI_BAR_MEM64_LO and
VPCI_BAR_ROM BAR types (see the MAPPABLE_BAR macro). Would it be
possible to only allocate the rangeset for those BAR types?

Also this should be done in init_bars rather than here, as you would
know the BAR types.
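
Roughly something like the below (just a sketch, assuming the allocation
moves into init_bars() where bar->type and the loop index i are already
available):

if ( bar->type == VPCI_BAR_MEM32 || bar->type == VPCI_BAR_MEM64_LO ||
     bar->type == VPCI_BAR_ROM )
{
    char str[32];

    snprintf(str, sizeof(str), "%pp:BAR%u", &pdev->sbdf, i);
    bar->mem = rangeset_new(pdev->domain, str, RANGESETF_no_print);
    if ( !bar->mem )
        return -ENOMEM;
}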

Thanks, Roger.

