From: Paul Durrant <xadimgnik@gmail.com>
To: "'Oleksandr Tyshchenko'" <olekstysh@gmail.com>,
	<xen-devel@lists.xenproject.org>
Cc: "'Oleksandr Tyshchenko'" <oleksandr_tyshchenko@epam.com>,
	"'Jan Beulich'" <jbeulich@suse.com>,
	"'Andrew Cooper'" <andrew.cooper3@citrix.com>,
	"'Roger Pau Monné'" <roger.pau@citrix.com>,
	"'Wei Liu'" <wl@xen.org>, "'Julien Grall'" <julien@xen.org>,
	"'Stefano Stabellini'" <sstabellini@kernel.org>,
	"'Julien Grall'" <julien.grall@arm.com>
Subject: RE: [PATCH V4 01/24] x86/ioreq: Prepare IOREQ feature for making it common
Date: Mon, 18 Jan 2021 08:22:55 -0000
Message-ID: <00b901d6ed73$1f91a930$5eb4fb90$@xen.org>
In-Reply-To: <1610488352-18494-2-git-send-email-olekstysh@gmail.com>

> -----Original Message-----
> From: Oleksandr Tyshchenko <olekstysh@gmail.com>
> Sent: 12 January 2021 21:52
> To: xen-devel@lists.xenproject.org
> Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>; Paul Durrant <paul@xen.org>; Jan Beulich
> <jbeulich@suse.com>; Andrew Cooper <andrew.cooper3@citrix.com>; Roger Pau Monné
> <roger.pau@citrix.com>; Wei Liu <wl@xen.org>; Julien Grall <julien@xen.org>; Stefano Stabellini
> <sstabellini@kernel.org>; Julien Grall <julien.grall@arm.com>
> Subject: [PATCH V4 01/24] x86/ioreq: Prepare IOREQ feature for making it common
> 
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> 
> As a lot of x86 code can be re-used on Arm later on, this
> patch makes some preparations to x86/hvm/ioreq.c before moving
> it to the common code. This way the move in a subsequent patch
> can be a verbatim copy of the code.
> 
> This patch mostly introduces specific hooks to abstract the
> arch-specific material, taking into account the requirement to
> leave the "legacy" mechanism of mapping magic pages for the
> IOREQ servers x86-specific and not expose it to the common code.
> 
> These hooks are named according to the more consistent new naming
> scheme right away (including dropping the "hvm" prefixes and infixes):
> - IOREQ server functions should start with "ioreq_server_"
> - IOREQ functions should start with "ioreq_"
> Other functions will be renamed in subsequent patches.
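
For illustration, the end state is that the (soon-to-be) common code keeps
the control flow while the x86-only work moves behind the new hooks. A
trimmed-down sketch of the pattern from this patch (the real
hvm_ioreq_server_enable() also wires up the vCPUs, elided here):

    /* Stays with the common flow, moved out of x86 in a later patch. */
    static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s)
    {
        spin_lock(&s->lock);

        if ( !s->enabled )
        {
            arch_ioreq_server_enable(s); /* per-arch part, see below */
            s->enabled = true;
        }

        spin_unlock(&s->lock);
    }

    /* x86 keeps its "legacy" magic-page handling behind the hook. */
    void arch_ioreq_server_enable(struct hvm_ioreq_server *s)
    {
        hvm_remove_ioreq_gfn(s, false);
        hvm_remove_ioreq_gfn(s, true);
    }
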
> 
> Also re-order #include-s alphabetically.
> 
> This support is going to be used on Arm to be able to run a
> device emulator outside of the Xen hypervisor.
> 
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

Reviewed-by: Paul Durrant <paul@xen.org>

> Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
> CC: Julien Grall <julien.grall@arm.com>
> [On Arm only]
> Tested-by: Wei Chen <Wei.Chen@arm.com>
> 
> ---
> Please note, this is a split/cleanup/hardening of Julien's PoC:
> "Add support for Guest IO forwarding to a device emulator"
> 
> Changes RFC -> V1:
>    - new patch, was split from:
>      "[RFC PATCH V1 01/12] hvm/ioreq: Make x86's IOREQ feature common"
>    - fold the check of p->type into hvm_get_ioreq_server_range_type()
>      and make it return success/failure
>    - remove relocate_portio_handler() call from arch_hvm_ioreq_destroy()
>      in arch/x86/hvm/ioreq.c
>    - introduce arch_hvm_destroy_ioreq_server()/arch_handle_hvm_io_completion()
> 
> Changes V1 -> V2:
>    - update patch description
>    - make arch functions inline and put them into the arch header
>      to achieve a true rename in the subsequent patch
>    - return void in arch_hvm_destroy_ioreq_server()
>    - return bool in arch_hvm_ioreq_destroy()
>    - bring relocate_portio_handler() back to arch_hvm_ioreq_destroy()
>    - rename IOREQ_IO* to IOREQ_STATUS*
>    - remove *handle* from arch_handle_hvm_io_completion()
>    - re-order #include-s alphabetically
>    - rename hvm_get_ioreq_server_range_type() to hvm_ioreq_server_get_type_addr()
>      and add "const" to several arguments
> 
> Changes V2 -> V3:
>    - update patch description
>    - name new arch hooks according to the new naming scheme
>    - don't make arch hooks inline, move them to ioreq.c
>    - make get_ioreq_server() local again
>    - rework the whole patch taking into account that the "legacy"
>      interface should remain x86-specific (additional arch hooks, etc)
>    - update the code to be able to use hvm_map_mem_type_to_ioreq_server()
>      in the common code (an extra arch hook, etc)
>    - don't include <asm/hvm/emulate.h> from the arch header
>    - add "arch" prefix to hvm_ioreq_server_get_type_addr()
>    - move IOREQ_STATUS_* #define-s introduction to the separate patch
>    - move HANDLE_BUFIOREQ to the arch header
>    - just return relocate_portio_handler() from arch_ioreq_server_destroy_all()
>    - misc adjustments proposed by Jan (adding const, unsigned int instead of uint32_t)
> 
> Changes V3 -> V4:
>    - add Alex's R-b
>    - update patch description
>    - make arch_ioreq_server_get_type_addr return bool
>    - drop #include <xen/ctype.h>
>    - use two arch hooks in hvm_map_mem_type_to_ioreq_server()
>      to avoid calling p2m_change_entry_type_global() with the lock
>      held (see the sketch below)
> ---
>  xen/arch/x86/hvm/ioreq.c        | 179 ++++++++++++++++++++++++++--------------
>  xen/include/asm-x86/hvm/ioreq.h |  22 +++++
>  2 files changed, 141 insertions(+), 60 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
> index 1cc27df..468fe84 100644
> --- a/xen/arch/x86/hvm/ioreq.c
> +++ b/xen/arch/x86/hvm/ioreq.c
> @@ -16,16 +16,15 @@
>   * this program; If not, see <http://www.gnu.org/licenses/>.
>   */
> 
> -#include <xen/ctype.h>
> +#include <xen/domain.h>
> +#include <xen/event.h>
>  #include <xen/init.h>
> +#include <xen/irq.h>
>  #include <xen/lib.h>
> -#include <xen/trace.h>
> +#include <xen/paging.h>
>  #include <xen/sched.h>
> -#include <xen/irq.h>
>  #include <xen/softirq.h>
> -#include <xen/domain.h>
> -#include <xen/event.h>
> -#include <xen/paging.h>
> +#include <xen/trace.h>
>  #include <xen/vpci.h>
> 
>  #include <asm/hvm/emulate.h>
> @@ -170,6 +169,29 @@ static bool hvm_wait_for_io(struct hvm_ioreq_vcpu *sv, ioreq_t *p)
>      return true;
>  }
> 
> +bool arch_vcpu_ioreq_completion(enum hvm_io_completion io_completion)
> +{
> +    switch ( io_completion )
> +    {
> +    case HVMIO_realmode_completion:
> +    {
> +        struct hvm_emulate_ctxt ctxt;
> +
> +        hvm_emulate_init_once(&ctxt, NULL, guest_cpu_user_regs());
> +        vmx_realmode_emulate_one(&ctxt);
> +        hvm_emulate_writeback(&ctxt);
> +
> +        break;
> +    }
> +
> +    default:
> +        ASSERT_UNREACHABLE();
> +        break;
> +    }
> +
> +    return true;
> +}
> +
>  bool handle_hvm_io_completion(struct vcpu *v)
>  {
>      struct domain *d = v->domain;
> @@ -209,19 +231,8 @@ bool handle_hvm_io_completion(struct vcpu *v)
>          return handle_pio(vio->io_req.addr, vio->io_req.size,
>                            vio->io_req.dir);
> 
> -    case HVMIO_realmode_completion:
> -    {
> -        struct hvm_emulate_ctxt ctxt;
> -
> -        hvm_emulate_init_once(&ctxt, NULL, guest_cpu_user_regs());
> -        vmx_realmode_emulate_one(&ctxt);
> -        hvm_emulate_writeback(&ctxt);
> -
> -        break;
> -    }
>      default:
> -        ASSERT_UNREACHABLE();
> -        break;
> +        return arch_vcpu_ioreq_completion(io_completion);
>      }
> 
>      return true;
> @@ -477,9 +488,6 @@ static void hvm_update_ioreq_evtchn(struct hvm_ioreq_server *s,
>      }
>  }
> 
> -#define HANDLE_BUFIOREQ(s) \
> -    ((s)->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF)
> -
>  static int hvm_ioreq_server_add_vcpu(struct hvm_ioreq_server *s,
>                                       struct vcpu *v)
>  {
> @@ -586,7 +594,7 @@ static void hvm_ioreq_server_remove_all_vcpus(struct hvm_ioreq_server *s)
>      spin_unlock(&s->lock);
>  }
> 
> -static int hvm_ioreq_server_map_pages(struct hvm_ioreq_server *s)
> +int arch_ioreq_server_map_pages(struct hvm_ioreq_server *s)
>  {
>      int rc;
> 
> @@ -601,7 +609,7 @@ static int hvm_ioreq_server_map_pages(struct hvm_ioreq_server *s)
>      return rc;
>  }
> 
> -static void hvm_ioreq_server_unmap_pages(struct hvm_ioreq_server *s)
> +void arch_ioreq_server_unmap_pages(struct hvm_ioreq_server *s)
>  {
>      hvm_unmap_ioreq_gfn(s, true);
>      hvm_unmap_ioreq_gfn(s, false);
> @@ -674,6 +682,12 @@ static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s,
>      return rc;
>  }
> 
> +void arch_ioreq_server_enable(struct hvm_ioreq_server *s)
> +{
> +    hvm_remove_ioreq_gfn(s, false);
> +    hvm_remove_ioreq_gfn(s, true);
> +}
> +
>  static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s)
>  {
>      struct hvm_ioreq_vcpu *sv;
> @@ -683,8 +697,7 @@ static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s)
>      if ( s->enabled )
>          goto done;
> 
> -    hvm_remove_ioreq_gfn(s, false);
> -    hvm_remove_ioreq_gfn(s, true);
> +    arch_ioreq_server_enable(s);
> 
>      s->enabled = true;
> 
> @@ -697,6 +710,12 @@ static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s)
>      spin_unlock(&s->lock);
>  }
> 
> +void arch_ioreq_server_disable(struct hvm_ioreq_server *s)
> +{
> +    hvm_add_ioreq_gfn(s, true);
> +    hvm_add_ioreq_gfn(s, false);
> +}
> +
>  static void hvm_ioreq_server_disable(struct hvm_ioreq_server *s)
>  {
>      spin_lock(&s->lock);
> @@ -704,8 +723,7 @@ static void hvm_ioreq_server_disable(struct hvm_ioreq_server *s)
>      if ( !s->enabled )
>          goto done;
> 
> -    hvm_add_ioreq_gfn(s, true);
> -    hvm_add_ioreq_gfn(s, false);
> +    arch_ioreq_server_disable(s);
> 
>      s->enabled = false;
> 
> @@ -750,7 +768,7 @@ static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
> 
>   fail_add:
>      hvm_ioreq_server_remove_all_vcpus(s);
> -    hvm_ioreq_server_unmap_pages(s);
> +    arch_ioreq_server_unmap_pages(s);
> 
>      hvm_ioreq_server_free_rangesets(s);
> 
> @@ -764,7 +782,7 @@ static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s)
>      hvm_ioreq_server_remove_all_vcpus(s);
> 
>      /*
> -     * NOTE: It is safe to call both hvm_ioreq_server_unmap_pages() and
> +     * NOTE: It is safe to call both arch_ioreq_server_unmap_pages() and
>       *       hvm_ioreq_server_free_pages() in that order.
>       *       This is because the former will do nothing if the pages
>       *       are not mapped, leaving the page to be freed by the latter.
> @@ -772,7 +790,7 @@ static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s)
>       *       the page_info pointer to NULL, meaning the latter will do
>       *       nothing.
>       */
> -    hvm_ioreq_server_unmap_pages(s);
> +    arch_ioreq_server_unmap_pages(s);
>      hvm_ioreq_server_free_pages(s);
> 
>      hvm_ioreq_server_free_rangesets(s);
> @@ -836,6 +854,12 @@ int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
>      return rc;
>  }
> 
> +/* Called when target domain is paused */
> +void arch_ioreq_server_destroy(struct hvm_ioreq_server *s)
> +{
> +    p2m_set_ioreq_server(s->target, 0, s);
> +}
> +
>  int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
>  {
>      struct hvm_ioreq_server *s;
> @@ -855,7 +879,7 @@ int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
> 
>      domain_pause(d);
> 
> -    p2m_set_ioreq_server(d, 0, s);
> +    arch_ioreq_server_destroy(s);
> 
>      hvm_ioreq_server_disable(s);
> 
> @@ -900,7 +924,7 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
> 
>      if ( ioreq_gfn || bufioreq_gfn )
>      {
> -        rc = hvm_ioreq_server_map_pages(s);
> +        rc = arch_ioreq_server_map_pages(s);
>          if ( rc )
>              goto out;
>      }
> @@ -1080,6 +1104,27 @@ int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
>      return rc;
>  }
> 
> +/* Called with ioreq_server lock held */
> +int arch_ioreq_server_map_mem_type(struct domain *d,
> +                                   struct hvm_ioreq_server *s,
> +                                   uint32_t flags)
> +{
> +    return p2m_set_ioreq_server(d, flags, s);
> +}
> +
> +void arch_ioreq_server_map_mem_type_completed(struct domain *d,
> +                                              struct hvm_ioreq_server *s,
> +                                              uint32_t flags)
> +{
> +    if ( flags == 0 )
> +    {
> +        const struct p2m_domain *p2m = p2m_get_hostp2m(d);
> +
> +        if ( read_atomic(&p2m->ioreq.entry_count) )
> +            p2m_change_entry_type_global(d, p2m_ioreq_server, p2m_ram_rw);
> +    }
> +}
> +
>  /*
>   * Map or unmap an ioreq server to specific memory type. For now, only
>   * HVMMEM_ioreq_server is supported, and in the future new types can be
> @@ -1112,18 +1157,13 @@ int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
>      if ( s->emulator != current->domain )
>          goto out;
> 
> -    rc = p2m_set_ioreq_server(d, flags, s);
> +    rc = arch_ioreq_server_map_mem_type(d, s, flags);
> 
>   out:
>      spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
> 
> -    if ( rc == 0 && flags == 0 )
> -    {
> -        struct p2m_domain *p2m = p2m_get_hostp2m(d);
> -
> -        if ( read_atomic(&p2m->ioreq.entry_count) )
> -            p2m_change_entry_type_global(d, p2m_ioreq_server, p2m_ram_rw);
> -    }
> +    if ( rc == 0 )
> +        arch_ioreq_server_map_mem_type_completed(d, s, flags);
> 
>      return rc;
>  }
> @@ -1210,12 +1250,17 @@ void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v)
>      spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
>  }
> 
> +bool arch_ioreq_server_destroy_all(struct domain *d)
> +{
> +    return relocate_portio_handler(d, 0xcf8, 0xcf8, 4);
> +}
> +
>  void hvm_destroy_all_ioreq_servers(struct domain *d)
>  {
>      struct hvm_ioreq_server *s;
>      unsigned int id;
> 
> -    if ( !relocate_portio_handler(d, 0xcf8, 0xcf8, 4) )
> +    if ( !arch_ioreq_server_destroy_all(d) )
>          return;
> 
>      spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
> @@ -1239,33 +1284,28 @@ void hvm_destroy_all_ioreq_servers(struct domain *d)
>      spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
>  }
> 
> -struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
> -                                                 ioreq_t *p)
> +bool arch_ioreq_server_get_type_addr(const struct domain *d,
> +                                     const ioreq_t *p,
> +                                     uint8_t *type,
> +                                     uint64_t *addr)
>  {
> -    struct hvm_ioreq_server *s;
> -    uint32_t cf8;
> -    uint8_t type;
> -    uint64_t addr;
> -    unsigned int id;
> +    unsigned int cf8 = d->arch.hvm.pci_cf8;
> 
>      if ( p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO )
> -        return NULL;
> -
> -    cf8 = d->arch.hvm.pci_cf8;
> +        return false;
> 
>      if ( p->type == IOREQ_TYPE_PIO &&
>           (p->addr & ~3) == 0xcfc &&
>           CF8_ENABLED(cf8) )
>      {
> -        uint32_t x86_fam;
> +        unsigned int x86_fam, reg;
>          pci_sbdf_t sbdf;
> -        unsigned int reg;
> 
>          reg = hvm_pci_decode_addr(cf8, p->addr, &sbdf);
> 
>          /* PCI config data cycle */
> -        type = XEN_DMOP_IO_RANGE_PCI;
> -        addr = ((uint64_t)sbdf.sbdf << 32) | reg;
> +        *type = XEN_DMOP_IO_RANGE_PCI;
> +        *addr = ((uint64_t)sbdf.sbdf << 32) | reg;
>          /* AMD extended configuration space access? */
>          if ( CF8_ADDR_HI(cf8) &&
>               d->arch.cpuid->x86_vendor == X86_VENDOR_AMD &&
> @@ -1277,16 +1317,30 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
> 
>              if ( !rdmsr_safe(MSR_AMD64_NB_CFG, msr_val) &&
>                   (msr_val & (1ULL << AMD64_NB_CFG_CF8_EXT_ENABLE_BIT)) )
> -                addr |= CF8_ADDR_HI(cf8);
> +                *addr |= CF8_ADDR_HI(cf8);
>          }
>      }
>      else
>      {
> -        type = (p->type == IOREQ_TYPE_PIO) ?
> -                XEN_DMOP_IO_RANGE_PORT : XEN_DMOP_IO_RANGE_MEMORY;
> -        addr = p->addr;
> +        *type = (p->type == IOREQ_TYPE_PIO) ?
> +                 XEN_DMOP_IO_RANGE_PORT : XEN_DMOP_IO_RANGE_MEMORY;
> +        *addr = p->addr;
>      }
> 
> +    return true;
> +}
> +
> +struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
> +                                                 ioreq_t *p)
> +{
> +    struct hvm_ioreq_server *s;
> +    uint8_t type;
> +    uint64_t addr;
> +    unsigned int id;
> +
> +    if ( !arch_ioreq_server_get_type_addr(d, p, &type, &addr) )
> +        return NULL;
> +
>      FOR_EACH_IOREQ_SERVER(d, id, s)
>      {
>          struct rangeset *r;
> @@ -1515,11 +1569,16 @@ static int hvm_access_cf8(
>      return X86EMUL_UNHANDLEABLE;
>  }
> 
> +void arch_ioreq_domain_init(struct domain *d)
> +{
> +    register_portio_handler(d, 0xcf8, 4, hvm_access_cf8);
> +}
> +
>  void hvm_ioreq_init(struct domain *d)
>  {
>      spin_lock_init(&d->arch.hvm.ioreq_server.lock);
> 
> -    register_portio_handler(d, 0xcf8, 4, hvm_access_cf8);
> +    arch_ioreq_domain_init(d);
>  }
> 
>  /*
> diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h
> index e2588e9..13d35e1 100644
> --- a/xen/include/asm-x86/hvm/ioreq.h
> +++ b/xen/include/asm-x86/hvm/ioreq.h
> @@ -19,6 +19,9 @@
>  #ifndef __ASM_X86_HVM_IOREQ_H__
>  #define __ASM_X86_HVM_IOREQ_H__
> 
> +#define HANDLE_BUFIOREQ(s) \
> +    ((s)->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF)
> +
>  bool hvm_io_pending(struct vcpu *v);
>  bool handle_hvm_io_completion(struct vcpu *v);
>  bool is_ioreq_server_page(struct domain *d, const struct page_info *page);
> @@ -55,6 +58,25 @@ unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered);
> 
>  void hvm_ioreq_init(struct domain *d);
> 
> +bool arch_vcpu_ioreq_completion(enum hvm_io_completion io_completion);
> +int arch_ioreq_server_map_pages(struct hvm_ioreq_server *s);
> +void arch_ioreq_server_unmap_pages(struct hvm_ioreq_server *s);
> +void arch_ioreq_server_enable(struct hvm_ioreq_server *s);
> +void arch_ioreq_server_disable(struct hvm_ioreq_server *s);
> +void arch_ioreq_server_destroy(struct hvm_ioreq_server *s);
> +int arch_ioreq_server_map_mem_type(struct domain *d,
> +                                   struct hvm_ioreq_server *s,
> +                                   uint32_t flags);
> +void arch_ioreq_server_map_mem_type_completed(struct domain *d,
> +                                              struct hvm_ioreq_server *s,
> +                                              uint32_t flags);
> +bool arch_ioreq_server_destroy_all(struct domain *d);
> +bool arch_ioreq_server_get_type_addr(const struct domain *d,
> +                                     const ioreq_t *p,
> +                                     uint8_t *type,
> +                                     uint64_t *addr);
> +void arch_ioreq_domain_init(struct domain *d);
> +
>  #endif /* __ASM_X86_HVM_IOREQ_H__ */
> 
>  /*
> --
> 2.7.4



