* [Xen-devel] [PATCH 0/7] ioreq: add support for internal servers
@ 2019-08-21 14:58 Roger Pau Monne
  2019-08-21 14:58 ` [Xen-devel] [PATCH 1/7] ioreq: add fields to allow internal ioreq servers Roger Pau Monne
                   ` (6 more replies)
  0 siblings, 7 replies; 20+ messages in thread
From: Roger Pau Monne @ 2019-08-21 14:58 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan,
	Julien Grall, Paul Durrant, Jan Beulich, Roger Pau Monne

Such internal servers are implemented by a single function that handles
ioreqs inside the hypervisor.

The motivation behind this change is to switch vPCI to become an
internal ioreq server, so that accesses to the PCI config space can be
multiplexed between devices handled by vPCI and devices handled by other
ioreq servers.

The implementation is fairly simple and limited to what's needed by
vPCI, but can be expanded in the future if other more complex users
appear.
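
In short, an internal server is a plain function wired up through the
new interfaces below, and runs synchronously in hypervisor context
when an ioreq is dispatched to it. A condensed sketch of the flow
introduced by the series (details in the individual patches):

    /* An internal server is a plain function called in hypervisor context: */
    int handler(struct vcpu *v, ioreq_t *req);

    /* Creation and wiring (patches 2 and 4): */
    rc = hvm_create_ioreq_server(d, HVM_IOREQSRV_BUFIOREQ_OFF, &id, true);
    if ( !rc )
        rc = hvm_add_ioreq_handler(d, id, handler);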

The series can also be found at:

git://xenbits.xen.org/people/royger/xen.git ioreq_vpci_v1

Thanks, Roger.

Roger Pau Monne (7):
  ioreq: add fields to allow internal ioreq servers
  ioreq: add internal ioreq initialization support
  ioreq: allow dispatching ioreqs to internal servers
  ioreq: allow registering internal ioreq server handler
  ioreq: allow decoding accesses to MMCFG regions
  vpci: register as an internal ioreq server
  ioreq: provide support for long-running operations...

 xen/arch/x86/hvm/dm.c               |   9 +-
 xen/arch/x86/hvm/dom0_build.c       |   9 +-
 xen/arch/x86/hvm/hvm.c              |   7 +-
 xen/arch/x86/hvm/io.c               | 288 +----------------------
 xen/arch/x86/hvm/ioreq.c            | 349 +++++++++++++++++++++++-----
 xen/arch/x86/physdev.c              |   7 +-
 xen/drivers/passthrough/x86/iommu.c |   2 +-
 xen/drivers/vpci/vpci.c             |  57 +++++
 xen/include/asm-x86/hvm/domain.h    |  31 ++-
 xen/include/asm-x86/hvm/io.h        |  14 +-
 xen/include/asm-x86/hvm/ioreq.h     |  19 +-
 xen/include/xen/vpci.h              |   3 +
 12 files changed, 415 insertions(+), 380 deletions(-)

-- 
2.22.0



* [Xen-devel] [PATCH 1/7] ioreq: add fields to allow internal ioreq servers
  2019-08-21 14:58 [Xen-devel] [PATCH 0/7] ioreq: add support for internal servers Roger Pau Monne
@ 2019-08-21 14:58 ` Roger Pau Monne
  2019-08-21 14:58 ` [Xen-devel] [PATCH 2/7] ioreq: add internal ioreq initialization support Roger Pau Monne
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 20+ messages in thread
From: Roger Pau Monne @ 2019-08-21 14:58 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monne

Internal ioreq servers are plain function handlers implemented inside
the hypervisor. Note that most fields used by the current (external)
ioreq servers are not needed for internal ones, and they have hence
been placed inside a struct and packed in a union together with the
only internal-specific field, a function pointer to a handler.

This is required in order to have PCI config accesses forwarded to
either external ioreq servers or internal ones (ie: QEMU emulated
devices vs vPCI passthrough), and is the first step towards allowing
unprivileged domains to use vPCI.
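
Callers must therefore check the new internal flag before touching any
of the union members; a minimal sketch of the access pattern the rest
of the series relies on:

    if ( s->internal )
        rc = s->handler(v, p);        /* 'handler' is internal-only */
    else
        port = s->bufioreq_evtchn;    /* external-only union members */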

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/include/asm-x86/hvm/domain.h | 30 +++++++++++++++++++-----------
 1 file changed, 19 insertions(+), 11 deletions(-)

diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index 6c7c4f5aa6..f0be303517 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -52,21 +52,29 @@ struct hvm_ioreq_vcpu {
 #define MAX_NR_IO_RANGES  256
 
 struct hvm_ioreq_server {
-    struct domain          *target, *emulator;
-
+    struct domain          *target;
     /* Lock to serialize toolstack modifications */
     spinlock_t             lock;
-
-    struct hvm_ioreq_page  ioreq;
-    struct list_head       ioreq_vcpu_list;
-    struct hvm_ioreq_page  bufioreq;
-
-    /* Lock to serialize access to buffered ioreq ring */
-    spinlock_t             bufioreq_lock;
-    evtchn_port_t          bufioreq_evtchn;
     struct rangeset        *range[NR_IO_RANGE_TYPES];
     bool                   enabled;
-    uint8_t                bufioreq_handling;
+    bool                   internal;
+
+    union {
+        struct {
+            struct domain          *emulator;
+            struct hvm_ioreq_page  ioreq;
+            struct list_head       ioreq_vcpu_list;
+            struct hvm_ioreq_page  bufioreq;
+
+            /* Lock to serialize access to buffered ioreq ring */
+            spinlock_t             bufioreq_lock;
+            evtchn_port_t          bufioreq_evtchn;
+            uint8_t                bufioreq_handling;
+        };
+        struct {
+            int (*handler)(struct vcpu *v, ioreq_t *);
+        };
+    };
 };
 
 /*
-- 
2.22.0



* [Xen-devel] [PATCH 2/7] ioreq: add internal ioreq initialization support
  2019-08-21 14:58 [Xen-devel] [PATCH 0/7] ioreq: add support for internal servers Roger Pau Monne
  2019-08-21 14:58 ` [Xen-devel] [PATCH 1/7] ioreq: add fields to allow internal ioreq servers Roger Pau Monne
@ 2019-08-21 14:58 ` Roger Pau Monne
  2019-08-21 16:24   ` Paul Durrant
  2019-08-21 14:58 ` [Xen-devel] [PATCH 3/7] ioreq: allow dispatching ioreqs to internal servers Roger Pau Monne
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 20+ messages in thread
From: Roger Pau Monne @ 2019-08-21 14:58 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, Paul Durrant, Wei Liu, Jan Beulich, Roger Pau Monne

Add support for internal ioreq servers to the initialization and
deinitialization routines, prevent some functions from being executed
against internal ioreq servers, and add guards so that only internal
callers can modify internal ioreq servers. External callers (ie: from
hypercalls) are only allowed to deal with external ioreq servers.
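
The guard added to each of the affected routines boils down to the
following check, with 'internal' describing the caller:

    rc = -EPERM;
    /* Internal callers may only act on internal servers, and vice versa. */
    if ( s->internal != internal ||
         (!internal && s->emulator != current->domain) )
        goto out;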

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/hvm/dm.c           |   9 +-
 xen/arch/x86/hvm/ioreq.c        | 150 +++++++++++++++++++++-----------
 xen/include/asm-x86/hvm/ioreq.h |   8 +-
 3 files changed, 108 insertions(+), 59 deletions(-)

diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
index d6d0e8be89..5ca8b66d67 100644
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -417,7 +417,7 @@ static int dm_op(const struct dmop_args *op_args)
             break;
 
         rc = hvm_create_ioreq_server(d, data->handle_bufioreq,
-                                     &data->id);
+                                     &data->id, false);
         break;
     }
 
@@ -452,7 +452,7 @@ static int dm_op(const struct dmop_args *op_args)
             break;
 
         rc = hvm_map_io_range_to_ioreq_server(d, data->id, data->type,
-                                              data->start, data->end);
+                                              data->start, data->end, false);
         break;
     }
 
@@ -466,7 +466,8 @@ static int dm_op(const struct dmop_args *op_args)
             break;
 
         rc = hvm_unmap_io_range_from_ioreq_server(d, data->id, data->type,
-                                                  data->start, data->end);
+                                                  data->start, data->end,
+                                                  false);
         break;
     }
 
@@ -529,7 +530,7 @@ static int dm_op(const struct dmop_args *op_args)
         if ( data->pad )
             break;
 
-        rc = hvm_set_ioreq_server_state(d, data->id, !!data->enabled);
+        rc = hvm_set_ioreq_server_state(d, data->id, !!data->enabled, false);
         break;
     }
 
diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index a79cabb680..23ef9b0c02 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -89,6 +89,9 @@ bool hvm_io_pending(struct vcpu *v)
     {
         struct hvm_ioreq_vcpu *sv;
 
+        if ( s->internal )
+            continue;
+
         list_for_each_entry ( sv,
                               &s->ioreq_vcpu_list,
                               list_entry )
@@ -193,6 +196,9 @@ bool handle_hvm_io_completion(struct vcpu *v)
     {
         struct hvm_ioreq_vcpu *sv;
 
+        if ( s->internal )
+            continue;
+
         list_for_each_entry ( sv,
                               &s->ioreq_vcpu_list,
                               list_entry )
@@ -431,6 +437,9 @@ bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
 
     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
+        if ( s->internal )
+            continue;
+
         if ( (s->ioreq.page == page) || (s->bufioreq.page == page) )
         {
             found = true;
@@ -696,15 +705,18 @@ static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s)
     if ( s->enabled )
         goto done;
 
-    hvm_remove_ioreq_gfn(s, false);
-    hvm_remove_ioreq_gfn(s, true);
+    if ( !s->internal )
+    {
+        hvm_remove_ioreq_gfn(s, false);
+        hvm_remove_ioreq_gfn(s, true);
 
-    s->enabled = true;
+        list_for_each_entry ( sv,
+                              &s->ioreq_vcpu_list,
+                              list_entry )
+            hvm_update_ioreq_evtchn(s, sv);
+    }
 
-    list_for_each_entry ( sv,
-                          &s->ioreq_vcpu_list,
-                          list_entry )
-        hvm_update_ioreq_evtchn(s, sv);
+    s->enabled = true;
 
   done:
     spin_unlock(&s->lock);
@@ -717,8 +729,11 @@ static void hvm_ioreq_server_disable(struct hvm_ioreq_server *s)
     if ( !s->enabled )
         goto done;
 
-    hvm_add_ioreq_gfn(s, true);
-    hvm_add_ioreq_gfn(s, false);
+    if ( !s->internal )
+    {
+        hvm_add_ioreq_gfn(s, true);
+        hvm_add_ioreq_gfn(s, false);
+    }
 
     s->enabled = false;
 
@@ -728,40 +743,47 @@ static void hvm_ioreq_server_disable(struct hvm_ioreq_server *s)
 
 static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
                                  struct domain *d, int bufioreq_handling,
-                                 ioservid_t id)
+                                 ioservid_t id, bool internal)
 {
     struct domain *currd = current->domain;
     struct vcpu *v;
     int rc;
 
+    s->internal = internal;
     s->target = d;
-
-    get_knownalive_domain(currd);
-    s->emulator = currd;
-
     spin_lock_init(&s->lock);
-    INIT_LIST_HEAD(&s->ioreq_vcpu_list);
-    spin_lock_init(&s->bufioreq_lock);
-
-    s->ioreq.gfn = INVALID_GFN;
-    s->bufioreq.gfn = INVALID_GFN;
 
     rc = hvm_ioreq_server_alloc_rangesets(s, id);
     if ( rc )
         return rc;
 
-    s->bufioreq_handling = bufioreq_handling;
-
-    for_each_vcpu ( d, v )
+    if ( !internal )
     {
-        rc = hvm_ioreq_server_add_vcpu(s, v);
-        if ( rc )
-            goto fail_add;
+        get_knownalive_domain(currd);
+
+        s->emulator = currd;
+        INIT_LIST_HEAD(&s->ioreq_vcpu_list);
+        spin_lock_init(&s->bufioreq_lock);
+
+        s->ioreq.gfn = INVALID_GFN;
+        s->bufioreq.gfn = INVALID_GFN;
+
+        s->bufioreq_handling = bufioreq_handling;
+
+        for_each_vcpu ( d, v )
+        {
+            rc = hvm_ioreq_server_add_vcpu(s, v);
+            if ( rc )
+                goto fail_add;
+        }
     }
+    else
+        s->handler = NULL;
 
     return 0;
 
  fail_add:
+    ASSERT(!internal);
     hvm_ioreq_server_remove_all_vcpus(s);
     hvm_ioreq_server_unmap_pages(s);
 
@@ -774,27 +796,31 @@ static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
 static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s)
 {
     ASSERT(!s->enabled);
-    hvm_ioreq_server_remove_all_vcpus(s);
-
-    /*
-     * NOTE: It is safe to call both hvm_ioreq_server_unmap_pages() and
-     *       hvm_ioreq_server_free_pages() in that order.
-     *       This is because the former will do nothing if the pages
-     *       are not mapped, leaving the page to be freed by the latter.
-     *       However if the pages are mapped then the former will set
-     *       the page_info pointer to NULL, meaning the latter will do
-     *       nothing.
-     */
-    hvm_ioreq_server_unmap_pages(s);
-    hvm_ioreq_server_free_pages(s);
 
     hvm_ioreq_server_free_rangesets(s);
 
-    put_domain(s->emulator);
+    if ( !s->internal )
+    {
+        hvm_ioreq_server_remove_all_vcpus(s);
+
+        /*
+         * NOTE: It is safe to call both hvm_ioreq_server_unmap_pages() and
+         *       hvm_ioreq_server_free_pages() in that order.
+         *       This is because the former will do nothing if the pages
+         *       are not mapped, leaving the page to be freed by the latter.
+         *       However if the pages are mapped then the former will set
+         *       the page_info pointer to NULL, meaning the latter will do
+         *       nothing.
+         */
+        hvm_ioreq_server_unmap_pages(s);
+        hvm_ioreq_server_free_pages(s);
+
+        put_domain(s->emulator);
+    }
 }
 
 int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
-                            ioservid_t *id)
+                            ioservid_t *id, bool internal)
 {
     struct hvm_ioreq_server *s;
     unsigned int i;
@@ -826,7 +852,7 @@ int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
      */
     set_ioreq_server(d, i, s);
 
-    rc = hvm_ioreq_server_init(s, d, bufioreq_handling, i);
+    rc = hvm_ioreq_server_init(s, d, bufioreq_handling, i, internal);
     if ( rc )
     {
         set_ioreq_server(d, i, NULL);
@@ -863,7 +889,8 @@ int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
         goto out;
 
     rc = -EPERM;
-    if ( s->emulator != current->domain )
+    /* NB: internal servers cannot be destroyed. */
+    if ( s->internal || s->emulator != current->domain )
         goto out;
 
     domain_pause(d);
@@ -908,7 +935,11 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
         goto out;
 
     rc = -EPERM;
-    if ( s->emulator != current->domain )
+    /*
+     * NB: don't allow external callers to fetch information about internal
+     * ioreq servers.
+     */
+    if ( s->internal || s->emulator != current->domain )
         goto out;
 
     if ( ioreq_gfn || bufioreq_gfn )
@@ -955,7 +986,7 @@ int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
         goto out;
 
     rc = -EPERM;
-    if ( s->emulator != current->domain )
+    if ( s->internal || s->emulator != current->domain )
         goto out;
 
     rc = hvm_ioreq_server_alloc_pages(s);
@@ -991,7 +1022,7 @@ int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
 
 int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
                                      uint32_t type, uint64_t start,
-                                     uint64_t end)
+                                     uint64_t end, bool internal)
 {
     struct hvm_ioreq_server *s;
     struct rangeset *r;
@@ -1009,7 +1040,12 @@ int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
         goto out;
 
     rc = -EPERM;
-    if ( s->emulator != current->domain )
+    /*
+     * NB: don't allow external callers to modify the ranges of internal
+     * servers.
+     */
+    if ( (s->internal != internal) ||
+         (!internal && s->emulator != current->domain) )
         goto out;
 
     switch ( type )
@@ -1043,7 +1079,7 @@ int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
 
 int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
                                          uint32_t type, uint64_t start,
-                                         uint64_t end)
+                                         uint64_t end, bool internal)
 {
     struct hvm_ioreq_server *s;
     struct rangeset *r;
@@ -1061,7 +1097,12 @@ int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
         goto out;
 
     rc = -EPERM;
-    if ( s->emulator != current->domain )
+    /*
+     * NB: don't allow external callers to modify the ranges of internal
+     * servers.
+     */
+    if ( s->internal != internal ||
+         (!internal && s->emulator != current->domain) )
         goto out;
 
     switch ( type )
@@ -1122,7 +1163,7 @@ int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
         goto out;
 
     rc = -EPERM;
-    if ( s->emulator != current->domain )
+    if ( s->internal || s->emulator != current->domain )
         goto out;
 
     rc = p2m_set_ioreq_server(d, flags, s);
@@ -1142,7 +1183,7 @@ int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
 }
 
 int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
-                               bool enabled)
+                               bool enabled, bool internal)
 {
     struct hvm_ioreq_server *s;
     int rc;
@@ -1156,7 +1197,8 @@ int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
         goto out;
 
     rc = -EPERM;
-    if ( s->emulator != current->domain )
+    if ( s->internal != internal ||
+         (!internal && s->emulator != current->domain) )
         goto out;
 
     domain_pause(d);
@@ -1185,6 +1227,8 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
 
     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
+        if ( s->internal )
+            continue;
         rc = hvm_ioreq_server_add_vcpu(s, v);
         if ( rc )
             goto fail;
@@ -1218,7 +1262,11 @@ void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v)
     spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
 
     FOR_EACH_IOREQ_SERVER(d, id, s)
+    {
+        if ( s->internal )
+            continue;
         hvm_ioreq_server_remove_vcpu(s, v);
+    }
 
     spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
 }
diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h
index e2588e912f..e8119b26a6 100644
--- a/xen/include/asm-x86/hvm/ioreq.h
+++ b/xen/include/asm-x86/hvm/ioreq.h
@@ -24,7 +24,7 @@ bool handle_hvm_io_completion(struct vcpu *v);
 bool is_ioreq_server_page(struct domain *d, const struct page_info *page);
 
 int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
-                            ioservid_t *id);
+                            ioservid_t *id, bool internal);
 int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id);
 int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
                               unsigned long *ioreq_gfn,
@@ -34,14 +34,14 @@ int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
                                unsigned long idx, mfn_t *mfn);
 int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
                                      uint32_t type, uint64_t start,
-                                     uint64_t end);
+                                     uint64_t end, bool internal);
 int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
                                          uint32_t type, uint64_t start,
-                                         uint64_t end);
+                                         uint64_t end, bool internal);
 int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
                                      uint32_t type, uint32_t flags);
 int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
-                               bool enabled);
+                               bool enabled, bool internal);
 
 int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v);
 void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v);
-- 
2.22.0



* [Xen-devel] [PATCH 3/7] ioreq: allow dispatching ioreqs to internal servers
  2019-08-21 14:58 [Xen-devel] [PATCH 0/7] ioreq: add support for internal servers Roger Pau Monne
  2019-08-21 14:58 ` [Xen-devel] [PATCH 1/7] ioreq: add fields to allow internal ioreq servers Roger Pau Monne
  2019-08-21 14:58 ` [Xen-devel] [PATCH 2/7] ioreq: add internal ioreq initialization support Roger Pau Monne
@ 2019-08-21 14:58 ` Roger Pau Monne
  2019-08-21 16:29   ` Paul Durrant
  2019-08-21 14:59 ` [Xen-devel] [PATCH 4/7] ioreq: allow registering internal ioreq server handler Roger Pau Monne
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 20+ messages in thread
From: Roger Pau Monne @ 2019-08-21 14:58 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, Paul Durrant, Wei Liu, Jan Beulich, Roger Pau Monne

Internal ioreq servers are always processed first, and ioreqs are
dispatched by calling the handler function directly. If no internal
server has registered for an ioreq, it is then forwarded to external
ioreq servers.
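
Selection is done in two passes over the single server list, first
matching only internal servers and then external ones; schematically:

    bool internal = true;

 retry:
    FOR_EACH_IOREQ_SERVER(d, id, s)
    {
        if ( !s->enabled || s->internal != internal )
            continue;
        /* ... range matching, return s on a hit ... */
    }
    if ( internal )
    {
        internal = false;
        goto retry;
    }
    return NULL;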

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/hvm/ioreq.c | 19 ++++++++++++++++++-
 1 file changed, 18 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 23ef9b0c02..3fb6fe9585 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -1305,6 +1305,7 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
     uint8_t type;
     uint64_t addr;
     unsigned int id;
+    bool internal = true;
 
     if ( p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO )
         return NULL;
@@ -1345,11 +1346,12 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
         addr = p->addr;
     }
 
+ retry:
     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
         struct rangeset *r;
 
-        if ( !s->enabled )
+        if ( !s->enabled || s->internal != internal )
             continue;
 
         r = s->range[type];
@@ -1387,6 +1389,12 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
         }
     }
 
+    if ( internal )
+    {
+        internal = false;
+        goto retry;
+    }
+
     return NULL;
 }
 
@@ -1492,9 +1500,18 @@ int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p,
 
     ASSERT(s);
 
+    if ( s->internal && buffered )
+    {
+        ASSERT_UNREACHABLE();
+        return X86EMUL_UNHANDLEABLE;
+    }
+
     if ( buffered )
         return hvm_send_buffered_ioreq(s, proto_p);
 
+    if ( s->internal )
+        return s->handler(curr, proto_p);
+
     if ( unlikely(!vcpu_start_shutdown_deferral(curr)) )
         return X86EMUL_RETRY;
 
-- 
2.22.0



* [Xen-devel] [PATCH 4/7] ioreq: allow registering internal ioreq server handler
  2019-08-21 14:58 [Xen-devel] [PATCH 0/7] ioreq: add support for internal servers Roger Pau Monne
                   ` (2 preceding siblings ...)
  2019-08-21 14:58 ` [Xen-devel] [PATCH 3/7] ioreq: allow dispatching ioreqs to internal servers Roger Pau Monne
@ 2019-08-21 14:59 ` Roger Pau Monne
  2019-08-21 16:35   ` Paul Durrant
  2019-08-21 14:59 ` [Xen-devel] [PATCH 5/7] ioreq: allow decoding accesses to MMCFG regions Roger Pau Monne
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 20+ messages in thread
From: Roger Pau Monne @ 2019-08-21 14:59 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, Paul Durrant, Wei Liu, Jan Beulich, Roger Pau Monne

Provide a routine to register the handler for an internal ioreq
server. Note the handler can only be set once.
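
The expected usage is to register the handler right after creating the
server (the handler name below is a made-up example; vPCI does exactly
this in patch 6):

    rc = hvm_create_ioreq_server(d, HVM_IOREQSRV_BUFIOREQ_OFF, &id, true);
    if ( rc )
        return rc;

    /* Fails with -EBUSY if a handler has already been set. */
    rc = hvm_add_ioreq_handler(d, id, my_handler);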

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/hvm/ioreq.c        | 32 ++++++++++++++++++++++++++++++++
 xen/include/asm-x86/hvm/ioreq.h |  3 +++
 2 files changed, 35 insertions(+)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 3fb6fe9585..d8fea191aa 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -486,6 +486,38 @@ static int hvm_add_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
     return rc;
 }
 
+int hvm_add_ioreq_handler(struct domain *d, ioservid_t id,
+                          int (*handler)(struct vcpu *v, ioreq_t *))
+{
+    struct hvm_ioreq_server *s;
+    int rc = 0;
+
+    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    s = get_ioreq_server(d, id);
+    if ( !s )
+    {
+        rc = -ENOENT;
+        goto out;
+    }
+    if ( !s->internal )
+    {
+        rc = -EINVAL;
+        goto out;
+    }
+    if ( s->handler != NULL )
+    {
+        rc = -EBUSY;
+        goto out;
+    }
+
+    s->handler = handler;
+
+ out:
+    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+
+    return rc;
+}
+
 static void hvm_update_ioreq_evtchn(struct hvm_ioreq_server *s,
                                     struct hvm_ioreq_vcpu *sv)
 {
diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h
index e8119b26a6..2131c944d4 100644
--- a/xen/include/asm-x86/hvm/ioreq.h
+++ b/xen/include/asm-x86/hvm/ioreq.h
@@ -55,6 +55,9 @@ unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered);
 
 void hvm_ioreq_init(struct domain *d);
 
+int hvm_add_ioreq_handler(struct domain *d, ioservid_t id,
+                          int (*handler)(struct vcpu *v, ioreq_t *));
+
 #endif /* __ASM_X86_HVM_IOREQ_H__ */
 
 /*
-- 
2.22.0



* [Xen-devel] [PATCH 5/7] ioreq: allow decoding accesses to MMCFG regions
  2019-08-21 14:58 [Xen-devel] [PATCH 0/7] ioreq: add support for internal servers Roger Pau Monne
                   ` (3 preceding siblings ...)
  2019-08-21 14:59 ` [Xen-devel] [PATCH 4/7] ioreq: allow registering internal ioreq server handler Roger Pau Monne
@ 2019-08-21 14:59 ` Roger Pau Monne
  2019-08-21 14:59 ` [Xen-devel] [PATCH 6/7] vpci: register as an internal ioreq server Roger Pau Monne
  2019-08-21 14:59 ` [Xen-devel] [PATCH 7/7] ioreq: provide support for long-running operations Roger Pau Monne
  6 siblings, 0 replies; 20+ messages in thread
From: Roger Pau Monne @ 2019-08-21 14:59 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, Paul Durrant, Wei Liu, Jan Beulich, Roger Pau Monne

Pick up on the infrastructure already added for vPCI and allow the
ioreq machinery to decode accesses to MMCFG regions registered for a
domain. This infrastructure is still only accessible to internal
callers, so MMCFG regions can only be registered from the internal
domain builder used by PVH dom0.

Note that the vPCI infrastructure to decode and handle accesses to
MMCFG regions will be removed in follow-up patches, when vPCI is
switched to become an internal ioreq server.
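
For reference, MMCFG (ECAM) regions encode the target device in the
offset: 4KB of extended config space per function, 8 functions per
device and 32 devices per bus, so each bus covers 1MB. A worked
example of what hvm_mmcfg_decode_addr() computes, for a hypothetical
region based at 0xe0000000 with start_bus 0:

    /* Access to bus 1, device 2, function 3, register 0x44: */
    paddr_t addr = 0xe0000000 + (1ul << 20) + (2ul << 15) + (3ul << 12) + 0x44;
    /* bdf = MMCFG_BDF(addr - base) = 0x0113, reg = addr & 0xfff = 0x44 */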

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/hvm/hvm.c          |  2 +-
 xen/arch/x86/hvm/io.c           | 36 +++++---------
 xen/arch/x86/hvm/ioreq.c        | 88 +++++++++++++++++++++++++++++++--
 xen/include/asm-x86/hvm/io.h    | 12 ++++-
 xen/include/asm-x86/hvm/ioreq.h |  6 +++
 5 files changed, 113 insertions(+), 31 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 029eea3b85..b7a53377a5 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -741,7 +741,7 @@ void hvm_domain_destroy(struct domain *d)
         xfree(ioport);
     }
 
-    destroy_vpci_mmcfg(d);
+    hvm_ioreq_free_mmcfg(d);
 }
 
 static int hvm_save_tsc_adjust(struct vcpu *v, hvm_domain_context_t *h)
diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
index a5b0a23f06..6585767c03 100644
--- a/xen/arch/x86/hvm/io.c
+++ b/xen/arch/x86/hvm/io.c
@@ -279,6 +279,18 @@ unsigned int hvm_pci_decode_addr(unsigned int cf8, unsigned int addr,
     return CF8_ADDR_LO(cf8) | (addr & 3);
 }
 
+unsigned int hvm_mmcfg_decode_addr(const struct hvm_mmcfg *mmcfg,
+                                   paddr_t addr, pci_sbdf_t *sbdf)
+{
+    addr -= mmcfg->addr;
+    sbdf->bdf = MMCFG_BDF(addr);
+    sbdf->bus += mmcfg->start_bus;
+    sbdf->seg = mmcfg->segment;
+
+    return addr & (PCI_CFG_SPACE_EXP_SIZE - 1);
+}
+
+
 /* Do some sanity checks. */
 static bool vpci_access_allowed(unsigned int reg, unsigned int len)
 {
@@ -383,14 +395,6 @@ void register_vpci_portio_handler(struct domain *d)
     handler->ops = &vpci_portio_ops;
 }
 
-struct hvm_mmcfg {
-    struct list_head next;
-    paddr_t addr;
-    unsigned int size;
-    uint16_t segment;
-    uint8_t start_bus;
-};
-
 /* Handlers to trap PCI MMCFG config accesses. */
 static const struct hvm_mmcfg *vpci_mmcfg_find(const struct domain *d,
                                                paddr_t addr)
@@ -558,22 +562,6 @@ int register_vpci_mmcfg_handler(struct domain *d, paddr_t addr,
     return 0;
 }
 
-void destroy_vpci_mmcfg(struct domain *d)
-{
-    struct list_head *mmcfg_regions = &d->arch.hvm.mmcfg_regions;
-
-    write_lock(&d->arch.hvm.mmcfg_lock);
-    while ( !list_empty(mmcfg_regions) )
-    {
-        struct hvm_mmcfg *mmcfg = list_first_entry(mmcfg_regions,
-                                                   struct hvm_mmcfg, next);
-
-        list_del(&mmcfg->next);
-        xfree(mmcfg);
-    }
-    write_unlock(&d->arch.hvm.mmcfg_lock);
-}
-
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index d8fea191aa..10c0f7a574 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -690,6 +690,22 @@ static void hvm_ioreq_server_free_rangesets(struct hvm_ioreq_server *s)
         rangeset_destroy(s->range[i]);
 }
 
+void hvm_ioreq_free_mmcfg(struct domain *d)
+{
+    struct list_head *mmcfg_regions = &d->arch.hvm.mmcfg_regions;
+
+    write_lock(&d->arch.hvm.mmcfg_lock);
+    while ( !list_empty(mmcfg_regions) )
+    {
+        struct hvm_mmcfg *mmcfg = list_first_entry(mmcfg_regions,
+                                                   struct hvm_mmcfg, next);
+
+        list_del(&mmcfg->next);
+        xfree(mmcfg);
+    }
+    write_unlock(&d->arch.hvm.mmcfg_lock);
+}
+
 static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s,
                                             ioservid_t id)
 {
@@ -1329,6 +1345,19 @@ void hvm_destroy_all_ioreq_servers(struct domain *d)
     spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
 }
 
+static const struct hvm_mmcfg *mmcfg_find(const struct domain *d,
+                                          paddr_t addr)
+{
+    const struct hvm_mmcfg *mmcfg;
+
+    list_for_each_entry ( mmcfg, &d->arch.hvm.mmcfg_regions, next )
+        if ( addr >= mmcfg->addr && addr < mmcfg->addr + mmcfg->size )
+            return mmcfg;
+
+    return NULL;
+}
+
+
 struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
                                                  ioreq_t *p)
 {
@@ -1338,27 +1367,34 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
     uint64_t addr;
     unsigned int id;
     bool internal = true;
+    const struct hvm_mmcfg *mmcfg;
 
     if ( p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO )
         return NULL;
 
     cf8 = d->arch.hvm.pci_cf8;
 
-    if ( p->type == IOREQ_TYPE_PIO &&
-         (p->addr & ~3) == 0xcfc &&
-         CF8_ENABLED(cf8) )
+    read_lock(&d->arch.hvm.mmcfg_lock);
+    if ( (p->type == IOREQ_TYPE_PIO &&
+          (p->addr & ~3) == 0xcfc &&
+          CF8_ENABLED(cf8)) ||
+         (p->type == IOREQ_TYPE_COPY &&
+          (mmcfg = mmcfg_find(d, p->addr)) != NULL) )
     {
         uint32_t x86_fam;
         pci_sbdf_t sbdf;
         unsigned int reg;
 
-        reg = hvm_pci_decode_addr(cf8, p->addr, &sbdf);
+        reg = p->type == IOREQ_TYPE_PIO ? hvm_pci_decode_addr(cf8, p->addr,
+                                                              &sbdf)
+                                        : hvm_mmcfg_decode_addr(mmcfg, p->addr,
+                                                                &sbdf);
 
         /* PCI config data cycle */
         type = XEN_DMOP_IO_RANGE_PCI;
         addr = ((uint64_t)sbdf.sbdf << 32) | reg;
         /* AMD extended configuration space access? */
-        if ( CF8_ADDR_HI(cf8) &&
+        if ( p->type == IOREQ_TYPE_PIO && CF8_ADDR_HI(cf8) &&
              d->arch.cpuid->x86_vendor == X86_VENDOR_AMD &&
              (x86_fam = get_cpu_family(
                  d->arch.cpuid->basic.raw_fms, NULL, NULL)) > 0x10 &&
@@ -1377,6 +1413,7 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
                 XEN_DMOP_IO_RANGE_PORT : XEN_DMOP_IO_RANGE_MEMORY;
         addr = p->addr;
     }
+    read_unlock(&d->arch.hvm.mmcfg_lock);
 
  retry:
     FOR_EACH_IOREQ_SERVER(d, id, s)
@@ -1629,6 +1666,47 @@ void hvm_ioreq_init(struct domain *d)
     register_portio_handler(d, 0xcf8, 4, hvm_access_cf8);
 }
 
+int hvm_ioreq_register_mmcfg(struct domain *d, paddr_t addr,
+                             unsigned int start_bus, unsigned int end_bus,
+                             unsigned int seg)
+{
+    struct hvm_mmcfg *mmcfg, *new;
+
+    if ( start_bus > end_bus )
+        return -EINVAL;
+
+    new = xmalloc(struct hvm_mmcfg);
+    if ( !new )
+        return -ENOMEM;
+
+    new->addr = addr + (start_bus << 20);
+    new->start_bus = start_bus;
+    new->segment = seg;
+    new->size = (end_bus - start_bus + 1) << 20;
+
+    write_lock(&d->arch.hvm.mmcfg_lock);
+    list_for_each_entry ( mmcfg, &d->arch.hvm.mmcfg_regions, next )
+        if ( new->addr < mmcfg->addr + mmcfg->size &&
+             mmcfg->addr < new->addr + new->size )
+        {
+            int ret = -EEXIST;
+
+            if ( new->addr == mmcfg->addr &&
+                 new->start_bus == mmcfg->start_bus &&
+                 new->segment == mmcfg->segment &&
+                 new->size == mmcfg->size )
+                ret = 0;
+            write_unlock(&d->arch.hvm.mmcfg_lock);
+            xfree(new);
+            return ret;
+        }
+
+    list_add(&new->next, &d->arch.hvm.mmcfg_regions);
+    write_unlock(&d->arch.hvm.mmcfg_lock);
+
+    return 0;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-x86/hvm/io.h b/xen/include/asm-x86/hvm/io.h
index 7ceb119b64..26f0489171 100644
--- a/xen/include/asm-x86/hvm/io.h
+++ b/xen/include/asm-x86/hvm/io.h
@@ -165,9 +165,19 @@ void stdvga_deinit(struct domain *d);
 
 extern void hvm_dpci_msi_eoi(struct domain *d, int vector);
 
-/* Decode a PCI port IO access into a bus/slot/func/reg. */
+struct hvm_mmcfg {
+    struct list_head next;
+    paddr_t addr;
+    unsigned int size;
+    uint16_t segment;
+    uint8_t start_bus;
+};
+
+/* Decode a PCI port IO or MMCFG access into a bus/slot/func/reg. */
 unsigned int hvm_pci_decode_addr(unsigned int cf8, unsigned int addr,
                                  pci_sbdf_t *sbdf);
+unsigned int hvm_mmcfg_decode_addr(const struct hvm_mmcfg *mmcfg,
+                                   paddr_t addr, pci_sbdf_t *sbdf);
 
 /*
  * HVM port IO handler that performs forwarding of guest IO ports into machine
diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h
index 2131c944d4..10b9586885 100644
--- a/xen/include/asm-x86/hvm/ioreq.h
+++ b/xen/include/asm-x86/hvm/ioreq.h
@@ -58,6 +58,12 @@ void hvm_ioreq_init(struct domain *d);
 int hvm_add_ioreq_handler(struct domain *d, ioservid_t id,
                           int (*handler)(struct vcpu *v, ioreq_t *));
 
+int hvm_ioreq_register_mmcfg(struct domain *d, paddr_t addr,
+                             unsigned int start_bus, unsigned int end_bus,
+                             unsigned int seg);
+
+void hvm_ioreq_free_mmcfg(struct domain *d);
+
 #endif /* __ASM_X86_HVM_IOREQ_H__ */
 
 /*
-- 
2.22.0



* [Xen-devel] [PATCH 6/7] vpci: register as an internal ioreq server
  2019-08-21 14:58 [Xen-devel] [PATCH 0/7] ioreq: add support for internal servers Roger Pau Monne
                   ` (4 preceding siblings ...)
  2019-08-21 14:59 ` [Xen-devel] [PATCH 5/7] ioreq: allow decoding accesses to MMCFG regions Roger Pau Monne
@ 2019-08-21 14:59 ` Roger Pau Monne
  2019-08-21 14:59 ` [Xen-devel] [PATCH 7/7] ioreq: provide support for long-running operations Roger Pau Monne
  6 siblings, 0 replies; 20+ messages in thread
From: Roger Pau Monne @ 2019-08-21 14:59 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan,
	Julien Grall, Paul Durrant, Jan Beulich, Roger Pau Monne

Switch vPCI to become an internal ioreq server, and hence drop all the
vPCI-specific decoding and trapping of PCI IO ports and MMCFG regions.

This allows unifying the vPCI code with the ioreq infrastructure,
opening the door to domains having PCI accesses handled by vPCI and
other ioreq servers at the same time.
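
Note that the encoding of IOREQ_TYPE_PCI_CONFIG requests already
places the SBDF in the top 32 bits of the address (see
hvm_select_ioreq_server()), which is what the new vPCI handler relies
on to decode accesses:

    /* Encoding at selection time: */
    addr = ((uint64_t)sbdf.sbdf << 32) | reg;

    /* Decoding in the vPCI ioreq handler: */
    sbdf.sbdf = req->addr >> 32;
    /* The low bits of req->addr carry the config space register. */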

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/hvm/dom0_build.c       |   9 +-
 xen/arch/x86/hvm/hvm.c              |   5 +-
 xen/arch/x86/hvm/io.c               | 272 ----------------------------
 xen/arch/x86/hvm/ioreq.c            |   5 +
 xen/arch/x86/physdev.c              |   7 +-
 xen/drivers/passthrough/x86/iommu.c |   2 +-
 xen/drivers/vpci/vpci.c             |  54 ++++++
 xen/include/asm-x86/hvm/io.h        |   2 +-
 xen/include/xen/vpci.h              |   3 +
 9 files changed, 77 insertions(+), 282 deletions(-)

diff --git a/xen/arch/x86/hvm/dom0_build.c b/xen/arch/x86/hvm/dom0_build.c
index 8845399ae9..7925189fed 100644
--- a/xen/arch/x86/hvm/dom0_build.c
+++ b/xen/arch/x86/hvm/dom0_build.c
@@ -29,6 +29,7 @@
 
 #include <asm/bzimage.h>
 #include <asm/dom0_build.h>
+#include <asm/hvm/ioreq.h>
 #include <asm/hvm/support.h>
 #include <asm/io_apic.h>
 #include <asm/p2m.h>
@@ -1117,10 +1118,10 @@ static void __hwdom_init pvh_setup_mmcfg(struct domain *d)
 
     for ( i = 0; i < pci_mmcfg_config_num; i++ )
     {
-        rc = register_vpci_mmcfg_handler(d, pci_mmcfg_config[i].address,
-                                         pci_mmcfg_config[i].start_bus_number,
-                                         pci_mmcfg_config[i].end_bus_number,
-                                         pci_mmcfg_config[i].pci_segment);
+        rc = hvm_ioreq_register_mmcfg(d, pci_mmcfg_config[i].address,
+                                      pci_mmcfg_config[i].start_bus_number,
+                                      pci_mmcfg_config[i].end_bus_number,
+                                      pci_mmcfg_config[i].pci_segment);
         if ( rc )
             printk("Unable to setup MMCFG handler at %#lx for segment %u\n",
                    pci_mmcfg_config[i].address,
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index b7a53377a5..3fcf46779b 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -644,10 +644,13 @@ int hvm_domain_initialise(struct domain *d)
         d->arch.hvm.io_bitmap = hvm_io_bitmap;
 
     register_g2m_portio_handler(d);
-    register_vpci_portio_handler(d);
 
     hvm_ioreq_init(d);
 
+    rc = vpci_register_ioreq(d);
+    if ( rc )
+        goto fail1;
+
     hvm_init_guest_time(d);
 
     d->arch.hvm.params[HVM_PARAM_TRIPLE_FAULT_REASON] = SHUTDOWN_reboot;
diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
index 6585767c03..9c323d17ef 100644
--- a/xen/arch/x86/hvm/io.c
+++ b/xen/arch/x86/hvm/io.c
@@ -290,278 +290,6 @@ unsigned int hvm_mmcfg_decode_addr(const struct hvm_mmcfg *mmcfg,
     return addr & (PCI_CFG_SPACE_EXP_SIZE - 1);
 }
 
-
-/* Do some sanity checks. */
-static bool vpci_access_allowed(unsigned int reg, unsigned int len)
-{
-    /* Check access size. */
-    if ( len != 1 && len != 2 && len != 4 && len != 8 )
-        return false;
-
-    /* Check that access is size aligned. */
-    if ( (reg & (len - 1)) )
-        return false;
-
-    return true;
-}
-
-/* vPCI config space IO ports handlers (0xcf8/0xcfc). */
-static bool vpci_portio_accept(const struct hvm_io_handler *handler,
-                               const ioreq_t *p)
-{
-    return (p->addr == 0xcf8 && p->size == 4) || (p->addr & ~3) == 0xcfc;
-}
-
-static int vpci_portio_read(const struct hvm_io_handler *handler,
-                            uint64_t addr, uint32_t size, uint64_t *data)
-{
-    const struct domain *d = current->domain;
-    unsigned int reg;
-    pci_sbdf_t sbdf;
-    uint32_t cf8;
-
-    *data = ~(uint64_t)0;
-
-    if ( addr == 0xcf8 )
-    {
-        ASSERT(size == 4);
-        *data = d->arch.hvm.pci_cf8;
-        return X86EMUL_OKAY;
-    }
-
-    ASSERT((addr & ~3) == 0xcfc);
-    cf8 = ACCESS_ONCE(d->arch.hvm.pci_cf8);
-    if ( !CF8_ENABLED(cf8) )
-        return X86EMUL_UNHANDLEABLE;
-
-    reg = hvm_pci_decode_addr(cf8, addr, &sbdf);
-
-    if ( !vpci_access_allowed(reg, size) )
-        return X86EMUL_OKAY;
-
-    *data = vpci_read(sbdf, reg, size);
-
-    return X86EMUL_OKAY;
-}
-
-static int vpci_portio_write(const struct hvm_io_handler *handler,
-                             uint64_t addr, uint32_t size, uint64_t data)
-{
-    struct domain *d = current->domain;
-    unsigned int reg;
-    pci_sbdf_t sbdf;
-    uint32_t cf8;
-
-    if ( addr == 0xcf8 )
-    {
-        ASSERT(size == 4);
-        d->arch.hvm.pci_cf8 = data;
-        return X86EMUL_OKAY;
-    }
-
-    ASSERT((addr & ~3) == 0xcfc);
-    cf8 = ACCESS_ONCE(d->arch.hvm.pci_cf8);
-    if ( !CF8_ENABLED(cf8) )
-        return X86EMUL_UNHANDLEABLE;
-
-    reg = hvm_pci_decode_addr(cf8, addr, &sbdf);
-
-    if ( !vpci_access_allowed(reg, size) )
-        return X86EMUL_OKAY;
-
-    vpci_write(sbdf, reg, size, data);
-
-    return X86EMUL_OKAY;
-}
-
-static const struct hvm_io_ops vpci_portio_ops = {
-    .accept = vpci_portio_accept,
-    .read = vpci_portio_read,
-    .write = vpci_portio_write,
-};
-
-void register_vpci_portio_handler(struct domain *d)
-{
-    struct hvm_io_handler *handler;
-
-    if ( !has_vpci(d) )
-        return;
-
-    handler = hvm_next_io_handler(d);
-    if ( !handler )
-        return;
-
-    handler->type = IOREQ_TYPE_PIO;
-    handler->ops = &vpci_portio_ops;
-}
-
-/* Handlers to trap PCI MMCFG config accesses. */
-static const struct hvm_mmcfg *vpci_mmcfg_find(const struct domain *d,
-                                               paddr_t addr)
-{
-    const struct hvm_mmcfg *mmcfg;
-
-    list_for_each_entry ( mmcfg, &d->arch.hvm.mmcfg_regions, next )
-        if ( addr >= mmcfg->addr && addr < mmcfg->addr + mmcfg->size )
-            return mmcfg;
-
-    return NULL;
-}
-
-bool vpci_is_mmcfg_address(const struct domain *d, paddr_t addr)
-{
-    return vpci_mmcfg_find(d, addr);
-}
-
-static unsigned int vpci_mmcfg_decode_addr(const struct hvm_mmcfg *mmcfg,
-                                           paddr_t addr, pci_sbdf_t *sbdf)
-{
-    addr -= mmcfg->addr;
-    sbdf->bdf = MMCFG_BDF(addr);
-    sbdf->bus += mmcfg->start_bus;
-    sbdf->seg = mmcfg->segment;
-
-    return addr & (PCI_CFG_SPACE_EXP_SIZE - 1);
-}
-
-static int vpci_mmcfg_accept(struct vcpu *v, unsigned long addr)
-{
-    struct domain *d = v->domain;
-    bool found;
-
-    read_lock(&d->arch.hvm.mmcfg_lock);
-    found = vpci_mmcfg_find(d, addr);
-    read_unlock(&d->arch.hvm.mmcfg_lock);
-
-    return found;
-}
-
-static int vpci_mmcfg_read(struct vcpu *v, unsigned long addr,
-                           unsigned int len, unsigned long *data)
-{
-    struct domain *d = v->domain;
-    const struct hvm_mmcfg *mmcfg;
-    unsigned int reg;
-    pci_sbdf_t sbdf;
-
-    *data = ~0ul;
-
-    read_lock(&d->arch.hvm.mmcfg_lock);
-    mmcfg = vpci_mmcfg_find(d, addr);
-    if ( !mmcfg )
-    {
-        read_unlock(&d->arch.hvm.mmcfg_lock);
-        return X86EMUL_RETRY;
-    }
-
-    reg = vpci_mmcfg_decode_addr(mmcfg, addr, &sbdf);
-    read_unlock(&d->arch.hvm.mmcfg_lock);
-
-    if ( !vpci_access_allowed(reg, len) ||
-         (reg + len) > PCI_CFG_SPACE_EXP_SIZE )
-        return X86EMUL_OKAY;
-
-    /*
-     * According to the PCIe 3.1A specification:
-     *  - Configuration Reads and Writes must usually be DWORD or smaller
-     *    in size.
-     *  - Because Root Complex implementations are not required to support
-     *    accesses to a RCRB that cross DW boundaries [...] software
-     *    should take care not to cause the generation of such accesses
-     *    when accessing a RCRB unless the Root Complex will support the
-     *    access.
-     *  Xen however supports 8byte accesses by splitting them into two
-     *  4byte accesses.
-     */
-    *data = vpci_read(sbdf, reg, min(4u, len));
-    if ( len == 8 )
-        *data |= (uint64_t)vpci_read(sbdf, reg + 4, 4) << 32;
-
-    return X86EMUL_OKAY;
-}
-
-static int vpci_mmcfg_write(struct vcpu *v, unsigned long addr,
-                            unsigned int len, unsigned long data)
-{
-    struct domain *d = v->domain;
-    const struct hvm_mmcfg *mmcfg;
-    unsigned int reg;
-    pci_sbdf_t sbdf;
-
-    read_lock(&d->arch.hvm.mmcfg_lock);
-    mmcfg = vpci_mmcfg_find(d, addr);
-    if ( !mmcfg )
-    {
-        read_unlock(&d->arch.hvm.mmcfg_lock);
-        return X86EMUL_RETRY;
-    }
-
-    reg = vpci_mmcfg_decode_addr(mmcfg, addr, &sbdf);
-    read_unlock(&d->arch.hvm.mmcfg_lock);
-
-    if ( !vpci_access_allowed(reg, len) ||
-         (reg + len) > PCI_CFG_SPACE_EXP_SIZE )
-        return X86EMUL_OKAY;
-
-    vpci_write(sbdf, reg, min(4u, len), data);
-    if ( len == 8 )
-        vpci_write(sbdf, reg + 4, 4, data >> 32);
-
-    return X86EMUL_OKAY;
-}
-
-static const struct hvm_mmio_ops vpci_mmcfg_ops = {
-    .check = vpci_mmcfg_accept,
-    .read = vpci_mmcfg_read,
-    .write = vpci_mmcfg_write,
-};
-
-int register_vpci_mmcfg_handler(struct domain *d, paddr_t addr,
-                                unsigned int start_bus, unsigned int end_bus,
-                                unsigned int seg)
-{
-    struct hvm_mmcfg *mmcfg, *new;
-
-    ASSERT(is_hardware_domain(d));
-
-    if ( start_bus > end_bus )
-        return -EINVAL;
-
-    new = xmalloc(struct hvm_mmcfg);
-    if ( !new )
-        return -ENOMEM;
-
-    new->addr = addr + (start_bus << 20);
-    new->start_bus = start_bus;
-    new->segment = seg;
-    new->size = (end_bus - start_bus + 1) << 20;
-
-    write_lock(&d->arch.hvm.mmcfg_lock);
-    list_for_each_entry ( mmcfg, &d->arch.hvm.mmcfg_regions, next )
-        if ( new->addr < mmcfg->addr + mmcfg->size &&
-             mmcfg->addr < new->addr + new->size )
-        {
-            int ret = -EEXIST;
-
-            if ( new->addr == mmcfg->addr &&
-                 new->start_bus == mmcfg->start_bus &&
-                 new->segment == mmcfg->segment &&
-                 new->size == mmcfg->size )
-                ret = 0;
-            write_unlock(&d->arch.hvm.mmcfg_lock);
-            xfree(new);
-            return ret;
-        }
-
-    if ( list_empty(&d->arch.hvm.mmcfg_regions) )
-        register_mmio_handler(d, &vpci_mmcfg_ops);
-
-    list_add(&new->next, &d->arch.hvm.mmcfg_regions);
-    write_unlock(&d->arch.hvm.mmcfg_lock);
-
-    return 0;
-}
-
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 10c0f7a574..b2582bd3a0 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -1707,6 +1707,11 @@ int hvm_ioreq_register_mmcfg(struct domain *d, paddr_t addr,
     return 0;
 }
 
+bool hvm_is_mmcfg_address(const struct domain *d, paddr_t addr)
+{
+    return mmcfg_find(d, addr);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/x86/physdev.c b/xen/arch/x86/physdev.c
index 3a3c15890b..a48b220fc3 100644
--- a/xen/arch/x86/physdev.c
+++ b/xen/arch/x86/physdev.c
@@ -11,6 +11,7 @@
 #include <asm/current.h>
 #include <asm/io_apic.h>
 #include <asm/msi.h>
+#include <asm/hvm/ioreq.h>
 #include <asm/hvm/irq.h>
 #include <asm/hypercall.h>
 #include <public/xen.h>
@@ -562,9 +563,9 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
              * For HVM (PVH) domains try to add the newly found MMCFG to the
              * domain.
              */
-            ret = register_vpci_mmcfg_handler(currd, info.address,
-                                              info.start_bus, info.end_bus,
-                                              info.segment);
+            ret = hvm_ioreq_register_mmcfg(currd, info.address,
+                                           info.start_bus, info.end_bus,
+                                           info.segment);
         }
 
         break;
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index fd05075bb5..e0f3da91ce 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -244,7 +244,7 @@ static bool __hwdom_init hwdom_iommu_map(const struct domain *d,
      * TODO: runtime added MMCFG regions are not checked to make sure they
      * don't overlap with already mapped regions, thus preventing trapping.
      */
-    if ( has_vpci(d) && vpci_is_mmcfg_address(d, pfn_to_paddr(pfn)) )
+    if ( hvm_is_mmcfg_address(d, pfn_to_paddr(pfn)) )
         return false;
 
     return true;
diff --git a/xen/drivers/vpci/vpci.c b/xen/drivers/vpci/vpci.c
index 758d9420e7..510e3ee771 100644
--- a/xen/drivers/vpci/vpci.c
+++ b/xen/drivers/vpci/vpci.c
@@ -20,6 +20,8 @@
 #include <xen/sched.h>
 #include <xen/vpci.h>
 
+#include <asm/hvm/ioreq.h>
+
 /* Internal struct to store the emulated PCI registers. */
 struct vpci_register {
     vpci_read_t *read;
@@ -473,6 +475,58 @@ void vpci_write(pci_sbdf_t sbdf, unsigned int reg, unsigned int size,
     spin_unlock(&pdev->vpci->lock);
 }
 
+static int ioreq_handler(struct vcpu *v, ioreq_t *req)
+{
+    pci_sbdf_t sbdf;
+
+    if ( req->type != IOREQ_TYPE_PCI_CONFIG || req->data_is_ptr )
+    {
+        ASSERT_UNREACHABLE();
+        return X86EMUL_UNHANDLEABLE;
+    }
+
+    sbdf.sbdf = req->addr >> 32;
+
+    if ( req->dir )
+        req->data = vpci_read(sbdf, req->addr, req->size);
+    else
+        vpci_write(sbdf, req->addr, req->size, req->data);
+
+    return X86EMUL_OKAY;
+}
+
+int vpci_register_ioreq(struct domain *d)
+{
+    ioservid_t id;
+    int rc;
+
+    if ( !has_vpci(d) )
+        return 0;
+
+    rc = hvm_create_ioreq_server(d, HVM_IOREQSRV_BUFIOREQ_OFF, &id, true);
+    if ( rc )
+        return rc;
+
+    rc = hvm_add_ioreq_handler(d, id, ioreq_handler);
+    if ( rc )
+        return rc;
+
+    if ( is_hardware_domain(d) )
+    {
+        /* Handle all devices in vpci. */
+        rc = hvm_map_io_range_to_ioreq_server(d, id, XEN_DMOP_IO_RANGE_PCI,
+                                              0, ~(uint64_t)0, true);
+        if ( rc )
+            return rc;
+    }
+
+    return hvm_set_ioreq_server_state(d, id, true, true);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-x86/hvm/io.h b/xen/include/asm-x86/hvm/io.h
index 26f0489171..75a24f33bc 100644
--- a/xen/include/asm-x86/hvm/io.h
+++ b/xen/include/asm-x86/hvm/io.h
@@ -196,7 +196,7 @@ int register_vpci_mmcfg_handler(struct domain *d, paddr_t addr,
 void destroy_vpci_mmcfg(struct domain *d);
 
 /* Check if an address is between a MMCFG region for a domain. */
-bool vpci_is_mmcfg_address(const struct domain *d, paddr_t addr);
+bool hvm_is_mmcfg_address(const struct domain *d, paddr_t addr);
 
 #endif /* __ASM_X86_HVM_IO_H__ */
 
diff --git a/xen/include/xen/vpci.h b/xen/include/xen/vpci.h
index 4cf233c779..666dd1ca68 100644
--- a/xen/include/xen/vpci.h
+++ b/xen/include/xen/vpci.h
@@ -23,6 +23,9 @@ typedef int vpci_register_init_t(struct pci_dev *dev);
   static vpci_register_init_t *const x##_entry  \
                __used_section(".data.vpci." p) = x
 
+/* Register the vPCI handler with ioreq. */
+int vpci_register_ioreq(struct domain *d);
+
 /* Add vPCI handlers to device. */
 int __must_check vpci_add_handlers(struct pci_dev *dev);
 
-- 
2.22.0



* [Xen-devel] [PATCH 7/7] ioreq: provide support for long-running operations...
  2019-08-21 14:58 [Xen-devel] [PATCH 0/7] ioreq: add support for internal servers Roger Pau Monne
                   ` (5 preceding siblings ...)
  2019-08-21 14:59 ` [Xen-devel] [PATCH 6/7] vpci: register as an internal ioreq server Roger Pau Monne
@ 2019-08-21 14:59 ` Roger Pau Monne
  2019-08-22  9:15   ` Paul Durrant
  6 siblings, 1 reply; 20+ messages in thread
From: Roger Pau Monne @ 2019-08-21 14:59 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan,
	Julien Grall, Paul Durrant, Jan Beulich, Roger Pau Monne

...and switch vPCI to use this infrastructure for long-running
physmap modification operations.

This allows getting rid of the vPCI-specific modifications done to
handle_hvm_io_completion and generalizes the support for long-running
operations to other internal ioreq servers. Such support is
implemented as a specific handler that can be registered by internal
ioreq servers and that will be called to check for pending work.
Returning true from this handler prevents the vcpu from resuming
execution until the handler returns false.
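
A minimal sketch of what such a handler looks like for a hypothetical
user ('work_pending' is made up; vpci_process_pending is the only real
user so far):

    static bool my_pending(struct vcpu *v)
    {
        /* Do a bounded chunk of the long-running operation here. */
        return work_pending(v);    /* true: keep the vcpu from resuming */
    }

    /* Registered alongside the ioreq handler: */
    rc = hvm_add_ioreq_pending_handler(d, id, my_pending);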

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/hvm/ioreq.c         | 55 ++++++++++++++++++++++++++++----
 xen/drivers/vpci/vpci.c          |  3 ++
 xen/include/asm-x86/hvm/domain.h |  1 +
 xen/include/asm-x86/hvm/ioreq.h  |  2 ++
 4 files changed, 55 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index b2582bd3a0..8e160a0a14 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -186,18 +186,29 @@ bool handle_hvm_io_completion(struct vcpu *v)
     enum hvm_io_completion io_completion;
     unsigned int id;
 
-    if ( has_vpci(d) && vpci_process_pending(v) )
-    {
-        raise_softirq(SCHEDULE_SOFTIRQ);
-        return false;
-    }
-
     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
         struct hvm_ioreq_vcpu *sv;
 
         if ( s->internal )
+        {
+            if ( s->pending && s->pending(v) )
+            {
+                /*
+                 * Need to raise a scheduler irq in order to prevent the guest
+                 * vcpu from resuming execution.
+                 *
+                 * Note this is not required for external ioreq operations
+                 * because in that case the vcpu is marked as blocked, but this
+                 * cannot be done for long-running internal operations, since
+                 * it would prevent the vcpu from being scheduled and thus the
+                 * long running operation from finishing.
+                 */
+                raise_softirq(SCHEDULE_SOFTIRQ);
+                return false;
+            }
             continue;
+        }
 
         list_for_each_entry ( sv,
                               &s->ioreq_vcpu_list,
@@ -518,6 +529,38 @@ int hvm_add_ioreq_handler(struct domain *d, ioservid_t id,
     return rc;
 }
 
+int hvm_add_ioreq_pending_handler(struct domain *d, ioservid_t id,
+                                  bool (*pending)(struct vcpu *v))
+{
+    struct hvm_ioreq_server *s;
+    int rc = 0;
+
+    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    s = get_ioreq_server(d, id);
+    if ( !s )
+    {
+        rc = -ENOENT;
+        goto out;
+    }
+    if ( !s->internal )
+    {
+        rc = -EINVAL;
+        goto out;
+    }
+    if ( s->pending != NULL )
+    {
+        rc = -EBUSY;
+        goto out;
+    }
+
+    s->pending = pending;
+
+ out:
+    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+
+    return rc;
+}
+
 static void hvm_update_ioreq_evtchn(struct hvm_ioreq_server *s,
                                     struct hvm_ioreq_vcpu *sv)
 {
diff --git a/xen/drivers/vpci/vpci.c b/xen/drivers/vpci/vpci.c
index 510e3ee771..54b0f31612 100644
--- a/xen/drivers/vpci/vpci.c
+++ b/xen/drivers/vpci/vpci.c
@@ -508,6 +508,9 @@ int vpci_register_ioreq(struct domain *d)
         return rc;
 
     rc = hvm_add_ioreq_handler(d, id, ioreq_handler);
+    if ( rc )
+        return rc;
+    rc = hvm_add_ioreq_pending_handler(d, id, vpci_process_pending);
     if ( rc )
         return rc;
 
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index f0be303517..80a38ffe48 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -73,6 +73,7 @@ struct hvm_ioreq_server {
         };
         struct {
             int (*handler)(struct vcpu *v, ioreq_t *);
+            bool (*pending)(struct vcpu *v);
         };
     };
 };
diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h
index 10b9586885..cc3e27d059 100644
--- a/xen/include/asm-x86/hvm/ioreq.h
+++ b/xen/include/asm-x86/hvm/ioreq.h
@@ -57,6 +57,8 @@ void hvm_ioreq_init(struct domain *d);
 
 int hvm_add_ioreq_handler(struct domain *d, ioservid_t id,
                           int (*handler)(struct vcpu *v, ioreq_t *));
+int hvm_add_ioreq_pending_handler(struct domain *d, ioservid_t id,
+                                  bool (*pending)(struct vcpu *v));
 
 int hvm_ioreq_register_mmcfg(struct domain *d, paddr_t addr,
                              unsigned int start_bus, unsigned int end_bus,
-- 
2.22.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* Re: [Xen-devel] [PATCH 2/7] ioreq: add internal ioreq initialization support
  2019-08-21 14:58 ` [Xen-devel] [PATCH 2/7] ioreq: add internal ioreq initialization support Roger Pau Monne
@ 2019-08-21 16:24   ` Paul Durrant
  2019-08-22  7:23     ` Roger Pau Monné
  0 siblings, 1 reply; 20+ messages in thread
From: Paul Durrant @ 2019-08-21 16:24 UTC (permalink / raw)
  To: Roger Pau Monne, xen-devel
  Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monne

> -----Original Message-----
> From: Roger Pau Monne <roger.pau@citrix.com>
> Sent: 21 August 2019 15:59
> To: xen-devel@lists.xenproject.org
> Cc: Roger Pau Monne <roger.pau@citrix.com>; Jan Beulich <jbeulich@suse.com>; Andrew Cooper
> <Andrew.Cooper3@citrix.com>; Wei Liu <wl@xen.org>; Paul Durrant <Paul.Durrant@citrix.com>
> Subject: [PATCH 2/7] ioreq: add internal ioreq initialization support
> 
> Add support for internal ioreq servers to initialization and
> deinitialization routines, prevent some functions from being executed
> against internal ioreq servers and add guards to only allow internal
> callers to modify internal ioreq servers. External callers (ie: from
> hypercalls) are only allowed to deal with external ioreq servers.

It's kind of ugly to have the extra 'internal' argument passed to anything other than the create function, so I wonder whether it would be neater to encode it in the ioreq server id. I.e. we have distinct id ranges for internal and external servers. What do you think?
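
E.g. something like this, where all the names and bounds are purely
illustrative:

#define MAX_NR_EXTERNAL_IOREQ_SERVERS 8
#define MAX_NR_INTERNAL_IOREQ_SERVERS 1
#define MAX_NR_IOREQ_SERVERS \
    (MAX_NR_EXTERNAL_IOREQ_SERVERS + MAX_NR_INTERNAL_IOREQ_SERVERS)

/* Internal servers occupy the top of the id space. */
static inline bool hvm_ioreq_is_internal(ioservid_t id)
{
    ASSERT(id < MAX_NR_IOREQ_SERVERS);
    return id >= MAX_NR_EXTERNAL_IOREQ_SERVERS;
}

That way 'internal' never needs to be passed around, since it can always
be derived from the id itself.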

  Paul
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Xen-devel] [PATCH 3/7] ioreq: allow dispatching ioreqs to internal servers
  2019-08-21 14:58 ` [Xen-devel] [PATCH 3/7] ioreq: allow dispatching ioreqs to internal servers Roger Pau Monne
@ 2019-08-21 16:29   ` Paul Durrant
  2019-08-22  7:40     ` Roger Pau Monné
  0 siblings, 1 reply; 20+ messages in thread
From: Paul Durrant @ 2019-08-21 16:29 UTC (permalink / raw)
  To: Roger Pau Monne, xen-devel
  Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monne

> -----Original Message-----
> From: Roger Pau Monne <roger.pau@citrix.com>
> Sent: 21 August 2019 15:59
> To: xen-devel@lists.xenproject.org
> Cc: Roger Pau Monne <roger.pau@citrix.com>; Paul Durrant <Paul.Durrant@citrix.com>; Jan Beulich
> <jbeulich@suse.com>; Andrew Cooper <Andrew.Cooper3@citrix.com>; Wei Liu <wl@xen.org>
> Subject: [PATCH 3/7] ioreq: allow dispatching ioreqs to internal servers
> 
> Internal ioreq servers are always processed first, and ioreqs are
> dispatched by calling the handler function. If no internal servers have
> registered for an ioreq it's then forwarded to external callers.

Distinct id ranges would help here... Internal ids could be walked first, then external. If there's no possibility of interleaving then you don't need the retry.

  Paul

> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
>  xen/arch/x86/hvm/ioreq.c | 19 ++++++++++++++++++-
>  1 file changed, 18 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
> index 23ef9b0c02..3fb6fe9585 100644
> --- a/xen/arch/x86/hvm/ioreq.c
> +++ b/xen/arch/x86/hvm/ioreq.c
> @@ -1305,6 +1305,7 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
>      uint8_t type;
>      uint64_t addr;
>      unsigned int id;
> +    bool internal = true;
> 
>      if ( p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO )
>          return NULL;
> @@ -1345,11 +1346,12 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
>          addr = p->addr;
>      }
> 
> + retry:
>      FOR_EACH_IOREQ_SERVER(d, id, s)
>      {
>          struct rangeset *r;
> 
> -        if ( !s->enabled )
> +        if ( !s->enabled || s->internal != internal )
>              continue;
> 
>          r = s->range[type];
> @@ -1387,6 +1389,12 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
>          }
>      }
> 
> +    if ( internal )
> +    {
> +        internal = false;
> +        goto retry;
> +    }
> +
>      return NULL;
>  }
> 
> @@ -1492,9 +1500,18 @@ int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p,
> 
>      ASSERT(s);
> 
> +    if ( s->internal && buffered )
> +    {
> +        ASSERT_UNREACHABLE();
> +        return X86EMUL_UNHANDLEABLE;
> +    }
> +
>      if ( buffered )
>          return hvm_send_buffered_ioreq(s, proto_p);
> 
> +    if ( s->internal )
> +        return s->handler(curr, proto_p);
> +
>      if ( unlikely(!vcpu_start_shutdown_deferral(curr)) )
>          return X86EMUL_RETRY;
> 
> --
> 2.22.0

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Xen-devel] [PATCH 4/7] ioreq: allow registering internal ioreq server handler
  2019-08-21 14:59 ` [Xen-devel] [PATCH 4/7] ioreq: allow registering internal ioreq server handler Roger Pau Monne
@ 2019-08-21 16:35   ` Paul Durrant
  2019-08-22  7:43     ` Roger Pau Monné
  0 siblings, 1 reply; 20+ messages in thread
From: Paul Durrant @ 2019-08-21 16:35 UTC (permalink / raw)
  To: Roger Pau Monne, xen-devel
  Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monne

> -----Original Message-----
> From: Roger Pau Monne <roger.pau@citrix.com>
> Sent: 21 August 2019 15:59
> To: xen-devel@lists.xenproject.org
> Cc: Roger Pau Monne <roger.pau@citrix.com>; Paul Durrant <Paul.Durrant@citrix.com>; Jan Beulich
> <jbeulich@suse.com>; Andrew Cooper <Andrew.Cooper3@citrix.com>; Wei Liu <wl@xen.org>
> Subject: [PATCH 4/7] ioreq: allow registering internal ioreq server handler
> 
> Provide a routine to register the handler for an internal ioreq
> server. Note the handler can only be set once.

I'd prefer the name hvm_set_ioreq_handler(), and I think some sort of guard to prevent enabling an internal server with no handler (probably in the previous patch) would be prudent. Also, why the set-once semantic?
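
For the guard, something along these lines in hvm_set_ioreq_server_state()
is what I have in mind (only a sketch):

/* Refuse to enable an internal server that has no handler set. */
if ( s->internal && enabled && s->handler == NULL )
{
    rc = -EINVAL;
    goto out;
}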

  Paul

> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
>  xen/arch/x86/hvm/ioreq.c        | 32 ++++++++++++++++++++++++++++++++
>  xen/include/asm-x86/hvm/ioreq.h |  3 +++
>  2 files changed, 35 insertions(+)
> 
> diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
> index 3fb6fe9585..d8fea191aa 100644
> --- a/xen/arch/x86/hvm/ioreq.c
> +++ b/xen/arch/x86/hvm/ioreq.c
> @@ -486,6 +486,38 @@ static int hvm_add_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
>      return rc;
>  }
> 
> +int hvm_add_ioreq_handler(struct domain *d, ioservid_t id,
> +                          int (*handler)(struct vcpu *v, ioreq_t *))
> +{
> +    struct hvm_ioreq_server *s;
> +    int rc = 0;
> +
> +    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
> +    s = get_ioreq_server(d, id);
> +    if ( !s )
> +    {
> +        rc = -ENOENT;
> +        goto out;
> +    }
> +    if ( !s->internal )
> +    {
> +        rc = -EINVAL;
> +        goto out;
> +    }
> +    if ( s->handler != NULL )
> +    {
> +        rc = -EBUSY;
> +        goto out;
> +    }
> +
> +    s->handler = handler;
> +
> + out:
> +    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
> +
> +    return rc;
> +}
> +
>  static void hvm_update_ioreq_evtchn(struct hvm_ioreq_server *s,
>                                      struct hvm_ioreq_vcpu *sv)
>  {
> diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h
> index e8119b26a6..2131c944d4 100644
> --- a/xen/include/asm-x86/hvm/ioreq.h
> +++ b/xen/include/asm-x86/hvm/ioreq.h
> @@ -55,6 +55,9 @@ unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered);
> 
>  void hvm_ioreq_init(struct domain *d);
> 
> +int hvm_add_ioreq_handler(struct domain *d, ioservid_t id,
> +                          int (*handler)(struct vcpu *v, ioreq_t *));
> +
>  #endif /* __ASM_X86_HVM_IOREQ_H__ */
> 
>  /*
> --
> 2.22.0

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Xen-devel] [PATCH 2/7] ioreq: add internal ioreq initialization support
  2019-08-21 16:24   ` Paul Durrant
@ 2019-08-22  7:23     ` Roger Pau Monné
  2019-08-22  8:30       ` Paul Durrant
  0 siblings, 1 reply; 20+ messages in thread
From: Roger Pau Monné @ 2019-08-22  7:23 UTC (permalink / raw)
  To: Paul Durrant; +Cc: xen-devel, Wei Liu, Jan Beulich, Andrew Cooper

On Wed, Aug 21, 2019 at 06:24:17PM +0200, Paul Durrant wrote:
> > -----Original Message-----
> > From: Roger Pau Monne <roger.pau@citrix.com>
> > Sent: 21 August 2019 15:59
> > To: xen-devel@lists.xenproject.org
> > Cc: Roger Pau Monne <roger.pau@citrix.com>; Jan Beulich <jbeulich@suse.com>; Andrew Cooper
> > <Andrew.Cooper3@citrix.com>; Wei Liu <wl@xen.org>; Paul Durrant <Paul.Durrant@citrix.com>
> > Subject: [PATCH 2/7] ioreq: add internal ioreq initialization support
> > 
> > Add support for internal ioreq servers to initialization and
> > deinitialization routines, prevent some functions from being executed
> > against internal ioreq servers and add guards to only allow internal
> > callers to modify internal ioreq servers. External callers (ie: from
> > hypercalls) are only allowed to deal with external ioreq servers.
> 
> It's kind of ugly to have the extra 'internal' argument passed to anything other than the create function so I wonder whether it would be neater to encode it in the ioreq server id. I.e. we have distinct id ranges for internal and external servers. What do you think?

That would be fine; I guess I could use the most significant bit in
the id to signal whether the server is internal or external, and
reject dmop calls that target internal servers based on the provided
id. Does that sound sensible?
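
Something like the below, assuming ioservid_t stays a 16-bit value (just
a sketch):

#define IOREQ_SERVER_INTERNAL (1u << 15)

static inline bool hvm_ioreq_is_internal(ioservid_t id)
{
    return id & IOREQ_SERVER_INTERNAL;
}

The dmop paths would then return an error for any id with that bit set.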

Thanks, Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Xen-devel] [PATCH 3/7] ioreq: allow dispatching ioreqs to internal servers
  2019-08-21 16:29   ` Paul Durrant
@ 2019-08-22  7:40     ` Roger Pau Monné
  2019-08-22  8:33       ` Paul Durrant
  0 siblings, 1 reply; 20+ messages in thread
From: Roger Pau Monné @ 2019-08-22  7:40 UTC (permalink / raw)
  To: Paul Durrant; +Cc: xen-devel, Wei Liu, Jan Beulich, Andrew Cooper

On Wed, Aug 21, 2019 at 06:29:04PM +0200, Paul Durrant wrote:
> > -----Original Message-----
> > From: Roger Pau Monne <roger.pau@citrix.com>
> > Sent: 21 August 2019 15:59
> > To: xen-devel@lists.xenproject.org
> > Cc: Roger Pau Monne <roger.pau@citrix.com>; Paul Durrant <Paul.Durrant@citrix.com>; Jan Beulich
> > <jbeulich@suse.com>; Andrew Cooper <Andrew.Cooper3@citrix.com>; Wei Liu <wl@xen.org>
> > Subject: [PATCH 3/7] ioreq: allow dispatching ioreqs to internal servers
> > 
> > Internal ioreq servers are always processed first, and ioreqs are
> > dispatched by calling the handler function. If no internal servers have
> > registered for an ioreq it's then forwarded to external callers.
> 
> Distinct id ranges would help here... Internal ids could be walked first, then external. If there's no possibility of interleaving then you don't need the retry.

So if internal vs external is keyed on the ID then we would end up
with two different arrays in hvm_domain, one for internal and one for
external ioreq servers.

Maybe instead of my previous suggestion it would be better to just
define consecutive ranges for external and internal servers, like:

#define MAX_NR_EXTERNAL_IOREQ_SERVERS 8
#define MAX_NR_INTERNAL_IOREQ_SERVERS 1
#define MAX_NR_IOREQ_SERVERS \
    (MAX_NR_EXTERNAL_IOREQ_SERVERS + MAX_NR_INTERNAL_IOREQ_SERVERS)

#define FOR_EACH_IOREQ_SERVER(d, id, s) \
    for ( (id) = MAX_NR_IOREQ_SERVERS; (id) != 0; ) \
        if ( !(s = GET_IOREQ_SERVER(d, --(id))) ) \
            continue; \
        else

#define FOR_EACH_INTERNAL_IOREQ_SERVER(d, id, s) \
    for ( (id) = MAX_NR_IOREQ_SERVERS; (id) > MAX_NR_EXTERNAL_IOREQ_SERVERS; ) \
        if ( !(s = GET_IOREQ_SERVER(d, --(id))) ) \
            continue; \
        else

#define FOR_EACH_EXTERNAL_IOREQ_SERVER(d, id, s) \
    for ( (id) = MAX_NR_EXTERNAL_IOREQ_SERVERS; (id) != 0; ) \
        if ( !(s = GET_IOREQ_SERVER(d, --(id))) ) \
            continue; \
        else

That would also force FOR_EACH_IOREQ_SERVER to always process internal
ioreq servers first.

We could even have something like:

union {
    struct {
        struct hvm_ioreq_server *external_server[MAX_NR_EXTERNAL_IOREQ_SERVERS];
        struct hvm_ioreq_server *internal_server[MAX_NR_INTERNAL_IOREQ_SERVERS];
    };
    struct hvm_ioreq_server *server[MAX_NR_IOREQ_SERVERS];
};

In order to split the arrays if required.

Thanks, Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Xen-devel] [PATCH 4/7] ioreq: allow registering internal ioreq server handler
  2019-08-21 16:35   ` Paul Durrant
@ 2019-08-22  7:43     ` Roger Pau Monné
  2019-08-22  8:38       ` Paul Durrant
  0 siblings, 1 reply; 20+ messages in thread
From: Roger Pau Monné @ 2019-08-22  7:43 UTC (permalink / raw)
  To: Paul Durrant; +Cc: xen-devel, Wei Liu, Jan Beulich, Andrew Cooper

On Wed, Aug 21, 2019 at 06:35:15PM +0200, Paul Durrant wrote:
> > -----Original Message-----
> > From: Roger Pau Monne <roger.pau@citrix.com>
> > Sent: 21 August 2019 15:59
> > To: xen-devel@lists.xenproject.org
> > Cc: Roger Pau Monne <roger.pau@citrix.com>; Paul Durrant <Paul.Durrant@citrix.com>; Jan Beulich
> > <jbeulich@suse.com>; Andrew Cooper <Andrew.Cooper3@citrix.com>; Wei Liu <wl@xen.org>
> > Subject: [PATCH 4/7] ioreq: allow registering internal ioreq server handler
> > 
> > Provide a routine to register the handler for an internal ioreq
> > server. Note the handler can only be set once.
> 
> I'd prefer the name hvm_set_ioreq_handler(), and I think some sort of guard to prevent enabling an internal server with no handler (probably in the previous patch) would be prudent.

Right, I will add it.

> Also, why the set-once semantic?

Well, I didn't have the need to change the handler of internal ioreq
servers (vPCI), so I've coded it that way. If you think it's better to
allow run-time changes of the handler that's fine; I just didn't have
the need for it given the current use-case, and I thought it would be
safer.

Thanks, Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Xen-devel] [PATCH 2/7] ioreq: add internal ioreq initialization support
  2019-08-22  7:23     ` Roger Pau Monné
@ 2019-08-22  8:30       ` Paul Durrant
  0 siblings, 0 replies; 20+ messages in thread
From: Paul Durrant @ 2019-08-22  8:30 UTC (permalink / raw)
  To: Roger Pau Monne; +Cc: xen-devel, Wei Liu, Jan Beulich, Andrew Cooper

> -----Original Message-----
> From: Roger Pau Monne <roger.pau@citrix.com>
> Sent: 22 August 2019 08:24
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: xen-devel@lists.xenproject.org; Jan Beulich <jbeulich@suse.com>; Andrew Cooper
> <Andrew.Cooper3@citrix.com>; Wei Liu <wl@xen.org>
> Subject: Re: [PATCH 2/7] ioreq: add internal ioreq initialization support
> 
> On Wed, Aug 21, 2019 at 06:24:17PM +0200, Paul Durrant wrote:
> > > -----Original Message-----
> > > From: Roger Pau Monne <roger.pau@citrix.com>
> > > Sent: 21 August 2019 15:59
> > > To: xen-devel@lists.xenproject.org
> > > Cc: Roger Pau Monne <roger.pau@citrix.com>; Jan Beulich <jbeulich@suse.com>; Andrew Cooper
> > > <Andrew.Cooper3@citrix.com>; Wei Liu <wl@xen.org>; Paul Durrant <Paul.Durrant@citrix.com>
> > > Subject: [PATCH 2/7] ioreq: add internal ioreq initialization support
> > >
> > > Add support for internal ioreq servers to initialization and
> > > deinitialization routines, prevent some functions from being executed
> > > against internal ioreq servers and add guards to only allow internal
> > > callers to modify internal ioreq servers. External callers (ie: from
> > > hypercalls) are only allowed to deal with external ioreq servers.
> >
> > It's kind of ugly to have the extra 'internal' argument passed to anything other than the create
> > function, so I wonder whether it would be neater to encode it in the ioreq server id. I.e. we have
> > distinct id ranges for internal and external servers. What do you think?
> 
> That would be fine; I guess I could use the most significant bit in
> the id to signal whether the server is internal or external, and
> reject dmop calls that target internal servers based on the provided
> id. Does that sound sensible?
> 

Yes, that's basically what I was thinking initially, although, as you observe in the thread for patch #3, having two smaller consecutive ranges would be more convenient.

  Paul

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Xen-devel] [PATCH 3/7] ioreq: allow dispatching ioreqs to internal servers
  2019-08-22  7:40     ` Roger Pau Monné
@ 2019-08-22  8:33       ` Paul Durrant
  0 siblings, 0 replies; 20+ messages in thread
From: Paul Durrant @ 2019-08-22  8:33 UTC (permalink / raw)
  To: Roger Pau Monne; +Cc: xen-devel, Wei Liu, Jan Beulich, Andrew Cooper

> -----Original Message-----
> From: Roger Pau Monne <roger.pau@citrix.com>
> Sent: 22 August 2019 08:41
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: xen-devel@lists.xenproject.org; Jan Beulich <jbeulich@suse.com>; Andrew Cooper
> <Andrew.Cooper3@citrix.com>; Wei Liu <wl@xen.org>
> Subject: Re: [PATCH 3/7] ioreq: allow dispatching ioreqs to internal servers
> 
> On Wed, Aug 21, 2019 at 06:29:04PM +0200, Paul Durrant wrote:
> > > -----Original Message-----
> > > From: Roger Pau Monne <roger.pau@citrix.com>
> > > Sent: 21 August 2019 15:59
> > > To: xen-devel@lists.xenproject.org
> > > Cc: Roger Pau Monne <roger.pau@citrix.com>; Paul Durrant <Paul.Durrant@citrix.com>; Jan Beulich
> > > <jbeulich@suse.com>; Andrew Cooper <Andrew.Cooper3@citrix.com>; Wei Liu <wl@xen.org>
> > > Subject: [PATCH 3/7] ioreq: allow dispatching ioreqs to internal servers
> > >
> > > Internal ioreq servers are always processed first, and ioreqs are
> > > dispatched by calling the handler function. If no internal servers have
> > > registered for an ioreq it's then forwarded to external callers.
> >
> > Distinct id ranges would help here... Internal ids could be walked first, then external. If there's
> > no possibility of interleaving then you don't need the retry.
> 
> So if internal vs external is keyed on the ID then we would end up
> with two different arrays in hvm_domain, one for internal and one for
> external ioreq servers.
> 
> Maybe instead of my previous suggestion it would be better to just
> define consecutive ranges for external and internal servers, like:
> 
> #define MAX_NR_EXTERNAL_IOREQ_SERVERS 8
> #define MAX_NR_INTERNAL_IOREQ_SERVERS 1
> #define MAX_NR_IOREQ_SERVERS \
>     (MAX_NR_EXTERNAL_IOREQ_SERVERS + MAX_NR_INTERNAL_IOREQ_SERVERS)
> 
> #define FOR_EACH_IOREQ_SERVER(d, id, s) \
>     for ( (id) = MAX_NR_IOREQ_SERVERS; (id) != 0; ) \
>         if ( !(s = GET_IOREQ_SERVER(d, --(id))) ) \
>             continue; \
>         else
> 
> #define FOR_EACH_INTERNAL_IOREQ_SERVER(d, id, s) \
>     for ( (id) = MAX_NR_IOREQ_SERVERS; (id) > MAX_NR_EXTERNAL_IOREQ_SERVERS; ) \
>         if ( !(s = GET_IOREQ_SERVER(d, --(id))) ) \
>             continue; \
>         else
> 
> #define FOR_EACH_EXTERNAL_IOREQ_SERVER(d, id, s) \
>     for ( (id) = MAX_NR_EXTERNAL_IOREQ_SERVERS; (id) != 0; ) \
>         if ( !(s = GET_IOREQ_SERVER(d, --(id))) ) \
>             continue; \
>         else
> 
> That would also force FOR_EACH_IOREQ_SERVER to always process internal
> ioreq servers first.

Exactly what I was thinking.

> 
> We could even have something like:
> 
> union {
>     struct {
>         struct hvm_ioreq_server *external_server[MAX_NR_EXTERNAL_IOREQ_SERVERS];
>         struct hvm_ioreq_server *internal_server[MAX_NR_INTERNAL_IOREQ_SERVERS];
>     };
>     struct hvm_ioreq_server *server[MAX_NR_IOREQ_SERVERS];
> };
> 
> In order to split the arrays if required.
> 

I'd not considered a union, but it makes sense :-)

  Paul

> Thanks, Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Xen-devel] [PATCH 4/7] ioreq: allow registering internal ioreq server handler
  2019-08-22  7:43     ` Roger Pau Monné
@ 2019-08-22  8:38       ` Paul Durrant
  0 siblings, 0 replies; 20+ messages in thread
From: Paul Durrant @ 2019-08-22  8:38 UTC (permalink / raw)
  To: Roger Pau Monne; +Cc: xen-devel, Wei Liu, Jan Beulich, Andrew Cooper

> -----Original Message-----
> From: Roger Pau Monne <roger.pau@citrix.com>
> Sent: 22 August 2019 08:44
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: xen-devel@lists.xenproject.org; Jan Beulich <jbeulich@suse.com>; Andrew Cooper
> <Andrew.Cooper3@citrix.com>; Wei Liu <wl@xen.org>
> Subject: Re: [PATCH 4/7] ioreq: allow registering internal ioreq server handler
> 
> On Wed, Aug 21, 2019 at 06:35:15PM +0200, Paul Durrant wrote:
> > > -----Original Message-----
> > > From: Roger Pau Monne <roger.pau@citrix.com>
> > > Sent: 21 August 2019 15:59
> > > To: xen-devel@lists.xenproject.org
> > > Cc: Roger Pau Monne <roger.pau@citrix.com>; Paul Durrant <Paul.Durrant@citrix.com>; Jan Beulich
> > > <jbeulich@suse.com>; Andrew Cooper <Andrew.Cooper3@citrix.com>; Wei Liu <wl@xen.org>
> > > Subject: [PATCH 4/7] ioreq: allow registering internal ioreq server handler
> > >
> > > Provide a routine to register the handler for an internal ioreq
> > > server. Note the handler can only be set once.
> >
> > I'd prefer the name hvm_set_ioreq_handler(), and I think some sort of guard to prevent enabling an
> > internal server with no handler (probably in the previous patch) would be prudent.
> 
> Right, I will add it.
> 
> > Also, why the set-once semantic?
> 
> Well, I didn't have the need to change the handler of internal ioreq
> servers (vPCI), so I've coded it that way. If you think it's better to
> allow run-time changes of the handler that's fine; I just didn't have
> the need for it given the current use-case, and I thought it would be
> safer.
> 

I think a more relaxed semantic of only being able to change the handler when the ioreq server is disabled would be fine. Also, I wonder whether you ought to allow handler registration to set some opaque caller context too, rather than assuming that the vcpu is the appropriate context to pass to all handlers?
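
I.e. something like the below, where the exact signature is only a
suggestion:

int hvm_set_ioreq_handler(struct domain *d, ioservid_t id,
                          int (*handler)(struct vcpu *v, ioreq_t *,
                                         void *data),
                          void *data);

The opaque 'data' would be stashed in the ioreq server and handed back on
every dispatch, so a handler that needs more than the vcpu doesn't have
to dig its state out of the domain.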

  Paul

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Xen-devel] [PATCH 7/7] ioreq: provide support for long-running operations...
  2019-08-21 14:59 ` [Xen-devel] [PATCH 7/7] ioreq: provide support for long-running operations Roger Pau Monne
@ 2019-08-22  9:15   ` Paul Durrant
  2019-08-22 12:55     ` Roger Pau Monné
  0 siblings, 1 reply; 20+ messages in thread
From: Paul Durrant @ 2019-08-22  9:15 UTC (permalink / raw)
  To: Roger Pau Monne, xen-devel
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	Andrew Cooper, Tim (Xen.org),
	George Dunlap, Julien Grall, Jan Beulich, Ian Jackson,
	Roger Pau Monne

> -----Original Message-----
> From: Roger Pau Monne <roger.pau@citrix.com>
> Sent: 21 August 2019 15:59
> To: xen-devel@lists.xenproject.org
> Cc: Roger Pau Monne <roger.pau@citrix.com>; Paul Durrant <Paul.Durrant@citrix.com>; Jan Beulich
> <jbeulich@suse.com>; Andrew Cooper <Andrew.Cooper3@citrix.com>; Wei Liu <wl@xen.org>; George Dunlap
> <George.Dunlap@citrix.com>; Ian Jackson <Ian.Jackson@citrix.com>; Julien Grall <julien.grall@arm.com>;
> Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>; Stefano Stabellini <sstabellini@kernel.org>; Tim
> (Xen.org) <tim@xen.org>
> Subject: [PATCH 7/7] ioreq: provide support for long-running operations...
> 
> ...and switch vPCI to use this infrastructure for long-running
> physmap modification operations.
> 
> This allows getting rid of the vPCI-specific modifications done to
> handle_hvm_io_completion and generalizes the support for long-running
> operations to other internal ioreq servers. Such support is implemented
> as a specific handler that can be registered by internal ioreq servers
> and that will be called to check for pending work.
> Returning true from this handler will prevent the vcpu from running
> until the handler returns false.

Rather than having another callback, can the handler not be re-called with the same ioreq? It could return different values depending on whether there is more work to do (requiring another call) or whether it is done and the vcpu can be resumed. Would that work?
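
I.e. something like the below in handle_hvm_io_completion() (only a
sketch, and I'm hand-waving about where the ioreq comes from):

if ( s->internal )
{
    /* Re-issue the ioreq; RETRY means there is still work pending. */
    if ( s->handler(v, &p) == X86EMUL_RETRY )
    {
        raise_softirq(SCHEDULE_SOFTIRQ);
        return false;
    }
    continue;
}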

  Paul

> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
>  xen/arch/x86/hvm/ioreq.c         | 55 ++++++++++++++++++++++++++++----
>  xen/drivers/vpci/vpci.c          |  3 ++
>  xen/include/asm-x86/hvm/domain.h |  1 +
>  xen/include/asm-x86/hvm/ioreq.h  |  2 ++
>  4 files changed, 55 insertions(+), 6 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
> index b2582bd3a0..8e160a0a14 100644
> --- a/xen/arch/x86/hvm/ioreq.c
> +++ b/xen/arch/x86/hvm/ioreq.c
> @@ -186,18 +186,29 @@ bool handle_hvm_io_completion(struct vcpu *v)
>      enum hvm_io_completion io_completion;
>      unsigned int id;
> 
> -    if ( has_vpci(d) && vpci_process_pending(v) )
> -    {
> -        raise_softirq(SCHEDULE_SOFTIRQ);
> -        return false;
> -    }
> -
>      FOR_EACH_IOREQ_SERVER(d, id, s)
>      {
>          struct hvm_ioreq_vcpu *sv;
> 
>          if ( s->internal )
> +        {
> +            if ( s->pending && s->pending(v) )
> +            {
> +                /*
> +                 * Need to raise a scheduler irq in order to prevent the guest
> +                 * vcpu from resuming execution.
> +                 *
> +                 * Note this is not required for external ioreq operations
> +                 * because in that case the vcpu is marked as blocked, but this
> +                 * cannot be done for long-running internal operations, since
> +                 * it would prevent the vcpu from being scheduled and thus the
> +                 * long running operation from finishing.
> +                 */
> +                raise_softirq(SCHEDULE_SOFTIRQ);
> +                return false;
> +            }
>              continue;
> +        }
> 
>          list_for_each_entry ( sv,
>                                &s->ioreq_vcpu_list,
> @@ -518,6 +529,38 @@ int hvm_add_ioreq_handler(struct domain *d, ioservid_t id,
>      return rc;
>  }
> 
> +int hvm_add_ioreq_pending_handler(struct domain *d, ioservid_t id,
> +                                  bool (*pending)(struct vcpu *v))
> +{
> +    struct hvm_ioreq_server *s;
> +    int rc = 0;
> +
> +    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
> +    s = get_ioreq_server(d, id);
> +    if ( !s )
> +    {
> +        rc = -ENOENT;
> +        goto out;
> +    }
> +    if ( !s->internal )
> +    {
> +        rc = -EINVAL;
> +        goto out;
> +    }
> +    if ( s->pending != NULL )
> +    {
> +        rc = -EBUSY;
> +        goto out;
> +    }
> +
> +    s->pending = pending;
> +
> + out:
> +    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
> +
> +    return rc;
> +}
> +
>  static void hvm_update_ioreq_evtchn(struct hvm_ioreq_server *s,
>                                      struct hvm_ioreq_vcpu *sv)
>  {
> diff --git a/xen/drivers/vpci/vpci.c b/xen/drivers/vpci/vpci.c
> index 510e3ee771..54b0f31612 100644
> --- a/xen/drivers/vpci/vpci.c
> +++ b/xen/drivers/vpci/vpci.c
> @@ -508,6 +508,9 @@ int vpci_register_ioreq(struct domain *d)
>          return rc;
> 
>      rc = hvm_add_ioreq_handler(d, id, ioreq_handler);
> +    if ( rc )
> +        return rc;
> +    rc = hvm_add_ioreq_pending_handler(d, id, vpci_process_pending);
>      if ( rc )
>          return rc;
> 
> diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
> index f0be303517..80a38ffe48 100644
> --- a/xen/include/asm-x86/hvm/domain.h
> +++ b/xen/include/asm-x86/hvm/domain.h
> @@ -73,6 +73,7 @@ struct hvm_ioreq_server {
>          };
>          struct {
>              int (*handler)(struct vcpu *v, ioreq_t *);
> +            bool (*pending)(struct vcpu *v);
>          };
>      };
>  };
> diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h
> index 10b9586885..cc3e27d059 100644
> --- a/xen/include/asm-x86/hvm/ioreq.h
> +++ b/xen/include/asm-x86/hvm/ioreq.h
> @@ -57,6 +57,8 @@ void hvm_ioreq_init(struct domain *d);
> 
>  int hvm_add_ioreq_handler(struct domain *d, ioservid_t id,
>                            int (*handler)(struct vcpu *v, ioreq_t *));
> +int hvm_add_ioreq_pending_handler(struct domain *d, ioservid_t id,
> +                                  bool (*pending)(struct vcpu *v));
> 
>  int hvm_ioreq_register_mmcfg(struct domain *d, paddr_t addr,
>                               unsigned int start_bus, unsigned int end_bus,
> --
> 2.22.0

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Xen-devel] [PATCH 7/7] ioreq: provide support for long-running operations...
  2019-08-22  9:15   ` Paul Durrant
@ 2019-08-22 12:55     ` Roger Pau Monné
  2019-08-22 13:07       ` Paul Durrant
  0 siblings, 1 reply; 20+ messages in thread
From: Roger Pau Monné @ 2019-08-22 12:55 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	Andrew Cooper, Tim (Xen.org),
	George Dunlap, Julien Grall, Jan Beulich, Ian Jackson, xen-devel

On Thu, Aug 22, 2019 at 11:15:50AM +0200, Paul Durrant wrote:
> > -----Original Message-----
> > From: Roger Pau Monne <roger.pau@citrix.com>
> > Sent: 21 August 2019 15:59
> > To: xen-devel@lists.xenproject.org
> > Cc: Roger Pau Monne <roger.pau@citrix.com>; Paul Durrant <Paul.Durrant@citrix.com>; Jan Beulich
> > <jbeulich@suse.com>; Andrew Cooper <Andrew.Cooper3@citrix.com>; Wei Liu <wl@xen.org>; George Dunlap
> > <George.Dunlap@citrix.com>; Ian Jackson <Ian.Jackson@citrix.com>; Julien Grall <julien.grall@arm.com>;
> > Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>; Stefano Stabellini <sstabellini@kernel.org>; Tim
> > (Xen.org) <tim@xen.org>
> > Subject: [PATCH 7/7] ioreq: provide support for long-running operations...
> > 
> > ...and switch vPCI to use this infrastructure for long-running
> > physmap modification operations.
> > 
> > This allows getting rid of the vPCI-specific modifications done to
> > handle_hvm_io_completion and generalizes the support for long-running
> > operations to other internal ioreq servers. Such support is implemented
> > as a specific handler that can be registered by internal ioreq servers
> > and that will be called to check for pending work.
> > Returning true from this handler will prevent the vcpu from running
> > until the handler returns false.
> 
> Rather than having another callback, can the handler not be re-called with the same ioreq? It could return different values depending on whether there is more work to do (requiring another call) or whether it is done and the vcpu can be resumed. Would that work?

I guess this would work also. The issue with this approach is that I
would have to find somewhere to store the ioreq while the operation is
being processed, which is not required with the proposed two-handler
approach.

Thanks, Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Xen-devel] [PATCH 7/7] ioreq: provide support for long-running operations...
  2019-08-22 12:55     ` Roger Pau Monné
@ 2019-08-22 13:07       ` Paul Durrant
  0 siblings, 0 replies; 20+ messages in thread
From: Paul Durrant @ 2019-08-22 13:07 UTC (permalink / raw)
  To: Roger Pau Monne
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	Andrew Cooper, Tim (Xen.org),
	George Dunlap, Julien Grall, Jan Beulich, Ian Jackson, xen-devel



> -----Original Message-----
> From: Roger Pau Monne <roger.pau@citrix.com>
> Sent: 22 August 2019 13:55
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: xen-devel@lists.xenproject.org; Jan Beulich <jbeulich@suse.com>; Andrew Cooper
> <Andrew.Cooper3@citrix.com>; Wei Liu <wl@xen.org>; George Dunlap <George.Dunlap@citrix.com>; Ian
> Jackson <Ian.Jackson@citrix.com>; Julien Grall <julien.grall@arm.com>; Konrad Rzeszutek Wilk
> <konrad.wilk@oracle.com>; Stefano Stabellini <sstabellini@kernel.org>; Tim (Xen.org) <tim@xen.org>
> Subject: Re: [PATCH 7/7] ioreq: provide support for long-running operations...
> 
> On Thu, Aug 22, 2019 at 11:15:50AM +0200, Paul Durrant wrote:
> > > -----Original Message-----
> > > From: Roger Pau Monne <roger.pau@citrix.com>
> > > Sent: 21 August 2019 15:59
> > > To: xen-devel@lists.xenproject.org
> > > Cc: Roger Pau Monne <roger.pau@citrix.com>; Paul Durrant <Paul.Durrant@citrix.com>; Jan Beulich
> > > <jbeulich@suse.com>; Andrew Cooper <Andrew.Cooper3@citrix.com>; Wei Liu <wl@xen.org>; George Dunlap
> > > <George.Dunlap@citrix.com>; Ian Jackson <Ian.Jackson@citrix.com>; Julien Grall <julien.grall@arm.com>;
> > > Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>; Stefano Stabellini <sstabellini@kernel.org>; Tim
> > > (Xen.org) <tim@xen.org>
> > > Subject: [PATCH 7/7] ioreq: provide support for long-running operations...
> > >
> > > ...and switch vPCI to use this infrastructure for long-running
> > > physmap modification operations.
> > >
> > > This allows getting rid of the vPCI-specific modifications done to
> > > handle_hvm_io_completion and generalizes the support for long-running
> > > operations to other internal ioreq servers. Such support is implemented
> > > as a specific handler that can be registered by internal ioreq servers
> > > and that will be called to check for pending work.
> > > Returning true from this handler will prevent the vcpu from running
> > > until the handler returns false.
> >
> > Rather than having another callback, can the handler not be re-called with the same ioreq? It could
> > return different values depending on whether there is more work to do (requiring another call) or
> > whether it is done and the vcpu can be resumed. Would that work?
> 
> I guess this would work also. The issue with this approach is that I
> would have to find somewhere to store the ioreq while the operation is
> being processed, which is not required with the proposed two-handler
> approach.

The ioreq is already stored in v->arch.hvm.hvm_io.io_req anyway, so can't you use that copy?
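
I.e. (sketch):

struct hvm_vcpu_io *vio = &v->arch.hvm.hvm_io;

if ( s->internal &&
     s->handler(v, &vio->io_req) == X86EMUL_RETRY )
{
    raise_softirq(SCHEDULE_SOFTIRQ);
    return false;
}

so no extra storage, and no second callback, should be needed.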

  Paul

> 
> Thanks, Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 20+ messages in thread

end of thread, other threads:[~2019-08-22 13:08 UTC | newest]

Thread overview: 20+ messages
2019-08-21 14:58 [Xen-devel] [PATCH 0/7] ioreq: add support for internal servers Roger Pau Monne
2019-08-21 14:58 ` [Xen-devel] [PATCH 1/7] ioreq: add fields to allow internal ioreq servers Roger Pau Monne
2019-08-21 14:58 ` [Xen-devel] [PATCH 2/7] ioreq: add internal ioreq initialization support Roger Pau Monne
2019-08-21 16:24   ` Paul Durrant
2019-08-22  7:23     ` Roger Pau Monné
2019-08-22  8:30       ` Paul Durrant
2019-08-21 14:58 ` [Xen-devel] [PATCH 3/7] ioreq: allow dispatching ioreqs to internal servers Roger Pau Monne
2019-08-21 16:29   ` Paul Durrant
2019-08-22  7:40     ` Roger Pau Monné
2019-08-22  8:33       ` Paul Durrant
2019-08-21 14:59 ` [Xen-devel] [PATCH 4/7] ioreq: allow registering internal ioreq server handler Roger Pau Monne
2019-08-21 16:35   ` Paul Durrant
2019-08-22  7:43     ` Roger Pau Monné
2019-08-22  8:38       ` Paul Durrant
2019-08-21 14:59 ` [Xen-devel] [PATCH 5/7] ioreq: allow decoding accesses to MMCFG regions Roger Pau Monne
2019-08-21 14:59 ` [Xen-devel] [PATCH 6/7] vpci: register as an internal ioreq server Roger Pau Monne
2019-08-21 14:59 ` [Xen-devel] [PATCH 7/7] ioreq: provide support for long-running operations Roger Pau Monne
2019-08-22  9:15   ` Paul Durrant
2019-08-22 12:55     ` Roger Pau Monné
2019-08-22 13:07       ` Paul Durrant
