* [Qemu-devel] [PATCH 0/4] block/xen_disk: legacy code removal and cleanup
From: Paul Durrant @ 2018-04-30 12:01 UTC
  To: xen-devel, qemu-block, qemu-devel
  Cc: Paul Durrant, Stefano Stabellini, Anthony Perard, Kevin Wolf, Max Reitz

The grant copy operation was added to libxengnttab in Xen 4.8.0 (released
nearly 18 months ago), but the xen_disk PV backend in QEMU is still carrying
a significant amount of code purely to remain compatible with older
versions of Xen.
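
Concretely, the interface in question is xengnttab_grant_copy(). A minimal
sketch of its use, copying one granted page from a guest into a local
buffer (the helper below is illustrative, not QEMU code):

    /*
     * 'xgt' comes from xengnttab_open(NULL, 0). Error handling is
     * abbreviated; seg.status reports the per-segment GNTST_* result.
     */
    #include <stdint.h>
    #include <string.h>
    #include <xen/grant_table.h>
    #include <xengnttab.h>

    #define PAGE_SIZE 4096

    static int copy_page_from_guest(xengnttab_handle *xgt, uint16_t domid,
                                    uint32_t gref, void *buf)
    {
        xengnttab_grant_copy_segment_t seg;

        memset(&seg, 0, sizeof(seg));
        seg.flags = GNTCOPY_source_gref;   /* source is a foreign grant ref */
        seg.source.foreign.ref = gref;
        seg.source.foreign.domid = domid;
        seg.dest.virt = buf;               /* destination is local memory */
        seg.len = PAGE_SIZE;

        if (xengnttab_grant_copy(xgt, 1, &seg) != 0 ||
            seg.status != GNTST_okay) {
            return -1;
        }

        return 0;
    }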

As can be inferred from the diff stats below, removing this support for
older versions of Xen from QEMU reduces the size of the xen_disk source by
more than 350 lines (~25%). The majority of this is done in patches #1
and #2. Further simplifications are made in patch #3 and then some cosmetic
work is done in patch #4.

Paul Durrant (4):
  block/xen_disk: remove persistent grant code
  block/xen_disk: remove use of grant map/unmap
  block/xen_disk: use a single entry iovec
  block/xen_disk: be consistent with use of xendev and blkdev->xendev

 hw/block/xen_disk.c | 590 ++++++++++------------------------------------------
 1 file changed, 109 insertions(+), 481 deletions(-)
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Anthony Perard <anthony.perard@citrix.com>
Cc: Kevin Wolf <kwolf@redhat.com>
Cc: Max Reitz <mreitz@redhat.com>

-- 
2.1.4

* [Qemu-devel] [PATCH 1/4] block/xen_disk: remove persistent grant code
From: Paul Durrant @ 2018-04-30 12:01 UTC
  To: xen-devel, qemu-block, qemu-devel
  Cc: Paul Durrant, Stefano Stabellini, Anthony Perard, Kevin Wolf, Max Reitz

The grant copy operation was added to libxengnttab in Xen 4.8.0. If grant
copy is available then persistent grants will not be used, so the xen_disk
source can be significantly simplified by removing this now-redundant code.
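
For context, the pattern this patch deletes amounts to a per-device cache
of grant mappings keyed by grant reference. A hypothetical condensed form
(not the exact QEMU code) of what ioreq_map() did with it:

    /*
     * A GTree maps grant ref -> already-mapped page so that repeated
     * requests touching the same grant avoid a map/unmap round trip.
     */
    #include <glib.h>
    #include <stdint.h>

    typedef struct PersistentGrant {
        void *page;                 /* mapping kept for the device lifetime */
    } PersistentGrant;

    static void *lookup_or_map(GTree *persistent_gnts, uint32_t ref,
                               void *(*map_one)(uint32_t ref))
    {
        PersistentGrant *grant = g_tree_lookup(persistent_gnts,
                                               GUINT_TO_POINTER(ref));

        if (grant != NULL) {
            return grant->page;     /* reuse the cached mapping */
        }

        grant = g_new0(PersistentGrant, 1);
        grant->page = map_one(ref); /* map once, then cache by grant ref */
        g_tree_insert(persistent_gnts, GUINT_TO_POINTER(ref), grant);
        return grant->page;
    }

With grant copy always available the backend holds no long-lived mappings,
so this cache no longer buys anything.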

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Anthony Perard <anthony.perard@citrix.com>
Cc: Kevin Wolf <kwolf@redhat.com>
Cc: Max Reitz <mreitz@redhat.com>
---
 hw/block/xen_disk.c | 237 +++++-----------------------------------------------
 1 file changed, 21 insertions(+), 216 deletions(-)

diff --git a/hw/block/xen_disk.c b/hw/block/xen_disk.c
index f74fcd4..b33611a 100644
--- a/hw/block/xen_disk.c
+++ b/hw/block/xen_disk.c
@@ -43,20 +43,6 @@ static int batch_maps   = 0;
 #define BLOCK_SIZE  512
 #define IOCB_COUNT  (BLKIF_MAX_SEGMENTS_PER_REQUEST + 2)
 
-struct PersistentGrant {
-    void *page;
-    struct XenBlkDev *blkdev;
-};
-
-typedef struct PersistentGrant PersistentGrant;
-
-struct PersistentRegion {
-    void *addr;
-    int num;
-};
-
-typedef struct PersistentRegion PersistentRegion;
-
 struct ioreq {
     blkif_request_t     req;
     int16_t             status;
@@ -73,7 +59,6 @@ struct ioreq {
     int                 prot;
     void                *page[BLKIF_MAX_SEGMENTS_PER_REQUEST];
     void                *pages;
-    int                 num_unmap;
 
     /* aio status */
     int                 aio_inflight;
@@ -115,13 +100,7 @@ struct XenBlkDev {
     int                 requests_finished;
     unsigned int        max_requests;
 
-    /* Persistent grants extension */
     gboolean            feature_discard;
-    gboolean            feature_persistent;
-    GTree               *persistent_gnts;
-    GSList              *persistent_regions;
-    unsigned int        persistent_gnt_count;
-    unsigned int        max_grants;
 
     /* qemu block driver */
     DriveInfo           *dinfo;
@@ -158,46 +137,6 @@ static void ioreq_reset(struct ioreq *ioreq)
     qemu_iovec_reset(&ioreq->v);
 }
 
-static gint int_cmp(gconstpointer a, gconstpointer b, gpointer user_data)
-{
-    uint ua = GPOINTER_TO_UINT(a);
-    uint ub = GPOINTER_TO_UINT(b);
-    return (ua > ub) - (ua < ub);
-}
-
-static void destroy_grant(gpointer pgnt)
-{
-    PersistentGrant *grant = pgnt;
-    xengnttab_handle *gnt = grant->blkdev->xendev.gnttabdev;
-
-    if (xengnttab_unmap(gnt, grant->page, 1) != 0) {
-        xen_pv_printf(&grant->blkdev->xendev, 0,
-                      "xengnttab_unmap failed: %s\n",
-                      strerror(errno));
-    }
-    grant->blkdev->persistent_gnt_count--;
-    xen_pv_printf(&grant->blkdev->xendev, 3,
-                  "unmapped grant %p\n", grant->page);
-    g_free(grant);
-}
-
-static void remove_persistent_region(gpointer data, gpointer dev)
-{
-    PersistentRegion *region = data;
-    struct XenBlkDev *blkdev = dev;
-    xengnttab_handle *gnt = blkdev->xendev.gnttabdev;
-
-    if (xengnttab_unmap(gnt, region->addr, region->num) != 0) {
-        xen_pv_printf(&blkdev->xendev, 0,
-                      "xengnttab_unmap region %p failed: %s\n",
-                      region->addr, strerror(errno));
-    }
-    xen_pv_printf(&blkdev->xendev, 3,
-                  "unmapped grant region %p with %d pages\n",
-                  region->addr, region->num);
-    g_free(region);
-}
-
 static struct ioreq *ioreq_start(struct XenBlkDev *blkdev)
 {
     struct ioreq *ioreq = NULL;
@@ -327,22 +266,22 @@ static void ioreq_unmap(struct ioreq *ioreq)
     xengnttab_handle *gnt = ioreq->blkdev->xendev.gnttabdev;
     int i;
 
-    if (ioreq->num_unmap == 0 || ioreq->mapped == 0) {
+    if (ioreq->v.niov == 0 || ioreq->mapped == 0) {
         return;
     }
     if (batch_maps) {
         if (!ioreq->pages) {
             return;
         }
-        if (xengnttab_unmap(gnt, ioreq->pages, ioreq->num_unmap) != 0) {
+        if (xengnttab_unmap(gnt, ioreq->pages, ioreq->v.niov) != 0) {
             xen_pv_printf(&ioreq->blkdev->xendev, 0,
                           "xengnttab_unmap failed: %s\n",
                           strerror(errno));
         }
-        ioreq->blkdev->cnt_map -= ioreq->num_unmap;
+        ioreq->blkdev->cnt_map -= ioreq->v.niov;
         ioreq->pages = NULL;
     } else {
-        for (i = 0; i < ioreq->num_unmap; i++) {
+        for (i = 0; i < ioreq->v.niov; i++) {
             if (!ioreq->page[i]) {
                 continue;
             }
@@ -361,138 +300,44 @@ static void ioreq_unmap(struct ioreq *ioreq)
 static int ioreq_map(struct ioreq *ioreq)
 {
     xengnttab_handle *gnt = ioreq->blkdev->xendev.gnttabdev;
-    uint32_t domids[BLKIF_MAX_SEGMENTS_PER_REQUEST];
-    uint32_t refs[BLKIF_MAX_SEGMENTS_PER_REQUEST];
-    void *page[BLKIF_MAX_SEGMENTS_PER_REQUEST];
-    int i, j, new_maps = 0;
-    PersistentGrant *grant;
-    PersistentRegion *region;
-    /* domids and refs variables will contain the information necessary
-     * to map the grants that are needed to fulfill this request.
-     *
-     * After mapping the needed grants, the page array will contain the
-     * memory address of each granted page in the order specified in ioreq
-     * (disregarding if it's a persistent grant or not).
-     */
+    int i;
 
     if (ioreq->v.niov == 0 || ioreq->mapped == 1) {
         return 0;
     }
-    if (ioreq->blkdev->feature_persistent) {
-        for (i = 0; i < ioreq->v.niov; i++) {
-            grant = g_tree_lookup(ioreq->blkdev->persistent_gnts,
-                                    GUINT_TO_POINTER(ioreq->refs[i]));
-
-            if (grant != NULL) {
-                page[i] = grant->page;
-                xen_pv_printf(&ioreq->blkdev->xendev, 3,
-                              "using persistent-grant %" PRIu32 "\n",
-                              ioreq->refs[i]);
-            } else {
-                    /* Add the grant to the list of grants that
-                     * should be mapped
-                     */
-                    domids[new_maps] = ioreq->domids[i];
-                    refs[new_maps] = ioreq->refs[i];
-                    page[i] = NULL;
-                    new_maps++;
-            }
-        }
-        /* Set the protection to RW, since grants may be reused later
-         * with a different protection than the one needed for this request
-         */
-        ioreq->prot = PROT_WRITE | PROT_READ;
-    } else {
-        /* All grants in the request should be mapped */
-        memcpy(refs, ioreq->refs, sizeof(refs));
-        memcpy(domids, ioreq->domids, sizeof(domids));
-        memset(page, 0, sizeof(page));
-        new_maps = ioreq->v.niov;
-    }
-
-    if (batch_maps && new_maps) {
+    if (batch_maps) {
         ioreq->pages = xengnttab_map_grant_refs
-            (gnt, new_maps, domids, refs, ioreq->prot);
+            (gnt, ioreq->v.niov, ioreq->domids, ioreq->refs, ioreq->prot);
         if (ioreq->pages == NULL) {
             xen_pv_printf(&ioreq->blkdev->xendev, 0,
                           "can't map %d grant refs (%s, %d maps)\n",
-                          new_maps, strerror(errno), ioreq->blkdev->cnt_map);
+                          ioreq->v.niov, strerror(errno),
+                          ioreq->blkdev->cnt_map);
             return -1;
         }
-        for (i = 0, j = 0; i < ioreq->v.niov; i++) {
-            if (page[i] == NULL) {
-                page[i] = ioreq->pages + (j++) * XC_PAGE_SIZE;
-            }
+        for (i = 0; i < ioreq->v.niov; i++) {
+            ioreq->v.iov[i].iov_base = ioreq->pages + i * XC_PAGE_SIZE +
+                (uintptr_t)ioreq->v.iov[i].iov_base;
         }
-        ioreq->blkdev->cnt_map += new_maps;
-    } else if (new_maps)  {
-        for (i = 0; i < new_maps; i++) {
+        ioreq->blkdev->cnt_map += ioreq->v.niov;
+    } else  {
+        for (i = 0; i < ioreq->v.niov; i++) {
             ioreq->page[i] = xengnttab_map_grant_ref
-                (gnt, domids[i], refs[i], ioreq->prot);
+                (gnt, ioreq->domids[i], ioreq->refs[i], ioreq->prot);
             if (ioreq->page[i] == NULL) {
                 xen_pv_printf(&ioreq->blkdev->xendev, 0,
                               "can't map grant ref %d (%s, %d maps)\n",
-                              refs[i], strerror(errno), ioreq->blkdev->cnt_map);
+                              ioreq->refs[i], strerror(errno),
+                              ioreq->blkdev->cnt_map);
                 ioreq->mapped = 1;
                 ioreq_unmap(ioreq);
                 return -1;
             }
-            ioreq->blkdev->cnt_map++;
-        }
-        for (i = 0, j = 0; i < ioreq->v.niov; i++) {
-            if (page[i] == NULL) {
-                page[i] = ioreq->page[j++];
-            }
+            ioreq->v.iov[i].iov_base = ioreq->page[i] +
+                (uintptr_t)ioreq->v.iov[i].iov_base;
         }
     }
-    if (ioreq->blkdev->feature_persistent && new_maps != 0 &&
-        (!batch_maps || (ioreq->blkdev->persistent_gnt_count + new_maps <=
-        ioreq->blkdev->max_grants))) {
-        /*
-         * If we are using persistent grants and batch mappings only
-         * add the new maps to the list of persistent grants if the whole
-         * area can be persistently mapped.
-         */
-        if (batch_maps) {
-            region = g_malloc0(sizeof(*region));
-            region->addr = ioreq->pages;
-            region->num = new_maps;
-            ioreq->blkdev->persistent_regions = g_slist_append(
-                                            ioreq->blkdev->persistent_regions,
-                                            region);
-        }
-        while ((ioreq->blkdev->persistent_gnt_count < ioreq->blkdev->max_grants)
-              && new_maps) {
-            /* Go through the list of newly mapped grants and add as many
-             * as possible to the list of persistently mapped grants.
-             *
-             * Since we start at the end of ioreq->page(s), we only need
-             * to decrease new_maps to prevent this granted pages from
-             * being unmapped in ioreq_unmap.
-             */
-            grant = g_malloc0(sizeof(*grant));
-            new_maps--;
-            if (batch_maps) {
-                grant->page = ioreq->pages + (new_maps) * XC_PAGE_SIZE;
-            } else {
-                grant->page = ioreq->page[new_maps];
-            }
-            grant->blkdev = ioreq->blkdev;
-            xen_pv_printf(&ioreq->blkdev->xendev, 3,
-                          "adding grant %" PRIu32 " page: %p\n",
-                          refs[new_maps], grant->page);
-            g_tree_insert(ioreq->blkdev->persistent_gnts,
-                          GUINT_TO_POINTER(refs[new_maps]),
-                          grant);
-            ioreq->blkdev->persistent_gnt_count++;
-        }
-        assert(!batch_maps || new_maps == 0);
-    }
-    for (i = 0; i < ioreq->v.niov; i++) {
-        ioreq->v.iov[i].iov_base += (uintptr_t)page[i];
-    }
     ioreq->mapped = 1;
-    ioreq->num_unmap = new_maps;
     return 0;
 }
 
@@ -1039,8 +884,6 @@ static int blk_init(struct XenDevice *xendev)
      * blk_connect supplies sector-size and sectors
      */
     xenstore_write_be_int(&blkdev->xendev, "feature-flush-cache", 1);
-    xenstore_write_be_int(&blkdev->xendev, "feature-persistent",
-                          !xen_feature_grant_copy);
     xenstore_write_be_int(&blkdev->xendev, "info", info);
 
     xenstore_write_be_int(&blkdev->xendev, "max-ring-page-order",
@@ -1079,7 +922,7 @@ out_error:
 static int blk_connect(struct XenDevice *xendev)
 {
     struct XenBlkDev *blkdev = container_of(xendev, struct XenBlkDev, xendev);
-    int pers, index, qflags;
+    int index, qflags;
     bool readonly = true;
     bool writethrough = true;
     int order, ring_ref;
@@ -1202,11 +1045,6 @@ static int blk_connect(struct XenDevice *xendev)
                              &blkdev->xendev.remote_port) == -1) {
         return -1;
     }
-    if (xenstore_read_fe_int(&blkdev->xendev, "feature-persistent", &pers)) {
-        blkdev->feature_persistent = FALSE;
-    } else {
-        blkdev->feature_persistent = !!pers;
-    }
 
     if (!blkdev->xendev.protocol) {
         blkdev->protocol = BLKIF_PROTOCOL_NATIVE;
@@ -1301,19 +1139,6 @@ static int blk_connect(struct XenDevice *xendev)
     }
     }
 
-    if (blkdev->feature_persistent) {
-        /* Init persistent grants */
-        blkdev->max_grants = blkdev->max_requests *
-            BLKIF_MAX_SEGMENTS_PER_REQUEST;
-        blkdev->persistent_gnts = g_tree_new_full((GCompareDataFunc)int_cmp,
-                                             NULL, NULL,
-                                             batch_maps ?
-                                             (GDestroyNotify)g_free :
-                                             (GDestroyNotify)destroy_grant);
-        blkdev->persistent_regions = NULL;
-        blkdev->persistent_gnt_count = 0;
-    }
-
     blk_set_aio_context(blkdev->blk, blkdev->ctx);
 
     xen_be_bind_evtchn(&blkdev->xendev);
@@ -1350,26 +1175,6 @@ static void blk_disconnect(struct XenDevice *xendev)
         blkdev->sring = NULL;
     }
 
-    /*
-     * Unmap persistent grants before switching to the closed state
-     * so the frontend can free them.
-     *
-     * In the !batch_maps case g_tree_destroy will take care of unmapping
-     * the grant, but in the batch_maps case we need to iterate over every
-     * region in persistent_regions and unmap it.
-     */
-    if (blkdev->feature_persistent) {
-        g_tree_destroy(blkdev->persistent_gnts);
-        assert(batch_maps || blkdev->persistent_gnt_count == 0);
-        if (batch_maps) {
-            blkdev->persistent_gnt_count = 0;
-            g_slist_foreach(blkdev->persistent_regions,
-                            (GFunc)remove_persistent_region, blkdev);
-            g_slist_free(blkdev->persistent_regions);
-        }
-        blkdev->feature_persistent = false;
-    }
-
     if (blkdev->xendev.gnttabdev) {
         xengnttab_close(blkdev->xendev.gnttabdev);
         blkdev->xendev.gnttabdev = NULL;
-- 
2.1.4

* [Qemu-devel] [PATCH 2/4] block/xen_disk: remove use of grant map/unmap
From: Paul Durrant @ 2018-04-30 12:01 UTC
  To: xen-devel, qemu-block, qemu-devel
  Cc: Paul Durrant, Stefano Stabellini, Anthony Perard, Kevin Wolf, Max Reitz

The grant copy operation was added to libxengnttab in Xen 4.8.0. If grant
copy is available then data from the guest is copied rather than mapped, so
the xen_disk source can be significantly simplified by removing this
now-redundant code.
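
The surviving path bounces all data through locally allocated buffers. A
condensed sketch of the write direction, with pwrite() standing in for the
asynchronous QEMU block layer calls (the helper and its parameters are
illustrative):

    #include <stdint.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <xen/grant_table.h>
    #include <xengnttab.h>

    #define PAGE_SIZE 4096

    static int copy_and_write(xengnttab_handle *xgt, uint16_t domid, int fd,
                              off_t start, const uint32_t *grefs,
                              unsigned int nr_segs)
    {
        xengnttab_grant_copy_segment_t *segs = calloc(nr_segs, sizeof(*segs));
        void *buf = malloc((size_t)nr_segs * PAGE_SIZE);
        unsigned int i;
        int rc = -1;

        if (segs == NULL || buf == NULL) {
            goto out;
        }

        for (i = 0; i < nr_segs; i++) {
            segs[i].flags = GNTCOPY_source_gref;    /* copy from guest grant */
            segs[i].source.foreign.ref = grefs[i];
            segs[i].source.foreign.domid = domid;
            segs[i].dest.virt = (uint8_t *)buf + (size_t)i * PAGE_SIZE;
            segs[i].len = PAGE_SIZE;
        }

        if (xengnttab_grant_copy(xgt, nr_segs, segs) != 0) {
            goto out;
        }
        for (i = 0; i < nr_segs; i++) {
            if (segs[i].status != GNTST_okay) {
                goto out;
            }
        }

        /* The bounce buffer now holds the guest's data; do the disk I/O. */
        if (pwrite(fd, buf, (size_t)nr_segs * PAGE_SIZE, start) < 0) {
            goto out;
        }
        rc = 0;

    out:
        free(buf);
        free(segs);
        return rc;
    }

Reads run in the opposite order: the block I/O completes into the bounce
buffer first, and a GNTCOPY_dest_gref copy then moves the data to the guest.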

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Anthony Perard <anthony.perard@citrix.com>
Cc: Kevin Wolf <kwolf@redhat.com>
Cc: Max Reitz <mreitz@redhat.com>
---
 hw/block/xen_disk.c | 194 ++++++++--------------------------------------------
 1 file changed, 27 insertions(+), 167 deletions(-)

diff --git a/hw/block/xen_disk.c b/hw/block/xen_disk.c
index b33611a..8f4e229 100644
--- a/hw/block/xen_disk.c
+++ b/hw/block/xen_disk.c
@@ -36,10 +36,6 @@
 
 /* ------------------------------------------------------------- */
 
-static int batch_maps   = 0;
-
-/* ------------------------------------------------------------- */
-
 #define BLOCK_SIZE  512
 #define IOCB_COUNT  (BLKIF_MAX_SEGMENTS_PER_REQUEST + 2)
 
@@ -51,12 +47,9 @@ struct ioreq {
     off_t               start;
     QEMUIOVector        v;
     int                 presync;
-    uint8_t             mapped;
 
-    /* grant mapping */
     uint32_t            domids[BLKIF_MAX_SEGMENTS_PER_REQUEST];
     uint32_t            refs[BLKIF_MAX_SEGMENTS_PER_REQUEST];
-    int                 prot;
     void                *page[BLKIF_MAX_SEGMENTS_PER_REQUEST];
     void                *pages;
 
@@ -89,7 +82,6 @@ struct XenBlkDev {
     int                 protocol;
     blkif_back_rings_t  rings;
     int                 more_work;
-    int                 cnt_map;
 
     /* request lists */
     QLIST_HEAD(inflight_head, ioreq) inflight;
@@ -119,11 +111,9 @@ static void ioreq_reset(struct ioreq *ioreq)
     ioreq->status = 0;
     ioreq->start = 0;
     ioreq->presync = 0;
-    ioreq->mapped = 0;
 
     memset(ioreq->domids, 0, sizeof(ioreq->domids));
     memset(ioreq->refs, 0, sizeof(ioreq->refs));
-    ioreq->prot = 0;
     memset(ioreq->page, 0, sizeof(ioreq->page));
     ioreq->pages = NULL;
 
@@ -204,7 +194,6 @@ static int ioreq_parse(struct ioreq *ioreq)
                   ioreq->req.handle, ioreq->req.id, ioreq->req.sector_number);
     switch (ioreq->req.operation) {
     case BLKIF_OP_READ:
-        ioreq->prot = PROT_WRITE; /* to memory */
         break;
     case BLKIF_OP_FLUSH_DISKCACHE:
         ioreq->presync = 1;
@@ -213,7 +202,6 @@ static int ioreq_parse(struct ioreq *ioreq)
         }
         /* fall through */
     case BLKIF_OP_WRITE:
-        ioreq->prot = PROT_READ; /* from memory */
         break;
     case BLKIF_OP_DISCARD:
         return 0;
@@ -261,88 +249,6 @@ err:
     return -1;
 }
 
-static void ioreq_unmap(struct ioreq *ioreq)
-{
-    xengnttab_handle *gnt = ioreq->blkdev->xendev.gnttabdev;
-    int i;
-
-    if (ioreq->v.niov == 0 || ioreq->mapped == 0) {
-        return;
-    }
-    if (batch_maps) {
-        if (!ioreq->pages) {
-            return;
-        }
-        if (xengnttab_unmap(gnt, ioreq->pages, ioreq->v.niov) != 0) {
-            xen_pv_printf(&ioreq->blkdev->xendev, 0,
-                          "xengnttab_unmap failed: %s\n",
-                          strerror(errno));
-        }
-        ioreq->blkdev->cnt_map -= ioreq->v.niov;
-        ioreq->pages = NULL;
-    } else {
-        for (i = 0; i < ioreq->v.niov; i++) {
-            if (!ioreq->page[i]) {
-                continue;
-            }
-            if (xengnttab_unmap(gnt, ioreq->page[i], 1) != 0) {
-                xen_pv_printf(&ioreq->blkdev->xendev, 0,
-                              "xengnttab_unmap failed: %s\n",
-                              strerror(errno));
-            }
-            ioreq->blkdev->cnt_map--;
-            ioreq->page[i] = NULL;
-        }
-    }
-    ioreq->mapped = 0;
-}
-
-static int ioreq_map(struct ioreq *ioreq)
-{
-    xengnttab_handle *gnt = ioreq->blkdev->xendev.gnttabdev;
-    int i;
-
-    if (ioreq->v.niov == 0 || ioreq->mapped == 1) {
-        return 0;
-    }
-    if (batch_maps) {
-        ioreq->pages = xengnttab_map_grant_refs
-            (gnt, ioreq->v.niov, ioreq->domids, ioreq->refs, ioreq->prot);
-        if (ioreq->pages == NULL) {
-            xen_pv_printf(&ioreq->blkdev->xendev, 0,
-                          "can't map %d grant refs (%s, %d maps)\n",
-                          ioreq->v.niov, strerror(errno),
-                          ioreq->blkdev->cnt_map);
-            return -1;
-        }
-        for (i = 0; i < ioreq->v.niov; i++) {
-            ioreq->v.iov[i].iov_base = ioreq->pages + i * XC_PAGE_SIZE +
-                (uintptr_t)ioreq->v.iov[i].iov_base;
-        }
-        ioreq->blkdev->cnt_map += ioreq->v.niov;
-    } else  {
-        for (i = 0; i < ioreq->v.niov; i++) {
-            ioreq->page[i] = xengnttab_map_grant_ref
-                (gnt, ioreq->domids[i], ioreq->refs[i], ioreq->prot);
-            if (ioreq->page[i] == NULL) {
-                xen_pv_printf(&ioreq->blkdev->xendev, 0,
-                              "can't map grant ref %d (%s, %d maps)\n",
-                              ioreq->refs[i], strerror(errno),
-                              ioreq->blkdev->cnt_map);
-                ioreq->mapped = 1;
-                ioreq_unmap(ioreq);
-                return -1;
-            }
-            ioreq->v.iov[i].iov_base = ioreq->page[i] +
-                (uintptr_t)ioreq->v.iov[i].iov_base;
-        }
-    }
-    ioreq->mapped = 1;
-    return 0;
-}
-
-#if CONFIG_XEN_CTRL_INTERFACE_VERSION >= 40800
-
 static void ioreq_free_copy_buffers(struct ioreq *ioreq)
 {
     int i;
@@ -424,22 +330,6 @@ static int ioreq_grant_copy(struct ioreq *ioreq)
 
     return rc;
 }
-#else
-static void ioreq_free_copy_buffers(struct ioreq *ioreq)
-{
-    abort();
-}
-
-static int ioreq_init_copy_buffers(struct ioreq *ioreq)
-{
-    abort();
-}
-
-static int ioreq_grant_copy(struct ioreq *ioreq)
-{
-    abort();
-}
-#endif
 
 static int ioreq_runio_qemu_aio(struct ioreq *ioreq);
 
@@ -466,32 +356,28 @@ static void qemu_aio_complete(void *opaque, int ret)
         goto done;
     }
 
-    if (xen_feature_grant_copy) {
-        switch (ioreq->req.operation) {
-        case BLKIF_OP_READ:
-            /* in case of failure ioreq->aio_errors is increased */
-            if (ret == 0) {
-                ioreq_grant_copy(ioreq);
-            }
-            ioreq_free_copy_buffers(ioreq);
-            break;
-        case BLKIF_OP_WRITE:
-        case BLKIF_OP_FLUSH_DISKCACHE:
-            if (!ioreq->req.nr_segments) {
-                break;
-            }
-            ioreq_free_copy_buffers(ioreq);
-            break;
-        default:
+    switch (ioreq->req.operation) {
+    case BLKIF_OP_READ:
+        /* in case of failure ioreq->aio_errors is increased */
+        if (ret == 0) {
+            ioreq_grant_copy(ioreq);
+        }
+        ioreq_free_copy_buffers(ioreq);
+        break;
+    case BLKIF_OP_WRITE:
+    case BLKIF_OP_FLUSH_DISKCACHE:
+        if (!ioreq->req.nr_segments) {
             break;
         }
+        ioreq_free_copy_buffers(ioreq);
+        break;
+    default:
+        break;
     }
 
     ioreq->status = ioreq->aio_errors ? BLKIF_RSP_ERROR : BLKIF_RSP_OKAY;
-    if (!xen_feature_grant_copy) {
-        ioreq_unmap(ioreq);
-    }
     ioreq_finish(ioreq);
+
     switch (ioreq->req.operation) {
     case BLKIF_OP_WRITE:
     case BLKIF_OP_FLUSH_DISKCACHE:
@@ -551,18 +437,13 @@ static int ioreq_runio_qemu_aio(struct ioreq *ioreq)
 {
     struct XenBlkDev *blkdev = ioreq->blkdev;
 
-    if (xen_feature_grant_copy) {
-        ioreq_init_copy_buffers(ioreq);
-        if (ioreq->req.nr_segments && (ioreq->req.operation == BLKIF_OP_WRITE ||
-            ioreq->req.operation == BLKIF_OP_FLUSH_DISKCACHE) &&
-            ioreq_grant_copy(ioreq)) {
-                ioreq_free_copy_buffers(ioreq);
-                goto err;
-        }
-    } else {
-        if (ioreq->req.nr_segments && ioreq_map(ioreq)) {
-            goto err;
-        }
+    ioreq_init_copy_buffers(ioreq);
+    if (ioreq->req.nr_segments &&
+        (ioreq->req.operation == BLKIF_OP_WRITE ||
+         ioreq->req.operation == BLKIF_OP_FLUSH_DISKCACHE) &&
+        ioreq_grant_copy(ioreq)) {
+        ioreq_free_copy_buffers(ioreq);
+        goto err;
     }
 
     ioreq->aio_inflight++;
@@ -603,9 +484,6 @@ static int ioreq_runio_qemu_aio(struct ioreq *ioreq)
     }
     default:
         /* unknown operation (shouldn't happen -- parse catches this) */
-        if (!xen_feature_grant_copy) {
-            ioreq_unmap(ioreq);
-        }
         goto err;
     }
 
@@ -791,10 +669,6 @@ static void blk_alloc(struct XenDevice *xendev)
 
     blkdev->ctx = iothread_get_aio_context(blkdev->iothread);
     blkdev->bh = aio_bh_new(blkdev->ctx, blk_bh, blkdev);
-
-    if (xen_mode != XEN_EMULATE) {
-        batch_maps = 1;
-    }
 }
 
 static void blk_parse_discard(struct XenBlkDev *blkdev)
@@ -877,8 +751,9 @@ static int blk_init(struct XenDevice *xendev)
 
     blkdev->file_blk  = BLOCK_SIZE;
 
-    xen_pv_printf(&blkdev->xendev, 3, "grant copy operation %s\n",
-                  xen_feature_grant_copy ? "enabled" : "disabled");
+    if (!xen_feature_grant_copy) {
+        goto out_error;
+    }
 
     /* fill info
      * blk_connect supplies sector-size and sectors
@@ -910,15 +785,6 @@ out_error:
     return -1;
 }
 
-/*
- * We need to account for the grant allocations requiring contiguous
- * chunks; the worst case number would be
- *     max_req * max_seg + (max_req - 1) * (max_seg - 1) + 1,
- * but in order to keep things simple just use
- *     2 * max_req * max_seg.
- */
-#define MAX_GRANTS(max_req, max_seg) (2 * (max_req) * (max_seg))
-
 static int blk_connect(struct XenDevice *xendev)
 {
     struct XenBlkDev *blkdev = container_of(xendev, struct XenBlkDev, xendev);
@@ -1079,11 +945,8 @@ static int blk_connect(struct XenDevice *xendev)
         return -1;
     }
 
-    /* Calculate the maximum number of grants needed by ioreqs */
-    max_grants = MAX_GRANTS(blkdev->max_requests,
-                            BLKIF_MAX_SEGMENTS_PER_REQUEST);
     /* Add on the number needed for the ring pages */
-    max_grants += blkdev->nr_ring_ref;
+    max_grants = blkdev->nr_ring_ref;
 
     blkdev->xendev.gnttabdev = xengnttab_open(NULL, 0);
     if (blkdev->xendev.gnttabdev == NULL) {
@@ -1114,8 +977,6 @@ static int blk_connect(struct XenDevice *xendev)
         return -1;
     }
 
-    blkdev->cnt_map++;
-
     switch (blkdev->protocol) {
     case BLKIF_PROTOCOL_NATIVE:
     {
@@ -1171,7 +1032,6 @@ static void blk_disconnect(struct XenDevice *xendev)
     if (blkdev->sring) {
         xengnttab_unmap(blkdev->xendev.gnttabdev, blkdev->sring,
                         blkdev->nr_ring_ref);
-        blkdev->cnt_map--;
         blkdev->sring = NULL;
     }
 
-- 
2.1.4

* [Qemu-devel] [PATCH 3/4] block/xen_disk: use a single entry iovec
From: Paul Durrant @ 2018-04-30 12:01 UTC
  To: xen-devel, qemu-block, qemu-devel
  Cc: Paul Durrant, Stefano Stabellini, Anthony Perard, Kevin Wolf, Max Reitz

Since xen_disk now always copies data to and from the guest, there is no
need to maintain a vector entry for every page of a request. This leaves
less per-request state to maintain, so the ioreq structure can shrink
significantly.
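
The shape of the change, with plain POSIX types standing in for QEMU's
QEMUIOVector (the struct and helper are illustrative):

    #include <stdlib.h>
    #include <sys/uio.h>
    #include <unistd.h>

    struct request {
        void   *buf;    /* one contiguous bounce buffer for the request */
        size_t  size;   /* sum of all segment lengths, set during parse */
    };

    static ssize_t submit_read(int fd, off_t start, struct request *req)
    {
        /* A single {base, len} pair now covers the entire request. */
        struct iovec iov = {
            .iov_base = req->buf,
            .iov_len  = req->size,
        };

        return preadv(fd, &iov, 1, start);
    }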

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Anthony Perard <anthony.perard@citrix.com>
Cc: Kevin Wolf <kwolf@redhat.com>
Cc: Max Reitz <mreitz@redhat.com>
---
 hw/block/xen_disk.c | 103 ++++++++++++++++------------------------------------
 1 file changed, 31 insertions(+), 72 deletions(-)

diff --git a/hw/block/xen_disk.c b/hw/block/xen_disk.c
index 8f4e229..6d737fd 100644
--- a/hw/block/xen_disk.c
+++ b/hw/block/xen_disk.c
@@ -46,13 +46,10 @@ struct ioreq {
     /* parsed request */
     off_t               start;
     QEMUIOVector        v;
+    void                *buf;
+    size_t              size;
     int                 presync;
 
-    uint32_t            domids[BLKIF_MAX_SEGMENTS_PER_REQUEST];
-    uint32_t            refs[BLKIF_MAX_SEGMENTS_PER_REQUEST];
-    void                *page[BLKIF_MAX_SEGMENTS_PER_REQUEST];
-    void                *pages;
-
     /* aio status */
     int                 aio_inflight;
     int                 aio_errors;
@@ -110,13 +107,10 @@ static void ioreq_reset(struct ioreq *ioreq)
     memset(&ioreq->req, 0, sizeof(ioreq->req));
     ioreq->status = 0;
     ioreq->start = 0;
+    ioreq->buf = NULL;
+    ioreq->size = 0;
     ioreq->presync = 0;
 
-    memset(ioreq->domids, 0, sizeof(ioreq->domids));
-    memset(ioreq->refs, 0, sizeof(ioreq->refs));
-    memset(ioreq->page, 0, sizeof(ioreq->page));
-    ioreq->pages = NULL;
-
     ioreq->aio_inflight = 0;
     ioreq->aio_errors = 0;
 
@@ -139,7 +133,7 @@ static struct ioreq *ioreq_start(struct XenBlkDev *blkdev)
         ioreq = g_malloc0(sizeof(*ioreq));
         ioreq->blkdev = blkdev;
         blkdev->requests_total++;
-        qemu_iovec_init(&ioreq->v, BLKIF_MAX_SEGMENTS_PER_REQUEST);
+        qemu_iovec_init(&ioreq->v, 1);
     } else {
         /* get one from freelist */
         ioreq = QLIST_FIRST(&blkdev->freelist);
@@ -184,7 +178,6 @@ static void ioreq_release(struct ioreq *ioreq, bool finish)
 static int ioreq_parse(struct ioreq *ioreq)
 {
     struct XenBlkDev *blkdev = ioreq->blkdev;
-    uintptr_t mem;
     size_t len;
     int i;
 
@@ -231,14 +224,10 @@ static int ioreq_parse(struct ioreq *ioreq)
             goto err;
         }
 
-        ioreq->domids[i] = blkdev->xendev.dom;
-        ioreq->refs[i]   = ioreq->req.seg[i].gref;
-
-        mem = ioreq->req.seg[i].first_sect * blkdev->file_blk;
         len = (ioreq->req.seg[i].last_sect - ioreq->req.seg[i].first_sect + 1) * blkdev->file_blk;
-        qemu_iovec_add(&ioreq->v, (void*)mem, len);
+        ioreq->size += len;
     }
-    if (ioreq->start + ioreq->v.size > blkdev->file_size) {
+    if (ioreq->start + ioreq->size > blkdev->file_size) {
         xen_pv_printf(&blkdev->xendev, 0, "error: access beyond end of file\n");
         goto err;
     }
@@ -249,85 +238,55 @@ err:
     return -1;
 }
 
-static void ioreq_free_copy_buffers(struct ioreq *ioreq)
-{
-    int i;
-
-    for (i = 0; i < ioreq->v.niov; i++) {
-        ioreq->page[i] = NULL;
-    }
-
-    qemu_vfree(ioreq->pages);
-}
-
-static int ioreq_init_copy_buffers(struct ioreq *ioreq)
-{
-    int i;
-
-    if (ioreq->v.niov == 0) {
-        return 0;
-    }
-
-    ioreq->pages = qemu_memalign(XC_PAGE_SIZE, ioreq->v.niov * XC_PAGE_SIZE);
-
-    for (i = 0; i < ioreq->v.niov; i++) {
-        ioreq->page[i] = ioreq->pages + i * XC_PAGE_SIZE;
-        ioreq->v.iov[i].iov_base = ioreq->page[i];
-    }
-
-    return 0;
-}
-
 static int ioreq_grant_copy(struct ioreq *ioreq)
 {
-    xengnttab_handle *gnt = ioreq->blkdev->xendev.gnttabdev;
+    struct XenBlkDev *blkdev = ioreq->blkdev;
+    xengnttab_handle *gnt = blkdev->xendev.gnttabdev;
+    void *virt = ioreq->buf;
     xengnttab_grant_copy_segment_t segs[BLKIF_MAX_SEGMENTS_PER_REQUEST];
-    int i, count, rc;
-    int64_t file_blk = ioreq->blkdev->file_blk;
-
-    if (ioreq->v.niov == 0) {
-        return 0;
-    }
+    int i, rc;
+    int64_t file_blk = blkdev->file_blk;
 
-    count = ioreq->v.niov;
-
-    for (i = 0; i < count; i++) {
+    for (i = 0; i < ioreq->req.nr_segments; i++) {
         if (ioreq->req.operation == BLKIF_OP_READ) {
             segs[i].flags = GNTCOPY_dest_gref;
-            segs[i].dest.foreign.ref = ioreq->refs[i];
-            segs[i].dest.foreign.domid = ioreq->domids[i];
+            segs[i].dest.foreign.ref = ioreq->req.seg[i].gref;
+            segs[i].dest.foreign.domid = blkdev->xendev.dom;
             segs[i].dest.foreign.offset = ioreq->req.seg[i].first_sect * file_blk;
-            segs[i].source.virt = ioreq->v.iov[i].iov_base;
+            segs[i].source.virt = virt;
         } else {
             segs[i].flags = GNTCOPY_source_gref;
-            segs[i].source.foreign.ref = ioreq->refs[i];
-            segs[i].source.foreign.domid = ioreq->domids[i];
+            segs[i].source.foreign.ref = ioreq->req.seg[i].gref;
+            segs[i].source.foreign.domid = blkdev->xendev.dom;
             segs[i].source.foreign.offset = ioreq->req.seg[i].first_sect * file_blk;
-            segs[i].dest.virt = ioreq->v.iov[i].iov_base;
+            segs[i].dest.virt = virt;
         }
         segs[i].len = (ioreq->req.seg[i].last_sect
                        - ioreq->req.seg[i].first_sect + 1) * file_blk;
+        virt += segs[i].len;
     }
 
-    rc = xengnttab_grant_copy(gnt, count, segs);
+    rc = xengnttab_grant_copy(gnt, ioreq->req.nr_segments, segs);
 
     if (rc) {
-        xen_pv_printf(&ioreq->blkdev->xendev, 0,
+        xen_pv_printf(&blkdev->xendev, 0,
                       "failed to copy data %d\n", rc);
         ioreq->aio_errors++;
         return -1;
     }
 
-    for (i = 0; i < count; i++) {
+    for (i = 0; i < ioreq->req.nr_segments; i++) {
         if (segs[i].status != GNTST_okay) {
-            xen_pv_printf(&ioreq->blkdev->xendev, 3,
+            xen_pv_printf(&blkdev->xendev, 3,
                           "failed to copy data %d for gref %d, domid %d\n",
-                          segs[i].status, ioreq->refs[i], ioreq->domids[i]);
+                          segs[i].status, ioreq->req.seg[i].gref,
+                          blkdev->xendev.dom);
             ioreq->aio_errors++;
             rc = -1;
         }
     }
 
+    qemu_iovec_add(&ioreq->v, ioreq->buf, ioreq->size);
     return rc;
 }
 
@@ -362,14 +321,14 @@ static void qemu_aio_complete(void *opaque, int ret)
         if (ret == 0) {
             ioreq_grant_copy(ioreq);
         }
-        ioreq_free_copy_buffers(ioreq);
+        qemu_vfree(ioreq->buf);
         break;
     case BLKIF_OP_WRITE:
     case BLKIF_OP_FLUSH_DISKCACHE:
         if (!ioreq->req.nr_segments) {
             break;
         }
-        ioreq_free_copy_buffers(ioreq);
+        qemu_vfree(ioreq->buf);
         break;
     default:
         break;
@@ -437,12 +396,12 @@ static int ioreq_runio_qemu_aio(struct ioreq *ioreq)
 {
     struct XenBlkDev *blkdev = ioreq->blkdev;
 
-    ioreq_init_copy_buffers(ioreq);
+    ioreq->buf = qemu_memalign(XC_PAGE_SIZE, ioreq->size);
     if (ioreq->req.nr_segments &&
         (ioreq->req.operation == BLKIF_OP_WRITE ||
          ioreq->req.operation == BLKIF_OP_FLUSH_DISKCACHE) &&
         ioreq_grant_copy(ioreq)) {
-        ioreq_free_copy_buffers(ioreq);
+        qemu_vfree(ioreq->buf);
         goto err;
     }
 
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [Qemu-devel] [PATCH 4/4] block/xen_disk: be consistent with use of xendev and blkdev->xendev
  2018-04-30 12:01 ` Paul Durrant
@ 2018-04-30 12:01   ` Paul Durrant
  -1 siblings, 0 replies; 28+ messages in thread
From: Paul Durrant @ 2018-04-30 12:01 UTC (permalink / raw)
  To: xen-devel, qemu-block, qemu-devel
  Cc: Paul Durrant, Stefano Stabellini, Anthony Perard, Kevin Wolf, Max Reitz

Certain functions in xen_disk are called with a pointer to xendev
(struct XenDevice *). They then use container_of() to access the
surrounding blkdev (struct XenBlkDev), but in various places go on to
use &blkdev->xendev where the original xendev pointer is shorter to
express and clearly equivalent.

This is a purely cosmetic patch: it makes sure there is a xendev
pointer on the stack for any function that needs the pointer on
multiple occasions, and modifies those functions to use it
consistently.
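
As a minimal sketch of the pattern (the function below is hypothetical;
only the field layout and helpers match xen_disk):

  static int blk_example(struct XenDevice *xendev)
  {
      /* Recover the containing device state from the xendev pointer... */
      struct XenBlkDev *blkdev = container_of(xendev, struct XenBlkDev,
                                              xendev);

      /* ...then keep using the caller's xendev rather than spelling out
       * &blkdev->xendev at every call site.
       */
      xen_pv_printf(xendev, 1, "nr-ring-ref %u\n", blkdev->nr_ring_ref);
      return 0;
  }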

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Anthony Perard <anthony.perard@citrix.com>
Cc: Kevin Wolf <kwolf@redhat.com>
Cc: Max Reitz <mreitz@redhat.com>
---
 hw/block/xen_disk.c | 116 +++++++++++++++++++++++++++-------------------------
 1 file changed, 60 insertions(+), 56 deletions(-)

diff --git a/hw/block/xen_disk.c b/hw/block/xen_disk.c
index 6d737fd..b538d21 100644
--- a/hw/block/xen_disk.c
+++ b/hw/block/xen_disk.c
@@ -178,10 +178,11 @@ static void ioreq_release(struct ioreq *ioreq, bool finish)
 static int ioreq_parse(struct ioreq *ioreq)
 {
     struct XenBlkDev *blkdev = ioreq->blkdev;
+    struct XenDevice *xendev = &blkdev->xendev;
     size_t len;
     int i;
 
-    xen_pv_printf(&blkdev->xendev, 3,
+    xen_pv_printf(xendev, 3,
                   "op %d, nr %d, handle %d, id %" PRId64 ", sector %" PRId64 "\n",
                   ioreq->req.operation, ioreq->req.nr_segments,
                   ioreq->req.handle, ioreq->req.id, ioreq->req.sector_number);
@@ -199,28 +200,28 @@ static int ioreq_parse(struct ioreq *ioreq)
     case BLKIF_OP_DISCARD:
         return 0;
     default:
-        xen_pv_printf(&blkdev->xendev, 0, "error: unknown operation (%d)\n",
+        xen_pv_printf(xendev, 0, "error: unknown operation (%d)\n",
                       ioreq->req.operation);
         goto err;
     };
 
     if (ioreq->req.operation != BLKIF_OP_READ && blkdev->mode[0] != 'w') {
-        xen_pv_printf(&blkdev->xendev, 0, "error: write req for ro device\n");
+        xen_pv_printf(xendev, 0, "error: write req for ro device\n");
         goto err;
     }
 
     ioreq->start = ioreq->req.sector_number * blkdev->file_blk;
     for (i = 0; i < ioreq->req.nr_segments; i++) {
         if (i == BLKIF_MAX_SEGMENTS_PER_REQUEST) {
-            xen_pv_printf(&blkdev->xendev, 0, "error: nr_segments too big\n");
+            xen_pv_printf(xendev, 0, "error: nr_segments too big\n");
             goto err;
         }
         if (ioreq->req.seg[i].first_sect > ioreq->req.seg[i].last_sect) {
-            xen_pv_printf(&blkdev->xendev, 0, "error: first > last sector\n");
+            xen_pv_printf(xendev, 0, "error: first > last sector\n");
             goto err;
         }
         if (ioreq->req.seg[i].last_sect * BLOCK_SIZE >= XC_PAGE_SIZE) {
-            xen_pv_printf(&blkdev->xendev, 0, "error: page crossing\n");
+            xen_pv_printf(xendev, 0, "error: page crossing\n");
             goto err;
         }
 
@@ -228,7 +229,7 @@ static int ioreq_parse(struct ioreq *ioreq)
         ioreq->size += len;
     }
     if (ioreq->start + ioreq->size > blkdev->file_size) {
-        xen_pv_printf(&blkdev->xendev, 0, "error: access beyond end of file\n");
+        xen_pv_printf(xendev, 0, "error: access beyond end of file\n");
         goto err;
     }
     return 0;
@@ -241,7 +242,8 @@ err:
 static int ioreq_grant_copy(struct ioreq *ioreq)
 {
     struct XenBlkDev *blkdev = ioreq->blkdev;
-    xengnttab_handle *gnt = blkdev->xendev.gnttabdev;
+    struct XenDevice *xendev = &blkdev->xendev;
+    xengnttab_handle *gnt = xendev->gnttabdev;
     void *virt = ioreq->buf;
     xengnttab_grant_copy_segment_t segs[BLKIF_MAX_SEGMENTS_PER_REQUEST];
     int i, rc;
@@ -251,13 +253,13 @@ static int ioreq_grant_copy(struct ioreq *ioreq)
         if (ioreq->req.operation == BLKIF_OP_READ) {
             segs[i].flags = GNTCOPY_dest_gref;
             segs[i].dest.foreign.ref = ioreq->req.seg[i].gref;
-            segs[i].dest.foreign.domid = blkdev->xendev.dom;
+            segs[i].dest.foreign.domid = xendev->dom;
             segs[i].dest.foreign.offset = ioreq->req.seg[i].first_sect * file_blk;
             segs[i].source.virt = virt;
         } else {
             segs[i].flags = GNTCOPY_source_gref;
             segs[i].source.foreign.ref = ioreq->req.seg[i].gref;
-            segs[i].source.foreign.domid = blkdev->xendev.dom;
+            segs[i].source.foreign.domid = xendev->dom;
             segs[i].source.foreign.offset = ioreq->req.seg[i].first_sect * file_blk;
             segs[i].dest.virt = virt;
         }
@@ -269,7 +271,7 @@ static int ioreq_grant_copy(struct ioreq *ioreq)
     rc = xengnttab_grant_copy(gnt, ioreq->req.nr_segments, segs);
 
     if (rc) {
-        xen_pv_printf(&blkdev->xendev, 0,
+        xen_pv_printf(xendev, 0,
                       "failed to copy data %d\n", rc);
         ioreq->aio_errors++;
         return -1;
@@ -277,10 +279,10 @@ static int ioreq_grant_copy(struct ioreq *ioreq)
 
     for (i = 0; i < ioreq->req.nr_segments; i++) {
         if (segs[i].status != GNTST_okay) {
-            xen_pv_printf(&blkdev->xendev, 3,
+            xen_pv_printf(xendev, 3,
                           "failed to copy data %d for gref %d, domid %d\n",
                           segs[i].status, ioreq->req.seg[i].gref,
-                          blkdev->xendev.dom);
+                          xendev->dom);
             ioreq->aio_errors++;
             rc = -1;
         }
@@ -296,11 +298,12 @@ static void qemu_aio_complete(void *opaque, int ret)
 {
     struct ioreq *ioreq = opaque;
     struct XenBlkDev *blkdev = ioreq->blkdev;
+    struct XenDevice *xendev = &blkdev->xendev;
 
     aio_context_acquire(blkdev->ctx);
 
     if (ret != 0) {
-        xen_pv_printf(&blkdev->xendev, 0, "%s I/O error\n",
+        xen_pv_printf(xendev, 0, "%s I/O error\n",
                       ioreq->req.operation == BLKIF_OP_READ ? "read" : "write");
         ioreq->aio_errors++;
     }
@@ -632,16 +635,17 @@ static void blk_alloc(struct XenDevice *xendev)
 
 static void blk_parse_discard(struct XenBlkDev *blkdev)
 {
+    struct XenDevice *xendev = &blkdev->xendev;
     int enable;
 
     blkdev->feature_discard = true;
 
-    if (xenstore_read_be_int(&blkdev->xendev, "discard-enable", &enable) == 0) {
+    if (xenstore_read_be_int(xendev, "discard-enable", &enable) == 0) {
         blkdev->feature_discard = !!enable;
     }
 
     if (blkdev->feature_discard) {
-        xenstore_write_be_int(&blkdev->xendev, "feature-discard", 1);
+        xenstore_write_be_int(xendev, "feature-discard", 1);
     }
 }
 
@@ -656,7 +660,7 @@ static int blk_init(struct XenDevice *xendev)
     /* read xenstore entries */
     if (blkdev->params == NULL) {
         char *h = NULL;
-        blkdev->params = xenstore_read_be_str(&blkdev->xendev, "params");
+        blkdev->params = xenstore_read_be_str(xendev, "params");
         if (blkdev->params != NULL) {
             h = strchr(blkdev->params, ':');
         }
@@ -676,18 +680,18 @@ static int blk_init(struct XenDevice *xendev)
         blkdev->fileproto = "vpc";
     }
     if (blkdev->mode == NULL) {
-        blkdev->mode = xenstore_read_be_str(&blkdev->xendev, "mode");
+        blkdev->mode = xenstore_read_be_str(xendev, "mode");
     }
     if (blkdev->type == NULL) {
-        blkdev->type = xenstore_read_be_str(&blkdev->xendev, "type");
+        blkdev->type = xenstore_read_be_str(xendev, "type");
     }
     if (blkdev->dev == NULL) {
-        blkdev->dev = xenstore_read_be_str(&blkdev->xendev, "dev");
+        blkdev->dev = xenstore_read_be_str(xendev, "dev");
     }
     if (blkdev->devtype == NULL) {
-        blkdev->devtype = xenstore_read_be_str(&blkdev->xendev, "device-type");
+        blkdev->devtype = xenstore_read_be_str(xendev, "device-type");
     }
-    directiosafe = xenstore_read_be_str(&blkdev->xendev, "direct-io-safe");
+    directiosafe = xenstore_read_be_str(xendev, "direct-io-safe");
     blkdev->directiosafe = (directiosafe && atoi(directiosafe));
 
     /* do we have all we need? */
@@ -717,10 +721,10 @@ static int blk_init(struct XenDevice *xendev)
     /* fill info
      * blk_connect supplies sector-size and sectors
      */
-    xenstore_write_be_int(&blkdev->xendev, "feature-flush-cache", 1);
-    xenstore_write_be_int(&blkdev->xendev, "info", info);
+    xenstore_write_be_int(xendev, "feature-flush-cache", 1);
+    xenstore_write_be_int(xendev, "info", info);
 
-    xenstore_write_be_int(&blkdev->xendev, "max-ring-page-order",
+    xenstore_write_be_int(xendev, "max-ring-page-order",
                           MAX_RING_PAGE_ORDER);
 
     blk_parse_discard(blkdev);
@@ -773,7 +777,7 @@ static int blk_connect(struct XenDevice *xendev)
     }
 
     /* init qemu block driver */
-    index = (blkdev->xendev.dev - 202 * 256) / 16;
+    index = (xendev->dev - 202 * 256) / 16;
     blkdev->dinfo = drive_get(IF_XEN, 0, index);
     if (!blkdev->dinfo) {
         Error *local_err = NULL;
@@ -785,11 +789,11 @@ static int blk_connect(struct XenDevice *xendev)
         }
 
         /* setup via xenbus -> create new block driver instance */
-        xen_pv_printf(&blkdev->xendev, 2, "create new bdrv (xenbus setup)\n");
+        xen_pv_printf(xendev, 2, "create new bdrv (xenbus setup)\n");
         blkdev->blk = blk_new_open(blkdev->filename, NULL, options,
                                    qflags, &local_err);
         if (!blkdev->blk) {
-            xen_pv_printf(&blkdev->xendev, 0, "error: %s\n",
+            xen_pv_printf(xendev, 0, "error: %s\n",
                           error_get_pretty(local_err));
             error_free(local_err);
             return -1;
@@ -797,11 +801,11 @@ static int blk_connect(struct XenDevice *xendev)
         blk_set_enable_write_cache(blkdev->blk, !writethrough);
     } else {
         /* setup via qemu cmdline -> already setup for us */
-        xen_pv_printf(&blkdev->xendev, 2,
+        xen_pv_printf(xendev, 2,
                       "get configured bdrv (cmdline setup)\n");
         blkdev->blk = blk_by_legacy_dinfo(blkdev->dinfo);
         if (blk_is_read_only(blkdev->blk) && !readonly) {
-            xen_pv_printf(&blkdev->xendev, 0, "Unexpected read-only drive");
+            xen_pv_printf(xendev, 0, "Unexpected read-only drive");
             blkdev->blk = NULL;
             return -1;
         }
@@ -814,7 +818,7 @@ static int blk_connect(struct XenDevice *xendev)
     if (blkdev->file_size < 0) {
         BlockDriverState *bs = blk_bs(blkdev->blk);
         const char *drv_name = bs ? bdrv_get_format_name(bs) : NULL;
-        xen_pv_printf(&blkdev->xendev, 1, "blk_getlength: %d (%s) | drv %s\n",
+        xen_pv_printf(xendev, 1, "blk_getlength: %d (%s) | drv %s\n",
                       (int)blkdev->file_size, strerror(-blkdev->file_size),
                       drv_name ?: "-");
         blkdev->file_size = 0;
@@ -826,15 +830,15 @@ static int blk_connect(struct XenDevice *xendev)
                   blkdev->file_size, blkdev->file_size >> 20);
 
     /* Fill in number of sector size and number of sectors */
-    xenstore_write_be_int(&blkdev->xendev, "sector-size", blkdev->file_blk);
-    xenstore_write_be_int64(&blkdev->xendev, "sectors",
+    xenstore_write_be_int(xendev, "sector-size", blkdev->file_blk);
+    xenstore_write_be_int64(xendev, "sectors",
                             blkdev->file_size / blkdev->file_blk);
 
-    if (xenstore_read_fe_int(&blkdev->xendev, "ring-page-order",
+    if (xenstore_read_fe_int(xendev, "ring-page-order",
                              &order) == -1) {
         blkdev->nr_ring_ref = 1;
 
-        if (xenstore_read_fe_int(&blkdev->xendev, "ring-ref",
+        if (xenstore_read_fe_int(xendev, "ring-ref",
                                  &ring_ref) == -1) {
             return -1;
         }
@@ -851,7 +855,7 @@ static int blk_connect(struct XenDevice *xendev)
                 return -1;
             }
 
-            if (xenstore_read_fe_int(&blkdev->xendev, key,
+            if (xenstore_read_fe_int(xendev, key,
                                      &ring_ref) == -1) {
                 g_free(key);
                 return -1;
@@ -866,18 +870,18 @@ static int blk_connect(struct XenDevice *xendev)
         return -1;
     }
 
-    if (xenstore_read_fe_int(&blkdev->xendev, "event-channel",
-                             &blkdev->xendev.remote_port) == -1) {
+    if (xenstore_read_fe_int(xendev, "event-channel",
+                             &xendev->remote_port) == -1) {
         return -1;
     }
 
-    if (!blkdev->xendev.protocol) {
+    if (!xendev->protocol) {
         blkdev->protocol = BLKIF_PROTOCOL_NATIVE;
-    } else if (strcmp(blkdev->xendev.protocol, XEN_IO_PROTO_ABI_NATIVE) == 0) {
+    } else if (strcmp(xendev->protocol, XEN_IO_PROTO_ABI_NATIVE) == 0) {
         blkdev->protocol = BLKIF_PROTOCOL_NATIVE;
-    } else if (strcmp(blkdev->xendev.protocol, XEN_IO_PROTO_ABI_X86_32) == 0) {
+    } else if (strcmp(xendev->protocol, XEN_IO_PROTO_ABI_X86_32) == 0) {
         blkdev->protocol = BLKIF_PROTOCOL_X86_32;
-    } else if (strcmp(blkdev->xendev.protocol, XEN_IO_PROTO_ABI_X86_64) == 0) {
+    } else if (strcmp(xendev->protocol, XEN_IO_PROTO_ABI_X86_64) == 0) {
         blkdev->protocol = BLKIF_PROTOCOL_X86_64;
     } else {
         blkdev->protocol = BLKIF_PROTOCOL_NATIVE;
@@ -907,13 +911,13 @@ static int blk_connect(struct XenDevice *xendev)
     /* Add on the number needed for the ring pages */
     max_grants = blkdev->nr_ring_ref;
 
-    blkdev->xendev.gnttabdev = xengnttab_open(NULL, 0);
-    if (blkdev->xendev.gnttabdev == NULL) {
+    xendev->gnttabdev = xengnttab_open(NULL, 0);
+    if (xendev->gnttabdev == NULL) {
         xen_pv_printf(xendev, 0, "xengnttab_open failed: %s\n",
                       strerror(errno));
         return -1;
     }
-    if (xengnttab_set_max_grants(blkdev->xendev.gnttabdev, max_grants)) {
+    if (xengnttab_set_max_grants(xendev->gnttabdev, max_grants)) {
         xen_pv_printf(xendev, 0, "xengnttab_set_max_grants failed: %s\n",
                       strerror(errno));
         return -1;
@@ -921,10 +925,10 @@ static int blk_connect(struct XenDevice *xendev)
 
     domids = g_new0(uint32_t, blkdev->nr_ring_ref);
     for (i = 0; i < blkdev->nr_ring_ref; i++) {
-        domids[i] = blkdev->xendev.dom;
+        domids[i] = xendev->dom;
     }
 
-    blkdev->sring = xengnttab_map_grant_refs(blkdev->xendev.gnttabdev,
+    blkdev->sring = xengnttab_map_grant_refs(xendev->gnttabdev,
                                              blkdev->nr_ring_ref,
                                              domids,
                                              blkdev->ring_ref,
@@ -961,12 +965,12 @@ static int blk_connect(struct XenDevice *xendev)
 
     blk_set_aio_context(blkdev->blk, blkdev->ctx);
 
-    xen_be_bind_evtchn(&blkdev->xendev);
+    xen_be_bind_evtchn(xendev);
 
-    xen_pv_printf(&blkdev->xendev, 1, "ok: proto %s, nr-ring-ref %u, "
+    xen_pv_printf(xendev, 1, "ok: proto %s, nr-ring-ref %u, "
                   "remote port %d, local port %d\n",
-                  blkdev->xendev.protocol, blkdev->nr_ring_ref,
-                  blkdev->xendev.remote_port, blkdev->xendev.local_port);
+                  xendev->protocol, blkdev->nr_ring_ref,
+                  xendev->remote_port, xendev->local_port);
     return 0;
 }
 
@@ -984,19 +988,19 @@ static void blk_disconnect(struct XenDevice *xendev)
         blk_unref(blkdev->blk);
         blkdev->blk = NULL;
     }
-    xen_pv_unbind_evtchn(&blkdev->xendev);
+    xen_pv_unbind_evtchn(xendev);
 
     aio_context_release(blkdev->ctx);
 
     if (blkdev->sring) {
-        xengnttab_unmap(blkdev->xendev.gnttabdev, blkdev->sring,
+        xengnttab_unmap(xendev->gnttabdev, blkdev->sring,
                         blkdev->nr_ring_ref);
         blkdev->sring = NULL;
     }
 
-    if (blkdev->xendev.gnttabdev) {
-        xengnttab_close(blkdev->xendev.gnttabdev);
-        blkdev->xendev.gnttabdev = NULL;
+    if (xendev->gnttabdev) {
+        xengnttab_close(xendev->gnttabdev);
+        xendev->gnttabdev = NULL;
     }
 }
 
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 28+ messages in thread

* Re: [Qemu-devel] [Xen-devel] [PATCH 2/4] block/xen_disk: remove use of grant map/unmap
  2018-04-30 12:01   ` Paul Durrant
@ 2018-04-30 15:12     ` Roger Pau Monné
  -1 siblings, 0 replies; 28+ messages in thread
From: Roger Pau Monné @ 2018-04-30 15:12 UTC (permalink / raw)
  To: Paul Durrant
  Cc: xen-devel, qemu-block, qemu-devel, Anthony Perard, Kevin Wolf,
	Stefano Stabellini, Max Reitz

On Mon, Apr 30, 2018 at 01:01:37PM +0100, Paul Durrant wrote:
> The grant copy operation was added to libxengnttab in Xen 4.8.0. If grant
> copy is available then data from the guest will be copied rather than
> mapped.
> The xen_disk source can be significantly simplified by removing this now
> redundant code.

Hm, I know this is a PITA, but FreeBSD gntdev hasn't implemented the
grant-copy operation yet.

I could try to implement it, but I can't make any promises on the time
ATM, since I'm quite busy.

Thanks, Roger.

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [Qemu-devel] [Xen-devel] [PATCH 2/4] block/xen_disk: remove use of grant map/unmap
  2018-04-30 15:12     ` Roger Pau Monné
@ 2018-04-30 15:16       ` Paul Durrant
  -1 siblings, 0 replies; 28+ messages in thread
From: Paul Durrant @ 2018-04-30 15:16 UTC (permalink / raw)
  To: Roger Pau Monne
  Cc: xen-devel, qemu-block, qemu-devel, Anthony Perard, Kevin Wolf,
	Stefano Stabellini, Max Reitz

> -----Original Message-----
> From: Roger Pau Monne
> Sent: 30 April 2018 16:12
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: xen-devel@lists.xenproject.org; qemu-block@nongnu.org; qemu-
> devel@nongnu.org; Anthony Perard <anthony.perard@citrix.com>; Kevin
> Wolf <kwolf@redhat.com>; Stefano Stabellini <sstabellini@kernel.org>; Max
> Reitz <mreitz@redhat.com>
> Subject: Re: [Xen-devel] [PATCH 2/4] block/xen_disk: remove use of grant
> map/unmap
> 
> On Mon, Apr 30, 2018 at 01:01:37PM +0100, Paul Durrant wrote:
> > The grant copy operation was added to libxengnttab in Xen 4.8.0. If grant
> > copy is available then data from the guest will be copied rather than
> > mapped.
> > The xen_disk source can be significantly simplified by removing this now
> > redundant code.
> 
> Hm, I know this is a PITA, but FreeBSD gntdev hasn't implemented the
> grant-copy operation yet.
> 
> I could try to implement it, but I can't make any promises on the time
> ATM, since I'm quite busy.
> 

I guess we could carry a compat patch in QEMU that implements grant copy by doing a map/memcpy/unmap, but QEMU feels like the wrong place for that. I could try putting together a similar patch for the freebsd.c component of libxengnttab in the xen source rather than it simply failing with ENOSYS as it does now. Would either of those help?
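
For reference, a rough sketch of the former (single-page segments
assumed, error handling elided, and the helper name is made up):

  /* Sketch of a grant-copy fallback built from map/memcpy/unmap. */
  static int fallback_grant_copy_seg(xengnttab_handle *xgt,
                                     xengnttab_grant_copy_segment_t *seg)
  {
      int rc = 0;
      void *page;

      if (seg->flags & GNTCOPY_source_gref) {
          /* Read from the foreign page into the local buffer. */
          page = xengnttab_map_grant_ref(xgt, seg->source.foreign.domid,
                                         seg->source.foreign.ref,
                                         PROT_READ);
          if (!page)
              return -1;
          memcpy(seg->dest.virt,
                 (uint8_t *)page + seg->source.foreign.offset, seg->len);
      } else {
          /* Write from the local buffer into the foreign page. */
          page = xengnttab_map_grant_ref(xgt, seg->dest.foreign.domid,
                                         seg->dest.foreign.ref,
                                         PROT_WRITE);
          if (!page)
              return -1;
          memcpy((uint8_t *)page + seg->dest.foreign.offset,
                 seg->source.virt, seg->len);
      }

      if (xengnttab_unmap(xgt, page, 1))
          rc = -1;
      seg->status = rc ? GNTST_general_error : GNTST_okay;
      return rc;
  }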

  Cheers,

    Paul

> Thanks, Roger.

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [Qemu-devel] [Xen-devel] [PATCH 2/4] block/xen_disk: remove use of grant map/unmap
  2018-04-30 15:16       ` Paul Durrant
@ 2018-04-30 15:28         ` Roger Pau Monné
  -1 siblings, 0 replies; 28+ messages in thread
From: Roger Pau Monné @ 2018-04-30 15:28 UTC (permalink / raw)
  To: Paul Durrant
  Cc: xen-devel, qemu-block, qemu-devel, Anthony Perard, Kevin Wolf,
	Stefano Stabellini, Max Reitz

On Mon, Apr 30, 2018 at 04:16:52PM +0100, Paul Durrant wrote:
> > -----Original Message-----
> > From: Roger Pau Monne
> > Sent: 30 April 2018 16:12
> > To: Paul Durrant <Paul.Durrant@citrix.com>
> > Cc: xen-devel@lists.xenproject.org; qemu-block@nongnu.org; qemu-
> > devel@nongnu.org; Anthony Perard <anthony.perard@citrix.com>; Kevin
> > Wolf <kwolf@redhat.com>; Stefano Stabellini <sstabellini@kernel.org>; Max
> > Reitz <mreitz@redhat.com>
> > Subject: Re: [Xen-devel] [PATCH 2/4] block/xen_disk: remove use of grant
> > map/unmap
> > 
> > On Mon, Apr 30, 2018 at 01:01:37PM +0100, Paul Durrant wrote:
> > > The grant copy operation was added to libxengnttab in Xen 4.8.0. If grant
> > > copy is available then data from the guest will be copied rather than
> > > mapped.
> > > The xen_disk source can be significantly simplified by removing this now
> > > redundant code.
> > 
> > Hm, I know this is a PITA, but FreeBSD gntdev hasn't implemented the
> > grant-copy operation yet.
> > 
> > I could try to implement it, but I can't make any promises on the time
> > ATM, since I'm quite busy.
> > 
> 
> I guess we could carry a compat patch in QEMU that implements grant copy by doing a map/memcpy/unmap, but QEMU feels like the wrong place for that. I could try putting together a similar patch for the freebsd.c component of libxengnttab in the xen source rather than it simply failing with ENOSYS as it does now. Would either of those help?

Maybe this could be implemented in gnttab_core.c, so it can also be
used by MiniOS and Linux versions not supporting the copy ioctl as a
fallback?
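
Something along these lines in gnttab_core.c perhaps (sketch only:
osdep_gnttab_grant_copy is the existing per-OS hook, while the
map-based helper is hypothetical):

  int xengnttab_grant_copy(xengnttab_handle *xgt, uint32_t count,
                           xengnttab_grant_copy_segment_t *segs)
  {
      /* Try the OS-specific copy primitive first... */
      int rc = osdep_gnttab_grant_copy(xgt, count, segs);

      /* ...and fall back to map/memcpy/unmap where it is missing. */
      if (rc == -1 && errno == ENOSYS)
          rc = gnttab_copy_via_map(xgt, count, segs); /* hypothetical */

      return rc;
  }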

Roger.

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [Qemu-devel] [Xen-devel] [PATCH 2/4] block/xen_disk: remove use of grant map/unmap
  2018-04-30 15:28         ` Roger Pau Monné
@ 2018-04-30 15:30           ` Paul Durrant
  -1 siblings, 0 replies; 28+ messages in thread
From: Paul Durrant @ 2018-04-30 15:30 UTC (permalink / raw)
  To: Roger Pau Monne
  Cc: xen-devel, qemu-block, qemu-devel, Anthony Perard, Kevin Wolf,
	Stefano Stabellini, Max Reitz

> -----Original Message-----
> From: Roger Pau Monne
> Sent: 30 April 2018 16:28
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: xen-devel@lists.xenproject.org; qemu-block@nongnu.org; qemu-
> devel@nongnu.org; Anthony Perard <anthony.perard@citrix.com>; Kevin
> Wolf <kwolf@redhat.com>; Stefano Stabellini <sstabellini@kernel.org>; Max
> Reitz <mreitz@redhat.com>
> Subject: Re: [Xen-devel] [PATCH 2/4] block/xen_disk: remove use of grant
> map/unmap
> 
> On Mon, Apr 30, 2018 at 04:16:52PM +0100, Paul Durrant wrote:
> > > -----Original Message-----
> > > From: Roger Pau Monne
> > > Sent: 30 April 2018 16:12
> > > To: Paul Durrant <Paul.Durrant@citrix.com>
> > > Cc: xen-devel@lists.xenproject.org; qemu-block@nongnu.org; qemu-
> > > devel@nongnu.org; Anthony Perard <anthony.perard@citrix.com>;
> Kevin
> > > Wolf <kwolf@redhat.com>; Stefano Stabellini <sstabellini@kernel.org>;
> Max
> > > Reitz <mreitz@redhat.com>
> > > Subject: Re: [Xen-devel] [PATCH 2/4] block/xen_disk: remove use of
> grant
> > > map/unmap
> > >
> > > On Mon, Apr 30, 2018 at 01:01:37PM +0100, Paul Durrant wrote:
> > > > The grant copy operation was added to libxengnttab in Xen 4.8.0. If
> grant
> > > > copy is available then data from the guest will be copied rather than
> > > > mapped.
> > > > The xen_disk source can be significantly simplified by removing this
> now
> > > > redundant code.
> > >
> > > Hm, I know this is a PITA, but FreeBSD gntdev hasn't implemented the
> > > grant-copy operation yet.
> > >
> > > I could try to implement it, but I can't make any promises on the time
> > > ATM, since I'm quite busy.
> > >
> >
> > I guess we could carry a compat patch in QEMU that implements grant copy
> by doing a map/memcpy/unmap, but QEMU feels like the wrong place for
> that. I could try putting together a similar patch for the freebsd.c component
> of libxengnttab in the xen source rather than it simply failing with ENOSYS as
> it does now. Would either of those help?
> 
> Maybe this could be implemented in gnttab_core.c, so it can also be
> used by MiniOS and Linux versions not supporting the copy ioctl as a
> fallback?

That sounds like a reasonable idea. I'll put something together so that it can go in early in 4.12.

  Paul

> 
> Roger.

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [Qemu-devel] [Xen-devel] [PATCH 2/4] block/xen_disk: remove use of grant map/unmap
  2018-04-30 15:30           ` Paul Durrant
@ 2018-05-01 10:29           ` Wei Liu
  2018-05-01 10:31             ` Paul Durrant
  2018-05-01 10:31             ` [Qemu-devel] [Xen-devel] " Paul Durrant
  -1 siblings, 2 replies; 28+ messages in thread
From: Wei Liu @ 2018-05-01 10:29 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Roger Pau Monne, Kevin Wolf, Stefano Stabellini, qemu-block,
	qemu-devel, Max Reitz, Anthony Perard, xen-devel, Wei Liu

On Mon, Apr 30, 2018 at 03:30:02PM +0000, Paul Durrant wrote:
> > -----Original Message-----
> > From: Roger Pau Monne
> > Sent: 30 April 2018 16:28
> > To: Paul Durrant <Paul.Durrant@citrix.com>
> > Cc: xen-devel@lists.xenproject.org; qemu-block@nongnu.org; qemu-
> > devel@nongnu.org; Anthony Perard <anthony.perard@citrix.com>; Kevin
> > Wolf <kwolf@redhat.com>; Stefano Stabellini <sstabellini@kernel.org>; Max
> > Reitz <mreitz@redhat.com>
> > Subject: Re: [Xen-devel] [PATCH 2/4] block/xen_disk: remove use of grant
> > map/unmap
> > 
> > On Mon, Apr 30, 2018 at 04:16:52PM +0100, Paul Durrant wrote:
> > > > -----Original Message-----
> > > > From: Roger Pau Monne
> > > > Sent: 30 April 2018 16:12
> > > > To: Paul Durrant <Paul.Durrant@citrix.com>
> > > > Cc: xen-devel@lists.xenproject.org; qemu-block@nongnu.org; qemu-
> > > > devel@nongnu.org; Anthony Perard <anthony.perard@citrix.com>;
> > Kevin
> > > > Wolf <kwolf@redhat.com>; Stefano Stabellini <sstabellini@kernel.org>;
> > Max
> > > > Reitz <mreitz@redhat.com>
> > > > Subject: Re: [Xen-devel] [PATCH 2/4] block/xen_disk: remove use of
> > grant
> > > > map/unmap
> > > >
> > > > On Mon, Apr 30, 2018 at 01:01:37PM +0100, Paul Durrant wrote:
> > > > > The grant copy operation was added to libxengnttab in Xen 4.8.0. If
> > grant
> > > > > copy is available then data from the guest will be copied rather than
> > > > > mapped.
> > > > > The xen_disk source can be significantly simplified by removing this
> > now
> > > > > redundant code.
> > > >
> > > > Hm, I know this is a PITA, but FreeBSD gntdev hasn't implemented the
> > > > grant-copy operation yet.
> > > >
> > > > I could try to implement it, but I can't make any promises on the time
> > > > ATM, since I'm quite busy.
> > > >
> > >
> > > I guess we could carry a compat patch in QEMU that implements grant copy
> > by doing a map/memcpy/unmap, but QEMU feels like the wrong place for
> > that. I could try putting together a similar patch for the freebsd.c component
> > of libxengnttab in the xen source rather than it simply failing with ENOSYS as
> > it does now. Would either of those help?
> > 
> > Maybe this could be implemented in gnttab_core.c, so it can also be
> > used by MiniOS and Linux versions not supporting the copy ioctl as a
> > fallback?
> 
> That sounds like a reasonable idea. I'll put something together so that it can go in early in 4.12.
> 

This will not work if XSM disallows grant map but allows grant copy.
Not sure how important that is.

Wei.

>   Paul
> 
> > 
> > Roger.
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xenproject.org
> https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [Qemu-devel] [Xen-devel] [PATCH 2/4] block/xen_disk: remove use of grant map/unmap
  2018-05-01 10:29           ` [Qemu-devel] [Xen-devel] " Wei Liu
  2018-05-01 10:31             ` Paul Durrant
@ 2018-05-01 10:31             ` Paul Durrant
  1 sibling, 0 replies; 28+ messages in thread
From: Paul Durrant @ 2018-05-01 10:31 UTC (permalink / raw)
  To: Wei Liu
  Cc: Roger Pau Monne, Kevin Wolf, Stefano Stabellini, qemu-block,
	qemu-devel, Max Reitz, Anthony Perard, xen-devel

> -----Original Message-----
> From: Wei Liu [mailto:wei.liu2@citrix.com]
> Sent: 01 May 2018 11:30
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Roger Pau Monne <roger.pau@citrix.com>; Kevin Wolf
> <kwolf@redhat.com>; Stefano Stabellini <sstabellini@kernel.org>; qemu-
> block@nongnu.org; qemu-devel@nongnu.org; Max Reitz
> <mreitz@redhat.com>; Anthony Perard <anthony.perard@citrix.com>; xen-
> devel@lists.xenproject.org; Wei Liu <wei.liu2@citrix.com>
> Subject: Re: [Xen-devel] [PATCH 2/4] block/xen_disk: remove use of grant
> map/unmap
> 
> On Mon, Apr 30, 2018 at 03:30:02PM +0000, Paul Durrant wrote:
> > > -----Original Message-----
> > > From: Roger Pau Monne
> > > Sent: 30 April 2018 16:28
> > > To: Paul Durrant <Paul.Durrant@citrix.com>
> > > Cc: xen-devel@lists.xenproject.org; qemu-block@nongnu.org; qemu-
> > > devel@nongnu.org; Anthony Perard <anthony.perard@citrix.com>;
> Kevin
> > > Wolf <kwolf@redhat.com>; Stefano Stabellini <sstabellini@kernel.org>;
> Max
> > > Reitz <mreitz@redhat.com>
> > > Subject: Re: [Xen-devel] [PATCH 2/4] block/xen_disk: remove use of
> grant
> > > map/unmap
> > >
> > > On Mon, Apr 30, 2018 at 04:16:52PM +0100, Paul Durrant wrote:
> > > > > -----Original Message-----
> > > > > From: Roger Pau Monne
> > > > > Sent: 30 April 2018 16:12
> > > > > To: Paul Durrant <Paul.Durrant@citrix.com>
> > > > > Cc: xen-devel@lists.xenproject.org; qemu-block@nongnu.org; qemu-
> > > > > devel@nongnu.org; Anthony Perard <anthony.perard@citrix.com>;
> > > Kevin
> > > > > Wolf <kwolf@redhat.com>; Stefano Stabellini
> <sstabellini@kernel.org>;
> > > Max
> > > > > Reitz <mreitz@redhat.com>
> > > > > Subject: Re: [Xen-devel] [PATCH 2/4] block/xen_disk: remove use of
> > > grant
> > > > > map/unmap
> > > > >
> > > > > On Mon, Apr 30, 2018 at 01:01:37PM +0100, Paul Durrant wrote:
> > > > > > The grant copy operation was added to libxengnttab in Xen 4.8.0. If
> > > grant
> > > > > > copy is available then data from the guest will be copied rather than
> > > > > > mapped.
> > > > > > The xen_disk source can be significantly simplified by removing this
> > > now
> > > > > > redundant code.
> > > > >
> > > > > Hm, I know this is a PITA, but FreeBSD gntdev hasn't implemented the
> > > > > grant-copy operation yet.
> > > > >
> > > > > I could try to implement it, but I can't make any promises on the time
> > > > > ATM, since I'm quite busy.
> > > > >
> > > >
> > > > I guess we could carry a compat patch in QEMU that implements grant
> copy
> > > by doing a map/memcpy/unmap, but QEMU feels like the wrong place
> for
> > > that. I could try putting together a similar patch for the freebsd.c
> component
> > > of libxengnttab in the xen source rather than it simply failing with ENOSYS
> as
> > > it does now. Would either of those help?
> > >
> > > Maybe this could be implemented in gnttab_core.c, so it can also be
> > > used by MiniOS and Linux versions not supporting the copy ioctl as a
> > > fallback?
> >
> > That sounds like a reasonable idea. I'll put something together so that it can
> go in early in 4.12.
> >
> 
> This will not work if XSM disallows grant map but allows grant copy.
> Not sure how important that is.

I think it's just 'tough' at that point. This is only compat and there'd be no change from using current QEMU (which would issue the grant maps directly).

  Cheers,

    Paul

> 
> Wei.
> 
> >   Paul
> >
> > >
> > > Roger.
> >
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xenproject.org
> > https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [Qemu-devel] [PATCH 0/4] block/xen_disk: legacy code removal and cleanup
  2018-04-30 12:01 ` Paul Durrant
                   ` (5 preceding siblings ...)
@ 2018-05-02 15:58 ` Anthony PERARD
  2018-05-02 16:03   ` Paul Durrant
  2018-05-02 16:03   ` [Qemu-devel] " Paul Durrant
  -1 siblings, 2 replies; 28+ messages in thread
From: Anthony PERARD @ 2018-05-02 15:58 UTC (permalink / raw)
  To: Paul Durrant
  Cc: xen-devel, qemu-block, qemu-devel, Stefano Stabellini,
	Kevin Wolf, Max Reitz

On Mon, Apr 30, 2018 at 01:01:35PM +0100, Paul Durrant wrote:
> The grant copy operation was added to libxengnttab in Xen 4.8.0 (released
> nearly 18 months ago) but the xen_disk PV backend QEMU is still carrying
> a significant amount of code purely to remain compatible with older
> versions of Xen.
> 
> As can be inferred from the diff stats below, removing this support for
> older versions of Xen from QEMU reduces the size of the xen_disk source by
> more than 350 lines (~25%). The majority of this is done in patches #1
> and #2. Further simplifications are made in patch #3 and then some cosmetic
> work is done in patch #4.

FYI, I don't like this patch series. We've been checking that QEMU
builds against older versions. I've checked that it builds against 4.5 and
newer.

Also the fact that FreeBSD doesn't have support for grant copy probably
means that it is too soon to remove the compatibility code from QEMU.

Regards,

-- 
Anthony PERARD

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [Qemu-devel] [PATCH 0/4] block/xen_disk: legacy code removal and cleanup
  2018-05-02 15:58 ` [Qemu-devel] " Anthony PERARD
  2018-05-02 16:03   ` Paul Durrant
@ 2018-05-02 16:03   ` Paul Durrant
  2018-05-03  9:55     ` Anthony PERARD
  2018-05-03  9:55     ` Anthony PERARD
  1 sibling, 2 replies; 28+ messages in thread
From: Paul Durrant @ 2018-05-02 16:03 UTC (permalink / raw)
  To: Anthony Perard
  Cc: xen-devel, qemu-block, qemu-devel, Stefano Stabellini,
	Kevin Wolf, Max Reitz

> -----Original Message-----
> From: Anthony PERARD [mailto:anthony.perard@citrix.com]
> Sent: 02 May 2018 16:58
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: xen-devel@lists.xenproject.org; qemu-block@nongnu.org; qemu-
> devel@nongnu.org; Stefano Stabellini <sstabellini@kernel.org>; Kevin Wolf
> <kwolf@redhat.com>; Max Reitz <mreitz@redhat.com>
> Subject: Re: [PATCH 0/4] block/xen_disk: legacy code removal and cleanup
> 
> On Mon, Apr 30, 2018 at 01:01:35PM +0100, Paul Durrant wrote:
> > The grant copy operation was added to libxengnttab in Xen 4.8.0 (released
> > nearly 18 months ago) but the xen_disk PV backend QEMU is still carrying
> > a significant amount of code purely to remain compatible with older
> > versions of Xen.
> >
> > As can be inferred from the diff stats below, removing this support for
> > older versions of Xen from QEMU reduces the size of the xen_disk source
> by
> > more than 350 lines (~25%). The majority of this is done in patches #1
> > and #2. Further simplifications are made in patch #3 and then some
> cosmetic
> > work is done in patch #4.
> 
> FYI, I don't like this patch series. We've been checking that QEMU
> builds against older versions. I've checked that it builds against 4.5 and
> newer.
> 

Ok, I can do grant copy emulation in QEMU then, should it not exist for the particular Xen/OS combo.

> Also the fact that FreeBSD doesn't have support for grant copy probably
> means that it is too soon to remove the compatibility code from QEMU.
> 

On another thread I'd already agreed to emulate grant copy in libxengnttab for those OS where it is not supported, but if you prefer it to be in QEMU I'll put it there.

  Paul

> Regards,
> 
> --
> Anthony PERARD

^ permalink raw reply	[flat|nested] 28+ messages in thread
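
Should the emulation live in QEMU instead, the natural shape would be
the usual version guard in QEMU's Xen compatibility header. Again,
this is only a sketch under assumptions: CONFIG_XEN_CTRL_INTERFACE_VERSION
and the xengnttab_* names are the real interfaces, and the segment
layout mirrors the Xen 4.8 xengnttab.h, but the emulation helper
called here is the hypothetical map/memcpy/unmap loop sketched
earlier in this thread, not an existing function.

#if CONFIG_XEN_CTRL_INTERFACE_VERSION < 40800

/*
 * Pre-4.8 xengnttab.h has no grant-copy support, so the shim must
 * define the segment layout itself (matching the 4.8 definition).
 * GNTCOPY_* and GNTST_* still come from the public xen/grant_table.h.
 */
typedef struct xengnttab_grant_copy_segment {
    union {
        void *virt;
        struct {
            uint32_t ref;
            uint16_t offset;
            uint16_t domid;
        } foreign;
    } source, dest;
    uint16_t len;
    uint16_t flags;
    int16_t status;
} xengnttab_grant_copy_segment_t;

static inline int xengnttab_grant_copy(xengnttab_handle *xgt, uint32_t count,
                                       xengnttab_grant_copy_segment_t *segs)
{
    /* Fall back to a map/memcpy/unmap loop such as the earlier sketch. */
    return emulate_grant_copy(xgt, count, segs);
}

#endif /* CONFIG_XEN_CTRL_INTERFACE_VERSION < 40800 */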

* Re: [Qemu-devel] [PATCH 0/4] block/xen_disk: legacy code removal and cleanup
  2018-05-02 16:03   ` [Qemu-devel] " Paul Durrant
@ 2018-05-03  9:55     ` Anthony PERARD
  2018-05-03  9:55     ` Anthony PERARD
  1 sibling, 0 replies; 28+ messages in thread
From: Anthony PERARD @ 2018-05-03  9:55 UTC (permalink / raw)
  To: Paul Durrant
  Cc: xen-devel, qemu-block, qemu-devel, Stefano Stabellini,
	Kevin Wolf, Max Reitz

On Wed, May 02, 2018 at 05:03:09PM +0100, Paul Durrant wrote:
> > -----Original Message-----
> > From: Anthony PERARD [mailto:anthony.perard@citrix.com]
> > Sent: 02 May 2018 16:58
> > To: Paul Durrant <Paul.Durrant@citrix.com>
> > Cc: xen-devel@lists.xenproject.org; qemu-block@nongnu.org; qemu-
> > devel@nongnu.org; Stefano Stabellini <sstabellini@kernel.org>; Kevin Wolf
> > <kwolf@redhat.com>; Max Reitz <mreitz@redhat.com>
> > Subject: Re: [PATCH 0/4] block/xen_disk: legacy code removal and cleanup
> > 
> > On Mon, Apr 30, 2018 at 01:01:35PM +0100, Paul Durrant wrote:
> > > The grant copy operation was added to libxengnttab in Xen 4.8.0 (released
> > > nearly 18 months ago) but the xen_disk PV backend QEMU is still carrying
> > > a significant amount of code purely to remain compatible with older
> > > versions of Xen.
> > >
> > > As can be inferred from the diff stats below, removing this support for
> > > older versions of Xen from QEMU reduces the size of the xen_disk source
> > by
> > > more than 350 lines (~25%). The majority of this is done in patches #1
> > > and #2. Further simplifications are made in patch #3 and then some
> > cosmetic
> > > work is done in patch #4.
> > 
> > FYI, I don't like this patch series. We've been checking that QEMU
> > builds against older versions. I've checked that it builds against 4.5 and
> > newer.
> > 
> 
> Ok, I can do grant copy emulation in QEMU then, should it not exist for the particular Xen/OS combo.
> 
> > Also the fact that FreeBSD doesn't have support for grant copy probably
> > means that it is too soon to remove the compatibility code from QEMU.
> > 
> 
> On another thread I'd already agreed to emulate grant copy in libxengnttab for those OS where it is not supported, but if you prefer it to be in QEMU I'll put it there.

Yes, I think it will be better from QEMU's point of view.

Thanks,

-- 
Anthony PERARD

^ permalink raw reply	[flat|nested] 28+ messages in thread

end of thread [~2018-05-03  9:55 UTC | newest]

Thread overview: 14 messages
2018-04-30 12:01 [Qemu-devel] [PATCH 0/4] block/xen_disk: legacy code removal and cleanup Paul Durrant
2018-04-30 12:01 ` [Qemu-devel] [PATCH 1/4] block/xen_disk: remove persistent grant code Paul Durrant
2018-04-30 12:01 ` [Qemu-devel] [PATCH 2/4] block/xen_disk: remove use of grant map/unmap Paul Durrant
2018-04-30 15:12   ` [Qemu-devel] [Xen-devel] " Roger Pau Monné
2018-04-30 15:16     ` Paul Durrant
2018-04-30 15:28       ` Roger Pau Monné
2018-04-30 15:30         ` Paul Durrant
2018-05-01 10:29           ` Wei Liu
2018-05-01 10:31             ` Paul Durrant
2018-04-30 12:01 ` [Qemu-devel] [PATCH 3/4] block/xen_disk: use a single entry iovec Paul Durrant
2018-04-30 12:01 ` [Qemu-devel] [PATCH 4/4] block/xen_disk: be consistent with use of xendev and blkdev->xendev Paul Durrant
2018-05-02 15:58 ` [Qemu-devel] [PATCH 0/4] block/xen_disk: legacy code removal and cleanup Anthony PERARD
2018-05-02 16:03   ` Paul Durrant
2018-05-03  9:55     ` Anthony PERARD
