All of lore.kernel.org
* [Qemu-devel] [PATCH 0/2] xen: fix gnttab handling with old dom0 kernels
@ 2017-09-19 11:50 ` Juergen Gross
  0 siblings, 0 replies; 14+ messages in thread
From: Juergen Gross @ 2017-09-19 11:50 UTC (permalink / raw)
  To: qemu-devel, xen-devel; +Cc: anthony.perard, kraxel, sstabellini, Juergen Gross

Trying to start a domain with a qdisk backend on an old dom0 kernel
(a non-pvops Xen kernel) fails, because xengnttab_set_max_grants()
succeeds only if called directly after opening the gnttab device.

These two patches address the issue by:

- moving the test for availability of xengnttab_grant_copy() from
  the qdisk backend into the generic backend handling, so the grant
  copy ioctl need not be issued on the qdisk-specific file handle
- opening the gnttab device only in blk_connect(), guaranteeing that
  xengnttab_set_max_grants() is called just after the gnttab device
  has been opened
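The kernel constraint behind the series can be modelled with a small
stand-alone mock (all names below are illustrative stand-ins, not the
real libxengnttab API): on the affected kernels, any ioctl on a gnttab
handle prevents a later set-max-grants call on that same handle, so the
grant copy probe must use a throwaway handle and the handle used for
set-max-grants must be freshly opened.

```c
#include <stdbool.h>

/* Toy model of the old-kernel gnttab semantics described above: the
 * set-max-grants operation is honoured only as the very first ioctl
 * after opening the device.  These are mock names, not the real
 * libxengnttab functions. */
typedef struct { bool used; } mock_gnttab;

static mock_gnttab mock_open(void)
{
    return (mock_gnttab){ .used = false };
}

/* Probing grant copy counts as an ioctl and "uses up" the handle. */
static int mock_grant_copy_probe(mock_gnttab *h)
{
    h->used = true;
    return 0;
}

/* Old kernels reject set-max-grants unless it is the first operation
 * after open. */
static int mock_set_max_grants(mock_gnttab *h)
{
    if (h->used) {
        return -1;
    }
    h->used = true;
    return 0;
}
```

In this model, probing grant copy and then calling set-max-grants on the
same handle fails, while probing on a throwaway handle and opening a
fresh handle for set-max-grants succeeds — which is the split the two
patches implement.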


Juergen Gross (2):
  xen: add a global indicator for grant copy being available
  xen: dont try setting max grants multiple times

 hw/block/xen_disk.c          | 30 +++++++++++++++++-------------
 hw/xen/xen_backend.c         | 11 +++++++++++
 include/hw/xen/xen_backend.h |  1 +
 3 files changed, 29 insertions(+), 13 deletions(-)

-- 
2.12.3

^ permalink raw reply	[flat|nested] 14+ messages in thread

* [Qemu-devel] [PATCH 1/2] xen: add a global indicator for grant copy being available
  2017-09-19 11:50 ` Juergen Gross
@ 2017-09-19 11:50   ` Juergen Gross
  -1 siblings, 0 replies; 14+ messages in thread
From: Juergen Gross @ 2017-09-19 11:50 UTC (permalink / raw)
  To: qemu-devel, xen-devel; +Cc: anthony.perard, kraxel, sstabellini, Juergen Gross

The Xen qdisk backend needs to test whether the grant copy operation
is available in the kernel. Unfortunately this collides with using
xengnttab_set_max_grants() on some kernels, as that operation has to
be the first one after opening the gnttab device.

To solve this problem, test for the availability of grant copy in
xen_be_init(), opening the gnttab device just for that purpose and
closing it again afterwards. Advertise the availability via a global
flag and use that flag in the qdisk backend.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 hw/block/xen_disk.c          | 18 ++++++------------
 hw/xen/xen_backend.c         | 11 +++++++++++
 include/hw/xen/xen_backend.h |  1 +
 3 files changed, 18 insertions(+), 12 deletions(-)

diff --git a/hw/block/xen_disk.c b/hw/block/xen_disk.c
index d42ed7070d..6632746250 100644
--- a/hw/block/xen_disk.c
+++ b/hw/block/xen_disk.c
@@ -121,9 +121,6 @@ struct XenBlkDev {
     unsigned int        persistent_gnt_count;
     unsigned int        max_grants;
 
-    /* Grant copy */
-    gboolean            feature_grant_copy;
-
     /* qemu block driver */
     DriveInfo           *dinfo;
     BlockBackend        *blk;
@@ -616,7 +613,7 @@ static void qemu_aio_complete(void *opaque, int ret)
         return;
     }
 
-    if (ioreq->blkdev->feature_grant_copy) {
+    if (xen_feature_grant_copy) {
         switch (ioreq->req.operation) {
         case BLKIF_OP_READ:
             /* in case of failure ioreq->aio_errors is increased */
@@ -638,7 +635,7 @@ static void qemu_aio_complete(void *opaque, int ret)
     }
 
     ioreq->status = ioreq->aio_errors ? BLKIF_RSP_ERROR : BLKIF_RSP_OKAY;
-    if (!ioreq->blkdev->feature_grant_copy) {
+    if (!xen_feature_grant_copy) {
         ioreq_unmap(ioreq);
     }
     ioreq_finish(ioreq);
@@ -698,7 +695,7 @@ static int ioreq_runio_qemu_aio(struct ioreq *ioreq)
 {
     struct XenBlkDev *blkdev = ioreq->blkdev;
 
-    if (ioreq->blkdev->feature_grant_copy) {
+    if (xen_feature_grant_copy) {
         ioreq_init_copy_buffers(ioreq);
         if (ioreq->req.nr_segments && (ioreq->req.operation == BLKIF_OP_WRITE ||
             ioreq->req.operation == BLKIF_OP_FLUSH_DISKCACHE) &&
@@ -750,7 +747,7 @@ static int ioreq_runio_qemu_aio(struct ioreq *ioreq)
     }
     default:
         /* unknown operation (shouldn't happen -- parse catches this) */
-        if (!ioreq->blkdev->feature_grant_copy) {
+        if (!xen_feature_grant_copy) {
             ioreq_unmap(ioreq);
         }
         goto err;
@@ -1010,18 +1007,15 @@ static int blk_init(struct XenDevice *xendev)
 
     blkdev->file_blk  = BLOCK_SIZE;
 
-    blkdev->feature_grant_copy =
-                (xengnttab_grant_copy(blkdev->xendev.gnttabdev, 0, NULL) == 0);
-
     xen_pv_printf(&blkdev->xendev, 3, "grant copy operation %s\n",
-                  blkdev->feature_grant_copy ? "enabled" : "disabled");
+                  xen_feature_grant_copy ? "enabled" : "disabled");
 
     /* fill info
      * blk_connect supplies sector-size and sectors
      */
     xenstore_write_be_int(&blkdev->xendev, "feature-flush-cache", 1);
     xenstore_write_be_int(&blkdev->xendev, "feature-persistent",
-                          !blkdev->feature_grant_copy);
+                          !xen_feature_grant_copy);
     xenstore_write_be_int(&blkdev->xendev, "info", info);
 
     xenstore_write_be_int(&blkdev->xendev, "max-ring-page-order",
diff --git a/hw/xen/xen_backend.c b/hw/xen/xen_backend.c
index c46cbb0759..00210627a9 100644
--- a/hw/xen/xen_backend.c
+++ b/hw/xen/xen_backend.c
@@ -44,6 +44,7 @@ BusState *xen_sysbus;
 /* public */
 struct xs_handle *xenstore = NULL;
 const char *xen_protocol;
+gboolean xen_feature_grant_copy;
 
 /* private */
 static int debug;
@@ -519,6 +520,8 @@ void xenstore_update_fe(char *watch, struct XenDevice *xendev)
 
 int xen_be_init(void)
 {
+    xengnttab_handle *gnttabdev;
+
     xenstore = xs_daemon_open();
     if (!xenstore) {
         xen_pv_printf(NULL, 0, "can't connect to xenstored\n");
@@ -532,6 +535,14 @@ int xen_be_init(void)
         goto err;
     }
 
+    gnttabdev = xengnttab_open(NULL, 0);
+    if (gnttabdev != NULL) {
+        if (xengnttab_grant_copy(gnttabdev, 0, NULL) == 0) {
+            xen_feature_grant_copy = true;
+        }
+        xengnttab_close(gnttabdev);
+    }
+
     xen_sysdev = qdev_create(NULL, TYPE_XENSYSDEV);
     qdev_init_nofail(xen_sysdev);
     xen_sysbus = qbus_create(TYPE_XENSYSBUS, DEVICE(xen_sysdev), "xen-sysbus");
diff --git a/include/hw/xen/xen_backend.h b/include/hw/xen/xen_backend.h
index 8a6fbcbe20..08a054f524 100644
--- a/include/hw/xen/xen_backend.h
+++ b/include/hw/xen/xen_backend.h
@@ -16,6 +16,7 @@
 /* variables */
 extern struct xs_handle *xenstore;
 extern const char *xen_protocol;
+extern gboolean xen_feature_grant_copy;
 extern DeviceState *xen_sysdev;
 extern BusState *xen_sysbus;
 
-- 
2.12.3

^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [Qemu-devel] [PATCH 2/2] xen: dont try setting max grants multiple times
  2017-09-19 11:50 ` Juergen Gross
@ 2017-09-19 11:50   ` Juergen Gross
  -1 siblings, 0 replies; 14+ messages in thread
From: Juergen Gross @ 2017-09-19 11:50 UTC (permalink / raw)
  To: qemu-devel, xen-devel; +Cc: anthony.perard, kraxel, sstabellini, Juergen Gross

Calling xengnttab_set_max_grants() more than once on the same file
handle might fail on some kernels, as this operation is allowed only
once per handle.

This is a problem for the qdisk backend, as blk_connect() can be
called multiple times for a domain, e.g. when grub-xen is used to
boot it.

So instead of letting the generic backend code open the gnttab
device, do it in blk_connect() and close it again in blk_disconnect().

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 hw/block/xen_disk.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/hw/block/xen_disk.c b/hw/block/xen_disk.c
index 6632746250..7cff8863cb 100644
--- a/hw/block/xen_disk.c
+++ b/hw/block/xen_disk.c
@@ -1220,6 +1220,12 @@ static int blk_connect(struct XenDevice *xendev)
     /* Add on the number needed for the ring pages */
     max_grants += blkdev->nr_ring_ref;
 
+    blkdev->xendev.gnttabdev = xengnttab_open(NULL, 0);
+    if (blkdev->xendev.gnttabdev == NULL) {
+        xen_pv_printf(xendev, 0, "xengnttab_open failed: %s\n",
+                      strerror(errno));
+        return -1;
+    }
     if (xengnttab_set_max_grants(blkdev->xendev.gnttabdev, max_grants)) {
         xen_pv_printf(xendev, 0, "xengnttab_set_max_grants failed: %s\n",
                       strerror(errno));
@@ -1327,6 +1333,11 @@ static void blk_disconnect(struct XenDevice *xendev)
         }
         blkdev->feature_persistent = false;
     }
+
+    if (blkdev->xendev.gnttabdev) {
+        xengnttab_close(blkdev->xendev.gnttabdev);
+        blkdev->xendev.gnttabdev = NULL;
+    }
 }
 
 static int blk_free(struct XenDevice *xendev)
@@ -1363,7 +1374,6 @@ static void blk_event(struct XenDevice *xendev)
 
 struct XenDevOps xen_blkdev_ops = {
     .size       = sizeof(struct XenBlkDev),
-    .flags      = DEVOPS_FLAG_NEED_GNTDEV,
     .alloc      = blk_alloc,
     .init       = blk_init,
     .initialise    = blk_connect,
-- 
2.12.3

^ permalink raw reply related	[flat|nested] 14+ messages in thread

* Re: [Qemu-devel] [PATCH 1/2] xen: add a global indicator for grant copy being available
  2017-09-19 11:50   ` Juergen Gross
@ 2017-09-20 14:53     ` Anthony PERARD
  -1 siblings, 0 replies; 14+ messages in thread
From: Anthony PERARD @ 2017-09-20 14:53 UTC (permalink / raw)
  To: Juergen Gross; +Cc: qemu-devel, xen-devel, kraxel, sstabellini

On Tue, Sep 19, 2017 at 01:50:54PM +0200, Juergen Gross wrote:
> The Xen qdisk backend needs to test whether grant copy operations is
> available in the kernel. Unfortunately this collides with using
> xengnttab_set_max_grants() on some kernels as this operation has to
> be the first one after opening the gnttab device.
> 
> In order to solve this problem test for the availability of grant copy
> in xen_be_init() opening the gnttab device just for that purpose and
> closing it again afterwards. Advertise the availability via a global
> flag and use that flag in the qdisk backend.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
>  hw/block/xen_disk.c          | 18 ++++++------------
>  hw/xen/xen_backend.c         | 11 +++++++++++
>  include/hw/xen/xen_backend.h |  1 +
>  3 files changed, 18 insertions(+), 12 deletions(-)
> 
> diff --git a/hw/block/xen_disk.c b/hw/block/xen_disk.c
> index d42ed7070d..6632746250 100644
> --- a/hw/block/xen_disk.c
> +++ b/hw/block/xen_disk.c
> @@ -121,9 +121,6 @@ struct XenBlkDev {
>      unsigned int        persistent_gnt_count;
>      unsigned int        max_grants;
>  
> -    /* Grant copy */
> -    gboolean            feature_grant_copy;
> -
>      /* qemu block driver */
>      DriveInfo           *dinfo;
>      BlockBackend        *blk;
> @@ -616,7 +613,7 @@ static void qemu_aio_complete(void *opaque, int ret)
>          return;
>      }
>  
> -    if (ioreq->blkdev->feature_grant_copy) {
> +    if (xen_feature_grant_copy) {
>          switch (ioreq->req.operation) {
>          case BLKIF_OP_READ:
>              /* in case of failure ioreq->aio_errors is increased */
> @@ -638,7 +635,7 @@ static void qemu_aio_complete(void *opaque, int ret)
>      }
>  
>      ioreq->status = ioreq->aio_errors ? BLKIF_RSP_ERROR : BLKIF_RSP_OKAY;
> -    if (!ioreq->blkdev->feature_grant_copy) {
> +    if (!xen_feature_grant_copy) {
>          ioreq_unmap(ioreq);
>      }
>      ioreq_finish(ioreq);
> @@ -698,7 +695,7 @@ static int ioreq_runio_qemu_aio(struct ioreq *ioreq)
>  {
>      struct XenBlkDev *blkdev = ioreq->blkdev;
>  
> -    if (ioreq->blkdev->feature_grant_copy) {
> +    if (xen_feature_grant_copy) {
>          ioreq_init_copy_buffers(ioreq);
>          if (ioreq->req.nr_segments && (ioreq->req.operation == BLKIF_OP_WRITE ||
>              ioreq->req.operation == BLKIF_OP_FLUSH_DISKCACHE) &&
> @@ -750,7 +747,7 @@ static int ioreq_runio_qemu_aio(struct ioreq *ioreq)
>      }
>      default:
>          /* unknown operation (shouldn't happen -- parse catches this) */
> -        if (!ioreq->blkdev->feature_grant_copy) {
> +        if (!xen_feature_grant_copy) {
>              ioreq_unmap(ioreq);
>          }
>          goto err;
> @@ -1010,18 +1007,15 @@ static int blk_init(struct XenDevice *xendev)
>  
>      blkdev->file_blk  = BLOCK_SIZE;
>  
> -    blkdev->feature_grant_copy =
> -                (xengnttab_grant_copy(blkdev->xendev.gnttabdev, 0, NULL) == 0);
> -
>      xen_pv_printf(&blkdev->xendev, 3, "grant copy operation %s\n",
> -                  blkdev->feature_grant_copy ? "enabled" : "disabled");
> +                  xen_feature_grant_copy ? "enabled" : "disabled");
>  
>      /* fill info
>       * blk_connect supplies sector-size and sectors
>       */
>      xenstore_write_be_int(&blkdev->xendev, "feature-flush-cache", 1);
>      xenstore_write_be_int(&blkdev->xendev, "feature-persistent",
> -                          !blkdev->feature_grant_copy);
> +                          !xen_feature_grant_copy);
>      xenstore_write_be_int(&blkdev->xendev, "info", info);
>  
>      xenstore_write_be_int(&blkdev->xendev, "max-ring-page-order",
> diff --git a/hw/xen/xen_backend.c b/hw/xen/xen_backend.c
> index c46cbb0759..00210627a9 100644
> --- a/hw/xen/xen_backend.c
> +++ b/hw/xen/xen_backend.c
> @@ -44,6 +44,7 @@ BusState *xen_sysbus;
>  /* public */
>  struct xs_handle *xenstore = NULL;
>  const char *xen_protocol;
> +gboolean xen_feature_grant_copy;

I think it would be better if this were changed to a bool instead of
a gboolean.

Beside that,
Acked-by: Anthony PERARD <anthony.perard@citrix.com>

>  
>  /* private */
>  static int debug;
> @@ -519,6 +520,8 @@ void xenstore_update_fe(char *watch, struct XenDevice *xendev)
>  
>  int xen_be_init(void)
>  {
> +    xengnttab_handle *gnttabdev;
> +
>      xenstore = xs_daemon_open();
>      if (!xenstore) {
>          xen_pv_printf(NULL, 0, "can't connect to xenstored\n");
> @@ -532,6 +535,14 @@ int xen_be_init(void)
>          goto err;
>      }
>  
> +    gnttabdev = xengnttab_open(NULL, 0);
> +    if (gnttabdev != NULL) {
> +        if (xengnttab_grant_copy(gnttabdev, 0, NULL) == 0) {
> +            xen_feature_grant_copy = true;
> +        }
> +        xengnttab_close(gnttabdev);
> +    }
> +
>      xen_sysdev = qdev_create(NULL, TYPE_XENSYSDEV);
>      qdev_init_nofail(xen_sysdev);
>      xen_sysbus = qbus_create(TYPE_XENSYSBUS, DEVICE(xen_sysdev), "xen-sysbus");
> diff --git a/include/hw/xen/xen_backend.h b/include/hw/xen/xen_backend.h
> index 8a6fbcbe20..08a054f524 100644
> --- a/include/hw/xen/xen_backend.h
> +++ b/include/hw/xen/xen_backend.h
> @@ -16,6 +16,7 @@
>  /* variables */
>  extern struct xs_handle *xenstore;
>  extern const char *xen_protocol;
> +extern gboolean xen_feature_grant_copy;
>  extern DeviceState *xen_sysdev;
>  extern BusState *xen_sysbus;
>  
> -- 
> 2.12.3
> 

-- 
Anthony PERARD

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [Qemu-devel] [PATCH 2/2] xen: dont try setting max grants multiple times
  2017-09-19 11:50   ` Juergen Gross
@ 2017-09-20 15:00     ` Anthony PERARD
  -1 siblings, 0 replies; 14+ messages in thread
From: Anthony PERARD @ 2017-09-20 15:00 UTC (permalink / raw)
  To: Juergen Gross; +Cc: qemu-devel, xen-devel, kraxel, sstabellini

On Tue, Sep 19, 2017 at 01:50:55PM +0200, Juergen Gross wrote:
> Trying to call xengnttab_set_max_grants() with the same file handle
> might fail on some kernels, as this operation is allowed only once.
> 
> This is a problem for the qdisk backend as blk_connect() can be
> called multiple times for a domain, e.g. in case grub-xen is being
> used to boot it.
> 
> So instead of letting the generic backend code open the gnttab device
> do it in blk_connect() and close it again in blk_disconnect.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
>  hw/block/xen_disk.c | 12 +++++++++++-
>  1 file changed, 11 insertions(+), 1 deletion(-)
> 
> diff --git a/hw/block/xen_disk.c b/hw/block/xen_disk.c
> index 6632746250..7cff8863cb 100644
> --- a/hw/block/xen_disk.c
> +++ b/hw/block/xen_disk.c
> @@ -1220,6 +1220,12 @@ static int blk_connect(struct XenDevice *xendev)
>      /* Add on the number needed for the ring pages */
>      max_grants += blkdev->nr_ring_ref;
>  
> +    blkdev->xendev.gnttabdev = xengnttab_open(NULL, 0);
> +    if (blkdev->xendev.gnttabdev == NULL) {
> +        xen_pv_printf(xendev, 0, "xengnttab_open failed: %s\n",
> +                      strerror(errno));
> +        return -1;
> +    }
>      if (xengnttab_set_max_grants(blkdev->xendev.gnttabdev, max_grants)) {
>          xen_pv_printf(xendev, 0, "xengnttab_set_max_grants failed: %s\n",
>                        strerror(errno));
> @@ -1327,6 +1333,11 @@ static void blk_disconnect(struct XenDevice *xendev)
>          }
>          blkdev->feature_persistent = false;
>      }
> +
> +    if (blkdev->xendev.gnttabdev) {
> +        xengnttab_close(blkdev->xendev.gnttabdev);
> +        blkdev->xendev.gnttabdev = NULL;
> +    }

I think blk_disconnect needs to be called from blk_free in the case where
the gnttabdev is not closed (like it is done when blk or the sring are
not cleared, in blk_free).

>  }

>  static int blk_free(struct XenDevice *xendev)
> @@ -1363,7 +1374,6 @@ static void blk_event(struct XenDevice *xendev)
>  
>  struct XenDevOps xen_blkdev_ops = {
>      .size       = sizeof(struct XenBlkDev),
> -    .flags      = DEVOPS_FLAG_NEED_GNTDEV,
>      .alloc      = blk_alloc,
>      .init       = blk_init,
>      .initialise    = blk_connect,
> -- 
> 2.12.3
> 

-- 
Anthony PERARD

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [Qemu-devel] [PATCH 1/2] xen: add a global indicator for grant copy being available
  2017-09-20 14:53     ` Anthony PERARD
@ 2017-09-22 11:46       ` Juergen Gross
  0 siblings, 0 replies; 14+ messages in thread
From: Juergen Gross @ 2017-09-22 11:46 UTC (permalink / raw)
  To: Anthony PERARD; +Cc: qemu-devel, xen-devel, kraxel, sstabellini

On 20/09/17 16:53, Anthony PERARD wrote:
> On Tue, Sep 19, 2017 at 01:50:54PM +0200, Juergen Gross wrote:
>> The Xen qdisk backend needs to test whether the grant copy operation is
>> available in the kernel. Unfortunately this collides with using
>> xengnttab_set_max_grants() on some kernels as this operation has to
>> be the first one after opening the gnttab device.
>>
>> In order to solve this problem, test for the availability of grant copy
>> in xen_be_init(), opening the gnttab device just for that purpose and
>> closing it again afterwards. Advertise the availability via a global
>> flag and use that flag in the qdisk backend.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>>  hw/block/xen_disk.c          | 18 ++++++------------
>>  hw/xen/xen_backend.c         | 11 +++++++++++
>>  include/hw/xen/xen_backend.h |  1 +
>>  3 files changed, 18 insertions(+), 12 deletions(-)
>>
>> diff --git a/hw/block/xen_disk.c b/hw/block/xen_disk.c
>> index d42ed7070d..6632746250 100644
>> --- a/hw/block/xen_disk.c
>> +++ b/hw/block/xen_disk.c
>> @@ -121,9 +121,6 @@ struct XenBlkDev {
>>      unsigned int        persistent_gnt_count;
>>      unsigned int        max_grants;
>>  
>> -    /* Grant copy */
>> -    gboolean            feature_grant_copy;
>> -
>>      /* qemu block driver */
>>      DriveInfo           *dinfo;
>>      BlockBackend        *blk;
>> @@ -616,7 +613,7 @@ static void qemu_aio_complete(void *opaque, int ret)
>>          return;
>>      }
>>  
>> -    if (ioreq->blkdev->feature_grant_copy) {
>> +    if (xen_feature_grant_copy) {
>>          switch (ioreq->req.operation) {
>>          case BLKIF_OP_READ:
>>              /* in case of failure ioreq->aio_errors is increased */
>> @@ -638,7 +635,7 @@ static void qemu_aio_complete(void *opaque, int ret)
>>      }
>>  
>>      ioreq->status = ioreq->aio_errors ? BLKIF_RSP_ERROR : BLKIF_RSP_OKAY;
>> -    if (!ioreq->blkdev->feature_grant_copy) {
>> +    if (!xen_feature_grant_copy) {
>>          ioreq_unmap(ioreq);
>>      }
>>      ioreq_finish(ioreq);
>> @@ -698,7 +695,7 @@ static int ioreq_runio_qemu_aio(struct ioreq *ioreq)
>>  {
>>      struct XenBlkDev *blkdev = ioreq->blkdev;
>>  
>> -    if (ioreq->blkdev->feature_grant_copy) {
>> +    if (xen_feature_grant_copy) {
>>          ioreq_init_copy_buffers(ioreq);
>>          if (ioreq->req.nr_segments && (ioreq->req.operation == BLKIF_OP_WRITE ||
>>              ioreq->req.operation == BLKIF_OP_FLUSH_DISKCACHE) &&
>> @@ -750,7 +747,7 @@ static int ioreq_runio_qemu_aio(struct ioreq *ioreq)
>>      }
>>      default:
>>          /* unknown operation (shouldn't happen -- parse catches this) */
>> -        if (!ioreq->blkdev->feature_grant_copy) {
>> +        if (!xen_feature_grant_copy) {
>>              ioreq_unmap(ioreq);
>>          }
>>          goto err;
>> @@ -1010,18 +1007,15 @@ static int blk_init(struct XenDevice *xendev)
>>  
>>      blkdev->file_blk  = BLOCK_SIZE;
>>  
>> -    blkdev->feature_grant_copy =
>> -                (xengnttab_grant_copy(blkdev->xendev.gnttabdev, 0, NULL) == 0);
>> -
>>      xen_pv_printf(&blkdev->xendev, 3, "grant copy operation %s\n",
>> -                  blkdev->feature_grant_copy ? "enabled" : "disabled");
>> +                  xen_feature_grant_copy ? "enabled" : "disabled");
>>  
>>      /* fill info
>>       * blk_connect supplies sector-size and sectors
>>       */
>>      xenstore_write_be_int(&blkdev->xendev, "feature-flush-cache", 1);
>>      xenstore_write_be_int(&blkdev->xendev, "feature-persistent",
>> -                          !blkdev->feature_grant_copy);
>> +                          !xen_feature_grant_copy);
>>      xenstore_write_be_int(&blkdev->xendev, "info", info);
>>  
>>      xenstore_write_be_int(&blkdev->xendev, "max-ring-page-order",
>> diff --git a/hw/xen/xen_backend.c b/hw/xen/xen_backend.c
>> index c46cbb0759..00210627a9 100644
>> --- a/hw/xen/xen_backend.c
>> +++ b/hw/xen/xen_backend.c
>> @@ -44,6 +44,7 @@ BusState *xen_sysbus;
>>  /* public */
>>  struct xs_handle *xenstore = NULL;
>>  const char *xen_protocol;
>> +gboolean xen_feature_grant_copy;
> 
> I think it would be better if this was changed to bool instead of a
> gboolean.

Okay.

> 
> Beside that,
> Acked-by: Anthony PERARD <anthony.perard@citrix.com>

Thanks,

Juergen


* Re: [Qemu-devel] [Xen-devel] [PATCH 2/2] xen: dont try setting max grants multiple times
  2017-09-20 15:00     ` Anthony PERARD
@ 2017-09-22 12:03       ` Juergen Gross
  0 siblings, 0 replies; 14+ messages in thread
From: Juergen Gross @ 2017-09-22 12:03 UTC (permalink / raw)
  To: Anthony PERARD; +Cc: xen-devel, sstabellini, qemu-devel, kraxel

On 20/09/17 17:00, Anthony PERARD wrote:
> On Tue, Sep 19, 2017 at 01:50:55PM +0200, Juergen Gross wrote:
>> Trying to call xengnttab_set_max_grants() with the same file handle
>> might fail on some kernels, as this operation is allowed only once.
>>
>> This is a problem for the qdisk backend as blk_connect() can be
>> called multiple times for a domain, e.g. in case grub-xen is being
>> used to boot it.
>>
>> So instead of letting the generic backend code open the gnttab device,
>> do it in blk_connect() and close it again in blk_disconnect().
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>>  hw/block/xen_disk.c | 12 +++++++++++-
>>  1 file changed, 11 insertions(+), 1 deletion(-)
>>
>> diff --git a/hw/block/xen_disk.c b/hw/block/xen_disk.c
>> index 6632746250..7cff8863cb 100644
>> --- a/hw/block/xen_disk.c
>> +++ b/hw/block/xen_disk.c
>> @@ -1220,6 +1220,12 @@ static int blk_connect(struct XenDevice *xendev)
>>      /* Add on the number needed for the ring pages */
>>      max_grants += blkdev->nr_ring_ref;
>>  
>> +    blkdev->xendev.gnttabdev = xengnttab_open(NULL, 0);
>> +    if (blkdev->xendev.gnttabdev == NULL) {
>> +        xen_pv_printf(xendev, 0, "xengnttab_open failed: %s\n",
>> +                      strerror(errno));
>> +        return -1;
>> +    }
>>      if (xengnttab_set_max_grants(blkdev->xendev.gnttabdev, max_grants)) {
>>          xen_pv_printf(xendev, 0, "xengnttab_set_max_grants failed: %s\n",
>>                        strerror(errno));
>> @@ -1327,6 +1333,11 @@ static void blk_disconnect(struct XenDevice *xendev)
>>          }
>>          blkdev->feature_persistent = false;
>>      }
>> +
>> +    if (blkdev->xendev.gnttabdev) {
>> +        xengnttab_close(blkdev->xendev.gnttabdev);
>> +        blkdev->xendev.gnttabdev = NULL;
>> +    }
> 
> I think blk_disconnect needs to be called from blk_free in the case where
> the gnttabdev is not closed (like it is done when blk or the sring are
> not cleared, in blk_free).

Right. Just calling it always from blk_free() should do the job.

BTW: in practice this wouldn't be necessary, as xen_pv_del_xendev()
already does the close, but that is just good luck. Better to close it
via blk_free() in case xen_pv_del_xendev() ever adds a test for
DEVOPS_FLAG_NEED_GNTDEV.


Thanks,

Juergen


end of thread, other threads:[~2017-09-22 12:04 UTC | newest]

Thread overview: 14+ messages
-- links below jump to the message on this page --
2017-09-19 11:50 [Qemu-devel] [PATCH 0/2] xen: fix gnttab handling with old dom0 kernels Juergen Gross
2017-09-19 11:50 ` Juergen Gross
2017-09-19 11:50 ` [Qemu-devel] [PATCH 1/2] xen: add a global indicator for grant copy being available Juergen Gross
2017-09-19 11:50   ` Juergen Gross
2017-09-20 14:53   ` [Qemu-devel] " Anthony PERARD
2017-09-20 14:53     ` Anthony PERARD
2017-09-22 11:46     ` [Qemu-devel] " Juergen Gross
2017-09-22 11:46       ` Juergen Gross
2017-09-19 11:50 ` [Qemu-devel] [PATCH 2/2] xen: dont try setting max grants multiple times Juergen Gross
2017-09-19 11:50   ` Juergen Gross
2017-09-20 15:00   ` [Qemu-devel] " Anthony PERARD
2017-09-20 15:00     ` Anthony PERARD
2017-09-22 12:03     ` [Qemu-devel] [Xen-devel] " Juergen Gross
2017-09-22 12:03       ` Juergen Gross
