* [Qemu-devel] [PATCH v18 0/4] block/gluster: add support for multiple gluster servers
@ 2016-07-13 13:57 Prasanna Kumar Kalever
  2016-07-13 13:57 ` [Qemu-devel] [PATCH v18 1/4] block/gluster: rename [server, volname, image] -> [host, volume, path] Prasanna Kumar Kalever
                   ` (3 more replies)
  0 siblings, 4 replies; 11+ messages in thread
From: Prasanna Kumar Kalever @ 2016-07-13 13:57 UTC (permalink / raw)
  To: qemu-devel
  Cc: kwolf, eblake, pkrempa, jcody, bharata, berrange, rtalur,
	deepakcs, Prasanna Kumar Kalever

This version of the patch series is rebased on the master branch.

Prasanna Kumar Kalever (4):
  block/gluster: rename [server, volname, image] -> [host, volume, path]
  block/gluster: code cleanup
  block/gluster: using new qapi schema
  block/gluster: add support for multiple gluster servers

v1:
multiple host addresses but a common port number and transport type
pattern: URI syntax with query (?) delimiter
syntax:
    file=gluster[+transport-type]://host1:24007/testvol/a.img\
         ?server=host2&server=host3

v2:
multiple host addresses, each with its own port number, but all use a
common transport type
pattern: URI syntax with query (?) delimiter
syntax:
    file=gluster[+transport-type]://[host[:port]]/testvol/a.img\
         [?server=host1[:port]\
          &server=host2[:port]]

v3:
multiple host addresses each have their own port number and transport type
pattern: changed to json
syntax:
    'json:{"driver":"qcow2","file":{"driver":"gluster","volume":"testvol",
           "path":"/path/a.qcow2","server":
         [{"host":"1.2.3.4","port":"24007","transport":"tcp"},
          {"host":"4.5.6.7","port":"24008","transport":"rdma"}] } }'
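For reference, the final JSON form above can be assembled programmatically. The following is a rough sketch for a single server entry; `format_gluster_json` is an illustrative helper invented here, not part of QEMU:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Illustrative helper (not QEMU code): build the v3 "json:" pseudo-protocol
 * filename for one gluster server.  Note that "port" is emitted as a quoted
 * string, matching the syntax shown in the cover letter. */
static int format_gluster_json(char *buf, size_t len,
                               const char *volume, const char *path,
                               const char *host, int port,
                               const char *transport)
{
    return snprintf(buf, len,
                    "json:{\"driver\":\"qcow2\",\"file\":{\"driver\":\"gluster\","
                    "\"volume\":\"%s\",\"path\":\"%s\",\"server\":"
                    "[{\"host\":\"%s\",\"port\":\"%d\",\"transport\":\"%s\"}]}}",
                    volume, path, host, port, transport);
}
```

A multi-server filename would simply carry more objects in the `"server"` array.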

v4, v5:
address comments from "Eric Blake" <eblake@redhat.com>
renamed:
'backup-volfile-servers' -> 'volfile-servers'

v6:
address comments from Peter Krempa <pkrempa@redhat.com>
renamed:
 'volname'    ->  'volume'
 'image-path' ->  'path'
 'server'     ->  'host'

v7:
fix for v6 (initialize num_servers to 1 and other typos)

v8:
split patch set v7 into a series of 3 per review comments from
Peter Krempa <pkrempa@redhat.com>

v9:
reorder the series of patches addressing "Eric Blake" <eblake@redhat.com>
review comments

v10:
fix mem-leak as per Peter Krempa <pkrempa@redhat.com> review comments

v11:
using qapi-types* defined structures as per "Eric Blake" <eblake@redhat.com>
review comments.

v12:
fix crash caused in qapi_free_BlockdevOptionsGluster

v13:
address comments from "Jeff Cody" <jcody@redhat.com>

v14:
address comments from "Eric Blake" <eblake@redhat.com>
split patch 3/3 into two
rename input option and variable from 'servers' to 'server'

v15:
patch 1/4 changed the commit message as per Eric's comment
patch 2/4 are unchanged
patch 3/4 addressed Jeff's comments
patch 4/4 concentrates on unix transport related help info,
renames 'parse_transport_option()' to 'qapi_enum_parse()', and
addresses memory leaks and other comments given by Jeff and Eric

v16:
In patch 4/4 fixed segfault on glfs_init() error case, as per Jeff's comments
other patches in this series remain unchanged

v17:
rebase of v16 on latest master

v18:
rebase of v17 on latest master
the rebase required changing the type of 'qemu_gluster_init()'s first
argument from 'BlockdevOptionsGluster**' to 'BlockdevOptionsGluster*',
along with all its callers, in both the 3/4 and 4/4 patches

 block/gluster.c      | 502 +++++++++++++++++++++++++++++++++++++--------------
 qapi/block-core.json |  67 ++++++-
 2 files changed, 425 insertions(+), 144 deletions(-)

-- 
2.7.4

^ permalink raw reply	[flat|nested] 11+ messages in thread

* [Qemu-devel] [PATCH v18 1/4] block/gluster: rename [server, volname, image] -> [host, volume, path]
  2016-07-13 13:57 [Qemu-devel] [PATCH v18 0/4] block/gluster: add support for multiple gluster servers Prasanna Kumar Kalever
@ 2016-07-13 13:57 ` Prasanna Kumar Kalever
  2016-07-13 13:57 ` [Qemu-devel] [PATCH v18 2/4] block/gluster: code cleanup Prasanna Kumar Kalever
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 11+ messages in thread
From: Prasanna Kumar Kalever @ 2016-07-13 13:57 UTC (permalink / raw)
  To: qemu-devel
  Cc: kwolf, eblake, pkrempa, jcody, bharata, berrange, rtalur,
	deepakcs, Prasanna Kumar Kalever

A future patch will add support for multiple gluster servers. Existing
terminology is a bit unusual in relation to what names are used by
other networked devices, and doesn't map very well to the terminology
we expect to use for multiple servers.  Therefore, rename the following
options:
'server'  -> 'host'
'image'   -> 'path'
'volname' -> 'volume'

Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
---
 block/gluster.c | 54 +++++++++++++++++++++++++++---------------------------
 1 file changed, 27 insertions(+), 27 deletions(-)

diff --git a/block/gluster.c b/block/gluster.c
index 16f7778..f1ac9a2 100644
--- a/block/gluster.c
+++ b/block/gluster.c
@@ -29,10 +29,10 @@ typedef struct BDRVGlusterState {
 } BDRVGlusterState;
 
 typedef struct GlusterConf {
-    char *server;
+    char *host;
     int port;
-    char *volname;
-    char *image;
+    char *volume;
+    char *path;
     char *transport;
     int debug_level;
 } GlusterConf;
@@ -40,9 +40,9 @@ typedef struct GlusterConf {
 static void qemu_gluster_gconf_free(GlusterConf *gconf)
 {
     if (gconf) {
-        g_free(gconf->server);
-        g_free(gconf->volname);
-        g_free(gconf->image);
+        g_free(gconf->host);
+        g_free(gconf->volume);
+        g_free(gconf->path);
         g_free(gconf->transport);
         g_free(gconf);
     }
@@ -62,19 +62,19 @@ static int parse_volume_options(GlusterConf *gconf, char *path)
     if (*p == '\0') {
         return -EINVAL;
     }
-    gconf->volname = g_strndup(q, p - q);
+    gconf->volume = g_strndup(q, p - q);
 
-    /* image */
+    /* path */
     p += strspn(p, "/");
     if (*p == '\0') {
         return -EINVAL;
     }
-    gconf->image = g_strdup(p);
+    gconf->path = g_strdup(p);
     return 0;
 }
 
 /*
- * file=gluster[+transport]://[server[:port]]/volname/image[?socket=...]
+ * file=gluster[+transport]://[host[:port]]/volume/path[?socket=...]
  *
  * 'gluster' is the protocol.
  *
@@ -83,10 +83,10 @@ static int parse_volume_options(GlusterConf *gconf, char *path)
  * tcp, unix and rdma. If a transport type isn't specified, then tcp
  * type is assumed.
  *
- * 'server' specifies the server where the volume file specification for
+ * 'host' specifies the host where the volume file specification for
  * the given volume resides. This can be either hostname, ipv4 address
  * or ipv6 address. ipv6 address needs to be within square brackets [ ].
- * If transport type is 'unix', then 'server' field should not be specified.
+ * If transport type is 'unix', then 'host' field should not be specified.
  * The 'socket' field needs to be populated with the path to unix domain
  * socket.
  *
@@ -95,9 +95,9 @@ static int parse_volume_options(GlusterConf *gconf, char *path)
  * default port. If the transport type is unix, then 'port' should not be
  * specified.
  *
- * 'volname' is the name of the gluster volume which contains the VM image.
+ * 'volume' is the name of the gluster volume which contains the VM image.
  *
- * 'image' is the path to the actual VM image that resides on gluster volume.
+ * 'path' is the path to the actual VM image that resides on gluster volume.
  *
  * Examples:
  *
@@ -106,7 +106,7 @@ static int parse_volume_options(GlusterConf *gconf, char *path)
  * file=gluster+tcp://1.2.3.4:24007/testvol/dir/a.img
  * file=gluster+tcp://[1:2:3:4:5:6:7:8]/testvol/dir/a.img
  * file=gluster+tcp://[1:2:3:4:5:6:7:8]:24007/testvol/dir/a.img
- * file=gluster+tcp://server.domain.com:24007/testvol/dir/a.img
+ * file=gluster+tcp://host.domain.com:24007/testvol/dir/a.img
  * file=gluster+unix:///testvol/dir/a.img?socket=/tmp/glusterd.socket
  * file=gluster+rdma://1.2.3.4:24007/testvol/a.img
  */
@@ -157,9 +157,9 @@ static int qemu_gluster_parseuri(GlusterConf *gconf, const char *filename)
             ret = -EINVAL;
             goto out;
         }
-        gconf->server = g_strdup(qp->p[0].value);
+        gconf->host = g_strdup(qp->p[0].value);
     } else {
-        gconf->server = g_strdup(uri->server ? uri->server : "localhost");
+        gconf->host = g_strdup(uri->server ? uri->server : "localhost");
         gconf->port = uri->port;
     }
 
@@ -180,18 +180,18 @@ static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
 
     ret = qemu_gluster_parseuri(gconf, filename);
     if (ret < 0) {
-        error_setg(errp, "Usage: file=gluster[+transport]://[server[:port]]/"
-                   "volname/image[?socket=...]");
+        error_setg(errp, "Usage: file=gluster[+transport]://[host[:port]]/"
+                   "volume/path[?socket=...]");
         errno = -ret;
         goto out;
     }
 
-    glfs = glfs_new(gconf->volname);
+    glfs = glfs_new(gconf->volume);
     if (!glfs) {
         goto out;
     }
 
-    ret = glfs_set_volfile_server(glfs, gconf->transport, gconf->server,
+    ret = glfs_set_volfile_server(glfs, gconf->transport, gconf->host,
             gconf->port);
     if (ret < 0) {
         goto out;
@@ -205,9 +205,9 @@ static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
     ret = glfs_init(glfs);
     if (ret) {
         error_setg_errno(errp, errno,
-                         "Gluster connection failed for server=%s port=%d "
-                         "volume=%s image=%s transport=%s", gconf->server,
-                         gconf->port, gconf->volname, gconf->image,
+                         "Gluster connection failed for host=%s port=%d "
+                         "volume=%s path=%s transport=%s", gconf->host,
+                         gconf->port, gconf->volume, gconf->path,
                          gconf->transport);
 
         /* glfs_init sometimes doesn't set errno although docs suggest that */
@@ -373,7 +373,7 @@ static int qemu_gluster_open(BlockDriverState *bs,  QDict *options,
 
     qemu_gluster_parse_flags(bdrv_flags, &open_flags);
 
-    s->fd = glfs_open(s->glfs, gconf->image, open_flags);
+    s->fd = glfs_open(s->glfs, gconf->path, open_flags);
     if (!s->fd) {
         ret = -errno;
     }
@@ -439,7 +439,7 @@ static int qemu_gluster_reopen_prepare(BDRVReopenState *state,
     }
 #endif
 
-    reop_s->fd = glfs_open(reop_s->glfs, gconf->image, open_flags);
+    reop_s->fd = glfs_open(reop_s->glfs, gconf->path, open_flags);
     if (reop_s->fd == NULL) {
         /* reops->glfs will be cleaned up in _abort */
         ret = -errno;
@@ -587,7 +587,7 @@ static int qemu_gluster_create(const char *filename,
         goto out;
     }
 
-    fd = glfs_creat(glfs, gconf->image,
+    fd = glfs_creat(glfs, gconf->path,
         O_WRONLY | O_CREAT | O_TRUNC | O_BINARY, S_IRUSR | S_IWUSR);
     if (!fd) {
         ret = -errno;
-- 
2.7.4

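The hunks above rename fields that `parse_volume_options()` fills in from the URI path component. The volume/path split itself can be sketched in isolation; `split_volume_path` is an illustrative stand-in using plain libc rather than QEMU's `g_strndup`-based code:

```c
#define _POSIX_C_SOURCE 200809L
#include <assert.h>
#include <errno.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative sketch (not QEMU code): given the path component of a
 * gluster URI ("/volume/path/to/img"), split it into the volume name and
 * the image path, mirroring what parse_volume_options() does. */
static int split_volume_path(const char *uri_path,
                             char **volume, char **path)
{
    const char *p = uri_path + strspn(uri_path, "/"); /* skip leading '/' */
    const char *q = p;

    p += strcspn(p, "/");              /* volume ends at the next '/' */
    if (*p == '\0') {
        return -EINVAL;                /* no image path component */
    }
    *volume = strndup(q, p - q);

    p += strspn(p, "/");               /* skip the separator */
    if (*p == '\0') {
        free(*volume);
        return -EINVAL;
    }
    *path = strdup(p);                 /* path is stored without leading '/' */
    return 0;
}
```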

* [Qemu-devel] [PATCH v18 2/4] block/gluster: code cleanup
  2016-07-13 13:57 [Qemu-devel] [PATCH v18 0/4] block/gluster: add support for multiple gluster servers Prasanna Kumar Kalever
  2016-07-13 13:57 ` [Qemu-devel] [PATCH v18 1/4] block/gluster: rename [server, volname, image] -> [host, volume, path] Prasanna Kumar Kalever
@ 2016-07-13 13:57 ` Prasanna Kumar Kalever
  2016-07-13 13:57 ` [Qemu-devel] [PATCH v18 3/4] block/gluster: using new qapi schema Prasanna Kumar Kalever
  2016-07-13 13:57 ` [Qemu-devel] [PATCH v18 4/4] block/gluster: add support for multiple gluster servers Prasanna Kumar Kalever
  3 siblings, 0 replies; 11+ messages in thread
From: Prasanna Kumar Kalever @ 2016-07-13 13:57 UTC (permalink / raw)
  To: qemu-devel
  Cc: kwolf, eblake, pkrempa, jcody, bharata, berrange, rtalur,
	deepakcs, Prasanna Kumar Kalever

unified the coding style of multiline function arguments and error
reporting calls; moved scattered declarations of structures and options
list variables together

Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
---
 block/gluster.c | 143 +++++++++++++++++++++++++++++---------------------------
 1 file changed, 75 insertions(+), 68 deletions(-)

diff --git a/block/gluster.c b/block/gluster.c
index f1ac9a2..40ee852 100644
--- a/block/gluster.c
+++ b/block/gluster.c
@@ -13,6 +13,12 @@
 #include "qapi/error.h"
 #include "qemu/uri.h"
 
+#define GLUSTER_OPT_FILENAME        "filename"
+#define GLUSTER_OPT_DEBUG           "debug"
+#define GLUSTER_DEBUG_DEFAULT       4
+#define GLUSTER_DEBUG_MAX           9
+
+
 typedef struct GlusterAIOCB {
     int64_t size;
     int ret;
@@ -28,6 +34,11 @@ typedef struct BDRVGlusterState {
     int debug_level;
 } BDRVGlusterState;
 
+typedef struct BDRVGlusterReopenState {
+    struct glfs *glfs;
+    struct glfs_fd *fd;
+} BDRVGlusterReopenState;
+
 typedef struct GlusterConf {
     char *host;
     int port;
@@ -37,6 +48,49 @@ typedef struct GlusterConf {
     int debug_level;
 } GlusterConf;
 
+
+static QemuOptsList qemu_gluster_create_opts = {
+    .name = "qemu-gluster-create-opts",
+    .head = QTAILQ_HEAD_INITIALIZER(qemu_gluster_create_opts.head),
+    .desc = {
+        {
+            .name = BLOCK_OPT_SIZE,
+            .type = QEMU_OPT_SIZE,
+            .help = "Virtual disk size"
+        },
+        {
+            .name = BLOCK_OPT_PREALLOC,
+            .type = QEMU_OPT_STRING,
+            .help = "Preallocation mode (allowed values: off, full)"
+        },
+        {
+            .name = GLUSTER_OPT_DEBUG,
+            .type = QEMU_OPT_NUMBER,
+            .help = "Gluster log level, valid range is 0-9",
+        },
+        { /* end of list */ }
+    }
+};
+
+static QemuOptsList runtime_opts = {
+    .name = "gluster",
+    .head = QTAILQ_HEAD_INITIALIZER(runtime_opts.head),
+    .desc = {
+        {
+            .name = GLUSTER_OPT_FILENAME,
+            .type = QEMU_OPT_STRING,
+            .help = "URL to the gluster image",
+        },
+        {
+            .name = GLUSTER_OPT_DEBUG,
+            .type = QEMU_OPT_NUMBER,
+            .help = "Gluster log level, valid range is 0-9",
+        },
+        { /* end of list */ }
+    },
+};
+
+
 static void qemu_gluster_gconf_free(GlusterConf *gconf)
 {
     if (gconf) {
@@ -181,7 +235,7 @@ static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
     ret = qemu_gluster_parseuri(gconf, filename);
     if (ret < 0) {
         error_setg(errp, "Usage: file=gluster[+transport]://[host[:port]]/"
-                   "volume/path[?socket=...]");
+                         "volume/path[?socket=...]");
         errno = -ret;
         goto out;
     }
@@ -255,30 +309,6 @@ static void gluster_finish_aiocb(struct glfs_fd *fd, ssize_t ret, void *arg)
     qemu_bh_schedule(acb->bh);
 }
 
-#define GLUSTER_OPT_FILENAME "filename"
-#define GLUSTER_OPT_DEBUG "debug"
-#define GLUSTER_DEBUG_DEFAULT 4
-#define GLUSTER_DEBUG_MAX 9
-
-/* TODO Convert to fine grained options */
-static QemuOptsList runtime_opts = {
-    .name = "gluster",
-    .head = QTAILQ_HEAD_INITIALIZER(runtime_opts.head),
-    .desc = {
-        {
-            .name = GLUSTER_OPT_FILENAME,
-            .type = QEMU_OPT_STRING,
-            .help = "URL to the gluster image",
-        },
-        {
-            .name = GLUSTER_OPT_DEBUG,
-            .type = QEMU_OPT_NUMBER,
-            .help = "Gluster log level, valid range is 0-9",
-        },
-        { /* end of list */ }
-    },
-};
-
 static void qemu_gluster_parse_flags(int bdrv_flags, int *open_flags)
 {
     assert(open_flags != NULL);
@@ -395,12 +425,6 @@ out:
     return ret;
 }
 
-typedef struct BDRVGlusterReopenState {
-    struct glfs *glfs;
-    struct glfs_fd *fd;
-} BDRVGlusterReopenState;
-
-
 static int qemu_gluster_reopen_prepare(BDRVReopenState *state,
                                        BlockReopenQueue *queue, Error **errp)
 {
@@ -501,7 +525,9 @@ static void qemu_gluster_reopen_abort(BDRVReopenState *state)
 
 #ifdef CONFIG_GLUSTERFS_ZEROFILL
 static coroutine_fn int qemu_gluster_co_pwrite_zeroes(BlockDriverState *bs,
-        int64_t offset, int size, BdrvRequestFlags flags)
+                                                      int64_t offset,
+                                                      int size,
+                                                      BdrvRequestFlags flags)
 {
     int ret;
     GlusterAIOCB acb;
@@ -527,7 +553,7 @@ static inline bool gluster_supports_zerofill(void)
 }
 
 static inline int qemu_gluster_zerofill(struct glfs_fd *fd, int64_t offset,
-        int64_t size)
+                                        int64_t size)
 {
     return glfs_zerofill(fd, offset, size);
 }
@@ -539,7 +565,7 @@ static inline bool gluster_supports_zerofill(void)
 }
 
 static inline int qemu_gluster_zerofill(struct glfs_fd *fd, int64_t offset,
-        int64_t size)
+                                        int64_t size)
 {
     return 0;
 }
@@ -576,19 +602,17 @@ static int qemu_gluster_create(const char *filename,
     tmp = qemu_opt_get_del(opts, BLOCK_OPT_PREALLOC);
     if (!tmp || !strcmp(tmp, "off")) {
         prealloc = 0;
-    } else if (!strcmp(tmp, "full") &&
-               gluster_supports_zerofill()) {
+    } else if (!strcmp(tmp, "full") && gluster_supports_zerofill()) {
         prealloc = 1;
     } else {
         error_setg(errp, "Invalid preallocation mode: '%s'"
-            " or GlusterFS doesn't support zerofill API",
-            tmp);
+                         " or GlusterFS doesn't support zerofill API", tmp);
         ret = -EINVAL;
         goto out;
     }
 
     fd = glfs_creat(glfs, gconf->path,
-        O_WRONLY | O_CREAT | O_TRUNC | O_BINARY, S_IRUSR | S_IWUSR);
+                    O_WRONLY | O_CREAT | O_TRUNC | O_BINARY, S_IRUSR | S_IWUSR);
     if (!fd) {
         ret = -errno;
     } else {
@@ -614,7 +638,8 @@ out:
 }
 
 static coroutine_fn int qemu_gluster_co_rw(BlockDriverState *bs,
-        int64_t sector_num, int nb_sectors, QEMUIOVector *qiov, int write)
+                                           int64_t sector_num, int nb_sectors,
+                                           QEMUIOVector *qiov, int write)
 {
     int ret;
     GlusterAIOCB acb;
@@ -629,10 +654,10 @@ static coroutine_fn int qemu_gluster_co_rw(BlockDriverState *bs,
 
     if (write) {
         ret = glfs_pwritev_async(s->fd, qiov->iov, qiov->niov, offset, 0,
-            gluster_finish_aiocb, &acb);
+                                 gluster_finish_aiocb, &acb);
     } else {
         ret = glfs_preadv_async(s->fd, qiov->iov, qiov->niov, offset, 0,
-            gluster_finish_aiocb, &acb);
+                                gluster_finish_aiocb, &acb);
     }
 
     if (ret < 0) {
@@ -657,13 +682,17 @@ static int qemu_gluster_truncate(BlockDriverState *bs, int64_t offset)
 }
 
 static coroutine_fn int qemu_gluster_co_readv(BlockDriverState *bs,
-        int64_t sector_num, int nb_sectors, QEMUIOVector *qiov)
+                                              int64_t sector_num,
+                                              int nb_sectors,
+                                              QEMUIOVector *qiov)
 {
     return qemu_gluster_co_rw(bs, sector_num, nb_sectors, qiov, 0);
 }
 
 static coroutine_fn int qemu_gluster_co_writev(BlockDriverState *bs,
-        int64_t sector_num, int nb_sectors, QEMUIOVector *qiov)
+                                               int64_t sector_num,
+                                               int nb_sectors,
+                                               QEMUIOVector *qiov)
 {
     return qemu_gluster_co_rw(bs, sector_num, nb_sectors, qiov, 1);
 }
@@ -725,7 +754,8 @@ error:
 
 #ifdef CONFIG_GLUSTERFS_DISCARD
 static coroutine_fn int qemu_gluster_co_discard(BlockDriverState *bs,
-        int64_t sector_num, int nb_sectors)
+                                                int64_t sector_num,
+                                                int nb_sectors)
 {
     int ret;
     GlusterAIOCB acb;
@@ -934,29 +964,6 @@ static int64_t coroutine_fn qemu_gluster_co_get_block_status(
 }
 
 
-static QemuOptsList qemu_gluster_create_opts = {
-    .name = "qemu-gluster-create-opts",
-    .head = QTAILQ_HEAD_INITIALIZER(qemu_gluster_create_opts.head),
-    .desc = {
-        {
-            .name = BLOCK_OPT_SIZE,
-            .type = QEMU_OPT_SIZE,
-            .help = "Virtual disk size"
-        },
-        {
-            .name = BLOCK_OPT_PREALLOC,
-            .type = QEMU_OPT_STRING,
-            .help = "Preallocation mode (allowed values: off, full)"
-        },
-        {
-            .name = GLUSTER_OPT_DEBUG,
-            .type = QEMU_OPT_NUMBER,
-            .help = "Gluster log level, valid range is 0-9",
-        },
-        { /* end of list */ }
-    }
-};
-
 static BlockDriver bdrv_gluster = {
     .format_name                  = "gluster",
     .protocol_name                = "gluster",
-- 
2.7.4

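Much of this cleanup just hoists the two `QemuOptsList` tables to the top of the file. The null-terminated descriptor-table pattern they rely on can be sketched generically; `OptDesc` and `find_opt` are illustrative names here, not QEMU API:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Illustrative sketch of the option-table pattern: an array of descriptors
 * built with designated initializers and terminated by a zeroed entry,
 * like QemuOptsList's .desc array. */
typedef struct OptDesc {
    const char *name;
    const char *help;
} OptDesc;

static const OptDesc gluster_opts[] = {
    { .name = "filename", .help = "URL to the gluster image" },
    { .name = "debug",    .help = "Gluster log level, valid range is 0-9" },
    { /* end of list */ }
};

/* Walk the table until the zeroed terminator entry. */
static const OptDesc *find_opt(const OptDesc *list, const char *name)
{
    for (; list->name; list++) {
        if (!strcmp(list->name, name)) {
            return list;
        }
    }
    return NULL;
}
```

Keeping such tables in one place at the top of the file, as this patch does, avoids the "random declarations" the commit message complains about.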

* [Qemu-devel] [PATCH v18 3/4] block/gluster: using new qapi schema
  2016-07-13 13:57 [Qemu-devel] [PATCH v18 0/4] block/gluster: add support for multiple gluster servers Prasanna Kumar Kalever
  2016-07-13 13:57 ` [Qemu-devel] [PATCH v18 1/4] block/gluster: rename [server, volname, image] -> [host, volume, path] Prasanna Kumar Kalever
  2016-07-13 13:57 ` [Qemu-devel] [PATCH v18 2/4] block/gluster: code cleanup Prasanna Kumar Kalever
@ 2016-07-13 13:57 ` Prasanna Kumar Kalever
  2016-07-14  7:52   ` Markus Armbruster
  2016-07-13 13:57 ` [Qemu-devel] [PATCH v18 4/4] block/gluster: add support for multiple gluster servers Prasanna Kumar Kalever
  3 siblings, 1 reply; 11+ messages in thread
From: Prasanna Kumar Kalever @ 2016-07-13 13:57 UTC (permalink / raw)
  To: qemu-devel
  Cc: kwolf, eblake, pkrempa, jcody, bharata, berrange, rtalur,
	deepakcs, Prasanna Kumar Kalever

This patch adds the 'GlusterServer'-related schema to qapi/block-core.json.

Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
---
 block/gluster.c      | 89 ++++++++++++++++++++++++++--------------------------
 qapi/block-core.json | 67 ++++++++++++++++++++++++++++++++++++---
 2 files changed, 107 insertions(+), 49 deletions(-)

diff --git a/block/gluster.c b/block/gluster.c
index 40ee852..40dcbc1 100644
--- a/block/gluster.c
+++ b/block/gluster.c
@@ -15,6 +15,7 @@
 
 #define GLUSTER_OPT_FILENAME        "filename"
 #define GLUSTER_OPT_DEBUG           "debug"
+#define GLUSTER_DEFAULT_PORT        24007
 #define GLUSTER_DEBUG_DEFAULT       4
 #define GLUSTER_DEBUG_MAX           9
 
@@ -39,15 +40,6 @@ typedef struct BDRVGlusterReopenState {
     struct glfs_fd *fd;
 } BDRVGlusterReopenState;
 
-typedef struct GlusterConf {
-    char *host;
-    int port;
-    char *volume;
-    char *path;
-    char *transport;
-    int debug_level;
-} GlusterConf;
-
 
 static QemuOptsList qemu_gluster_create_opts = {
     .name = "qemu-gluster-create-opts",
@@ -91,18 +83,7 @@ static QemuOptsList runtime_opts = {
 };
 
 
-static void qemu_gluster_gconf_free(GlusterConf *gconf)
-{
-    if (gconf) {
-        g_free(gconf->host);
-        g_free(gconf->volume);
-        g_free(gconf->path);
-        g_free(gconf->transport);
-        g_free(gconf);
-    }
-}
-
-static int parse_volume_options(GlusterConf *gconf, char *path)
+static int parse_volume_options(BlockdevOptionsGluster *gconf, char *path)
 {
     char *p, *q;
 
@@ -164,7 +145,8 @@ static int parse_volume_options(GlusterConf *gconf, char *path)
  * file=gluster+unix:///testvol/dir/a.img?socket=/tmp/glusterd.socket
  * file=gluster+rdma://1.2.3.4:24007/testvol/a.img
  */
-static int qemu_gluster_parseuri(GlusterConf *gconf, const char *filename)
+static int qemu_gluster_parseuri(BlockdevOptionsGluster *gconf,
+                                 const char *filename)
 {
     URI *uri;
     QueryParams *qp = NULL;
@@ -176,20 +158,23 @@ static int qemu_gluster_parseuri(GlusterConf *gconf, const char *filename)
         return -EINVAL;
     }
 
+    gconf->server = g_new0(GlusterServer, 1);
+
     /* transport */
     if (!uri->scheme || !strcmp(uri->scheme, "gluster")) {
-        gconf->transport = g_strdup("tcp");
+        gconf->server->transport = GLUSTER_TRANSPORT_TCP;
     } else if (!strcmp(uri->scheme, "gluster+tcp")) {
-        gconf->transport = g_strdup("tcp");
+        gconf->server->transport = GLUSTER_TRANSPORT_TCP;
     } else if (!strcmp(uri->scheme, "gluster+unix")) {
-        gconf->transport = g_strdup("unix");
+        gconf->server->transport = GLUSTER_TRANSPORT_UNIX;
         is_unix = true;
     } else if (!strcmp(uri->scheme, "gluster+rdma")) {
-        gconf->transport = g_strdup("rdma");
+        gconf->server->transport = GLUSTER_TRANSPORT_RDMA;
     } else {
         ret = -EINVAL;
         goto out;
     }
+    gconf->server->has_transport = true;
 
     ret = parse_volume_options(gconf, uri->path);
     if (ret < 0) {
@@ -211,10 +196,15 @@ static int qemu_gluster_parseuri(GlusterConf *gconf, const char *filename)
             ret = -EINVAL;
             goto out;
         }
-        gconf->host = g_strdup(qp->p[0].value);
+        gconf->server->host = g_strdup(qp->p[0].value);
     } else {
-        gconf->host = g_strdup(uri->server ? uri->server : "localhost");
-        gconf->port = uri->port;
+        gconf->server->host = g_strdup(uri->server ? uri->server : "localhost");
+        if (uri->port) {
+            gconf->server->port = uri->port;
+        } else {
+            gconf->server->port = GLUSTER_DEFAULT_PORT;
+        }
+        gconf->server->has_port = true;
     }
 
 out:
@@ -225,8 +215,8 @@ out:
     return ret;
 }
 
-static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
-                                      Error **errp)
+static struct glfs *qemu_gluster_init(BlockdevOptionsGluster *gconf,
+                                      const char *filename, Error **errp)
 {
     struct glfs *glfs = NULL;
     int ret;
@@ -245,8 +235,9 @@ static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
         goto out;
     }
 
-    ret = glfs_set_volfile_server(glfs, gconf->transport, gconf->host,
-            gconf->port);
+    ret = glfs_set_volfile_server(glfs,
+                                  GlusterTransport_lookup[gconf->server->transport],
+                                  gconf->server->host, gconf->server->port);
     if (ret < 0) {
         goto out;
     }
@@ -260,9 +251,9 @@ static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
     if (ret) {
         error_setg_errno(errp, errno,
                          "Gluster connection failed for host=%s port=%d "
-                         "volume=%s path=%s transport=%s", gconf->host,
-                         gconf->port, gconf->volume, gconf->path,
-                         gconf->transport);
+                         "volume=%s path=%s transport=%s", gconf->server->host,
+                         gconf->server->port, gconf->volume, gconf->path,
+                         GlusterTransport_lookup[gconf->server->transport]);
 
         /* glfs_init sometimes doesn't set errno although docs suggest that */
         if (errno == 0)
@@ -354,7 +345,7 @@ static int qemu_gluster_open(BlockDriverState *bs,  QDict *options,
     BDRVGlusterState *s = bs->opaque;
     int open_flags = 0;
     int ret = 0;
-    GlusterConf *gconf = g_new0(GlusterConf, 1);
+    BlockdevOptionsGluster *gconf = NULL;
     QemuOpts *opts;
     Error *local_err = NULL;
     const char *filename;
@@ -377,7 +368,9 @@ static int qemu_gluster_open(BlockDriverState *bs,  QDict *options,
         s->debug_level = GLUSTER_DEBUG_MAX;
     }
 
+    gconf = g_new0(BlockdevOptionsGluster, 1);
     gconf->debug_level = s->debug_level;
+    gconf->has_debug_level = true;
     s->glfs = qemu_gluster_init(gconf, filename, errp);
     if (!s->glfs) {
         ret = -errno;
@@ -412,7 +405,9 @@ static int qemu_gluster_open(BlockDriverState *bs,  QDict *options,
 
 out:
     qemu_opts_del(opts);
-    qemu_gluster_gconf_free(gconf);
+    if (gconf) {
+        qapi_free_BlockdevOptionsGluster(gconf);
+    }
     if (!ret) {
         return ret;
     }
@@ -431,7 +426,7 @@ static int qemu_gluster_reopen_prepare(BDRVReopenState *state,
     int ret = 0;
     BDRVGlusterState *s;
     BDRVGlusterReopenState *reop_s;
-    GlusterConf *gconf = NULL;
+    BlockdevOptionsGluster *gconf;
     int open_flags = 0;
 
     assert(state != NULL);
@@ -444,9 +439,9 @@ static int qemu_gluster_reopen_prepare(BDRVReopenState *state,
 
     qemu_gluster_parse_flags(state->flags, &open_flags);
 
-    gconf = g_new0(GlusterConf, 1);
-
+    gconf = g_new0(BlockdevOptionsGluster, 1);
     gconf->debug_level = s->debug_level;
+    gconf->has_debug_level = true;
     reop_s->glfs = qemu_gluster_init(gconf, state->bs->filename, errp);
     if (reop_s->glfs == NULL) {
         ret = -errno;
@@ -472,7 +467,9 @@ static int qemu_gluster_reopen_prepare(BDRVReopenState *state,
 
 exit:
     /* state->opaque will be freed in either the _abort or _commit */
-    qemu_gluster_gconf_free(gconf);
+    if (gconf) {
+        qapi_free_BlockdevOptionsGluster(gconf);
+    }
     return ret;
 }
 
@@ -574,14 +571,15 @@ static inline int qemu_gluster_zerofill(struct glfs_fd *fd, int64_t offset,
 static int qemu_gluster_create(const char *filename,
                                QemuOpts *opts, Error **errp)
 {
+    BlockdevOptionsGluster *gconf;
     struct glfs *glfs;
     struct glfs_fd *fd;
     int ret = 0;
     int prealloc = 0;
     int64_t total_size = 0;
     char *tmp = NULL;
-    GlusterConf *gconf = g_new0(GlusterConf, 1);
 
+    gconf = g_new0(BlockdevOptionsGluster, 1);
     gconf->debug_level = qemu_opt_get_number_del(opts, GLUSTER_OPT_DEBUG,
                                                  GLUSTER_DEBUG_DEFAULT);
     if (gconf->debug_level < 0) {
@@ -589,6 +587,7 @@ static int qemu_gluster_create(const char *filename,
     } else if (gconf->debug_level > GLUSTER_DEBUG_MAX) {
         gconf->debug_level = GLUSTER_DEBUG_MAX;
     }
+    gconf->has_debug_level = true;
 
     glfs = qemu_gluster_init(gconf, filename, errp);
     if (!glfs) {
@@ -630,7 +629,9 @@ static int qemu_gluster_create(const char *filename,
     }
 out:
     g_free(tmp);
-    qemu_gluster_gconf_free(gconf);
+    if (gconf) {
+        qapi_free_BlockdevOptionsGluster(gconf);
+    }
     if (glfs) {
         glfs_fini(glfs);
     }
diff --git a/qapi/block-core.json b/qapi/block-core.json
index ac8f5f6..f8b38bb 100644
--- a/qapi/block-core.json
+++ b/qapi/block-core.json
@@ -1634,13 +1634,14 @@
 # @host_device, @host_cdrom: Since 2.1
 #
 # Since: 2.0
+# @gluster: Since 2.7
 ##
 { 'enum': 'BlockdevDriver',
   'data': [ 'archipelago', 'blkdebug', 'blkverify', 'bochs', 'cloop',
-            'dmg', 'file', 'ftp', 'ftps', 'host_cdrom', 'host_device',
-            'http', 'https', 'luks', 'null-aio', 'null-co', 'parallels',
-            'qcow', 'qcow2', 'qed', 'quorum', 'raw', 'tftp', 'vdi', 'vhdx',
-            'vmdk', 'vpc', 'vvfat' ] }
+            'dmg', 'file', 'ftp', 'ftps', 'gluster', 'host_cdrom',
+            'host_device', 'http', 'https', 'luks', 'null-aio', 'null-co',
+            'parallels', 'qcow', 'qcow2', 'qed', 'quorum', 'raw', 'tftp',
+            'vdi', 'vhdx', 'vmdk', 'vpc', 'vvfat' ] }
 
 ##
 # @BlockdevOptionsFile
@@ -2033,6 +2034,62 @@
             '*read-pattern': 'QuorumReadPattern' } }
 
 ##
+# @GlusterTransport
+#
+# An enumeration of Gluster transport type
+#
+# @tcp:   TCP   - Transmission Control Protocol
+#
+# @unix:  UNIX  - Unix domain socket
+#
+# @rdma:  RDMA  - Remote direct memory access
+#
+# Since: 2.7
+##
+{ 'enum': 'GlusterTransport', 'data': [ 'tcp', 'unix', 'rdma'] }
+
+##
+# @GlusterServer
+#
+# Details for connecting to a gluster server
+#
+# @host:       host address (hostname/ipv4/ipv6 addresses)
+#
+# @port:       #optional port number on which glusterd is listening
+#               (default 24007)
+#
+# @transport:  #optional transport type used to connect to gluster management
+#               daemon (default 'tcp')
+#
+# Since: 2.7
+##
+{ 'struct': 'GlusterServer',
+  'data': { 'host': 'str',
+            '*port': 'uint16',
+            '*transport': 'GlusterTransport' } }
+
+##
+# @BlockdevOptionsGluster
+#
+# Driver specific block device options for Gluster
+#
+# @volume:      name of gluster volume where VM image resides
+#
+# @path:        absolute path to image file in gluster volume
+#
+# @server:      gluster server description
+#
+# @debug_level: #libgfapi log level (default '4' which is Error)
+#
+# Since: 2.7
+##
+{ 'struct': 'BlockdevOptionsGluster',
+  'data': { 'volume': 'str',
+            'path': 'str',
+            'server': 'GlusterServer',
+            '*debug_level': 'int' } }
+
+##
 # @BlockdevOptions
 #
 # Options for creating a block device.  Many options are available for all
@@ -2095,7 +2152,7 @@
       'file':       'BlockdevOptionsFile',
       'ftp':        'BlockdevOptionsFile',
       'ftps':       'BlockdevOptionsFile',
-# TODO gluster: Wait for structured options
+      'gluster':    'BlockdevOptionsGluster',
       'host_cdrom': 'BlockdevOptionsFile',
       'host_device':'BlockdevOptionsFile',
       'http':       'BlockdevOptionsFile',
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [Qemu-devel] [PATCH v18 4/4] block/gluster: add support for multiple gluster servers
  2016-07-13 13:57 [Qemu-devel] [PATCH v18 0/4] block/gluster: add support for multiple gluster servers Prasanna Kumar Kalever
                   ` (2 preceding siblings ...)
  2016-07-13 13:57 ` [Qemu-devel] [PATCH v18 3/4] block/gluster: using new qapi schema Prasanna Kumar Kalever
@ 2016-07-13 13:57 ` Prasanna Kumar Kalever
  2016-07-14 12:05   ` Markus Armbruster
  3 siblings, 1 reply; 11+ messages in thread
From: Prasanna Kumar Kalever @ 2016-07-13 13:57 UTC (permalink / raw)
  To: qemu-devel
  Cc: kwolf, eblake, pkrempa, jcody, bharata, berrange, rtalur,
	deepakcs, Prasanna Kumar Kalever

This patch adds a way to specify multiple volfile servers to the gluster
block backend of QEMU, along with their tcp|rdma transport types and port
numbers.

Problem:

Currently, a VM image on a gluster volume is specified like this:

file=gluster[+tcp]://host[:port]/testvol/a.img

Assume we have three hosts in a trusted pool with a replica-3 volume in
action, and the host mentioned in the command above goes down for some
reason. Since the volume is replica 3, the other two hosts are still
active and the VM could boot from either of them.

But currently there is no mechanism to pass the addresses of those other
two gluster hosts to qemu.

Solution:

New way of specifying a VM image on a gluster volume with volfile servers
(the old syntax is still supported to maintain backward compatibility):

Basic command line syntax looks like:

Pattern I:
 -drive driver=gluster,
        volume=testvol,path=/path/a.raw,
        server.0.host=1.2.3.4,
       [server.0.port=24007,]
       [server.0.transport=tcp,]
        server.1.host=5.6.7.8,
       [server.1.port=24008,]
       [server.1.transport=rdma,]
        server.2.host=/var/run/glusterd.socket,
        server.2.transport=unix ...

Pattern II:
 'json:{"driver":"qcow2","file":{"driver":"gluster",
       "volume":"testvol","path":"/path/a.qcow2",
       "server":[{tuple0},{tuple1}, ...{tupleN}]}}'

   driver      => 'gluster' (protocol name)
   volume      => name of gluster volume where our VM image resides
   path        => absolute path of image in gluster volume

  {tuple}      => {"host":"1.2.3.4"[,"port":"24007","transport":"tcp"]}

   host        => host address (hostname/ipv4/ipv6 addresses/socket path)
   port        => port number on which glusterd is listening. (default 24007)
   transport   => transport type used to connect to gluster management daemon,
                   it can be tcp|rdma|unix (default 'tcp')

Examples:
1.
 -drive driver=qcow2,file.driver=gluster,
        file.volume=testvol,file.path=/path/a.qcow2,
        file.server.0.host=1.2.3.4,
        file.server.0.port=24007,
        file.server.0.transport=tcp,
        file.server.1.host=5.6.7.8,
        file.server.1.port=24008,
        file.server.1.transport=rdma,
        file.server.2.host=/var/run/glusterd.socket,
        file.server.2.transport=unix
2.
 'json:{"driver":"qcow2","file":{"driver":"gluster","volume":"testvol",
         "path":"/path/a.qcow2","server":
         [{"host":"1.2.3.4","port":"24007","transport":"tcp"},
          {"host":"4.5.6.7","port":"24008","transport":"rdma"},
          {"host":"/var/run/glusterd.socket","transport":"unix"}] } }'

This patch provides a mechanism to supply all the server addresses in the
replica set, so that if host1 goes down the VM can still boot from any of
the active hosts.

This is equivalent to the backup-volfile-servers option supported by
mount.glusterfs (the FUSE way of mounting a gluster volume).
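
For comparison, a FUSE mount with fallback servers and the equivalent
QEMU invocation would look roughly like this (illustrative sketch; the
host names are made up):

```
# mount.glusterfs with fallback volfile servers
mount -t glusterfs -o backup-volfile-servers=host2:host3 \
      host1:/testvol /mnt/gluster

# with this patch: the same fallback list passed to qemu
qemu-system-x86_64 -drive driver=gluster,volume=testvol,path=/a.img,\
server.0.host=host1,server.1.host=host2,server.2.host=host3
```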

Credits: Sincere thanks to Kevin Wolf <kwolf@redhat.com> and
"Deepak C Shetty" <deepakcs@redhat.com> for their inputs and support.

Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
---
 block/gluster.c      | 290 ++++++++++++++++++++++++++++++++++++++++++++-------
 qapi/block-core.json |   4 +-
 2 files changed, 255 insertions(+), 39 deletions(-)

diff --git a/block/gluster.c b/block/gluster.c
index 40dcbc1..41046f0 100644
--- a/block/gluster.c
+++ b/block/gluster.c
@@ -12,8 +12,15 @@
 #include "block/block_int.h"
 #include "qapi/error.h"
 #include "qemu/uri.h"
+#include "qemu/error-report.h"
 
 #define GLUSTER_OPT_FILENAME        "filename"
+#define GLUSTER_OPT_VOLUME          "volume"
+#define GLUSTER_OPT_PATH            "path"
+#define GLUSTER_OPT_HOST            "host"
+#define GLUSTER_OPT_PORT            "port"
+#define GLUSTER_OPT_TRANSPORT       "transport"
+#define GLUSTER_OPT_SERVER_PATTERN  "server."
 #define GLUSTER_OPT_DEBUG           "debug"
 #define GLUSTER_DEFAULT_PORT        24007
 #define GLUSTER_DEBUG_DEFAULT       4
@@ -82,6 +89,46 @@ static QemuOptsList runtime_opts = {
     },
 };
 
+static QemuOptsList runtime_json_opts = {
+    .name = "gluster_json",
+    .head = QTAILQ_HEAD_INITIALIZER(runtime_json_opts.head),
+    .desc = {
+        {
+            .name = GLUSTER_OPT_VOLUME,
+            .type = QEMU_OPT_STRING,
+            .help = "name of gluster volume where VM image resides",
+        },
+        {
+            .name = GLUSTER_OPT_PATH,
+            .type = QEMU_OPT_STRING,
+            .help = "absolute path to image file in gluster volume",
+        },
+        { /* end of list */ }
+    },
+};
+
+static QemuOptsList runtime_tuple_opts = {
+    .name = "gluster_tuple",
+    .head = QTAILQ_HEAD_INITIALIZER(runtime_tuple_opts.head),
+    .desc = {
+        {
+            .name = GLUSTER_OPT_HOST,
+            .type = QEMU_OPT_STRING,
+            .help = "host address (hostname/ipv4/ipv6 addresses/socket path)",
+        },
+        {
+            .name = GLUSTER_OPT_PORT,
+            .type = QEMU_OPT_NUMBER,
+            .help = "port number on which glusterd is listening (default 24007)",
+        },
+        {
+            .name = GLUSTER_OPT_TRANSPORT,
+            .type = QEMU_OPT_STRING,
+            .help = "transport type 'tcp'|'rdma'|'unix' (default 'tcp')",
+        },
+        { /* end of list */ }
+    },
+};
 
 static int parse_volume_options(BlockdevOptionsGluster *gconf, char *path)
 {
@@ -148,6 +195,7 @@ static int parse_volume_options(BlockdevOptionsGluster *gconf, char *path)
 static int qemu_gluster_parseuri(BlockdevOptionsGluster *gconf,
                                  const char *filename)
 {
+    GlusterServer *gsconf;
     URI *uri;
     QueryParams *qp = NULL;
     bool is_unix = false;
@@ -158,23 +206,24 @@ static int qemu_gluster_parseuri(BlockdevOptionsGluster *gconf,
         return -EINVAL;
     }
 
-    gconf->server = g_new0(GlusterServer, 1);
+    gconf->server = g_new0(GlusterServerList, 1);
+    gconf->server->value = gsconf = g_new0(GlusterServer, 1);
 
     /* transport */
     if (!uri->scheme || !strcmp(uri->scheme, "gluster")) {
-        gconf->server->transport = GLUSTER_TRANSPORT_TCP;
+        gsconf->transport = GLUSTER_TRANSPORT_TCP;
     } else if (!strcmp(uri->scheme, "gluster+tcp")) {
-        gconf->server->transport = GLUSTER_TRANSPORT_TCP;
+        gsconf->transport = GLUSTER_TRANSPORT_TCP;
     } else if (!strcmp(uri->scheme, "gluster+unix")) {
-        gconf->server->transport = GLUSTER_TRANSPORT_UNIX;
+        gsconf->transport = GLUSTER_TRANSPORT_UNIX;
         is_unix = true;
     } else if (!strcmp(uri->scheme, "gluster+rdma")) {
-        gconf->server->transport = GLUSTER_TRANSPORT_RDMA;
+        gsconf->transport = GLUSTER_TRANSPORT_RDMA;
     } else {
         ret = -EINVAL;
         goto out;
     }
-    gconf->server->has_transport = true;
+    gsconf->has_transport = true;
 
     ret = parse_volume_options(gconf, uri->path);
     if (ret < 0) {
@@ -196,15 +245,15 @@ static int qemu_gluster_parseuri(BlockdevOptionsGluster *gconf,
             ret = -EINVAL;
             goto out;
         }
-        gconf->server->host = g_strdup(qp->p[0].value);
+        gsconf->host = g_strdup(qp->p[0].value);
     } else {
-        gconf->server->host = g_strdup(uri->server ? uri->server : "localhost");
+        gsconf->host = g_strdup(uri->server ? uri->server : "localhost");
         if (uri->port) {
-            gconf->server->port = uri->port;
+            gsconf->port = uri->port;
         } else {
-            gconf->server->port = GLUSTER_DEFAULT_PORT;
+            gsconf->port = GLUSTER_DEFAULT_PORT;
         }
-        gconf->server->has_port = true;
+        gsconf->has_port = true;
     }
 
 out:
@@ -215,31 +264,26 @@ out:
     return ret;
 }
 
-static struct glfs *qemu_gluster_init(BlockdevOptionsGluster *gconf,
-                                      const char *filename, Error **errp)
+static struct glfs *qemu_gluster_glfs_init(BlockdevOptionsGluster *gconf,
+                                           Error **errp)
 {
-    struct glfs *glfs = NULL;
+    struct glfs *glfs;
     int ret;
     int old_errno;
-
-    ret = qemu_gluster_parseuri(gconf, filename);
-    if (ret < 0) {
-        error_setg(errp, "Usage: file=gluster[+transport]://[host[:port]]/"
-                         "volume/path[?socket=...]");
-        errno = -ret;
-        goto out;
-    }
+    GlusterServerList *server = NULL;
 
     glfs = glfs_new(gconf->volume);
     if (!glfs) {
         goto out;
     }
 
-    ret = glfs_set_volfile_server(glfs,
-                                  GlusterTransport_lookup[gconf->server->transport],
-                                  gconf->server->host, gconf->server->port);
-    if (ret < 0) {
-        goto out;
+    for (server = gconf->server; server; server = server->next) {
+        ret = glfs_set_volfile_server(glfs,
+                                      GlusterTransport_lookup[server->value->transport],
+                                      server->value->host, server->value->port);
+        if (ret < 0) {
+            goto out;
+        }
     }
 
     ret = glfs_set_logging(glfs, "-", gconf->debug_level);
@@ -249,11 +293,13 @@ static struct glfs *qemu_gluster_init(BlockdevOptionsGluster *gconf,
 
     ret = glfs_init(glfs);
     if (ret) {
+        error_report("Gluster connection for 'volume: %s and path: %s': ",
+                         gconf->volume, gconf->path);
+        for (server = gconf->server; server; server = server->next) {
+            error_report("failed for host %s", server->value->host);
+        }
         error_setg_errno(errp, errno,
-                         "Gluster connection failed for host=%s port=%d "
-                         "volume=%s path=%s transport=%s", gconf->server->host,
-                         gconf->server->port, gconf->volume, gconf->path,
-                         GlusterTransport_lookup[gconf->server->transport]);
+                         "Please refer to gluster logs for more info");
 
         /* glfs_init sometimes doesn't set errno although docs suggest that */
         if (errno == 0)
@@ -272,6 +318,176 @@ out:
     return NULL;
 }
 
+static int qapi_enum_parse(const char *opt)
+{
+    int i;
+
+    if (!opt) {
+        /* Set tcp as default */
+        return GLUSTER_TRANSPORT_TCP;
+    }
+
+    for (i = 0; i < GLUSTER_TRANSPORT__MAX; i++) {
+        if (!strcmp(opt, GlusterTransport_lookup[i])) {
+            return i;
+        }
+    }
+
+    return i;
+}
+
+/*
+ * Convert the json formatted command line into qapi.
+*/
+static int qemu_gluster_parsejson(BlockdevOptionsGluster *gconf,
+                                  QDict *options)
+{
+    QemuOpts *opts;
+    GlusterServer *gsconf;
+    GlusterServerList *curr = NULL;
+    QDict *backing_options = NULL;
+    Error *local_err = NULL;
+    char *str = NULL;
+    const char *ptr;
+    size_t num_servers;
+    int i;
+
+    /* create opts info from runtime_json_opts list */
+    opts = qemu_opts_create(&runtime_json_opts, NULL, 0, &error_abort);
+    qemu_opts_absorb_qdict(opts, options, &local_err);
+    if (local_err) {
+        goto out;
+    }
+
+    num_servers = qdict_array_entries(options, GLUSTER_OPT_SERVER_PATTERN);
+    if (num_servers < 1) {
+        error_setg(&local_err, "qemu_gluster: please provide 'server' "
+                               "option with valid fields in array of tuples");
+        goto out;
+    }
+
+    ptr = qemu_opt_get(opts, GLUSTER_OPT_VOLUME);
+    if (!ptr) {
+        error_setg(&local_err, "qemu_gluster: please provide 'volume' "
+                               "option");
+        goto out;
+    }
+    gconf->volume = g_strdup(ptr);
+
+    ptr = qemu_opt_get(opts, GLUSTER_OPT_PATH);
+    if (!ptr) {
+        error_setg(&local_err, "qemu_gluster: please provide 'path' "
+                               "option");
+        goto out;
+    }
+    gconf->path = g_strdup(ptr);
+
+    qemu_opts_del(opts);
+
+    /* create opts info from runtime_tuple_opts list */
+    for (i = 0; i < num_servers; i++) {
+        opts = qemu_opts_create(&runtime_tuple_opts, NULL, 0, &error_abort);
+        str = g_strdup_printf(GLUSTER_OPT_SERVER_PATTERN"%d.", i);
+        qdict_extract_subqdict(options, &backing_options, str);
+        qemu_opts_absorb_qdict(opts, backing_options, &local_err);
+        if (local_err) {
+            goto out;
+        }
+        qdict_del(backing_options, str);
+        g_free(str);
+        str = NULL;
+
+        ptr = qemu_opt_get(opts, GLUSTER_OPT_HOST);
+        if (!ptr) {
+            error_setg(&local_err, "qemu_gluster: server.{tuple.%d} "
+                                   "requires 'host' option", i);
+            goto out;
+        }
+
+        gsconf = g_new0(GlusterServer, 1);
+
+        gsconf->host = g_strdup(ptr);
+
+        ptr = qemu_opt_get(opts, GLUSTER_OPT_TRANSPORT);
+        /* check whether transport type specified in json command is valid */
+        gsconf->transport = qapi_enum_parse(ptr);
+        if (gsconf->transport == GLUSTER_TRANSPORT__MAX) {
+            error_setg(&local_err, "qemu_gluster: please set 'transport'"
+                                   " type in tuple.%d as tcp|rdma|unix", i);
+            qapi_free_GlusterServer(gsconf);
+            goto out;
+        }
+        gsconf->has_transport = true;
+
+        /* we don't need port number for unix transport */
+        if (gsconf->transport != GLUSTER_TRANSPORT_UNIX) {
+            gsconf->port = qemu_opt_get_number(opts, GLUSTER_OPT_PORT,
+                                               GLUSTER_DEFAULT_PORT);
+            gsconf->has_port = true;
+        }
+
+        if (gconf->server == NULL) {
+            gconf->server = g_new0(GlusterServerList, 1);
+            gconf->server->value = gsconf;
+            curr = gconf->server;
+        } else {
+            curr->next = g_new0(GlusterServerList, 1);
+            curr->next->value = gsconf;
+            curr = curr->next;
+        }
+
+        qemu_opts_del(opts);
+    }
+
+    return 0;
+
+out:
+    error_report_err(local_err);
+    qemu_opts_del(opts);
+    if (str) {
+        qdict_del(backing_options, str);
+        g_free(str);
+    }
+    errno = EINVAL;
+    return -errno;
+}
+
+static struct glfs *qemu_gluster_init(BlockdevOptionsGluster *gconf,
+                                      const char *filename,
+                                      QDict *options, Error **errp)
+{
+    int ret;
+
+    if (filename) {
+        ret = qemu_gluster_parseuri(gconf, filename);
+        if (ret < 0) {
+            error_setg(errp, "Usage: file=gluster[+transport]://[host[:port]]/"
+                             "volume/path[?socket=...]");
+            errno = -ret;
+            return NULL;
+        }
+    } else {
+        ret = qemu_gluster_parsejson(gconf, options);
+        if (ret < 0) {
+            error_setg(errp, "Usage: "
+                             "-drive driver=qcow2,file.driver=gluster,"
+                             "file.volume=testvol,file.path=/path/a.qcow2,"
+                             "file.server.0.host=1.2.3.4,"
+                             "[file.server.0.port=24007,]"
+                             "[file.server.0.transport=tcp,]"
+                             "file.server.1.host=5.6.7.8,"
+                             "[file.server.1.port=24008,]"
+                             "[file.server.1.transport=rdma,]"
+                             "file.server.2.host=/var/run/glusterd.socket,"
+                             "file.server.2.transport=unix ...");
+            errno = -ret;
+            return NULL;
+        }
+    }
+
+    return qemu_gluster_glfs_init(gconf, errp);
+}
+
 static void qemu_gluster_complete_aio(void *opaque)
 {
     GlusterAIOCB *acb = (GlusterAIOCB *)opaque;
@@ -371,7 +587,7 @@ static int qemu_gluster_open(BlockDriverState *bs,  QDict *options,
     gconf = g_new0(BlockdevOptionsGluster, 1);
     gconf->debug_level = s->debug_level;
     gconf->has_debug_level = true;
-    s->glfs = qemu_gluster_init(gconf, filename, errp);
+    s->glfs = qemu_gluster_init(gconf, filename, options, errp);
     if (!s->glfs) {
         ret = -errno;
         goto out;
@@ -442,7 +658,7 @@ static int qemu_gluster_reopen_prepare(BDRVReopenState *state,
     gconf = g_new0(BlockdevOptionsGluster, 1);
     gconf->debug_level = s->debug_level;
     gconf->has_debug_level = true;
-    reop_s->glfs = qemu_gluster_init(gconf, state->bs->filename, errp);
+    reop_s->glfs = qemu_gluster_init(gconf, state->bs->filename, NULL, errp);
     if (reop_s->glfs == NULL) {
         ret = -errno;
         goto exit;
@@ -589,7 +805,7 @@ static int qemu_gluster_create(const char *filename,
     }
     gconf->has_debug_level = true;
 
-    glfs = qemu_gluster_init(gconf, filename, errp);
+    glfs = qemu_gluster_init(gconf, filename, NULL, errp);
     if (!glfs) {
         ret = -errno;
         goto out;
@@ -969,7 +1185,7 @@ static BlockDriver bdrv_gluster = {
     .format_name                  = "gluster",
     .protocol_name                = "gluster",
     .instance_size                = sizeof(BDRVGlusterState),
-    .bdrv_needs_filename          = true,
+    .bdrv_needs_filename          = false,
     .bdrv_file_open               = qemu_gluster_open,
     .bdrv_reopen_prepare          = qemu_gluster_reopen_prepare,
     .bdrv_reopen_commit           = qemu_gluster_reopen_commit,
@@ -997,7 +1213,7 @@ static BlockDriver bdrv_gluster_tcp = {
     .format_name                  = "gluster",
     .protocol_name                = "gluster+tcp",
     .instance_size                = sizeof(BDRVGlusterState),
-    .bdrv_needs_filename          = true,
+    .bdrv_needs_filename          = false,
     .bdrv_file_open               = qemu_gluster_open,
     .bdrv_reopen_prepare          = qemu_gluster_reopen_prepare,
     .bdrv_reopen_commit           = qemu_gluster_reopen_commit,
@@ -1053,7 +1269,7 @@ static BlockDriver bdrv_gluster_rdma = {
     .format_name                  = "gluster",
     .protocol_name                = "gluster+rdma",
     .instance_size                = sizeof(BDRVGlusterState),
-    .bdrv_needs_filename          = true,
+    .bdrv_needs_filename          = false,
     .bdrv_file_open               = qemu_gluster_open,
     .bdrv_reopen_prepare          = qemu_gluster_reopen_prepare,
     .bdrv_reopen_commit           = qemu_gluster_reopen_commit,
diff --git a/qapi/block-core.json b/qapi/block-core.json
index f8b38bb..52cee7d 100644
--- a/qapi/block-core.json
+++ b/qapi/block-core.json
@@ -2077,7 +2077,7 @@
 #
 # @path:        absolute path to image file in gluster volume
 #
-# @server:      gluster server description
+# @server:      one or more gluster server descriptions
 #
 # @debug_level: #libgfapi log level (default '4' which is Error)
 #
@@ -2086,7 +2086,7 @@
 { 'struct': 'BlockdevOptionsGluster',
   'data': { 'volume': 'str',
             'path': 'str',
-            'server': 'GlusterServer',
+            'server': [ 'GlusterServer' ],
             '*debug_level': 'int' } }
 
 ##
-- 
2.7.4


* Re: [Qemu-devel] [PATCH v18 3/4] block/gluster: using new qapi schema
  2016-07-13 13:57 ` [Qemu-devel] [PATCH v18 3/4] block/gluster: using new qapi schema Prasanna Kumar Kalever
@ 2016-07-14  7:52   ` Markus Armbruster
  2016-07-14  8:18     ` Prasanna Kalever
  0 siblings, 1 reply; 11+ messages in thread
From: Markus Armbruster @ 2016-07-14  7:52 UTC (permalink / raw)
  To: Prasanna Kumar Kalever
  Cc: qemu-devel, kwolf, pkrempa, jcody, deepakcs, bharata, rtalur

Prasanna Kumar Kalever <prasanna.kalever@redhat.com> writes:

> this patch adds 'GlusterServer' related schema in qapi/block-core.json
>
> Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>

QAPI/QMP interface review only.

> diff --git a/qapi/block-core.json b/qapi/block-core.json
> index ac8f5f6..f8b38bb 100644
> --- a/qapi/block-core.json
> +++ b/qapi/block-core.json
> @@ -1634,13 +1634,14 @@
>  # @host_device, @host_cdrom: Since 2.1
>  #
>  # Since: 2.0
> +# @gluster: Since 2.7
>  ##
>  { 'enum': 'BlockdevDriver',
>    'data': [ 'archipelago', 'blkdebug', 'blkverify', 'bochs', 'cloop',
> -            'dmg', 'file', 'ftp', 'ftps', 'host_cdrom', 'host_device',
> -            'http', 'https', 'luks', 'null-aio', 'null-co', 'parallels',
> -            'qcow', 'qcow2', 'qed', 'quorum', 'raw', 'tftp', 'vdi', 'vhdx',
> -            'vmdk', 'vpc', 'vvfat' ] }
> +            'dmg', 'file', 'ftp', 'ftps', 'gluster', 'host_cdrom',
> +            'host_device', 'http', 'https', 'luks', 'null-aio', 'null-co',
> +            'parallels', 'qcow', 'qcow2', 'qed', 'quorum', 'raw', 'tftp',
> +            'vdi', 'vhdx', 'vmdk', 'vpc', 'vvfat' ] }
>  
>  ##
>  # @BlockdevOptionsFile
> @@ -2033,6 +2034,62 @@
>              '*read-pattern': 'QuorumReadPattern' } }
>  
>  ##
> +# @GlusterTransport
> +#
> +# An enumeration of Gluster transport type
> +#
> +# @tcp:   TCP   - Transmission Control Protocol
> +#
> +# @unix:  UNIX  - Unix domain socket
> +#
> +# @rdma:  RDMA  - Remote direct memory access
> +#
> +# Since: 2.7
> +##
> +{ 'enum': 'GlusterTransport', 'data': [ 'tcp', 'unix', 'rdma'] }
> +
> +##
> +# @GlusterServer
> +#
> +# Details for connecting to a gluster server
> +#
> +# @host:       host address (hostname/ipv4/ipv6 addresses)
> +#
> +# @port:       #optional port number on which glusterd is listening
> +#               (default 24007)
> +#
> +# @transport:  #optional transport type used to connect to gluster management
> +#               daemon (default 'tcp')
> +#
> +# Since: 2.7
> +##
> +{ 'struct': 'GlusterServer',
> +  'data': { 'host': 'str',
> +            '*port': 'uint16',
> +            '*transport': 'GlusterTransport' } }

The meaning of @host and @port is obvious enough with @transport 'tcp',
and your documentation matches it.

Not the case for @transport 'unix'.  There is no such thing as 'host'
and 'port' with Unix domain sockets.  I'm afraid the interface makes no
sense in its current state.

I'm not familiar with RDMA, so I can't say whether your definition makes
sense there.  I can say that you shouldn't assume people are familiar
with RDMA.  Please explain what @host and @port mean there.  If they're
just like for 'tcp', say so.

For 'tcp', GlusterTransport duplicates InetSocketAddress, except it
doesn't support service names (@port is 'uint16' instead of 'str'),
doesn't support port ranges (no @to; not needed here), and doesn't
support controlling IPv4/IPv6 (no @ipv4, @ipv6).

Why is it necessary to roll your own address type?  Why can't you use
SocketAddress?
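
For reference, the generic socket address types in the QAPI schema look
roughly like this (a paraphrased sketch of the 2.6-era definitions, not
quoted verbatim):

```
{ 'struct': 'InetSocketAddress',
  'data': { 'host': 'str',
            'port': 'str',
            '*to': 'uint16',
            '*ipv4': 'bool',
            '*ipv6': 'bool' } }

{ 'struct': 'UnixSocketAddress',
  'data': { 'path': 'str' } }

{ 'union': 'SocketAddress',
  'data': { 'inet': 'InetSocketAddress',
            'unix': 'UnixSocketAddress',
            'fd':   'String' } }
```

Reusing these would give service names, port ranges, and IPv4/IPv6
control for free; the driver would only need to reject variants it
cannot support (e.g. 'fd').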

> +
> +##
> +# @BlockdevOptionsGluster
> +#
> +# Driver specific block device options for Gluster
> +#
> +# @volume:      name of gluster volume where VM image resides
> +#
> +# @path:        absolute path to image file in gluster volume
> +#
> +# @server:      gluster server description
> +#
> +# @debug_level: #libgfapi log level (default '4' which is Error)
> +#
> +# Since: 2.7
> +##
> +{ 'struct': 'BlockdevOptionsGluster',
> +  'data': { 'volume': 'str',
> +            'path': 'str',
> +            'server': 'GlusterServer',
> +            '*debug_level': 'int' } }

Are @volume and @path interpreted in any way in QEMU, or are they merely
sent to the Gluster server?

> +
> +##
>  # @BlockdevOptions
>  #
>  # Options for creating a block device.  Many options are available for all
> @@ -2095,7 +2152,7 @@
>        'file':       'BlockdevOptionsFile',
>        'ftp':        'BlockdevOptionsFile',
>        'ftps':       'BlockdevOptionsFile',
> -# TODO gluster: Wait for structured options
> +      'gluster':    'BlockdevOptionsGluster',
>        'host_cdrom': 'BlockdevOptionsFile',
>        'host_device':'BlockdevOptionsFile',
>        'http':       'BlockdevOptionsFile',


* Re: [Qemu-devel] [PATCH v18 3/4] block/gluster: using new qapi schema
  2016-07-14  7:52   ` Markus Armbruster
@ 2016-07-14  8:18     ` Prasanna Kalever
  2016-07-14 10:52       ` Markus Armbruster
  0 siblings, 1 reply; 11+ messages in thread
From: Prasanna Kalever @ 2016-07-14  8:18 UTC (permalink / raw)
  To: Markus Armbruster
  Cc: Prasanna Kumar Kalever, qemu-devel, Kevin Wolf, Peter Krempa,
	Jeffrey Cody, deepakcs, bharata, Raghavendra Talur

On Thu, Jul 14, 2016 at 1:22 PM, Markus Armbruster <armbru@redhat.com> wrote:
> Prasanna Kumar Kalever <prasanna.kalever@redhat.com> writes:
>
>> this patch adds 'GlusterServer' related schema in qapi/block-core.json
>>
>> Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
>
> QAPI/QMP interface review only.
>
>> diff --git a/qapi/block-core.json b/qapi/block-core.json
>> index ac8f5f6..f8b38bb 100644
>> --- a/qapi/block-core.json
>> +++ b/qapi/block-core.json
>> @@ -1634,13 +1634,14 @@
>>  # @host_device, @host_cdrom: Since 2.1
>>  #
>>  # Since: 2.0
>> +# @gluster: Since 2.7
>>  ##
>>  { 'enum': 'BlockdevDriver',
>>    'data': [ 'archipelago', 'blkdebug', 'blkverify', 'bochs', 'cloop',
>> -            'dmg', 'file', 'ftp', 'ftps', 'host_cdrom', 'host_device',
>> -            'http', 'https', 'luks', 'null-aio', 'null-co', 'parallels',
>> -            'qcow', 'qcow2', 'qed', 'quorum', 'raw', 'tftp', 'vdi', 'vhdx',
>> -            'vmdk', 'vpc', 'vvfat' ] }
>> +            'dmg', 'file', 'ftp', 'ftps', 'gluster', 'host_cdrom',
>> +            'host_device', 'http', 'https', 'luks', 'null-aio', 'null-co',
>> +            'parallels', 'qcow', 'qcow2', 'qed', 'quorum', 'raw', 'tftp',
>> +            'vdi', 'vhdx', 'vmdk', 'vpc', 'vvfat' ] }
>>
>>  ##
>>  # @BlockdevOptionsFile
>> @@ -2033,6 +2034,62 @@
>>              '*read-pattern': 'QuorumReadPattern' } }
>>
>>  ##
>> +# @GlusterTransport
>> +#
>> +# An enumeration of Gluster transport type
>> +#
>> +# @tcp:   TCP   - Transmission Control Protocol
>> +#
>> +# @unix:  UNIX  - Unix domain socket
>> +#
>> +# @rdma:  RDMA  - Remote direct memory access
>> +#
>> +# Since: 2.7
>> +##
>> +{ 'enum': 'GlusterTransport', 'data': [ 'tcp', 'unix', 'rdma'] }
>> +
>> +##
>> +# @GlusterServer
>> +#
>> +# Details for connecting to a gluster server
>> +#
>> +# @host:       host address (hostname/ipv4/ipv6 addresses)
>> +#
>> +# @port:       #optional port number on which glusterd is listening
>> +#               (default 24007)
>> +#
>> +# @transport:  #optional transport type used to connect to gluster management
>> +#               daemon (default 'tcp')
>> +#
>> +# Since: 2.7
>> +##
>> +{ 'struct': 'GlusterServer',
>> +  'data': { 'host': 'str',
>> +            '*port': 'uint16',
>> +            '*transport': 'GlusterTransport' } }
>
> The meaning of @host and @port is obvious enough with @transport 'tcp',
> and your documentation matches it.
>
> Not the case for @transport 'unix'.  There is no such thing as 'host'
> and 'port' with Unix domain sockets.  I'm afraid the interface makes no
> sense in its current state.

Yes, agreed.
I am about to do something like:

+ { 'struct': 'UnixAddress',
+   'data': {
+       'socket': 'str',
+   } }
+
+ { 'struct': 'TcpAddress',
+   'data': {
+       'host': 'str',
+       'port': 'uint16',
+   } }
+
+ { 'union': 'GlusterServer',
+   'data': {
+       'unix': 'UnixAddress',
+       'tcp': 'TcpAddress'
+   } }
No rdma!

But I need to tweak them to fit the flat union patches at
git://repo.or.cz/qemu/armbru.git qapi-not-next
Currently I am reading the design changes; help in this area is appreciated.

> I'm not familiar with RDMA, so I can't say whether your definition makes
> sense there.  I can say that you shouldn't assume people are familiar
> with RDMA.  Please explain what @host and @port mean there.  If they're
> just like for 'tcp', say so.

I shall remove the rdma part because the gluster volfile fetch doesn't
support it.
Sorry for involving the rdma type in all my previous patches; I included
it in case gluster supported it in future, but that doesn't seem to be
the case.

To confirm: we don't need the rdma type here.

>
> For 'tcp', GlusterTransport duplicates InetSocketAddress, except it
> doesn't support service names (@port is 'uint16' instead of 'str'),
> doesn't support port ranges (no @to; not needed here), and doesn't
> support controlling IPv4/IPv6 (no @ipv4, @ipv6).
>
> Why is it necessary to roll your own address type?  Why can't you use
> SocketAddress?

Sure, I shall take a look at InetSocketAddress and SocketAddress

>
>> +
>> +##
>> +# @BlockdevOptionsGluster
>> +#
>> +# Driver specific block device options for Gluster
>> +#
>> +# @volume:      name of gluster volume where VM image resides
>> +#
>> +# @path:        absolute path to image file in gluster volume
>> +#
>> +# @server:      gluster server description
>> +#
>> +# @debug_level: #libgfapi log level (default '4' which is Error)
>> +#
>> +# Since: 2.7
>> +##
>> +{ 'struct': 'BlockdevOptionsGluster',
>> +  'data': { 'volume': 'str',
>> +            'path': 'str',
>> +            'server': 'GlusterServer',
>> +            '*debug_level': 'int' } }
>
> Are @volume and @path interpreted in any way in QEMU, or are they merely
> sent to the Gluster server?

have a look at libgfapi (which IMO is poorly designed):
glfs_set_volfile_server(struct glfs *fs, const char *transport,
                        const char *host, int port)

So the @volume and @path are parsed from the command-line args and
filled into the 'struct glfs' object
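
For reference, the call sequence the driver ends up making is roughly
this (an untested sketch condensed from qemu_gluster_glfs_init() in the
patch; error handling omitted, requires glusterfs-api to build):

```c
#include <fcntl.h>
#include <glusterfs/api/glfs.h>

/* gconf holds the parsed QAPI options (volume, path, server list) */
struct glfs *fs = glfs_new(gconf->volume);                 /* @volume */
for (GlusterServerList *s = gconf->server; s; s = s->next) {
    glfs_set_volfile_server(fs,
                            GlusterTransport_lookup[s->value->transport],
                            s->value->host, s->value->port);
}
glfs_set_logging(fs, "-", gconf->debug_level);
glfs_init(fs);                               /* connect to the volume */
glfs_fd_t *fd = glfs_open(fs, gconf->path, O_RDWR);        /* @path */
```

So QEMU itself only splits the options apart; the gluster server is
what interprets the volume name and resolves the path.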


Thanks Markus

--
prasanna

>
>> +
>> +##
>>  # @BlockdevOptions
>>  #
>>  # Options for creating a block device.  Many options are available for all
>> @@ -2095,7 +2152,7 @@
>>        'file':       'BlockdevOptionsFile',
>>        'ftp':        'BlockdevOptionsFile',
>>        'ftps':       'BlockdevOptionsFile',
>> -# TODO gluster: Wait for structured options
>> +      'gluster':    'BlockdevOptionsGluster',
>>        'host_cdrom': 'BlockdevOptionsFile',
>>        'host_device':'BlockdevOptionsFile',
>>        'http':       'BlockdevOptionsFile',

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [Qemu-devel] [PATCH v18 3/4] block/gluster: using new qapi schema
  2016-07-14  8:18     ` Prasanna Kalever
@ 2016-07-14 10:52       ` Markus Armbruster
  2016-07-14 15:33         ` Eric Blake
  0 siblings, 1 reply; 11+ messages in thread
From: Markus Armbruster @ 2016-07-14 10:52 UTC (permalink / raw)
  To: Prasanna Kalever
  Cc: Kevin Wolf, Peter Krempa, Prasanna Kumar Kalever, Jeffrey Cody,
	qemu-devel, deepakcs, bharata, Raghavendra Talur

Prasanna Kalever <pkalever@redhat.com> writes:

> On Thu, Jul 14, 2016 at 1:22 PM, Markus Armbruster <armbru@redhat.com> wrote:
>> Prasanna Kumar Kalever <prasanna.kalever@redhat.com> writes:
>>
>>> this patch adds 'GlusterServer' related schema in qapi/block-core.json
>>>
>>> Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
>>
>> QAPI/QMP interface review only.
>>
>>> diff --git a/qapi/block-core.json b/qapi/block-core.json
>>> index ac8f5f6..f8b38bb 100644
>>> --- a/qapi/block-core.json
>>> +++ b/qapi/block-core.json
>>> @@ -1634,13 +1634,14 @@
>>>  # @host_device, @host_cdrom: Since 2.1
>>>  #
>>>  # Since: 2.0
>>> +# @gluster: Since 2.7
>>>  ##
>>>  { 'enum': 'BlockdevDriver',
>>>    'data': [ 'archipelago', 'blkdebug', 'blkverify', 'bochs', 'cloop',
>>> -            'dmg', 'file', 'ftp', 'ftps', 'host_cdrom', 'host_device',
>>> -            'http', 'https', 'luks', 'null-aio', 'null-co', 'parallels',
>>> -            'qcow', 'qcow2', 'qed', 'quorum', 'raw', 'tftp', 'vdi', 'vhdx',
>>> -            'vmdk', 'vpc', 'vvfat' ] }
>>> +            'dmg', 'file', 'ftp', 'ftps', 'gluster', 'host_cdrom',
>>> +            'host_device', 'http', 'https', 'luks', 'null-aio', 'null-co',
>>> +            'parallels', 'qcow', 'qcow2', 'qed', 'quorum', 'raw', 'tftp',
>>> +            'vdi', 'vhdx', 'vmdk', 'vpc', 'vvfat' ] }
>>>
>>>  ##
>>>  # @BlockdevOptionsFile
>>> @@ -2033,6 +2034,62 @@
>>>              '*read-pattern': 'QuorumReadPattern' } }
>>>
>>>  ##
>>> +# @GlusterTransport
>>> +#
>>> +# An enumeration of Gluster transport type
>>> +#
>>> +# @tcp:   TCP   - Transmission Control Protocol
>>> +#
>>> +# @unix:  UNIX  - Unix domain socket
>>> +#
>>> +# @rdma:  RDMA  - Remote direct memory access
>>> +#
>>> +# Since: 2.7
>>> +##
>>> +{ 'enum': 'GlusterTransport', 'data': [ 'tcp', 'unix', 'rdma'] }
>>> +
>>> +##
>>> +# @GlusterServer
>>> +#
>>> +# Details for connecting to a gluster server
>>> +#
>>> +# @host:       host address (hostname/ipv4/ipv6 addresses)
>>> +#
>>> +# @port:       #optional port number on which glusterd is listening
>>> +#               (default 24007)
>>> +#
>>> +# @transport:  #optional transport type used to connect to gluster management
>>> +#               daemon (default 'tcp')
>>> +#
>>> +# Since: 2.7
>>> +##
>>> +{ 'struct': 'GlusterServer',
>>> +  'data': { 'host': 'str',
>>> +            '*port': 'uint16',
>>> +            '*transport': 'GlusterTransport' } }
>>
>> The meaning of @host and @port is obvious enough with @transport 'tcp',
>> and your documentation matches it.
>>
>> Not the case for @transport 'unix'.  There is no such thing as 'host'
>> and 'port' with Unix domain sockets.  I'm afraid the interface makes no
>> sense in its current state.
>
> Yes, agreed
> I am about to do something like
>
> + { 'struct': 'UnixAddress',
> +   'data': {
> +       'socket': 'str',
> +   } }
> +
> + { 'struct': 'TcpAddress',
> +   'data': {
> +       'host': 'str',
> +       'port': 'uint16',
> +   } }
> +
> + { 'union': 'GlusterServer',
> +   'data': {
> +       'unix': 'UnixAddress',
> +       'Tcp': 'TcpAddress'
> +   } }
> No rdma!

Even closer to SocketAddress now.
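
For reference, the existing socket address types in qapi-schema.json look
roughly like this (doc comments omitted; quoted from memory, so do check
the tree):

```
{ 'struct': 'InetSocketAddress',
  'data': { 'host': 'str',
            'port': 'str',
            '*to': 'uint16',
            '*ipv4': 'bool',
            '*ipv6': 'bool' } }

{ 'struct': 'UnixSocketAddress',
  'data': { 'path': 'str' } }

{ 'union': 'SocketAddress',
  'data': { 'inet': 'InetSocketAddress',
            'unix': 'UnixSocketAddress',
            'fd': 'String' } }
```

Note @port is a 'str' there, so it can carry service names as well as
port numbers.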

> But I need to tweak them to fit the flat union patches at
> git://repo.or.cz/qemu/armbru.git qapi-not-next
> Currently I am reading the design changes; help in this area is appreciated

If you have questions, ask Eric or me in e-mail or on IRC.

>> I'm not familiar with RDMA, so I can't say whether your definition makes
>> sense there.  I can say that you shouldn't assume people are familiar
>> with RDMA.  Please explain what @host and @port mean there.  If they're
>> just like for 'tcp', say so.
>
> I shall remove the rdma part, because the gluster volfile fetch doesn't
> support it.
> Sorry for involving the rdma type in all my previous patches; I only
> included it in case gluster supports it in the future, but that doesn't
> seem likely.
>
> To confirm: we don't need the rdma type here.

If you want to prepare for future features, don't guess what they'll
look like (we all guess wrong more often than not).  Make your interface
extensible instead.  QAPI unions are a good tool for that.
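
For instance (illustrative only, not a concrete proposal), a union over
the existing address types leaves room to grow:

```
{ 'union': 'GlusterServer',
  'data': { 'tcp': 'InetSocketAddress',
            'unix': 'UnixSocketAddress' } }
```

A later series can then add an 'rdma' branch as a compatible extension
once its real shape is known, instead of guessing it now.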

>> For 'tcp', GlusterTransport duplicates InetSocketAddress, except it
>> doesn't support service names (@port is 'uint16' instead of 'str'),
>> doesn't support port ranges (no @to; not needed here), and doesn't
>> support controlling IPv4/IPv6 (no @ipv4, @ipv6).
>>
>> Why is it necessary to roll your own address type?  Why can't you use
>> SocketAddress?
>
> Sure, I shall take a look at InetSocketAddress and SocketAddress
>
>>> +
>>> +##
>>> +# @BlockdevOptionsGluster
>>> +#
>>> +# Driver specific block device options for Gluster
>>> +#
>>> +# @volume:      name of gluster volume where VM image resides
>>> +#
>>> +# @path:        absolute path to image file in gluster volume
>>> +#
>>> +# @server:      gluster server description
>>> +#
>>> +# @debug_level: #libgfapi log level (default '4' which is Error)
>>> +#
>>> +# Since: 2.7
>>> +##
>>> +{ 'struct': 'BlockdevOptionsGluster',
>>> +  'data': { 'volume': 'str',
>>> +            'path': 'str',
>>> +            'server': 'GlusterServer',
>>> +            '*debug_level': 'int' } }
>>
>> Are @volume and @path interpreted in any way in QEMU, or are they merely
>> sent to the Gluster server?
>
> Have a look at libgfapi (which IMO is poorly designed):
> glfs_set_volfile_server(struct glfs *fs, const char *transport, const
> char *host, int port)
>
> So @volume and @path are parsed from the command-line args and
> filled into the 'struct glfs' object.

After discussing this on IRC, I understand that:

* @server specifies a Gluster server

* @volume is effectively the name of a file name space on that server

* @path is a name in that name space

* Together, they specify an image file stored on Gluster.

If that's correct, the design makes sense for QMP.
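
For instance, something along these lines should then work over QMP
(untested sketch; 'node-name' is illustrative, and the exact wrapping of
BlockdevOptions inside the blockdev-add arguments depends on the QEMU
version):

```json
{ "execute": "blockdev-add",
  "arguments": {
    "options": { "driver": "gluster",
                 "node-name": "gluster-disk0",
                 "volume": "testvol",
                 "path": "/path/a.img",
                 "server": { "host": "1.2.3.4",
                             "port": 24007,
                             "transport": "tcp" } } } }
```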

Is the legacy syntax involving a gluster URI accessible via QMP?

> Thanks Markus

You're welcome!


* Re: [Qemu-devel] [PATCH v18 4/4] block/gluster: add support for multiple gluster servers
  2016-07-13 13:57 ` [Qemu-devel] [PATCH v18 4/4] block/gluster: add support for multiple gluster servers Prasanna Kumar Kalever
@ 2016-07-14 12:05   ` Markus Armbruster
  2016-07-15 13:47     ` Prasanna Kalever
  0 siblings, 1 reply; 11+ messages in thread
From: Markus Armbruster @ 2016-07-14 12:05 UTC (permalink / raw)
  To: Prasanna Kumar Kalever
  Cc: qemu-devel, kwolf, pkrempa, jcody, deepakcs, bharata, rtalur

Interface and error message review only.

Prasanna Kumar Kalever <prasanna.kalever@redhat.com> writes:

> This patch adds a way to specify multiple volfile servers to the gluster
> block backend of QEMU with tcp|rdma transport types and their port numbers.
>
> Problem:
>
> Currently VM Image on gluster volume is specified like this:
>
> file=gluster[+tcp]://host[:port]/testvol/a.img
>
> Assuming we have three hosts in trusted pool with replica 3 volume
> in action and unfortunately host (mentioned in the command above) went down
> for some reason, since the volume is replica 3 we now have other 2 hosts
> active from which we can boot the VM.
>
> But currently there is no mechanism to pass the other 2 gluster host
> addresses to qemu.

Awkward.  Perhaps:

  Say we have three hosts in a trusted pool with replica 3 volume
  in action.  When the host mentioned in the command above goes down
  for some reason, the other two hosts are still available.  But there's
  currently no way to tell QEMU about them.

> Solution:
>
> New way of specifying VM Image on gluster volume with volfile servers:
> (We still support old syntax to maintain backward compatibility)
>
> Basic command line syntax looks like:
>
> Pattern I:
>  -drive driver=gluster,
>         volume=testvol,path=/path/a.raw,
>         server.0.host=1.2.3.4,
>        [server.0.port=24007,]
>        [server.0.transport=tcp,]
>         server.1.host=5.6.7.8,
>        [server.1.port=24008,]
>        [server.1.transport=rdma,]

Don't forget to update this line when you drop transport 'rdma'.  More
of the same below.

>         server.2.host=/var/run/glusterd.socket,
>         server.2.transport=unix ...
>
> Pattern II:
>  'json:{"driver":"qcow2","file":{"driver":"gluster",
>        "volume":"testvol","path":"/path/a.qcow2",
>        "server":[{tuple0},{tuple1}, ...{tupleN}]}}'

Suggest to add -drive here, for symmetry with pattern I.

JSON calls the things in { ... } objects, not tuples.  Let's stick to
JSON terminology: [{object0},{object1}, ...].  But I think spelling
things out to a similar degree as in pattern I would be clearer:

   -drive 'json:{"driver": "qcow2",
                 "file": { "driver": "gluster",
                           "volume": "testvol",
                           "path": "/path/a.qcow2",
                           "server": [
                              { "host": "1.2.3.4",
                                "port": 24007,
                                "transport": "tcp" },
                              { "host": "5.6.7.8"
                                ... },
                              ... ] }
                 ... }'

>    driver      => 'gluster' (protocol name)
>    volume      => name of gluster volume where our VM image resides
>    path        => absolute path of image in gluster volume
>
>   {tuple}      => {"host":"1.2.3.4"[,"port":"24007","transport":"tcp"]}
>
>    host        => host address (hostname/ipv4/ipv6 addresses/socket path)
>    port        => port number on which glusterd is listening. (default 24007)
>    transport   => transport type used to connect to gluster management daemon,
>                    it can be tcp|rdma|unix (default 'tcp')
>
> Examples:
> 1.
>  -drive driver=qcow2,file.driver=gluster,
>         file.volume=testvol,file.path=/path/a.qcow2,
>         file.server.0.host=1.2.3.4,
>         file.server.0.port=24007,
>         file.server.0.transport=tcp,
>         file.server.1.host=5.6.7.8,
>         file.server.1.port=24008,
>         file.server.1.transport=rdma,
>         file.server.2.host=/var/run/glusterd.socket
>         file.server.1.transport=unix
> 2.
>  'json:{"driver":"qcow2","file":{"driver":"gluster","volume":"testvol",
>          "path":"/path/a.qcow2","server":
>          [{"host":"1.2.3.4","port":"24007","transport":"tcp"},
>           {"host":"4.5.6.7","port":"24008","transport":"rdma"},
>           {"host":"/var/run/glusterd.socket","transport":"unix"}] } }'

Ah, working examples.  Good!  No need to expand your abbreviated
description of pattern II then, just say object instead of tuple.  But if
you prefer to expand it, go ahead, your choice.

> This patch gives a mechanism to provide all the server addresses, which are in
> replica set, so in case host1 is down VM can still boot from any of the
> active hosts.
>
> This is equivalent to the backup-volfile-servers option supported by
> mount.glusterfs (FUSE way of mounting gluster volume)
>
> Credits: Sincere thanks to Kevin Wolf <kwolf@redhat.com> and
> "Deepak C Shetty" <deepakcs@redhat.com> for inputs and all their support
>
> Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
> ---
>  block/gluster.c      | 290 ++++++++++++++++++++++++++++++++++++++++++++-------
>  qapi/block-core.json |   4 +-
>  2 files changed, 255 insertions(+), 39 deletions(-)
>
> diff --git a/block/gluster.c b/block/gluster.c
> index 40dcbc1..41046f0 100644
> --- a/block/gluster.c
> +++ b/block/gluster.c
> @@ -12,8 +12,15 @@
>  #include "block/block_int.h"
>  #include "qapi/error.h"
>  #include "qemu/uri.h"
> +#include "qemu/error-report.h"
>  
>  #define GLUSTER_OPT_FILENAME        "filename"
> +#define GLUSTER_OPT_VOLUME          "volume"
> +#define GLUSTER_OPT_PATH            "path"
> +#define GLUSTER_OPT_HOST            "host"
> +#define GLUSTER_OPT_PORT            "port"
> +#define GLUSTER_OPT_TRANSPORT       "transport"
> +#define GLUSTER_OPT_SERVER_PATTERN  "server."
>  #define GLUSTER_OPT_DEBUG           "debug"
>  #define GLUSTER_DEFAULT_PORT        24007
>  #define GLUSTER_DEBUG_DEFAULT       4
> @@ -82,6 +89,46 @@ static QemuOptsList runtime_opts = {
>      },
>  };
>  
> +static QemuOptsList runtime_json_opts = {
> +    .name = "gluster_json",
> +    .head = QTAILQ_HEAD_INITIALIZER(runtime_json_opts.head),
> +    .desc = {
> +        {
> +            .name = GLUSTER_OPT_VOLUME,
> +            .type = QEMU_OPT_STRING,
> +            .help = "name of gluster volume where VM image resides",
> +        },
> +        {
> +            .name = GLUSTER_OPT_PATH,
> +            .type = QEMU_OPT_STRING,
> +            .help = "absolute path to image file in gluster volume",
> +        },
> +        { /* end of list */ }
> +    },
> +};
> +
> +static QemuOptsList runtime_tuple_opts = {
> +    .name = "gluster_tuple",
> +    .head = QTAILQ_HEAD_INITIALIZER(runtime_tuple_opts.head),
> +    .desc = {
> +        {
> +            .name = GLUSTER_OPT_HOST,
> +            .type = QEMU_OPT_STRING,
> +            .help = "host address (hostname/ipv4/ipv6 addresses/socket path)",
> +        },
> +        {
> +            .name = GLUSTER_OPT_PORT,
> +            .type = QEMU_OPT_NUMBER,
> +            .help = "port number on which glusterd is listening (default 24007)",
> +        },
> +        {
> +            .name = GLUSTER_OPT_TRANSPORT,
> +            .type = QEMU_OPT_STRING,
> +            .help = "transport type 'tcp'|'rdma'|'unix' (default 'tcp')",
> +        },
> +        { /* end of list */ }
> +    },
> +};
>  
>  static int parse_volume_options(BlockdevOptionsGluster *gconf, char *path)
>  {
> @@ -148,6 +195,7 @@ static int parse_volume_options(BlockdevOptionsGluster *gconf, char *path)
>  static int qemu_gluster_parseuri(BlockdevOptionsGluster *gconf,
>                                   const char *filename)
>  {
> +    GlusterServer *gsconf;
>      URI *uri;
>      QueryParams *qp = NULL;
>      bool is_unix = false;
> @@ -158,23 +206,24 @@ static int qemu_gluster_parseuri(BlockdevOptionsGluster *gconf,
>          return -EINVAL;
>      }
>  
> -    gconf->server = g_new0(GlusterServer, 1);
> +    gconf->server = g_new0(GlusterServerList, 1);
> +    gconf->server->value = gsconf = g_new0(GlusterServer, 1);
>  
>      /* transport */
>      if (!uri->scheme || !strcmp(uri->scheme, "gluster")) {
> -        gconf->server->transport = GLUSTER_TRANSPORT_TCP;
> +        gsconf->transport = GLUSTER_TRANSPORT_TCP;
>      } else if (!strcmp(uri->scheme, "gluster+tcp")) {
> -        gconf->server->transport = GLUSTER_TRANSPORT_TCP;
> +        gsconf->transport = GLUSTER_TRANSPORT_TCP;
>      } else if (!strcmp(uri->scheme, "gluster+unix")) {
> -        gconf->server->transport = GLUSTER_TRANSPORT_UNIX;
> +        gsconf->transport = GLUSTER_TRANSPORT_UNIX;
>          is_unix = true;
>      } else if (!strcmp(uri->scheme, "gluster+rdma")) {
> -        gconf->server->transport = GLUSTER_TRANSPORT_RDMA;
> +        gsconf->transport = GLUSTER_TRANSPORT_RDMA;
>      } else {
>          ret = -EINVAL;
>          goto out;
>      }
> -    gconf->server->has_transport = true;
> +    gsconf->has_transport = true;
>  
>      ret = parse_volume_options(gconf, uri->path);
>      if (ret < 0) {
> @@ -196,15 +245,15 @@ static int qemu_gluster_parseuri(BlockdevOptionsGluster *gconf,
>              ret = -EINVAL;
>              goto out;
>          }
> -        gconf->server->host = g_strdup(qp->p[0].value);
> +        gsconf->host = g_strdup(qp->p[0].value);
>      } else {
> -        gconf->server->host = g_strdup(uri->server ? uri->server : "localhost");
> +        gsconf->host = g_strdup(uri->server ? uri->server : "localhost");
>          if (uri->port) {
> -            gconf->server->port = uri->port;
> +            gsconf->port = uri->port;
>          } else {
> -            gconf->server->port = GLUSTER_DEFAULT_PORT;
> +            gsconf->port = GLUSTER_DEFAULT_PORT;
>          }
> -        gconf->server->has_port = true;
> +        gsconf->has_port = true;
>      }
>  
>  out:
> @@ -215,31 +264,26 @@ out:
>      return ret;
>  }
>  
> -static struct glfs *qemu_gluster_init(BlockdevOptionsGluster *gconf,
> -                                      const char *filename, Error **errp)
> +static struct glfs *qemu_gluster_glfs_init(BlockdevOptionsGluster *gconf,
> +                                           Error **errp)
>  {
> -    struct glfs *glfs = NULL;
> +    struct glfs *glfs;
>      int ret;
>      int old_errno;
> -
> -    ret = qemu_gluster_parseuri(gconf, filename);
> -    if (ret < 0) {
> -        error_setg(errp, "Usage: file=gluster[+transport]://[host[:port]]/"
> -                         "volume/path[?socket=...]");
> -        errno = -ret;
> -        goto out;
> -    }
> +    GlusterServerList *server = NULL;
>  
>      glfs = glfs_new(gconf->volume);
>      if (!glfs) {
>          goto out;
>      }
>  
> -    ret = glfs_set_volfile_server(glfs,
> -                                  GlusterTransport_lookup[gconf->server->transport],
> -                                  gconf->server->host, gconf->server->port);
> -    if (ret < 0) {
> -        goto out;
> +    for (server = gconf->server; server; server = server->next) {
> +        ret = glfs_set_volfile_server(glfs,
> +                                      GlusterTransport_lookup[server->value->transport],
> +                                      server->value->host, server->value->port);
> +        if (ret < 0) {
> +            goto out;
> +        }
>      }
>  
>      ret = glfs_set_logging(glfs, "-", gconf->debug_level);
> @@ -249,11 +293,13 @@ static struct glfs *qemu_gluster_init(BlockdevOptionsGluster *gconf,
>  
>      ret = glfs_init(glfs);
>      if (ret) {
> +        error_report("Gluster connection for 'volume: %s and path: %s': ",
> +                         gconf->volume, gconf->path);
> +        for (server = gconf->server; server; server = server->next) {
> +            error_report("failed for host %s", server->value->host);
> +        }
>          error_setg_errno(errp, errno,
> -                         "Gluster connection failed for host=%s port=%d "
> -                         "volume=%s path=%s transport=%s", gconf->server->host,
> -                         gconf->server->port, gconf->volume, gconf->path,
> -                         GlusterTransport_lookup[gconf->server->transport]);
> +                         "Please refer to gluster logs for more info");

error_setg() and error_report() do not mix.

error_report() reports to stderr or the current monitor.  Okay in some
contexts, wrong in others.  Okay contexts include QEMU startup, and HMP
commands.  Wrong contexts include QMP commands, and functions with an
Error **errp parameter.

It's wrong in QMP commands, because the error must be sent via QMP.

It's wrong in functions taking errp, because how to handle errors is the
caller's decision.  In particular, the caller may decide to ignore the
error.  The function mustn't babble about it with error_report().

In particular, when this runs in QMP context, the error sent to QMP will
just be

    "Please refer to gluster logs for more info: %s", strerror(errno)

and the real error "Gluster connection for ..."  gets barfed to
stderr.

Aside: when error_report() is correct, use exactly one per error.  You
may add information with error_printf().

You should call just error_setg_errno() here, with the full error
message.  Since the error message consists of a fixed part followed by
variable number of "failed for host" parts, you have to construct it
dynamically.  Perhaps with a series of g_strdup_printf().  Inefficient,
but good enough for an error path.

"Please refer to gluster logs for more info" is a hint.  The function to
add hints to an Error is error_append_hint().  Search the tree for
examples.

Finally, do test your error messages to make sure they come out
readable!
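
For what it's worth, the "build the message dynamically" suggestion
amounts to something like the following plain-C sketch (illustrative
names; the standard library stands in for g_strdup_printf(), and the
final error_setg_errno()/error_append_hint() calls are not shown):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Append one formatted piece to a heap-allocated message, growing it as
 * needed.  Inefficient, but good enough for an error path. */
static char *append_fmt(char *msg, const char *fmt, const char *arg)
{
    size_t old_len = msg ? strlen(msg) : 0;
    int extra = snprintf(NULL, 0, fmt, arg);      /* measure the piece */
    char *grown = realloc(msg, old_len + extra + 1);

    if (!grown) {
        free(msg);
        return NULL;
    }
    snprintf(grown + old_len, extra + 1, fmt, arg);
    return grown;
}

/* Fixed part first, then one "host=..." part per configured server; the
 * result goes into a single error_setg_errno()-style call. */
static char *build_gluster_error(const char *volume, const char *path,
                                 const char **hosts, size_t n_hosts)
{
    char *msg = append_fmt(NULL, "Gluster connection failed for volume=%s",
                           volume);
    size_t i;

    msg = msg ? append_fmt(msg, " path=%s", path) : NULL;
    for (i = 0; msg && i < n_hosts; i++) {
        msg = append_fmt(msg, ", host=%s", hosts[i]);
    }
    return msg;
}
```

The "Please refer to gluster logs for more info" part would then go into
error_append_hint() rather than into this message.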

>  
>          /* glfs_init sometimes doesn't set errno although docs suggest that */
>          if (errno == 0)
>              errno = EINVAL;

Well, if that's the case, shouldn't you do this before you pass
(possibly zero) errno to error_setg_errno()?

> @@ -272,6 +318,176 @@ out:
>      return NULL;
>  }
>  
> +static int qapi_enum_parse(const char *opt)
> +{
> +    int i;
> +
> +    if (!opt) {
> +        /* Set tcp as default */
> +        return GLUSTER_TRANSPORT_TCP;
> +    }
> +
> +    for (i = 0; i < GLUSTER_TRANSPORT__MAX; i++) {
> +        if (!strcmp(opt, GlusterTransport_lookup[i])) {
> +            return i;
> +        }
> +    }
> +
> +    return i;
> +}
> +
> +/*
> + * Convert the json formatted command line into qapi.
> +*/
> +static int qemu_gluster_parsejson(BlockdevOptionsGluster *gconf,

"parsejson" isn't a word.  "parse_json" would be two :)

> +                                  QDict *options)
> +{
> +    QemuOpts *opts;
> +    GlusterServer *gsconf;
> +    GlusterServerList *curr = NULL;
> +    QDict *backing_options = NULL;
> +    Error *local_err = NULL;
> +    char *str = NULL;
> +    const char *ptr;
> +    size_t num_servers;
> +    int i;
> +
> +    /* create opts info from runtime_json_opts list */
> +    opts = qemu_opts_create(&runtime_json_opts, NULL, 0, &error_abort);
> +    qemu_opts_absorb_qdict(opts, options, &local_err);
> +    if (local_err) {
> +        goto out;
> +    }
> +
> +    num_servers = qdict_array_entries(options, GLUSTER_OPT_SERVER_PATTERN);
> +    if (num_servers < 1) {
> +        error_setg(&local_err, "qemu_gluster: please provide 'server' "
> +                               "option with valid fields in array of tuples");

This isn't an error message, it's instructions what to do.  Such
instructions can be useful when they're correct, but they can't replace
an error message.  The error message should state what's wrong.  No
less, no more.

Moreover, avoid prefixes like "qemu_gluster:".  Usually, the fact that
this is about Gluster is obvious.  When it isn't, providing context is
the caller's job.

> +        goto out;
> +    }
> +
> +    ptr = qemu_opt_get(opts, GLUSTER_OPT_VOLUME);
> +    if (!ptr) {
> +        error_setg(&local_err, "qemu_gluster: please provide 'volume' "
> +                               "option");

Not an error message.

> +        goto out;
> +    }
> +    gconf->volume = g_strdup(ptr);
> +
> +    ptr = qemu_opt_get(opts, GLUSTER_OPT_PATH);
> +    if (!ptr) {
> +        error_setg(&local_err, "qemu_gluster: please provide 'path' "
> +                               "option");

Not an error message.

> +        goto out;
> +    }
> +    gconf->path = g_strdup(ptr);
> +
> +    qemu_opts_del(opts);
> +
> +    /* create opts info from runtime_tuple_opts list */
> +    for (i = 0; i < num_servers; i++) {
> +        opts = qemu_opts_create(&runtime_tuple_opts, NULL, 0, &error_abort);
> +        str = g_strdup_printf(GLUSTER_OPT_SERVER_PATTERN"%d.", i);
> +        qdict_extract_subqdict(options, &backing_options, str);
> +        qemu_opts_absorb_qdict(opts, backing_options, &local_err);
> +        if (local_err) {
> +            goto out;
> +        }
> +        qdict_del(backing_options, str);
> +        g_free(str);
> +        str = NULL;
> +
> +        ptr = qemu_opt_get(opts, GLUSTER_OPT_HOST);
> +        if (!ptr) {
> +            error_setg(&local_err, "qemu_gluster: server.{tuple.%d} "
> +                                   "requires 'host' option", i);
> +            goto out;
> +        }
> +
> +        gsconf = g_new0(GlusterServer, 1);
> +
> +        gsconf->host = g_strdup(ptr);
> +
> +        ptr = qemu_opt_get(opts, GLUSTER_OPT_TRANSPORT);
> +        /* check whether transport type specified in json command is valid */
> +        gsconf->transport = qapi_enum_parse(ptr);
> +        if (gsconf->transport == GLUSTER_TRANSPORT__MAX) {
> +            error_setg(&local_err, "qemu_gluster: please set 'transport'"
> +                                   " type in tuple.%d as tcp|rdma|unix", i);

Not an error message.

> +            qapi_free_GlusterServer(gsconf);
> +            goto out;
> +        }
> +        gsconf->has_transport = true;
> +
> +        /* we don't need port number for unix transport */
> +        if (gsconf->transport != GLUSTER_TRANSPORT_UNIX) {
> +            gsconf->port = qemu_opt_get_number(opts, GLUSTER_OPT_PORT,
> +                                               GLUSTER_DEFAULT_PORT);
> +            gsconf->has_port = true;
> +        }
> +
> +        if (gconf->server == NULL) {
> +            gconf->server = g_new0(GlusterServerList, 1);
> +            gconf->server->value = gsconf;
> +            curr = gconf->server;
> +        } else {
> +            curr->next = g_new0(GlusterServerList, 1);
> +            curr->next->value = gsconf;
> +            curr = curr->next;
> +        }
> +
> +        qemu_opts_del(opts);
> +    }
> +
> +    return 0;
> +
> +out:
> +    error_report_err(local_err);

Aha, qemu_gluster_parsejson() reports errors with error_report_err().

> +    qemu_opts_del(opts);
> +    if (str) {
> +        qdict_del(backing_options, str);
> +        g_free(str);
> +    }
> +    errno = EINVAL;
> +    return -errno;
> +}
> +
> +static struct glfs *qemu_gluster_init(BlockdevOptionsGluster *gconf,
> +                                      const char *filename,
> +                                      QDict *options, Error **errp)
> +{
> +    int ret;
> +
> +    if (filename) {
> +        ret = qemu_gluster_parseuri(gconf, filename);
> +        if (ret < 0) {
> +            error_setg(errp, "Usage: file=gluster[+transport]://[host[:port]]/"
> +                             "volume/path[?socket=...]");

Not an error message.

> +            errno = -ret;
> +            return NULL;
> +        }
> +    } else {
> +        ret = qemu_gluster_parsejson(gconf, options);

Anti-pattern: function that reports errors with error_report() or
similar called from a function with an Error **errp parameter.

In QMP context, qemu_gluster_parsejson() writes the real error to
stderr, and QMP sends back only the useless "Usage: ..." error below.

qemu_gluster_parsejson() needs to return the Error instead, via errp
parameter.  Then you can error_propagate() here, and...

> +        if (ret < 0) {
> +            error_setg(errp, "Usage: "
> +                             "-drive driver=qcow2,file.driver=gluster,"
> +                             "file.volume=testvol,file.path=/path/a.qcow2,"
> +                             "file.server.0.host=1.2.3.4,"
> +                             "[file.server.0.port=24007,]"
> +                             "[file.server.0.transport=tcp,]"
> +                             "file.server.1.host=5.6.7.8,"
> +                             "[file.server.1.port=24008,]"
> +                             "[file.server.1.transport=rdma,]"
> +                             "file.server.2.host=/var/run/glusterd.socket,"
> +                             "file.server.2.transport=unix ...");

... drop this usage hint.

> +            errno = -ret;
> +            return NULL;
> +        }
> +    }
> +
> +    return qemu_gluster_glfs_init(gconf, errp);
> +}
> +
>  static void qemu_gluster_complete_aio(void *opaque)
>  {
>      GlusterAIOCB *acb = (GlusterAIOCB *)opaque;
> @@ -371,7 +587,7 @@ static int qemu_gluster_open(BlockDriverState *bs,  QDict *options,
>      gconf = g_new0(BlockdevOptionsGluster, 1);
>      gconf->debug_level = s->debug_level;
>      gconf->has_debug_level = true;
> -    s->glfs = qemu_gluster_init(gconf, filename, errp);
> +    s->glfs = qemu_gluster_init(gconf, filename, options, errp);
>      if (!s->glfs) {
>          ret = -errno;
>          goto out;
> @@ -442,7 +658,7 @@ static int qemu_gluster_reopen_prepare(BDRVReopenState *state,
>      gconf = g_new0(BlockdevOptionsGluster, 1);
>      gconf->debug_level = s->debug_level;
>      gconf->has_debug_level = true;
> -    reop_s->glfs = qemu_gluster_init(gconf, state->bs->filename, errp);
> +    reop_s->glfs = qemu_gluster_init(gconf, state->bs->filename, NULL, errp);
>      if (reop_s->glfs == NULL) {
>          ret = -errno;
>          goto exit;
> @@ -589,7 +805,7 @@ static int qemu_gluster_create(const char *filename,
>      }
>      gconf->has_debug_level = true;
>  
> -    glfs = qemu_gluster_init(gconf, filename, errp);
> +    glfs = qemu_gluster_init(gconf, filename, NULL, errp);
>      if (!glfs) {
>          ret = -errno;
>          goto out;
> @@ -969,7 +1185,7 @@ static BlockDriver bdrv_gluster = {
>      .format_name                  = "gluster",
>      .protocol_name                = "gluster",
>      .instance_size                = sizeof(BDRVGlusterState),
> -    .bdrv_needs_filename          = true,
> +    .bdrv_needs_filename          = false,
>      .bdrv_file_open               = qemu_gluster_open,
>      .bdrv_reopen_prepare          = qemu_gluster_reopen_prepare,
>      .bdrv_reopen_commit           = qemu_gluster_reopen_commit,
> @@ -997,7 +1213,7 @@ static BlockDriver bdrv_gluster_tcp = {
>      .format_name                  = "gluster",
>      .protocol_name                = "gluster+tcp",
>      .instance_size                = sizeof(BDRVGlusterState),
> -    .bdrv_needs_filename          = true,
> +    .bdrv_needs_filename          = false,
>      .bdrv_file_open               = qemu_gluster_open,
>      .bdrv_reopen_prepare          = qemu_gluster_reopen_prepare,
>      .bdrv_reopen_commit           = qemu_gluster_reopen_commit,
> @@ -1053,7 +1269,7 @@ static BlockDriver bdrv_gluster_rdma = {
>      .format_name                  = "gluster",
>      .protocol_name                = "gluster+rdma",
>      .instance_size                = sizeof(BDRVGlusterState),
> -    .bdrv_needs_filename          = true,
> +    .bdrv_needs_filename          = false,
>      .bdrv_file_open               = qemu_gluster_open,
>      .bdrv_reopen_prepare          = qemu_gluster_reopen_prepare,
>      .bdrv_reopen_commit           = qemu_gluster_reopen_commit,
> diff --git a/qapi/block-core.json b/qapi/block-core.json
> index f8b38bb..52cee7d 100644
> --- a/qapi/block-core.json
> +++ b/qapi/block-core.json
> @@ -2077,7 +2077,7 @@
>  #
>  # @path:        absolute path to image file in gluster volume
>  #
> -# @server:      gluster server description
> +# @server:      one or more gluster server descriptions
>  #
>  # @debug_level: #libgfapi log level (default '4' which is Error)
>  #
> @@ -2086,7 +2086,7 @@
>  { 'struct': 'BlockdevOptionsGluster',
>    'data': { 'volume': 'str',
>              'path': 'str',
> -            'server': 'GlusterServer',
> +            'server': [ 'GlusterServer' ],
>              '*debug_level': 'int' } }
>  
>  ##

Incompatible change, okay because the thing got added only in the
previous patch.


* Re: [Qemu-devel] [PATCH v18 3/4] block/gluster: using new qapi schema
  2016-07-14 10:52       ` Markus Armbruster
@ 2016-07-14 15:33         ` Eric Blake
  0 siblings, 0 replies; 11+ messages in thread
From: Eric Blake @ 2016-07-14 15:33 UTC (permalink / raw)
  To: Markus Armbruster, Prasanna Kalever
  Cc: Kevin Wolf, Peter Krempa, Prasanna Kumar Kalever, Jeffrey Cody,
	qemu-devel, deepakcs, bharata, Raghavendra Talur

On 07/14/2016 04:52 AM, Markus Armbruster wrote:

>>>> +
>>>> +##
>>>> +# @BlockdevOptionsGluster
>>>> +#
>>>> +# Driver specific block device options for Gluster
>>>> +#
>>>> +# @volume:      name of gluster volume where VM image resides
>>>> +#
>>>> +# @path:        absolute path to image file in gluster volume
>>>> +#
>>>> +# @server:      gluster server description
>>>> +#
>>>> +# @debug_level: #libgfapi log level (default '4' which is Error)
>>>> +#
>>>> +# Since: 2.7
>>>> +##
>>>> +{ 'struct': 'BlockdevOptionsGluster',
>>>> +  'data': { 'volume': 'str',
>>>> +            'path': 'str',
>>>> +            'server': 'GlusterServer',
>>>> +            '*debug_level': 'int' } }
>>>

GlusterServer is very similar to the existing SocketAddress; the
questions at hand are whether it should be a flat union instead of a
simple union, and whether 'port' on the 'inet' branch should accept
integers instead of (or in addition to) strings.  Also, if you're going
to immediately convert it to an array of servers in the next patch, it
may be better to do that up front; I guess it boils down to how much
churn there is in converting the rest of the code.  If it is
intentionally different from the final version, at least explicitly call
that out in the commit message.
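
For illustration, a flat-union GlusterServer along the lines Eric suggests (modeled on SocketAddress) might look like the sketch below; the branch layout and field names are assumptions for discussion, not what the patch proposes:

```
{ 'union': 'GlusterServer',
  'base': { 'type': 'GlusterTransport' },
  'discriminator': 'type',
  'data': { 'tcp': 'InetSocketAddress',
            'unix': 'UnixSocketAddress' } }
```

With a flat union like this, 'transport' folds into the discriminator and each branch only carries the fields that make sense for it (e.g. no 'port' on the unix branch).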

>>> Are @volume and @path interpreted in any way in QEMU, or are they merely
>>> sent to the Gluster server?
>>
>> have a look at libgfapi (which IMO is poorly designed):
>> glfs_set_volfile_server (struct glfs *fs, const char *transport, const
>> char *host, int port)
>>
>> So the @volume and @path are parsed from the command line args and
>> filled in 'struct glfs' object
> 
> After discussing this on IRC, I understand that:
> 
> * @server specifies a Gluster server
> 
> * @volume is effectively the name of a file name space on that server
> 
> * @path is a name in that name space
> 
> * Together, they specify an image file stored on Gluster.
> 
> If that's correct, the design makes sense for QMP.
> 
> Is the legacy syntax involving a gluster URI accessible via QMP?

As far as I know, 'blockdev-add' doesn't yet support gluster, so the
only way to hotplug a gluster volume at the moment via QMP is to resort
to HMP command passthrough.  At least libvirt is still using HMP
passthrough for all block device hotplugs, for two reasons: (1)
blockdev-add is incomplete, and (2) libvirt hasn't been taught to use
node names everywhere.
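
Concretely, that HMP passthrough goes over QMP roughly like this (the drive_add arguments here are hypothetical, for illustration only):

```json
{ "execute": "human-monitor-command",
  "arguments": { "command-line":
      "drive_add dummy file=gluster://1.2.3.4/testvol/a.img,format=raw,if=none,id=drive-gluster0" } }
```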

-- 
Eric Blake   eblake redhat com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org




* Re: [Qemu-devel] [PATCH v18 4/4] block/gluster: add support for multiple gluster servers
  2016-07-14 12:05   ` Markus Armbruster
@ 2016-07-15 13:47     ` Prasanna Kalever
  0 siblings, 0 replies; 11+ messages in thread
From: Prasanna Kalever @ 2016-07-15 13:47 UTC (permalink / raw)
  To: Markus Armbruster
  Cc: Prasanna Kumar Kalever, qemu-devel, Kevin Wolf, Peter Krempa,
	Jeffrey Cody, deepakcs, bharata, Raghavendra Talur

On Thu, Jul 14, 2016 at 5:35 PM, Markus Armbruster <armbru@redhat.com> wrote:
> Interface and error message review only.
>
> Prasanna Kumar Kalever <prasanna.kalever@redhat.com> writes:
>
>> This patch adds a way to specify multiple volfile servers to the gluster
>> block backend of QEMU with tcp|rdma transport types and their port numbers.
>>
>> Problem:
>>
>> Currently VM Image on gluster volume is specified like this:
>>
>> file=gluster[+tcp]://host[:port]/testvol/a.img
>>
>> Assuming we have three hosts in trusted pool with replica 3 volume
>> in action and unfortunately host (mentioned in the command above) went down
>> for some reason, since the volume is replica 3 we now have other 2 hosts
>> active from which we can boot the VM.
>>
>> But currently there is no mechanism to pass the other 2 gluster host
>> addresses to qemu.
>
> Awkward.  Perhaps:
>
>   Say we have three hosts in a trusted pool with replica 3 volume
>   in action.  When the host mentioned in the command above goes down
>   for some reason, the other two hosts are still available.  But there's
>   currently no way to tell QEMU about them.

Will incorporate in v19

>
>> Solution:
>>
>> New way of specifying VM Image on gluster volume with volfile servers:
>> (We still support old syntax to maintain backward compatibility)
>>
>> Basic command line syntax looks like:
>>
>> Pattern I:
>>  -drive driver=gluster,
>>         volume=testvol,path=/path/a.raw,
>>         server.0.host=1.2.3.4,
>>        [server.0.port=24007,]
>>        [server.0.transport=tcp,]
>>         server.1.host=5.6.7.8,
>>        [server.1.port=24008,]
>>        [server.1.transport=rdma,]
>
> Don't forget to update this line when you drop transport 'rdma'.  More
> of the same below.
>
>>         server.2.host=/var/run/glusterd.socket,
>>         server.2.transport=unix ...
>>
>> Pattern II:
>>  'json:{"driver":"qcow2","file":{"driver":"gluster",
>>        "volume":"testvol","path":"/path/a.qcow2",
>>        "server":[{tuple0},{tuple1}, ...{tupleN}]}}'
>
> Suggest to add -drive here, for symmetry with pattern I.
>
> JSON calls the things in { ... } objects, not tuples.  Let's stick to
> JSON terminology: [{object0},{object1}, ...].  But I think spelling
> things out to a similar degree as in pattern I would be clearer:
>
>    -drive 'json:{"driver": "qcow2",
>                  "file": { "driver": "gluster",
>                            "volume": "testvol",
>                            "path": "/path/a.qcow2",
>                            "server": [
>                               { "host": "1.2.3.4",
>                                 "port": 24007,
>                                 "transport": "tcp" },
>                               { "host": "5.6.7.8"
>                                 ... },
>                               ... ] }
>                  ... }'

IMO, this doesn't work; I have given it a try.

>
>>    driver      => 'gluster' (protocol name)
>>    volume      => name of gluster volume where our VM image resides
>>    path        => absolute path of image in gluster volume
>>
>>   {tuple}      => {"host":"1.2.3.4"[,"port":"24007","transport":"tcp"]}
>>
>>    host        => host address (hostname/ipv4/ipv6 addresses/socket path)
>>    port        => port number on which glusterd is listening. (default 24007)
>>    transport   => transport type used to connect to gluster management daemon,
>>                    it can be tcp|rdma|unix (default 'tcp')
>>
>> Examples:
>> 1.
>>  -drive driver=qcow2,file.driver=gluster,
>>         file.volume=testvol,file.path=/path/a.qcow2,
>>         file.server.0.host=1.2.3.4,
>>         file.server.0.port=24007,
>>         file.server.0.transport=tcp,
>>         file.server.1.host=5.6.7.8,
>>         file.server.1.port=24008,
>>         file.server.1.transport=rdma,
>>         file.server.2.host=/var/run/glusterd.socket
>>         file.server.1.transport=unix
>> 2.
>>  'json:{"driver":"qcow2","file":{"driver":"gluster","volume":"testvol",
>>          "path":"/path/a.qcow2","server":
>>          [{"host":"1.2.3.4","port":"24007","transport":"tcp"},
>>           {"host":"4.5.6.7","port":"24008","transport":"rdma"},
>>           {"host":"/var/run/glusterd.socket","transport":"unix"}] } }'
>
> Ah, working examples.  Good!  No need to expand your abbreviated
> description of pattern II then, just say object instead of tuple.  But if
> you prefer to expand it, go ahead, your choice.
>
>> This patch gives a mechanism to provide all the server addresses, which are in
>> replica set, so in case host1 is down VM can still boot from any of the
>> active hosts.
>>
>> This is equivalent to the backup-volfile-servers option supported by
>> mount.glusterfs (FUSE way of mounting gluster volume)
>>
>> Credits: Sincere thanks to Kevin Wolf <kwolf@redhat.com> and
>> "Deepak C Shetty" <deepakcs@redhat.com> for inputs and all their support
>>
>> Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
>> ---
>>  block/gluster.c      | 290 ++++++++++++++++++++++++++++++++++++++++++++-------
>>  qapi/block-core.json |   4 +-
>>  2 files changed, 255 insertions(+), 39 deletions(-)
>>
>> diff --git a/block/gluster.c b/block/gluster.c
>> index 40dcbc1..41046f0 100644
>> --- a/block/gluster.c
>> +++ b/block/gluster.c
>> @@ -12,8 +12,15 @@
>>  #include "block/block_int.h"
>>  #include "qapi/error.h"
>>  #include "qemu/uri.h"
>> +#include "qemu/error-report.h"
>>
>>  #define GLUSTER_OPT_FILENAME        "filename"
>> +#define GLUSTER_OPT_VOLUME          "volume"
>> +#define GLUSTER_OPT_PATH            "path"
>> +#define GLUSTER_OPT_HOST            "host"
>> +#define GLUSTER_OPT_PORT            "port"
>> +#define GLUSTER_OPT_TRANSPORT       "transport"
>> +#define GLUSTER_OPT_SERVER_PATTERN  "server."
>>  #define GLUSTER_OPT_DEBUG           "debug"
>>  #define GLUSTER_DEFAULT_PORT        24007
>>  #define GLUSTER_DEBUG_DEFAULT       4
>> @@ -82,6 +89,46 @@ static QemuOptsList runtime_opts = {
>>      },
>>  };
>>
>> +static QemuOptsList runtime_json_opts = {
>> +    .name = "gluster_json",
>> +    .head = QTAILQ_HEAD_INITIALIZER(runtime_json_opts.head),
>> +    .desc = {
>> +        {
>> +            .name = GLUSTER_OPT_VOLUME,
>> +            .type = QEMU_OPT_STRING,
>> +            .help = "name of gluster volume where VM image resides",
>> +        },
>> +        {
>> +            .name = GLUSTER_OPT_PATH,
>> +            .type = QEMU_OPT_STRING,
>> +            .help = "absolute path to image file in gluster volume",
>> +        },
>> +        { /* end of list */ }
>> +    },
>> +};
>> +
>> +static QemuOptsList runtime_tuple_opts = {
>> +    .name = "gluster_tuple",
>> +    .head = QTAILQ_HEAD_INITIALIZER(runtime_tuple_opts.head),
>> +    .desc = {
>> +        {
>> +            .name = GLUSTER_OPT_HOST,
>> +            .type = QEMU_OPT_STRING,
>> +            .help = "host address (hostname/ipv4/ipv6 addresses/socket path)",
>> +        },
>> +        {
>> +            .name = GLUSTER_OPT_PORT,
>> +            .type = QEMU_OPT_NUMBER,
>> +            .help = "port number on which glusterd is listening (default 24007)",
>> +        },
>> +        {
>> +            .name = GLUSTER_OPT_TRANSPORT,
>> +            .type = QEMU_OPT_STRING,
>> +            .help = "transport type 'tcp'|'rdma'|'unix' (default 'tcp')",
>> +        },
>> +        { /* end of list */ }
>> +    },
>> +};
>>
>>  static int parse_volume_options(BlockdevOptionsGluster *gconf, char *path)
>>  {
>> @@ -148,6 +195,7 @@ static int parse_volume_options(BlockdevOptionsGluster *gconf, char *path)
>>  static int qemu_gluster_parseuri(BlockdevOptionsGluster *gconf,
>>                                   const char *filename)
>>  {
>> +    GlusterServer *gsconf;
>>      URI *uri;
>>      QueryParams *qp = NULL;
>>      bool is_unix = false;
>> @@ -158,23 +206,24 @@ static int qemu_gluster_parseuri(BlockdevOptionsGluster *gconf,
>>          return -EINVAL;
>>      }
>>
>> -    gconf->server = g_new0(GlusterServer, 1);
>> +    gconf->server = g_new0(GlusterServerList, 1);
>> +    gconf->server->value = gsconf = g_new0(GlusterServer, 1);
>>
>>      /* transport */
>>      if (!uri->scheme || !strcmp(uri->scheme, "gluster")) {
>> -        gconf->server->transport = GLUSTER_TRANSPORT_TCP;
>> +        gsconf->transport = GLUSTER_TRANSPORT_TCP;
>>      } else if (!strcmp(uri->scheme, "gluster+tcp")) {
>> -        gconf->server->transport = GLUSTER_TRANSPORT_TCP;
>> +        gsconf->transport = GLUSTER_TRANSPORT_TCP;
>>      } else if (!strcmp(uri->scheme, "gluster+unix")) {
>> -        gconf->server->transport = GLUSTER_TRANSPORT_UNIX;
>> +        gsconf->transport = GLUSTER_TRANSPORT_UNIX;
>>          is_unix = true;
>>      } else if (!strcmp(uri->scheme, "gluster+rdma")) {
>> -        gconf->server->transport = GLUSTER_TRANSPORT_RDMA;
>> +        gsconf->transport = GLUSTER_TRANSPORT_RDMA;
>>      } else {
>>          ret = -EINVAL;
>>          goto out;
>>      }
>> -    gconf->server->has_transport = true;
>> +    gsconf->has_transport = true;
>>
>>      ret = parse_volume_options(gconf, uri->path);
>>      if (ret < 0) {
>> @@ -196,15 +245,15 @@ static int qemu_gluster_parseuri(BlockdevOptionsGluster *gconf,
>>              ret = -EINVAL;
>>              goto out;
>>          }
>> -        gconf->server->host = g_strdup(qp->p[0].value);
>> +        gsconf->host = g_strdup(qp->p[0].value);
>>      } else {
>> -        gconf->server->host = g_strdup(uri->server ? uri->server : "localhost");
>> +        gsconf->host = g_strdup(uri->server ? uri->server : "localhost");
>>          if (uri->port) {
>> -            gconf->server->port = uri->port;
>> +            gsconf->port = uri->port;
>>          } else {
>> -            gconf->server->port = GLUSTER_DEFAULT_PORT;
>> +            gsconf->port = GLUSTER_DEFAULT_PORT;
>>          }
>> -        gconf->server->has_port = true;
>> +        gsconf->has_port = true;
>>      }
>>
>>  out:
>> @@ -215,31 +264,26 @@ out:
>>      return ret;
>>  }
>>
>> -static struct glfs *qemu_gluster_init(BlockdevOptionsGluster *gconf,
>> -                                      const char *filename, Error **errp)
>> +static struct glfs *qemu_gluster_glfs_init(BlockdevOptionsGluster *gconf,
>> +                                           Error **errp)
>>  {
>> -    struct glfs *glfs = NULL;
>> +    struct glfs *glfs;
>>      int ret;
>>      int old_errno;
>> -
>> -    ret = qemu_gluster_parseuri(gconf, filename);
>> -    if (ret < 0) {
>> -        error_setg(errp, "Usage: file=gluster[+transport]://[host[:port]]/"
>> -                         "volume/path[?socket=...]");
>> -        errno = -ret;
>> -        goto out;
>> -    }
>> +    GlusterServerList *server = NULL;
>>
>>      glfs = glfs_new(gconf->volume);
>>      if (!glfs) {
>>          goto out;
>>      }
>>
>> -    ret = glfs_set_volfile_server(glfs,
>> -                                  GlusterTransport_lookup[gconf->server->transport],
>> -                                  gconf->server->host, gconf->server->port);
>> -    if (ret < 0) {
>> -        goto out;
>> +    for (server = gconf->server; server; server = server->next) {
>> +        ret = glfs_set_volfile_server(glfs,
>> +                                      GlusterTransport_lookup[server->value->transport],
>> +                                      server->value->host, server->value->port);
>> +        if (ret < 0) {
>> +            goto out;
>> +        }
>>      }
>>
>>      ret = glfs_set_logging(glfs, "-", gconf->debug_level);
>> @@ -249,11 +293,13 @@ static struct glfs *qemu_gluster_init(BlockdevOptionsGluster *gconf,
>>
>>      ret = glfs_init(glfs);
>>      if (ret) {
>> +        error_report("Gluster connection for 'volume: %s and path: %s': ",
>> +                         gconf->volume, gconf->path);
>> +        for (server = gconf->server; server; server = server->next) {
>> +            error_report("failed for host %s", server->value->host);
>> +        }
>>          error_setg_errno(errp, errno,
>> -                         "Gluster connection failed for host=%s port=%d "
>> -                         "volume=%s path=%s transport=%s", gconf->server->host,
>> -                         gconf->server->port, gconf->volume, gconf->path,
>> -                         GlusterTransport_lookup[gconf->server->transport]);
>> +                         "Please refer to gluster logs for more info");
>
> error_setg() and error_report() do not mix.
>
> error_report() reports to stderr or the current monitor.  Okay in some
> contexts, wrong in others.  Okay contexts include QEMU startup, and HMP
> commands.  Wrong contexts include QMP commands, and functions with an
> Error **errp parameter.
>
> It's wrong in QMP commands, because the error must be sent via QMP.
>
> It's wrong in functions taking errp, because how to handle errors is the
> caller's decision.  In particular, the caller may decide to ignore the
> error.  The function mustn't babble about it with error_report().
>
> In particular, when this runs in QMP context, the error sent to QMP will
> just be
>
>     "Please refer to gluster logs for more info: %s", strerror(errno)
>
> and the real error "Gluster connection for ..."  gets barfed to
> stderr.
>
> Aside: when error_report() is correct, use exactly one per error.  You
> may add information with error_printf().
>
> You should call just error_setg_errno() here, with the full error
> message.  Since the error message consists of a fixed part followed by
> variable number of "failed for host" parts, you have to construct it
> dynamically.  Perhaps with a series of g_strdup_printf().  Inefficient,
> but good enough for an error path.
>
> "Please refer to gluster logs for more info" is a hint.  The function to
> add hints to an Error is error_append_hint().  Search the tree for
> examples.
>
> Finally, do test your error messages to make sure they come out
> readable!

That's awesome, thanks for the thorough explanation. I hope I will not
disappoint you in v19.
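
For reference, the dynamic construction Markus describes — a fixed prefix plus one "failed for host" part per server, built up front so a single error_setg_errno()-style call can carry the whole message — can be sketched outside QEMU like this. The ServerList type, function name, and message wording are stand-ins, not QEMU code:

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative stand-in for QAPI's GlusterServerList. */
typedef struct ServerList {
    const char *host;
    struct ServerList *next;
} ServerList;

/* Build the whole message before reporting: fixed prefix, then one
 * part per server.  Grows the buffer with realloc(); inefficient,
 * but this only runs on the error path. */
static char *build_conn_error(const char *volume, const char *path,
                              const ServerList *servers)
{
    const ServerList *s;
    size_t len, extra;
    char *msg;

    len = (size_t)snprintf(NULL, 0,
                           "Gluster connection failed for volume=%s path=%s",
                           volume, path);
    msg = malloc(len + 1);
    snprintf(msg, len + 1,
             "Gluster connection failed for volume=%s path=%s", volume, path);

    for (s = servers; s; s = s->next) {
        extra = (size_t)snprintf(NULL, 0, "; tried host %s", s->host);
        msg = realloc(msg, len + extra + 1);
        snprintf(msg + len, extra + 1, "; tried host %s", s->host);
        len += extra;
    }
    return msg;
}
```

In QEMU itself the equivalent would be a series of g_strdup_printf() calls feeding a single error_setg_errno(), with "Please refer to gluster logs" attached via error_append_hint().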

>
>>
>>          /* glfs_init sometimes doesn't set errno although docs suggest that */
>>          if (errno == 0)
>>              errno = EINVAL;
>
> Well, if that's the case, shouldn't you do this before you pass
> (possibly zero) errno to error_setg_errno()?
>
>> @@ -272,6 +318,176 @@ out:
>>      return NULL;
>>  }
>>
>> +static int qapi_enum_parse(const char *opt)
>> +{
>> +    int i;
>> +
>> +    if (!opt) {
>> +        /* Set tcp as default */
>> +        return GLUSTER_TRANSPORT_TCP;
>> +    }
>> +
>> +    for (i = 0; i < GLUSTER_TRANSPORT__MAX; i++) {
>> +        if (!strcmp(opt, GlusterTransport_lookup[i])) {
>> +            return i;
>> +        }
>> +    }
>> +
>> +    return i;
>> +}
>> +
>> +/*
>> + * Convert the json formatted command line into qapi.
>> +*/
>> +static int qemu_gluster_parsejson(BlockdevOptionsGluster *gconf,
>
> "parsejson" isn't a word.  "parse_json" would be two :)

taken care.

>
>> +                                  QDict *options)
>> +{
>> +    QemuOpts *opts;
>> +    GlusterServer *gsconf;
>> +    GlusterServerList *curr = NULL;
>> +    QDict *backing_options = NULL;
>> +    Error *local_err = NULL;
>> +    char *str = NULL;
>> +    const char *ptr;
>> +    size_t num_servers;
>> +    int i;
>> +
>> +    /* create opts info from runtime_json_opts list */
>> +    opts = qemu_opts_create(&runtime_json_opts, NULL, 0, &error_abort);
>> +    qemu_opts_absorb_qdict(opts, options, &local_err);
>> +    if (local_err) {
>> +        goto out;
>> +    }
>> +
>> +    num_servers = qdict_array_entries(options, GLUSTER_OPT_SERVER_PATTERN);
>> +    if (num_servers < 1) {
>> +        error_setg(&local_err, "qemu_gluster: please provide 'server' "
>> +                               "option with valid fields in array of tuples");
>
> This isn't an error message, it's instructions what to do.  Such
> instructions can be useful when they're correct, but they can't replace
> an error message.  The error message should state what's wrong.  No
> less, no more.
>
> Moreover, avoid prefixes like "qemu_gluster:".  Usually, the fact that
> this is about Gluster is obvious.  When it isn't, providing context is
> the caller's job.

sure, I have modified accordingly.
>
>> +        goto out;
>> +    }
>> +
>> +    ptr = qemu_opt_get(opts, GLUSTER_OPT_VOLUME);
>> +    if (!ptr) {
>> +        error_setg(&local_err, "qemu_gluster: please provide 'volume' "
>> +                               "option");
>
> Not an error message.
>
>> +        goto out;
>> +    }
>> +    gconf->volume = g_strdup(ptr);
>> +
>> +    ptr = qemu_opt_get(opts, GLUSTER_OPT_PATH);
>> +    if (!ptr) {
>> +        error_setg(&local_err, "qemu_gluster: please provide 'path' "
>> +                               "option");
>
> Not an error message.
>
>> +        goto out;
>> +    }
>> +    gconf->path = g_strdup(ptr);
>> +
>> +    qemu_opts_del(opts);
>> +
>> +    /* create opts info from runtime_tuple_opts list */
>> +    for (i = 0; i < num_servers; i++) {
>> +        opts = qemu_opts_create(&runtime_tuple_opts, NULL, 0, &error_abort);
>> +        str = g_strdup_printf(GLUSTER_OPT_SERVER_PATTERN"%d.", i);
>> +        qdict_extract_subqdict(options, &backing_options, str);
>> +        qemu_opts_absorb_qdict(opts, backing_options, &local_err);
>> +        if (local_err) {
>> +            goto out;
>> +        }
>> +        qdict_del(backing_options, str);
>> +        g_free(str);
>> +        str = NULL;
>> +
>> +        ptr = qemu_opt_get(opts, GLUSTER_OPT_HOST);
>> +        if (!ptr) {
>> +            error_setg(&local_err, "qemu_gluster: server.{tuple.%d} "
>> +                                   "requires 'host' option", i);
>> +            goto out;
>> +        }
>> +
>> +        gsconf = g_new0(GlusterServer, 1);
>> +
>> +        gsconf->host = g_strdup(ptr);
>> +
>> +        ptr = qemu_opt_get(opts, GLUSTER_OPT_TRANSPORT);
>> +        /* check whether transport type specified in json command is valid */
>> +        gsconf->transport = qapi_enum_parse(ptr);
>> +        if (gsconf->transport == GLUSTER_TRANSPORT__MAX) {
>> +            error_setg(&local_err, "qemu_gluster: please set 'transport'"
>> +                                   " type in tuple.%d as tcp|rdma|unix", i);
>
> Not an error message.
>
>> +            qapi_free_GlusterServer(gsconf);
>> +            goto out;
>> +        }
>> +        gsconf->has_transport = true;
>> +
>> +        /* we don't need port number for unix transport */
>> +        if (gsconf->transport != GLUSTER_TRANSPORT_UNIX) {
>> +            gsconf->port = qemu_opt_get_number(opts, GLUSTER_OPT_PORT,
>> +                                               GLUSTER_DEFAULT_PORT);
>> +            gsconf->has_port = true;
>> +        }
>> +
>> +        if (gconf->server == NULL) {
>> +            gconf->server = g_new0(GlusterServerList, 1);
>> +            gconf->server->value = gsconf;
>> +            curr = gconf->server;
>> +        } else {
>> +            curr->next = g_new0(GlusterServerList, 1);
>> +            curr->next->value = gsconf;
>> +            curr = curr->next;
>> +        }
>> +
>> +        qemu_opts_del(opts);
>> +    }
>> +
>> +    return 0;
>> +
>> +out:
>> +    error_report_err(local_err);
>
> Aha, qemu_gluster_parsejson() reports errors with error_report_err().

oops...!

>
>> +    qemu_opts_del(opts);
>> +    if (str) {
>> +        qdict_del(backing_options, str);
>> +        g_free(str);
>> +    }
>> +    errno = EINVAL;
>> +    return -errno;
>> +}
>> +
>> +static struct glfs *qemu_gluster_init(BlockdevOptionsGluster *gconf,
>> +                                      const char *filename,
>> +                                      QDict *options, Error **errp)
>> +{
>> +    int ret;
>> +
>> +    if (filename) {
>> +        ret = qemu_gluster_parseuri(gconf, filename);
>> +        if (ret < 0) {
>> +            error_setg(errp, "Usage: file=gluster[+transport]://[host[:port]]/"
>> +                             "volume/path[?socket=...]");
>
> Not an error message.
>
>> +            errno = -ret;
>> +            return NULL;
>> +        }
>> +    } else {
>> +        ret = qemu_gluster_parsejson(gconf, options);
>
> Anti-pattern: function that reports errors with error_report() or
> similar called from a function with an Error **errp parameter.
>
> In QMP context, qemu_gluster_parsejson() writes the real error to
> stderr, and QMP sends back only the useless "Usage: ..." error below.
>
> qemu_gluster_parsejson() needs to return the Error instead, via errp
> parameter.  Then you can error_propagate() here, and...
>
>> +        if (ret < 0) {
>> +            error_setg(errp, "Usage: "
>> +                             "-drive driver=qcow2,file.driver=gluster,"
>> +                             "file.volume=testvol,file.path=/path/a.qcow2,"
>> +                             "file.server.0.host=1.2.3.4,"
>> +                             "[file.server.0.port=24007,]"
>> +                             "[file.server.0.transport=tcp,]"
>> +                             "file.server.1.host=5.6.7.8,"
>> +                             "[file.server.1.port=24008,]"
>> +                             "[file.server.1.transport=rdma,]"
>> +                             "file.server.2.host=/var/run/glusterd.socket,"
>> +                             "file.server.2.transport=unix ...");
>
> ... drop this usage hint.

I still think this is needed, with a slightly modified version.

Thanks Markus!
--
Prasanna
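
A minimal, self-contained sketch of the errp pattern Markus is asking for: the parser fills an Error object instead of printing to stderr, and the caller propagates it. The Error struct and helpers here are toy stand-ins for QEMU's real util/error.c, kept only so the shape of error_propagate() is visible; parse_json() and its check are hypothetical:

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>
#include <string.h>

/* Toy mock of QEMU's Error API -- just enough to show the pattern. */
typedef struct Error { char *msg; } Error;

static void error_setg(Error **errp, const char *msg)
{
    if (errp) {
        Error *err = malloc(sizeof(*err));
        err->msg = malloc(strlen(msg) + 1);
        strcpy(err->msg, msg);
        *errp = err;
    }
}

static void error_propagate(Error **dst_errp, Error *local_err)
{
    if (!local_err) {
        return;
    }
    if (dst_errp && !*dst_errp) {
        *dst_errp = local_err;      /* hand the real error to the caller */
    } else {
        free(local_err->msg);       /* caller ignored errors: discard */
        free(local_err);
    }
}

/* The fix for the anti-pattern: the parser sets *errp rather than
 * calling error_report(), so how to handle it stays the caller's call. */
static int parse_json(const char *json, Error **errp)
{
    if (!json || !*json) {
        error_setg(errp, "no server option provided");
        return -EINVAL;
    }
    return 0;
}

static int gluster_init(const char *json, Error **errp)
{
    Error *local_err = NULL;

    if (parse_json(json, &local_err) < 0) {
        error_propagate(errp, local_err);
        return -1;
    }
    return 0;
}
```

In QMP context the propagated error is what gets sent back on the wire, instead of a generic usage string with the real cause lost on stderr.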

>
>> +            errno = -ret;
>> +            return NULL;
>> +        }
>> +    }
>> +
>> +    return qemu_gluster_glfs_init(gconf, errp);
>> +}
>> +
>>  static void qemu_gluster_complete_aio(void *opaque)
>>  {
>>      GlusterAIOCB *acb = (GlusterAIOCB *)opaque;
>> @@ -371,7 +587,7 @@ static int qemu_gluster_open(BlockDriverState *bs,  QDict *options,
>>      gconf = g_new0(BlockdevOptionsGluster, 1);
>>      gconf->debug_level = s->debug_level;
>>      gconf->has_debug_level = true;
>> -    s->glfs = qemu_gluster_init(gconf, filename, errp);
>> +    s->glfs = qemu_gluster_init(gconf, filename, options, errp);
>>      if (!s->glfs) {
>>          ret = -errno;
>>          goto out;
>> @@ -442,7 +658,7 @@ static int qemu_gluster_reopen_prepare(BDRVReopenState *state,
>>      gconf = g_new0(BlockdevOptionsGluster, 1);
>>      gconf->debug_level = s->debug_level;
>>      gconf->has_debug_level = true;
>> -    reop_s->glfs = qemu_gluster_init(gconf, state->bs->filename, errp);
>> +    reop_s->glfs = qemu_gluster_init(gconf, state->bs->filename, NULL, errp);
>>      if (reop_s->glfs == NULL) {
>>          ret = -errno;
>>          goto exit;
>> @@ -589,7 +805,7 @@ static int qemu_gluster_create(const char *filename,
>>      }
>>      gconf->has_debug_level = true;
>>
>> -    glfs = qemu_gluster_init(gconf, filename, errp);
>> +    glfs = qemu_gluster_init(gconf, filename, NULL, errp);
>>      if (!glfs) {
>>          ret = -errno;
>>          goto out;
>> @@ -969,7 +1185,7 @@ static BlockDriver bdrv_gluster = {
>>      .format_name                  = "gluster",
>>      .protocol_name                = "gluster",
>>      .instance_size                = sizeof(BDRVGlusterState),
>> -    .bdrv_needs_filename          = true,
>> +    .bdrv_needs_filename          = false,
>>      .bdrv_file_open               = qemu_gluster_open,
>>      .bdrv_reopen_prepare          = qemu_gluster_reopen_prepare,
>>      .bdrv_reopen_commit           = qemu_gluster_reopen_commit,
>> @@ -997,7 +1213,7 @@ static BlockDriver bdrv_gluster_tcp = {
>>      .format_name                  = "gluster",
>>      .protocol_name                = "gluster+tcp",
>>      .instance_size                = sizeof(BDRVGlusterState),
>> -    .bdrv_needs_filename          = true,
>> +    .bdrv_needs_filename          = false,
>>      .bdrv_file_open               = qemu_gluster_open,
>>      .bdrv_reopen_prepare          = qemu_gluster_reopen_prepare,
>>      .bdrv_reopen_commit           = qemu_gluster_reopen_commit,
>> @@ -1053,7 +1269,7 @@ static BlockDriver bdrv_gluster_rdma = {
>>      .format_name                  = "gluster",
>>      .protocol_name                = "gluster+rdma",
>>      .instance_size                = sizeof(BDRVGlusterState),
>> -    .bdrv_needs_filename          = true,
>> +    .bdrv_needs_filename          = false,
>>      .bdrv_file_open               = qemu_gluster_open,
>>      .bdrv_reopen_prepare          = qemu_gluster_reopen_prepare,
>>      .bdrv_reopen_commit           = qemu_gluster_reopen_commit,
>> diff --git a/qapi/block-core.json b/qapi/block-core.json
>> index f8b38bb..52cee7d 100644
>> --- a/qapi/block-core.json
>> +++ b/qapi/block-core.json
>> @@ -2077,7 +2077,7 @@
>>  #
>>  # @path:        absolute path to image file in gluster volume
>>  #
>> -# @server:      gluster server description
>> +# @server:      one or more gluster server descriptions
>>  #
>>  # @debug_level: #libgfapi log level (default '4' which is Error)
>>  #
>> @@ -2086,7 +2086,7 @@
>>  { 'struct': 'BlockdevOptionsGluster',
>>    'data': { 'volume': 'str',
>>              'path': 'str',
>> -            'server': 'GlusterServer',
>> +            'server': [ 'GlusterServer' ],
>>              '*debug_level': 'int' } }
>>
>>  ##
>
> Incompatible change, okay because the thing got added only in the
> previous patch.


end of thread, other threads:[~2016-07-15 13:47 UTC | newest]

Thread overview: 11+ messages
2016-07-13 13:57 [Qemu-devel] [PATCH v18 0/4] block/gluster: add support for multiple gluster servers Prasanna Kumar Kalever
2016-07-13 13:57 ` [Qemu-devel] [PATCH v18 1/4] block/gluster: rename [server, volname, image] -> [host, volume, path] Prasanna Kumar Kalever
2016-07-13 13:57 ` [Qemu-devel] [PATCH v18 2/4] block/gluster: code cleanup Prasanna Kumar Kalever
2016-07-13 13:57 ` [Qemu-devel] [PATCH v18 3/4] block/gluster: using new qapi schema Prasanna Kumar Kalever
2016-07-14  7:52   ` Markus Armbruster
2016-07-14  8:18     ` Prasanna Kalever
2016-07-14 10:52       ` Markus Armbruster
2016-07-14 15:33         ` Eric Blake
2016-07-13 13:57 ` [Qemu-devel] [PATCH v18 4/4] block/gluster: add support for multiple gluster servers Prasanna Kumar Kalever
2016-07-14 12:05   ` Markus Armbruster
2016-07-15 13:47     ` Prasanna Kalever
