* [Qemu-devel] [PATCH 0/4] block/gluster: add support for multiple gluster servers
@ 2015-11-12 10:22 Prasanna Kumar Kalever
  2015-11-12 10:22 ` [Qemu-devel] [PATCH 1/4] block/gluster: rename [server, volname, image] -> [host, volume, path] Prasanna Kumar Kalever
                   ` (4 more replies)
  0 siblings, 5 replies; 18+ messages in thread
From: Prasanna Kumar Kalever @ 2015-11-12 10:22 UTC (permalink / raw)
  To: qemu-devel
  Cc: kwolf, pkrempa, Prasanna Kumar Kalever, stefanha, jcody,
	deepakcs, bharata, rtalur

This revision is rebased on the qemu master branch.
Patches 1/4 and 2/4 in this series are unchanged.

Prasanna Kumar Kalever (4):
  block/gluster: rename [server, volname, image] -> [host, volume, path]
  block/gluster: code cleanup
  block/gluster: using new qapi schema
  block/gluster: add support for multiple gluster servers

v1:
multiple host addresses but common port number and transport type
pattern: URI syntax with query (?) delimiter
syntax:
    file=gluster[+transport-type]://host1:24007/testvol/a.img\
         ?server=host2&server=host3

v2:
multiple host addresses each have their own port number, but all use
                                                         common transport type
pattern: URI syntax with query (?) delimiter
syntax:
    file=gluster[+transport-type]://[host[:port]]/testvol/a.img\
         [?server=host1[:port]\
          &server=host2[:port]]

v3:
multiple host addresses each have their own port number and transport type
pattern: changed to json
syntax:
    'json:{"driver":"qcow2","file":{"driver":"gluster","volume":"testvol",
           "path":"/path/a.qcow2","server":
         [{"host":"1.2.3.4","port":"24007","transport":"tcp"},
          {"host":"4.5.6.7","port":"24008","transport":"rdma"}] } }'
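
The v3 json: spec above can also be generated mechanically rather than
hand-written; a minimal Python sketch (the helper name and the host/port
values are illustrative only, not part of this series):

```python
import json

# Hypothetical helper mirroring the v3 syntax above: build a "json:"
# pseudo-protocol filename for a qcow2 image on a gluster volume.
def gluster_json_filename(volume, path, servers, fmt="qcow2"):
    spec = {
        "driver": fmt,
        "file": {
            "driver": "gluster",
            "volume": volume,
            "path": path,
            # one dict per volfile server: host, port, transport
            "server": servers,
        },
    }
    return "json:" + json.dumps(spec)

filename = gluster_json_filename(
    "testvol", "/path/a.qcow2",
    [{"host": "1.2.3.4", "port": "24007", "transport": "tcp"},
     {"host": "4.5.6.7", "port": "24008", "transport": "rdma"}])
```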

v4, v5:
address comments from "Eric Blake" <eblake@redhat.com>
renamed:
'backup-volfile-servers' -> 'volfile-servers'

v6:
address comments from Peter Krempa <pkrempa@redhat.com>
renamed:
 'volname'    ->  'volume'
 'image-path' ->  'path'
 'server'     ->  'host'

v7:
fix for v6 (initialize num_servers to 1 and other typos)

v8:
split patch set v7 into series of 3 as per Peter Krempa <pkrempa@redhat.com>
review comments

v9:
reorder the series of patches addressing "Eric Blake" <eblake@redhat.com>
review comments

v10:
fix mem-leak as per Peter Krempa <pkrempa@redhat.com> review comments

v11:
using qapi-types* defined structures as per "Eric Blake" <eblake@redhat.com>
review comments.

v12:
fix crash caused in qapi_free_BlockdevOptionsGluster

v13:
address comments from "Jeff Cody" <jcody@redhat.com>

v14:
address comments from "Eric Blake" <eblake@redhat.com>
split patch 3/3 into two
rename input option and variable from 'servers' to 'server'


 block/gluster.c      | 467 +++++++++++++++++++++++++++++++++++++--------------
 qapi/block-core.json |  60 ++++++-
 2 files changed, 400 insertions(+), 127 deletions(-)

-- 
2.1.0

^ permalink raw reply	[flat|nested] 18+ messages in thread

* [Qemu-devel] [PATCH 1/4] block/gluster: rename [server, volname, image] -> [host, volume, path]
  2015-11-12 10:22 [Qemu-devel] [PATCH 0/4] block/gluster: add support for multiple gluster servers Prasanna Kumar Kalever
@ 2015-11-12 10:22 ` Prasanna Kumar Kalever
  2015-11-12 20:28   ` Eric Blake
  2015-11-12 10:22 ` [Qemu-devel] [PATCH 2/4] block/gluster: code cleanup Prasanna Kumar Kalever
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 18+ messages in thread
From: Prasanna Kumar Kalever @ 2015-11-12 10:22 UTC (permalink / raw)
  To: qemu-devel
  Cc: kwolf, pkrempa, Prasanna Kumar Kalever, stefanha, jcody,
	deepakcs, bharata, rtalur

This patch becomes much more meaningful after the next patch, which adds
support for multiple gluster servers.

After that change, the 'server' variable is used in the code for the
'host' key within the 'servers' tuple values; keeping colliding names for
variables would be quite messy. To maintain better readability and keep
the code consistent with other existing code as well as with the input
keys/options, this patch renames the following variables:
'server'  -> 'host'
'image'   -> 'path'
'volname' -> 'volume'

Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
---
 block/gluster.c | 54 +++++++++++++++++++++++++++---------------------------
 1 file changed, 27 insertions(+), 27 deletions(-)

diff --git a/block/gluster.c b/block/gluster.c
index 1eb3a8c..513a774 100644
--- a/block/gluster.c
+++ b/block/gluster.c
@@ -25,19 +25,19 @@ typedef struct BDRVGlusterState {
 } BDRVGlusterState;
 
 typedef struct GlusterConf {
-    char *server;
+    char *host;
     int port;
-    char *volname;
-    char *image;
+    char *volume;
+    char *path;
     char *transport;
 } GlusterConf;
 
 static void qemu_gluster_gconf_free(GlusterConf *gconf)
 {
     if (gconf) {
-        g_free(gconf->server);
-        g_free(gconf->volname);
-        g_free(gconf->image);
+        g_free(gconf->host);
+        g_free(gconf->volume);
+        g_free(gconf->path);
         g_free(gconf->transport);
         g_free(gconf);
     }
@@ -57,19 +57,19 @@ static int parse_volume_options(GlusterConf *gconf, char *path)
     if (*p == '\0') {
         return -EINVAL;
     }
-    gconf->volname = g_strndup(q, p - q);
+    gconf->volume = g_strndup(q, p - q);
 
-    /* image */
+    /* path */
     p += strspn(p, "/");
     if (*p == '\0') {
         return -EINVAL;
     }
-    gconf->image = g_strdup(p);
+    gconf->path = g_strdup(p);
     return 0;
 }
 
 /*
- * file=gluster[+transport]://[server[:port]]/volname/image[?socket=...]
+ * file=gluster[+transport]://[host[:port]]/volume/path[?socket=...]
  *
  * 'gluster' is the protocol.
  *
@@ -78,10 +78,10 @@ static int parse_volume_options(GlusterConf *gconf, char *path)
  * tcp, unix and rdma. If a transport type isn't specified, then tcp
  * type is assumed.
  *
- * 'server' specifies the server where the volume file specification for
+ * 'host' specifies the host where the volume file specification for
  * the given volume resides. This can be either hostname, ipv4 address
  * or ipv6 address. ipv6 address needs to be within square brackets [ ].
- * If transport type is 'unix', then 'server' field should not be specified.
+ * If transport type is 'unix', then 'host' field should not be specified.
  * The 'socket' field needs to be populated with the path to unix domain
  * socket.
  *
@@ -90,9 +90,9 @@ static int parse_volume_options(GlusterConf *gconf, char *path)
  * default port. If the transport type is unix, then 'port' should not be
  * specified.
  *
- * 'volname' is the name of the gluster volume which contains the VM image.
+ * 'volume' is the name of the gluster volume which contains the VM image.
  *
- * 'image' is the path to the actual VM image that resides on gluster volume.
+ * 'path' is the path to the actual VM image that resides on gluster volume.
  *
  * Examples:
  *
@@ -101,7 +101,7 @@ static int parse_volume_options(GlusterConf *gconf, char *path)
  * file=gluster+tcp://1.2.3.4:24007/testvol/dir/a.img
  * file=gluster+tcp://[1:2:3:4:5:6:7:8]/testvol/dir/a.img
  * file=gluster+tcp://[1:2:3:4:5:6:7:8]:24007/testvol/dir/a.img
- * file=gluster+tcp://server.domain.com:24007/testvol/dir/a.img
+ * file=gluster+tcp://host.domain.com:24007/testvol/dir/a.img
  * file=gluster+unix:///testvol/dir/a.img?socket=/tmp/glusterd.socket
  * file=gluster+rdma://1.2.3.4:24007/testvol/a.img
  */
@@ -152,9 +152,9 @@ static int qemu_gluster_parseuri(GlusterConf *gconf, const char *filename)
             ret = -EINVAL;
             goto out;
         }
-        gconf->server = g_strdup(qp->p[0].value);
+        gconf->host = g_strdup(qp->p[0].value);
     } else {
-        gconf->server = g_strdup(uri->server ? uri->server : "localhost");
+        gconf->host = g_strdup(uri->server ? uri->server : "localhost");
         gconf->port = uri->port;
     }
 
@@ -175,18 +175,18 @@ static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
 
     ret = qemu_gluster_parseuri(gconf, filename);
     if (ret < 0) {
-        error_setg(errp, "Usage: file=gluster[+transport]://[server[:port]]/"
-                   "volname/image[?socket=...]");
+        error_setg(errp, "Usage: file=gluster[+transport]://[host[:port]]/"
+                   "volume/path[?socket=...]");
         errno = -ret;
         goto out;
     }
 
-    glfs = glfs_new(gconf->volname);
+    glfs = glfs_new(gconf->volume);
     if (!glfs) {
         goto out;
     }
 
-    ret = glfs_set_volfile_server(glfs, gconf->transport, gconf->server,
+    ret = glfs_set_volfile_server(glfs, gconf->transport, gconf->host,
             gconf->port);
     if (ret < 0) {
         goto out;
@@ -204,9 +204,9 @@ static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
     ret = glfs_init(glfs);
     if (ret) {
         error_setg_errno(errp, errno,
-                         "Gluster connection failed for server=%s port=%d "
-                         "volume=%s image=%s transport=%s", gconf->server,
-                         gconf->port, gconf->volname, gconf->image,
+                         "Gluster connection failed for host=%s port=%d "
+                         "volume=%s path=%s transport=%s", gconf->host,
+                         gconf->port, gconf->volume, gconf->path,
                          gconf->transport);
 
         /* glfs_init sometimes doesn't set errno although docs suggest that */
@@ -314,7 +314,7 @@ static int qemu_gluster_open(BlockDriverState *bs,  QDict *options,
 
     qemu_gluster_parse_flags(bdrv_flags, &open_flags);
 
-    s->fd = glfs_open(s->glfs, gconf->image, open_flags);
+    s->fd = glfs_open(s->glfs, gconf->path, open_flags);
     if (!s->fd) {
         ret = -errno;
     }
@@ -364,7 +364,7 @@ static int qemu_gluster_reopen_prepare(BDRVReopenState *state,
         goto exit;
     }
 
-    reop_s->fd = glfs_open(reop_s->glfs, gconf->image, open_flags);
+    reop_s->fd = glfs_open(reop_s->glfs, gconf->path, open_flags);
     if (reop_s->fd == NULL) {
         /* reops->glfs will be cleaned up in _abort */
         ret = -errno;
@@ -511,7 +511,7 @@ static int qemu_gluster_create(const char *filename,
         goto out;
     }
 
-    fd = glfs_creat(glfs, gconf->image,
+    fd = glfs_creat(glfs, gconf->path,
         O_WRONLY | O_CREAT | O_TRUNC | O_BINARY, S_IRUSR | S_IWUSR);
     if (!fd) {
         ret = -errno;
-- 
2.1.0


* [Qemu-devel] [PATCH 2/4] block/gluster: code cleanup
  2015-11-12 10:22 [Qemu-devel] [PATCH 0/4] block/gluster: add support for multiple gluster servers Prasanna Kumar Kalever
  2015-11-12 10:22 ` [Qemu-devel] [PATCH 1/4] block/gluster: rename [server, volname, image] -> [host, volume, path] Prasanna Kumar Kalever
@ 2015-11-12 10:22 ` Prasanna Kumar Kalever
  2015-11-12 10:22 ` [Qemu-devel] [PATCH 3/4] block/gluster: using new qapi schema Prasanna Kumar Kalever
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 18+ messages in thread
From: Prasanna Kumar Kalever @ 2015-11-12 10:22 UTC (permalink / raw)
  To: qemu-devel
  Cc: kwolf, pkrempa, Prasanna Kumar Kalever, stefanha, jcody,
	deepakcs, bharata, rtalur

Unify the coding style of multiline function arguments and other error
functions; move scattered declarations of structures and option list
variables to the top of the file.

Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
---
 block/gluster.c | 113 ++++++++++++++++++++++++++++++--------------------------
 1 file changed, 60 insertions(+), 53 deletions(-)

diff --git a/block/gluster.c b/block/gluster.c
index 513a774..ededda2 100644
--- a/block/gluster.c
+++ b/block/gluster.c
@@ -24,6 +24,11 @@ typedef struct BDRVGlusterState {
     struct glfs_fd *fd;
 } BDRVGlusterState;
 
+typedef struct BDRVGlusterReopenState {
+    struct glfs *glfs;
+    struct glfs_fd *fd;
+} BDRVGlusterReopenState;
+
 typedef struct GlusterConf {
     char *host;
     int port;
@@ -32,6 +37,39 @@ typedef struct GlusterConf {
     char *transport;
 } GlusterConf;
 
+
+static QemuOptsList qemu_gluster_create_opts = {
+    .name = "qemu-gluster-create-opts",
+    .head = QTAILQ_HEAD_INITIALIZER(qemu_gluster_create_opts.head),
+    .desc = {
+        {
+            .name = BLOCK_OPT_SIZE,
+            .type = QEMU_OPT_SIZE,
+            .help = "Virtual disk size"
+        },
+        {
+            .name = BLOCK_OPT_PREALLOC,
+            .type = QEMU_OPT_STRING,
+            .help = "Preallocation mode (allowed values: off, full)"
+        },
+        { /* end of list */ }
+    }
+};
+
+static QemuOptsList runtime_opts = {
+    .name = "gluster",
+    .head = QTAILQ_HEAD_INITIALIZER(runtime_opts.head),
+    .desc = {
+        {
+            .name = "filename",
+            .type = QEMU_OPT_STRING,
+            .help = "URL to the gluster image",
+        },
+        { /* end of list */ }
+    },
+};
+
+
 static void qemu_gluster_gconf_free(GlusterConf *gconf)
 {
     if (gconf) {
@@ -176,7 +214,7 @@ static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
     ret = qemu_gluster_parseuri(gconf, filename);
     if (ret < 0) {
         error_setg(errp, "Usage: file=gluster[+transport]://[host[:port]]/"
-                   "volume/path[?socket=...]");
+                         "volume/path[?socket=...]");
         errno = -ret;
         goto out;
     }
@@ -254,20 +292,6 @@ static void gluster_finish_aiocb(struct glfs_fd *fd, ssize_t ret, void *arg)
     qemu_bh_schedule(acb->bh);
 }
 
-/* TODO Convert to fine grained options */
-static QemuOptsList runtime_opts = {
-    .name = "gluster",
-    .head = QTAILQ_HEAD_INITIALIZER(runtime_opts.head),
-    .desc = {
-        {
-            .name = "filename",
-            .type = QEMU_OPT_STRING,
-            .help = "URL to the gluster image",
-        },
-        { /* end of list */ }
-    },
-};
-
 static void qemu_gluster_parse_flags(int bdrv_flags, int *open_flags)
 {
     assert(open_flags != NULL);
@@ -285,7 +309,7 @@ static void qemu_gluster_parse_flags(int bdrv_flags, int *open_flags)
     }
 }
 
-static int qemu_gluster_open(BlockDriverState *bs,  QDict *options,
+static int qemu_gluster_open(BlockDriverState *bs, QDict *options,
                              int bdrv_flags, Error **errp)
 {
     BDRVGlusterState *s = bs->opaque;
@@ -334,12 +358,6 @@ out:
     return ret;
 }
 
-typedef struct BDRVGlusterReopenState {
-    struct glfs *glfs;
-    struct glfs_fd *fd;
-} BDRVGlusterReopenState;
-
-
 static int qemu_gluster_reopen_prepare(BDRVReopenState *state,
                                        BlockReopenQueue *queue, Error **errp)
 {
@@ -426,7 +444,9 @@ static void qemu_gluster_reopen_abort(BDRVReopenState *state)
 
 #ifdef CONFIG_GLUSTERFS_ZEROFILL
 static coroutine_fn int qemu_gluster_co_write_zeroes(BlockDriverState *bs,
-        int64_t sector_num, int nb_sectors, BdrvRequestFlags flags)
+                                                     int64_t sector_num,
+                                                     int nb_sectors,
+                                                     BdrvRequestFlags flags)
 {
     int ret;
     GlusterAIOCB *acb = g_slice_new(GlusterAIOCB);
@@ -459,7 +479,7 @@ static inline bool gluster_supports_zerofill(void)
 }
 
 static inline int qemu_gluster_zerofill(struct glfs_fd *fd, int64_t offset,
-        int64_t size)
+                                        int64_t size)
 {
     return glfs_zerofill(fd, offset, size);
 }
@@ -471,7 +491,7 @@ static inline bool gluster_supports_zerofill(void)
 }
 
 static inline int qemu_gluster_zerofill(struct glfs_fd *fd, int64_t offset,
-        int64_t size)
+                                        int64_t size)
 {
     return 0;
 }
@@ -500,19 +520,17 @@ static int qemu_gluster_create(const char *filename,
     tmp = qemu_opt_get_del(opts, BLOCK_OPT_PREALLOC);
     if (!tmp || !strcmp(tmp, "off")) {
         prealloc = 0;
-    } else if (!strcmp(tmp, "full") &&
-               gluster_supports_zerofill()) {
+    } else if (!strcmp(tmp, "full") && gluster_supports_zerofill()) {
         prealloc = 1;
     } else {
         error_setg(errp, "Invalid preallocation mode: '%s'"
-            " or GlusterFS doesn't support zerofill API",
-            tmp);
+                         " or GlusterFS doesn't support zerofill API", tmp);
         ret = -EINVAL;
         goto out;
     }
 
     fd = glfs_creat(glfs, gconf->path,
-        O_WRONLY | O_CREAT | O_TRUNC | O_BINARY, S_IRUSR | S_IWUSR);
+                    O_WRONLY | O_CREAT | O_TRUNC | O_BINARY, S_IRUSR | S_IWUSR);
     if (!fd) {
         ret = -errno;
     } else {
@@ -538,7 +556,8 @@ out:
 }
 
 static coroutine_fn int qemu_gluster_co_rw(BlockDriverState *bs,
-        int64_t sector_num, int nb_sectors, QEMUIOVector *qiov, int write)
+                                           int64_t sector_num, int nb_sectors,
+                                           QEMUIOVector *qiov, int write)
 {
     int ret;
     GlusterAIOCB *acb = g_slice_new(GlusterAIOCB);
@@ -553,10 +572,10 @@ static coroutine_fn int qemu_gluster_co_rw(BlockDriverState *bs,
 
     if (write) {
         ret = glfs_pwritev_async(s->fd, qiov->iov, qiov->niov, offset, 0,
-            &gluster_finish_aiocb, acb);
+                                 &gluster_finish_aiocb, acb);
     } else {
         ret = glfs_preadv_async(s->fd, qiov->iov, qiov->niov, offset, 0,
-            &gluster_finish_aiocb, acb);
+                                &gluster_finish_aiocb, acb);
     }
 
     if (ret < 0) {
@@ -586,13 +605,17 @@ static int qemu_gluster_truncate(BlockDriverState *bs, int64_t offset)
 }
 
 static coroutine_fn int qemu_gluster_co_readv(BlockDriverState *bs,
-        int64_t sector_num, int nb_sectors, QEMUIOVector *qiov)
+                                              int64_t sector_num,
+                                              int nb_sectors,
+                                              QEMUIOVector *qiov)
 {
     return qemu_gluster_co_rw(bs, sector_num, nb_sectors, qiov, 0);
 }
 
 static coroutine_fn int qemu_gluster_co_writev(BlockDriverState *bs,
-        int64_t sector_num, int nb_sectors, QEMUIOVector *qiov)
+                                               int64_t sector_num,
+                                               int nb_sectors,
+                                               QEMUIOVector *qiov)
 {
     return qemu_gluster_co_rw(bs, sector_num, nb_sectors, qiov, 1);
 }
@@ -624,7 +647,8 @@ out:
 
 #ifdef CONFIG_GLUSTERFS_DISCARD
 static coroutine_fn int qemu_gluster_co_discard(BlockDriverState *bs,
-        int64_t sector_num, int nb_sectors)
+                                                int64_t sector_num,
+                                                int nb_sectors)
 {
     int ret;
     GlusterAIOCB *acb = g_slice_new(GlusterAIOCB);
@@ -696,23 +720,6 @@ static int qemu_gluster_has_zero_init(BlockDriverState *bs)
     return 0;
 }
 
-static QemuOptsList qemu_gluster_create_opts = {
-    .name = "qemu-gluster-create-opts",
-    .head = QTAILQ_HEAD_INITIALIZER(qemu_gluster_create_opts.head),
-    .desc = {
-        {
-            .name = BLOCK_OPT_SIZE,
-            .type = QEMU_OPT_SIZE,
-            .help = "Virtual disk size"
-        },
-        {
-            .name = BLOCK_OPT_PREALLOC,
-            .type = QEMU_OPT_STRING,
-            .help = "Preallocation mode (allowed values: off, full)"
-        },
-        { /* end of list */ }
-    }
-};
 
 static BlockDriver bdrv_gluster = {
     .format_name                  = "gluster",
-- 
2.1.0


* [Qemu-devel] [PATCH 3/4] block/gluster: using new qapi schema
  2015-11-12 10:22 [Qemu-devel] [PATCH 0/4] block/gluster: add support for multiple gluster servers Prasanna Kumar Kalever
  2015-11-12 10:22 ` [Qemu-devel] [PATCH 1/4] block/gluster: rename [server, volname, image] -> [host, volume, path] Prasanna Kumar Kalever
  2015-11-12 10:22 ` [Qemu-devel] [PATCH 2/4] block/gluster: code cleanup Prasanna Kumar Kalever
@ 2015-11-12 10:22 ` Prasanna Kumar Kalever
  2015-11-12 20:00   ` Jeff Cody
                     ` (4 more replies)
  2015-11-12 10:22 ` [Qemu-devel] [PATCH 4/4] block/gluster: add support for multiple gluster servers Prasanna Kumar Kalever
  2015-11-12 22:54 ` [Qemu-devel] [PATCH 0/4] " Eric Blake
  4 siblings, 5 replies; 18+ messages in thread
From: Prasanna Kumar Kalever @ 2015-11-12 10:22 UTC (permalink / raw)
  To: qemu-devel
  Cc: kwolf, pkrempa, Prasanna Kumar Kalever, stefanha, jcody,
	deepakcs, bharata, rtalur

This patch adds the GlusterConf schema to qapi/block-core.json, replacing
the local GlusterConf struct with the qapi-generated
BlockdevOptionsGluster.
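
As a rough sketch (not part of the patch), the options shape accepted by
the new schema looks like the dict below; a single server in this patch,
extended by the next one. Host and paths are placeholders, and the
required/optional check is a hypothetical illustration:

```python
# Shape of BlockdevOptionsGluster as defined by the new schema:
# 'volume' and 'path' are required; 'server' carries a required 'host'
# plus the optional 'port' and 'transport' members.
opts = {
    "driver": "gluster",
    "volume": "testvol",
    "path": "/path/a.qcow2",
    "server": {"host": "1.2.3.4", "port": 24007, "transport": "tcp"},
}

# Hypothetical check mirroring the schema's required/optional split.
required = {"volume", "path", "server"}
missing = required - opts.keys()
assert not missing, f"missing required members: {missing}"
assert opts["server"]["transport"] in ("tcp", "unix", "rdma")
```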

Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
---
 block/gluster.c      | 104 +++++++++++++++++++++++++--------------------------
 qapi/block-core.json |  60 +++++++++++++++++++++++++++--
 2 files changed, 109 insertions(+), 55 deletions(-)

diff --git a/block/gluster.c b/block/gluster.c
index ededda2..615f28b 100644
--- a/block/gluster.c
+++ b/block/gluster.c
@@ -11,6 +11,10 @@
 #include "block/block_int.h"
 #include "qemu/uri.h"
 
+#define GLUSTER_OPT_FILENAME        "filename"
+#define GLUSTER_DEFAULT_PORT        24007
+
+
 typedef struct GlusterAIOCB {
     int64_t size;
     int ret;
@@ -29,15 +33,6 @@ typedef struct BDRVGlusterReopenState {
     struct glfs_fd *fd;
 } BDRVGlusterReopenState;
 
-typedef struct GlusterConf {
-    char *host;
-    int port;
-    char *volume;
-    char *path;
-    char *transport;
-} GlusterConf;
-
-
 static QemuOptsList qemu_gluster_create_opts = {
     .name = "qemu-gluster-create-opts",
     .head = QTAILQ_HEAD_INITIALIZER(qemu_gluster_create_opts.head),
@@ -61,7 +56,7 @@ static QemuOptsList runtime_opts = {
     .head = QTAILQ_HEAD_INITIALIZER(runtime_opts.head),
     .desc = {
         {
-            .name = "filename",
+            .name = GLUSTER_OPT_FILENAME,
             .type = QEMU_OPT_STRING,
             .help = "URL to the gluster image",
         },
@@ -70,18 +65,7 @@ static QemuOptsList runtime_opts = {
 };
 
 
-static void qemu_gluster_gconf_free(GlusterConf *gconf)
-{
-    if (gconf) {
-        g_free(gconf->host);
-        g_free(gconf->volume);
-        g_free(gconf->path);
-        g_free(gconf->transport);
-        g_free(gconf);
-    }
-}
-
-static int parse_volume_options(GlusterConf *gconf, char *path)
+static int parse_volume_options(BlockdevOptionsGluster *gconf, char *path)
 {
     char *p, *q;
 
@@ -143,8 +127,10 @@ static int parse_volume_options(GlusterConf *gconf, char *path)
  * file=gluster+unix:///testvol/dir/a.img?socket=/tmp/glusterd.socket
  * file=gluster+rdma://1.2.3.4:24007/testvol/a.img
  */
-static int qemu_gluster_parseuri(GlusterConf *gconf, const char *filename)
+static int qemu_gluster_parseuri(BlockdevOptionsGluster **pgconf,
+                                 const char *filename)
 {
+    BlockdevOptionsGluster *gconf;
     URI *uri;
     QueryParams *qp = NULL;
     bool is_unix = false;
@@ -155,20 +141,24 @@ static int qemu_gluster_parseuri(GlusterConf *gconf, const char *filename)
         return -EINVAL;
     }
 
+    gconf = g_new0(BlockdevOptionsGluster, 1);
+    gconf->server = g_new0(GlusterServer, 1);
+
     /* transport */
     if (!uri->scheme || !strcmp(uri->scheme, "gluster")) {
-        gconf->transport = g_strdup("tcp");
+        gconf->server->transport = GLUSTER_TRANSPORT_TCP;
     } else if (!strcmp(uri->scheme, "gluster+tcp")) {
-        gconf->transport = g_strdup("tcp");
+        gconf->server->transport = GLUSTER_TRANSPORT_TCP;
     } else if (!strcmp(uri->scheme, "gluster+unix")) {
-        gconf->transport = g_strdup("unix");
+        gconf->server->transport = GLUSTER_TRANSPORT_UNIX;
         is_unix = true;
     } else if (!strcmp(uri->scheme, "gluster+rdma")) {
-        gconf->transport = g_strdup("rdma");
+        gconf->server->transport = GLUSTER_TRANSPORT_RDMA;
     } else {
         ret = -EINVAL;
         goto out;
     }
+    gconf->server->has_transport = true;
 
     ret = parse_volume_options(gconf, uri->path);
     if (ret < 0) {
@@ -190,13 +180,23 @@ static int qemu_gluster_parseuri(GlusterConf *gconf, const char *filename)
             ret = -EINVAL;
             goto out;
         }
-        gconf->host = g_strdup(qp->p[0].value);
+        gconf->server->host = g_strdup(qp->p[0].value);
     } else {
-        gconf->host = g_strdup(uri->server ? uri->server : "localhost");
-        gconf->port = uri->port;
+        gconf->server->host = g_strdup(uri->server ? uri->server : "localhost");
+        if (uri->port) {
+            gconf->server->port = uri->port;
+        } else {
+            gconf->server->port = GLUSTER_DEFAULT_PORT;
+        }
+        gconf->server->has_port = true;
     }
 
+    *pgconf = gconf;
+
 out:
+    if (ret < 0) {
+        qapi_free_BlockdevOptionsGluster(gconf);
+    }
     if (qp) {
         query_params_free(qp);
     }
@@ -204,14 +204,15 @@ out:
     return ret;
 }
 
-static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
-                                      Error **errp)
+static struct glfs *qemu_gluster_init(BlockdevOptionsGluster **pgconf,
+                                      const char *filename, Error **errp)
 {
-    struct glfs *glfs = NULL;
+    struct glfs *glfs;
     int ret;
     int old_errno;
+    BlockdevOptionsGluster *gconf;
 
-    ret = qemu_gluster_parseuri(gconf, filename);
+    ret = qemu_gluster_parseuri(&gconf, filename);
     if (ret < 0) {
         error_setg(errp, "Usage: file=gluster[+transport]://[host[:port]]/"
                          "volume/path[?socket=...]");
@@ -224,8 +225,9 @@ static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
         goto out;
     }
 
-    ret = glfs_set_volfile_server(glfs, gconf->transport, gconf->host,
-            gconf->port);
+    ret = glfs_set_volfile_server(glfs,
+                                  GlusterTransport_lookup[gconf->server->transport],
+                                  gconf->server->host, gconf->server->port);
     if (ret < 0) {
         goto out;
     }
@@ -242,10 +244,10 @@ static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
     ret = glfs_init(glfs);
     if (ret) {
         error_setg_errno(errp, errno,
-                         "Gluster connection failed for host=%s port=%d "
-                         "volume=%s path=%s transport=%s", gconf->host,
-                         gconf->port, gconf->volume, gconf->path,
-                         gconf->transport);
+                         "Gluster connection failed for host=%s port=%ld "
+                         "volume=%s path=%s transport=%s", gconf->server->host,
+                         gconf->server->port, gconf->volume, gconf->path,
+                         GlusterTransport_lookup[gconf->server->transport]);
 
         /* glfs_init sometimes doesn't set errno although docs suggest that */
         if (errno == 0)
@@ -253,6 +255,7 @@ static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
 
         goto out;
     }
+    *pgconf = gconf;
     return glfs;
 
 out:
@@ -315,7 +318,7 @@ static int qemu_gluster_open(BlockDriverState *bs, QDict *options,
     BDRVGlusterState *s = bs->opaque;
     int open_flags = 0;
     int ret = 0;
-    GlusterConf *gconf = g_new0(GlusterConf, 1);
+    BlockdevOptionsGluster *gconf = NULL;
     QemuOpts *opts;
     Error *local_err = NULL;
     const char *filename;
@@ -329,8 +332,7 @@ static int qemu_gluster_open(BlockDriverState *bs, QDict *options,
     }
 
     filename = qemu_opt_get(opts, "filename");
-
-    s->glfs = qemu_gluster_init(gconf, filename, errp);
+    s->glfs = qemu_gluster_init(&gconf, filename, errp);
     if (!s->glfs) {
         ret = -errno;
         goto out;
@@ -345,7 +347,7 @@ static int qemu_gluster_open(BlockDriverState *bs, QDict *options,
 
 out:
     qemu_opts_del(opts);
-    qemu_gluster_gconf_free(gconf);
+    qapi_free_BlockdevOptionsGluster(gconf);
     if (!ret) {
         return ret;
     }
@@ -363,7 +365,7 @@ static int qemu_gluster_reopen_prepare(BDRVReopenState *state,
 {
     int ret = 0;
     BDRVGlusterReopenState *reop_s;
-    GlusterConf *gconf = NULL;
+    BlockdevOptionsGluster *gconf = NULL;
     int open_flags = 0;
 
     assert(state != NULL);
@@ -374,9 +376,7 @@ static int qemu_gluster_reopen_prepare(BDRVReopenState *state,
 
     qemu_gluster_parse_flags(state->flags, &open_flags);
 
-    gconf = g_new0(GlusterConf, 1);
-
-    reop_s->glfs = qemu_gluster_init(gconf, state->bs->filename, errp);
+    reop_s->glfs = qemu_gluster_init(&gconf, state->bs->filename, errp);
     if (reop_s->glfs == NULL) {
         ret = -errno;
         goto exit;
@@ -391,7 +391,7 @@ static int qemu_gluster_reopen_prepare(BDRVReopenState *state,
 
 exit:
     /* state->opaque will be freed in either the _abort or _commit */
-    qemu_gluster_gconf_free(gconf);
+    qapi_free_BlockdevOptionsGluster(gconf);
     return ret;
 }
 
@@ -500,15 +500,15 @@ static inline int qemu_gluster_zerofill(struct glfs_fd *fd, int64_t offset,
 static int qemu_gluster_create(const char *filename,
                                QemuOpts *opts, Error **errp)
 {
+    BlockdevOptionsGluster *gconf = NULL;
     struct glfs *glfs;
     struct glfs_fd *fd;
     int ret = 0;
     int prealloc = 0;
     int64_t total_size = 0;
     char *tmp = NULL;
-    GlusterConf *gconf = g_new0(GlusterConf, 1);
 
-    glfs = qemu_gluster_init(gconf, filename, errp);
+    glfs = qemu_gluster_init(&gconf, filename, errp);
     if (!glfs) {
         ret = -errno;
         goto out;
@@ -548,7 +548,7 @@ static int qemu_gluster_create(const char *filename,
     }
 out:
     g_free(tmp);
-    qemu_gluster_gconf_free(gconf);
+    qapi_free_BlockdevOptionsGluster(gconf);
     if (glfs) {
         glfs_fini(glfs);
     }
diff --git a/qapi/block-core.json b/qapi/block-core.json
index 425fdab..bbefe43 100644
--- a/qapi/block-core.json
+++ b/qapi/block-core.json
@@ -1375,13 +1375,14 @@
 # Drivers that are supported in block device operations.
 #
 # @host_device, @host_cdrom: Since 2.1
+# @gluster: Since 2.5
 #
 # Since: 2.0
 ##
 { 'enum': 'BlockdevDriver',
   'data': [ 'archipelago', 'blkdebug', 'blkverify', 'bochs', 'cloop',
-            'dmg', 'file', 'ftp', 'ftps', 'host_cdrom', 'host_device',
-            'http', 'https', 'null-aio', 'null-co', 'parallels',
+            'dmg', 'file', 'ftp', 'ftps', 'gluster', 'host_cdrom',
+            'host_device', 'http', 'https', 'null-aio', 'null-co', 'parallels',
             'qcow', 'qcow2', 'qed', 'quorum', 'raw', 'tftp', 'vdi', 'vhdx',
             'vmdk', 'vpc', 'vvfat' ] }
 
@@ -1797,6 +1798,59 @@
             '*read-pattern': 'QuorumReadPattern' } }
 
 ##
+# @GlusterTransport
+#
+# An enumeration of Gluster transport type
+#
+# @tcp:   TCP   - Transmission Control Protocol
+#
+# @unix:  UNIX  - Unix domain socket
+#
+# @rdma:  RDMA  - Remote direct memory access
+#
+# Since: 2.5
+##
+{ 'enum': 'GlusterTransport', 'data': [ 'tcp', 'unix', 'rdma'] }
+
+##
+# @GlusterServer
+#
+# Details for connecting to a gluster server
+#
+# @host:       host address (hostname/ipv4/ipv6 addresses)
+#
+# @port:       #optional port number on which glusterd is listening
+#               (default 24007)
+#
+# @transport:  #optional transport type used to connect to gluster management
+#               daemon (default 'tcp')
+#
+# Since: 2.5
+##
+{ 'struct': 'GlusterServer',
+  'data': { 'host': 'str',
+            '*port': 'int',
+            '*transport': 'GlusterTransport' } }
+
+##
+# @BlockdevOptionsGluster
+#
+# Driver specific block device options for Gluster
+#
+# @volume:   name of gluster volume where VM image resides
+#
+# @path:     absolute path to image file in gluster volume
+#
+# @servers:  gluster server description
+#
+# Since: 2.5
+##
+{ 'struct': 'BlockdevOptionsGluster',
+  'data': { 'volume': 'str',
+            'path': 'str',
+            'server': 'GlusterServer' } }
+
+##
 # @BlockdevOptions
 #
 # Options for creating a block device.
@@ -1816,7 +1870,7 @@
       'file':       'BlockdevOptionsFile',
       'ftp':        'BlockdevOptionsFile',
       'ftps':       'BlockdevOptionsFile',
-# TODO gluster: Wait for structured options
+      'gluster':    'BlockdevOptionsGluster',
       'host_cdrom': 'BlockdevOptionsFile',
       'host_device':'BlockdevOptionsFile',
       'http':       'BlockdevOptionsFile',
-- 
2.1.0


* [Qemu-devel] [PATCH 4/4] block/gluster: add support for multiple gluster servers
  2015-11-12 10:22 [Qemu-devel] [PATCH 0/4] block/gluster: add support for multiple gluster servers Prasanna Kumar Kalever
                   ` (2 preceding siblings ...)
  2015-11-12 10:22 ` [Qemu-devel] [PATCH 3/4] block/gluster: using new qapi schema Prasanna Kumar Kalever
@ 2015-11-12 10:22 ` Prasanna Kumar Kalever
  2015-11-12 20:00   ` Jeff Cody
  2015-11-12 22:36   ` Eric Blake
  2015-11-12 22:54 ` [Qemu-devel] [PATCH 0/4] " Eric Blake
  4 siblings, 2 replies; 18+ messages in thread
From: Prasanna Kumar Kalever @ 2015-11-12 10:22 UTC (permalink / raw)
  To: qemu-devel
  Cc: kwolf, pkrempa, Prasanna Kumar Kalever, stefanha, jcody,
	deepakcs, bharata, rtalur

This patch adds a way to specify multiple volfile servers to the gluster
block backend of QEMU with tcp|rdma transport types and their port numbers.

Problem:

Currently a VM image on a gluster volume is specified like this:

file=gluster[+tcp]://host[:port]/testvol/a.img

Assume we have three hosts in a trusted pool with a replica-3 volume in
action, and the host mentioned in the command above goes down for some
reason. Since the volume is replica 3, the other two hosts are still
active and the VM could boot from them.

But currently there is no mechanism to pass the other two gluster host
addresses to QEMU.

Solution:

New way of specifying a VM image on a gluster volume with volfile servers
(the old syntax is still supported for backward compatibility):

Basic command line syntax looks like:

Pattern I:
 -drive driver=gluster,
        volume=testvol,path=/path/a.raw,
        server.0.host=1.2.3.4,
       [server.0.port=24007,]
       [server.0.transport=tcp,]
        server.1.host=5.6.7.8,
       [server.1.port=24008,]
       [server.1.transport=rdma,] ...

Pattern II:
 'json:{"driver":"qcow2","file":{"driver":"gluster",
       "volume":"testvol","path":"/path/a.qcow2",
       "server":[{tuple0},{tuple1}, ...{tupleN}]}}'

   driver      => 'gluster' (protocol name)
   volume      => name of the gluster volume where our VM image resides
   path        => absolute path of the image in the gluster volume

  {tuple}      => {"host":"1.2.3.4"[,"port":"24007","transport":"tcp"]}

   host        => host address (hostname/ipv4/ipv6 address)
   port        => port number on which glusterd is listening (default 24007)
   transport   => transport type used to connect to the gluster management
                   daemon; it can be tcp|rdma (default 'tcp')

Examples:
1.
 -drive driver=qcow2,file.driver=gluster,
        file.volume=testvol,file.path=/path/a.qcow2,
        file.server.0.host=1.2.3.4,
        file.server.0.port=24007,
        file.server.0.transport=tcp,
        file.server.1.host=5.6.7.8,
        file.server.1.port=24008,
        file.server.1.transport=rdma
2.
 'json:{"driver":"qcow2","file":{"driver":"gluster","volume":"testvol",
         "path":"/path/a.qcow2","server":
         [{"host":"1.2.3.4","port":"24007","transport":"tcp"},
          {"host":"4.5.6.7","port":"24008","transport":"rdma"}] } }'

This patch provides a mechanism to pass all the server addresses in the
replica set, so that if host1 is down the VM can still boot from any of the
active hosts.

This is equivalent to the backup-volfile-servers option supported by
mount.glusterfs (the FUSE way of mounting a gluster volume).
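For comparison, the FUSE mount equivalent might look like the following
sketch; the hostnames, volume name, and mount point are illustrative, not
taken from this patch:

```shell
# host1 serves the volfile; host2/host3 are consulted as fallbacks if
# host1 is unreachable (colon-separated list, per mount.glusterfs).
mount -t glusterfs -o backup-volfile-servers=host2:host3 \
      host1:/testvol /mnt/testvol
```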

Credits: Sincere thanks to Kevin Wolf <kwolf@redhat.com> and
Deepak C Shetty <deepakcs@redhat.com> for their inputs and support.

Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
---
 block/gluster.c      | 288 ++++++++++++++++++++++++++++++++++++++++++++-------
 qapi/block-core.json |   4 +-
 2 files changed, 252 insertions(+), 40 deletions(-)

diff --git a/block/gluster.c b/block/gluster.c
index 615f28b..ba209cf 100644
--- a/block/gluster.c
+++ b/block/gluster.c
@@ -12,6 +12,13 @@
 #include "qemu/uri.h"
 
 #define GLUSTER_OPT_FILENAME        "filename"
+#define GLUSTER_OPT_VOLUME          "volume"
+#define GLUSTER_OPT_PATH            "path"
+#define GLUSTER_OPT_HOST            "host"
+#define GLUSTER_OPT_PORT            "port"
+#define GLUSTER_OPT_TRANSPORT       "transport"
+#define GLUSTER_OPT_SERVER_PATTERN  "server."
+
 #define GLUSTER_DEFAULT_PORT        24007
 
 
@@ -64,6 +71,46 @@ static QemuOptsList runtime_opts = {
     },
 };
 
+static QemuOptsList runtime_json_opts = {
+    .name = "gluster_json",
+    .head = QTAILQ_HEAD_INITIALIZER(runtime_json_opts.head),
+    .desc = {
+        {
+            .name = GLUSTER_OPT_VOLUME,
+            .type = QEMU_OPT_STRING,
+            .help = "name of gluster volume where VM image resides",
+        },
+        {
+            .name = GLUSTER_OPT_PATH,
+            .type = QEMU_OPT_STRING,
+            .help = "absolute path to image file in gluster volume",
+        },
+        { /* end of list */ }
+    },
+};
+
+static QemuOptsList runtime_tuple_opts = {
+    .name = "gluster_tuple",
+    .head = QTAILQ_HEAD_INITIALIZER(runtime_tuple_opts.head),
+    .desc = {
+        {
+            .name = GLUSTER_OPT_HOST,
+            .type = QEMU_OPT_STRING,
+            .help = "host address (hostname/ipv4/ipv6 addresses)",
+        },
+        {
+            .name = GLUSTER_OPT_PORT,
+            .type = QEMU_OPT_NUMBER,
+            .help = "port number on which glusterd is listening (default 24007)",
+        },
+        {
+            .name = GLUSTER_OPT_TRANSPORT,
+            .type = QEMU_OPT_STRING,
+            .help = "transport type 'tcp' or 'rdma' (default 'tcp')",
+        },
+        { /* end of list */ }
+    },
+};
 
 static int parse_volume_options(BlockdevOptionsGluster *gconf, char *path)
 {
@@ -131,6 +178,7 @@ static int qemu_gluster_parseuri(BlockdevOptionsGluster **pgconf,
                                  const char *filename)
 {
     BlockdevOptionsGluster *gconf;
+    GlusterServer *gsconf;
     URI *uri;
     QueryParams *qp = NULL;
     bool is_unix = false;
@@ -142,23 +190,24 @@ static int qemu_gluster_parseuri(BlockdevOptionsGluster **pgconf,
     }
 
     gconf = g_new0(BlockdevOptionsGluster, 1);
-    gconf->server = g_new0(GlusterServer, 1);
+    gconf->server = g_new0(GlusterServerList, 1);
+    gconf->server->value = gsconf = g_new0(GlusterServer, 1);
 
     /* transport */
     if (!uri->scheme || !strcmp(uri->scheme, "gluster")) {
-        gconf->server->transport = GLUSTER_TRANSPORT_TCP;
+        gsconf->transport = GLUSTER_TRANSPORT_TCP;
     } else if (!strcmp(uri->scheme, "gluster+tcp")) {
-        gconf->server->transport = GLUSTER_TRANSPORT_TCP;
+        gsconf->transport = GLUSTER_TRANSPORT_TCP;
     } else if (!strcmp(uri->scheme, "gluster+unix")) {
-        gconf->server->transport = GLUSTER_TRANSPORT_UNIX;
+        gsconf->transport = GLUSTER_TRANSPORT_UNIX;
         is_unix = true;
     } else if (!strcmp(uri->scheme, "gluster+rdma")) {
-        gconf->server->transport = GLUSTER_TRANSPORT_RDMA;
+        gsconf->transport = GLUSTER_TRANSPORT_RDMA;
     } else {
         ret = -EINVAL;
         goto out;
     }
-    gconf->server->has_transport = true;
+    gsconf->has_transport = true;
 
     ret = parse_volume_options(gconf, uri->path);
     if (ret < 0) {
@@ -180,15 +229,15 @@ static int qemu_gluster_parseuri(BlockdevOptionsGluster **pgconf,
             ret = -EINVAL;
             goto out;
         }
-        gconf->server->host = g_strdup(qp->p[0].value);
+        gsconf->host = g_strdup(qp->p[0].value);
     } else {
-        gconf->server->host = g_strdup(uri->server ? uri->server : "localhost");
+        gsconf->host = g_strdup(uri->server ? uri->server : "localhost");
         if (uri->port) {
-            gconf->server->port = uri->port;
+            gsconf->port = uri->port;
         } else {
-            gconf->server->port = GLUSTER_DEFAULT_PORT;
+            gsconf->port = GLUSTER_DEFAULT_PORT;
         }
-        gconf->server->has_port = true;
+        gsconf->has_port = true;
     }
 
     *pgconf = gconf;
@@ -204,32 +253,26 @@ out:
     return ret;
 }
 
-static struct glfs *qemu_gluster_init(BlockdevOptionsGluster **pgconf,
-                                      const char *filename, Error **errp)
+static struct glfs *qemu_gluster_glfs_init(BlockdevOptionsGluster *gconf,
+                                           Error **errp)
 {
     struct glfs *glfs;
     int ret;
     int old_errno;
-    BlockdevOptionsGluster *gconf;
-
-    ret = qemu_gluster_parseuri(&gconf, filename);
-    if (ret < 0) {
-        error_setg(errp, "Usage: file=gluster[+transport]://[host[:port]]/"
-                         "volume/path[?socket=...]");
-        errno = -ret;
-        goto out;
-    }
+    GlusterServerList *server;
 
     glfs = glfs_new(gconf->volume);
     if (!glfs) {
         goto out;
     }
 
-    ret = glfs_set_volfile_server(glfs,
-                                  GlusterTransport_lookup[gconf->server->transport],
-                                  gconf->server->host, gconf->server->port);
-    if (ret < 0) {
-        goto out;
+    for (server = gconf->server; server; server = server->next) {
+        ret = glfs_set_volfile_server(glfs,
+                                      GlusterTransport_lookup[server->value->transport],
+                                      server->value->host, server->value->port);
+        if (ret < 0) {
+            goto out;
+        }
     }
 
     /*
@@ -244,10 +287,9 @@ static struct glfs *qemu_gluster_init(BlockdevOptionsGluster **pgconf,
     ret = glfs_init(glfs);
     if (ret) {
         error_setg_errno(errp, errno,
-                         "Gluster connection failed for host=%s port=%ld "
-                         "volume=%s path=%s transport=%s", gconf->server->host,
-                         gconf->server->port, gconf->volume, gconf->path,
-                         GlusterTransport_lookup[gconf->server->transport]);
+                         "Gluster connection failed for given hosts "
+                         "volume:'%s' path:'%s' host1:'%s'", gconf->volume,
+                         gconf->path, gconf->server->value->host);
 
         /* glfs_init sometimes doesn't set errno although docs suggest that */
         if (errno == 0)
@@ -255,7 +297,6 @@ static struct glfs *qemu_gluster_init(BlockdevOptionsGluster **pgconf,
 
         goto out;
     }
-    *pgconf = gconf;
     return glfs;
 
 out:
@@ -267,6 +308,177 @@ out:
     return NULL;
 }
 
+static int parse_transport_option(const char *opt)
+{
+    int i;
+
+    if (!opt) {
+        /* Set tcp as default */
+        return GLUSTER_TRANSPORT_TCP;
+    }
+
+    for (i = 0; i < GLUSTER_TRANSPORT_MAX; i++) {
+        if (!strcmp(opt, GlusterTransport_lookup[i])) {
+            return i;
+        }
+    }
+
+    return i;
+}
+
+/*
+ * Convert the json formatted command line into qapi.
+*/
+static int qemu_gluster_parsejson(BlockdevOptionsGluster **pgconf,
+                                  QDict *options)
+{
+    QemuOpts *opts;
+    BlockdevOptionsGluster *gconf = NULL;
+    GlusterServer *gsconf;
+    GlusterServerList **prev;
+    GlusterServerList *curr = NULL;
+    QDict *backing_options = NULL;
+    Error *local_err = NULL;
+    char *str = NULL;
+    const char *ptr;
+    size_t num_servers;
+    int i;
+
+    /* create opts info from runtime_json_opts list */
+    opts = qemu_opts_create(&runtime_json_opts, NULL, 0, &error_abort);
+    qemu_opts_absorb_qdict(opts, options, &local_err);
+    if (local_err) {
+        goto out;
+    }
+
+    gconf = g_new0(BlockdevOptionsGluster, 1);
+
+    num_servers = qdict_array_entries(options, GLUSTER_OPT_SERVER_PATTERN);
+    if (num_servers < 1) {
+        error_setg(&local_err, "qemu_gluster: please provide 'server' "
+                               "option with valid fields in array of tuples");
+        goto out;
+    }
+
+    ptr = qemu_opt_get(opts, GLUSTER_OPT_VOLUME);
+    if (!ptr) {
+        error_setg(&local_err, "qemu_gluster: please provide 'volume' "
+                               "option");
+        goto out;
+    }
+    gconf->volume = g_strdup(ptr);
+
+    ptr = qemu_opt_get(opts, GLUSTER_OPT_PATH);
+    if (!ptr) {
+        error_setg(&local_err, "qemu_gluster: please provide 'path' "
+                               "option");
+        goto out;
+    }
+    gconf->path = g_strdup(ptr);
+
+    qemu_opts_del(opts);
+
+    /* create opts info from runtime_tuple_opts list */
+    for (i = 0; i < num_servers; i++) {
+        opts = qemu_opts_create(&runtime_tuple_opts, NULL, 0, &error_abort);
+         str = g_strdup_printf(GLUSTER_OPT_SERVER_PATTERN"%d.", i);
+        qdict_extract_subqdict(options, &backing_options, str);
+        qemu_opts_absorb_qdict(opts, backing_options, &local_err);
+        if (local_err) {
+            goto out;
+        }
+        qdict_del(backing_options, str);
+
+        ptr = qemu_opt_get(opts, GLUSTER_OPT_HOST);
+        if (!ptr) {
+            error_setg(&local_err, "qemu_gluster: server.{tuple.%d} "
+                                   "requires 'host' option", i);
+            goto out;
+        }
+
+        gsconf = g_new0(GlusterServer, 1);
+
+        gsconf->host = g_strdup(ptr);
+
+        ptr = qemu_opt_get(opts, GLUSTER_OPT_TRANSPORT);
+        /* check whether transport type specified in json command is valid */
+        gsconf->transport = parse_transport_option(ptr);
+        if (gsconf->transport == GLUSTER_TRANSPORT_MAX) {
+            error_setg(&local_err, "qemu_gluster: please set 'transport'"
+                                   " type in tuple.%d as tcp or rdma", i);
+            g_free(gsconf);
+            goto out;
+        }
+        gsconf->has_transport = true;
+
+        gsconf->port = qemu_opt_get_number(opts, GLUSTER_OPT_PORT,
+                                           GLUSTER_DEFAULT_PORT);
+        gsconf->has_port = true;
+
+        if (gconf->server == NULL) {
+            gconf->server = g_new0(GlusterServerList, 1);
+            gconf->server->value = gsconf;
+            curr = gconf->server;
+        } else {
+            prev = &curr->next;
+            curr = g_new0(GlusterServerList, 1);
+            curr->value = gsconf;
+            *prev = curr;
+        }
+
+        qemu_opts_del(opts);
+    }
+
+    *pgconf = gconf;
+    g_free(str);
+    return 0;
+
+out:
+    error_report_err(local_err);
+    qemu_opts_del(opts);
+    qapi_free_BlockdevOptionsGluster(gconf);
+    if (str) {
+        qdict_del(backing_options, str);
+        g_free(str);
+    }
+    errno = EINVAL;
+    return -errno;
+}
+
+static struct glfs *qemu_gluster_init(BlockdevOptionsGluster **gconf,
+                                      const char *filename,
+                                      QDict *options, Error **errp)
+{
+    int ret;
+
+    if (filename) {
+        ret = qemu_gluster_parseuri(gconf, filename);
+        if (ret < 0) {
+            error_setg(errp, "Usage: file=gluster[+transport]://[host[:port]]/"
+                             "volume/path[?socket=...]");
+            errno = -ret;
+            return NULL;
+        }
+    } else {
+        ret = qemu_gluster_parsejson(gconf, options);
+        if (ret < 0) {
+            error_setg(errp, "Usage: "
+                             "-drive driver=qcow2,file.driver=gluster,"
+                             "file.volume=testvol,file.path=/path/a.qcow2,"
+                             "file.server.0.host=1.2.3.4,"
+                             "[file.server.0.port=24007,]"
+                             "[file.server.0.transport=tcp,]"
+                             "file.server.1.host=5.6.7.8,"
+                             "[file.server.1.port=24008,]"
+                             "[file.server.1.transport=rdma,] ...");
+            errno = -ret;
+            return NULL;
+        }
+    }
+
+    return qemu_gluster_glfs_init(*gconf, errp);
+}
+
 static void qemu_gluster_complete_aio(void *opaque)
 {
     GlusterAIOCB *acb = (GlusterAIOCB *)opaque;
@@ -332,7 +544,7 @@ static int qemu_gluster_open(BlockDriverState *bs, QDict *options,
     }
 
     filename = qemu_opt_get(opts, "filename");
-    s->glfs = qemu_gluster_init(&gconf, filename, errp);
+    s->glfs = qemu_gluster_init(&gconf, filename, options, errp);
     if (!s->glfs) {
         ret = -errno;
         goto out;
@@ -376,7 +588,7 @@ static int qemu_gluster_reopen_prepare(BDRVReopenState *state,
 
     qemu_gluster_parse_flags(state->flags, &open_flags);
 
-    reop_s->glfs = qemu_gluster_init(&gconf, state->bs->filename, errp);
+    reop_s->glfs = qemu_gluster_init(&gconf, state->bs->filename, NULL, errp);
     if (reop_s->glfs == NULL) {
         ret = -errno;
         goto exit;
@@ -508,7 +720,7 @@ static int qemu_gluster_create(const char *filename,
     int64_t total_size = 0;
     char *tmp = NULL;
 
-    glfs = qemu_gluster_init(&gconf, filename, errp);
+    glfs = qemu_gluster_init(&gconf, filename, NULL, errp);
     if (!glfs) {
         ret = -errno;
         goto out;
@@ -725,7 +937,7 @@ static BlockDriver bdrv_gluster = {
     .format_name                  = "gluster",
     .protocol_name                = "gluster",
     .instance_size                = sizeof(BDRVGlusterState),
-    .bdrv_needs_filename          = true,
+    .bdrv_needs_filename          = false,
     .bdrv_file_open               = qemu_gluster_open,
     .bdrv_reopen_prepare          = qemu_gluster_reopen_prepare,
     .bdrv_reopen_commit           = qemu_gluster_reopen_commit,
@@ -752,7 +964,7 @@ static BlockDriver bdrv_gluster_tcp = {
     .format_name                  = "gluster",
     .protocol_name                = "gluster+tcp",
     .instance_size                = sizeof(BDRVGlusterState),
-    .bdrv_needs_filename          = true,
+    .bdrv_needs_filename          = false,
     .bdrv_file_open               = qemu_gluster_open,
     .bdrv_reopen_prepare          = qemu_gluster_reopen_prepare,
     .bdrv_reopen_commit           = qemu_gluster_reopen_commit,
@@ -806,7 +1018,7 @@ static BlockDriver bdrv_gluster_rdma = {
     .format_name                  = "gluster",
     .protocol_name                = "gluster+rdma",
     .instance_size                = sizeof(BDRVGlusterState),
-    .bdrv_needs_filename          = true,
+    .bdrv_needs_filename          = false,
     .bdrv_file_open               = qemu_gluster_open,
     .bdrv_reopen_prepare          = qemu_gluster_reopen_prepare,
     .bdrv_reopen_commit           = qemu_gluster_reopen_commit,
diff --git a/qapi/block-core.json b/qapi/block-core.json
index bbefe43..c28cb9f 100644
--- a/qapi/block-core.json
+++ b/qapi/block-core.json
@@ -1841,14 +1841,14 @@
 #
 # @path:     absolute path to image file in gluster volume
 #
-# @servers:  gluster server description
+# @servers:  one or more gluster server descriptions
 #
 # Since: 2.5
 ##
 { 'struct': 'BlockdevOptionsGluster',
   'data': { 'volume': 'str',
             'path': 'str',
-            'server': 'GlusterServer' } }
+            'server': [ 'GlusterServer' ] } }
 
 ##
 # @BlockdevOptions
-- 
2.1.0


* Re: [Qemu-devel] [PATCH 3/4] block/gluster: using new qapi schema
  2015-11-12 10:22 ` [Qemu-devel] [PATCH 3/4] block/gluster: using new qapi schema Prasanna Kumar Kalever
@ 2015-11-12 20:00   ` Jeff Cody
  2015-11-12 21:16   ` Eric Blake
                     ` (3 subsequent siblings)
  4 siblings, 0 replies; 18+ messages in thread
From: Jeff Cody @ 2015-11-12 20:00 UTC (permalink / raw)
  To: Prasanna Kumar Kalever
  Cc: kwolf, pkrempa, stefanha, qemu-devel, deepakcs, bharata, rtalur

On Thu, Nov 12, 2015 at 03:52:07PM +0530, Prasanna Kumar Kalever wrote:
> this patch adds GlusterConf to qapi/block-core.json
> 
> Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
> ---
>  block/gluster.c      | 104 +++++++++++++++++++++++++--------------------------
>  qapi/block-core.json |  60 +++++++++++++++++++++++++++--
>  2 files changed, 109 insertions(+), 55 deletions(-)
> 
> diff --git a/block/gluster.c b/block/gluster.c
> index ededda2..615f28b 100644
> --- a/block/gluster.c
> +++ b/block/gluster.c
> @@ -11,6 +11,10 @@
>  #include "block/block_int.h"
>  #include "qemu/uri.h"
>  
> +#define GLUSTER_OPT_FILENAME        "filename"
> +#define GLUSTER_DEFAULT_PORT        24007
> +
> +
>  typedef struct GlusterAIOCB {
>      int64_t size;
>      int ret;
> @@ -29,15 +33,6 @@ typedef struct BDRVGlusterReopenState {
>      struct glfs_fd *fd;
>  } BDRVGlusterReopenState;
>  
> -typedef struct GlusterConf {
> -    char *host;
> -    int port;
> -    char *volume;
> -    char *path;
> -    char *transport;
> -} GlusterConf;
> -
> -
>  static QemuOptsList qemu_gluster_create_opts = {
>      .name = "qemu-gluster-create-opts",
>      .head = QTAILQ_HEAD_INITIALIZER(qemu_gluster_create_opts.head),
> @@ -61,7 +56,7 @@ static QemuOptsList runtime_opts = {
>      .head = QTAILQ_HEAD_INITIALIZER(runtime_opts.head),
>      .desc = {
>          {
> -            .name = "filename",
> +            .name = GLUSTER_OPT_FILENAME,
>              .type = QEMU_OPT_STRING,
>              .help = "URL to the gluster image",
>          },
> @@ -70,18 +65,7 @@ static QemuOptsList runtime_opts = {
>  };
>  
>  
> -static void qemu_gluster_gconf_free(GlusterConf *gconf)
> -{
> -    if (gconf) {
> -        g_free(gconf->host);
> -        g_free(gconf->volume);
> -        g_free(gconf->path);
> -        g_free(gconf->transport);
> -        g_free(gconf);
> -    }
> -}
> -
> -static int parse_volume_options(GlusterConf *gconf, char *path)
> +static int parse_volume_options(BlockdevOptionsGluster *gconf, char *path)
>  {
>      char *p, *q;
>  
> @@ -143,8 +127,10 @@ static int parse_volume_options(GlusterConf *gconf, char *path)
>   * file=gluster+unix:///testvol/dir/a.img?socket=/tmp/glusterd.socket
>   * file=gluster+rdma://1.2.3.4:24007/testvol/a.img
>   */
> -static int qemu_gluster_parseuri(GlusterConf *gconf, const char *filename)
> +static int qemu_gluster_parseuri(BlockdevOptionsGluster **pgconf,
> +                                 const char *filename)
>  {
> +    BlockdevOptionsGluster *gconf;
>      URI *uri;
>      QueryParams *qp = NULL;
>      bool is_unix = false;
> @@ -155,20 +141,24 @@ static int qemu_gluster_parseuri(GlusterConf *gconf, const char *filename)
>          return -EINVAL;
>      }
>  
> +    gconf = g_new0(BlockdevOptionsGluster, 1);
> +    gconf->server = g_new0(GlusterServer, 1);
> +
>      /* transport */
>      if (!uri->scheme || !strcmp(uri->scheme, "gluster")) {
> -        gconf->transport = g_strdup("tcp");
> +        gconf->server->transport = GLUSTER_TRANSPORT_TCP;
>      } else if (!strcmp(uri->scheme, "gluster+tcp")) {
> -        gconf->transport = g_strdup("tcp");
> +        gconf->server->transport = GLUSTER_TRANSPORT_TCP;
>      } else if (!strcmp(uri->scheme, "gluster+unix")) {
> -        gconf->transport = g_strdup("unix");
> +        gconf->server->transport = GLUSTER_TRANSPORT_UNIX;
>          is_unix = true;
>      } else if (!strcmp(uri->scheme, "gluster+rdma")) {
> -        gconf->transport = g_strdup("rdma");
> +        gconf->server->transport = GLUSTER_TRANSPORT_RDMA;
>      } else {
>          ret = -EINVAL;
>          goto out;
>      }
> +    gconf->server->has_transport = true;
>  
>      ret = parse_volume_options(gconf, uri->path);
>      if (ret < 0) {
> @@ -190,13 +180,23 @@ static int qemu_gluster_parseuri(GlusterConf *gconf, const char *filename)
>              ret = -EINVAL;
>              goto out;
>          }
> -        gconf->host = g_strdup(qp->p[0].value);
> +        gconf->server->host = g_strdup(qp->p[0].value);
>      } else {
> -        gconf->host = g_strdup(uri->server ? uri->server : "localhost");
> -        gconf->port = uri->port;
> +        gconf->server->host = g_strdup(uri->server ? uri->server : "localhost");
> +        if (uri->port) {
> +            gconf->server->port = uri->port;
> +        } else {
> +            gconf->server->port = GLUSTER_DEFAULT_PORT;
> +        }
> +        gconf->server->has_port = true;
>      }
>  
> +    *pgconf = gconf;
> +
>  out:
> +    if (ret < 0) {
> +        qapi_free_BlockdevOptionsGluster(gconf);
> +    }
>      if (qp) {
>          query_params_free(qp);
>      }
> @@ -204,14 +204,15 @@ out:
>      return ret;
>  }
>  
> -static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
> -                                      Error **errp)
> +static struct glfs *qemu_gluster_init(BlockdevOptionsGluster **pgconf,
> +                                      const char *filename, Error **errp)
>  {
> -    struct glfs *glfs = NULL;
> +    struct glfs *glfs;

This must be initialized to NULL in this patch, because...

>      int ret;
>      int old_errno;
> +    BlockdevOptionsGluster *gconf;
>  
> -    ret = qemu_gluster_parseuri(gconf, filename);
> +    ret = qemu_gluster_parseuri(&gconf, filename);
>      if (ret < 0) {
>          error_setg(errp, "Usage: file=gluster[+transport]://[host[:port]]/"
>                           "volume/path[?socket=...]");


This error path jumps to out before glfs is allocated (not shown here),
so the cleanup there would read an uninitialized glfs pointer.

After patch 4 it isn't an issue anymore, but in this patch it breaks
compilation, and so also breaks git bisect.
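The hazard being pointed out reduces to a few lines. This is a hedged
sketch of the goto-cleanup idiom, not QEMU's actual code; all names
(`do_init`, `buf`) are illustrative:

```c
#include <stdlib.h>

/* If the local pointer were not initialized to NULL, an early
 * `goto out` taken before the allocation would hand an indeterminate
 * pointer to the cleanup path. */
static int do_init(int fail_early)
{
    char *buf = NULL; /* without "= NULL", the early goto is undefined behavior */
    int ret = -1;

    if (fail_early) {
        goto out;     /* taken before buf is ever assigned */
    }

    buf = malloc(16);
    if (!buf) {
        goto out;
    }
    ret = 0;          /* ... buf would be used here ... */

out:
    free(buf);        /* free(NULL) is a no-op, so this is always safe */
    return ret;
}
```

The same reasoning applies to `glfs` above: initializing it to NULL keeps
the shared `out:` cleanup safe no matter which `goto` is taken.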

> @@ -224,8 +225,9 @@ static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
>          goto out;
>      }
>  
> -    ret = glfs_set_volfile_server(glfs, gconf->transport, gconf->host,
> -            gconf->port);
> +    ret = glfs_set_volfile_server(glfs,
> +                                  GlusterTransport_lookup[gconf->server->transport],
> +                                  gconf->server->host, gconf->server->port);
>      if (ret < 0) {
>          goto out;
>      }
> @@ -242,10 +244,10 @@ static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
>      ret = glfs_init(glfs);
>      if (ret) {
>          error_setg_errno(errp, errno,
> -                         "Gluster connection failed for host=%s port=%d "
> -                         "volume=%s path=%s transport=%s", gconf->host,
> -                         gconf->port, gconf->volume, gconf->path,
> -                         gconf->transport);
> +                         "Gluster connection failed for host=%s port=%ld "
> +                         "volume=%s path=%s transport=%s", gconf->server->host,
> +                         gconf->server->port, gconf->volume, gconf->path,
> +                         GlusterTransport_lookup[gconf->server->transport]);
>  
>          /* glfs_init sometimes doesn't set errno although docs suggest that */
>          if (errno == 0)
> @@ -253,6 +255,7 @@ static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
>  
>          goto out;
>      }
> +    *pgconf = gconf;
>      return glfs;
>  
>  out:
> @@ -315,7 +318,7 @@ static int qemu_gluster_open(BlockDriverState *bs, QDict *options,
>      BDRVGlusterState *s = bs->opaque;
>      int open_flags = 0;
>      int ret = 0;
> -    GlusterConf *gconf = g_new0(GlusterConf, 1);
> +    BlockdevOptionsGluster *gconf = NULL;
>      QemuOpts *opts;
>      Error *local_err = NULL;
>      const char *filename;
> @@ -329,8 +332,7 @@ static int qemu_gluster_open(BlockDriverState *bs, QDict *options,
>      }
>  
>      filename = qemu_opt_get(opts, "filename");
> -
> -    s->glfs = qemu_gluster_init(gconf, filename, errp);
> +    s->glfs = qemu_gluster_init(&gconf, filename, errp);
>      if (!s->glfs) {
>          ret = -errno;
>          goto out;
> @@ -345,7 +347,7 @@ static int qemu_gluster_open(BlockDriverState *bs, QDict *options,
>  
>  out:
>      qemu_opts_del(opts);
> -    qemu_gluster_gconf_free(gconf);
> +    qapi_free_BlockdevOptionsGluster(gconf);
>      if (!ret) {
>          return ret;
>      }
> @@ -363,7 +365,7 @@ static int qemu_gluster_reopen_prepare(BDRVReopenState *state,
>  {
>      int ret = 0;
>      BDRVGlusterReopenState *reop_s;
> -    GlusterConf *gconf = NULL;
> +    BlockdevOptionsGluster *gconf = NULL;
>      int open_flags = 0;
>  
>      assert(state != NULL);
> @@ -374,9 +376,7 @@ static int qemu_gluster_reopen_prepare(BDRVReopenState *state,
>  
>      qemu_gluster_parse_flags(state->flags, &open_flags);
>  
> -    gconf = g_new0(GlusterConf, 1);
> -
> -    reop_s->glfs = qemu_gluster_init(gconf, state->bs->filename, errp);
> +    reop_s->glfs = qemu_gluster_init(&gconf, state->bs->filename, errp);
>      if (reop_s->glfs == NULL) {
>          ret = -errno;
>          goto exit;
> @@ -391,7 +391,7 @@ static int qemu_gluster_reopen_prepare(BDRVReopenState *state,
>  
>  exit:
>      /* state->opaque will be freed in either the _abort or _commit */
> -    qemu_gluster_gconf_free(gconf);
> +    qapi_free_BlockdevOptionsGluster(gconf);
>      return ret;
>  }
>  
> @@ -500,15 +500,15 @@ static inline int qemu_gluster_zerofill(struct glfs_fd *fd, int64_t offset,
>  static int qemu_gluster_create(const char *filename,
>                                 QemuOpts *opts, Error **errp)
>  {
> +    BlockdevOptionsGluster *gconf = NULL;
>      struct glfs *glfs;
>      struct glfs_fd *fd;
>      int ret = 0;
>      int prealloc = 0;
>      int64_t total_size = 0;
>      char *tmp = NULL;
> -    GlusterConf *gconf = g_new0(GlusterConf, 1);
>  
> -    glfs = qemu_gluster_init(gconf, filename, errp);
> +    glfs = qemu_gluster_init(&gconf, filename, errp);
>      if (!glfs) {
>          ret = -errno;
>          goto out;
> @@ -548,7 +548,7 @@ static int qemu_gluster_create(const char *filename,
>      }
>  out:
>      g_free(tmp);
> -    qemu_gluster_gconf_free(gconf);
> +    qapi_free_BlockdevOptionsGluster(gconf);
>      if (glfs) {
>          glfs_fini(glfs);
>      }
> diff --git a/qapi/block-core.json b/qapi/block-core.json
> index 425fdab..bbefe43 100644
> --- a/qapi/block-core.json
> +++ b/qapi/block-core.json
> @@ -1375,13 +1375,14 @@
>  # Drivers that are supported in block device operations.
>  #
>  # @host_device, @host_cdrom: Since 2.1
> +# @gluster: Since 2.5
>  #
>  # Since: 2.0
>  ##
>  { 'enum': 'BlockdevDriver',
>    'data': [ 'archipelago', 'blkdebug', 'blkverify', 'bochs', 'cloop',
> -            'dmg', 'file', 'ftp', 'ftps', 'host_cdrom', 'host_device',
> -            'http', 'https', 'null-aio', 'null-co', 'parallels',
> +            'dmg', 'file', 'ftp', 'ftps', 'gluster', 'host_cdrom',
> +            'host_device', 'http', 'https', 'null-aio', 'null-co', 'parallels',
>              'qcow', 'qcow2', 'qed', 'quorum', 'raw', 'tftp', 'vdi', 'vhdx',
>              'vmdk', 'vpc', 'vvfat' ] }
>  
> @@ -1797,6 +1798,59 @@
>              '*read-pattern': 'QuorumReadPattern' } }
>  
>  ##
> +# @GlusterTransport
> +#
> +# An enumeration of Gluster transport type
> +#
> +# @tcp:   TCP   - Transmission Control Protocol
> +#
> +# @unix:  UNIX  - Unix domain socket
> +#
> +# @rdma:  RDMA  - Remote direct memory access
> +#
> +# Since: 2.5
> +##
> +{ 'enum': 'GlusterTransport', 'data': [ 'tcp', 'unix', 'rdma'] }
> +
> +##
> +# @GlusterServer
> +#
> +# Details for connecting to a gluster server
> +#
> +# @host:       host address (hostname/ipv4/ipv6 addresses)
> +#
> +# @port:       #optional port number on which glusterd is listening
> +#               (default 24007)
> +#
> +# @transport:  #optional transport type used to connect to gluster management
> +#               daemon (default 'tcp')
> +#
> +# Since: 2.5
> +##
> +{ 'struct': 'GlusterServer',
> +  'data': { 'host': 'str',
> +            '*port': 'int',
> +            '*transport': 'GlusterTransport' } }
> +
> +##
> +# @BlockdevOptionsGluster
> +#
> +# Driver specific block device options for Gluster
> +#
> +# @volume:   name of gluster volume where VM image resides
> +#
> +# @path:     absolute path to image file in gluster volume
> +#
> +# @servers:  gluster server description
> +#
> +# Since: 2.5
> +##
> +{ 'struct': 'BlockdevOptionsGluster',
> +  'data': { 'volume': 'str',
> +            'path': 'str',
> +            'server': 'GlusterServer' } }
> +
> +##
>  # @BlockdevOptions
>  #
>  # Options for creating a block device.
> @@ -1816,7 +1870,7 @@
>        'file':       'BlockdevOptionsFile',
>        'ftp':        'BlockdevOptionsFile',
>        'ftps':       'BlockdevOptionsFile',
> -# TODO gluster: Wait for structured options
> +      'gluster':    'BlockdevOptionsGluster',
>        'host_cdrom': 'BlockdevOptionsFile',
>        'host_device':'BlockdevOptionsFile',
>        'http':       'BlockdevOptionsFile',
> -- 
> 2.1.0
> 

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [Qemu-devel] [PATCH 4/4] block/gluster: add support for multiple gluster servers
  2015-11-12 10:22 ` [Qemu-devel] [PATCH 4/4] block/gluster: add support for multiple gluster servers Prasanna Kumar Kalever
@ 2015-11-12 20:00   ` Jeff Cody
  2015-11-12 22:36   ` Eric Blake
  1 sibling, 0 replies; 18+ messages in thread
From: Jeff Cody @ 2015-11-12 20:00 UTC (permalink / raw)
  To: Prasanna Kumar Kalever
  Cc: kwolf, pkrempa, stefanha, qemu-devel, deepakcs, bharata, rtalur

On Thu, Nov 12, 2015 at 03:52:08PM +0530, Prasanna Kumar Kalever wrote:
> This patch adds a way to specify multiple volfile servers to the gluster
> block backend of QEMU with tcp|rdma transport types and their port numbers.
> 
> Problem:
> 
> Currently VM Image on gluster volume is specified like this:
> 
> file=gluster[+tcp]://host[:port]/testvol/a.img
> 
> Assuming we have three hosts in trusted pool with replica 3 volume
> in action and unfortunately host (mentioned in the command above) went down
> for some reason, since the volume is replica 3 we now have other 2 hosts
> active from which we can boot the VM.
> 
> But currently there is no mechanism to pass the other 2 gluster host
> addresses to qemu.
> 
> Solution:
> 
> New way of specifying VM Image on gluster volume with volfile servers:
> (We still support old syntax to maintain backward compatibility)
> 
> Basic command line syntax looks like:
> 
> Pattern I:
>  -drive driver=gluster,
>         volume=testvol,path=/path/a.raw,
>         server.0.host=1.2.3.4,
>        [server.0.port=24007,]
>        [server.0.transport=tcp,]
>         server.1.host=5.6.7.8,
>        [server.1.port=24008,]
>        [server.1.transport=rdma,] ...
> 
> Pattern II:
>  'json:{"driver":"qcow2","file":{"driver":"gluster",
>        "volume":"testvol","path":"/path/a.qcow2",
>        "server":[{tuple0},{tuple1}, ...{tupleN}]}}'
> 
>    driver      => 'gluster' (protocol name)
>    volume      => name of gluster volume where our VM image resides
>    path        => absolute path of image in gluster volume
> 
>   {tuple}      => {"host":"1.2.3.4"[,"port":"24007","transport":"tcp"]}
> 
>    host        => host address (hostname/ipv4/ipv6 addresses)
>    port        => port number on which glusterd is listening. (default 24007)
>    transport   => transport type used to connect to gluster management daemon,
>                    it can be tcp|rdma (default 'tcp')
> 
> Examples:
> 1.
>  -drive driver=qcow2,file.driver=gluster,
>         file.volume=testvol,file.path=/path/a.qcow2,
>         file.server.0.host=1.2.3.4,
>         file.server.0.port=24007,
>         file.server.0.transport=tcp,
>         file.server.1.host=5.6.7.8,
>         file.server.1.port=24008,
>         file.server.1.transport=rdma
> 2.
>  'json:{"driver":"qcow2","file":{"driver":"gluster","volume":"testvol",
>          "path":"/path/a.qcow2","server":
>          [{"host":"1.2.3.4","port":"24007","transport":"tcp"},
>           {"host":"4.5.6.7","port":"24008","transport":"rdma"}] } }'
> 
> This patch gives a mechanism to provide all the server addresses, which are in
> replica set, so in case host1 is down VM can still boot from any of the
> active hosts.
> 
> This is equivalent to the backup-volfile-servers option supported by
> mount.glusterfs (FUSE way of mounting gluster volume)
> 
> Credits: Sincere thanks to Kevin Wolf <kwolf@redhat.com> and
> "Deepak C Shetty" <deepakcs@redhat.com> for inputs and all their support
> 
> Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
> ---
>  block/gluster.c      | 288 ++++++++++++++++++++++++++++++++++++++++++++-------
>  qapi/block-core.json |   4 +-
>  2 files changed, 252 insertions(+), 40 deletions(-)
> 
> diff --git a/block/gluster.c b/block/gluster.c
> index 615f28b..ba209cf 100644
> --- a/block/gluster.c
> +++ b/block/gluster.c
> @@ -12,6 +12,13 @@
>  #include "qemu/uri.h"
>  
>  #define GLUSTER_OPT_FILENAME        "filename"
> +#define GLUSTER_OPT_VOLUME          "volume"
> +#define GLUSTER_OPT_PATH            "path"
> +#define GLUSTER_OPT_HOST            "host"
> +#define GLUSTER_OPT_PORT            "port"
> +#define GLUSTER_OPT_TRANSPORT       "transport"
> +#define GLUSTER_OPT_SERVER_PATTERN  "server."
> +
>  #define GLUSTER_DEFAULT_PORT        24007
>  
>  
> @@ -64,6 +71,46 @@ static QemuOptsList runtime_opts = {
>      },
>  };
>  
> +static QemuOptsList runtime_json_opts = {
> +    .name = "gluster_json",
> +    .head = QTAILQ_HEAD_INITIALIZER(runtime_json_opts.head),
> +    .desc = {
> +        {
> +            .name = GLUSTER_OPT_VOLUME,
> +            .type = QEMU_OPT_STRING,
> +            .help = "name of gluster volume where VM image resides",
> +        },
> +        {
> +            .name = GLUSTER_OPT_PATH,
> +            .type = QEMU_OPT_STRING,
> +            .help = "absolute path to image file in gluster volume",
> +        },
> +        { /* end of list */ }
> +    },
> +};
> +
> +static QemuOptsList runtime_tuple_opts = {
> +    .name = "gluster_tuple",
> +    .head = QTAILQ_HEAD_INITIALIZER(runtime_tuple_opts.head),
> +    .desc = {
> +        {
> +            .name = GLUSTER_OPT_HOST,
> +            .type = QEMU_OPT_STRING,
> +            .help = "host address (hostname/ipv4/ipv6 addresses)",
> +        },
> +        {
> +            .name = GLUSTER_OPT_PORT,
> +            .type = QEMU_OPT_NUMBER,
> +            .help = "port number on which glusterd is listening (default 24007)",
> +        },
> +        {
> +            .name = GLUSTER_OPT_TRANSPORT,
> +            .type = QEMU_OPT_STRING,
> +            .help = "transport type 'tcp' or 'rdma' (default 'tcp')",
> +        },
> +        { /* end of list */ }
> +    },
> +};
>  
>  static int parse_volume_options(BlockdevOptionsGluster *gconf, char *path)
>  {
> @@ -131,6 +178,7 @@ static int qemu_gluster_parseuri(BlockdevOptionsGluster **pgconf,
>                                   const char *filename)
>  {
>      BlockdevOptionsGluster *gconf;
> +    GlusterServer *gsconf;
>      URI *uri;
>      QueryParams *qp = NULL;
>      bool is_unix = false;
> @@ -142,23 +190,24 @@ static int qemu_gluster_parseuri(BlockdevOptionsGluster **pgconf,
>      }
>  
>      gconf = g_new0(BlockdevOptionsGluster, 1);
> -    gconf->server = g_new0(GlusterServer, 1);
> +    gconf->server = g_new0(GlusterServerList, 1);
> +    gconf->server->value = gsconf = g_new0(GlusterServer, 1);
>  
>      /* transport */
>      if (!uri->scheme || !strcmp(uri->scheme, "gluster")) {
> -        gconf->server->transport = GLUSTER_TRANSPORT_TCP;
> +        gsconf->transport = GLUSTER_TRANSPORT_TCP;
>      } else if (!strcmp(uri->scheme, "gluster+tcp")) {
> -        gconf->server->transport = GLUSTER_TRANSPORT_TCP;
> +        gsconf->transport = GLUSTER_TRANSPORT_TCP;
>      } else if (!strcmp(uri->scheme, "gluster+unix")) {
> -        gconf->server->transport = GLUSTER_TRANSPORT_UNIX;
> +        gsconf->transport = GLUSTER_TRANSPORT_UNIX;
>          is_unix = true;
>      } else if (!strcmp(uri->scheme, "gluster+rdma")) {
> -        gconf->server->transport = GLUSTER_TRANSPORT_RDMA;
> +        gsconf->transport = GLUSTER_TRANSPORT_RDMA;
>      } else {
>          ret = -EINVAL;
>          goto out;
>      }
> -    gconf->server->has_transport = true;
> +    gsconf->has_transport = true;
>  
>      ret = parse_volume_options(gconf, uri->path);
>      if (ret < 0) {
> @@ -180,15 +229,15 @@ static int qemu_gluster_parseuri(BlockdevOptionsGluster **pgconf,
>              ret = -EINVAL;
>              goto out;
>          }
> -        gconf->server->host = g_strdup(qp->p[0].value);
> +        gsconf->host = g_strdup(qp->p[0].value);
>      } else {
> -        gconf->server->host = g_strdup(uri->server ? uri->server : "localhost");
> +        gsconf->host = g_strdup(uri->server ? uri->server : "localhost");
>          if (uri->port) {
> -            gconf->server->port = uri->port;
> +            gsconf->port = uri->port;
>          } else {
> -            gconf->server->port = GLUSTER_DEFAULT_PORT;
> +            gsconf->port = GLUSTER_DEFAULT_PORT;
>          }
> -        gconf->server->has_port = true;
> +        gsconf->has_port = true;
>      }
>  
>      *pgconf = gconf;
> @@ -204,32 +253,26 @@ out:
>      return ret;
>  }
>  
> -static struct glfs *qemu_gluster_init(BlockdevOptionsGluster **pgconf,
> -                                      const char *filename, Error **errp)
> +static struct glfs *qemu_gluster_glfs_init(BlockdevOptionsGluster *gconf,
> +                                           Error **errp)
>  {
>      struct glfs *glfs;
>      int ret;
>      int old_errno;
> -    BlockdevOptionsGluster *gconf;
> -
> -    ret = qemu_gluster_parseuri(&gconf, filename);
> -    if (ret < 0) {
> -        error_setg(errp, "Usage: file=gluster[+transport]://[host[:port]]/"
> -                         "volume/path[?socket=...]");
> -        errno = -ret;
> -        goto out;
> -    }
> +    GlusterServerList *server;
>  
>      glfs = glfs_new(gconf->volume);
>      if (!glfs) {
>          goto out;
>      }
>  
> -    ret = glfs_set_volfile_server(glfs,
> -                                  GlusterTransport_lookup[gconf->server->transport],
> -                                  gconf->server->host, gconf->server->port);
> -    if (ret < 0) {
> -        goto out;
> +    for (server = gconf->server; server; server = server->next) {
> +        ret = glfs_set_volfile_server(glfs,
> +                                      GlusterTransport_lookup[server->value->transport],
> +                                      server->value->host, server->value->port);
> +        if (ret < 0) {
> +            goto out;
> +        }
>      }
>  
>      /*
> @@ -244,10 +287,9 @@ static struct glfs *qemu_gluster_init(BlockdevOptionsGluster **pgconf,
>      ret = glfs_init(glfs);
>      if (ret) {
>          error_setg_errno(errp, errno,
> -                         "Gluster connection failed for host=%s port=%ld "
> -                         "volume=%s path=%s transport=%s", gconf->server->host,
> -                         gconf->server->port, gconf->volume, gconf->path,
> -                         GlusterTransport_lookup[gconf->server->transport]);
> +                         "Gluster connection failed for given hosts "
> +                         "volume:'%s' path:'%s' host1:'%s'", gconf->volume,
> +                         gconf->path, gconf->server->value->host);
>  
>          /* glfs_init sometimes doesn't set errno although docs suggest that */
>          if (errno == 0)
> @@ -255,7 +297,6 @@ static struct glfs *qemu_gluster_init(BlockdevOptionsGluster **pgconf,
>  
>          goto out;
>      }
> -    *pgconf = gconf;
>      return glfs;
>  
>  out:
> @@ -267,6 +308,177 @@ out:
>      return NULL;
>  }
>  
> +static int parse_transport_option(const char *opt)
> +{
> +    int i;
> +
> +    if (!opt) {
> +        /* Set tcp as default */
> +        return GLUSTER_TRANSPORT_TCP;
> +    }
> +
> +    for (i = 0; i < GLUSTER_TRANSPORT_MAX; i++) {
> +        if (!strcmp(opt, GlusterTransport_lookup[i])) {
> +            return i;
> +        }
> +    }
> +
> +    return i;
> +}
> +
> +/*
> + * Convert the json formatted command line into qapi.
> +*/
> +static int qemu_gluster_parsejson(BlockdevOptionsGluster **pgconf,
> +                                  QDict *options)
> +{
> +    QemuOpts *opts;
> +    BlockdevOptionsGluster *gconf = NULL;
> +    GlusterServer *gsconf;
> +    GlusterServerList **prev;
> +    GlusterServerList *curr = NULL;
> +    QDict *backing_options = NULL;
> +    Error *local_err = NULL;
> +    char *str = NULL;
> +    const char *ptr;
> +    size_t num_servers;
> +    int i;
> +
> +    /* create opts info from runtime_json_opts list */
> +    opts = qemu_opts_create(&runtime_json_opts, NULL, 0, &error_abort);
> +    qemu_opts_absorb_qdict(opts, options, &local_err);
> +    if (local_err) {
> +        goto out;
> +    }
> +
> +    gconf = g_new0(BlockdevOptionsGluster, 1);
> +
> +    num_servers = qdict_array_entries(options, GLUSTER_OPT_SERVER_PATTERN);
> +    if (num_servers < 1) {
> +        error_setg(&local_err, "qemu_gluster: please provide 'server' "
> +                               "option with valid fields in array of tuples");
> +        goto out;
> +    }
> +
> +    ptr = qemu_opt_get(opts, GLUSTER_OPT_VOLUME);
> +    if (!ptr) {
> +        error_setg(&local_err, "qemu_gluster: please provide 'volume' "
> +                               "option");
> +        goto out;
> +    }
> +    gconf->volume = g_strdup(ptr);
> +
> +    ptr = qemu_opt_get(opts, GLUSTER_OPT_PATH);
> +    if (!ptr) {
> +        error_setg(&local_err, "qemu_gluster: please provide 'path' "
> +                               "option");
> +        goto out;
> +    }
> +    gconf->path = g_strdup(ptr);
> +
> +    qemu_opts_del(opts);
> +
> +    /* create opts info from runtime_tuple_opts list */
> +    for (i = 0; i < num_servers; i++) {
> +        opts = qemu_opts_create(&runtime_tuple_opts, NULL, 0, &error_abort);
> +         str = g_strdup_printf(GLUSTER_OPT_SERVER_PATTERN"%d.", i);
> +        qdict_extract_subqdict(options, &backing_options, str);
> +        qemu_opts_absorb_qdict(opts, backing_options, &local_err);
> +        if (local_err) {
> +            goto out;
> +        }
> +        qdict_del(backing_options, str);
> +
> +        ptr = qemu_opt_get(opts, GLUSTER_OPT_HOST);
> +        if (!ptr) {
> +            error_setg(&local_err, "qemu_gluster: server.{tuple.%d} "
> +                                   "requires 'host' option", i);
> +            goto out;
> +        }
> +
> +        gsconf = g_new0(GlusterServer, 1);
> +
> +        gsconf->host = g_strdup(ptr);
> +
> +        ptr = qemu_opt_get(opts, GLUSTER_OPT_TRANSPORT);
> +        /* check whether transport type specified in json command is valid */
> +        gsconf->transport = parse_transport_option(ptr);
> +        if (gsconf->transport == GLUSTER_TRANSPORT_MAX) {
> +            error_setg(&local_err, "qemu_gluster: please set 'transport'"
> +                                   " type in tuple.%d as tcp or rdma", i);
> +            g_free(gsconf);

This leaks gsconf->host, unfortunately.

> +            goto out;
> +        }
> +        gsconf->has_transport = true;
> +
> +        gsconf->port = qemu_opt_get_number(opts, GLUSTER_OPT_PORT,
> +                                           GLUSTER_DEFAULT_PORT);
> +        gsconf->has_port = true;
> +
> +        if (gconf->server == NULL) {
> +            gconf->server = g_new0(GlusterServerList, 1);
> +            gconf->server->value = gsconf;
> +            curr = gconf->server;
> +        } else {
> +            prev = &curr->next;
> +            curr = g_new0(GlusterServerList, 1);
> +            curr->value = gsconf;
> +            *prev = curr;

Optional:
This else {} chunk could be reduced to (and drop the *prev variable):

            curr->next = g_new0(GlusterServerList, 1);
            curr->next->value = gsconf;
            curr = curr->next;

I guess just a matter of style, no need to rev for that alone.

> +        }
> +
> +        qemu_opts_del(opts);
> +    }
> +
> +    *pgconf = gconf;
> +    g_free(str);
> +    return 0;
> +
> +out:
> +    error_report_err(local_err);
> +    qemu_opts_del(opts);
> +    qapi_free_BlockdevOptionsGluster(gconf);
> +    if (str) {
> +        qdict_del(backing_options, str);
> +        g_free(str);
> +    }
> +    errno = EINVAL;
> +    return -errno;
> +}
> +
> +static struct glfs *qemu_gluster_init(BlockdevOptionsGluster **gconf,
> +                                      const char *filename,
> +                                      QDict *options, Error **errp)
> +{
> +    int ret;
> +
> +    if (filename) {
> +        ret = qemu_gluster_parseuri(gconf, filename);
> +        if (ret < 0) {
> +            error_setg(errp, "Usage: file=gluster[+transport]://[host[:port]]/"
> +                             "volume/path[?socket=...]");
> +            errno = -ret;
> +            return NULL;
> +        }
> +    } else {
> +        ret = qemu_gluster_parsejson(gconf, options);
> +        if (ret < 0) {
> +            error_setg(errp, "Usage: "
> +                             "-drive driver=qcow2,file.driver=gluster,"
> +                             "file.volume=testvol,file.path=/path/a.qcow2,"
> +                             "file.server.0.host=1.2.3.4,"
> +                             "[file.server.0.port=24007,]"
> +                             "[file.server.0.transport=tcp,]"
> +                             "file.server.1.host=5.6.7.8,"
> +                             "[file.server.1.port=24008,]"
> +                             "[file.server.1.transport=rdma,] ...");
> +            errno = -ret;
> +            return NULL;
> +        }
> +    }
> +
> +    return qemu_gluster_glfs_init(*gconf, errp);
> +}
> +
>  static void qemu_gluster_complete_aio(void *opaque)
>  {
>      GlusterAIOCB *acb = (GlusterAIOCB *)opaque;
> @@ -332,7 +544,7 @@ static int qemu_gluster_open(BlockDriverState *bs, QDict *options,
>      }
>  
>      filename = qemu_opt_get(opts, "filename");
> -    s->glfs = qemu_gluster_init(&gconf, filename, errp);
> +    s->glfs = qemu_gluster_init(&gconf, filename, options, errp);
>      if (!s->glfs) {
>          ret = -errno;
>          goto out;
> @@ -376,7 +588,7 @@ static int qemu_gluster_reopen_prepare(BDRVReopenState *state,
>  
>      qemu_gluster_parse_flags(state->flags, &open_flags);
>  
> -    reop_s->glfs = qemu_gluster_init(&gconf, state->bs->filename, errp);
> +    reop_s->glfs = qemu_gluster_init(&gconf, state->bs->filename, NULL, errp);
>      if (reop_s->glfs == NULL) {
>          ret = -errno;
>          goto exit;
> @@ -508,7 +720,7 @@ static int qemu_gluster_create(const char *filename,
>      int64_t total_size = 0;
>      char *tmp = NULL;
>  
> -    glfs = qemu_gluster_init(&gconf, filename, errp);
> +    glfs = qemu_gluster_init(&gconf, filename, NULL, errp);
>      if (!glfs) {
>          ret = -errno;
>          goto out;
> @@ -725,7 +937,7 @@ static BlockDriver bdrv_gluster = {
>      .format_name                  = "gluster",
>      .protocol_name                = "gluster",
>      .instance_size                = sizeof(BDRVGlusterState),
> -    .bdrv_needs_filename          = true,
> +    .bdrv_needs_filename          = false,
>      .bdrv_file_open               = qemu_gluster_open,
>      .bdrv_reopen_prepare          = qemu_gluster_reopen_prepare,
>      .bdrv_reopen_commit           = qemu_gluster_reopen_commit,
> @@ -752,7 +964,7 @@ static BlockDriver bdrv_gluster_tcp = {
>      .format_name                  = "gluster",
>      .protocol_name                = "gluster+tcp",
>      .instance_size                = sizeof(BDRVGlusterState),
> -    .bdrv_needs_filename          = true,
> +    .bdrv_needs_filename          = false,
>      .bdrv_file_open               = qemu_gluster_open,
>      .bdrv_reopen_prepare          = qemu_gluster_reopen_prepare,
>      .bdrv_reopen_commit           = qemu_gluster_reopen_commit,
> @@ -806,7 +1018,7 @@ static BlockDriver bdrv_gluster_rdma = {
>      .format_name                  = "gluster",
>      .protocol_name                = "gluster+rdma",
>      .instance_size                = sizeof(BDRVGlusterState),
> -    .bdrv_needs_filename          = true,
> +    .bdrv_needs_filename          = false,
>      .bdrv_file_open               = qemu_gluster_open,
>      .bdrv_reopen_prepare          = qemu_gluster_reopen_prepare,
>      .bdrv_reopen_commit           = qemu_gluster_reopen_commit,
> diff --git a/qapi/block-core.json b/qapi/block-core.json
> index bbefe43..c28cb9f 100644
> --- a/qapi/block-core.json
> +++ b/qapi/block-core.json
> @@ -1841,14 +1841,14 @@
>  #
>  # @path:     absolute path to image file in gluster volume
>  #
> -# @servers:  gluster server description
> +# @servers:  one or more gluster server descriptions
>  #
>  # Since: 2.5
>  ##
>  { 'struct': 'BlockdevOptionsGluster',
>    'data': { 'volume': 'str',
>              'path': 'str',
> -            'server': 'GlusterServer' } }
> +            'server': [ 'GlusterServer' ] } }
>  
>  ##
>  # @BlockdevOptions
> -- 
> 2.1.0
> 


* Re: [Qemu-devel] [PATCH 1/4] block/gluster: rename [server, volname, image] -> [host, volume, path]
  2015-11-12 10:22 ` [Qemu-devel] [PATCH 1/4] block/gluster: rename [server, volname, image] -> [host, volume, path] Prasanna Kumar Kalever
@ 2015-11-12 20:28   ` Eric Blake
  0 siblings, 0 replies; 18+ messages in thread
From: Eric Blake @ 2015-11-12 20:28 UTC (permalink / raw)
  To: Prasanna Kumar Kalever, qemu-devel
  Cc: kwolf, pkrempa, stefanha, jcody, deepakcs, bharata, rtalur


On 11/12/2015 03:22 AM, Prasanna Kumar Kalever wrote:
> this patch is very much be meaningful after next patch which adds multiple
> gluster servers support. After that,
> 
> an example is, in  'servers' tuple values we use 'server' variable for key

Awkward line break mid-sentence.  The commit message alone is not a
reason to hold up this patch, so maybe the maintainer can adjust it.

> 'host' in the code, it will be quite messy to have colliding names for
> variables, so to maintain better readability and makes it consistent with other
> existing code as well as the input keys/options, this patch renames the
> following variables

I'd suggest replacing everything up to here with:

A future patch will add support for multiple gluster servers.  Our
existing terminology is a bit unusual in relation to what names are used
by other networked devices, and doesn't map very well to the terminology
we expect to use for multiple servers.  Therefore, rename the following
options:

> 'server'  -> 'host'
> 'image'   -> 'path'
> 'volname' -> 'volume'
> 
> Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
> Reviewed-by: Eric Blake <eblake@redhat.com>
> ---
>  block/gluster.c | 54 +++++++++++++++++++++++++++---------------------------
>  1 file changed, 27 insertions(+), 27 deletions(-)

R-b still stands on this one.

-- 
Eric Blake   eblake redhat com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org




* Re: [Qemu-devel] [PATCH 3/4] block/gluster: using new qapi schema
  2015-11-12 10:22 ` [Qemu-devel] [PATCH 3/4] block/gluster: using new qapi schema Prasanna Kumar Kalever
  2015-11-12 20:00   ` Jeff Cody
@ 2015-11-12 21:16   ` Eric Blake
  2015-11-12 21:37   ` Eric Blake
                     ` (2 subsequent siblings)
  4 siblings, 0 replies; 18+ messages in thread
From: Eric Blake @ 2015-11-12 21:16 UTC (permalink / raw)
  To: Prasanna Kumar Kalever, qemu-devel
  Cc: kwolf, pkrempa, stefanha, jcody, deepakcs, bharata, rtalur


On 11/12/2015 03:22 AM, Prasanna Kumar Kalever wrote:
> this patch adds GlusterConf to qapi/block-core.json

Missing a vNN in the subject line.  I think we're up to v14?  But it
doesn't affect what 'git am' will do.

> 
> Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
> ---
>  block/gluster.c      | 104 +++++++++++++++++++++++++--------------------------
>  qapi/block-core.json |  60 +++++++++++++++++++++++++++--
>  2 files changed, 109 insertions(+), 55 deletions(-)
> 

Modulo Jeff's findings,

> diff --git a/block/gluster.c b/block/gluster.c
> index ededda2..615f28b 100644
> --- a/block/gluster.c
> +++ b/block/gluster.c

> -typedef struct GlusterConf {
> -    char *host;
> -    int port;
> -    char *volume;
> -    char *path;
> -    char *transport;
> -} GlusterConf;
> -
> -

So this is the struct being replaced by qapi BlockdevOptionsGluster.
/me jumps ahead to [1] in my review, before continuing here...

I'm back. Looks like your qapi struct matches this nicely, with the
possible exception of what happens if we try to avoid churn by
using/enforcing a 1-element array now rather than converting to array in
patch 4.

> @@ -143,8 +127,10 @@ static int parse_volume_options(GlusterConf *gconf, char *path)
>   * file=gluster+unix:///testvol/dir/a.img?socket=/tmp/glusterd.socket
>   * file=gluster+rdma://1.2.3.4:24007/testvol/a.img
>   */
> -static int qemu_gluster_parseuri(GlusterConf *gconf, const char *filename)
> +static int qemu_gluster_parseuri(BlockdevOptionsGluster **pgconf,
> +                                 const char *filename)

I'm not sure from looking at just the signature why you changed from
*gconf to **pgconf; maybe that sort of conversion would have been worth
mentioning in the commit message (a good rule of thumb - if the change
isn't blatantly obvious, then calling it out in the commit message will
prepare reviewers for it).

> @@ -190,13 +180,23 @@ static int qemu_gluster_parseuri(GlusterConf *gconf, const char *filename)
>              ret = -EINVAL;
>              goto out;
>          }
> -        gconf->host = g_strdup(qp->p[0].value);
> +        gconf->server->host = g_strdup(qp->p[0].value);
>      } else {
> -        gconf->host = g_strdup(uri->server ? uri->server : "localhost");
> -        gconf->port = uri->port;
> +        gconf->server->host = g_strdup(uri->server ? uri->server : "localhost");
> +        if (uri->port) {
> +            gconf->server->port = uri->port;
> +        } else {
> +            gconf->server->port = GLUSTER_DEFAULT_PORT;
> +        }
> +        gconf->server->has_port = true;
>      }
>  
> +    *pgconf = gconf;

Okay, now I see where the change in signature comes into play - you want
to return a new allocation to the user, but only on success.  But I'm
still not necessarily convinced that you need it.  See more at [3] below.

> +
>  out:
> +    if (ret < 0) {
> +        qapi_free_BlockdevOptionsGluster(gconf);
> +    }
>      if (qp) {
>          query_params_free(qp);
>      }
> @@ -204,14 +204,15 @@ out:
>      return ret;
>  }

Okay, this parseuri conversion is sane.  It will need tweaking in patch
4 to deal with gconf->server becoming a list rather than a single
server, but as long as both patches go in, we should be okay.

>  
> -static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
> -                                      Error **errp)
> +static struct glfs *qemu_gluster_init(BlockdevOptionsGluster **pgconf,
> +                                      const char *filename, Error **errp)
>  {
> -    struct glfs *glfs = NULL;
> +    struct glfs *glfs;

Jeff already spotted that the change here is spurious.

>      int ret;
>      int old_errno;
> +    BlockdevOptionsGluster *gconf;
>  
> -    ret = qemu_gluster_parseuri(gconf, filename);
> +    ret = qemu_gluster_parseuri(&gconf, filename);
>      if (ret < 0) {
>          error_setg(errp, "Usage: file=gluster[+transport]://[host[:port]]/"
>                           "volume/path[?socket=...]");
> @@ -224,8 +225,9 @@ static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
>          goto out;
>      }
>  
> -    ret = glfs_set_volfile_server(glfs, gconf->transport, gconf->host,
> -            gconf->port);
> +    ret = glfs_set_volfile_server(glfs,
> +                                  GlusterTransport_lookup[gconf->server->transport],

Line longer than 80 characters; I might have used an intermediate const
char * variable to cut down on the length. But as long as it gets past
scripts/checkpatch.pl, I won't insist on a reformat.

> +                                  gconf->server->host, gconf->server->port);

Ouch - since you aren't validating that gconf->server->port fits in 16
bits, you may be passing something so large that it silently wraps around.
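
To make the wraparound failure concrete, a sketch of the kind of guard that
would catch it (the helper name is invented; as noted elsewhere in the review,
the in-tree fix would more likely be choosing a narrower QAPI type so the
parser rejects out-of-range values itself):

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

/* Hypothetical range check: a plain 'int' port from the parser can hold
 * values above 65535, which silently wrap when narrowed to 16 bits at
 * the glfs_set_volfile_server() call site. */
static int check_gluster_port(long port)
{
    if (port < 1 || port > UINT16_MAX) {
        return -EINVAL;
    }
    return 0;
}
```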

>      if (ret < 0) {
>          goto out;
>      }
> @@ -242,10 +244,10 @@ static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
>      ret = glfs_init(glfs);
>      if (ret) {
>          error_setg_errno(errp, errno,
> -                         "Gluster connection failed for host=%s port=%d "
> -                         "volume=%s path=%s transport=%s", gconf->host,
> -                         gconf->port, gconf->volume, gconf->path,
> -                         gconf->transport);
> +                         "Gluster connection failed for host=%s port=%ld "

Change to %ld is due to your choice of 'int' for port; had we gone with
'uint16', you could keep %d, and would be slightly better off (the
parser would validate that things fit in 16 bits, rather than you having
to worry about wraparound).
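
The alternative described here would look roughly like this in the schema
(a sketch of the suggestion, not a committed definition):

```json
{ 'struct': 'GlusterServer',
  'data': { 'host': 'str',
            '*port': 'uint16',
            '*transport': 'GlusterTransport' } }
```

With 'uint16' the QAPI parser enforces the 0-65535 range, so the C code can
keep using %d and needs no separate bounds check.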

> +                         "volume=%s path=%s transport=%s", gconf->server->host,
> +                         gconf->server->port, gconf->volume, gconf->path,
> +                         GlusterTransport_lookup[gconf->server->transport]);
>  
>          /* glfs_init sometimes doesn't set errno although docs suggest that */
>          if (errno == 0)
> @@ -253,6 +255,7 @@ static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
>  
>          goto out;
>      }
> +    *pgconf = gconf;
>      return glfs;
>  
>  out:

If I'm reading this right, you leak gconf on failure.  The old code
took gconf as a parameter and modified it in place (so it won't be
freeing anything); the new code creates a local gconf, and then passes
its address to the parseuri helper which allocates, so you must have a
matching free on all code paths that do not pass gconf on to *pgconf.

But if you hadn't changed the signature, then this would be no different
to what it was pre-patch.

> @@ -315,7 +318,7 @@ static int qemu_gluster_open(BlockDriverState *bs, QDict *options,
>      BDRVGlusterState *s = bs->opaque;
>      int open_flags = 0;
>      int ret = 0;
> -    GlusterConf *gconf = g_new0(GlusterConf, 1);
> +    BlockdevOptionsGluster *gconf = NULL;

[3] The old code allocates storage, then lets the helpers populate it.
The new code relies on the helpers to allocate storage.  But what was
wrong with the old style?  Why can't we just
g_new0(BlockdevOptionsGluster, 1), and pass that gconf to all the
helpers to be modified in place?

See, because you didn't mention it in the commit message, I have to
guess at things that aren't obvious.

>      QemuOpts *opts;
>      Error *local_err = NULL;
>      const char *filename;
> @@ -329,8 +332,7 @@ static int qemu_gluster_open(BlockDriverState *bs, QDict *options,
>      }
>  
>      filename = qemu_opt_get(opts, "filename");
> -
> -    s->glfs = qemu_gluster_init(gconf, filename, errp);
> +    s->glfs = qemu_gluster_init(&gconf, filename, errp);

Why the loss of the blank line?

>      if (!s->glfs) {
>          ret = -errno;
>          goto out;
> @@ -345,7 +347,7 @@ static int qemu_gluster_open(BlockDriverState *bs, QDict *options,
>  
>  out:
>      qemu_opts_del(opts);
> -    qemu_gluster_gconf_free(gconf);
> +    qapi_free_BlockdevOptionsGluster(gconf);
>      if (!ret) {
>          return ret;
>      }
> @@ -363,7 +365,7 @@ static int qemu_gluster_reopen_prepare(BDRVReopenState *state,
>  {
>      int ret = 0;
>      BDRVGlusterReopenState *reop_s;
> -    GlusterConf *gconf = NULL;
> +    BlockdevOptionsGluster *gconf = NULL;
>      int open_flags = 0;
>  
>      assert(state != NULL);
> @@ -374,9 +376,7 @@ static int qemu_gluster_reopen_prepare(BDRVReopenState *state,
>  
>      qemu_gluster_parse_flags(state->flags, &open_flags);
>  
> -    gconf = g_new0(GlusterConf, 1);
> -
> -    reop_s->glfs = qemu_gluster_init(gconf, state->bs->filename, errp);
> +    reop_s->glfs = qemu_gluster_init(&gconf, state->bs->filename, errp);

Here, the changed signature merely changes who is allocating gconf.

>      if (reop_s->glfs == NULL) {
>          ret = -errno;
>          goto exit;
> @@ -391,7 +391,7 @@ static int qemu_gluster_reopen_prepare(BDRVReopenState *state,
>  
>  exit:
>      /* state->opaque will be freed in either the _abort or _commit */
> -    qemu_gluster_gconf_free(gconf);
> +    qapi_free_BlockdevOptionsGluster(gconf);
>      return ret;
>  }
>  
> @@ -500,15 +500,15 @@ static inline int qemu_gluster_zerofill(struct glfs_fd *fd, int64_t offset,
>  static int qemu_gluster_create(const char *filename,
>                                 QemuOpts *opts, Error **errp)
>  {
> +    BlockdevOptionsGluster *gconf = NULL;
>      struct glfs *glfs;
>      struct glfs_fd *fd;
>      int ret = 0;
>      int prealloc = 0;
>      int64_t total_size = 0;
>      char *tmp = NULL;
> -    GlusterConf *gconf = g_new0(GlusterConf, 1);

I might have made the insertion and deletion at the same line, rather
than floating the declaration up above the others.  Minor style point.

>  
> -    glfs = qemu_gluster_init(gconf, filename, errp);
> +    glfs = qemu_gluster_init(&gconf, filename, errp);
>      if (!glfs) {
>          ret = -errno;
>          goto out;
> @@ -548,7 +548,7 @@ static int qemu_gluster_create(const char *filename,
>      }
>  out:
>      g_free(tmp);
> -    qemu_gluster_gconf_free(gconf);
> +    qapi_free_BlockdevOptionsGluster(gconf);
>      if (glfs) {
>          glfs_fini(glfs);
>      }

Okay, we're at the end.  Down to [2] for the conclusion.

> diff --git a/qapi/block-core.json b/qapi/block-core.json
> index 425fdab..bbefe43 100644
> --- a/qapi/block-core.json
> +++ b/qapi/block-core.json

[1] I'm reviewing the interface first, then the implementation

> @@ -1375,13 +1375,14 @@
>  # Drivers that are supported in block device operations.
>  #
>  # @host_device, @host_cdrom: Since 2.1
> +# @gluster: Since 2.5
>  #

Not your fault, but this list, and the list for BlockDeviceInfo, are
similar to one another yet not identical.  Doesn't hold up this patch,
but it would be nice to have a unified story some day.

> +##
> +# @GlusterServer
> +#
> +# Details for connecting to a gluster server
> +#
> +# @host:       host address (hostname/ipv4/ipv6 addresses)
> +#
> +# @port:       #optional port number on which glusterd is listening
> +#               (default 24007)
> +#
> +# @transport:  #optional transport type used to connect to gluster management
> +#               daemon (default 'tcp')
> +#
> +# Since: 2.5
> +##
> +{ 'struct': 'GlusterServer',
> +  'data': { 'host': 'str',
> +            '*port': 'int',
> +            '*transport': 'GlusterTransport' } }

Looks reasonable.  It seems like we may need similar types for other
networked devices, but hopefully we aren't painting ourselves into a
corner.  'int' is rather large for port, if you wanted to go with
'uint16' instead.
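Concretely, narrowing the type would be a one-word change to the struct
(hypothetical variant of the schema quoted above; 'uint16' is one of
QAPI's built-in integer types):

```
{ 'struct': 'GlusterServer',
  'data': { 'host': 'str',
            '*port': 'uint16',
            '*transport': 'GlusterTransport' } }
```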

> +
> +##
> +# @BlockdevOptionsGluster
> +#
> +# Driver specific block device options for Gluster
> +#
> +# @volume:   name of gluster volume where VM image resides
> +#
> +# @path:     absolute path to image file in gluster volume
> +#
> +# @servers:  gluster server description
> +#
> +# Since: 2.5
> +##
> +{ 'struct': 'BlockdevOptionsGluster',
> +  'data': { 'volume': 'str',
> +            'path': 'str',
> +            'server': 'GlusterServer' } }

If this patch goes into 2.5, but 4/4 does not, then you have painted
yourself into a corner.  That's because 4/4 changes this to
'server':['GlusterServer'], which is not a compatible change.  We can
get away with it as long as both patches go into the same release (and
therefore I won't complain too loudly), but I really would have liked to
see this patch use/enforce a one-element array rather than having to
retouch things to convert from single element to array in patch 4.
(That is, your split wasn't quite done along the lines I had envisioned
- but it's not necessarily a show-stopper if patch 4 fixes things.)

[2] Overall, I think the remaining items might be fixable by a
maintainer, rather than forcing a delay due to requiring a respin.

Jeff, if you are comfortable making changes to fix at least the memleak,
or more invasively to undo the change from *gconf to **pgconf in the
various signatures, you can add:
Reviewed-by: Eric Blake <eblake@redhat.com>

Or, we can wait for a v15, but that puts getting this into 2.5 at more risk.

-- 
Eric Blake   eblake redhat com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 604 bytes --]

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [Qemu-devel] [PATCH 3/4] block/gluster: using new qapi schema
  2015-11-12 10:22 ` [Qemu-devel] [PATCH 3/4] block/gluster: using new qapi schema Prasanna Kumar Kalever
  2015-11-12 20:00   ` Jeff Cody
  2015-11-12 21:16   ` Eric Blake
@ 2015-11-12 21:37   ` Eric Blake
  2015-11-12 22:44   ` Eric Blake
  2015-11-13  8:04   ` Markus Armbruster
  4 siblings, 0 replies; 18+ messages in thread
From: Eric Blake @ 2015-11-12 21:37 UTC (permalink / raw)
  To: Prasanna Kumar Kalever, qemu-devel
  Cc: kwolf, pkrempa, stefanha, jcody, deepakcs, bharata, rtalur

[-- Attachment #1: Type: text/plain, Size: 2539 bytes --]

On 11/12/2015 03:22 AM, Prasanna Kumar Kalever wrote:
> this patch adds GlusterConf to qapi/block-core.json
> 
> Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
> ---
>  block/gluster.c      | 104 +++++++++++++++++++++++++--------------------------
>  qapi/block-core.json |  60 +++++++++++++++++++++++++++--
>  2 files changed, 109 insertions(+), 55 deletions(-)

One more comment:

> -static int qemu_gluster_parseuri(GlusterConf *gconf, const char *filename)
> +static int qemu_gluster_parseuri(BlockdevOptionsGluster **pgconf,
> +                                 const char *filename)
>  {
> +    BlockdevOptionsGluster *gconf;
>      URI *uri;
>      QueryParams *qp = NULL;
>      bool is_unix = false;
> @@ -155,20 +141,24 @@ static int qemu_gluster_parseuri(GlusterConf *gconf, const char *filename)
>          return -EINVAL;

If we hit this early return, then *pgconf was never assigned...


> +static struct glfs *qemu_gluster_init(BlockdevOptionsGluster **pgconf,
> +                                      const char *filename, Error **errp)
>  {
> -    struct glfs *glfs = NULL;
> +    struct glfs *glfs;
>      int ret;
>      int old_errno;
> +    BlockdevOptionsGluster *gconf;

but here, gconf is uninitialized,

>  
> -    ret = qemu_gluster_parseuri(gconf, filename);
> +    ret = qemu_gluster_parseuri(&gconf, filename);
>      if (ret < 0) {
>          error_setg(errp, "Usage: file=gluster[+transport]://[host[:port]]/"
>                           "volume/path[?socket=...]");

which means we can goto out with it uninitialized...

> @@ -224,8 +225,9 @@ static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
>          goto out;
>      }
>  
> -    ret = glfs_set_volfile_server(glfs, gconf->transport, gconf->host,
> -            gconf->port);
> +    ret = glfs_set_volfile_server(glfs,
> +                                  GlusterTransport_lookup[gconf->server->transport],
> +                                  gconf->server->host, gconf->server->port);
>      if (ret < 0) {
>          goto out;
>      }

...vs. here where we can goto out with it initialized.

So whatever solution you use to plug the leak must be careful to not
free uninitialized memory.  Easiest solution - initialize gconf to NULL
before qemu_gluster_parseuri (or else go back to a *gconf parameter
rather than **pgconf).

-- 
Eric Blake   eblake redhat com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org



* Re: [Qemu-devel] [PATCH 4/4] block/gluster: add support for multiple gluster servers
  2015-11-12 10:22 ` [Qemu-devel] [PATCH 4/4] block/gluster: add support for multiple gluster servers Prasanna Kumar Kalever
  2015-11-12 20:00   ` Jeff Cody
@ 2015-11-12 22:36   ` Eric Blake
  2016-02-04 13:22     ` Kevin Wolf
  1 sibling, 1 reply; 18+ messages in thread
From: Eric Blake @ 2015-11-12 22:36 UTC (permalink / raw)
  To: Prasanna Kumar Kalever, qemu-devel
  Cc: kwolf, pkrempa, stefanha, jcody, deepakcs, bharata, rtalur


On 11/12/2015 03:22 AM, Prasanna Kumar Kalever wrote:
> This patch adds a way to specify multiple volfile servers to the gluster
> block backend of QEMU with tcp|rdma transport types and their port numbers.
> 

> This patch gives a mechanism to provide all the server addresses, which are in
> replica set, so in case host1 is down VM can still boot from any of the
> active hosts.
> 
> This is equivalent to the backup-volfile-servers option supported by
> mount.glusterfs (FUSE way of mounting gluster volume)
> 
> Credits: Sincere thanks to Kevin Wolf <kwolf@redhat.com> and
> "Deepak C Shetty" <deepakcs@redhat.com> for inputs and all their support
> 
> Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
> ---
>  block/gluster.c      | 288 ++++++++++++++++++++++++++++++++++++++++++++-------
>  qapi/block-core.json |   4 +-
>  2 files changed, 252 insertions(+), 40 deletions(-)

All right - the diffstat is smaller this time around (288 is nicer than
468 lines changed in v13).  There's always a psychological barrier to
reviewing large patches, and breaking things into bite-sized chunks
helps even if the same amount of work is done overall.

> 
> diff --git a/block/gluster.c b/block/gluster.c
> index 615f28b..ba209cf 100644
> --- a/block/gluster.c
> +++ b/block/gluster.c
> @@ -12,6 +12,13 @@
>  #include "qemu/uri.h"
>  
>  #define GLUSTER_OPT_FILENAME        "filename"
> +#define GLUSTER_OPT_VOLUME          "volume"
> +#define GLUSTER_OPT_PATH            "path"
> +#define GLUSTER_OPT_HOST            "host"
> +#define GLUSTER_OPT_PORT            "port"
> +#define GLUSTER_OPT_TRANSPORT       "transport"
> +#define GLUSTER_OPT_SERVER_PATTERN  "server."
> +
>  #define GLUSTER_DEFAULT_PORT        24007

Once again, I'm jumping to the interface first [1]


> @@ -131,6 +178,7 @@ static int qemu_gluster_parseuri(BlockdevOptionsGluster **pgconf,
>                                   const char *filename)
>  {
>      BlockdevOptionsGluster *gconf;
> +    GlusterServer *gsconf;
>      URI *uri;
>      QueryParams *qp = NULL;
>      bool is_unix = false;
> @@ -142,23 +190,24 @@ static int qemu_gluster_parseuri(BlockdevOptionsGluster **pgconf,
>      }
>  
>      gconf = g_new0(BlockdevOptionsGluster, 1);
> -    gconf->server = g_new0(GlusterServer, 1);
> +    gconf->server = g_new0(GlusterServerList, 1);
> +    gconf->server->value = gsconf = g_new0(GlusterServer, 1);
>  
>      /* transport */
>      if (!uri->scheme || !strcmp(uri->scheme, "gluster")) {
> -        gconf->server->transport = GLUSTER_TRANSPORT_TCP;
> +        gsconf->transport = GLUSTER_TRANSPORT_TCP;

Most of the changes here in parseuri could have been in patch 3/4 if we
weren't churning on the qapi definition.  But looks like your conversion
here is correct.

> -static struct glfs *qemu_gluster_init(BlockdevOptionsGluster **pgconf,
> -                                      const char *filename, Error **errp)
> +static struct glfs *qemu_gluster_glfs_init(BlockdevOptionsGluster *gconf,
> +                                           Error **errp)
>  {

I might have split the refactoring of qemu_gluster_glfs_init() into its
own patch, but it's not the end of the world the way it was done here.

>      struct glfs *glfs;
>      int ret;
>      int old_errno;
> -    BlockdevOptionsGluster *gconf;
> -
> -    ret = qemu_gluster_parseuri(&gconf, filename);
> -    if (ret < 0) {
> -        error_setg(errp, "Usage: file=gluster[+transport]://[host[:port]]/"
> -                         "volume/path[?socket=...]");
> -        errno = -ret;
> -        goto out;
> -    }
> +    GlusterServerList *server;
>  
>      glfs = glfs_new(gconf->volume);
>      if (!glfs) {
>          goto out;
>      }
>  
> -    ret = glfs_set_volfile_server(glfs,
> -                                  GlusterTransport_lookup[gconf->server->transport],
> -                                  gconf->server->host, gconf->server->port);
> -    if (ret < 0) {
> -        goto out;
> +    for (server = gconf->server; server; server = server->next) {

I still wonder if gconf->servers (and therefore servers.0, servers.1 in
the command line, instead of server.0, server.1) would have been a
better name for the list, but I don't know if it is worth repainting the
bikeshed at this point in time.  On the other hand, it's user-visible,
so once it gets released, we're stuck with the name, but up until then,
we can do a followup patch if anyone else has a strong opinion.

> +        ret = glfs_set_volfile_server(glfs,
> +                                      GlusterTransport_lookup[server->value->transport],
> +                                      server->value->host, server->value->port);

I asked in v13 if all initializations set the optional transport and
port.  See [3] below

> +        if (ret < 0) {
> +            goto out;
> +        }
>      }
>  
>      /*
> @@ -244,10 +287,9 @@ static struct glfs *qemu_gluster_init(BlockdevOptionsGluster **pgconf,
>      ret = glfs_init(glfs);
>      if (ret) {
>          error_setg_errno(errp, errno,
> -                         "Gluster connection failed for host=%s port=%ld "
> -                         "volume=%s path=%s transport=%s", gconf->server->host,
> -                         gconf->server->port, gconf->volume, gconf->path,
> -                         GlusterTransport_lookup[gconf->server->transport]);
> +                         "Gluster connection failed for given hosts "
> +                         "volume:'%s' path:'%s' host1:'%s'", gconf->volume,

Reads a bit awkwardly: why "host1"?  I might have done (pseudocode):

"Gluster connection for '%s/%s' failed for host '%s'", volume, path, host

but not worth a respin by itself.

>  
> +static int parse_transport_option(const char *opt)
> +{
> +    int i;
> +
> +    if (!opt) {
> +        /* Set tcp as default */
> +        return GLUSTER_TRANSPORT_TCP;
> +    }
> +
> +    for (i = 0; i < GLUSTER_TRANSPORT_MAX; i++) {
> +        if (!strcmp(opt, GlusterTransport_lookup[i])) {
> +            return i;
> +        }
> +    }
> +
> +    return i;
> +}

This function feels like it could be replaced by qapi_enum_parse() with
appropriate arguments.  See [2] below
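For comparison, qapi_enum_parse() (include/qapi/util.h) takes, if I
recall correctly, the lookup table, the string, a max value, a default,
and an Error **, which is exactly the shape of the open-coded helper.
A stand-alone sketch of the equivalent logic (stand-in names, Error **
plumbing omitted):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Mirrors the patch's GlusterTransport enum and its generated
 * lookup table. */
enum { GLUSTER_TRANSPORT_TCP, GLUSTER_TRANSPORT_UNIX,
       GLUSTER_TRANSPORT_RDMA, GLUSTER_TRANSPORT_MAX };

static const char *const GlusterTransport_lookup[] = {
    "tcp", "unix", "rdma", NULL
};

/* Generic string-to-enum conversion: NULL input yields the default,
 * an unmatched string yields 'max' (the caller reports the error),
 * matching the behavior of parse_transport_option(). */
static int enum_parse(const char *const lookup[], const char *buf,
                      int max, int def)
{
    int i;

    if (!buf) {
        return def;
    }
    for (i = 0; i < max; i++) {
        if (!strcmp(buf, lookup[i])) {
            return i;
        }
    }
    return max;
}
```

Using the shared helper would also make the error message consistent
with other enum-parsing failures.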

> +
> +/*
> + * Convert the json formatted command line into qapi.
> +*/
> +static int qemu_gluster_parsejson(BlockdevOptionsGluster **pgconf,
> +                                  QDict *options)
> +{
> +    QemuOpts *opts;
> +    BlockdevOptionsGluster *gconf = NULL;
> +    GlusterServer *gsconf;
> +    GlusterServerList **prev;
> +    GlusterServerList *curr = NULL;
> +    QDict *backing_options = NULL;
> +    Error *local_err = NULL;
> +    char *str = NULL;
> +    const char *ptr;
> +    size_t num_servers;
> +    int i;
> +
> +    /* create opts info from runtime_json_opts list */
> +    opts = qemu_opts_create(&runtime_json_opts, NULL, 0, &error_abort);
> +    qemu_opts_absorb_qdict(opts, options, &local_err);
> +    if (local_err) {
> +        goto out;
> +    }
> +
> +    gconf = g_new0(BlockdevOptionsGluster, 1);
> +
> +    num_servers = qdict_array_entries(options, GLUSTER_OPT_SERVER_PATTERN);
> +    if (num_servers < 1) {
> +        error_setg(&local_err, "qemu_gluster: please provide 'server' "
> +                               "option with valid fields in array of tuples");

I'm not convinced the 'qemu_gluster: ' prefix on the errors is
necessary, but to date I haven't applied the patch to actually try
triggering an error to see what gets printed.  At any rate, even if the
message could use some polish, it is at least accurate, and shouldn't
hold up getting the initial patch in.

> +        goto out;
> +    }
> +
> +    ptr = qemu_opt_get(opts, GLUSTER_OPT_VOLUME);
> +    if (!ptr) {
> +        error_setg(&local_err, "qemu_gluster: please provide 'volume' "
> +                               "option");
> +        goto out;
> +    }
> +    gconf->volume = g_strdup(ptr);
> +
> +    ptr = qemu_opt_get(opts, GLUSTER_OPT_PATH);
> +    if (!ptr) {
> +        error_setg(&local_err, "qemu_gluster: please provide 'path' "
> +                               "option");
> +        goto out;
> +    }
> +    gconf->path = g_strdup(ptr);
> +
> +    qemu_opts_del(opts);
> +
> +    /* create opts info from runtime_tuple_opts list */
> +    for (i = 0; i < num_servers; i++) {
> +        opts = qemu_opts_create(&runtime_tuple_opts, NULL, 0, &error_abort);
> +         str = g_strdup_printf(GLUSTER_OPT_SERVER_PATTERN"%d.", i);

Indentation is off.

str is allocated anew on each iteration but only freed after the loop,
so every allocation except the last is leaked.
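The fix is to release str at the bottom of each iteration rather than
once after the loop.  A stand-alone sketch of the intended loop shape
(hypothetical names; plain malloc/free standing in for
g_strdup_printf()/g_free()):

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Builds the "server.%d." key prefix for one tuple. */
static char *make_prefix(int i)
{
    char *s = malloc(32);
    snprintf(s, 32, "server.%d.", i);
    return s;
}

/* One allocation per iteration, freed before the next; the v14 code
 * overwrites str each time and frees only the final allocation. */
static int process_servers(int num_servers)
{
    int i;

    for (i = 0; i < num_servers; i++) {
        char *str = make_prefix(i);
        /* ... qdict_extract_subqdict(options, &backing_options, str);
         *     qemu_opts_absorb_qdict(opts, backing_options, &err);
         *     qdict_del(backing_options, str); ... */
        free(str);   /* freed every iteration, not just the last */
    }
    return 0;
}
```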

> +        qdict_extract_subqdict(options, &backing_options, str);
> +        qemu_opts_absorb_qdict(opts, backing_options, &local_err);
> +        if (local_err) {
> +            goto out;
> +        }
> +        qdict_del(backing_options, str);
> +
> +        ptr = qemu_opt_get(opts, GLUSTER_OPT_HOST);
> +        if (!ptr) {
> +            error_setg(&local_err, "qemu_gluster: server.{tuple.%d} "
> +                                   "requires 'host' option", i);
> +            goto out;
> +        }
> +
> +        gsconf = g_new0(GlusterServer, 1);
> +
> +        gsconf->host = g_strdup(ptr);
> +
> +        ptr = qemu_opt_get(opts, GLUSTER_OPT_TRANSPORT);
> +        /* check whether transport type specified in json command is valid */
> +        gsconf->transport = parse_transport_option(ptr);
> +        if (gsconf->transport == GLUSTER_TRANSPORT_MAX) {
> +            error_setg(&local_err, "qemu_gluster: please set 'transport'"
> +                                   " type in tuple.%d as tcp or rdma", i);

[2] Yep, as this looks to be the only caller of parse_transport_option,
I really wonder if using qapi_enum_parse() here would be better to make
your error message more consistent with other failed string-to-enum
conversions.

> +            g_free(gsconf);

Jeff pointed out the leak; fix it by using
qapi_free_GlusterServer(gsconf) instead of g_free().

> +            goto out;
> +        }
> +        gsconf->has_transport = true;
> +
> +        gsconf->port = qemu_opt_get_number(opts, GLUSTER_OPT_PORT,
> +                                           GLUSTER_DEFAULT_PORT);
> +        gsconf->has_port = true;

[3] Okay, so all initialization paths from the command line do set the
optional parameters.  And blockdev-add does its QMP magic by converting
to a QDict as if it had gone through the command line, so even that
looks like it is safe.  Phew.  It would still be nice if qapi supported
the notion of default values, but that won't land before 2.6, so your
approach is fine in the meantime.

> +
> +        if (gconf->server == NULL) {
> +            gconf->server = g_new0(GlusterServerList, 1);
> +            gconf->server->value = gsconf;
> +            curr = gconf->server;
> +        } else {
> +            prev = &curr->next;
> +            curr = g_new0(GlusterServerList, 1);
> +            curr->value = gsconf;
> +            *prev = curr;
> +        }
> +
> +        qemu_opts_del(opts);
> +    }
> +
> +    *pgconf = gconf;
> +    g_free(str);
> +    return 0;
> +
> +out:
> +    error_report_err(local_err);
> +    qemu_opts_del(opts);
> +    qapi_free_BlockdevOptionsGluster(gconf);
> +    if (str) {
> +        qdict_del(backing_options, str);
> +        g_free(str);
> +    }
> +    errno = EINVAL;
> +    return -errno;
> +}

Once the leaks are fixed, it looks like you have things working here.

> +
> +static struct glfs *qemu_gluster_init(BlockdevOptionsGluster **gconf,
> +                                      const char *filename,
> +                                      QDict *options, Error **errp)
> +{
> +    int ret;
> +
> +    if (filename) {
> +        ret = qemu_gluster_parseuri(gconf, filename);
> +        if (ret < 0) {
> +            error_setg(errp, "Usage: file=gluster[+transport]://[host[:port]]/"
> +                             "volume/path[?socket=...]");

Hmm, just noticing this now, even though this error message is just code
motion.  It looks like the optional [?socket=...] part of a URI is only
important when using gluster+unix (is it silently ignored otherwise?).
And if it is used, you are then assigning it to the host field?

I almost wonder if GlusterServer should be a discriminated union.  That
is, in qapi-pseudocode (won't actually compile yet, because it depends
on features that I have queued for 2.6):

{ 'union':'GlusterServer', 'base':{'transport':'GlusterTransport'},
  'discriminator':'transport', 'data':{
    'tcp':{'host':'str', '*port':'uint16'},
    'unix':{'socket':'str'},
    'rdma':{...} } }

Hmm. Qapi doesn't (yet) allow for an optional discriminator (where the
omission of the discriminator picks a default branch) - another RFE for
my qapi work for 2.6.

Command-line wise, this would mean you could do in JSON:

'servers':[{'transport':'tcp', 'host':'foo'},
           {'transport':'unix', 'socket':'/path/to/bar'},
           {'host':'blah'}]

where the third entry defaults to transport tcp.

If we think that description is better than what we proposed in 3/4,
then it's really late to be adding it now, especially since (without
qapi changes) we'd have a mandatory rather than optional 'transport';
but worse if we commit to the interface of 3/4 and don't get the
conversion made in time to the nicer interface.  At least it's okay from
back-compat perspective to make a mandatory field become optional in
later releases.

If it were just gluster code I was worried about, then I could live with
the interface proposal.  But since seeing this error message is making
me double-guess the interface correctness, and that will have an impact
on libvirt, I'm starting to have doubts on what it means for qemu 2.5.

> +            errno = -ret;
> +            return NULL;
> +        }
> +    } else {
> +        ret = qemu_gluster_parsejson(gconf, options);
> +        if (ret < 0) {
> +            error_setg(errp, "Usage: "

Rather broad error message; might be nice for a later patch to change
parsejson() to take an errp parameter, so it can complain about the
particular problem. But that's internal to gluster, and doesn't affect
the overall interface.


> diff --git a/qapi/block-core.json b/qapi/block-core.json
> index bbefe43..c28cb9f 100644
> --- a/qapi/block-core.json

[1] Interface review

> +++ b/qapi/block-core.json
> @@ -1841,14 +1841,14 @@
>  #
>  # @path:     absolute path to image file in gluster volume
>  #
> -# @servers:  gluster server description
> +# @servers:  one or more gluster server descriptions
>  #
>  # Since: 2.5
>  ##
>  { 'struct': 'BlockdevOptionsGluster',
>    'data': { 'volume': 'str',
>              'path': 'str',
> -            'server': 'GlusterServer' } }
> +            'server': [ 'GlusterServer' ] } }

As mentioned in 3/4, I would have preferred to have just enforced a
one-element array then, and no interface change here. But it's not the
end of the world the way you split it, and it looks like the patches
will go in together or not at all, so we won't see the churn in released
versions.

Back up to the implementation, before returning here.

Overall summary:
I think most of the problems that have been pointed out could be fixed
by a maintainer (Jeff, up to you if you want to require a v15), but I'm
very worried that we still haven't quite nailed the interface of
GlusterServer.  If I knew for sure that we had time to clean up any mess
before 2.5 bakes things in, then you can add:
Reviewed-by: Eric Blake <eblake@redhat.com>

-- 
Eric Blake   eblake redhat com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org



* Re: [Qemu-devel] [PATCH 3/4] block/gluster: using new qapi schema
  2015-11-12 10:22 ` [Qemu-devel] [PATCH 3/4] block/gluster: using new qapi schema Prasanna Kumar Kalever
                     ` (2 preceding siblings ...)
  2015-11-12 21:37   ` Eric Blake
@ 2015-11-12 22:44   ` Eric Blake
  2015-11-13  8:04   ` Markus Armbruster
  4 siblings, 0 replies; 18+ messages in thread
From: Eric Blake @ 2015-11-12 22:44 UTC (permalink / raw)
  To: Prasanna Kumar Kalever, qemu-devel
  Cc: kwolf, pkrempa, stefanha, jcody, deepakcs, bharata, rtalur


On 11/12/2015 03:22 AM, Prasanna Kumar Kalever wrote:
> this patch adds GlusterConf to qapi/block-core.json
> 
> Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
> ---
>  block/gluster.c      | 104 +++++++++++++++++++++++++--------------------------
>  qapi/block-core.json |  60 +++++++++++++++++++++++++++--
>  2 files changed, 109 insertions(+), 55 deletions(-)
> 

Pointing it out here for completeness, even though I first stumbled on
it when reviewing 4/4:

> @@ -190,13 +180,23 @@ static int qemu_gluster_parseuri(GlusterConf *gconf, const char *filename)
>              ret = -EINVAL;
>              goto out;
>          }
> -        gconf->host = g_strdup(qp->p[0].value);
> +        gconf->server->host = g_strdup(qp->p[0].value);

This is abusing the 'host' field of GlusterServer to track a socket
path, and ignores the fact that port is meaningless for a
gluster+unix:// connection.

> @@ -224,8 +225,9 @@ static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
>          goto out;
>      }
>  
> -    ret = glfs_set_volfile_server(glfs, gconf->transport, gconf->host,
> -            gconf->port);
> +    ret = glfs_set_volfile_server(glfs,
> +                                  GlusterTransport_lookup[gconf->server->transport],
> +                                  gconf->server->host, gconf->server->port);

At least gluster itself has the same overloaded abuse of terminology;
I'm hoping that a port of 0 is okay when requesting a "unix"
volfile_server.  [I don't know, because I didn't read the docs for
glfs_set_volfile_server()]

> +##
> +# @GlusterServer
> +#
> +# Details for connecting to a gluster server
> +#
> +# @host:       host address (hostname/ipv4/ipv6 addresses)
> +#
> +# @port:       #optional port number on which glusterd is listening
> +#               (default 24007)
> +#
> +# @transport:  #optional transport type used to connect to gluster management
> +#               daemon (default 'tcp')
> +#
> +# Since: 2.5
> +##
> +{ 'struct': 'GlusterServer',
> +  'data': { 'host': 'str',
> +            '*port': 'int',
> +            '*transport': 'GlusterTransport' } }

And my idea on patch 4/4 was that converting this from simple struct to
flat union might be a more realistic view of things (if transport is
'unix', there can't be a port; and rather than abusing the name 'host'
we could use the name 'socket'; similarly for 'rdma') - but without
additional qapi support, I don't know that we can have an optional
'transport' and still have a discriminated union in time for 2.5.

-- 
Eric Blake   eblake redhat com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org



* Re: [Qemu-devel] [PATCH 0/4] block/gluster: add support for multiple gluster servers
  2015-11-12 10:22 [Qemu-devel] [PATCH 0/4] block/gluster: add support for multiple gluster servers Prasanna Kumar Kalever
                   ` (3 preceding siblings ...)
  2015-11-12 10:22 ` [Qemu-devel] [PATCH 4/4] block/gluster: add support for multiple gluster servers Prasanna Kumar Kalever
@ 2015-11-12 22:54 ` Eric Blake
  4 siblings, 0 replies; 18+ messages in thread
From: Eric Blake @ 2015-11-12 22:54 UTC (permalink / raw)
  To: Prasanna Kumar Kalever, qemu-devel
  Cc: kwolf, pkrempa, qemu block, stefanha, jcody, deepakcs, bharata, rtalur


[adding qemu-block]

On 11/12/2015 03:22 AM, Prasanna Kumar Kalever wrote:
> This release is rebased on qemu master branch.
> In this series of patches 1/4 and 2/4 are unchanged.

According to scripts/get_maintainer.pl, this series should have cc'd
qemu-block@nongnu.org.  I don't know whether anyone on the block list
missed my reviews because they went only to qemu-devel; and the series
may matter to other networked block devices that also need to implement
structured options for use in blockdev-add.

> 
> Prasanna Kumar Kalever (4):
>   block/gluster: rename [server, volname, image] -> [host, volume, path]
>   block/gluster: code cleanup
>   block/gluster: using new qapi schema
>   block/gluster: add support for multiple gluster servers
> 

-- 
Eric Blake   eblake redhat com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org



* Re: [Qemu-devel] [PATCH 3/4] block/gluster: using new qapi schema
  2015-11-12 10:22 ` [Qemu-devel] [PATCH 3/4] block/gluster: using new qapi schema Prasanna Kumar Kalever
                     ` (3 preceding siblings ...)
  2015-11-12 22:44   ` Eric Blake
@ 2015-11-13  8:04   ` Markus Armbruster
  4 siblings, 0 replies; 18+ messages in thread
From: Markus Armbruster @ 2015-11-13  8:04 UTC (permalink / raw)
  To: Prasanna Kumar Kalever
  Cc: kwolf, pkrempa, stefanha, jcody, qemu-devel, deepakcs, bharata, rtalur

Prasanna Kumar Kalever <prasanna.kalever@redhat.com> writes:

> this patch adds GlusterConf to qapi/block-core.json
>
> Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
> ---
>  block/gluster.c      | 104 +++++++++++++++++++++++++--------------------------
>  qapi/block-core.json |  60 +++++++++++++++++++++++++++--
>  2 files changed, 109 insertions(+), 55 deletions(-)
[...]
> diff --git a/qapi/block-core.json b/qapi/block-core.json
> index 425fdab..bbefe43 100644
> --- a/qapi/block-core.json
> +++ b/qapi/block-core.json
> @@ -1375,13 +1375,14 @@
>  # Drivers that are supported in block device operations.
>  #
>  # @host_device, @host_cdrom: Since 2.1
> +# @gluster: Since 2.5
>  #
>  # Since: 2.0
>  ##
>  { 'enum': 'BlockdevDriver',
>    'data': [ 'archipelago', 'blkdebug', 'blkverify', 'bochs', 'cloop',
> -            'dmg', 'file', 'ftp', 'ftps', 'host_cdrom', 'host_device',
> -            'http', 'https', 'null-aio', 'null-co', 'parallels',
> +            'dmg', 'file', 'ftp', 'ftps', 'gluster', 'host_cdrom',
> +            'host_device', 'http', 'https', 'null-aio', 'null-co', 'parallels',
>              'qcow', 'qcow2', 'qed', 'quorum', 'raw', 'tftp', 'vdi', 'vhdx',
>              'vmdk', 'vpc', 'vvfat' ] }
>  
> @@ -1797,6 +1798,59 @@
>              '*read-pattern': 'QuorumReadPattern' } }
>  
>  ##
> +# @GlusterTransport
> +#
> +# An enumeration of Gluster transport types
> +#
> +# @tcp:   TCP   - Transmission Control Protocol
> +#
> +# @unix:  UNIX  - Unix domain socket
> +#
> +# @rdma:  RDMA  - Remote direct memory access
> +#
> +# Since: 2.5
> +##
> +{ 'enum': 'GlusterTransport', 'data': [ 'tcp', 'unix', 'rdma'] }
> +
> +##
> +# @GlusterServer
> +#
> +# Details for connecting to a gluster server
> +#
> +# @host:       host address (hostname/ipv4/ipv6 addresses)
> +#
> +# @port:       #optional port number on which glusterd is listening
> +#               (default 24007)
> +#
> +# @transport:  #optional transport type used to connect to gluster management
> +#               daemon (default 'tcp')
> +#
> +# Since: 2.5
> +##
> +{ 'struct': 'GlusterServer',
> +  'data': { 'host': 'str',
> +            '*port': 'int',
> +            '*transport': 'GlusterTransport' } }

Are you reinventing SocketAddress?

Differences:

* You support transport rdma.

* You don't have a way to control IPv4/IPv6.  Why?

* You don't support file descriptor passing.  Why?

Should this be something like

    { 'union': 'GlusterServer',
      'data': {
        ... all the applicable members of SocketAddress ...
        'rdma': 'RdmaAddress'
      } }

?
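
For comparison, SocketAddress in qapi-schema.json is today a simple union
over 'inet', 'unix' and 'fd'.  An analogous GlusterServer (a sketch only;
'RdmaAddress' does not exist anywhere yet and is purely hypothetical) could
read:

    { 'union': 'GlusterServer',
      'data': { 'inet': 'InetSocketAddress',
                'unix': 'UnixSocketAddress',
                'fd':   'String',
                'rdma': 'RdmaAddress' } }

That way gluster gets IPv4/IPv6 control and fd passing for free, and only
the rdma branch is new.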

> +
> +##
> +# @BlockdevOptionsGluster
> +#
> +# Driver specific block device options for Gluster
> +#
> +# @volume:   name of gluster volume where VM image resides
> +#
> +# @path:     absolute path to image file in gluster volume
> +#
> +# @server:   gluster server description
> +#
> +# Since: 2.5
> +##
> +{ 'struct': 'BlockdevOptionsGluster',
> +  'data': { 'volume': 'str',
> +            'path': 'str',
> +            'server': 'GlusterServer' } }
> +
> +##
>  # @BlockdevOptions
>  #
>  # Options for creating a block device.
> @@ -1816,7 +1870,7 @@
>        'file':       'BlockdevOptionsFile',
>        'ftp':        'BlockdevOptionsFile',
>        'ftps':       'BlockdevOptionsFile',
> -# TODO gluster: Wait for structured options
> +      'gluster':    'BlockdevOptionsGluster',
>        'host_cdrom': 'BlockdevOptionsFile',
>        'host_device':'BlockdevOptionsFile',
>        'http':       'BlockdevOptionsFile',


* Re: [Qemu-devel] [PATCH 4/4] block/gluster: add support for multiple gluster servers
  2015-11-12 22:36   ` Eric Blake
@ 2016-02-04 13:22     ` Kevin Wolf
  2016-02-05 13:17       ` Prasanna Kumar Kalever
  0 siblings, 1 reply; 18+ messages in thread
From: Kevin Wolf @ 2016-02-04 13:22 UTC (permalink / raw)
  To: Eric Blake
  Cc: pkrempa, Prasanna Kumar Kalever, stefanha, jcody, qemu-devel,
	deepakcs, bharata, rtalur, ndevos

On 12.11.2015 at 23:36, Eric Blake wrote:
> On 11/12/2015 03:22 AM, Prasanna Kumar Kalever wrote:
> > +static struct glfs *qemu_gluster_init(BlockdevOptionsGluster **gconf,
> > +                                      const char *filename,
> > +                                      QDict *options, Error **errp)
> > +{
> > +    int ret;
> > +
> > +    if (filename) {
> > +        ret = qemu_gluster_parseuri(gconf, filename);
> > +        if (ret < 0) {
> > +            error_setg(errp, "Usage: file=gluster[+transport]://[host[:port]]/"
> > +                             "volume/path[?socket=...]");
> 
> Hmm, just noticing this now, even though this error message is just code
> motion.  It looks like the optional [?socket=...] part of a URI is only
> important when using gluster+unix (is it silently ignored otherwise?).
> And if it is used, you are then assigning it to the host field?
> 
> I almost wonder if GlusterServer should be a discriminated union.  That
> is, in qapi-pseudocode (won't actually compile yet, because it depends
> on features that I have queued for 2.6):
> 
> { 'union':'GlusterServer', 'base':{'transport':'GlusterTransport'},
>   'discriminator':'transport', 'data':{
>     'tcp':{'host':'str', '*port':'uint16'},
>     'unix':{'socket':'str'},
>     'rdma':{...} } }
> 
> Hmm. Qapi doesn't (yet) allow for an optional discriminator (where the
> omission of the discriminator picks a default branch) - another RFE for
> my qapi work for 2.6.

Eric, Prasanna, is this QAPI extension what we're waiting for or what is
the status of this series? Niels (CCed) was hacking on the same thing,
so maybe it's time to get this moving again.

Kevin

> Command-line wise, this would mean you could do in JSON:
> 
> 'servers':[{'transport':'tcp', 'host':'foo'},
>            {'transport':'unix', 'socket':'/path/to/bar'},
>            {'host':'blah'}]
> 
> where the third entry defaults to transport tcp.
> 
> If we think that description is better than what we proposed in 3/4,
> then it's really late to be adding it now, especially since (without
> qapi changes) we'd have a mandatory rather than optional 'transport';
> but worse if we commit to the interface of 3/4 and don't get the
> conversion made in time to the nicer interface.  At least it's okay from
> back-compat perspective to make a mandatory field become optional in
> later releases.
> 
> If it were just gluster code I was worried about, then I could live with
> the interface proposal.  But since seeing this error message is making
> me double-guess the interface correctness, and that will have an impact
> on libvirt, I'm starting to have doubts on what it means for qemu 2.5.
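
For reference, combining a discriminated GlusterServer like the one Eric
sketches above with the json: pseudo-protocol syntax from the cover letter,
a full specification would look something like this (hosts, paths and port
are illustrative values only):

    'json:{"driver":"qcow2",
           "file":{"driver":"gluster",
                   "volume":"testvol","path":"/path/a.qcow2",
                   "servers":[{"transport":"tcp","host":"foo","port":24007},
                              {"transport":"unix","socket":"/path/to/bar"},
                              {"host":"blah"}]}}'

where, as Eric notes, the last entry would only work once qapi grows an
optional discriminator with a default branch.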


* Re: [Qemu-devel] [PATCH 4/4] block/gluster: add support for multiple gluster servers
  2016-02-04 13:22     ` Kevin Wolf
@ 2016-02-05 13:17       ` Prasanna Kumar Kalever
  2016-03-23 12:16         ` Prasanna Kalever
  0 siblings, 1 reply; 18+ messages in thread
From: Prasanna Kumar Kalever @ 2016-02-05 13:17 UTC (permalink / raw)
  To: Kevin Wolf
  Cc: pkrempa, stefanha, jcody, qemu-devel, deepakcs, bharata, rtalur, ndevos

On Thursday, February 4, 2016 6:52:15 PM Kevin Wolf wrote:
> On 12.11.2015 at 23:36, Eric Blake wrote:
> > On 11/12/2015 03:22 AM, Prasanna Kumar Kalever wrote:
> > > +static struct glfs *qemu_gluster_init(BlockdevOptionsGluster **gconf,
> > > +                                      const char *filename,
> > > +                                      QDict *options, Error **errp)
> > > +{
> > > +    int ret;
> > > +
> > > +    if (filename) {
> > > +        ret = qemu_gluster_parseuri(gconf, filename);
> > > +        if (ret < 0) {
> > > +            error_setg(errp, "Usage: file=gluster[+transport]://[host[:port]]/"
> > > +                             "volume/path[?socket=...]");
> > 
> > Hmm, just noticing this now, even though this error message is just code
> > motion.  It looks like the optional [?socket=...] part of a URI is only
> > important when using gluster+unix (is it silently ignored otherwise?).
> > And if it is used, you are then assigning it to the host field?
> > 
> > I almost wonder if GlusterServer should be a discriminated union.  That
> > is, in qapi-pseudocode (won't actually compile yet, because it depends
> > on features that I have queued for 2.6):
> > 
> > { 'union':'GlusterServer', 'base':{'transport':'GlusterTransport'},
> >   'discriminator':'transport', 'data':{
> >     'tcp':{'host':'str', '*port':'uint16'},
> >     'unix':{'socket':'str'},
> >     'rdma':{...} } }
> > 
> > Hmm. Qapi doesn't (yet) allow for an optional discriminator (where the
> > omission of the discriminator picks a default branch) - another RFE for
> > my qapi work for 2.6.
> 
> Eric, Prasanna, is this QAPI extension what we're waiting for or what is
> the status of this series? Niels (CCed) was hacking on the same thing,
> so maybe it's time to get this moving again.

Kevin, correct me if I am wrong, but union discriminator support has not yet
been added to qemu; that is what I am waiting for. I spoke to Eric Blake
about this maybe a month ago.

-Prasanna

> 
> Kevin
> 
> > Command-line wise, this would mean you could do in JSON:
> > 
> > 'servers':[{'transport':'tcp', 'host':'foo'},
> >            {'transport':'unix', 'socket':'/path/to/bar'},
> >            {'host':'blah'}]
> > 
> > where the third entry defaults to transport tcp.
> > 
> > If we think that description is better than what we proposed in 3/4,
> > then it's really late to be adding it now, especially since (without
> > qapi changes) we'd have a mandatory rather than optional 'transport';
> > but worse if we commit to the interface of 3/4 and don't get the
> > conversion made in time to the nicer interface.  At least it's okay from
> > back-compat perspective to make a mandatory field become optional in
> > later releases.
> > 
> > If it were just gluster code I was worried about, then I could live with
> > the interface proposal.  But since seeing this error message is making
> > me double-guess the interface correctness, and that will have an impact
> > on libvirt, I'm starting to have doubts on what it means for qemu 2.5.
> 
> 


* Re: [Qemu-devel] [PATCH 4/4] block/gluster: add support for multiple gluster servers
  2016-02-05 13:17       ` Prasanna Kumar Kalever
@ 2016-03-23 12:16         ` Prasanna Kalever
  2016-03-23 12:22           ` Prasanna Kalever
  0 siblings, 1 reply; 18+ messages in thread
From: Prasanna Kalever @ 2016-03-23 12:16 UTC (permalink / raw)
  To: Kevin Wolf
  Cc: Sankarshan Mukhopadhyay, pkrempa, stefanha, jcody, qemu-devel,
	deepakcs, Bose, Sahina, bharata, rtalur, ndevos

On Fri, Feb 5, 2016 at 6:47 PM, Prasanna Kumar Kalever
<pkalever@redhat.com> wrote:
>
> On Thursday, February 4, 2016 6:52:15 PM Kevin Wolf wrote:
> > On 12.11.2015 at 23:36, Eric Blake wrote:
> > > On 11/12/2015 03:22 AM, Prasanna Kumar Kalever wrote:
> > > > +static struct glfs *qemu_gluster_init(BlockdevOptionsGluster **gconf,
> > > > +                                      const char *filename,
> > > > +                                      QDict *options, Error **errp)
> > > > +{
> > > > +    int ret;
> > > > +
> > > > +    if (filename) {
> > > > +        ret = qemu_gluster_parseuri(gconf, filename);
> > > > +        if (ret < 0) {
> > > > +            error_setg(errp, "Usage: file=gluster[+transport]://[host[:port]]/"
> > > > +                             "volume/path[?socket=...]");
> > >
> > > Hmm, just noticing this now, even though this error message is just code
> > > motion.  It looks like the optional [?socket=...] part of a URI is only
> > > important when using gluster+unix (is it silently ignored otherwise?).
> > > And if it is used, you are then assigning it to the host field?
> > >
> > > I almost wonder if GlusterServer should be a discriminated union.  That
> > > is, in qapi-pseudocode (won't actually compile yet, because it depends
> > > on features that I have queued for 2.6):
> > >
> > > { 'union':'GlusterServer', 'base':{'transport':'GlusterTransport'},
> > >   'discriminator':'transport', 'data':{
> > >     'tcp':{'host':'str', '*port':'uint16'},
> > >     'unix':{'socket':'str'},
> > >     'rdma':{...} } }
> > >
> > > Hmm. Qapi doesn't (yet) allow for an optional discriminator (where the
> > > omission of the discriminator picks a default branch) - another RFE for
> > > my qapi work for 2.6.
> >
> > Eric, Prasanna, is this QAPI extension what we're waiting for or what is
> > the status of this series? Niels (CCed) was hacking on the same thing,
> > so maybe it's time to get this moving again.
>
> Kevin, correct me if I am wrong, but union discriminator support has not yet
> been added to qemu; that is what I am waiting for. I spoke to Eric Blake
> about this maybe a month ago.
>
> -Prasanna

Eric, Kevin, any updates on the union discriminator patches?
Any hope of getting them into 2.6?

-Prasanna

>
> >
> > Kevin
> >
> > > Command-line wise, this would mean you could do in JSON:
> > >
> > > 'servers':[{'transport':'tcp', 'host':'foo'},
> > >            {'transport':'unix', 'socket':'/path/to/bar'},
> > >            {'host':'blah'}]
> > >
> > > where the third entry defaults to transport tcp.
> > >
> > > If we think that description is better than what we proposed in 3/4,
> > > then it's really late to be adding it now, especially since (without
> > > qapi changes) we'd have a mandatory rather than optional 'transport';
> > > but worse if we commit to the interface of 3/4 and don't get the
> > > conversion made in time to the nicer interface.  At least it's okay from
> > > back-compat perspective to make a mandatory field become optional in
> > > later releases.
> > >
> > > If it were just gluster code I was worried about, then I could live with
> > > the interface proposal.  But since seeing this error message is making
> > > me double-guess the interface correctness, and that will have an impact
> > > on libvirt, I'm starting to have doubts on what it means for qemu 2.5.
> >
> >


* Re: [Qemu-devel] [PATCH 4/4] block/gluster: add support for multiple gluster servers
  2016-03-23 12:16         ` Prasanna Kalever
@ 2016-03-23 12:22           ` Prasanna Kalever
  0 siblings, 0 replies; 18+ messages in thread
From: Prasanna Kalever @ 2016-03-23 12:22 UTC (permalink / raw)
  To: Eric Blake, Kevin Wolf
  Cc: Sankarshan Mukhopadhyay, pkrempa, stefanha, jcody, qemu-devel,
	deepakcs, Bose, Sahina, bharata, rtalur, ndevos

On Wed, Mar 23, 2016 at 5:46 PM, Prasanna Kalever <pkalever@redhat.com> wrote:
> On Fri, Feb 5, 2016 at 6:47 PM, Prasanna Kumar Kalever
> <pkalever@redhat.com> wrote:
>>
>> On Thursday, February 4, 2016 6:52:15 PM Kevin Wolf wrote:
>> > On 12.11.2015 at 23:36, Eric Blake wrote:
>> > > On 11/12/2015 03:22 AM, Prasanna Kumar Kalever wrote:
>> > > > +static struct glfs *qemu_gluster_init(BlockdevOptionsGluster **gconf,
>> > > > +                                      const char *filename,
>> > > > +                                      QDict *options, Error **errp)
>> > > > +{
>> > > > +    int ret;
>> > > > +
>> > > > +    if (filename) {
>> > > > +        ret = qemu_gluster_parseuri(gconf, filename);
>> > > > +        if (ret < 0) {
>> > > > +            error_setg(errp, "Usage: file=gluster[+transport]://[host[:port]]/"
>> > > > +                             "volume/path[?socket=...]");
>> > >
>> > > Hmm, just noticing this now, even though this error message is just code
>> > > motion.  It looks like the optional [?socket=...] part of a URI is only
>> > > important when using gluster+unix (is it silently ignored otherwise?).
>> > > And if it is used, you are then assigning it to the host field?
>> > >
>> > > I almost wonder if GlusterServer should be a discriminated union.  That
>> > > is, in qapi-pseudocode (won't actually compile yet, because it depends
>> > > on features that I have queued for 2.6):
>> > >
>> > > { 'union':'GlusterServer', 'base':{'transport':'GlusterTransport'},
>> > >   'discriminator':'transport', 'data':{
>> > >     'tcp':{'host':'str', '*port':'uint16'},
>> > >     'unix':{'socket':'str'},
>> > >     'rdma':{...} } }
>> > >
>> > > Hmm. Qapi doesn't (yet) allow for an optional discriminator (where the
>> > > omission of the discriminator picks a default branch) - another RFE for
>> > > my qapi work for 2.6.
>> >
>> > Eric, Prasanna, is this QAPI extension what we're waiting for or what is
>> > the status of this series? Niels (CCed) was hacking on the same thing,
>> > so maybe it's time to get this moving again.
>>
>> Kevin, correct me if I am wrong, but union discriminator support has not yet
>> been added to qemu; that is what I am waiting for. I spoke to Eric Blake
>> about this maybe a month ago.
>>
>> -Prasanna

[ Adding Eric in 'to:' list ]

>
> Eric, Kevin, any updates on the union discriminator patches?
> Any hope of getting them into 2.6?
>
> -Prasanna
>
>>
>> >
>> > Kevin
>> >
>> > > Command-line wise, this would mean you could do in JSON:
>> > >
>> > > 'servers':[{'transport':'tcp', 'host':'foo'},
>> > >            {'transport':'unix', 'socket':'/path/to/bar'},
>> > >            {'host':'blah'}]
>> > >
>> > > where the third entry defaults to transport tcp.
>> > >
>> > > If we think that description is better than what we proposed in 3/4,
>> > > then it's really late to be adding it now, especially since (without
>> > > qapi changes) we'd have a mandatory rather than optional 'transport';
>> > > but worse if we commit to the interface of 3/4 and don't get the
>> > > conversion made in time to the nicer interface.  At least it's okay from
>> > > back-compat perspective to make a mandatory field become optional in
>> > > later releases.
>> > >
>> > > If it were just gluster code I was worried about, then I could live with
>> > > the interface proposal.  But since seeing this error message is making
>> > > me double-guess the interface correctness, and that will have an impact
>> > > on libvirt, I'm starting to have doubts on what it means for qemu 2.5.
>> >
>> >


end of thread, other threads:[~2016-03-23 12:23 UTC | newest]

Thread overview: 18+ messages
2015-11-12 10:22 [Qemu-devel] [PATCH 0/4] block/gluster: add support for multiple gluster servers Prasanna Kumar Kalever
2015-11-12 10:22 ` [Qemu-devel] [PATCH 1/4] block/gluster: rename [server, volname, image] -> [host, volume, path] Prasanna Kumar Kalever
2015-11-12 20:28   ` Eric Blake
2015-11-12 10:22 ` [Qemu-devel] [PATCH 2/4] block/gluster: code cleanup Prasanna Kumar Kalever
2015-11-12 10:22 ` [Qemu-devel] [PATCH 3/4] block/gluster: using new qapi schema Prasanna Kumar Kalever
2015-11-12 20:00   ` Jeff Cody
2015-11-12 21:16   ` Eric Blake
2015-11-12 21:37   ` Eric Blake
2015-11-12 22:44   ` Eric Blake
2015-11-13  8:04   ` Markus Armbruster
2015-11-12 10:22 ` [Qemu-devel] [PATCH 4/4] block/gluster: add support for multiple gluster servers Prasanna Kumar Kalever
2015-11-12 20:00   ` Jeff Cody
2015-11-12 22:36   ` Eric Blake
2016-02-04 13:22     ` Kevin Wolf
2016-02-05 13:17       ` Prasanna Kumar Kalever
2016-03-23 12:16         ` Prasanna Kalever
2016-03-23 12:22           ` Prasanna Kalever
2015-11-12 22:54 ` [Qemu-devel] [PATCH 0/4] " Eric Blake
