* [Qemu-devel] [PATCH 0/3] block/gluster: add support for multiple gluster servers
@ 2015-11-10  9:09 Prasanna Kumar Kalever
  2015-11-10  9:09 ` [Qemu-devel] [PATCH v2 1/3] block/gluster: rename [server, volname, image] -> [host, volume, path] Prasanna Kumar Kalever
                   ` (3 more replies)
  0 siblings, 4 replies; 14+ messages in thread
From: Prasanna Kumar Kalever @ 2015-11-10  9:09 UTC (permalink / raw)
  To: qemu-devel
  Cc: kwolf, pkrempa, Prasanna Kumar Kalever, stefanha, jcody,
	deepakcs, bharata, rtalur

This series is rebased on the qemu master branch.
Patches 1/3 and 2/3 in this series are unchanged.

Prasanna Kumar Kalever (3):
  block/gluster: rename [server, volname, image] -> [host, volume, path]
  block/gluster: code cleanup
  block/gluster: add support for multiple gluster servers

 block/gluster.c      | 597 ++++++++++++++++++++++++++++++++++++++++-----------
 qapi/block-core.json |  60 +++++-
 2 files changed, 529 insertions(+), 128 deletions(-)

-- 
2.1.0

^ permalink raw reply	[flat|nested] 14+ messages in thread

* [Qemu-devel] [PATCH v2 1/3] block/gluster: rename [server, volname, image] -> [host, volume, path]
  2015-11-10  9:09 [Qemu-devel] [PATCH 0/3] block/gluster: add support for multiple gluster servers Prasanna Kumar Kalever
@ 2015-11-10  9:09 ` Prasanna Kumar Kalever
  2015-11-10 15:28   ` Eric Blake
  2015-11-10  9:09 ` [Qemu-devel] [PATCH v2 2/3] block/gluster: code cleanup Prasanna Kumar Kalever
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 14+ messages in thread
From: Prasanna Kumar Kalever @ 2015-11-10  9:09 UTC (permalink / raw)
  To: qemu-devel
  Cc: kwolf, pkrempa, Prasanna Kumar Kalever, stefanha, jcody,
	deepakcs, bharata, rtalur

This patch becomes meaningful with the next patch, which adds support for
multiple gluster servers.

After that patch, the code uses a 'server' variable for the key 'host' in
the 'servers' tuple values, and it would be quite messy to have colliding
names for variables. To maintain readability, and to keep the naming
consistent with the rest of the existing code as well as with the input
keys/options, this patch renames the following variables:
'server'  -> 'host'
'image'   -> 'path'
'volname' -> 'volume'

Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
---
 block/gluster.c | 54 +++++++++++++++++++++++++++---------------------------
 1 file changed, 27 insertions(+), 27 deletions(-)

diff --git a/block/gluster.c b/block/gluster.c
index 1eb3a8c..513a774 100644
--- a/block/gluster.c
+++ b/block/gluster.c
@@ -25,19 +25,19 @@ typedef struct BDRVGlusterState {
 } BDRVGlusterState;
 
 typedef struct GlusterConf {
-    char *server;
+    char *host;
     int port;
-    char *volname;
-    char *image;
+    char *volume;
+    char *path;
     char *transport;
 } GlusterConf;
 
 static void qemu_gluster_gconf_free(GlusterConf *gconf)
 {
     if (gconf) {
-        g_free(gconf->server);
-        g_free(gconf->volname);
-        g_free(gconf->image);
+        g_free(gconf->host);
+        g_free(gconf->volume);
+        g_free(gconf->path);
         g_free(gconf->transport);
         g_free(gconf);
     }
@@ -57,19 +57,19 @@ static int parse_volume_options(GlusterConf *gconf, char *path)
     if (*p == '\0') {
         return -EINVAL;
     }
-    gconf->volname = g_strndup(q, p - q);
+    gconf->volume = g_strndup(q, p - q);
 
-    /* image */
+    /* path */
     p += strspn(p, "/");
     if (*p == '\0') {
         return -EINVAL;
     }
-    gconf->image = g_strdup(p);
+    gconf->path = g_strdup(p);
     return 0;
 }
 
 /*
- * file=gluster[+transport]://[server[:port]]/volname/image[?socket=...]
+ * file=gluster[+transport]://[host[:port]]/volume/path[?socket=...]
  *
  * 'gluster' is the protocol.
  *
@@ -78,10 +78,10 @@ static int parse_volume_options(GlusterConf *gconf, char *path)
  * tcp, unix and rdma. If a transport type isn't specified, then tcp
  * type is assumed.
  *
- * 'server' specifies the server where the volume file specification for
+ * 'host' specifies the host where the volume file specification for
  * the given volume resides. This can be either hostname, ipv4 address
  * or ipv6 address. ipv6 address needs to be within square brackets [ ].
- * If transport type is 'unix', then 'server' field should not be specified.
+ * If transport type is 'unix', then 'host' field should not be specified.
  * The 'socket' field needs to be populated with the path to unix domain
  * socket.
  *
@@ -90,9 +90,9 @@ static int parse_volume_options(GlusterConf *gconf, char *path)
  * default port. If the transport type is unix, then 'port' should not be
  * specified.
  *
- * 'volname' is the name of the gluster volume which contains the VM image.
+ * 'volume' is the name of the gluster volume which contains the VM image.
  *
- * 'image' is the path to the actual VM image that resides on gluster volume.
+ * 'path' is the path to the actual VM image that resides on gluster volume.
  *
  * Examples:
  *
@@ -101,7 +101,7 @@ static int parse_volume_options(GlusterConf *gconf, char *path)
  * file=gluster+tcp://1.2.3.4:24007/testvol/dir/a.img
  * file=gluster+tcp://[1:2:3:4:5:6:7:8]/testvol/dir/a.img
  * file=gluster+tcp://[1:2:3:4:5:6:7:8]:24007/testvol/dir/a.img
- * file=gluster+tcp://server.domain.com:24007/testvol/dir/a.img
+ * file=gluster+tcp://host.domain.com:24007/testvol/dir/a.img
  * file=gluster+unix:///testvol/dir/a.img?socket=/tmp/glusterd.socket
  * file=gluster+rdma://1.2.3.4:24007/testvol/a.img
  */
@@ -152,9 +152,9 @@ static int qemu_gluster_parseuri(GlusterConf *gconf, const char *filename)
             ret = -EINVAL;
             goto out;
         }
-        gconf->server = g_strdup(qp->p[0].value);
+        gconf->host = g_strdup(qp->p[0].value);
     } else {
-        gconf->server = g_strdup(uri->server ? uri->server : "localhost");
+        gconf->host = g_strdup(uri->server ? uri->server : "localhost");
         gconf->port = uri->port;
     }
 
@@ -175,18 +175,18 @@ static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
 
     ret = qemu_gluster_parseuri(gconf, filename);
     if (ret < 0) {
-        error_setg(errp, "Usage: file=gluster[+transport]://[server[:port]]/"
-                   "volname/image[?socket=...]");
+        error_setg(errp, "Usage: file=gluster[+transport]://[host[:port]]/"
+                   "volume/path[?socket=...]");
         errno = -ret;
         goto out;
     }
 
-    glfs = glfs_new(gconf->volname);
+    glfs = glfs_new(gconf->volume);
     if (!glfs) {
         goto out;
     }
 
-    ret = glfs_set_volfile_server(glfs, gconf->transport, gconf->server,
+    ret = glfs_set_volfile_server(glfs, gconf->transport, gconf->host,
             gconf->port);
     if (ret < 0) {
         goto out;
@@ -204,9 +204,9 @@ static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
     ret = glfs_init(glfs);
     if (ret) {
         error_setg_errno(errp, errno,
-                         "Gluster connection failed for server=%s port=%d "
-                         "volume=%s image=%s transport=%s", gconf->server,
-                         gconf->port, gconf->volname, gconf->image,
+                         "Gluster connection failed for host=%s port=%d "
+                         "volume=%s path=%s transport=%s", gconf->host,
+                         gconf->port, gconf->volume, gconf->path,
                          gconf->transport);
 
         /* glfs_init sometimes doesn't set errno although docs suggest that */
@@ -314,7 +314,7 @@ static int qemu_gluster_open(BlockDriverState *bs,  QDict *options,
 
     qemu_gluster_parse_flags(bdrv_flags, &open_flags);
 
-    s->fd = glfs_open(s->glfs, gconf->image, open_flags);
+    s->fd = glfs_open(s->glfs, gconf->path, open_flags);
     if (!s->fd) {
         ret = -errno;
     }
@@ -364,7 +364,7 @@ static int qemu_gluster_reopen_prepare(BDRVReopenState *state,
         goto exit;
     }
 
-    reop_s->fd = glfs_open(reop_s->glfs, gconf->image, open_flags);
+    reop_s->fd = glfs_open(reop_s->glfs, gconf->path, open_flags);
     if (reop_s->fd == NULL) {
         /* reops->glfs will be cleaned up in _abort */
         ret = -errno;
@@ -511,7 +511,7 @@ static int qemu_gluster_create(const char *filename,
         goto out;
     }
 
-    fd = glfs_creat(glfs, gconf->image,
+    fd = glfs_creat(glfs, gconf->path,
         O_WRONLY | O_CREAT | O_TRUNC | O_BINARY, S_IRUSR | S_IWUSR);
     if (!fd) {
         ret = -errno;
-- 
2.1.0


* [Qemu-devel] [PATCH v2 2/3] block/gluster: code cleanup
  2015-11-10  9:09 [Qemu-devel] [PATCH 0/3] block/gluster: add support for multiple gluster servers Prasanna Kumar Kalever
  2015-11-10  9:09 ` [Qemu-devel] [PATCH v2 1/3] block/gluster: rename [server, volname, image] -> [host, volume, path] Prasanna Kumar Kalever
@ 2015-11-10  9:09 ` Prasanna Kumar Kalever
  2015-11-10  9:09 ` [Qemu-devel] [PATCH v13 3/3] block/gluster: add support for multiple gluster servers Prasanna Kumar Kalever
  2015-11-10 15:26 ` [Qemu-devel] [PATCH 0/3] " Eric Blake
  3 siblings, 0 replies; 14+ messages in thread
From: Prasanna Kumar Kalever @ 2015-11-10  9:09 UTC (permalink / raw)
  To: qemu-devel
  Cc: kwolf, pkrempa, Prasanna Kumar Kalever, stefanha, jcody,
	deepakcs, bharata, rtalur

unified the coding style of multi-line function arguments and error
reporting calls, and moved scattered declarations of structures and
option list variables together

Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
---
 block/gluster.c | 113 ++++++++++++++++++++++++++++++--------------------------
 1 file changed, 60 insertions(+), 53 deletions(-)

diff --git a/block/gluster.c b/block/gluster.c
index 513a774..ededda2 100644
--- a/block/gluster.c
+++ b/block/gluster.c
@@ -24,6 +24,11 @@ typedef struct BDRVGlusterState {
     struct glfs_fd *fd;
 } BDRVGlusterState;
 
+typedef struct BDRVGlusterReopenState {
+    struct glfs *glfs;
+    struct glfs_fd *fd;
+} BDRVGlusterReopenState;
+
 typedef struct GlusterConf {
     char *host;
     int port;
@@ -32,6 +37,39 @@ typedef struct GlusterConf {
     char *transport;
 } GlusterConf;
 
+
+static QemuOptsList qemu_gluster_create_opts = {
+    .name = "qemu-gluster-create-opts",
+    .head = QTAILQ_HEAD_INITIALIZER(qemu_gluster_create_opts.head),
+    .desc = {
+        {
+            .name = BLOCK_OPT_SIZE,
+            .type = QEMU_OPT_SIZE,
+            .help = "Virtual disk size"
+        },
+        {
+            .name = BLOCK_OPT_PREALLOC,
+            .type = QEMU_OPT_STRING,
+            .help = "Preallocation mode (allowed values: off, full)"
+        },
+        { /* end of list */ }
+    }
+};
+
+static QemuOptsList runtime_opts = {
+    .name = "gluster",
+    .head = QTAILQ_HEAD_INITIALIZER(runtime_opts.head),
+    .desc = {
+        {
+            .name = "filename",
+            .type = QEMU_OPT_STRING,
+            .help = "URL to the gluster image",
+        },
+        { /* end of list */ }
+    },
+};
+
+
 static void qemu_gluster_gconf_free(GlusterConf *gconf)
 {
     if (gconf) {
@@ -176,7 +214,7 @@ static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
     ret = qemu_gluster_parseuri(gconf, filename);
     if (ret < 0) {
         error_setg(errp, "Usage: file=gluster[+transport]://[host[:port]]/"
-                   "volume/path[?socket=...]");
+                         "volume/path[?socket=...]");
         errno = -ret;
         goto out;
     }
@@ -254,20 +292,6 @@ static void gluster_finish_aiocb(struct glfs_fd *fd, ssize_t ret, void *arg)
     qemu_bh_schedule(acb->bh);
 }
 
-/* TODO Convert to fine grained options */
-static QemuOptsList runtime_opts = {
-    .name = "gluster",
-    .head = QTAILQ_HEAD_INITIALIZER(runtime_opts.head),
-    .desc = {
-        {
-            .name = "filename",
-            .type = QEMU_OPT_STRING,
-            .help = "URL to the gluster image",
-        },
-        { /* end of list */ }
-    },
-};
-
 static void qemu_gluster_parse_flags(int bdrv_flags, int *open_flags)
 {
     assert(open_flags != NULL);
@@ -285,7 +309,7 @@ static void qemu_gluster_parse_flags(int bdrv_flags, int *open_flags)
     }
 }
 
-static int qemu_gluster_open(BlockDriverState *bs,  QDict *options,
+static int qemu_gluster_open(BlockDriverState *bs, QDict *options,
                              int bdrv_flags, Error **errp)
 {
     BDRVGlusterState *s = bs->opaque;
@@ -334,12 +358,6 @@ out:
     return ret;
 }
 
-typedef struct BDRVGlusterReopenState {
-    struct glfs *glfs;
-    struct glfs_fd *fd;
-} BDRVGlusterReopenState;
-
-
 static int qemu_gluster_reopen_prepare(BDRVReopenState *state,
                                        BlockReopenQueue *queue, Error **errp)
 {
@@ -426,7 +444,9 @@ static void qemu_gluster_reopen_abort(BDRVReopenState *state)
 
 #ifdef CONFIG_GLUSTERFS_ZEROFILL
 static coroutine_fn int qemu_gluster_co_write_zeroes(BlockDriverState *bs,
-        int64_t sector_num, int nb_sectors, BdrvRequestFlags flags)
+                                                     int64_t sector_num,
+                                                     int nb_sectors,
+                                                     BdrvRequestFlags flags)
 {
     int ret;
     GlusterAIOCB *acb = g_slice_new(GlusterAIOCB);
@@ -459,7 +479,7 @@ static inline bool gluster_supports_zerofill(void)
 }
 
 static inline int qemu_gluster_zerofill(struct glfs_fd *fd, int64_t offset,
-        int64_t size)
+                                        int64_t size)
 {
     return glfs_zerofill(fd, offset, size);
 }
@@ -471,7 +491,7 @@ static inline bool gluster_supports_zerofill(void)
 }
 
 static inline int qemu_gluster_zerofill(struct glfs_fd *fd, int64_t offset,
-        int64_t size)
+                                        int64_t size)
 {
     return 0;
 }
@@ -500,19 +520,17 @@ static int qemu_gluster_create(const char *filename,
     tmp = qemu_opt_get_del(opts, BLOCK_OPT_PREALLOC);
     if (!tmp || !strcmp(tmp, "off")) {
         prealloc = 0;
-    } else if (!strcmp(tmp, "full") &&
-               gluster_supports_zerofill()) {
+    } else if (!strcmp(tmp, "full") && gluster_supports_zerofill()) {
         prealloc = 1;
     } else {
         error_setg(errp, "Invalid preallocation mode: '%s'"
-            " or GlusterFS doesn't support zerofill API",
-            tmp);
+                         " or GlusterFS doesn't support zerofill API", tmp);
         ret = -EINVAL;
         goto out;
     }
 
     fd = glfs_creat(glfs, gconf->path,
-        O_WRONLY | O_CREAT | O_TRUNC | O_BINARY, S_IRUSR | S_IWUSR);
+                    O_WRONLY | O_CREAT | O_TRUNC | O_BINARY, S_IRUSR | S_IWUSR);
     if (!fd) {
         ret = -errno;
     } else {
@@ -538,7 +556,8 @@ out:
 }
 
 static coroutine_fn int qemu_gluster_co_rw(BlockDriverState *bs,
-        int64_t sector_num, int nb_sectors, QEMUIOVector *qiov, int write)
+                                           int64_t sector_num, int nb_sectors,
+                                           QEMUIOVector *qiov, int write)
 {
     int ret;
     GlusterAIOCB *acb = g_slice_new(GlusterAIOCB);
@@ -553,10 +572,10 @@ static coroutine_fn int qemu_gluster_co_rw(BlockDriverState *bs,
 
     if (write) {
         ret = glfs_pwritev_async(s->fd, qiov->iov, qiov->niov, offset, 0,
-            &gluster_finish_aiocb, acb);
+                                 &gluster_finish_aiocb, acb);
     } else {
         ret = glfs_preadv_async(s->fd, qiov->iov, qiov->niov, offset, 0,
-            &gluster_finish_aiocb, acb);
+                                &gluster_finish_aiocb, acb);
     }
 
     if (ret < 0) {
@@ -586,13 +605,17 @@ static int qemu_gluster_truncate(BlockDriverState *bs, int64_t offset)
 }
 
 static coroutine_fn int qemu_gluster_co_readv(BlockDriverState *bs,
-        int64_t sector_num, int nb_sectors, QEMUIOVector *qiov)
+                                              int64_t sector_num,
+                                              int nb_sectors,
+                                              QEMUIOVector *qiov)
 {
     return qemu_gluster_co_rw(bs, sector_num, nb_sectors, qiov, 0);
 }
 
 static coroutine_fn int qemu_gluster_co_writev(BlockDriverState *bs,
-        int64_t sector_num, int nb_sectors, QEMUIOVector *qiov)
+                                               int64_t sector_num,
+                                               int nb_sectors,
+                                               QEMUIOVector *qiov)
 {
     return qemu_gluster_co_rw(bs, sector_num, nb_sectors, qiov, 1);
 }
@@ -624,7 +647,8 @@ out:
 
 #ifdef CONFIG_GLUSTERFS_DISCARD
 static coroutine_fn int qemu_gluster_co_discard(BlockDriverState *bs,
-        int64_t sector_num, int nb_sectors)
+                                                int64_t sector_num,
+                                                int nb_sectors)
 {
     int ret;
     GlusterAIOCB *acb = g_slice_new(GlusterAIOCB);
@@ -696,23 +720,6 @@ static int qemu_gluster_has_zero_init(BlockDriverState *bs)
     return 0;
 }
 
-static QemuOptsList qemu_gluster_create_opts = {
-    .name = "qemu-gluster-create-opts",
-    .head = QTAILQ_HEAD_INITIALIZER(qemu_gluster_create_opts.head),
-    .desc = {
-        {
-            .name = BLOCK_OPT_SIZE,
-            .type = QEMU_OPT_SIZE,
-            .help = "Virtual disk size"
-        },
-        {
-            .name = BLOCK_OPT_PREALLOC,
-            .type = QEMU_OPT_STRING,
-            .help = "Preallocation mode (allowed values: off, full)"
-        },
-        { /* end of list */ }
-    }
-};
 
 static BlockDriver bdrv_gluster = {
     .format_name                  = "gluster",
-- 
2.1.0


* [Qemu-devel] [PATCH v13 3/3] block/gluster: add support for multiple gluster servers
  2015-11-10  9:09 [Qemu-devel] [PATCH 0/3] block/gluster: add support for multiple gluster servers Prasanna Kumar Kalever
  2015-11-10  9:09 ` [Qemu-devel] [PATCH v2 1/3] block/gluster: rename [server, volname, image] -> [host, volume, path] Prasanna Kumar Kalever
  2015-11-10  9:09 ` [Qemu-devel] [PATCH v2 2/3] block/gluster: code cleanup Prasanna Kumar Kalever
@ 2015-11-10  9:09 ` Prasanna Kumar Kalever
  2015-11-10 16:07   ` Eric Blake
  2015-11-10 17:24   ` Jeff Cody
  2015-11-10 15:26 ` [Qemu-devel] [PATCH 0/3] " Eric Blake
  3 siblings, 2 replies; 14+ messages in thread
From: Prasanna Kumar Kalever @ 2015-11-10  9:09 UTC (permalink / raw)
  To: qemu-devel
  Cc: kwolf, pkrempa, Prasanna Kumar Kalever, stefanha, jcody,
	deepakcs, bharata, rtalur

This patch adds a way to specify multiple volfile servers to the gluster
block backend of QEMU with tcp|rdma transport types and their port numbers.

Problem:

Currently, a VM image on a gluster volume is specified like this:

file=gluster[+tcp]://host[:port]/testvol/a.img

Assume we have three hosts in a trusted pool with a replica 3 volume in
action, and the host mentioned in the command above goes down for some
reason. Since the volume is replica 3, the other two hosts remain active
and could boot the VM.

But currently there is no mechanism to pass the addresses of those other
two gluster hosts to qemu.

Solution:

The new way of specifying a VM image on a gluster volume with volfile
servers (the old syntax is still supported for backward compatibility):

Basic command line syntax looks like:

Pattern I:
 -drive driver=gluster,
        volume=testvol,path=/path/a.raw,
        servers.0.host=1.2.3.4,
       [servers.0.port=24007,]
       [servers.0.transport=tcp,]
        servers.1.host=5.6.7.8,
       [servers.1.port=24008,]
       [servers.1.transport=rdma,] ...

Pattern II:
 'json:{"driver":"qcow2","file":{"driver":"gluster",
       "volume":"testvol","path":"/path/a.qcow2",
       "servers":[{tuple0},{tuple1}, ...{tupleN}]}}'

   driver      => 'gluster' (protocol name)
   volume      => name of gluster volume where our VM image resides
   path        => absolute path of image in gluster volume

  {tuple}      => {"host":"1.2.3.4"[,"port":"24007","transport":"tcp"]}

   host        => host address (hostname/ipv4/ipv6 addresses)
   port        => port number on which glusterd is listening. (default 24007)
   transport   => transport type used to connect to gluster management daemon,
                   it can be tcp|rdma (default 'tcp')

Examples:
1.
 -drive driver=qcow2,file.driver=gluster,
        file.volume=testvol,file.path=/path/a.qcow2,
        file.servers.0.host=1.2.3.4,
        file.servers.0.port=24007,
        file.servers.0.transport=tcp,
        file.servers.1.host=5.6.7.8,
        file.servers.1.port=24008,
        file.servers.1.transport=rdma
2.
 'json:{"driver":"qcow2","file":{"driver":"gluster","volume":"testvol",
         "path":"/path/a.qcow2","servers":
         [{"host":"1.2.3.4","port":"24007","transport":"tcp"},
          {"host":"4.5.6.7","port":"24008","transport":"rdma"}] } }'

This patch provides a mechanism to supply all the server addresses in the
replica set, so that if host1 goes down the VM can still boot from any of
the remaining active hosts.

This is equivalent to the backup-volfile-servers option supported by
mount.glusterfs (the FUSE way of mounting a gluster volume).

Credits: Sincere thanks to Kevin Wolf <kwolf@redhat.com> and
"Deepak C Shetty" <deepakcs@redhat.com> for inputs and all their support

Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
---
v1:
multiple host addresses but common port number and transport type
pattern: URI syntax with query (?) delimiter
syntax:
    file=gluster[+transport-type]://host1:24007/testvol/a.img\
         ?servers=host2&servers=host3

v2:
multiple host addresses, each with its own port number, but all using a
common transport type
pattern: URI syntax with query (?) delimiter
syntax:
    file=gluster[+transport-type]://[host[:port]]/testvol/a.img\
         [?servers=host1[:port]\
          &servers=host2[:port]]

v3:
multiple host addresses, each with its own port number and transport type
pattern: changed to json
syntax:
    'json:{"driver":"qcow2","file":{"driver":"gluster","volume":"testvol",
           "path":"/path/a.qcow2","servers":
         [{"host":"1.2.3.4","port":"24007","transport":"tcp"},
          {"host":"4.5.6.7","port":"24008","transport":"rdma"}] } }'

v4, v5:
address comments from "Eric Blake" <eblake@redhat.com>
renamed:
'backup-volfile-servers' -> 'volfile-servers'

v6:
address comments from Peter Krempa <pkrempa@redhat.com>
renamed:
 'volname'    ->  'volume'
 'image-path' ->  'path'
 'server'     ->  'host'

v7:
fix for v6 (initialize num_servers to 1 and other typos)

v8:
split patch set v7 into series of 3 as per Peter Krempa <pkrempa@redhat.com>
review comments

v9:
reorder the series of patches addressing "Eric Blake" <eblake@redhat.com>
review comments

v10:
fix mem-leak as per Peter Krempa <pkrempa@redhat.com> review comments

v11:
using qapi-types* defined structures as per "Eric Blake" <eblake@redhat.com>
review comments.

v12:
fix crash caused in qapi_free_BlockdevOptionsGluster

v13:
address comments from "Jeff Cody" <jcody@redhat.com>
---
 block/gluster.c      | 468 ++++++++++++++++++++++++++++++++++++++++++++-------
 qapi/block-core.json |  60 ++++++-
 2 files changed, 461 insertions(+), 67 deletions(-)

diff --git a/block/gluster.c b/block/gluster.c
index ededda2..8939072 100644
--- a/block/gluster.c
+++ b/block/gluster.c
@@ -11,6 +11,19 @@
 #include "block/block_int.h"
 #include "qemu/uri.h"
 
+#define GLUSTER_OPT_FILENAME        "filename"
+#define GLUSTER_OPT_VOLUME          "volume"
+#define GLUSTER_OPT_PATH            "path"
+#define GLUSTER_OPT_HOST            "host"
+#define GLUSTER_OPT_PORT            "port"
+#define GLUSTER_OPT_TRANSPORT       "transport"
+#define GLUSTER_OPT_SERVERS_PATTERN "servers."
+
+#define GLUSTER_DEFAULT_PORT        24007
+
+#define MAX_SERVERS                 "10000"
+
+
 typedef struct GlusterAIOCB {
     int64_t size;
     int ret;
@@ -29,15 +42,6 @@ typedef struct BDRVGlusterReopenState {
     struct glfs_fd *fd;
 } BDRVGlusterReopenState;
 
-typedef struct GlusterConf {
-    char *host;
-    int port;
-    char *volume;
-    char *path;
-    char *transport;
-} GlusterConf;
-
-
 static QemuOptsList qemu_gluster_create_opts = {
     .name = "qemu-gluster-create-opts",
     .head = QTAILQ_HEAD_INITIALIZER(qemu_gluster_create_opts.head),
@@ -61,7 +65,7 @@ static QemuOptsList runtime_opts = {
     .head = QTAILQ_HEAD_INITIALIZER(runtime_opts.head),
     .desc = {
         {
-            .name = "filename",
+            .name = GLUSTER_OPT_FILENAME,
             .type = QEMU_OPT_STRING,
             .help = "URL to the gluster image",
         },
@@ -69,19 +73,48 @@ static QemuOptsList runtime_opts = {
     },
 };
 
+static QemuOptsList runtime_json_opts = {
+    .name = "gluster_json",
+    .head = QTAILQ_HEAD_INITIALIZER(runtime_json_opts.head),
+    .desc = {
+        {
+            .name = GLUSTER_OPT_VOLUME,
+            .type = QEMU_OPT_STRING,
+            .help = "name of gluster volume where VM image resides",
+        },
+        {
+            .name = GLUSTER_OPT_PATH,
+            .type = QEMU_OPT_STRING,
+            .help = "absolute path to image file in gluster volume",
+        },
+        { /* end of list */ }
+    },
+};
 
-static void qemu_gluster_gconf_free(GlusterConf *gconf)
-{
-    if (gconf) {
-        g_free(gconf->host);
-        g_free(gconf->volume);
-        g_free(gconf->path);
-        g_free(gconf->transport);
-        g_free(gconf);
-    }
-}
+static QemuOptsList runtime_tuple_opts = {
+    .name = "gluster_tuple",
+    .head = QTAILQ_HEAD_INITIALIZER(runtime_tuple_opts.head),
+    .desc = {
+        {
+            .name = GLUSTER_OPT_HOST,
+            .type = QEMU_OPT_STRING,
+            .help = "host address (hostname/ipv4/ipv6 addresses)",
+        },
+        {
+            .name = GLUSTER_OPT_PORT,
+            .type = QEMU_OPT_NUMBER,
+            .help = "port number on which glusterd is listening (default 24007)",
+        },
+        {
+            .name = GLUSTER_OPT_TRANSPORT,
+            .type = QEMU_OPT_STRING,
+            .help = "transport type 'tcp' or 'rdma' (default 'tcp')",
+        },
+        { /* end of list */ }
+    },
+};
 
-static int parse_volume_options(GlusterConf *gconf, char *path)
+static int parse_volume_options(BlockdevOptionsGluster *gconf, char *path)
 {
     char *p, *q;
 
@@ -143,8 +176,11 @@ static int parse_volume_options(GlusterConf *gconf, char *path)
  * file=gluster+unix:///testvol/dir/a.img?socket=/tmp/glusterd.socket
  * file=gluster+rdma://1.2.3.4:24007/testvol/a.img
  */
-static int qemu_gluster_parseuri(GlusterConf *gconf, const char *filename)
+static int qemu_gluster_parseuri(BlockdevOptionsGluster **pgconf,
+                                 const char *filename)
 {
+    BlockdevOptionsGluster *gconf;
+    GlusterServer *gsconf;
     URI *uri;
     QueryParams *qp = NULL;
     bool is_unix = false;
@@ -155,20 +191,24 @@ static int qemu_gluster_parseuri(GlusterConf *gconf, const char *filename)
         return -EINVAL;
     }
 
+    gconf = g_new0(BlockdevOptionsGluster, 1);
+    gsconf = g_new0(GlusterServer, 1);
+
     /* transport */
     if (!uri->scheme || !strcmp(uri->scheme, "gluster")) {
-        gconf->transport = g_strdup("tcp");
+        gsconf->transport = GLUSTER_TRANSPORT_TCP;
     } else if (!strcmp(uri->scheme, "gluster+tcp")) {
-        gconf->transport = g_strdup("tcp");
+        gsconf->transport = GLUSTER_TRANSPORT_TCP;
     } else if (!strcmp(uri->scheme, "gluster+unix")) {
-        gconf->transport = g_strdup("unix");
+        gsconf->transport = GLUSTER_TRANSPORT_UNIX;
         is_unix = true;
     } else if (!strcmp(uri->scheme, "gluster+rdma")) {
-        gconf->transport = g_strdup("rdma");
+        gsconf->transport = GLUSTER_TRANSPORT_RDMA;
     } else {
         ret = -EINVAL;
         goto out;
     }
+    gsconf->has_transport = true;
 
     ret = parse_volume_options(gconf, uri->path);
     if (ret < 0) {
@@ -190,13 +230,27 @@ static int qemu_gluster_parseuri(GlusterConf *gconf, const char *filename)
             ret = -EINVAL;
             goto out;
         }
-        gconf->host = g_strdup(qp->p[0].value);
+        gsconf->host = g_strdup(qp->p[0].value);
     } else {
-        gconf->host = g_strdup(uri->server ? uri->server : "localhost");
-        gconf->port = uri->port;
+        gsconf->host = g_strdup(uri->server ? uri->server : "localhost");
+        if (uri->port) {
+            gsconf->port = uri->port;
+        } else {
+            gsconf->port = GLUSTER_DEFAULT_PORT;
+        }
+        gsconf->has_port = true;
     }
 
+    gconf->servers = g_new0(GlusterServerList, 1);
+    gconf->servers->value = gsconf;
+    gconf->servers->next = NULL;
+
+    *pgconf = gconf;
+
 out:
+    if (ret < 0) {
+        qapi_free_BlockdevOptionsGluster(gconf);
+    }
     if (qp) {
         query_params_free(qp);
     }
@@ -204,30 +258,26 @@ out:
     return ret;
 }
 
-static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
-                                      Error **errp)
+static struct glfs *qemu_gluster_glfs_init(BlockdevOptionsGluster *gconf,
+                                           Error **errp)
 {
-    struct glfs *glfs = NULL;
+    struct glfs *glfs;
     int ret;
     int old_errno;
-
-    ret = qemu_gluster_parseuri(gconf, filename);
-    if (ret < 0) {
-        error_setg(errp, "Usage: file=gluster[+transport]://[host[:port]]/"
-                         "volume/path[?socket=...]");
-        errno = -ret;
-        goto out;
-    }
+    GlusterServerList *server;
 
     glfs = glfs_new(gconf->volume);
     if (!glfs) {
         goto out;
     }
 
-    ret = glfs_set_volfile_server(glfs, gconf->transport, gconf->host,
-            gconf->port);
-    if (ret < 0) {
-        goto out;
+    for (server = gconf->servers; server != NULL; server = server->next) {
+        ret = glfs_set_volfile_server(glfs,
+                                      GlusterTransport_lookup[server->value->transport],
+                                      server->value->host, server->value->port);
+        if (ret < 0) {
+            goto out;
+        }
     }
 
     /*
@@ -242,10 +292,9 @@ static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
     ret = glfs_init(glfs);
     if (ret) {
         error_setg_errno(errp, errno,
-                         "Gluster connection failed for host=%s port=%d "
-                         "volume=%s path=%s transport=%s", gconf->host,
-                         gconf->port, gconf->volume, gconf->path,
-                         gconf->transport);
+                         "Error: Gluster connection failed for given hosts "
+                         "volume:'%s' path:'%s' host1: %s", gconf->volume,
+                         gconf->path, gconf->servers->value->host);
 
         /* glfs_init sometimes doesn't set errno although docs suggest that */
         if (errno == 0)
@@ -264,6 +313,300 @@ out:
     return NULL;
 }
 
+static int parse_transport_option(const char *opt)
+{
+    int i;
+
+    if (!opt) {
+        /* Set tcp as default */
+        return GLUSTER_TRANSPORT_TCP;
+    }
+
+    for (i = 0; i < GLUSTER_TRANSPORT_MAX; i++) {
+        if (!strcmp(opt, GlusterTransport_lookup[i])) {
+            return i;
+        }
+    }
+
+    return -EINVAL;
+}
+
+/*
+*
+*  Basic command line syntax looks like:
+*
+* Pattern I:
+* -drive driver=gluster,
+*        volume=testvol,file.path=/path/a.raw,
+*        servers.0.host=1.2.3.4,
+*       [servers.0.port=24007,]
+*       [servers.0.transport=tcp,]
+*        servers.1.host=5.6.7.8,
+*       [servers.1.port=24008,]
+*       [servers.1.transport=rdma,] ...
+*
+* Pattern II:
+* 'json:{"driver":"qcow2","file":{"driver":"gluster",
+*       "volume":"testvol","path":"/path/a.qcow2",
+*       "servers":[{tuple0},{tuple1}, ...{tupleN}]}}'
+*
+*
+*   driver    => 'gluster' (protocol name)
+*   volume    => name of gluster volume where VM image resides
+*   path      => absolute path of image in gluster volume
+*
+*  {tuple}    => {"host":"1.2.3.4"[,"port":"24007","transport":"tcp"]}
+*
+*   host      => host address (hostname/ipv4/ipv6 addresses)
+*   port      => port number on which glusterd is listening (default 24007)
+*   transport => transport type used to connect to gluster management daemon,
+*                 it can be tcp|rdma (default 'tcp')
+*
+*
+* Examples:
+* Pattern I:
+* -drive driver=qcow2,file.driver=gluster,
+*        file.volume=testvol,file.path=/path/a.qcow2,
+*        file.servers.0.host=1.2.3.4,
+*        file.servers.0.port=24007,
+*        file.servers.0.transport=tcp,
+*        file.servers.1.host=5.6.7.8,
+*        file.servers.1.port=24008,
+*        file.servers.1.transport=rdma, ...
+*
+* -drive driver=qcow2,file.driver=gluster,
+*        file.volume=testvol,file.path=/path/a.qcow2,
+*        file.servers.0.host=1.2.3.4,
+*        file.servers.1.host=5.6.7.8, ...
+*
+* -drive driver=qcow2,file.driver=gluster,
+*        file.volume=testvol,file.path=/path/a.qcow2,
+*        file.servers.0.host=1.2.3.4,
+*        file.servers.0.port=24007,
+*        file.servers.1.host=5.6.7.8,
+*        file.servers.1.port=24008, ...
+*
+* -drive driver=qcow2,file.driver=gluster,
+*        file.volume=testvol,file.path=/path/a.qcow2,
+*        file.servers.0.host=1.2.3.4,
+*        file.servers.0.transport=tcp,
+*        file.servers.1.host=5.6.7.8,
+*        file.servers.1.transport=rdma, ...
+*
+* Pattern II:
+* 'json:{"driver":"qcow2","file":{"driver":"gluster","volume":"testvol",
+*         "path":"/path/a.qcow2","servers":
+*         [{"host":"1.2.3.4","port":"24007","transport":"tcp"},
+*          {"host":"4.5.6.7","port":"24008","transport":"rdma"}, ...]}}'
+*
+* 'json:{"driver":"qcow2","file":{"driver":"gluster","volume":"testvol",
+*         "path":"/path/a.qcow2","servers":
+*                                    [{"host":"1.2.3.4"},
+*                                     {"host":"4.5.6.7"}, ...]}}'
+*
+*
+* 'json:{"driver":"qcow2","file":{"driver":"gluster","volume":"testvol",
+*         "path":"/path/a.qcow2","servers":
+*                                  [{"host":"1.2.3.4","port":"24007"},
+*                                   {"host":"4.5.6.7","port":"24008"}, ...]}}'
+*
+*
+* 'json:{"driver":"qcow2","file":{"driver":"gluster","volume":"testvol",
+*        "path":"/path/a.qcow2","servers":
+*                              [{"host":"1.2.3.4","transport":"tcp"},
+*                               {"host":"4.5.6.7","transport":"rdma"}, ...]}}'
+*
+* For better readability, pattern II is repeated here in expanded form:
+* json:
+* {
+*    "driver":"qcow2",
+*    "file":{
+*       "driver":"gluster",
+*       "volume":"testvol",
+*       "path":"/path/a.qcow2",
+*       "servers":[
+*          {
+*             "host":"1.2.3.4",
+*             "port":"24007",
+*             "transport":"tcp"
+*          },
+*          {
+*             "host":"5.6.7.8",
+*             "port":"24008",
+*             "transport":"rdma"
+*          }
+*       ]
+*    }
+* }
+*
+*/
+static int qemu_gluster_parsejson(BlockdevOptionsGluster **pgconf,
+                                  QDict *options)
+{
+    QemuOpts *opts;
+    BlockdevOptionsGluster *gconf = NULL;
+    GlusterServer *gsconf;
+    GlusterServerList **prev;
+    GlusterServerList *curr = NULL;
+    QDict *backing_options = NULL;
+    Error *local_err = NULL;
+    char *str = NULL;
+    const char *ptr;
+    size_t num_servers;
+    size_t buff_size;
+    int i;
+
+
+    /* create opts info from runtime_json_opts list */
+    opts = qemu_opts_create(&runtime_json_opts, NULL, 0, &error_abort);
+    qemu_opts_absorb_qdict(opts, options, &local_err);
+    if (local_err) {
+        goto out;
+    }
+
+    gconf = g_new0(BlockdevOptionsGluster, 1);
+
+    num_servers = qdict_array_entries(options, GLUSTER_OPT_SERVERS_PATTERN);
+    if (num_servers < 1) {
+        error_setg(&local_err, "Error: qemu_gluster: please provide 'servers' "
+                               "option with valid fields in array of tuples");
+        goto out;
+    }
+
+    ptr = qemu_opt_get(opts, GLUSTER_OPT_VOLUME);
+    if (!ptr) {
+        error_setg(&local_err, "Error: qemu_gluster: please provide 'volume' "
+                               "option");
+        goto out;
+    }
+    gconf->volume = g_strdup(ptr);
+
+    ptr = qemu_opt_get(opts, GLUSTER_OPT_PATH);
+    if (!ptr) {
+        error_setg(&local_err, "Error: qemu_gluster: please provide 'path' "
+                               "option");
+        goto out;
+    }
+    gconf->path = g_strdup(ptr);
+
+    qemu_opts_del(opts);
+
+    /* create opts info from runtime_tuple_opts list */
+    buff_size = strlen(GLUSTER_OPT_SERVERS_PATTERN) + strlen(MAX_SERVERS) + 2;
+    str = g_malloc(buff_size);
+    for (i = 0; i < num_servers; i++) {
+        opts = qemu_opts_create(&runtime_tuple_opts, NULL, 0, &error_abort);
+        g_assert(snprintf(str, buff_size,
+                          GLUSTER_OPT_SERVERS_PATTERN"%d.", i) < buff_size);
+        qdict_extract_subqdict(options, &backing_options, str);
+        qemu_opts_absorb_qdict(opts, backing_options, &local_err);
+        if (local_err) {
+            goto out;
+        }
+        qdict_del(backing_options, str);
+
+        ptr = qemu_opt_get(opts, GLUSTER_OPT_HOST);
+        if (!ptr) {
+            error_setg(&local_err, "Error: qemu_gluster: servers.{tuple.%d} "
+                                   "requires 'host' option", i);
+            goto out;
+        }
+
+        gsconf = g_new0(GlusterServer, 1);
+
+        gsconf->host = g_strdup(ptr);
+
+        ptr = qemu_opt_get(opts, GLUSTER_OPT_TRANSPORT);
+        /* check whether transport type specified in json command is valid */
+        if (parse_transport_option(ptr) < 0) {
+            error_setg(&local_err, "Error: qemu_gluster: please set 'transport'"
+                                   " type in tuple.%d as tcp or rdma", i);
+            goto out;
+        }
+        /* reached only if a valid transport, i.e. tcp or rdma, was specified */
+        gsconf->transport = parse_transport_option(ptr);
+        gsconf->has_transport = true;
+
+        gsconf->port = qemu_opt_get_number(opts, GLUSTER_OPT_PORT,
+                                                    GLUSTER_DEFAULT_PORT);
+        gsconf->has_port = true;
+
+        if (gconf->servers == NULL) {
+            gconf->servers = g_new0(GlusterServerList, 1);
+            gconf->servers->value = gsconf;
+            curr = gconf->servers;
+        } else {
+            prev = &curr->next;
+            curr = g_new0(GlusterServerList, 1);
+            curr->value = gsconf;
+            *prev = curr;
+        }
+        curr->next = NULL;
+
+        qemu_opts_del(opts);
+    }
+
+    *pgconf = gconf;
+    g_free(str);
+    return 0;
+
+out:
+    error_report_err(local_err);
+    qapi_free_BlockdevOptionsGluster(gconf);
+    qemu_opts_del(opts);
+    if (str) {
+        qdict_del(backing_options, str);
+        g_free(str);
+    }
+    errno = EINVAL;
+    return -errno;
+}
+
+static struct glfs *qemu_gluster_init(BlockdevOptionsGluster **gconf,
+                                      const char *filename,
+                                      QDict *options, Error **errp)
+{
+    int ret;
+
+    if (filename) {
+        ret = qemu_gluster_parseuri(gconf, filename);
+        if (ret < 0) {
+            error_setg(errp, "Usage: file=gluster[+transport]://[host[:port]]/"
+                             "volume/path[?socket=...]");
+            errno = -ret;
+            return NULL;
+        }
+    } else {
+        ret = qemu_gluster_parsejson(gconf, options);
+        if (ret < 0) {
+            error_setg(errp, "Wrong Usage.");
+            error_append_hint(errp,
+                             "Usage1: "
+                             "-drive driver=qcow2,file.driver=gluster,"
+                             "file.volume=testvol,file.path=/path/a.qcow2,"
+                             "file.servers.0.host=1.2.3.4,"
+                             "[file.servers.0.port=24007,]"
+                             "[file.servers.0.transport=tcp,]"
+                             "file.servers.1.host=5.6.7.8,"
+                             "[file.servers.1.port=24008,]"
+                             "[file.servers.1.transport=rdma,] ...");
+            error_append_hint(errp,
+                             "\nUsage2: "
+                             "'json:{\"driver\":\"qcow2\",\"file\":"
+                             "{\"driver\":\"gluster\",\"volume\":\""
+                             "testvol\",\"path\":\"/path/a.qcow2\","
+                             "\"servers\":[{\"host\":\"1.2.3.4\","
+                             "\"port\":\"24007\",\"transport\":\"tcp\"},"
+                             "{\"host\":\"4.5.6.7\",\"port\":\"24007\","
+                             "\"transport\":\"rdma\"}, ...]}}'");
+            errno = -ret;
+            return NULL;
+        }
+    }
+
+    return qemu_gluster_glfs_init(*gconf, errp);
+}
+
 static void qemu_gluster_complete_aio(void *opaque)
 {
     GlusterAIOCB *acb = (GlusterAIOCB *)opaque;
@@ -315,7 +658,7 @@ static int qemu_gluster_open(BlockDriverState *bs, QDict *options,
     BDRVGlusterState *s = bs->opaque;
     int open_flags = 0;
     int ret = 0;
-    GlusterConf *gconf = g_new0(GlusterConf, 1);
+    BlockdevOptionsGluster *gconf = NULL;
     QemuOpts *opts;
     Error *local_err = NULL;
     const char *filename;
@@ -329,8 +672,7 @@ static int qemu_gluster_open(BlockDriverState *bs, QDict *options,
     }
 
     filename = qemu_opt_get(opts, "filename");
-
-    s->glfs = qemu_gluster_init(gconf, filename, errp);
+    s->glfs = qemu_gluster_init(&gconf, filename, options, errp);
     if (!s->glfs) {
         ret = -errno;
         goto out;
@@ -345,7 +687,7 @@ static int qemu_gluster_open(BlockDriverState *bs, QDict *options,
 
 out:
     qemu_opts_del(opts);
-    qemu_gluster_gconf_free(gconf);
+    qapi_free_BlockdevOptionsGluster(gconf);
     if (!ret) {
         return ret;
     }
@@ -363,7 +705,7 @@ static int qemu_gluster_reopen_prepare(BDRVReopenState *state,
 {
     int ret = 0;
     BDRVGlusterReopenState *reop_s;
-    GlusterConf *gconf = NULL;
+    BlockdevOptionsGluster *gconf = NULL;
     int open_flags = 0;
 
     assert(state != NULL);
@@ -374,9 +716,7 @@ static int qemu_gluster_reopen_prepare(BDRVReopenState *state,
 
     qemu_gluster_parse_flags(state->flags, &open_flags);
 
-    gconf = g_new0(GlusterConf, 1);
-
-    reop_s->glfs = qemu_gluster_init(gconf, state->bs->filename, errp);
+    reop_s->glfs = qemu_gluster_init(&gconf, state->bs->filename, NULL, errp);
     if (reop_s->glfs == NULL) {
         ret = -errno;
         goto exit;
@@ -391,7 +731,7 @@ static int qemu_gluster_reopen_prepare(BDRVReopenState *state,
 
 exit:
     /* state->opaque will be freed in either the _abort or _commit */
-    qemu_gluster_gconf_free(gconf);
+    qapi_free_BlockdevOptionsGluster(gconf);
     return ret;
 }
 
@@ -500,15 +840,15 @@ static inline int qemu_gluster_zerofill(struct glfs_fd *fd, int64_t offset,
 static int qemu_gluster_create(const char *filename,
                                QemuOpts *opts, Error **errp)
 {
+    BlockdevOptionsGluster *gconf = NULL;
     struct glfs *glfs;
     struct glfs_fd *fd;
     int ret = 0;
     int prealloc = 0;
     int64_t total_size = 0;
     char *tmp = NULL;
-    GlusterConf *gconf = g_new0(GlusterConf, 1);
 
-    glfs = qemu_gluster_init(gconf, filename, errp);
+    glfs = qemu_gluster_init(&gconf, filename, NULL, errp);
     if (!glfs) {
         ret = -errno;
         goto out;
@@ -523,7 +863,7 @@ static int qemu_gluster_create(const char *filename,
     } else if (!strcmp(tmp, "full") && gluster_supports_zerofill()) {
         prealloc = 1;
     } else {
-        error_setg(errp, "Invalid preallocation mode: '%s'"
+        error_setg(errp, "Error: Invalid preallocation mode: '%s'"
                          " or GlusterFS doesn't support zerofill API", tmp);
         ret = -EINVAL;
         goto out;
@@ -548,7 +888,7 @@ static int qemu_gluster_create(const char *filename,
     }
 out:
     g_free(tmp);
-    qemu_gluster_gconf_free(gconf);
+    qapi_free_BlockdevOptionsGluster(gconf);
     if (glfs) {
         glfs_fini(glfs);
     }
@@ -725,7 +1065,7 @@ static BlockDriver bdrv_gluster = {
     .format_name                  = "gluster",
     .protocol_name                = "gluster",
     .instance_size                = sizeof(BDRVGlusterState),
-    .bdrv_needs_filename          = true,
+    .bdrv_needs_filename          = false,
     .bdrv_file_open               = qemu_gluster_open,
     .bdrv_reopen_prepare          = qemu_gluster_reopen_prepare,
     .bdrv_reopen_commit           = qemu_gluster_reopen_commit,
@@ -752,7 +1092,7 @@ static BlockDriver bdrv_gluster_tcp = {
     .format_name                  = "gluster",
     .protocol_name                = "gluster+tcp",
     .instance_size                = sizeof(BDRVGlusterState),
-    .bdrv_needs_filename          = true,
+    .bdrv_needs_filename          = false,
     .bdrv_file_open               = qemu_gluster_open,
     .bdrv_reopen_prepare          = qemu_gluster_reopen_prepare,
     .bdrv_reopen_commit           = qemu_gluster_reopen_commit,
@@ -806,7 +1146,7 @@ static BlockDriver bdrv_gluster_rdma = {
     .format_name                  = "gluster",
     .protocol_name                = "gluster+rdma",
     .instance_size                = sizeof(BDRVGlusterState),
-    .bdrv_needs_filename          = true,
+    .bdrv_needs_filename          = false,
     .bdrv_file_open               = qemu_gluster_open,
     .bdrv_reopen_prepare          = qemu_gluster_reopen_prepare,
     .bdrv_reopen_commit           = qemu_gluster_reopen_commit,
diff --git a/qapi/block-core.json b/qapi/block-core.json
index 425fdab..a67c862 100644
--- a/qapi/block-core.json
+++ b/qapi/block-core.json
@@ -1375,13 +1375,14 @@
 # Drivers that are supported in block device operations.
 #
 # @host_device, @host_cdrom: Since 2.1
+# @gluster: Since 2.5
 #
 # Since: 2.0
 ##
 { 'enum': 'BlockdevDriver',
   'data': [ 'archipelago', 'blkdebug', 'blkverify', 'bochs', 'cloop',
-            'dmg', 'file', 'ftp', 'ftps', 'host_cdrom', 'host_device',
-            'http', 'https', 'null-aio', 'null-co', 'parallels',
+            'dmg', 'file', 'ftp', 'ftps', 'gluster', 'host_cdrom',
+            'host_device', 'http', 'https', 'null-aio', 'null-co', 'parallels',
             'qcow', 'qcow2', 'qed', 'quorum', 'raw', 'tftp', 'vdi', 'vhdx',
             'vmdk', 'vpc', 'vvfat' ] }
 
@@ -1797,6 +1798,59 @@
             '*read-pattern': 'QuorumReadPattern' } }
 
 ##
+# @GlusterTransport
+#
+# An enumeration of Gluster transport type
+#
+# @tcp:   TCP   - Transmission Control Protocol
+#
+# @unix:  UNIX  - Unix domain socket
+#
+# @rdma:  RDMA  - Remote direct memory access
+#
+# Since: 2.5
+##
+{ 'enum': 'GlusterTransport', 'data': [ 'tcp', 'unix', 'rdma'] }
+
+##
+# @GlusterServer
+#
+# Gluster tuple set
+#
+# @host:       host address (hostname/ipv4/ipv6 addresses)
+#
+# @port:       #optional port number on which glusterd is listening
+#               (default 24007)
+#
+# @transport:  #optional transport type used to connect to gluster management
+#               daemon (default 'tcp')
+#
+# Since: 2.5
+##
+{ 'struct': 'GlusterServer',
+  'data': { 'host': 'str',
+            '*port': 'int',
+            '*transport': 'GlusterTransport' } }
+
+##
+# @BlockdevOptionsGluster
+#
+# Driver specific block device options for Gluster
+#
+# @volume:   name of gluster volume where VM image resides
+#
+# @path:     absolute path to image file in gluster volume
+#
+# @servers:  one or more gluster server descriptions (host, port, and transport)
+#
+# Since: 2.5
+##
+{ 'struct': 'BlockdevOptionsGluster',
+  'data': { 'volume': 'str',
+            'path': 'str',
+            'servers': [ 'GlusterServer' ] } }
+
+##
 # @BlockdevOptions
 #
 # Options for creating a block device.
@@ -1816,7 +1870,7 @@
       'file':       'BlockdevOptionsFile',
       'ftp':        'BlockdevOptionsFile',
       'ftps':       'BlockdevOptionsFile',
-# TODO gluster: Wait for structured options
+      'gluster':    'BlockdevOptionsGluster',
       'host_cdrom': 'BlockdevOptionsFile',
       'host_device':'BlockdevOptionsFile',
       'http':       'BlockdevOptionsFile',
-- 
2.1.0

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [Qemu-devel] [PATCH 0/3] block/gluster: add support for multiple gluster servers
  2015-11-10  9:09 [Qemu-devel] [PATCH 0/3] block/gluster: add support for multiple gluster servers Prasanna Kumar Kalever
                   ` (2 preceding siblings ...)
  2015-11-10  9:09 ` [Qemu-devel] [PATCH v13 3/3] block/gluster: add support for multiple gluster servers Prasanna Kumar Kalever
@ 2015-11-10 15:26 ` Eric Blake
  3 siblings, 0 replies; 14+ messages in thread
From: Eric Blake @ 2015-11-10 15:26 UTC (permalink / raw)
  To: Prasanna Kumar Kalever, qemu-devel
  Cc: kwolf, pkrempa, stefanha, jcody, deepakcs, bharata, rtalur

[-- Attachment #1: Type: text/plain, Size: 914 bytes --]

On 11/10/2015 02:09 AM, Prasanna Kumar Kalever wrote:
> This release is rebased on qemu master branch.
> In this series of patches 1/3 and 2/3 are unchanged.

It's still nice to send the _entire_ series with v13 in the subject line
(cover letter included), rather than mixing and matching (no version in
cover letter, v2 in patches 1 and 2, and v13 in patch 3).  'git
send-email -v13' makes this easy.

> 
> Prasanna Kumar Kalever (3):
>   block/gluster: rename [server, volname, image] -> [host, volume, path]
>   block/gluster: code cleanup
>   block/gluster: add support for multiple gluster servers
> 
>  block/gluster.c      | 597 ++++++++++++++++++++++++++++++++++++++++-----------
>  qapi/block-core.json |  60 +++++-
>  2 files changed, 529 insertions(+), 128 deletions(-)
> 

-- 
Eric Blake   eblake redhat com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org



* Re: [Qemu-devel] [PATCH v2 1/3] block/gluster: rename [server, volname, image] -> [host, volume, path]
  2015-11-10  9:09 ` [Qemu-devel] [PATCH v2 1/3] block/gluster: rename [server, volname, image] -> [host, volume, path] Prasanna Kumar Kalever
@ 2015-11-10 15:28   ` Eric Blake
  0 siblings, 0 replies; 14+ messages in thread
From: Eric Blake @ 2015-11-10 15:28 UTC (permalink / raw)
  To: Prasanna Kumar Kalever, qemu-devel
  Cc: kwolf, pkrempa, stefanha, jcody, deepakcs, bharata, rtalur

[-- Attachment #1: Type: text/plain, Size: 1520 bytes --]

On 11/10/2015 02:09 AM, Prasanna Kumar Kalever wrote:
> this patch is very much be meaningful after next patch which adds multiple
> gluster servers support. After that,
> 
> an example is, in  'servers' tuple values we use 'server' variable for key
> 'host' in the code, it will be quite messy to have colliding names for
> variables, so to maintain better readability and makes it consistent with other
> existing code as well as the input keys/options, this patch renames the
> following variables
> 'server'  -> 'host'
> 'image'   -> 'path'
> 'volname' -> 'volume'
> 
> Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
> ---
>  block/gluster.c | 54 +++++++++++++++++++++++++++---------------------------
>  1 file changed, 27 insertions(+), 27 deletions(-)

When this has been previously positively reviewed, and you are making no
further changes to the code, it speeds up review to add the reviewer's
line to your commit message (so that the reviewer knows it is unchanged,
and so that other readers know it has been reviewed at least once).  Of
course, if you change a patch, dropping the R-b line is appropriate.

My review from
https://lists.gnu.org/archive/html/qemu-devel/2015-10/msg04916.html
still stands, so that would mean adding:

Reviewed-by: Eric Blake <eblake@redhat.com>

to the commit body if you have a reason to post v14 of this series.

-- 
Eric Blake   eblake redhat com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org



* Re: [Qemu-devel] [PATCH v13 3/3] block/gluster: add support for multiple gluster servers
  2015-11-10  9:09 ` [Qemu-devel] [PATCH v13 3/3] block/gluster: add support for multiple gluster servers Prasanna Kumar Kalever
@ 2015-11-10 16:07   ` Eric Blake
  2015-11-10 17:19     ` Jeff Cody
  2015-11-12 10:04     ` Prasanna Kumar Kalever
  2015-11-10 17:24   ` Jeff Cody
  1 sibling, 2 replies; 14+ messages in thread
From: Eric Blake @ 2015-11-10 16:07 UTC (permalink / raw)
  To: Prasanna Kumar Kalever, qemu-devel
  Cc: kwolf, pkrempa, stefanha, jcody, deepakcs, bharata, rtalur

[-- Attachment #1: Type: text/plain, Size: 20754 bytes --]

On 11/10/2015 02:09 AM, Prasanna Kumar Kalever wrote:
> This patch adds a way to specify multiple volfile servers to the gluster
> block backend of QEMU with tcp|rdma transport types and their port numbers.
> 

[...]

> 2.
>  'json:{"driver":"qcow2","file":{"driver":"gluster","volume":"testvol",
>          "path":"/path/a.qcow2","servers":
>          [{"host":"1.2.3.4","port":"24007","transport":"tcp"},
>           {"host":"4.5.6.7","port":"24008","transport":"rdma"}] } }'
> 
> This patch gives a mechanism to provide all the server addresses, which are in
> replica set, so in case host1 is down VM can still boot from any of the
> active hosts.
> 
> This is equivalent to the backup-volfile-servers option supported by
> mount.glusterfs (FUSE way of mounting gluster volume)
> 
> Credits: Sincere thanks to Kevin Wolf <kwolf@redhat.com> and
> "Deepak C Shetty" <deepakcs@redhat.com> for inputs and all their support
> 
> Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
> ---

> v10:
> fix mem-leak as per Peter Krempa <pkrempa@redhat.com> review comments
> 
> v11:
> using qapi-types* defined structures as per "Eric Blake" <eblake@redhat.com>
> review comments.
> 
> v12:
> fix crash caused in qapi_free_BlockdevOptionsGluster
> 
> v13:
> address comments from "Jeff Cody" <jcody@redhat.com>

I had some other comments against v10 that I don't see addressed yet:
https://lists.gnu.org/archive/html/qemu-devel/2015-10/msg06377.html

> ---
>  block/gluster.c      | 468 ++++++++++++++++++++++++++++++++++++++++++++-------
>  qapi/block-core.json |  60 ++++++-
>  2 files changed, 461 insertions(+), 67 deletions(-)
> 
> diff --git a/block/gluster.c b/block/gluster.c
> index ededda2..8939072 100644
> --- a/block/gluster.c
> +++ b/block/gluster.c
> @@ -11,6 +11,19 @@
>  #include "block/block_int.h"
>  #include "qemu/uri.h"
>  
> +#define GLUSTER_OPT_FILENAME        "filename"
> +#define GLUSTER_OPT_VOLUME          "volume"
> +#define GLUSTER_OPT_PATH            "path"
> +#define GLUSTER_OPT_HOST            "host"
> +#define GLUSTER_OPT_PORT            "port"
> +#define GLUSTER_OPT_TRANSPORT       "transport"
> +#define GLUSTER_OPT_SERVERS_PATTERN "servers."
> +
> +#define GLUSTER_DEFAULT_PORT        24007
> +
> +#define MAX_SERVERS                 "10000"

Why is this a string rather than an integer?

> +
> +
>  typedef struct GlusterAIOCB {
>      int64_t size;
>      int ret;
> @@ -29,15 +42,6 @@ typedef struct BDRVGlusterReopenState {
>      struct glfs_fd *fd;
>  } BDRVGlusterReopenState;
>  
> -typedef struct GlusterConf {
> -    char *host;
> -    int port;
> -    char *volume;
> -    char *path;
> -    char *transport;
> -} GlusterConf;
> -

This patch feels pretty big. It may be smarter to break it into two
pieces - one that adds GlusterConf to qapi/block-core.json and replaces
existing uses of this definition to the qapi type but with no changes in
semantics; and the other that then extends things to add support for
multiple servers (so that we aren't trying to do too much in one patch).

> @@ -143,8 +176,11 @@ static int parse_volume_options(GlusterConf *gconf, char *path)
>   * file=gluster+unix:///testvol/dir/a.img?socket=/tmp/glusterd.socket
>   * file=gluster+rdma://1.2.3.4:24007/testvol/a.img
>   */
> -static int qemu_gluster_parseuri(GlusterConf *gconf, const char *filename)
> +static int qemu_gluster_parseuri(BlockdevOptionsGluster **pgconf,
> +                                 const char *filename)
>  {
> +    BlockdevOptionsGluster *gconf;
> +    GlusterServer *gsconf;
>      URI *uri;
>      QueryParams *qp = NULL;
>      bool is_unix = false;
> @@ -155,20 +191,24 @@ static int qemu_gluster_parseuri(GlusterConf *gconf, const char *filename)
>          return -EINVAL;
>      }
>  
> +    gconf = g_new0(BlockdevOptionsGluster, 1);
> +    gsconf = g_new0(GlusterServer, 1);

gconf and gsconf are both allocated here...

> +
>      /* transport */
>      if (!uri->scheme || !strcmp(uri->scheme, "gluster")) {
> -        gconf->transport = g_strdup("tcp");
> +        gsconf->transport = GLUSTER_TRANSPORT_TCP;
>      } else if (!strcmp(uri->scheme, "gluster+tcp")) {
> -        gconf->transport = g_strdup("tcp");
> +        gsconf->transport = GLUSTER_TRANSPORT_TCP;
>      } else if (!strcmp(uri->scheme, "gluster+unix")) {
> -        gconf->transport = g_strdup("unix");
> +        gsconf->transport = GLUSTER_TRANSPORT_UNIX;
>          is_unix = true;
>      } else if (!strcmp(uri->scheme, "gluster+rdma")) {
> -        gconf->transport = g_strdup("rdma");
> +        gsconf->transport = GLUSTER_TRANSPORT_RDMA;
>      } else {
>          ret = -EINVAL;
>          goto out;

...but you can error here...

>      }
> +    gsconf->has_transport = true;
>  
>      ret = parse_volume_options(gconf, uri->path);
>      if (ret < 0) {
> @@ -190,13 +230,27 @@ static int qemu_gluster_parseuri(GlusterConf *gconf, const char *filename)
>              ret = -EINVAL;
>              goto out;
>          }
> -        gconf->host = g_strdup(qp->p[0].value);
> +        gsconf->host = g_strdup(qp->p[0].value);
>      } else {
> -        gconf->host = g_strdup(uri->server ? uri->server : "localhost");
> -        gconf->port = uri->port;
> +        gsconf->host = g_strdup(uri->server ? uri->server : "localhost");
> +        if (uri->port) {
> +            gsconf->port = uri->port;
> +        } else {
> +            gsconf->port = GLUSTER_DEFAULT_PORT;
> +        }
> +        gsconf->has_port = true;
>      }
>  
> +    gconf->servers = g_new0(GlusterServerList, 1);
> +    gconf->servers->value = gsconf;
> +    gconf->servers->next = NULL;

Dead assignment (gconf->servers->next is already NULL because of g_new0).

> +
> +    *pgconf = gconf;
> +
>  out:
> +    if (ret < 0) {
> +        qapi_free_BlockdevOptionsGluster(gconf);
> +    }
>      if (qp) {
>          query_params_free(qp);
>      }
> @@ -204,30 +258,26 @@ out:
>      return ret;
>  }

...which means you leak gsconf if you hit an error.

>  
> -static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
> -                                      Error **errp)
> +static struct glfs *qemu_gluster_glfs_init(BlockdevOptionsGluster *gconf,
> +                                           Error **errp)
>  {
> -    struct glfs *glfs = NULL;
> +    struct glfs *glfs;
>      int ret;
>      int old_errno;
> -
> -    ret = qemu_gluster_parseuri(gconf, filename);
> -    if (ret < 0) {
> -        error_setg(errp, "Usage: file=gluster[+transport]://[host[:port]]/"
> -                         "volume/path[?socket=...]");
> -        errno = -ret;
> -        goto out;
> -    }
> +    GlusterServerList *server;
>  
>      glfs = glfs_new(gconf->volume);
>      if (!glfs) {
>          goto out;
>      }
>  
> -    ret = glfs_set_volfile_server(glfs, gconf->transport, gconf->host,
> -            gconf->port);
> -    if (ret < 0) {
> -        goto out;
> +    for (server = gconf->servers; server != NULL; server = server->next) {

It's okay to use 'server;' rather than 'server != NULL;' as the loop
condition.  Matter of personal style.

> +        ret = glfs_set_volfile_server(glfs,
> +                                      GlusterTransport_lookup[server->value->transport],
> +                                      server->value->host, server->value->port);

port and transport are optional; which means you should probably be
checking has_port and has_transport before blindly using them (unless
you made sure that ALL initialization paths set things to sane defaults
when the user omitted the arguments).

> +        if (ret < 0) {
> +            goto out;
> +        }
>      }
>  
>      /*
> @@ -242,10 +292,9 @@ static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
>      ret = glfs_init(glfs);
>      if (ret) {
>          error_setg_errno(errp, errno,
> -                         "Gluster connection failed for host=%s port=%d "
> -                         "volume=%s path=%s transport=%s", gconf->host,
> -                         gconf->port, gconf->volume, gconf->path,
> -                         gconf->transport);
> +                         "Error: Gluster connection failed for given hosts "

Don't start messages with "Error: ", error_setg_errno() already does
that for you.

> +                         "volume:'%s' path:'%s' host1: %s", gconf->volume,

Inconsistent on whether there is a space after ':'.  "given hosts
volume" sounds odd.

> +                         gconf->path, gconf->servers->value->host);
>  
>          /* glfs_init sometimes doesn't set errno although docs suggest that */
>          if (errno == 0)
> @@ -264,6 +313,300 @@ out:
>      return NULL;
>  }
>  
> +static int parse_transport_option(const char *opt)
> +{
> +    int i;
> +
> +    if (!opt) {
> +        /* Set tcp as default */
> +        return GLUSTER_TRANSPORT_TCP;
> +    }
> +
> +    for (i = 0; i < GLUSTER_TRANSPORT_MAX; i++) {
> +        if (!strcmp(opt, GlusterTransport_lookup[i])) {
> +            return i;
> +        }
> +    }
> +
> +    return -EINVAL;
> +}
> +
> +/*
> +*
> +*  Basic command line syntax looks like:
> +*

You have 110 lines from /* to */.  In v10, I already mentioned that this
comment is probably 100 lines too long.  You do NOT need to repeat the
syntax, examples, or even more-readable example here; having them in the
commit body was enough.  Someone that knows how to read qapi will be
able to deduce what this function is doing if you were to simplify to
just this:

/*
 * Convert the command line into qapi.
 */

> +*
> +*/
> +static int qemu_gluster_parsejson(BlockdevOptionsGluster **pgconf,
> +                                  QDict *options)
> +{
> +    QemuOpts *opts;
> +    BlockdevOptionsGluster *gconf = NULL;
> +    GlusterServer *gsconf;
> +    GlusterServerList **prev;
> +    GlusterServerList *curr = NULL;
> +    QDict *backing_options = NULL;
> +    Error *local_err = NULL;
> +    char *str = NULL;
> +    const char *ptr;
> +    size_t num_servers;
> +    size_t buff_size;
> +    int i;
> +
> +
> +    /* create opts info from runtime_json_opts list */

Why two blank lines?

> +    opts = qemu_opts_create(&runtime_json_opts, NULL, 0, &error_abort);
> +    qemu_opts_absorb_qdict(opts, options, &local_err);
> +    if (local_err) {
> +        goto out;
> +    }
> +
> +    gconf = g_new0(BlockdevOptionsGluster, 1);
> +
> +    num_servers = qdict_array_entries(options, GLUSTER_OPT_SERVERS_PATTERN);
> +    if (num_servers < 1) {
> +        error_setg(&local_err, "Error: qemu_gluster: please provide 'servers' "

Again, error messages created with error_setg() need not start with
"Error: ".

A good thing to try when you are adding an error message for a command
line parsing scenario is to try and come up with the command line that
would trigger the error to see if the result looks sane.

> +                               "option with valid fields in array of tuples");
> +        goto out;
> +    }
> +
> +    ptr = qemu_opt_get(opts, GLUSTER_OPT_VOLUME);
> +    if (!ptr) {
> +        error_setg(&local_err, "Error: qemu_gluster: please provide 'volume' "
> +                               "option");
> +        goto out;
> +    }
> +    gconf->volume = g_strdup(ptr);
> +
> +    ptr = qemu_opt_get(opts, GLUSTER_OPT_PATH);
> +    if (!ptr) {
> +        error_setg(&local_err, "Error: qemu_gluster: please provide 'path' "

More "Error: " prefixes.  I'll quit pointing them out.

> +                               "option");
> +        goto out;
> +    }
> +    gconf->path = g_strdup(ptr);
> +
> +    qemu_opts_del(opts);
> +
> +    /* create opts info from runtime_tuple_opts list */
> +    buff_size = strlen(GLUSTER_OPT_SERVERS_PATTERN) + strlen(MAX_SERVERS) + 2;
> +    str = g_malloc(buff_size);
> +    for (i = 0; i < num_servers; i++) {
> +        opts = qemu_opts_create(&runtime_tuple_opts, NULL, 0, &error_abort);
> +        g_assert(snprintf(str, buff_size,
> +                          GLUSTER_OPT_SERVERS_PATTERN"%d.", i) < buff_size);

Gross - you have side effects inside g_assert().  (Absolute bug if you
do that inside plain assert(); possibly excusable in g_assert() but
still not good practice).

If I were writing this, then instead of futzing around with snprintf,
I'd just use g_strdup_printf() to malloc the appropriately sized string
without worrying about sizing myself, and be sure I didn't leak things.

> +        qdict_extract_subqdict(options, &backing_options, str);
> +        qemu_opts_absorb_qdict(opts, backing_options, &local_err);
> +        if (local_err) {
> +            goto out;
> +        }
> +        qdict_del(backing_options, str);
> +
> +        ptr = qemu_opt_get(opts, GLUSTER_OPT_HOST);
> +        if (!ptr) {
> +            error_setg(&local_err, "Error: qemu_gluster: servers.{tuple.%d} "
> +                                   "requires 'host' option", i);
> +            goto out;
> +        }
> +
> +        gsconf = g_new0(GlusterServer, 1);

gsconf is allocated here...

> +
> +        gsconf->host = g_strdup(ptr);
> +
> +        ptr = qemu_opt_get(opts, GLUSTER_OPT_TRANSPORT);
> +        /* check whether transport type specified in json command is valid */
> +        if (parse_transport_option(ptr) < 0) {
> +            error_setg(&local_err, "Error: qemu_gluster: please set 'transport'"
> +                                   " type in tuple.%d as tcp or rdma", i);
> +            goto out;

...but if you error here...

> +        }
> +        /* only if valid transport i.e. either of tcp|rdma is specified */
> +        gsconf->transport = parse_transport_option(ptr);

Why are you calling parse_transport_option() twice?

> +        gsconf->has_transport = true;
> +
> +        gsconf->port = qemu_opt_get_number(opts, GLUSTER_OPT_PORT,
> +                                                    GLUSTER_DEFAULT_PORT);

Indentation is off.

> +        gsconf->has_port = true;
> +
> +        if (gconf->servers == NULL) {
> +            gconf->servers = g_new0(GlusterServerList, 1);
> +            gconf->servers->value = gsconf;
> +            curr = gconf->servers;
> +        } else {
> +            prev = &curr->next;
> +            curr = g_new0(GlusterServerList, 1);
> +            curr->value = gsconf;
> +            *prev = curr;
> +        }
> +        curr->next = NULL;
> +
> +        qemu_opts_del(opts);
> +    }
> +
> +    *pgconf = gconf;
> +    g_free(str);
> +    return 0;
> +
> +out:
> +    error_report_err(local_err);
> +    qapi_free_BlockdevOptionsGluster(gconf);
> +    qemu_opts_del(opts);
> +    if (str) {
> +        qdict_del(backing_options, str);
> +        g_free(str);
> +    }
> +    errno = EINVAL;
> +    return -errno;
> +}

...then gsconf is leaked.

> +
> +static struct glfs *qemu_gluster_init(BlockdevOptionsGluster **gconf,
> +                                      const char *filename,
> +                                      QDict *options, Error **errp)
> +{
> +    int ret;
> +
> +    if (filename) {
> +        ret = qemu_gluster_parseuri(gconf, filename);
> +        if (ret < 0) {
> +            error_setg(errp, "Usage: file=gluster[+transport]://[host[:port]]/"
> +                             "volume/path[?socket=...]");
> +            errno = -ret;
> +            return NULL;
> +        }
> +    } else {
> +        ret = qemu_gluster_parsejson(gconf, options);
> +        if (ret < 0) {
> +            error_setg(errp, "Wrong Usage.");

Don't end error_setg() messages with a period.  And Talking In All Camel
Case Is Odd.

> +            error_append_hint(errp,
> +                             "Usage1: "
> +                             "-drive driver=qcow2,file.driver=gluster,"
> +                             "file.volume=testvol,file.path=/path/a.qcow2,"
> +                             "file.servers.0.host=1.2.3.4,"
> +                             "[file.servers.0.port=24007,]"
> +                             "[file.servers.0.transport=tcp,]"
> +                             "file.servers.1.host=5.6.7.8,"
> +                             "[file.servers.1.port=24008,]"
> +                             "[file.servers.1.transport=rdma,] ...");
> +            error_append_hint(errp,
> +                             "\nUsage2: "
> +                             "'json:{\"driver\":\"qcow2\",\"file\":"
> +                             "{\"driver\":\"gluster\",\"volume\":\""
> +                             "testvol\",\"path\":\"/path/a.qcow2\","
> +                             "\"servers\":[{\"host\":\"1.2.3.4\","
> +                             "\"port\":\"24007\",\"transport\":\"tcp\"},"
> +                             "{\"host\":\"4.5.6.7\",\"port\":\"24007\","
> +                             "\"transport\":\"rdma\"}, ...]}}'");

Rather long. I think a single hint is long enough; you don't need to
display the json:{} usage.


> @@ -523,7 +863,7 @@ static int qemu_gluster_create(const char *filename,
>      } else if (!strcmp(tmp, "full") && gluster_supports_zerofill()) {
>          prealloc = 1;
>      } else {
> -        error_setg(errp, "Invalid preallocation mode: '%s'"
> +        error_setg(errp, "Error: Invalid preallocation mode: '%s'"
>                           " or GlusterFS doesn't support zerofill API", tmp);

Spurious hunk.


> +++ b/qapi/block-core.json
> @@ -1375,13 +1375,14 @@
>  # Drivers that are supported in block device operations.
>  #
>  # @host_device, @host_cdrom: Since 2.1
> +# @gluster: Since 2.5

Sadly, I think we've found enough issues that this will not make 2.5
hard freeze deadline, so at this point, you should change things to
state 2.6.

> +
> +##
> +# @GlusterServer
> +#
> +# Gluster tuple set

Not very descriptive.  Better might be 'Details for connecting to a
gluster server'.

> +#
> +# @host:       host address (hostname/ipv4/ipv6 addresses)
> +#
> +# @port:       #optional port number on which glusterd is listening
> +#               (default 24007)
> +#
> +# @transport:  #optional transport type used to connect to gluster management
> +#               daemon (default 'tcp')
> +#
> +# Since: 2.5
> +##
> +{ 'struct': 'GlusterServer',
> +  'data': { 'host': 'str',
> +            '*port': 'int',
> +            '*transport': 'GlusterTransport' } }

I really do think it might be easier to split this into two patches; one
that introduces GlusterServer in the .json and converts existing uses to
it, and the other that introduces BlockdevOptionsGluster along with its
ability to use a GlusterServerList.

> +
> +##
> +# @BlockdevOptionsGluster
> +#
> +# Driver specific block device options for Gluster
> +#
> +# @volume:   name of gluster volume where VM image resides
> +#
> +# @path:     absolute path to image file in gluster volume
> +#
> +# @servers:  one or more gluster server descriptions (host, port, and transport)

80 columns; might be nice to keep things at 79 or less.  In fact, since
GlusterServer is already documented, you could get away with:

# @servers: one or more gluster server descriptions

> +#
> +# Since: 2.5
> +##
> +{ 'struct': 'BlockdevOptionsGluster',
> +  'data': { 'volume': 'str',
> +            'path': 'str',
> +            'servers': [ 'GlusterServer' ] } }
> +
> +##
>  # @BlockdevOptions
>  #
>  # Options for creating a block device.
> @@ -1816,7 +1870,7 @@
>        'file':       'BlockdevOptionsFile',
>        'ftp':        'BlockdevOptionsFile',
>        'ftps':       'BlockdevOptionsFile',
> -# TODO gluster: Wait for structured options
> +      'gluster':    'BlockdevOptionsGluster',
>        'host_cdrom': 'BlockdevOptionsFile',
>        'host_device':'BlockdevOptionsFile',
>        'http':       'BlockdevOptionsFile',
> 

Overall, I think we are probably on the right track for the QMP
interface; but since blockdev-add is NOT stable yet for 2.5, it won't
hurt to wait to get this in until 2.6, to make sure we have plenty of
time; and it would also be nice to make sure we get nbd, nfs, rbd,
sheepdog all supported in the same release; possibly by sharing common
types instead of introducing GlusterServer as a one-off type.

-- 
Eric Blake   eblake redhat com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org



^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [Qemu-devel] [PATCH v13 3/3] block/gluster: add support for multiple gluster servers
  2015-11-10 16:07   ` Eric Blake
@ 2015-11-10 17:19     ` Jeff Cody
  2015-11-12 10:04     ` Prasanna Kumar Kalever
  1 sibling, 0 replies; 14+ messages in thread
From: Jeff Cody @ 2015-11-10 17:19 UTC (permalink / raw)
  To: Eric Blake
  Cc: kwolf, pkrempa, Prasanna Kumar Kalever, stefanha, qemu-devel,
	deepakcs, bharata, rtalur

On Tue, Nov 10, 2015 at 09:07:20AM -0700, Eric Blake wrote:
> On 11/10/2015 02:09 AM, Prasanna Kumar Kalever wrote:
> > This patch adds a way to specify multiple volfile servers to the gluster
> > block backend of QEMU with tcp|rdma transport types and their port numbers.
> > 

[...]

> > +/*
> > +*
> > +*  Basic command line syntax looks like:
> > +*
> 
> You have 110 lines from /* to */.  In v10, I already mentioned that this
> comment is probably 100 lines too long.  You do NOT need to repeat the
> syntax, examples, or even more-readable example here; having them in the
> commit body was enough.  Someone that knows how to read qapi will be
> able to deduce what this function is doing if you were to simplify to
> just this:
> 
> /*
>  * Convert the command line into qapi.
>  */
> 

Maybe a good compromise could be moving this into docs/gluster.txt
(which doesn't exist, but could get its start this way!).  That way,
the info is there to read / find with grep without needing to look at
commit logs.


[...]


-Jeff


* Re: [Qemu-devel] [PATCH v13 3/3] block/gluster: add support for multiple gluster servers
  2015-11-10  9:09 ` [Qemu-devel] [PATCH v13 3/3] block/gluster: add support for multiple gluster servers Prasanna Kumar Kalever
  2015-11-10 16:07   ` Eric Blake
@ 2015-11-10 17:24   ` Jeff Cody
  2015-11-12  9:46     ` Prasanna Kumar Kalever
  1 sibling, 1 reply; 14+ messages in thread
From: Jeff Cody @ 2015-11-10 17:24 UTC (permalink / raw)
  To: Prasanna Kumar Kalever
  Cc: kwolf, pkrempa, stefanha, qemu-devel, deepakcs, bharata, rtalur

On Tue, Nov 10, 2015 at 02:39:16PM +0530, Prasanna Kumar Kalever wrote:
> This patch adds a way to specify multiple volfile servers to the gluster
> block backend of QEMU with tcp|rdma transport types and their port numbers.
> 
> Problem:
> 
> Currently VM Image on gluster volume is specified like this:
> 
> file=gluster[+tcp]://host[:port]/testvol/a.img
> 
> Assuming we have three hosts in trusted pool with replica 3 volume
> in action and unfortunately host (mentioned in the command above) went down
> for some reason, since the volume is replica 3 we now have other 2 hosts
> active from which we can boot the VM.
> 
> But currently there is no mechanism to pass the other 2 gluster host
> addresses to qemu.
> 
> Solution:
> 
> New way of specifying VM Image on gluster volume with volfile servers:
> (We still support old syntax to maintain backward compatibility)
> 
> Basic command line syntax looks like:
> 
> Pattern I:
>  -drive driver=gluster,
>         volume=testvol,path=/path/a.raw,
>         servers.0.host=1.2.3.4,
>        [servers.0.port=24007,]
>        [servers.0.transport=tcp,]
>         servers.1.host=5.6.7.8,
>        [servers.1.port=24008,]
>        [servers.1.transport=rdma,] ...
> 
> Pattern II:
>  'json:{"driver":"qcow2","file":{"driver":"gluster",
>        "volume":"testvol","path":"/path/a.qcow2",
>        "servers":[{tuple0},{tuple1}, ...{tupleN}]}}'
> 
>    driver      => 'gluster' (protocol name)
>    volume      => name of gluster volume where our VM image resides
>    path        => absolute path of image in gluster volume
> 
>   {tuple}      => {"host":"1.2.3.4"[,"port":"24007","transport":"tcp"]}
> 
>    host        => host address (hostname/ipv4/ipv6 addresses)
>    port        => port number on which glusterd is listening. (default 24007)
>    transport   => transport type used to connect to gluster management daemon,
>                    it can be tcp|rdma (default 'tcp')
> 
> Examples:
> 1.
>  -drive driver=qcow2,file.driver=gluster,
>         file.volume=testvol,file.path=/path/a.qcow2,
>         file.servers.0.host=1.2.3.4,
>         file.servers.0.port=24007,
>         file.servers.0.transport=tcp,
>         file.servers.1.host=5.6.7.8,
>         file.servers.1.port=24008,
>         file.servers.1.transport=rdma
> 2.
>  'json:{"driver":"qcow2","file":{"driver":"gluster","volume":"testvol",
>          "path":"/path/a.qcow2","servers":
>          [{"host":"1.2.3.4","port":"24007","transport":"tcp"},
>           {"host":"4.5.6.7","port":"24008","transport":"rdma"}] } }'
> 
> This patch gives a mechanism to provide all the server addresses, which are in
> replica set, so in case host1 is down VM can still boot from any of the
> active hosts.
> 
> This is equivalent to the backup-volfile-servers option supported by
> mount.glusterfs (FUSE way of mounting gluster volume)
> 
> Credits: Sincere thanks to Kevin Wolf <kwolf@redhat.com> and
> "Deepak C Shetty" <deepakcs@redhat.com> for inputs and all their support
> 
> Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>


Previous versions of this commit mentioned that the new functionality
is dependent on a recent fix in libgfapi.  This commit message is
missing that line; does its absence mean that the new functionality is
not dependent on any particular libgfapi version?

What happens if the new functionality is tried on the last stable
libgfapi release?

Thanks!
Jeff


* Re: [Qemu-devel] [PATCH v13 3/3] block/gluster: add support for multiple gluster servers
  2015-11-10 17:24   ` Jeff Cody
@ 2015-11-12  9:46     ` Prasanna Kumar Kalever
  0 siblings, 0 replies; 14+ messages in thread
From: Prasanna Kumar Kalever @ 2015-11-12  9:46 UTC (permalink / raw)
  To: Jeff Cody; +Cc: kwolf, pkrempa, stefanha, qemu-devel, deepakcs, bharata, rtalur

On Tuesday, November 10, 2015 10:54:25 PM, Jeff Cody wrote:
> 
> On Tue, Nov 10, 2015 at 02:39:16PM +0530, Prasanna Kumar Kalever wrote:
> > This patch adds a way to specify multiple volfile servers to the gluster
> > block backend of QEMU with tcp|rdma transport types and their port numbers.
> > 
> > Problem:
> > 
> > Currently VM Image on gluster volume is specified like this:
> > 
> > file=gluster[+tcp]://host[:port]/testvol/a.img
> > 
> > Assuming we have three hosts in trusted pool with replica 3 volume
> > in action and unfortunately host (mentioned in the command above) went down
> > for some reason, since the volume is replica 3 we now have other 2 hosts
> > active from which we can boot the VM.
> > 
> > But currently there is no mechanism to pass the other 2 gluster host
> > addresses to qemu.
> > 
> > Solution:
> > 
> > New way of specifying VM Image on gluster volume with volfile servers:
> > (We still support old syntax to maintain backward compatibility)
> > 
> > Basic command line syntax looks like:
> > 
> > Pattern I:
> >  -drive driver=gluster,
> >         volume=testvol,path=/path/a.raw,
> >         servers.0.host=1.2.3.4,
> >        [servers.0.port=24007,]
> >        [servers.0.transport=tcp,]
> >         servers.1.host=5.6.7.8,
> >        [servers.1.port=24008,]
> >        [servers.1.transport=rdma,] ...
> > 
> > Pattern II:
> >  'json:{"driver":"qcow2","file":{"driver":"gluster",
> >        "volume":"testvol","path":"/path/a.qcow2",
> >        "servers":[{tuple0},{tuple1}, ...{tupleN}]}}'
> > 
> >    driver      => 'gluster' (protocol name)
> >    volume      => name of gluster volume where our VM image resides
> >    path        => absolute path of image in gluster volume
> > 
> >   {tuple}      => {"host":"1.2.3.4"[,"port":"24007","transport":"tcp"]}
> > 
> >    host        => host address (hostname/ipv4/ipv6 addresses)
> >    port        => port number on which glusterd is listening. (default
> >    24007)
> >    transport   => transport type used to connect to gluster management
> >    daemon,
> >                    it can be tcp|rdma (default 'tcp')
> > 
> > Examples:
> > 1.
> >  -drive driver=qcow2,file.driver=gluster,
> >         file.volume=testvol,file.path=/path/a.qcow2,
> >         file.servers.0.host=1.2.3.4,
> >         file.servers.0.port=24007,
> >         file.servers.0.transport=tcp,
> >         file.servers.1.host=5.6.7.8,
> >         file.servers.1.port=24008,
> >         file.servers.1.transport=rdma
> > 2.
> >  'json:{"driver":"qcow2","file":{"driver":"gluster","volume":"testvol",
> >          "path":"/path/a.qcow2","servers":
> >          [{"host":"1.2.3.4","port":"24007","transport":"tcp"},
> >           {"host":"4.5.6.7","port":"24008","transport":"rdma"}] } }'
> > 
> > This patch gives a mechanism to provide all the server addresses, which are
> > in
> > replica set, so in case host1 is down VM can still boot from any of the
> > active hosts.
> > 
> > This is equivalent to the backup-volfile-servers option supported by
> > mount.glusterfs (FUSE way of mounting gluster volume)
> > 
> > Credits: Sincere thanks to Kevin Wolf <kwolf@redhat.com> and
> > "Deepak C Shetty" <deepakcs@redhat.com> for inputs and all their support
> > 
> > Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
> 
> 
> Previous versions of this commit mentioned that the new functionality
> is dependent on a recent fix in libgfapi.  This commit message is
> missing that line; does its absence mean that the new functionality is
> not dependent on any particular libgfapi version?
> 
> What happens if the new functionality is tried on the last stable
> libgfapi release?

Sorry for not removing this earlier; the libgfapi fix is actually about default
values, i.e. when glfs_set_volfile_server() is invoked multiple times, only on
the first invocation does the gfapi code replace port 0 with 24007 and a NULL
transport with "tcp".

Anyhow, to remove this dependency, I have put up code that takes care of the
defaults. Hence, replacing the parameters at the entry function is the right way.

Thanks,
-prasanna
> 
> Thanks!
> Jeff
> 
> 


* Re: [Qemu-devel] [PATCH v13 3/3] block/gluster: add support for multiple gluster servers
  2015-11-10 16:07   ` Eric Blake
  2015-11-10 17:19     ` Jeff Cody
@ 2015-11-12 10:04     ` Prasanna Kumar Kalever
  2015-11-12 16:23       ` Jeff Cody
  1 sibling, 1 reply; 14+ messages in thread
From: Prasanna Kumar Kalever @ 2015-11-12 10:04 UTC (permalink / raw)
  To: Eric Blake
  Cc: kwolf, pkrempa, stefanha, jcody, qemu-devel, deepakcs, bharata, rtalur

On Tuesday, November 10, 2015 9:37:20 PM, Eric Blake wrote:
> 
> On 11/10/2015 02:09 AM, Prasanna Kumar Kalever wrote:
> > This patch adds a way to specify multiple volfile servers to the gluster
> > block backend of QEMU with tcp|rdma transport types and their port numbers.
> > 
> 
> [...]

[...]

> 
> Overall, I think we are probably on the right track for the QMP
> interface; but since blockdev-add is NOT stable yet for 2.5, it won't
> hurt to wait to get this in until 2.6, to make sure we have plenty of
> time; and it would also be nice to make sure we get nbd, nfs, rbd,
> sheepdog all supported in the same release; possibly by sharing common
> types instead of introducing GlusterServer as a one-off type.

We are hoping this will go into 2.5, which is really important for the gluster
hyper-convergence release (next Feb).

Is there any possibility of getting an exception for this patch?

Thanks,
-Prasanna

> 
> --
> Eric Blake   eblake redhat com    +1-919-301-3266
> Libvirt virtualization library http://libvirt.org
> 
> 


* Re: [Qemu-devel] [PATCH v13 3/3] block/gluster: add support for multiple gluster servers
  2015-11-12 10:04     ` Prasanna Kumar Kalever
@ 2015-11-12 16:23       ` Jeff Cody
  2015-11-13  4:54         ` Jeff Cody
  0 siblings, 1 reply; 14+ messages in thread
From: Jeff Cody @ 2015-11-12 16:23 UTC (permalink / raw)
  To: Prasanna Kumar Kalever
  Cc: kwolf, peter.maydell, pkrempa, stefanha, qemu-devel, deepakcs,
	bharata, rtalur

On Thu, Nov 12, 2015 at 05:04:02AM -0500, Prasanna Kumar Kalever wrote:
> On Tuesday, November 10, 2015 9:37:20 PM, Eric Blake wrote:
> > 
> > On 11/10/2015 02:09 AM, Prasanna Kumar Kalever wrote:
> > > This patch adds a way to specify multiple volfile servers to the gluster
> > > block backend of QEMU with tcp|rdma transport types and their port numbers.
> > > 
> > 
> > [...]
> 
> [...]
> 
> > 
> > Overall, I think we are probably on the right track for the QMP
> > interface; but since blockdev-add is NOT stable yet for 2.5, it won't
> > hurt to wait to get this in until 2.6, to make sure we have plenty of
> > time; and it would also be nice to make sure we get nbd, nfs, rbd,
> > sheepdog all supported in the same release; possibly by sharing common
> > types instead of introducing GlusterServer as a one-off type.
> 
> We are hoping this to go in 2.5 which is really important for gluster
> hyper-convergence release (next Feb).
> 
> Is there any possibility of getting exception for this patch ?
> 
> Thanks,
> -Prasanna
>

Today is the hard freeze for 2.5 for new features. Once the v14 patches
hit the list today, I will review them, and if they look good I can
pull them into my tree.

Whether there is an exception is really up to Peter Maydell
(added him to the cc) - if all looks good with the patches, I could
send a pull request out later today with them, but that will be late
in the evening for Peter.

-Jeff


* Re: [Qemu-devel] [PATCH v13 3/3] block/gluster: add support for multiple gluster servers
  2015-11-12 16:23       ` Jeff Cody
@ 2015-11-13  4:54         ` Jeff Cody
  0 siblings, 0 replies; 14+ messages in thread
From: Jeff Cody @ 2015-11-13  4:54 UTC (permalink / raw)
  To: Prasanna Kumar Kalever
  Cc: kwolf, peter.maydell, pkrempa, stefanha, qemu-devel, deepakcs,
	bharata, rtalur

On Thu, Nov 12, 2015 at 11:23:35AM -0500, Jeff Cody wrote:
> On Thu, Nov 12, 2015 at 05:04:02AM -0500, Prasanna Kumar Kalever wrote:
> > On Tuesday, November 10, 2015 9:37:20 PM, Eric Blake wrote:
> > > 
> > > On 11/10/2015 02:09 AM, Prasanna Kumar Kalever wrote:
> > > > This patch adds a way to specify multiple volfile servers to the gluster
> > > > block backend of QEMU with tcp|rdma transport types and their port numbers.
> > > > 
> > > 
> > > [...]
> > 
> > [...]
> > 
> > > 
> > > Overall, I think we are probably on the right track for the QMP
> > > interface; but since blockdev-add is NOT stable yet for 2.5, it won't
> > > hurt to wait to get this in until 2.6, to make sure we have plenty of
> > > time; and it would also be nice to make sure we get nbd, nfs, rbd,
> > > sheepdog all supported in the same release; possibly by sharing common
> > > types instead of introducing GlusterServer as a one-off type.
> > 
> > We are hoping this to go in 2.5 which is really important for gluster
> > hyper-convergence release (next Feb).
> > 
> > Is there any possibility of getting exception for this patch ?
> > 
> > Thanks,
> > -Prasanna
> >
> 
> Today is the hard freeze for 2.5 for new features. As the v14 patches
> hit the list today, I will review them, and if they look good I can
> pull patches into my tree. 
> 
> Whether there is an exception is really up to Peter Maydell
> (added him to the cc) - if all looks good with the patches, I could
> send a pull request out later today with them, but that will be late
> in the evening for Peter.
>

I wanted to give a follow-up - as Eric pointed out in his reviews, the
interface could be made better by using some QAPI features that are
not currently in qemu (discriminated unions).
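For readers unfamiliar with the term: a discriminated (flat) union in QAPI selects among variants based on a discriminator field in the base struct. A rough sketch of how it could apply here, in QAPI schema syntax (a hypothetical illustration only, not the schema that was eventually merged; the variant type names are made up):

```
{ 'union': 'GlusterServer',
  'base': { 'transport': 'GlusterTransport' },
  'discriminator': 'transport',
  'data': { 'tcp': 'GlusterInetServer',
            'rdma': 'GlusterRdmaServer' } }
```

With such a schema, each transport variant can carry only the fields that make sense for it, instead of one struct with optional fields for everything.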

So I won't issue a pull request for this for 2.5, and we'll pick it up for
2.6 once the interface issues can be worked out.

-Jeff


* [Qemu-devel] [PATCH 0/3] block/gluster: add support for multiple gluster servers
@ 2015-11-09 10:02 Prasanna Kumar Kalever
  0 siblings, 0 replies; 14+ messages in thread
From: Prasanna Kumar Kalever @ 2015-11-09 10:02 UTC (permalink / raw)
  To: qemu-devel
  Cc: kwolf, pkrempa, Prasanna Kumar Kalever, stefanha, deepakcs,
	bharata, rtalur

Prasanna Kumar Kalever (3):
  block/gluster: rename [server, volname, image] -> [host, volume, path]
  block/gluster: code cleanup
  block/gluster: add support for multiple gluster servers

 block/gluster.c      | 589 ++++++++++++++++++++++++++++++++++++++++-----------
 qapi/block-core.json |  64 +++++-
 2 files changed, 523 insertions(+), 130 deletions(-)

-- 
2.1.0


