* [Qemu-devel] [PATCH v19 0/5] block/gluster: add support for multiple gluster servers
@ 2016-07-15 15:00 Prasanna Kumar Kalever
  2016-07-15 15:00 ` [Qemu-devel] [PATCH v19 1/5] block/gluster: rename [server, volname, image] -> [host, volume, path] Prasanna Kumar Kalever
                   ` (4 more replies)
  0 siblings, 5 replies; 23+ messages in thread
From: Prasanna Kumar Kalever @ 2016-07-15 15:00 UTC (permalink / raw)
  To: qemu-devel
  Cc: eblake, armbru, jcody, berrange, vbellur, rtalur, Prasanna Kumar Kalever

This version of the patch series is rebased on the repo at
git://repo.or.cz/qemu/armbru.git qapi-not-next

Prasanna Kumar Kalever (5):
  block/gluster: rename [server, volname, image] -> [host, volume, path]
  block/gluster: code cleanup
  block/gluster: remove rdma transport
  block/gluster: using new qapi schema
  block/gluster: add support for multiple gluster servers

v1:
multiple host addresses but common port number and transport type
pattern: URI syntax with query (?) delimiter
syntax:
    file=gluster[+transport-type]://host1:24007/testvol/a.img\
         ?server=host2&server=host3

v2:
multiple host addresses, each with its own port number, but all using a
common transport type
pattern: URI syntax with query (?) delimiter
syntax:
    file=gluster[+transport-type]://[host[:port]]/testvol/a.img\
         [?server=host1[:port]\
          &server=host2[:port]]

v3:
multiple host addresses, each with its own port number and transport type
pattern: changed to json
syntax:
    'json:{"driver":"qcow2","file":{"driver":"gluster","volume":"testvol",
           "path":"/path/a.qcow2","server":
         [{"host":"1.2.3.4","port":"24007","transport":"tcp"},
          {"host":"4.5.6.7","port":"24008","transport":"rdma"}] } }'

v4, v5:
address comments from "Eric Blake" <eblake@redhat.com>
renamed:
'backup-volfile-servers' -> 'volfile-servers'

v6:
address comments from Peter Krempa <pkrempa@redhat.com>
renamed:
 'volname'    ->  'volume'
 'image-path' ->  'path'
 'server'     ->  'host'

v7:
fix for v6 (initialize num_servers to 1 and other typos)

v8:
split patch set v7 into a series of 3, per Peter Krempa's <pkrempa@redhat.com>
review comments

v9:
reorder the series of patches, addressing Eric Blake's <eblake@redhat.com>
review comments

v10:
fix a memory leak, per Peter Krempa's <pkrempa@redhat.com> review comments

v11:
use structures defined in qapi-types*, per Eric Blake's <eblake@redhat.com>
review comments.

v12:
fix crash caused in qapi_free_BlockdevOptionsGluster

v13:
address comments from "Jeff Cody" <jcody@redhat.com>

v14:
address comments from "Eric Blake" <eblake@redhat.com>
split patch 3/3 into two
rename input option and variable from 'servers' to 'server'

v15:
patch 1/4 changed the commit message as per Eric's comment
patch 2/4 are unchanged
patch 3/4 addressed Jeff's comments
patch 4/4 concentrates on unix transport related help info,
rename 'parse_transport_option()' to 'qapi_enum_parse()',
address memory leaks and other comments given by Jeff and Eric

v16:
In patch 4/4 fixed segfault on glfs_init() error case, as per Jeff's comments
other patches in this series remain unchanged

v17:
rebase of v16 on latest master

v18:
rebase of v17 on latest master
the rebase required converting the type of qemu_gluster_init()'s first argument
from 'BlockdevOptionsGluster**' to 'BlockdevOptionsGluster*', and updating all
its callees, in both the 3/4 and 4/4 patches

v19:
patches 1/5, 2/5 remain unchanged

patch 3/5 is new; it removes the rdma dead code

patch 4/5 (i.e. 3/4 in v18) now uses a union discriminator. I have chosen a
custom schema for gluster, since @UnixSocketAddress uses 'path' as a key,
which could be confused with gluster's 'path', and @InetSocketAddress makes
'port' a string; I have chosen to keep it uint16, which makes more sense.
Also, as Markus suggested in v18, qemu_gluster_parseuri() is renamed to
*parse_uri(), matching *parse_json() (in 5/5).

patch 5/5 (i.e. 4/4 in v18) adds the list of servers and the JSON parser
functionality, as before

Thanks to Markus and Eric for help in understanding the new schema changes.
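With the full series applied, the multi-server usage would look roughly like
the following (an illustration only, extrapolated from the v3 JSON pattern
above and the v19 schema notes; exact key names may differ in the final
patches):

```
'json:{"driver":"qcow2","file":{"driver":"gluster","volume":"testvol",
       "path":"a.qcow2","server":
     [{"type":"tcp","host":"1.2.3.4","port":24007},
      {"type":"unix","socket":"/path/socketfile"}]}}'
```

Note that with the union discriminator, each server entry carries its own
transport "type", and "port" is a number (uint16) rather than a string.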

 block/gluster.c      | 602 ++++++++++++++++++++++++++++++++++++---------------
 qapi/block-core.json |  94 +++++++-
 2 files changed, 518 insertions(+), 178 deletions(-)

-- 
2.7.4

^ permalink raw reply	[flat|nested] 23+ messages in thread

* [Qemu-devel] [PATCH v19 1/5] block/gluster: rename [server, volname, image] -> [host, volume, path]
  2016-07-15 15:00 [Qemu-devel] [PATCH v19 0/5] block/gluster: add support for multiple gluster servers Prasanna Kumar Kalever
@ 2016-07-15 15:00 ` Prasanna Kumar Kalever
  2016-07-15 15:00 ` [Qemu-devel] [PATCH v19 2/5] block/gluster: code cleanup Prasanna Kumar Kalever
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 23+ messages in thread
From: Prasanna Kumar Kalever @ 2016-07-15 15:00 UTC (permalink / raw)
  To: qemu-devel
  Cc: eblake, armbru, jcody, berrange, vbellur, rtalur, Prasanna Kumar Kalever

A future patch will add support for multiple gluster servers. Existing
terminology is a bit unusual in relation to what names are used by
other networked devices, and doesn't map very well to the terminology
we expect to use for multiple servers.  Therefore, rename the following
options:
'server'  -> 'host'
'image'   -> 'path'
'volname' -> 'volume'

Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
---
 block/gluster.c | 54 +++++++++++++++++++++++++++---------------------------
 1 file changed, 27 insertions(+), 27 deletions(-)

diff --git a/block/gluster.c b/block/gluster.c
index 16f7778..f1ac9a2 100644
--- a/block/gluster.c
+++ b/block/gluster.c
@@ -29,10 +29,10 @@ typedef struct BDRVGlusterState {
 } BDRVGlusterState;
 
 typedef struct GlusterConf {
-    char *server;
+    char *host;
     int port;
-    char *volname;
-    char *image;
+    char *volume;
+    char *path;
     char *transport;
     int debug_level;
 } GlusterConf;
@@ -40,9 +40,9 @@ typedef struct GlusterConf {
 static void qemu_gluster_gconf_free(GlusterConf *gconf)
 {
     if (gconf) {
-        g_free(gconf->server);
-        g_free(gconf->volname);
-        g_free(gconf->image);
+        g_free(gconf->host);
+        g_free(gconf->volume);
+        g_free(gconf->path);
         g_free(gconf->transport);
         g_free(gconf);
     }
@@ -62,19 +62,19 @@ static int parse_volume_options(GlusterConf *gconf, char *path)
     if (*p == '\0') {
         return -EINVAL;
     }
-    gconf->volname = g_strndup(q, p - q);
+    gconf->volume = g_strndup(q, p - q);
 
-    /* image */
+    /* path */
     p += strspn(p, "/");
     if (*p == '\0') {
         return -EINVAL;
     }
-    gconf->image = g_strdup(p);
+    gconf->path = g_strdup(p);
     return 0;
 }
 
 /*
- * file=gluster[+transport]://[server[:port]]/volname/image[?socket=...]
+ * file=gluster[+transport]://[host[:port]]/volume/path[?socket=...]
  *
  * 'gluster' is the protocol.
  *
@@ -83,10 +83,10 @@ static int parse_volume_options(GlusterConf *gconf, char *path)
  * tcp, unix and rdma. If a transport type isn't specified, then tcp
  * type is assumed.
  *
- * 'server' specifies the server where the volume file specification for
+ * 'host' specifies the host where the volume file specification for
  * the given volume resides. This can be either hostname, ipv4 address
  * or ipv6 address. ipv6 address needs to be within square brackets [ ].
- * If transport type is 'unix', then 'server' field should not be specified.
+ * If transport type is 'unix', then 'host' field should not be specified.
  * The 'socket' field needs to be populated with the path to unix domain
  * socket.
  *
@@ -95,9 +95,9 @@ static int parse_volume_options(GlusterConf *gconf, char *path)
  * default port. If the transport type is unix, then 'port' should not be
  * specified.
  *
- * 'volname' is the name of the gluster volume which contains the VM image.
+ * 'volume' is the name of the gluster volume which contains the VM image.
  *
- * 'image' is the path to the actual VM image that resides on gluster volume.
+ * 'path' is the path to the actual VM image that resides on gluster volume.
  *
  * Examples:
  *
@@ -106,7 +106,7 @@ static int parse_volume_options(GlusterConf *gconf, char *path)
  * file=gluster+tcp://1.2.3.4:24007/testvol/dir/a.img
  * file=gluster+tcp://[1:2:3:4:5:6:7:8]/testvol/dir/a.img
  * file=gluster+tcp://[1:2:3:4:5:6:7:8]:24007/testvol/dir/a.img
- * file=gluster+tcp://server.domain.com:24007/testvol/dir/a.img
+ * file=gluster+tcp://host.domain.com:24007/testvol/dir/a.img
  * file=gluster+unix:///testvol/dir/a.img?socket=/tmp/glusterd.socket
  * file=gluster+rdma://1.2.3.4:24007/testvol/a.img
  */
@@ -157,9 +157,9 @@ static int qemu_gluster_parseuri(GlusterConf *gconf, const char *filename)
             ret = -EINVAL;
             goto out;
         }
-        gconf->server = g_strdup(qp->p[0].value);
+        gconf->host = g_strdup(qp->p[0].value);
     } else {
-        gconf->server = g_strdup(uri->server ? uri->server : "localhost");
+        gconf->host = g_strdup(uri->server ? uri->server : "localhost");
         gconf->port = uri->port;
     }
 
@@ -180,18 +180,18 @@ static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
 
     ret = qemu_gluster_parseuri(gconf, filename);
     if (ret < 0) {
-        error_setg(errp, "Usage: file=gluster[+transport]://[server[:port]]/"
-                   "volname/image[?socket=...]");
+        error_setg(errp, "Usage: file=gluster[+transport]://[host[:port]]/"
+                   "volume/path[?socket=...]");
         errno = -ret;
         goto out;
     }
 
-    glfs = glfs_new(gconf->volname);
+    glfs = glfs_new(gconf->volume);
     if (!glfs) {
         goto out;
     }
 
-    ret = glfs_set_volfile_server(glfs, gconf->transport, gconf->server,
+    ret = glfs_set_volfile_server(glfs, gconf->transport, gconf->host,
             gconf->port);
     if (ret < 0) {
         goto out;
@@ -205,9 +205,9 @@ static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
     ret = glfs_init(glfs);
     if (ret) {
         error_setg_errno(errp, errno,
-                         "Gluster connection failed for server=%s port=%d "
-                         "volume=%s image=%s transport=%s", gconf->server,
-                         gconf->port, gconf->volname, gconf->image,
+                         "Gluster connection failed for host=%s port=%d "
+                         "volume=%s path=%s transport=%s", gconf->host,
+                         gconf->port, gconf->volume, gconf->path,
                          gconf->transport);
 
         /* glfs_init sometimes doesn't set errno although docs suggest that */
@@ -373,7 +373,7 @@ static int qemu_gluster_open(BlockDriverState *bs,  QDict *options,
 
     qemu_gluster_parse_flags(bdrv_flags, &open_flags);
 
-    s->fd = glfs_open(s->glfs, gconf->image, open_flags);
+    s->fd = glfs_open(s->glfs, gconf->path, open_flags);
     if (!s->fd) {
         ret = -errno;
     }
@@ -439,7 +439,7 @@ static int qemu_gluster_reopen_prepare(BDRVReopenState *state,
     }
 #endif
 
-    reop_s->fd = glfs_open(reop_s->glfs, gconf->image, open_flags);
+    reop_s->fd = glfs_open(reop_s->glfs, gconf->path, open_flags);
     if (reop_s->fd == NULL) {
         /* reops->glfs will be cleaned up in _abort */
         ret = -errno;
@@ -587,7 +587,7 @@ static int qemu_gluster_create(const char *filename,
         goto out;
     }
 
-    fd = glfs_creat(glfs, gconf->image,
+    fd = glfs_creat(glfs, gconf->path,
         O_WRONLY | O_CREAT | O_TRUNC | O_BINARY, S_IRUSR | S_IWUSR);
     if (!fd) {
         ret = -errno;
-- 
2.7.4


* [Qemu-devel] [PATCH v19 2/5] block/gluster: code cleanup
  2016-07-15 15:00 [Qemu-devel] [PATCH v19 0/5] block/gluster: add support for multiple gluster servers Prasanna Kumar Kalever
  2016-07-15 15:00 ` [Qemu-devel] [PATCH v19 1/5] block/gluster: rename [server, volname, image] -> [host, volume, path] Prasanna Kumar Kalever
@ 2016-07-15 15:00 ` Prasanna Kumar Kalever
  2016-07-15 15:00 ` [Qemu-devel] [PATCH v19 3/5] block/gluster: remove rdma transport Prasanna Kumar Kalever
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 23+ messages in thread
From: Prasanna Kumar Kalever @ 2016-07-15 15:00 UTC (permalink / raw)
  To: qemu-devel
  Cc: eblake, armbru, jcody, berrange, vbellur, rtalur, Prasanna Kumar Kalever

unify the coding style of multi-line function arguments and other error
functions; move scattered declarations of structures and option list variables
to one place

Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
---
 block/gluster.c | 143 +++++++++++++++++++++++++++++---------------------------
 1 file changed, 75 insertions(+), 68 deletions(-)

diff --git a/block/gluster.c b/block/gluster.c
index f1ac9a2..40ee852 100644
--- a/block/gluster.c
+++ b/block/gluster.c
@@ -13,6 +13,12 @@
 #include "qapi/error.h"
 #include "qemu/uri.h"
 
+#define GLUSTER_OPT_FILENAME        "filename"
+#define GLUSTER_OPT_DEBUG           "debug"
+#define GLUSTER_DEBUG_DEFAULT       4
+#define GLUSTER_DEBUG_MAX           9
+
+
 typedef struct GlusterAIOCB {
     int64_t size;
     int ret;
@@ -28,6 +34,11 @@ typedef struct BDRVGlusterState {
     int debug_level;
 } BDRVGlusterState;
 
+typedef struct BDRVGlusterReopenState {
+    struct glfs *glfs;
+    struct glfs_fd *fd;
+} BDRVGlusterReopenState;
+
 typedef struct GlusterConf {
     char *host;
     int port;
@@ -37,6 +48,49 @@ typedef struct GlusterConf {
     int debug_level;
 } GlusterConf;
 
+
+static QemuOptsList qemu_gluster_create_opts = {
+    .name = "qemu-gluster-create-opts",
+    .head = QTAILQ_HEAD_INITIALIZER(qemu_gluster_create_opts.head),
+    .desc = {
+        {
+            .name = BLOCK_OPT_SIZE,
+            .type = QEMU_OPT_SIZE,
+            .help = "Virtual disk size"
+        },
+        {
+            .name = BLOCK_OPT_PREALLOC,
+            .type = QEMU_OPT_STRING,
+            .help = "Preallocation mode (allowed values: off, full)"
+        },
+        {
+            .name = GLUSTER_OPT_DEBUG,
+            .type = QEMU_OPT_NUMBER,
+            .help = "Gluster log level, valid range is 0-9",
+        },
+        { /* end of list */ }
+    }
+};
+
+static QemuOptsList runtime_opts = {
+    .name = "gluster",
+    .head = QTAILQ_HEAD_INITIALIZER(runtime_opts.head),
+    .desc = {
+        {
+            .name = GLUSTER_OPT_FILENAME,
+            .type = QEMU_OPT_STRING,
+            .help = "URL to the gluster image",
+        },
+        {
+            .name = GLUSTER_OPT_DEBUG,
+            .type = QEMU_OPT_NUMBER,
+            .help = "Gluster log level, valid range is 0-9",
+        },
+        { /* end of list */ }
+    },
+};
+
+
 static void qemu_gluster_gconf_free(GlusterConf *gconf)
 {
     if (gconf) {
@@ -181,7 +235,7 @@ static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
     ret = qemu_gluster_parseuri(gconf, filename);
     if (ret < 0) {
         error_setg(errp, "Usage: file=gluster[+transport]://[host[:port]]/"
-                   "volume/path[?socket=...]");
+                         "volume/path[?socket=...]");
         errno = -ret;
         goto out;
     }
@@ -255,30 +309,6 @@ static void gluster_finish_aiocb(struct glfs_fd *fd, ssize_t ret, void *arg)
     qemu_bh_schedule(acb->bh);
 }
 
-#define GLUSTER_OPT_FILENAME "filename"
-#define GLUSTER_OPT_DEBUG "debug"
-#define GLUSTER_DEBUG_DEFAULT 4
-#define GLUSTER_DEBUG_MAX 9
-
-/* TODO Convert to fine grained options */
-static QemuOptsList runtime_opts = {
-    .name = "gluster",
-    .head = QTAILQ_HEAD_INITIALIZER(runtime_opts.head),
-    .desc = {
-        {
-            .name = GLUSTER_OPT_FILENAME,
-            .type = QEMU_OPT_STRING,
-            .help = "URL to the gluster image",
-        },
-        {
-            .name = GLUSTER_OPT_DEBUG,
-            .type = QEMU_OPT_NUMBER,
-            .help = "Gluster log level, valid range is 0-9",
-        },
-        { /* end of list */ }
-    },
-};
-
 static void qemu_gluster_parse_flags(int bdrv_flags, int *open_flags)
 {
     assert(open_flags != NULL);
@@ -395,12 +425,6 @@ out:
     return ret;
 }
 
-typedef struct BDRVGlusterReopenState {
-    struct glfs *glfs;
-    struct glfs_fd *fd;
-} BDRVGlusterReopenState;
-
-
 static int qemu_gluster_reopen_prepare(BDRVReopenState *state,
                                        BlockReopenQueue *queue, Error **errp)
 {
@@ -501,7 +525,9 @@ static void qemu_gluster_reopen_abort(BDRVReopenState *state)
 
 #ifdef CONFIG_GLUSTERFS_ZEROFILL
 static coroutine_fn int qemu_gluster_co_pwrite_zeroes(BlockDriverState *bs,
-        int64_t offset, int size, BdrvRequestFlags flags)
+                                                      int64_t offset,
+                                                      int size,
+                                                      BdrvRequestFlags flags)
 {
     int ret;
     GlusterAIOCB acb;
@@ -527,7 +553,7 @@ static inline bool gluster_supports_zerofill(void)
 }
 
 static inline int qemu_gluster_zerofill(struct glfs_fd *fd, int64_t offset,
-        int64_t size)
+                                        int64_t size)
 {
     return glfs_zerofill(fd, offset, size);
 }
@@ -539,7 +565,7 @@ static inline bool gluster_supports_zerofill(void)
 }
 
 static inline int qemu_gluster_zerofill(struct glfs_fd *fd, int64_t offset,
-        int64_t size)
+                                        int64_t size)
 {
     return 0;
 }
@@ -576,19 +602,17 @@ static int qemu_gluster_create(const char *filename,
     tmp = qemu_opt_get_del(opts, BLOCK_OPT_PREALLOC);
     if (!tmp || !strcmp(tmp, "off")) {
         prealloc = 0;
-    } else if (!strcmp(tmp, "full") &&
-               gluster_supports_zerofill()) {
+    } else if (!strcmp(tmp, "full") && gluster_supports_zerofill()) {
         prealloc = 1;
     } else {
         error_setg(errp, "Invalid preallocation mode: '%s'"
-            " or GlusterFS doesn't support zerofill API",
-            tmp);
+                         " or GlusterFS doesn't support zerofill API", tmp);
         ret = -EINVAL;
         goto out;
     }
 
     fd = glfs_creat(glfs, gconf->path,
-        O_WRONLY | O_CREAT | O_TRUNC | O_BINARY, S_IRUSR | S_IWUSR);
+                    O_WRONLY | O_CREAT | O_TRUNC | O_BINARY, S_IRUSR | S_IWUSR);
     if (!fd) {
         ret = -errno;
     } else {
@@ -614,7 +638,8 @@ out:
 }
 
 static coroutine_fn int qemu_gluster_co_rw(BlockDriverState *bs,
-        int64_t sector_num, int nb_sectors, QEMUIOVector *qiov, int write)
+                                           int64_t sector_num, int nb_sectors,
+                                           QEMUIOVector *qiov, int write)
 {
     int ret;
     GlusterAIOCB acb;
@@ -629,10 +654,10 @@ static coroutine_fn int qemu_gluster_co_rw(BlockDriverState *bs,
 
     if (write) {
         ret = glfs_pwritev_async(s->fd, qiov->iov, qiov->niov, offset, 0,
-            gluster_finish_aiocb, &acb);
+                                 gluster_finish_aiocb, &acb);
     } else {
         ret = glfs_preadv_async(s->fd, qiov->iov, qiov->niov, offset, 0,
-            gluster_finish_aiocb, &acb);
+                                gluster_finish_aiocb, &acb);
     }
 
     if (ret < 0) {
@@ -657,13 +682,17 @@ static int qemu_gluster_truncate(BlockDriverState *bs, int64_t offset)
 }
 
 static coroutine_fn int qemu_gluster_co_readv(BlockDriverState *bs,
-        int64_t sector_num, int nb_sectors, QEMUIOVector *qiov)
+                                              int64_t sector_num,
+                                              int nb_sectors,
+                                              QEMUIOVector *qiov)
 {
     return qemu_gluster_co_rw(bs, sector_num, nb_sectors, qiov, 0);
 }
 
 static coroutine_fn int qemu_gluster_co_writev(BlockDriverState *bs,
-        int64_t sector_num, int nb_sectors, QEMUIOVector *qiov)
+                                               int64_t sector_num,
+                                               int nb_sectors,
+                                               QEMUIOVector *qiov)
 {
     return qemu_gluster_co_rw(bs, sector_num, nb_sectors, qiov, 1);
 }
@@ -725,7 +754,8 @@ error:
 
 #ifdef CONFIG_GLUSTERFS_DISCARD
 static coroutine_fn int qemu_gluster_co_discard(BlockDriverState *bs,
-        int64_t sector_num, int nb_sectors)
+                                                int64_t sector_num,
+                                                int nb_sectors)
 {
     int ret;
     GlusterAIOCB acb;
@@ -934,29 +964,6 @@ static int64_t coroutine_fn qemu_gluster_co_get_block_status(
 }
 
 
-static QemuOptsList qemu_gluster_create_opts = {
-    .name = "qemu-gluster-create-opts",
-    .head = QTAILQ_HEAD_INITIALIZER(qemu_gluster_create_opts.head),
-    .desc = {
-        {
-            .name = BLOCK_OPT_SIZE,
-            .type = QEMU_OPT_SIZE,
-            .help = "Virtual disk size"
-        },
-        {
-            .name = BLOCK_OPT_PREALLOC,
-            .type = QEMU_OPT_STRING,
-            .help = "Preallocation mode (allowed values: off, full)"
-        },
-        {
-            .name = GLUSTER_OPT_DEBUG,
-            .type = QEMU_OPT_NUMBER,
-            .help = "Gluster log level, valid range is 0-9",
-        },
-        { /* end of list */ }
-    }
-};
-
 static BlockDriver bdrv_gluster = {
     .format_name                  = "gluster",
     .protocol_name                = "gluster",
-- 
2.7.4


* [Qemu-devel] [PATCH v19 3/5] block/gluster: remove rdma transport
  2016-07-15 15:00 [Qemu-devel] [PATCH v19 0/5] block/gluster: add support for multiple gluster servers Prasanna Kumar Kalever
  2016-07-15 15:00 ` [Qemu-devel] [PATCH v19 1/5] block/gluster: rename [server, volname, image] -> [host, volume, path] Prasanna Kumar Kalever
  2016-07-15 15:00 ` [Qemu-devel] [PATCH v19 2/5] block/gluster: code cleanup Prasanna Kumar Kalever
@ 2016-07-15 15:00 ` Prasanna Kumar Kalever
  2016-07-18  8:53   ` Markus Armbruster
  2016-07-15 15:00 ` [Qemu-devel] [PATCH v19 4/5] block/gluster: using new qapi schema Prasanna Kumar Kalever
  2016-07-15 15:00 ` [Qemu-devel] [PATCH v19 5/5] block/gluster: add support for multiple gluster servers Prasanna Kumar Kalever
  4 siblings, 1 reply; 23+ messages in thread
From: Prasanna Kumar Kalever @ 2016-07-15 15:00 UTC (permalink / raw)
  To: qemu-devel
  Cc: eblake, armbru, jcody, berrange, vbellur, rtalur, Prasanna Kumar Kalever

gluster volfile server fetch happens over unix and/or tcp; it does not
support volfile fetch over rdma, hence remove the dead code

Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
---
 block/gluster.c | 35 +----------------------------------
 1 file changed, 1 insertion(+), 34 deletions(-)

diff --git a/block/gluster.c b/block/gluster.c
index 40ee852..59f77bb 100644
--- a/block/gluster.c
+++ b/block/gluster.c
@@ -134,8 +134,7 @@ static int parse_volume_options(GlusterConf *gconf, char *path)
  *
  * 'transport' specifies the transport type used to connect to gluster
  * management daemon (glusterd). Valid transport types are
- * tcp, unix and rdma. If a transport type isn't specified, then tcp
- * type is assumed.
+ * tcp, unix. If a transport type isn't specified, then tcp type is assumed.
  *
  * 'host' specifies the host where the volume file specification for
  * the given volume resides. This can be either hostname, ipv4 address
@@ -162,7 +161,6 @@ static int parse_volume_options(GlusterConf *gconf, char *path)
  * file=gluster+tcp://[1:2:3:4:5:6:7:8]:24007/testvol/dir/a.img
  * file=gluster+tcp://host.domain.com:24007/testvol/dir/a.img
  * file=gluster+unix:///testvol/dir/a.img?socket=/tmp/glusterd.socket
- * file=gluster+rdma://1.2.3.4:24007/testvol/a.img
  */
 static int qemu_gluster_parseuri(GlusterConf *gconf, const char *filename)
 {
@@ -184,8 +182,6 @@ static int qemu_gluster_parseuri(GlusterConf *gconf, const char *filename)
     } else if (!strcmp(uri->scheme, "gluster+unix")) {
         gconf->transport = g_strdup("unix");
         is_unix = true;
-    } else if (!strcmp(uri->scheme, "gluster+rdma")) {
-        gconf->transport = g_strdup("rdma");
     } else {
         ret = -EINVAL;
         goto out;
@@ -1048,37 +1044,8 @@ static BlockDriver bdrv_gluster_unix = {
     .create_opts                  = &qemu_gluster_create_opts,
 };
 
-static BlockDriver bdrv_gluster_rdma = {
-    .format_name                  = "gluster",
-    .protocol_name                = "gluster+rdma",
-    .instance_size                = sizeof(BDRVGlusterState),
-    .bdrv_needs_filename          = true,
-    .bdrv_file_open               = qemu_gluster_open,
-    .bdrv_reopen_prepare          = qemu_gluster_reopen_prepare,
-    .bdrv_reopen_commit           = qemu_gluster_reopen_commit,
-    .bdrv_reopen_abort            = qemu_gluster_reopen_abort,
-    .bdrv_close                   = qemu_gluster_close,
-    .bdrv_create                  = qemu_gluster_create,
-    .bdrv_getlength               = qemu_gluster_getlength,
-    .bdrv_get_allocated_file_size = qemu_gluster_allocated_file_size,
-    .bdrv_truncate                = qemu_gluster_truncate,
-    .bdrv_co_readv                = qemu_gluster_co_readv,
-    .bdrv_co_writev               = qemu_gluster_co_writev,
-    .bdrv_co_flush_to_disk        = qemu_gluster_co_flush_to_disk,
-    .bdrv_has_zero_init           = qemu_gluster_has_zero_init,
-#ifdef CONFIG_GLUSTERFS_DISCARD
-    .bdrv_co_discard              = qemu_gluster_co_discard,
-#endif
-#ifdef CONFIG_GLUSTERFS_ZEROFILL
-    .bdrv_co_pwrite_zeroes        = qemu_gluster_co_pwrite_zeroes,
-#endif
-    .bdrv_co_get_block_status     = qemu_gluster_co_get_block_status,
-    .create_opts                  = &qemu_gluster_create_opts,
-};
-
 static void bdrv_gluster_init(void)
 {
-    bdrv_register(&bdrv_gluster_rdma);
     bdrv_register(&bdrv_gluster_unix);
     bdrv_register(&bdrv_gluster_tcp);
     bdrv_register(&bdrv_gluster);
-- 
2.7.4


* [Qemu-devel] [PATCH v19 4/5] block/gluster: using new qapi schema
  2016-07-15 15:00 [Qemu-devel] [PATCH v19 0/5] block/gluster: add support for multiple gluster servers Prasanna Kumar Kalever
                   ` (2 preceding siblings ...)
  2016-07-15 15:00 ` [Qemu-devel] [PATCH v19 3/5] block/gluster: remove rdma transport Prasanna Kumar Kalever
@ 2016-07-15 15:00 ` Prasanna Kumar Kalever
  2016-07-18  9:29   ` Markus Armbruster
  2016-07-15 15:00 ` [Qemu-devel] [PATCH v19 5/5] block/gluster: add support for multiple gluster servers Prasanna Kumar Kalever
  4 siblings, 1 reply; 23+ messages in thread
From: Prasanna Kumar Kalever @ 2016-07-15 15:00 UTC (permalink / raw)
  To: qemu-devel
  Cc: eblake, armbru, jcody, berrange, vbellur, rtalur, Prasanna Kumar Kalever

this patch adds the 'GlusterServer'-related schema to qapi/block-core.json
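For reference, the shape of the new schema is along these lines (a sketch
inferred from the generated types used in the block/gluster.c hunk; the
qapi/block-core.json hunk is not shown here, so the struct names and
optionality shown below are assumptions and may differ in detail):

```
{ 'enum': 'GlusterTransport', 'data': [ 'unix', 'tcp' ] }

{ 'union': 'GlusterServer',
  'base': { 'type': 'GlusterTransport' },
  'discriminator': 'type',
  'data': { 'unix': 'GlusterUnixSocketAddress',
            'tcp': 'GlusterInetSocketAddress' } }

{ 'struct': 'BlockdevOptionsGluster',
  'data': { 'volume': 'str', 'path': 'str', 'server': 'GlusterServer' } }
```

The unix branch carries a 'socket' key (hence the generated 'u.q_unix.socket'
accessor, since 'unix' clashes with a C macro), and the tcp branch carries
'host' plus an optional uint16 'port' (hence 'u.tcp.has_port').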

Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
---
 block/gluster.c      | 111 +++++++++++++++++++++++++++++----------------------
 qapi/block-core.json |  94 ++++++++++++++++++++++++++++++++++++++++---
 2 files changed, 153 insertions(+), 52 deletions(-)

diff --git a/block/gluster.c b/block/gluster.c
index 59f77bb..ff1e783 100644
--- a/block/gluster.c
+++ b/block/gluster.c
@@ -15,6 +15,7 @@
 
 #define GLUSTER_OPT_FILENAME        "filename"
 #define GLUSTER_OPT_DEBUG           "debug"
+#define GLUSTER_DEFAULT_PORT        24007
 #define GLUSTER_DEBUG_DEFAULT       4
 #define GLUSTER_DEBUG_MAX           9
 
@@ -39,15 +40,6 @@ typedef struct BDRVGlusterReopenState {
     struct glfs_fd *fd;
 } BDRVGlusterReopenState;
 
-typedef struct GlusterConf {
-    char *host;
-    int port;
-    char *volume;
-    char *path;
-    char *transport;
-    int debug_level;
-} GlusterConf;
-
 
 static QemuOptsList qemu_gluster_create_opts = {
     .name = "qemu-gluster-create-opts",
@@ -91,18 +83,7 @@ static QemuOptsList runtime_opts = {
 };
 
 
-static void qemu_gluster_gconf_free(GlusterConf *gconf)
-{
-    if (gconf) {
-        g_free(gconf->host);
-        g_free(gconf->volume);
-        g_free(gconf->path);
-        g_free(gconf->transport);
-        g_free(gconf);
-    }
-}
-
-static int parse_volume_options(GlusterConf *gconf, char *path)
+static int parse_volume_options(BlockdevOptionsGluster *gconf, char *path)
 {
     char *p, *q;
 
@@ -162,8 +143,10 @@ static int parse_volume_options(GlusterConf *gconf, char *path)
  * file=gluster+tcp://host.domain.com:24007/testvol/dir/a.img
  * file=gluster+unix:///testvol/dir/a.img?socket=/tmp/glusterd.socket
  */
-static int qemu_gluster_parseuri(GlusterConf *gconf, const char *filename)
+static int qemu_gluster_parse_uri(BlockdevOptionsGluster *gconf,
+                                  const char *filename)
 {
+    GlusterServer *gsconf;
     URI *uri;
     QueryParams *qp = NULL;
     bool is_unix = false;
@@ -174,13 +157,15 @@ static int qemu_gluster_parseuri(GlusterConf *gconf, const char *filename)
         return -EINVAL;
     }
 
+    gconf->server = gsconf = g_new0(GlusterServer, 1);
+
     /* transport */
     if (!uri->scheme || !strcmp(uri->scheme, "gluster")) {
-        gconf->transport = g_strdup("tcp");
+        gsconf->type = GLUSTER_TRANSPORT_TCP;
     } else if (!strcmp(uri->scheme, "gluster+tcp")) {
-        gconf->transport = g_strdup("tcp");
+        gsconf->type = GLUSTER_TRANSPORT_TCP;
     } else if (!strcmp(uri->scheme, "gluster+unix")) {
-        gconf->transport = g_strdup("unix");
+        gsconf->type = GLUSTER_TRANSPORT_UNIX;
         is_unix = true;
     } else {
         ret = -EINVAL;
@@ -207,10 +192,15 @@ static int qemu_gluster_parseuri(GlusterConf *gconf, const char *filename)
             ret = -EINVAL;
             goto out;
         }
-        gconf->host = g_strdup(qp->p[0].value);
+        gsconf->u.q_unix.socket = g_strdup(qp->p[0].value);
     } else {
-        gconf->host = g_strdup(uri->server ? uri->server : "localhost");
-        gconf->port = uri->port;
+        gsconf->u.tcp.host = g_strdup(uri->server ? uri->server : "localhost");
+        if (uri->port) {
+            gsconf->u.tcp.port = uri->port;
+        } else {
+            gsconf->u.tcp.port = GLUSTER_DEFAULT_PORT;
+        }
+        gsconf->u.tcp.has_port = true;
     }
 
 out:
@@ -221,14 +211,14 @@ out:
     return ret;
 }
 
-static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
-                                      Error **errp)
+static struct glfs *qemu_gluster_init(BlockdevOptionsGluster *gconf,
+                                      const char *filename, Error **errp)
 {
     struct glfs *glfs = NULL;
     int ret;
     int old_errno;
 
-    ret = qemu_gluster_parseuri(gconf, filename);
+    ret = qemu_gluster_parse_uri(gconf, filename);
     if (ret < 0) {
         error_setg(errp, "Usage: file=gluster[+transport]://[host[:port]]/"
                          "volume/path[?socket=...]");
@@ -241,8 +231,16 @@ static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
         goto out;
     }
 
-    ret = glfs_set_volfile_server(glfs, gconf->transport, gconf->host,
-            gconf->port);
+    if (gconf->server->type == GLUSTER_TRANSPORT_UNIX) {
+        ret = glfs_set_volfile_server(glfs,
+                                      GlusterTransport_lookup[gconf->server->type],
+                                      gconf->server->u.q_unix.socket, 0);
+    } else {
+        ret = glfs_set_volfile_server(glfs,
+                                      GlusterTransport_lookup[gconf->server->type],
+                                      gconf->server->u.tcp.host,
+                                      gconf->server->u.tcp.port);
+    }
     if (ret < 0) {
         goto out;
     }
@@ -254,15 +252,24 @@ static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
 
     ret = glfs_init(glfs);
     if (ret) {
-        error_setg_errno(errp, errno,
-                         "Gluster connection failed for host=%s port=%d "
-                         "volume=%s path=%s transport=%s", gconf->host,
-                         gconf->port, gconf->volume, gconf->path,
-                         gconf->transport);
+        if (gconf->server->type == GLUSTER_TRANSPORT_UNIX) {
+            error_setg(errp,
+                       "Gluster connection for volume: %s, path: %s "
+                       "failed to connect on socket %s ",
+                       gconf->volume, gconf->path,
+                       gconf->server->u.q_unix.socket);
+        } else {
+            error_setg(errp,
+                       "Gluster connection for volume: %s, path: %s "
+                       "failed to connect  host  %s and port %d ",
+                       gconf->volume, gconf->path,
+                       gconf->server->u.tcp.host, gconf->server->u.tcp.port);
+        }
 
         /* glfs_init sometimes doesn't set errno although docs suggest that */
-        if (errno == 0)
+        if (errno == 0) {
             errno = EINVAL;
+        }
 
         goto out;
     }
@@ -350,7 +357,7 @@ static int qemu_gluster_open(BlockDriverState *bs,  QDict *options,
     BDRVGlusterState *s = bs->opaque;
     int open_flags = 0;
     int ret = 0;
-    GlusterConf *gconf = g_new0(GlusterConf, 1);
+    BlockdevOptionsGluster *gconf = NULL;
     QemuOpts *opts;
     Error *local_err = NULL;
     const char *filename;
@@ -373,7 +380,9 @@ static int qemu_gluster_open(BlockDriverState *bs,  QDict *options,
         s->debug_level = GLUSTER_DEBUG_MAX;
     }
 
+    gconf = g_new0(BlockdevOptionsGluster, 1);
     gconf->debug_level = s->debug_level;
+    gconf->has_debug_level = true;
     s->glfs = qemu_gluster_init(gconf, filename, errp);
     if (!s->glfs) {
         ret = -errno;
@@ -408,7 +417,9 @@ static int qemu_gluster_open(BlockDriverState *bs,  QDict *options,
 
 out:
     qemu_opts_del(opts);
-    qemu_gluster_gconf_free(gconf);
+    if (gconf) {
+        qapi_free_BlockdevOptionsGluster(gconf);
+    }
     if (!ret) {
         return ret;
     }
@@ -427,7 +438,7 @@ static int qemu_gluster_reopen_prepare(BDRVReopenState *state,
     int ret = 0;
     BDRVGlusterState *s;
     BDRVGlusterReopenState *reop_s;
-    GlusterConf *gconf = NULL;
+    BlockdevOptionsGluster *gconf;
     int open_flags = 0;
 
     assert(state != NULL);
@@ -440,9 +451,9 @@ static int qemu_gluster_reopen_prepare(BDRVReopenState *state,
 
     qemu_gluster_parse_flags(state->flags, &open_flags);
 
-    gconf = g_new0(GlusterConf, 1);
-
+    gconf = g_new0(BlockdevOptionsGluster, 1);
     gconf->debug_level = s->debug_level;
+    gconf->has_debug_level = true;
     reop_s->glfs = qemu_gluster_init(gconf, state->bs->filename, errp);
     if (reop_s->glfs == NULL) {
         ret = -errno;
@@ -468,7 +479,9 @@ static int qemu_gluster_reopen_prepare(BDRVReopenState *state,
 
 exit:
     /* state->opaque will be freed in either the _abort or _commit */
-    qemu_gluster_gconf_free(gconf);
+    if (gconf) {
+        qapi_free_BlockdevOptionsGluster(gconf);
+    }
     return ret;
 }
 
@@ -570,14 +583,15 @@ static inline int qemu_gluster_zerofill(struct glfs_fd *fd, int64_t offset,
 static int qemu_gluster_create(const char *filename,
                                QemuOpts *opts, Error **errp)
 {
+    BlockdevOptionsGluster *gconf;
     struct glfs *glfs;
     struct glfs_fd *fd;
     int ret = 0;
     int prealloc = 0;
     int64_t total_size = 0;
     char *tmp = NULL;
-    GlusterConf *gconf = g_new0(GlusterConf, 1);
 
+    gconf = g_new0(BlockdevOptionsGluster, 1);
     gconf->debug_level = qemu_opt_get_number_del(opts, GLUSTER_OPT_DEBUG,
                                                  GLUSTER_DEBUG_DEFAULT);
     if (gconf->debug_level < 0) {
@@ -585,6 +599,7 @@ static int qemu_gluster_create(const char *filename,
     } else if (gconf->debug_level > GLUSTER_DEBUG_MAX) {
         gconf->debug_level = GLUSTER_DEBUG_MAX;
     }
+    gconf->has_debug_level = true;
 
     glfs = qemu_gluster_init(gconf, filename, errp);
     if (!glfs) {
@@ -626,7 +641,9 @@ static int qemu_gluster_create(const char *filename,
     }
 out:
     g_free(tmp);
-    qemu_gluster_gconf_free(gconf);
+    if (gconf) {
+        qapi_free_BlockdevOptionsGluster(gconf);
+    }
     if (glfs) {
         glfs_fini(glfs);
     }
diff --git a/qapi/block-core.json b/qapi/block-core.json
index a7fdb28..d7b5c76 100644
--- a/qapi/block-core.json
+++ b/qapi/block-core.json
@@ -1658,13 +1658,14 @@
 # @host_device, @host_cdrom: Since 2.1
 #
 # Since: 2.0
+# @gluster: Since 2.7
 ##
 { 'enum': 'BlockdevDriver',
   'data': [ 'archipelago', 'blkdebug', 'blkverify', 'bochs', 'cloop',
-            'dmg', 'file', 'ftp', 'ftps', 'host_cdrom', 'host_device',
-            'http', 'https', 'luks', 'null-aio', 'null-co', 'parallels',
-            'qcow', 'qcow2', 'qed', 'quorum', 'raw', 'tftp', 'vdi', 'vhdx',
-            'vmdk', 'vpc', 'vvfat' ] }
+            'dmg', 'file', 'ftp', 'ftps', 'gluster', 'host_cdrom',
+            'host_device', 'http', 'https', 'luks', 'null-aio', 'null-co',
+            'parallels', 'qcow', 'qcow2', 'qed', 'quorum', 'raw', 'tftp',
+            'vdi', 'vhdx', 'vmdk', 'vpc', 'vvfat' ] }
 
 ##
 # @BlockdevOptionsFile
@@ -2057,6 +2058,89 @@
             '*read-pattern': 'QuorumReadPattern' } }
 
 ##
+# @GlusterTransport
+#
+# An enumeration of Gluster transport types
+#
+# @tcp:   TCP   - Transmission Control Protocol
+#
+# @unix:  UNIX  - Unix domain socket
+#
+# Since: 2.7
+##
+{ 'enum': 'GlusterTransport',
+  'data': [ 'unix', 'tcp' ] }
+
+##
+# @GlusterUnixSocketAddress
+#
+# Captures a socket address in the local ("Unix socket") namespace.
+#
+# @socket:   absolute path to socket file
+#
+# Since: 2.7
+##
+{ 'struct': 'GlusterUnixSocketAddress',
+  'data': { 'socket': 'str' } }
+
+##
+# @GlusterInetSocketAddress
+#
+# Captures a socket address in the Internet namespace.
+#
+# @host:  host part of the address
+#
+# @port:  #optional port part of the address (default 24007)
+#
+# Since: 2.7
+##
+{ 'struct': 'GlusterInetSocketAddress',
+  'data': { 'host': 'str',
+            '*port': 'uint16' } }
+
+##
+# @GlusterServer
+#
+# Details for connecting to a gluster server (volfile server)
+#
+# @type:       Transport type used for gluster connection
+#
+# @unix:       socket file
+#
+# @tcp:        host address and port number
+#
+# Since: 2.7
+##
+{ 'union': 'GlusterServer',
+  'base': { 'type': 'GlusterTransport' },
+  'discriminator': 'type',
+  'data': { 'unix': 'GlusterUnixSocketAddress',
+            'tcp': 'GlusterInetSocketAddress' } }
+
+##
+# @BlockdevOptionsGluster
+#
+# Driver specific block device options for Gluster
+#
+# @volume:      name of gluster volume where VM image resides
+#
+# @path:        absolute path to image file in gluster volume
+#
+# @server:      gluster server description
+#
+# @debug_level: #optional libgfapi log level (default '4' which is Error)
+#
+# Since: 2.7
+##
+{ 'struct': 'BlockdevOptionsGluster',
+  'data': { 'volume': 'str',
+            'path': 'str',
+            'server': 'GlusterServer',
+            '*debug_level': 'int' } }
+
+##
 # @BlockdevOptions
 #
 # Options for creating a block device.  Many options are available for all
@@ -2119,7 +2203,7 @@
       'file':       'BlockdevOptionsFile',
       'ftp':        'BlockdevOptionsFile',
       'ftps':       'BlockdevOptionsFile',
-# TODO gluster: Wait for structured options
+      'gluster':    'BlockdevOptionsGluster',
       'host_cdrom': 'BlockdevOptionsFile',
       'host_device':'BlockdevOptionsFile',
       'http':       'BlockdevOptionsFile',
-- 
2.7.4
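A note on the schema introduced in patch 4/5 above: a QAPI flat union like
GlusterServer carries the discriminator 'type' and the fields of the selected
branch at the same nesting level. The rule can be sketched in a few lines of
plain Python (illustrative only, not QEMU code; the names here are made up):

```python
# Required fields per GlusterServer branch, following the schema in this
# series; 'port' is optional in the tcp branch so it is not listed.
BRANCH_FIELDS = {
    "tcp": {"host"},
    "unix": {"socket"},
}

def validate_gluster_server(obj):
    """Check a dict against the flat-union shape: discriminator + branch fields."""
    required = BRANCH_FIELDS.get(obj.get("type"))
    if required is None:
        raise ValueError("'type' must be one of: %s" % "|".join(BRANCH_FIELDS))
    missing = required - obj.keys()
    if missing:
        raise ValueError("missing field(s): %s" % ", ".join(sorted(missing)))
    return True
```

This is only a mental model of what the generated QAPI visitor enforces; the
real checking is done by the code generated from the schema.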

^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [Qemu-devel] [PATCH v19 5/5] block/gluster: add support for multiple gluster servers
  2016-07-15 15:00 [Qemu-devel] [PATCH v19 0/5] block/gluster: add support for multiple gluster servers Prasanna Kumar Kalever
                   ` (3 preceding siblings ...)
  2016-07-15 15:00 ` [Qemu-devel] [PATCH v19 4/5] block/gluster: using new qapi schema Prasanna Kumar Kalever
@ 2016-07-15 15:00 ` Prasanna Kumar Kalever
  2016-07-18 10:17   ` Markus Armbruster
  4 siblings, 1 reply; 23+ messages in thread
From: Prasanna Kumar Kalever @ 2016-07-15 15:00 UTC (permalink / raw)
  To: qemu-devel
  Cc: eblake, armbru, jcody, berrange, vbellur, rtalur, Prasanna Kumar Kalever

This patch adds a way to specify multiple volfile servers to the gluster
block backend of QEMU with tcp|unix transport types and their port numbers.

Problem:

Currently, a VM image on a gluster volume is specified like this:

file=gluster[+tcp]://host[:port]/testvol/a.img

Say we have three hosts in a trusted pool with a replica 3 volume.
When the host mentioned in the command above goes down for some reason,
the other two hosts are still available, but there is currently no way
to tell QEMU about them.

Solution:

New way of specifying a VM image on a gluster volume with volfile servers
(the old syntax is still supported for backward compatibility):

Basic command line syntax looks like:

Pattern I:
 -drive driver=gluster,
        volume=testvol,path=/path/a.raw,[debug=N,]
        server.0.type=tcp,
        server.0.host=1.2.3.4,
       [server.0.port=24007,]
        server.1.type=unix,
        server.1.socket=/path/socketfile

Pattern II:
 'json:{"driver":"qcow2","file":{"driver":"gluster",
       "volume":"testvol","path":"/path/a.qcow2",["debug":N,]
       "server":[{hostinfo_1}, ...{hostinfo_N}]}}'

   driver      => 'gluster' (protocol name)
   volume      => name of gluster volume where our VM image resides
   path        => absolute path of image in gluster volume
  [debug]      => libgfapi loglevel [(0 - 9) default 4 -> Error]

  {hostinfo}   => {"type":"tcp","host":"1.2.3.4"[,"port":24007]} or
                  {"type":"unix","socket":"/path/sockfile"}

   type        => transport type used to connect to gluster management daemon,
                  it can be tcp|unix
   host        => host address (hostname/ipv4/ipv6 address)
  [port]       => port number on which glusterd is listening. (default 24007)
   socket      => path to socket file
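
The dotted server.N.* keys of Pattern I are a purely mechanical flattening
of the hostinfo list. As a quick illustration (plain Python, not QEMU code;
the helper name is made up for this sketch):

```python
def flatten_servers(servers):
    """Flatten a list of hostinfo dicts into dotted server.N.* option keys."""
    opts = []
    for i, info in enumerate(servers):
        for key, value in info.items():
            # e.g. server.0.type=tcp, server.0.host=1.2.3.4, ...
            opts.append("server.%d.%s=%s" % (i, key, value))
    return ",".join(opts)

# Assemble a full -drive option string from the flattened keys.
drive_opts = "driver=gluster,volume=testvol,path=/path/a.raw," + flatten_servers([
    {"type": "tcp", "host": "1.2.3.4", "port": 24007},
    {"type": "unix", "socket": "/path/socketfile"},
])
```

With dict insertion order preserved (Python 3.7+), the two hostinfo entries
flatten to the same key order shown in Pattern I above.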

Examples:
1.
 -drive driver=qcow2,file.driver=gluster,
        file.volume=testvol,file.path=/path/a.qcow2,file.debug=9,
        file.server.0.type=tcp,
        file.server.0.host=1.2.3.4,
        file.server.0.port=24007,
        file.server.1.type=unix,
        file.server.1.socket=/var/run/glusterd.socket
2.
 'json:{"driver":"qcow2","file":{"driver":"gluster","volume":"testvol",
         "path":"/path/a.qcow2","debug":9,"server":
         [{"type":"tcp","host":"1.2.3.4","port":24007},
          {"type":"unix","socket":"/var/run/glusterd.socket"}] } }'
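
Rather than hand-writing the Pattern II string, the json: pseudo-filename can
be generated with any JSON serializer. A sketch under assumptions (the helper
and its parameters are hypothetical; only the key names come from the QAPI
schema in this series):

```python
import json

def gluster_json_filename(volume, path, servers, debug=None):
    """Build a 'json:' pseudo-filename for the gluster driver (sketch only)."""
    file_opts = {"driver": "gluster", "volume": volume, "path": path,
                 "server": servers}
    if debug is not None:
        file_opts["debug"] = debug  # libgfapi log level, 0-9
    return "json:" + json.dumps({"driver": "qcow2", "file": file_opts})

name = gluster_json_filename("testvol", "/path/a.qcow2",
                             [{"type": "tcp", "host": "1.2.3.4", "port": 24007},
                              {"type": "unix",
                               "socket": "/var/run/glusterd.socket"}],
                             debug=9)
```

Using a serializer avoids the quoting mistakes that are easy to make when
writing the nested JSON by hand on a shell command line.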

This patch provides a mechanism to supply all the server addresses in the
replica set, so that if host1 is down the VM can still boot from any of the
active hosts.

This is equivalent to the backup-volfile-servers option supported by
mount.glusterfs (the FUSE way of mounting a gluster volume).

credits: sincere thanks to all the supporters

Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
---
 block/gluster.c      | 347 +++++++++++++++++++++++++++++++++++++++++++++------
 qapi/block-core.json |   2 +-
 2 files changed, 307 insertions(+), 42 deletions(-)

diff --git a/block/gluster.c b/block/gluster.c
index ff1e783..fd2279d 100644
--- a/block/gluster.c
+++ b/block/gluster.c
@@ -12,8 +12,16 @@
 #include "block/block_int.h"
 #include "qapi/error.h"
 #include "qemu/uri.h"
+#include "qemu/error-report.h"
 
 #define GLUSTER_OPT_FILENAME        "filename"
+#define GLUSTER_OPT_VOLUME          "volume"
+#define GLUSTER_OPT_PATH            "path"
+#define GLUSTER_OPT_TYPE            "type"
+#define GLUSTER_OPT_SERVER_PATTERN  "server."
+#define GLUSTER_OPT_HOST            "host"
+#define GLUSTER_OPT_PORT            "port"
+#define GLUSTER_OPT_SOCKET          "socket"
 #define GLUSTER_OPT_DEBUG           "debug"
 #define GLUSTER_DEFAULT_PORT        24007
 #define GLUSTER_DEBUG_DEFAULT       4
@@ -82,6 +90,77 @@ static QemuOptsList runtime_opts = {
     },
 };
 
+static QemuOptsList runtime_json_opts = {
+    .name = "gluster_json",
+    .head = QTAILQ_HEAD_INITIALIZER(runtime_json_opts.head),
+    .desc = {
+        {
+            .name = GLUSTER_OPT_VOLUME,
+            .type = QEMU_OPT_STRING,
+            .help = "name of gluster volume where VM image resides",
+        },
+        {
+            .name = GLUSTER_OPT_PATH,
+            .type = QEMU_OPT_STRING,
+            .help = "absolute path to image file in gluster volume",
+        },
+        {
+            .name = GLUSTER_OPT_DEBUG,
+            .type = QEMU_OPT_NUMBER,
+            .help = "Gluster log level, valid range is 0-9",
+        },
+        { /* end of list */ }
+    },
+};
+
+static QemuOptsList runtime_type_opts = {
+    .name = "gluster_type",
+    .head = QTAILQ_HEAD_INITIALIZER(runtime_type_opts.head),
+    .desc = {
+        {
+            .name = GLUSTER_OPT_TYPE,
+            .type = QEMU_OPT_STRING,
+            .help = "tcp|unix",
+        },
+        { /* end of list */ }
+    },
+};
+
+static QemuOptsList runtime_unix_opts = {
+    .name = "gluster_unix",
+    .head = QTAILQ_HEAD_INITIALIZER(runtime_unix_opts.head),
+    .desc = {
+        {
+            .name = GLUSTER_OPT_SOCKET,
+            .type = QEMU_OPT_STRING,
+            .help = "socket file path",
+        },
+        { /* end of list */ }
+    },
+};
+
+static QemuOptsList runtime_tcp_opts = {
+    .name = "gluster_tcp",
+    .head = QTAILQ_HEAD_INITIALIZER(runtime_tcp_opts.head),
+    .desc = {
+        {
+            .name = GLUSTER_OPT_TYPE,
+            .type = QEMU_OPT_STRING,
+            .help = "tcp|unix",
+        },
+        {
+            .name = GLUSTER_OPT_HOST,
+            .type = QEMU_OPT_STRING,
+            .help = "host address (hostname/ipv4/ipv6 addresses)",
+        },
+        {
+            .name = GLUSTER_OPT_PORT,
+            .type = QEMU_OPT_NUMBER,
+            .help = "port number on which glusterd is listening (default 24007)",
+        },
+        { /* end of list */ }
+    },
+};
 
 static int parse_volume_options(BlockdevOptionsGluster *gconf, char *path)
 {
@@ -157,7 +236,8 @@ static int qemu_gluster_parse_uri(BlockdevOptionsGluster *gconf,
         return -EINVAL;
     }
 
-    gconf->server = gsconf = g_new0(GlusterServer, 1);
+    gconf->server = g_new0(GlusterServerList, 1);
+    gconf->server->value = gsconf = g_new0(GlusterServer, 1);
 
     /* transport */
     if (!uri->scheme || !strcmp(uri->scheme, "gluster")) {
@@ -211,38 +291,34 @@ out:
     return ret;
 }
 
-static struct glfs *qemu_gluster_init(BlockdevOptionsGluster *gconf,
-                                      const char *filename, Error **errp)
+static struct glfs *qemu_gluster_glfs_init(BlockdevOptionsGluster *gconf,
+                                           Error **errp)
 {
-    struct glfs *glfs = NULL;
+    struct glfs *glfs;
     int ret;
     int old_errno;
-
-    ret = qemu_gluster_parse_uri(gconf, filename);
-    if (ret < 0) {
-        error_setg(errp, "Usage: file=gluster[+transport]://[host[:port]]/"
-                         "volume/path[?socket=...]");
-        errno = -ret;
-        goto out;
-    }
+    GlusterServerList *server;
 
     glfs = glfs_new(gconf->volume);
     if (!glfs) {
         goto out;
     }
 
-    if (gconf->server->type == GLUSTER_TRANSPORT_UNIX) {
-        ret = glfs_set_volfile_server(glfs,
-                                      GlusterTransport_lookup[gconf->server->type],
-                                      gconf->server->u.q_unix.socket, 0);
-    } else {
-        ret = glfs_set_volfile_server(glfs,
-                                      GlusterTransport_lookup[gconf->server->type],
-                                      gconf->server->u.tcp.host,
-                                      gconf->server->u.tcp.port);
-    }
-    if (ret < 0) {
-        goto out;
+    for (server = gconf->server; server; server = server->next) {
+        if (server->value->type == GLUSTER_TRANSPORT_UNIX) {
+            ret = glfs_set_volfile_server(glfs,
+                                          GlusterTransport_lookup[server->value->type],
+                                          server->value->u.q_unix.socket, 0);
+        } else {
+            ret = glfs_set_volfile_server(glfs,
+                                          GlusterTransport_lookup[server->value->type],
+                                          server->value->u.tcp.host,
+                                          server->value->u.tcp.port);
+        }
+
+        if (ret < 0) {
+            goto out;
+        }
     }
 
     ret = glfs_set_logging(glfs, "-", gconf->debug_level);
@@ -252,19 +328,19 @@ static struct glfs *qemu_gluster_init(BlockdevOptionsGluster *gconf,
 
     ret = glfs_init(glfs);
     if (ret) {
-        if (gconf->server->type == GLUSTER_TRANSPORT_UNIX) {
-            error_setg(errp,
-                       "Gluster connection for volume: %s, path: %s "
-                       "failed to connect on socket %s ",
-                       gconf->volume, gconf->path,
-                       gconf->server->u.q_unix.socket);
-        } else {
-            error_setg(errp,
-                       "Gluster connection for volume: %s, path: %s "
-                       "failed to connect  host  %s and port %d ",
-                       gconf->volume, gconf->path,
-                       gconf->server->u.tcp.host, gconf->server->u.tcp.port);
+        error_setg(errp, "Gluster connection for volume: %s, path: %s ",
+                   gconf->volume, gconf->path);
+        for (server = gconf->server; server; server = server->next) {
+            if (server->value->type == GLUSTER_TRANSPORT_UNIX) {
+                error_append_hint(errp, "failed to connect on socket %s ",
+                                  server->value->u.q_unix.socket);
+            } else {
+                error_append_hint(errp, "failed to connect host %s and port %d ",
+                                  server->value->u.tcp.host,
+                                  server->value->u.tcp.port);
+            }
         }
+        error_append_hint(errp, "Please refer to gluster logs for more info ");
 
         /* glfs_init sometimes doesn't set errno although docs suggest that */
         if (errno == 0) {
@@ -284,6 +360,195 @@ out:
     return NULL;
 }
 
+static int qapi_enum_parse(const char *opt)
+{
+    int i;
+
+    if (!opt) {
+        return GLUSTER_TRANSPORT__MAX;
+    }
+
+    for (i = 0; i < GLUSTER_TRANSPORT__MAX; i++) {
+        if (!strcmp(opt, GlusterTransport_lookup[i])) {
+            return i;
+        }
+    }
+
+    return i;
+}
+
+/*
+ * Convert the JSON-formatted command line options into QAPI.
+ */
+static int qemu_gluster_parse_json(BlockdevOptionsGluster *gconf,
+                                   QDict *options, Error **errp)
+{
+    QemuOpts *opts;
+    GlusterServer *gsconf;
+    GlusterServerList *curr = NULL;
+    QDict *backing_options = NULL;
+    Error *local_err = NULL;
+    char *str = NULL;
+    const char *ptr;
+    int num_servers;
+    int i;
+
+    /* create opts info from runtime_json_opts list */
+    opts = qemu_opts_create(&runtime_json_opts, NULL, 0, &error_abort);
+    qemu_opts_absorb_qdict(opts, options, &local_err);
+    if (local_err) {
+        goto out;
+    }
+
+    num_servers = qdict_array_entries(options, GLUSTER_OPT_SERVER_PATTERN);
+    if (num_servers < 1) {
+        error_setg(&local_err, "please provide 'server' option with valid "
+                               "fields in array of hostinfo ");
+        goto out;
+    }
+
+    ptr = qemu_opt_get(opts, GLUSTER_OPT_VOLUME);
+    if (!ptr) {
+        error_setg(&local_err, "please provide 'volume' option ");
+        goto out;
+    }
+    gconf->volume = g_strdup(ptr);
+
+    ptr = qemu_opt_get(opts, GLUSTER_OPT_PATH);
+    if (!ptr) {
+        error_setg(&local_err, "please provide 'path' option ");
+        goto out;
+    }
+    gconf->path = g_strdup(ptr);
+    qemu_opts_del(opts);
+
+    for (i = 0; i < num_servers; i++) {
+        str = g_strdup_printf(GLUSTER_OPT_SERVER_PATTERN"%d.", i);
+        qdict_extract_subqdict(options, &backing_options, str);
+
+        /* create opts info from runtime_type_opts list */
+        opts = qemu_opts_create(&runtime_type_opts, NULL, 0, &error_abort);
+        qemu_opts_absorb_qdict(opts, backing_options, &local_err);
+        if (local_err) {
+            goto out;
+        }
+
+        ptr = qemu_opt_get(opts, GLUSTER_OPT_TYPE);
+        if (!ptr) {
+            error_setg(&local_err, "please provide 'type' in hostinfo.%d as "
+                                   "tcp|unix ", i);
+            goto out;
+        }
+        gsconf = g_new0(GlusterServer, 1);
+        gsconf->type = qapi_enum_parse(ptr);
+        if (gsconf->type == GLUSTER_TRANSPORT__MAX) {
+            error_setg(&local_err, "'type' in hostinfo.%d can be tcp|unix ", i);
+            goto out;
+        }
+        qemu_opts_del(opts);
+
+        if (gsconf->type == GLUSTER_TRANSPORT_TCP) {
+            /* create opts info from runtime_tcp_opts list */
+            opts = qemu_opts_create(&runtime_tcp_opts, NULL, 0, &error_abort);
+            qemu_opts_absorb_qdict(opts, backing_options, &local_err);
+            if (local_err) {
+                goto out;
+            }
+
+            ptr = qemu_opt_get(opts, GLUSTER_OPT_HOST);
+            if (!ptr) {
+                error_setg(&local_err, "please provide 'host' in hostinfo.%d ",
+                           i);
+                goto out;
+            }
+            gsconf->u.tcp.host = g_strdup(ptr);
+            gsconf->u.tcp.port = qemu_opt_get_number(opts, GLUSTER_OPT_PORT,
+                                                     GLUSTER_DEFAULT_PORT);
+            gsconf->u.tcp.has_port = true;
+            qemu_opts_del(opts);
+        } else {
+            /* create opts info from runtime_unix_opts list */
+            opts = qemu_opts_create(&runtime_unix_opts, NULL, 0, &error_abort);
+            qemu_opts_absorb_qdict(opts, backing_options, &local_err);
+            if (local_err) {
+                goto out;
+            }
+
+            ptr = qemu_opt_get(opts, GLUSTER_OPT_SOCKET);
+            if (!ptr) {
+                error_setg(&local_err, "please provide 'socket' in hostinfo.%d ",
+                           i);
+                goto out;
+            }
+            gsconf->u.q_unix.socket = g_strdup(ptr);
+            qemu_opts_del(opts);
+        }
+
+        if (gconf->server == NULL) {
+            gconf->server = g_new0(GlusterServerList, 1);
+            gconf->server->value = gsconf;
+            curr = gconf->server;
+        } else {
+            curr->next = g_new0(GlusterServerList, 1);
+            curr->next->value = gsconf;
+            curr = curr->next;
+        }
+
+        qdict_del(backing_options, str);
+        g_free(str);
+        str = NULL;
+    }
+
+    return 0;
+
+out:
+    if (local_err) {
+        error_propagate(errp, local_err);
+    }
+    qemu_opts_del(opts);
+    if (str) {
+        qdict_del(backing_options, str);
+        g_free(str);
+    }
+    errno = EINVAL;
+    return -errno;
+}
+
+static struct glfs *qemu_gluster_init(BlockdevOptionsGluster *gconf,
+                                      const char *filename,
+                                      QDict *options, Error **errp)
+{
+    int ret;
+    if (filename) {
+        ret = qemu_gluster_parse_uri(gconf, filename);
+        if (ret < 0) {
+            error_setg(errp, "Usage: file=gluster[+transport]://[host[:port]]/"
+                             "volume/path[?socket=...]");
+            errno = -ret;
+            return NULL;
+        }
+    } else {
+        ret = qemu_gluster_parse_json(gconf, options, errp);
+        if (ret < 0) {
+            error_append_hint(errp, "Usage: "
+                             "-drive driver=qcow2,file.driver=gluster,"
+                             "file.volume=testvol,file.path=/path/a.qcow2"
+                             "[,file.debug=9],file.server.0.type=tcp,"
+                             "file.server.0.host=1.2.3.4,"
+                             "[file.server.0.port=24007,]"
+                             "file.server.1.type=unix,"
+                             "file.server.1.socket=/var/run/glusterd.socket ...");
+            errno = -ret;
+            return NULL;
+        }
+    }
+
+    return qemu_gluster_glfs_init(gconf, errp);
+}
+
 static void qemu_gluster_complete_aio(void *opaque)
 {
     GlusterAIOCB *acb = (GlusterAIOCB *)opaque;
@@ -383,7 +648,7 @@ static int qemu_gluster_open(BlockDriverState *bs,  QDict *options,
     gconf = g_new0(BlockdevOptionsGluster, 1);
     gconf->debug_level = s->debug_level;
     gconf->has_debug_level = true;
-    s->glfs = qemu_gluster_init(gconf, filename, errp);
+    s->glfs = qemu_gluster_init(gconf, filename, options, errp);
     if (!s->glfs) {
         ret = -errno;
         goto out;
@@ -454,7 +719,7 @@ static int qemu_gluster_reopen_prepare(BDRVReopenState *state,
     gconf = g_new0(BlockdevOptionsGluster, 1);
     gconf->debug_level = s->debug_level;
     gconf->has_debug_level = true;
-    reop_s->glfs = qemu_gluster_init(gconf, state->bs->filename, errp);
+    reop_s->glfs = qemu_gluster_init(gconf, state->bs->filename, NULL, errp);
     if (reop_s->glfs == NULL) {
         ret = -errno;
         goto exit;
@@ -601,7 +866,7 @@ static int qemu_gluster_create(const char *filename,
     }
     gconf->has_debug_level = true;
 
-    glfs = qemu_gluster_init(gconf, filename, errp);
+    glfs = qemu_gluster_init(gconf, filename, NULL, errp);
     if (!glfs) {
         ret = -errno;
         goto out;
@@ -981,7 +1246,7 @@ static BlockDriver bdrv_gluster = {
     .format_name                  = "gluster",
     .protocol_name                = "gluster",
     .instance_size                = sizeof(BDRVGlusterState),
-    .bdrv_needs_filename          = true,
+    .bdrv_needs_filename          = false,
     .bdrv_file_open               = qemu_gluster_open,
     .bdrv_reopen_prepare          = qemu_gluster_reopen_prepare,
     .bdrv_reopen_commit           = qemu_gluster_reopen_commit,
@@ -1009,7 +1274,7 @@ static BlockDriver bdrv_gluster_tcp = {
     .format_name                  = "gluster",
     .protocol_name                = "gluster+tcp",
     .instance_size                = sizeof(BDRVGlusterState),
-    .bdrv_needs_filename          = true,
+    .bdrv_needs_filename          = false,
     .bdrv_file_open               = qemu_gluster_open,
     .bdrv_reopen_prepare          = qemu_gluster_reopen_prepare,
     .bdrv_reopen_commit           = qemu_gluster_reopen_commit,
diff --git a/qapi/block-core.json b/qapi/block-core.json
index d7b5c76..5557f1c 100644
--- a/qapi/block-core.json
+++ b/qapi/block-core.json
@@ -2137,7 +2137,7 @@
 { 'struct': 'BlockdevOptionsGluster',
   'data': { 'volume': 'str',
             'path': 'str',
-            'server': 'GlusterServer',
+            'server': ['GlusterServer'],
             '*debug_level': 'int' } }
 
 ##
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 23+ messages in thread

* Re: [Qemu-devel] [PATCH v19 3/5] block/gluster: remove rdma transport
  2016-07-15 15:00 ` [Qemu-devel] [PATCH v19 3/5] block/gluster: remove rdma transport Prasanna Kumar Kalever
@ 2016-07-18  8:53   ` Markus Armbruster
  2016-07-18 11:12     ` Prasanna Kalever
  0 siblings, 1 reply; 23+ messages in thread
From: Markus Armbruster @ 2016-07-18  8:53 UTC (permalink / raw)
  To: Prasanna Kumar Kalever; +Cc: qemu-devel, vbellur, jcody, rtalur

Prasanna Kumar Kalever <prasanna.kalever@redhat.com> writes:

> gluster volfile server fetch happens through unix and/or tcp, it doesn't
> support volfile fetch over rdma, hence removing the dead code
>
> Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
> ---
>  block/gluster.c | 35 +----------------------------------
>  1 file changed, 1 insertion(+), 34 deletions(-)
>
> diff --git a/block/gluster.c b/block/gluster.c
> index 40ee852..59f77bb 100644
> --- a/block/gluster.c
> +++ b/block/gluster.c
> @@ -134,8 +134,7 @@ static int parse_volume_options(GlusterConf *gconf, char *path)
>   *
>   * 'transport' specifies the transport type used to connect to gluster
>   * management daemon (glusterd). Valid transport types are
> - * tcp, unix and rdma. If a transport type isn't specified, then tcp
> - * type is assumed.
> + * tcp, unix. If a transport type isn't specified, then tcp type is assumed.
>   *
>   * 'host' specifies the host where the volume file specification for
>   * the given volume resides. This can be either hostname, ipv4 address
> @@ -162,7 +161,6 @@ static int parse_volume_options(GlusterConf *gconf, char *path)
>   * file=gluster+tcp://[1:2:3:4:5:6:7:8]:24007/testvol/dir/a.img
>   * file=gluster+tcp://host.domain.com:24007/testvol/dir/a.img
>   * file=gluster+unix:///testvol/dir/a.img?socket=/tmp/glusterd.socket
> - * file=gluster+rdma://1.2.3.4:24007/testvol/a.img
>   */
>  static int qemu_gluster_parseuri(GlusterConf *gconf, const char *filename)
>  {
> @@ -184,8 +182,6 @@ static int qemu_gluster_parseuri(GlusterConf *gconf, const char *filename)
>      } else if (!strcmp(uri->scheme, "gluster+unix")) {
>          gconf->transport = g_strdup("unix");

Outside this patch's scope: string literals would be just fine for
gconf->transport.

>          is_unix = true;
> -    } else if (!strcmp(uri->scheme, "gluster+rdma")) {
> -        gconf->transport = g_strdup("rdma");
>      } else {
>          ret = -EINVAL;
>          goto out;
> @@ -1048,37 +1044,8 @@ static BlockDriver bdrv_gluster_unix = {
>      .create_opts                  = &qemu_gluster_create_opts,
>  };
>  
> -static BlockDriver bdrv_gluster_rdma = {
> -    .format_name                  = "gluster",
> -    .protocol_name                = "gluster+rdma",
> -    .instance_size                = sizeof(BDRVGlusterState),
> -    .bdrv_needs_filename          = true,
> -    .bdrv_file_open               = qemu_gluster_open,
> -    .bdrv_reopen_prepare          = qemu_gluster_reopen_prepare,
> -    .bdrv_reopen_commit           = qemu_gluster_reopen_commit,
> -    .bdrv_reopen_abort            = qemu_gluster_reopen_abort,
> -    .bdrv_close                   = qemu_gluster_close,
> -    .bdrv_create                  = qemu_gluster_create,
> -    .bdrv_getlength               = qemu_gluster_getlength,
> -    .bdrv_get_allocated_file_size = qemu_gluster_allocated_file_size,
> -    .bdrv_truncate                = qemu_gluster_truncate,
> -    .bdrv_co_readv                = qemu_gluster_co_readv,
> -    .bdrv_co_writev               = qemu_gluster_co_writev,
> -    .bdrv_co_flush_to_disk        = qemu_gluster_co_flush_to_disk,
> -    .bdrv_has_zero_init           = qemu_gluster_has_zero_init,
> -#ifdef CONFIG_GLUSTERFS_DISCARD
> -    .bdrv_co_discard              = qemu_gluster_co_discard,
> -#endif
> -#ifdef CONFIG_GLUSTERFS_ZEROFILL
> -    .bdrv_co_pwrite_zeroes        = qemu_gluster_co_pwrite_zeroes,
> -#endif
> -    .bdrv_co_get_block_status     = qemu_gluster_co_get_block_status,
> -    .create_opts                  = &qemu_gluster_create_opts,
> -};
> -
>  static void bdrv_gluster_init(void)
>  {
> -    bdrv_register(&bdrv_gluster_rdma);
>      bdrv_register(&bdrv_gluster_unix);
>      bdrv_register(&bdrv_gluster_tcp);
>      bdrv_register(&bdrv_gluster);

This is fine if gluster+rdma never actually worked.  I tried to find out
at https://github.com/gluster/glusterfs/blob/master/api/src/glfs.h.
Transport rdma is mentioned there.  Does it work?

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [Qemu-devel] [PATCH v19 4/5] block/gluster: using new qapi schema
  2016-07-15 15:00 ` [Qemu-devel] [PATCH v19 4/5] block/gluster: using new qapi schema Prasanna Kumar Kalever
@ 2016-07-18  9:29   ` Markus Armbruster
  2016-07-18 11:29     ` Prasanna Kalever
  0 siblings, 1 reply; 23+ messages in thread
From: Markus Armbruster @ 2016-07-18  9:29 UTC (permalink / raw)
  To: Prasanna Kumar Kalever; +Cc: qemu-devel, vbellur, jcody, rtalur

Prasanna Kumar Kalever <prasanna.kalever@redhat.com> writes:

> this patch adds 'GlusterServer' related schema in qapi/block-core.json
>
> Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
> ---
>  block/gluster.c      | 111 +++++++++++++++++++++++++++++----------------------
>  qapi/block-core.json |  94 ++++++++++++++++++++++++++++++++++++++++---
>  2 files changed, 153 insertions(+), 52 deletions(-)
[Skipping ahead to QAPI schema...]
> diff --git a/qapi/block-core.json b/qapi/block-core.json
> index a7fdb28..d7b5c76 100644
> --- a/qapi/block-core.json
> +++ b/qapi/block-core.json
> @@ -1658,13 +1658,14 @@
>  # @host_device, @host_cdrom: Since 2.1
>  #
>  # Since: 2.0
> +# @gluster: Since 2.7
>  ##
>  { 'enum': 'BlockdevDriver',
>    'data': [ 'archipelago', 'blkdebug', 'blkverify', 'bochs', 'cloop',
> -            'dmg', 'file', 'ftp', 'ftps', 'host_cdrom', 'host_device',
> -            'http', 'https', 'luks', 'null-aio', 'null-co', 'parallels',
> -            'qcow', 'qcow2', 'qed', 'quorum', 'raw', 'tftp', 'vdi', 'vhdx',
> -            'vmdk', 'vpc', 'vvfat' ] }
> +            'dmg', 'file', 'ftp', 'ftps', 'gluster', 'host_cdrom',
> +            'host_device', 'http', 'https', 'luks', 'null-aio', 'null-co',
> +            'parallels', 'qcow', 'qcow2', 'qed', 'quorum', 'raw', 'tftp',
> +            'vdi', 'vhdx', 'vmdk', 'vpc', 'vvfat' ] }
>  
>  ##
>  # @BlockdevOptionsFile
> @@ -2057,6 +2058,89 @@
>              '*read-pattern': 'QuorumReadPattern' } }
>  
>  ##
> +# @GlusterTransport
> +#
> +# An enumeration of Gluster transport type
> +#
> +# @tcp:   TCP   - Transmission Control Protocol
> +#
> +# @unix:  UNIX  - Unix domain socket
> +#
> +# Since: 2.7
> +##
> +{ 'enum': 'GlusterTransport',
> +  'data': [ 'unix', 'tcp' ] }
> +
> +##
> +# @GlusterUnixSocketAddress
> +#
> +# Captures a socket address in the local ("Unix socket") namespace.
> +#
> +# @socket:   absolute path to socket file
> +#
> +# Since 2.7
> +##
> +{ 'struct': 'GlusterUnixSocketAddress',
> +  'data': { 'socket': 'str' } }

Compare:

   ##
   # @UnixSocketAddress
   #
   # Captures a socket address in the local ("Unix socket") namespace.
   #
   # @path: filesystem path to use
   #
   # Since 1.3
   ##
   { 'struct': 'UnixSocketAddress',
     'data': {
       'path': 'str' } }

> +
> +##
> +# @GlusterInetSocketAddress
> +#
> +# Captures a socket address or address range in the Internet namespace.
> +#
> +# @host:  host part of the address
> +#
> +# @port:  #optional port part of the address, or lowest port if @to is present

There is no @to.

What's the default port?

> +#
> +# Since 2.7
> +##
> +{ 'struct': 'GlusterInetSocketAddress',
> +  'data': { 'host': 'str',
> +            '*port': 'uint16' } }

Compare:

   ##
   # @InetSocketAddress
   #
   # Captures a socket address or address range in the Internet namespace.
   #
   # @host: host part of the address
   #
   # @port: port part of the address, or lowest port if @to is present
   #
   # @to: highest port to try
   #
   # @ipv4: whether to accept IPv4 addresses, default try both IPv4 and IPv6
   #        #optional
   #
   # @ipv6: whether to accept IPv6 addresses, default try both IPv4 and IPv6
   #        #optional
   #
   # Since 1.3
   ##
   { 'struct': 'InetSocketAddress',
     'data': {
       'host': 'str',
       'port': 'str',
       '*to': 'uint16',
       '*ipv4': 'bool',
       '*ipv6': 'bool' } }

> +
> +##
> +# @GlusterServer
> +#
> +# Captures the address of a socket
> +#
> +# Details for connecting to a gluster server
> +#
> +# @type:       Transport type used for gluster connection
> +#
> +# @unix:       socket file
> +#
> +# @tcp:        host address and port number
> +#
> +# Since: 2.7
> +##
> +{ 'union': 'GlusterServer',
> +  'base': { 'type': 'GlusterTransport' },
> +  'discriminator': 'type',
> +  'data': { 'unix': 'GlusterUnixSocketAddress',
> +            'tcp': 'GlusterInetSocketAddress' } }
> +

Compare:

   ##
   # @SocketAddress
   #
   # Captures the address of a socket, which could also be a named file descriptor
   #
   # Since 1.3
   ##
   { 'union': 'SocketAddress',
     'data': {
       'inet': 'InetSocketAddress',
       'unix': 'UnixSocketAddress',
       'fd': 'String' } }

You cleaned up the confusing abuse of @host as Unix domain socket file
name.  Good.

You're still defining your own QAPI types instead of using the existing
ones.  To see whether that's a good idea, let's compare your definitions
to the existing ones:

* GlusterUnixSocketAddress vs. UnixSocketAddress

  Identical but for the member name.  Why can't we use
  UnixSocketAddress?

* GlusterInetSocketAddress vs. InetSocketAddress

  Changes in GlusterInetSocketAddress over InetSocketAddress:

  - @port is optional

    Convenience feature.  We can discuss making it optional in
    InetSocketAddress, too.

  - @port has type 'uint16' instead of 'str'

    No support for service names.  Why is that a good idea?

  - Lacks @to

    No support for trying a range of ports.  I guess that simply makes
    no sense for a Gluster client, and probably makes no sense in some
    other uses of InetSocketAddress, too.  Eric has proposed to split
    InetSocketAddressRange off InetSocketAddress.

  - Lacks @ipv4, @ipv6

    No control over IPv4 vs. IPv6 use.  Immediate show-stopper.

  Can we use InetSocketAddress?

* GlusterServer vs. SocketAddress

  GlusterServer lacks case 'fd'.  Do file descriptors make no sense
  here, or is it merely an implementation restriction?

  GlusterServer is a flat union, SocketAddress is a simple union.  The flat
  union is nicer on the wire:
      { "type": "tcp", "host": "gluster.example.com", ... }
  vs.
      { "type": "tcp", "data": { "host": "gluster.example.com", ... } }

  Perhaps we should use a flat union for new interfaces.
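
For illustration, a flat-union counterpart of SocketAddress reusing the
existing member types might look like this (a hypothetical schema sketch,
not something in the tree; the names SocketAddressType and
SocketAddressFlat are invented here):

```
{ 'enum': 'SocketAddressType',
  'data': [ 'inet', 'unix', 'fd' ] }

{ 'union': 'SocketAddressFlat',
  'base': { 'type': 'SocketAddressType' },
  'discriminator': 'type',
  'data': { 'inet': 'InetSocketAddress',
            'unix': 'UnixSocketAddress',
            'fd': 'String' } }
```

which would put { "type": "inet", "host": ..., "port": ... } directly on
the wire, with no nested "data" object.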

> +##
> +# @BlockdevOptionsGluster
> +#
> +# Driver specific block device options for Gluster
> +#
> +# @volume:      name of gluster volume where VM image resides
> +#
> +# @path:        absolute path to image file in gluster volume
> +#
> +# @server:      gluster server description
> +#
> +# @debug_level: #optional libgfapi log level (default '4' which is Error)
> +#
> +# Since: 2.7
> +##
> +{ 'struct': 'BlockdevOptionsGluster',
> +  'data': { 'volume': 'str',
> +            'path': 'str',
> +            'server': 'GlusterServer',
> +            '*debug_level': 'int' } }
> +
> +##
>  # @BlockdevOptions
>  #
>  # Options for creating a block device.  Many options are available for all
> @@ -2119,7 +2203,7 @@
>        'file':       'BlockdevOptionsFile',
>        'ftp':        'BlockdevOptionsFile',
>        'ftps':       'BlockdevOptionsFile',
> -# TODO gluster: Wait for structured options
> +      'gluster':    'BlockdevOptionsGluster',
>        'host_cdrom': 'BlockdevOptionsFile',
>        'host_device':'BlockdevOptionsFile',
>        'http':       'BlockdevOptionsFile',


* Re: [Qemu-devel] [PATCH v19 5/5] block/gluster: add support for multiple gluster servers
  2016-07-15 15:00 ` [Qemu-devel] [PATCH v19 5/5] block/gluster: add support for multiple gluster servers Prasanna Kumar Kalever
@ 2016-07-18 10:17   ` Markus Armbruster
  2016-07-18 11:51     ` Prasanna Kalever
  0 siblings, 1 reply; 23+ messages in thread
From: Markus Armbruster @ 2016-07-18 10:17 UTC (permalink / raw)
  To: Prasanna Kumar Kalever; +Cc: qemu-devel, vbellur, jcody, rtalur

Prasanna Kumar Kalever <prasanna.kalever@redhat.com> writes:

> This patch adds a way to specify multiple volfile servers to the gluster
> block backend of QEMU with tcp|rdma transport types and their port numbers.
>
> Problem:
>
> Currently VM Image on gluster volume is specified like this:
>
> file=gluster[+tcp]://host[:port]/testvol/a.img
>
> Say we have three hosts in a trusted pool with replica 3 volume in action.
> When the host mentioned in the command above goes down for some reason,
> the other two hosts are still available. But there's currently no way
> to tell QEMU about them.
>
> Solution:
>
> New way of specifying VM Image on gluster volume with volfile servers:
> (We still support old syntax to maintain backward compatibility)
>
> Basic command line syntax looks like:
>
> Pattern I:
>  -drive driver=gluster,
>         volume=testvol,path=/path/a.raw,[debug=N,]
>         server.0.type=tcp,
>         server.0.host=1.2.3.4,
>        [server.0.port=24007,]
>         server.1.type=unix,
>         server.1.socket=/path/socketfile
>
> Pattern II:
>  'json:{"driver":"qcow2","file":{"driver":"gluster",
>        "volume":"testvol","path":"/path/a.qcow2",["debug":N,]
>        "server":[{hostinfo_1}, ...{hostinfo_N}]}}'
>
>    driver      => 'gluster' (protocol name)
>    volume      => name of gluster volume where our VM image resides
>    path        => absolute path of image in gluster volume
>   [debug]      => libgfapi loglevel [(0 - 9) default 4 -> Error]
>
>   {hostinfo}   => {{type:"tcp",host:"1.2.3.4"[,port=24007]},
>                    {type:"unix",socket:"/path/sockfile"}}
>
>    type        => transport type used to connect to gluster management daemon,
>                   it can be tcp|unix
>    host        => host address (hostname/ipv4/ipv6 addresses/socket path)
>   [port]       => port number on which glusterd is listening. (default 24007)
>    socket      => path to socket file
>
> Examples:
> 1.
>  -drive driver=qcow2,file.driver=gluster,
>         file.volume=testvol,file.path=/path/a.qcow2,file.debug=9,
>         file.server.0.type=tcp,
>         file.server.0.host=1.2.3.4,
>         file.server.0.port=24007,
>         file.server.1.type=tcp,
>         file.server.1.socket=/var/run/glusterd.socket
> 2.
>  'json:{"driver":"qcow2","file":{"driver":"gluster","volume":"testvol",
>          "path":"/path/a.qcow2","debug":9,"server":
>          [{type:"tcp",host:"1.2.3.4",port=24007},
>           {type:"unix",socket:"/var/run/glusterd.socket"}] } }'
>
> This patch gives a mechanism to provide all the server addresses, which are in
> replica set, so in case host1 is down VM can still boot from any of the
> active hosts.
>
> This is equivalent to the backup-volfile-servers option supported by
> mount.glusterfs (FUSE way of mounting gluster volume)
>
> credits: sincere thanks to all the supporters
>
> Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
> ---
>  block/gluster.c      | 347 +++++++++++++++++++++++++++++++++++++++++++++------
>  qapi/block-core.json |   2 +-
>  2 files changed, 307 insertions(+), 42 deletions(-)
>
> diff --git a/block/gluster.c b/block/gluster.c
> index ff1e783..fd2279d 100644
> --- a/block/gluster.c
> +++ b/block/gluster.c
> @@ -12,8 +12,16 @@
>  #include "block/block_int.h"
>  #include "qapi/error.h"
>  #include "qemu/uri.h"
> +#include "qemu/error-report.h"
>  
>  #define GLUSTER_OPT_FILENAME        "filename"
> +#define GLUSTER_OPT_VOLUME          "volume"
> +#define GLUSTER_OPT_PATH            "path"
> +#define GLUSTER_OPT_TYPE            "type"
> +#define GLUSTER_OPT_SERVER_PATTERN  "server."
> +#define GLUSTER_OPT_HOST            "host"
> +#define GLUSTER_OPT_PORT            "port"
> +#define GLUSTER_OPT_SOCKET          "socket"
>  #define GLUSTER_OPT_DEBUG           "debug"
>  #define GLUSTER_DEFAULT_PORT        24007
>  #define GLUSTER_DEBUG_DEFAULT       4
> @@ -82,6 +90,77 @@ static QemuOptsList runtime_opts = {
>      },
>  };
>  
> +static QemuOptsList runtime_json_opts = {
> +    .name = "gluster_json",
> +    .head = QTAILQ_HEAD_INITIALIZER(runtime_json_opts.head),
> +    .desc = {
> +        {
> +            .name = GLUSTER_OPT_VOLUME,
> +            .type = QEMU_OPT_STRING,
> +            .help = "name of gluster volume where VM image resides",
> +        },
> +        {
> +            .name = GLUSTER_OPT_PATH,
> +            .type = QEMU_OPT_STRING,
> +            .help = "absolute path to image file in gluster volume",
> +        },
> +        {
> +            .name = GLUSTER_OPT_DEBUG,
> +            .type = QEMU_OPT_NUMBER,
> +            .help = "Gluster log level, valid range is 0-9",
> +        },
> +        { /* end of list */ }
> +    },
> +};
> +
> +static QemuOptsList runtime_type_opts = {
> +    .name = "gluster_type",
> +    .head = QTAILQ_HEAD_INITIALIZER(runtime_type_opts.head),
> +    .desc = {
> +        {
> +            .name = GLUSTER_OPT_TYPE,
> +            .type = QEMU_OPT_STRING,
> +            .help = "tcp|unix",
> +        },
> +        { /* end of list */ }
> +    },
> +};
> +
> +static QemuOptsList runtime_unix_opts = {
> +    .name = "gluster_unix",
> +    .head = QTAILQ_HEAD_INITIALIZER(runtime_unix_opts.head),
> +    .desc = {
> +        {
> +            .name = GLUSTER_OPT_SOCKET,
> +            .type = QEMU_OPT_STRING,
> +            .help = "socket file path)",
> +        },
> +        { /* end of list */ }
> +    },
> +};
> +
> +static QemuOptsList runtime_tcp_opts = {
> +    .name = "gluster_tcp",
> +    .head = QTAILQ_HEAD_INITIALIZER(runtime_tcp_opts.head),
> +    .desc = {
> +        {
> +            .name = GLUSTER_OPT_TYPE,
> +            .type = QEMU_OPT_STRING,
> +            .help = "tcp|unix",
> +        },
> +        {
> +            .name = GLUSTER_OPT_HOST,
> +            .type = QEMU_OPT_STRING,
> +            .help = "host address (hostname/ipv4/ipv6 addresses)",
> +        },
> +        {
> +            .name = GLUSTER_OPT_PORT,
> +            .type = QEMU_OPT_NUMBER,
> +            .help = "port number on which glusterd is listening (default 24007)",
> +        },
> +        { /* end of list */ }
> +    },
> +};
>  
>  static int parse_volume_options(BlockdevOptionsGluster *gconf, char *path)
>  {
> @@ -157,7 +236,8 @@ static int qemu_gluster_parse_uri(BlockdevOptionsGluster *gconf,
>          return -EINVAL;
>      }
>  
> -    gconf->server = gsconf = g_new0(GlusterServer, 1);
> +    gconf->server = g_new0(GlusterServerList, 1);
> +    gconf->server->value = gsconf = g_new0(GlusterServer, 1);
>  
>      /* transport */
>      if (!uri->scheme || !strcmp(uri->scheme, "gluster")) {
> @@ -211,38 +291,34 @@ out:
>      return ret;
>  }
>  
> -static struct glfs *qemu_gluster_init(BlockdevOptionsGluster *gconf,
> -                                      const char *filename, Error **errp)
> +static struct glfs *qemu_gluster_glfs_init(BlockdevOptionsGluster *gconf,
> +                                           Error **errp)
>  {
> -    struct glfs *glfs = NULL;
> +    struct glfs *glfs;
>      int ret;
>      int old_errno;
> -
> -    ret = qemu_gluster_parse_uri(gconf, filename);
> -    if (ret < 0) {
> -        error_setg(errp, "Usage: file=gluster[+transport]://[host[:port]]/"
> -                         "volume/path[?socket=...]");
> -        errno = -ret;
> -        goto out;
> -    }
> +    GlusterServerList *server;
>  
>      glfs = glfs_new(gconf->volume);
>      if (!glfs) {
>          goto out;
>      }
>  
> -    if (gconf->server->type == GLUSTER_TRANSPORT_UNIX) {
> -        ret = glfs_set_volfile_server(glfs,
> -                                      GlusterTransport_lookup[gconf->server->type],
> -                                      gconf->server->u.q_unix.socket, 0);
> -    } else {
> -        ret = glfs_set_volfile_server(glfs,
> -                                      GlusterTransport_lookup[gconf->server->type],
> -                                      gconf->server->u.tcp.host,
> -                                      gconf->server->u.tcp.port);
> -    }
> -    if (ret < 0) {
> -        goto out;
> +    for (server = gconf->server; server; server = server->next) {
> +        if (server->value->type  == GLUSTER_TRANSPORT_UNIX) {
> +            ret = glfs_set_volfile_server(glfs,
> +                                          GlusterTransport_lookup[server->value->type],
> +                                          server->value->u.q_unix.socket, 0);
> +        } else {
> +            ret = glfs_set_volfile_server(glfs,
> +                                          GlusterTransport_lookup[server->value->type],
> +                                          server->value->u.tcp.host,
> +                                          server->value->u.tcp.port);

server->value->u.tcp.port is optional.  Using it without checking
server->value->u.tcp.has_port relies on the default value being zero.  We
don't actually document that.  Perhaps we should.
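
A minimal stand-alone sketch of the explicit check being asked for.  The
struct below is a simplified stand-in for the QAPI-generated
GlusterInetSocketAddress, not QEMU's actual generated code;
GLUSTER_DEFAULT_PORT matches the constant in the patch above:

```c
#include <stdbool.h>

/* Simplified stand-in for the QAPI-generated type: an optional member
 * 'port' comes with a has_port flag that records whether it was set. */
typedef struct {
    const char *host;
    bool has_port;               /* true iff the user supplied "port" */
    unsigned short port;
} InetAddr;

#define GLUSTER_DEFAULT_PORT 24007

/* Resolve the effective port explicitly: the default lives here, in
 * one documented place, instead of being implied by the unset member
 * happening to be zero-initialized. */
unsigned short effective_port(const InetAddr *a)
{
    return a->has_port ? a->port : GLUSTER_DEFAULT_PORT;
}
```

An address parsed without a port then resolves to 24007 explicitly,
rather than passing 0 to glfs_set_volfile_server() and hoping the
library treats 0 as "default".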

> +        }
> +
> +        if (ret < 0) {
> +            goto out;
> +        }
>      }
>  
>      ret = glfs_set_logging(glfs, "-", gconf->debug_level);
> @@ -252,19 +328,19 @@ static struct glfs *qemu_gluster_init(BlockdevOptionsGluster *gconf,
>  
>      ret = glfs_init(glfs);
>      if (ret) {
> -        if (gconf->server->type == GLUSTER_TRANSPORT_UNIX) {
> -            error_setg(errp,
> -                       "Gluster connection for volume: %s, path: %s "
> -                       "failed to connect on socket %s ",
> -                       gconf->volume, gconf->path,
> -                       gconf->server->u.q_unix.socket);
> -        } else {
> -            error_setg(errp,
> -                       "Gluster connection for volume: %s, path: %s "
> -                       "failed to connect  host  %s and port %d ",
> -                       gconf->volume, gconf->path,
> -                       gconf->server->u.tcp.host, gconf->server->u.tcp.port);
> +        error_setg(errp, "Gluster connection for volume: %s, path: %s ",
> +                   gconf->volume, gconf->path);
> +        for (server = gconf->server; server; server = server->next) {
> +            if (server->value->type  == GLUSTER_TRANSPORT_UNIX) {
> +                error_append_hint(errp, "failed to connect on socket %s ",
> +                                  server->value->u.q_unix.socket);
> +            } else {
> +                error_append_hint(errp, "failed to connect host %s and port %d ",
> +                                  server->value->u.tcp.host,
> +                                  server->value->u.tcp.port);
> +            }
>          }
> +        error_append_hint(errp, "Please refer to gluster logs for more info ");

Your code produces the error message "Gluster connection for volume:
VOLUME, path: PATH ", which makes no sense.

It also produces a hint that is a concatenation of one or more "failed
to connect on FOO", followed by "Please refer to ..." without any
punctuation, but with a trailing space.

The error message must make sense on its own, without the hint.

A fixed error message could look like this:

    Gluster connection for volume VOLUME, path PATH failed to connect on FOO, on BAR, and on BAZ

or with a little less effort

    Gluster connection for volume VOLUME, path PATH failed to connect on FOO, BAR, BAZ

or simply

    Can't connect to Gluster volume VOLUME, path PATH at FOO, BAR, BAZ

You can't build up the error message with error_append_hint().  Using it
to append a hint pointing to Gluster logs is fine.
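
One way to get such a self-contained message is to accumulate the
endpoint list into a single buffer first and hand the result to one
error_setg() call.  A stand-alone sketch under that assumption — plain
snprintf() stands in for QEMU's Error machinery, and Server is a
simplified stand-in for GlusterServerList:

```c
#include <stdio.h>
#include <string.h>

/* Simplified stand-in for GlusterServerList: each node carries a
 * preformatted endpoint description ("host:port" or a socket path). */
typedef struct Server {
    const char *desc;
    const struct Server *next;
} Server;

/* Build the whole error message in one buffer, so it makes sense on
 * its own; hints about Gluster logs can still be appended separately. */
void format_connect_error(char *buf, size_t len,
                          const char *volume, const char *path,
                          const Server *list)
{
    size_t n = snprintf(buf, len,
                        "Can't connect to Gluster volume %s, path %s at",
                        volume, path);
    for (const Server *s = list; s && n < len; s = s->next) {
        n += snprintf(buf + n, len - n, " %s%s",
                      s->desc, s->next ? "," : "");
    }
}
```

For two failed endpoints this yields the "simply" variant above:
"Can't connect to Gluster volume VOLUME, path PATH at FOO, BAR".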

>  
>          /* glfs_init sometimes doesn't set errno although docs suggest that */
>          if (errno == 0) {
> @@ -284,6 +360,195 @@ out:
>      return NULL;
>  }
>  
> +static int qapi_enum_parse(const char *opt)
> +{
> +    int i;
> +
> +    if (!opt) {
> +        return GLUSTER_TRANSPORT__MAX;
> +    }
> +
> +    for (i = 0; i < GLUSTER_TRANSPORT__MAX; i++) {
> +        if (!strcmp(opt, GlusterTransport_lookup[i])) {
> +            return i;
> +        }
> +    }
> +
> +    return i;
> +}
> +
> +/*
> + * Convert the json formatted command line into qapi.
> +*/
> +static int qemu_gluster_parse_json(BlockdevOptionsGluster *gconf,
> +                                  QDict *options, Error **errp)
> +{
> +    QemuOpts *opts;
> +    GlusterServer *gsconf;
> +    GlusterServerList *curr = NULL;
> +    QDict *backing_options = NULL;
> +    Error *local_err = NULL;
> +    char *str = NULL;
> +    const char *ptr;
> +    size_t num_servers;
> +    int i;
> +
> +    /* create opts info from runtime_json_opts list */
> +    opts = qemu_opts_create(&runtime_json_opts, NULL, 0, &error_abort);
> +    qemu_opts_absorb_qdict(opts, options, &local_err);
> +    if (local_err) {
> +        goto out;
> +    }
> +
> +    num_servers = qdict_array_entries(options, GLUSTER_OPT_SERVER_PATTERN);
> +    if (num_servers < 1) {
> +        error_setg(&local_err, "please provide 'server' option with valid "
> +                               "fields in array of hostinfo ");

This isn't an error message, it's instructions what to do.  Such
instructions can be useful when they're correct, but they can't replace
an error message.  The error message should state what's wrong.  No
less, no more.

Moreover, avoid prefixes like "qemu_gluster:".  Usually, the fact that
this is about Gluster is obvious.  When it isn't, providing context is
the caller's job.

> +        goto out;
> +    }
> +
> +    ptr = qemu_opt_get(opts, GLUSTER_OPT_VOLUME);
> +    if (!ptr) {
> +        error_setg(&local_err, "please provide 'volume' option ");

Not an error message.

> +        goto out;
> +    }
> +    gconf->volume = g_strdup(ptr);
> +
> +    ptr = qemu_opt_get(opts, GLUSTER_OPT_PATH);
> +    if (!ptr) {
> +        error_setg(&local_err, "please provide 'path' option ");

Not an error message.

> +        goto out;
> +    }
> +    gconf->path = g_strdup(ptr);
> +    qemu_opts_del(opts);
> +
> +    for (i = 0; i < num_servers; i++) {
> +        str = g_strdup_printf(GLUSTER_OPT_SERVER_PATTERN"%d.", i);
> +        qdict_extract_subqdict(options, &backing_options, str);
> +
> +        /* create opts info from runtime_type_opts list */
> +        opts = qemu_opts_create(&runtime_type_opts, NULL, 0, &error_abort);
> +        qemu_opts_absorb_qdict(opts, backing_options, &local_err);
> +        if (local_err) {
> +            goto out;
> +        }
> +
> +        ptr = qemu_opt_get(opts, GLUSTER_OPT_TYPE);
> +        gsconf = g_new0(GlusterServer, 1);
> +        gsconf->type = qapi_enum_parse(ptr);
> +        if (!ptr) {
> +            error_setg(&local_err, "please provide 'type' in hostinfo.%d as "
> +                                   "tcp|unix ", i);

Not an error message.

> +            goto out;
> +
> +        }
> +        if (gsconf->type == GLUSTER_TRANSPORT__MAX) {
> +            error_setg(&local_err, "'type' in hostinfo.%d can be tcp|unix ", i);
> +            goto out;
> +        }
> +        qemu_opts_del(opts);
> +
> +        if (gsconf->type == GLUSTER_TRANSPORT_TCP) {
> +            /* create opts info from runtime_tcp_opts list */
> +            opts = qemu_opts_create(&runtime_tcp_opts, NULL, 0, &error_abort);
> +            qemu_opts_absorb_qdict(opts, backing_options, &local_err);
> +            if (local_err) {
> +                goto out;
> +            }
> +
> +            ptr = qemu_opt_get(opts, GLUSTER_OPT_HOST);
> +            if (!ptr) {
> +                error_setg(&local_err, "please provide 'host' in hostinfo.%d ",
> +                           i);
> +                goto out;
> +            }
> +            gsconf->u.tcp.host = g_strdup(ptr);
> +            ptr = qemu_opt_get(opts, GLUSTER_OPT_PORT);
> +            gsconf->u.tcp.port = qemu_opt_get_number(opts, GLUSTER_OPT_PORT,
> +                                               GLUSTER_DEFAULT_PORT);
> +            gsconf->u.tcp.has_port = true;
> +            qemu_opts_del(opts);
> +        } else {
> +            /* create opts info from runtime_unix_opts list */
> +            opts = qemu_opts_create(&runtime_unix_opts, NULL, 0, &error_abort);
> +            qemu_opts_absorb_qdict(opts, backing_options, &local_err);
> +            if (local_err) {
> +                goto out;
> +            }
> +
> +            ptr = qemu_opt_get(opts, GLUSTER_OPT_SOCKET);
> +            if (!ptr) {
> +                error_setg(&local_err, "please provide 'socket' in hostinfo.%d ",
> +                           i);
> +                goto out;
> +            }
> +            gsconf->u.q_unix.socket = g_strdup(ptr);
> +            qemu_opts_del(opts);
> +        }
> +
> +        if (gconf->server == NULL) {
> +            gconf->server = g_new0(GlusterServerList, 1);
> +            gconf->server->value = gsconf;
> +            curr = gconf->server;
> +        } else {
> +            curr->next = g_new0(GlusterServerList, 1);
> +            curr->next->value = gsconf;
> +            curr = curr->next;
> +        }
> +
> +        qdict_del(backing_options, str);
> +        g_free(str);
> +        str = NULL;
> +    }
> +
> +    return 0;
> +
> +out:
> +    if (local_err) {
> +        error_propagate(errp, local_err);
> +    }

error_propagate() does the right thing when its second argument is
null.  Please drop the conditional.
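For reference, the contract being relied on can be sketched with a
minimal stand-in (QEMU's real error_propagate() lives in util/error.c
and additionally frees the error when it cannot be stored):

```c
#include <stddef.h>

/* Minimal stand-in for QEMU's Error, just enough to show the calling
 * convention; not the real type. */
typedef struct Error { const char *msg; } Error;

/* Like QEMU's error_propagate(): a NULL local_err is a safe no-op,
 * so callers never need an "if (local_err)" guard around the call. */
void error_propagate(Error **dst_errp, Error *local_err)
{
    if (!local_err) {
        return;                  /* nothing to propagate */
    }
    if (dst_errp && !*dst_errp) {
        *dst_errp = local_err;   /* first error wins */
    }
}
```

Because NULL is handled inside the function, the cleanup path can call
it unconditionally.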

> +    qemu_opts_del(opts);
> +    if (str) {
> +        qdict_del(backing_options, str);
> +        g_free(str);
> +    }
> +    errno = EINVAL;
> +    return -errno;
> +}
> +
> +static struct glfs *qemu_gluster_init(BlockdevOptionsGluster *gconf,
> +                                      const char *filename,
> +                                      QDict *options, Error **errp)
> +{
> +    int ret;
> +    if (filename) {
> +        ret = qemu_gluster_parse_uri(gconf, filename);
> +        if (ret < 0) {
> +            error_setg(errp, "Usage: file=gluster[+transport]://[host[:port]]/"
> +                             "volume/path[?socket=...]");

Not an error message.

> +            errno = -ret;
> +            return NULL;
> +        }
> +    } else {
> +        ret = qemu_gluster_parse_json(gconf, options, errp);
> +        if (ret < 0) {
> +            error_append_hint(errp, "Usage: "
> +                             "-drive driver=qcow2,file.driver=gluster,"
> +                             "file.volume=testvol,file.path=/path/a.qcow2"
> +                             "[,file.debug=9],file.server.0.type=tcp,"
> +                             "file.server.0.host=1.2.3.4,"
> +                             "[file.server.0.port=24007,]"
> +                             "file.server.1.transport=unix,"
> +                             "file.server.1.socket=/var/run/glusterd.socket ...");
> +            errno = -ret;
> +            return NULL;
> +        }
> +
> +    }
> +
> +    return qemu_gluster_glfs_init(gconf, errp);
> +}
> +
>  static void qemu_gluster_complete_aio(void *opaque)
>  {
>      GlusterAIOCB *acb = (GlusterAIOCB *)opaque;
> @@ -383,7 +648,7 @@ static int qemu_gluster_open(BlockDriverState *bs,  QDict *options,
>      gconf = g_new0(BlockdevOptionsGluster, 1);
>      gconf->debug_level = s->debug_level;
>      gconf->has_debug_level = true;
> -    s->glfs = qemu_gluster_init(gconf, filename, errp);
> +    s->glfs = qemu_gluster_init(gconf, filename, options, errp);
>      if (!s->glfs) {
>          ret = -errno;
>          goto out;
> @@ -454,7 +719,7 @@ static int qemu_gluster_reopen_prepare(BDRVReopenState *state,
>      gconf = g_new0(BlockdevOptionsGluster, 1);
>      gconf->debug_level = s->debug_level;
>      gconf->has_debug_level = true;
> -    reop_s->glfs = qemu_gluster_init(gconf, state->bs->filename, errp);
> +    reop_s->glfs = qemu_gluster_init(gconf, state->bs->filename, NULL, errp);
>      if (reop_s->glfs == NULL) {
>          ret = -errno;
>          goto exit;
> @@ -601,7 +866,7 @@ static int qemu_gluster_create(const char *filename,
>      }
>      gconf->has_debug_level = true;
>  
> -    glfs = qemu_gluster_init(gconf, filename, errp);
> +    glfs = qemu_gluster_init(gconf, filename, NULL, errp);
>      if (!glfs) {
>          ret = -errno;
>          goto out;
> @@ -981,7 +1246,7 @@ static BlockDriver bdrv_gluster = {
>      .format_name                  = "gluster",
>      .protocol_name                = "gluster",
>      .instance_size                = sizeof(BDRVGlusterState),
> -    .bdrv_needs_filename          = true,
> +    .bdrv_needs_filename          = false,
>      .bdrv_file_open               = qemu_gluster_open,
>      .bdrv_reopen_prepare          = qemu_gluster_reopen_prepare,
>      .bdrv_reopen_commit           = qemu_gluster_reopen_commit,
> @@ -1009,7 +1274,7 @@ static BlockDriver bdrv_gluster_tcp = {
>      .format_name                  = "gluster",
>      .protocol_name                = "gluster+tcp",
>      .instance_size                = sizeof(BDRVGlusterState),
> -    .bdrv_needs_filename          = true,
> +    .bdrv_needs_filename          = false,
>      .bdrv_file_open               = qemu_gluster_open,
>      .bdrv_reopen_prepare          = qemu_gluster_reopen_prepare,
>      .bdrv_reopen_commit           = qemu_gluster_reopen_commit,
> diff --git a/qapi/block-core.json b/qapi/block-core.json
> index d7b5c76..5557f1c 100644
> --- a/qapi/block-core.json
> +++ b/qapi/block-core.json
> @@ -2137,7 +2137,7 @@
>  { 'struct': 'BlockdevOptionsGluster',
>    'data': { 'volume': 'str',
>              'path': 'str',
> -            'server': 'GlusterServer',
> +            'server': ['GlusterServer'],
>              '*debug_level': 'int' } }
>  
>  ##


* Re: [Qemu-devel] [PATCH v19 3/5] block/gluster: remove rdma transport
  2016-07-18  8:53   ` Markus Armbruster
@ 2016-07-18 11:12     ` Prasanna Kalever
  2016-07-18 12:18       ` Markus Armbruster
  0 siblings, 1 reply; 23+ messages in thread
From: Prasanna Kalever @ 2016-07-18 11:12 UTC (permalink / raw)
  To: Markus Armbruster
  Cc: Prasanna Kumar Kalever, qemu-devel, Vijay Bellur, Jeffrey Cody,
	Raghavendra Talur

On Mon, Jul 18, 2016 at 2:23 PM, Markus Armbruster <armbru@redhat.com> wrote:
> Prasanna Kumar Kalever <prasanna.kalever@redhat.com> writes:
>
>> gluster volfile server fetch happens through unix and/or tcp, it doesn't
>> support volfile fetch over rdma, hence removing the dead code
>>
>> Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
>> ---
>>  block/gluster.c | 35 +----------------------------------
>>  1 file changed, 1 insertion(+), 34 deletions(-)
>>
>> diff --git a/block/gluster.c b/block/gluster.c
>> index 40ee852..59f77bb 100644
>> --- a/block/gluster.c
>> +++ b/block/gluster.c
>> @@ -134,8 +134,7 @@ static int parse_volume_options(GlusterConf *gconf, char *path)
>>   *
>>   * 'transport' specifies the transport type used to connect to gluster
>>   * management daemon (glusterd). Valid transport types are
>> - * tcp, unix and rdma. If a transport type isn't specified, then tcp
>> - * type is assumed.
>> + * tcp, unix. If a transport type isn't specified, then tcp type is assumed.
>>   *
>>   * 'host' specifies the host where the volume file specification for
>>   * the given volume resides. This can be either hostname, ipv4 address
>> @@ -162,7 +161,6 @@ static int parse_volume_options(GlusterConf *gconf, char *path)
>>   * file=gluster+tcp://[1:2:3:4:5:6:7:8]:24007/testvol/dir/a.img
>>   * file=gluster+tcp://host.domain.com:24007/testvol/dir/a.img
>>   * file=gluster+unix:///testvol/dir/a.img?socket=/tmp/glusterd.socket
>> - * file=gluster+rdma://1.2.3.4:24007/testvol/a.img
>>   */
>>  static int qemu_gluster_parseuri(GlusterConf *gconf, const char *filename)
>>  {
>> @@ -184,8 +182,6 @@ static int qemu_gluster_parseuri(GlusterConf *gconf, const char *filename)
>>      } else if (!strcmp(uri->scheme, "gluster+unix")) {
>>          gconf->transport = g_strdup("unix");
>
> Outside this patch's scope: string literals would be just fine for
> gconf->transport.

If we remove rdma support, the comments here may again drag people into confusion.
Do you recommend doing this as a separate patch?

>
>>          is_unix = true;
>> -    } else if (!strcmp(uri->scheme, "gluster+rdma")) {
>> -        gconf->transport = g_strdup("rdma");
>>      } else {
>>          ret = -EINVAL;
>>          goto out;
>> @@ -1048,37 +1044,8 @@ static BlockDriver bdrv_gluster_unix = {
>>      .create_opts                  = &qemu_gluster_create_opts,
>>  };
>>
>> -static BlockDriver bdrv_gluster_rdma = {
>> -    .format_name                  = "gluster",
>> -    .protocol_name                = "gluster+rdma",
>> -    .instance_size                = sizeof(BDRVGlusterState),
>> -    .bdrv_needs_filename          = true,
>> -    .bdrv_file_open               = qemu_gluster_open,
>> -    .bdrv_reopen_prepare          = qemu_gluster_reopen_prepare,
>> -    .bdrv_reopen_commit           = qemu_gluster_reopen_commit,
>> -    .bdrv_reopen_abort            = qemu_gluster_reopen_abort,
>> -    .bdrv_close                   = qemu_gluster_close,
>> -    .bdrv_create                  = qemu_gluster_create,
>> -    .bdrv_getlength               = qemu_gluster_getlength,
>> -    .bdrv_get_allocated_file_size = qemu_gluster_allocated_file_size,
>> -    .bdrv_truncate                = qemu_gluster_truncate,
>> -    .bdrv_co_readv                = qemu_gluster_co_readv,
>> -    .bdrv_co_writev               = qemu_gluster_co_writev,
>> -    .bdrv_co_flush_to_disk        = qemu_gluster_co_flush_to_disk,
>> -    .bdrv_has_zero_init           = qemu_gluster_has_zero_init,
>> -#ifdef CONFIG_GLUSTERFS_DISCARD
>> -    .bdrv_co_discard              = qemu_gluster_co_discard,
>> -#endif
>> -#ifdef CONFIG_GLUSTERFS_ZEROFILL
>> -    .bdrv_co_pwrite_zeroes        = qemu_gluster_co_pwrite_zeroes,
>> -#endif
>> -    .bdrv_co_get_block_status     = qemu_gluster_co_get_block_status,
>> -    .create_opts                  = &qemu_gluster_create_opts,
>> -};
>> -
>>  static void bdrv_gluster_init(void)
>>  {
>> -    bdrv_register(&bdrv_gluster_rdma);
>>      bdrv_register(&bdrv_gluster_unix);
>>      bdrv_register(&bdrv_gluster_tcp);
>>      bdrv_register(&bdrv_gluster);
>
> This is fine if gluster+rdma never actually worked.  I tried to find out
> at https://github.com/gluster/glusterfs/blob/master/api/src/glfs.h.
> Transport rdma is mentioned there.  Does it work?

This is the transport used for fetching the volume file from the nodes.
Previously it was tcp only; now it also supports the unix transport.

More response from gluster community
@http://www.gluster.org/pipermail/gluster-devel/2016-July/050112.html

Thanks Markus!

--
Prasanna

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [Qemu-devel] [PATCH v19 4/5] block/gluster: using new qapi schema
  2016-07-18  9:29   ` Markus Armbruster
@ 2016-07-18 11:29     ` Prasanna Kalever
  2016-07-18 13:11       ` Markus Armbruster
  0 siblings, 1 reply; 23+ messages in thread
From: Prasanna Kalever @ 2016-07-18 11:29 UTC (permalink / raw)
  To: Markus Armbruster
  Cc: Prasanna Kumar Kalever, qemu-devel, Vijay Bellur, Jeffrey Cody,
	Raghavendra Talur

On Mon, Jul 18, 2016 at 2:59 PM, Markus Armbruster <armbru@redhat.com> wrote:
> Prasanna Kumar Kalever <prasanna.kalever@redhat.com> writes:
>
>> this patch adds 'GlusterServer' related schema in qapi/block-core.json
>>
>> Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
>> ---
>>  block/gluster.c      | 111 +++++++++++++++++++++++++++++----------------------
>>  qapi/block-core.json |  94 ++++++++++++++++++++++++++++++++++++++++---
>>  2 files changed, 153 insertions(+), 52 deletions(-)
> [Skipping ahead to QAPI schema...]
>> diff --git a/qapi/block-core.json b/qapi/block-core.json
>> index a7fdb28..d7b5c76 100644
>> --- a/qapi/block-core.json
>> +++ b/qapi/block-core.json
>> @@ -1658,13 +1658,14 @@
>>  # @host_device, @host_cdrom: Since 2.1
>>  #
>>  # Since: 2.0
>> +# @gluster: Since 2.7
>>  ##
>>  { 'enum': 'BlockdevDriver',
>>    'data': [ 'archipelago', 'blkdebug', 'blkverify', 'bochs', 'cloop',
>> -            'dmg', 'file', 'ftp', 'ftps', 'host_cdrom', 'host_device',
>> -            'http', 'https', 'luks', 'null-aio', 'null-co', 'parallels',
>> -            'qcow', 'qcow2', 'qed', 'quorum', 'raw', 'tftp', 'vdi', 'vhdx',
>> -            'vmdk', 'vpc', 'vvfat' ] }
>> +            'dmg', 'file', 'ftp', 'ftps', 'gluster', 'host_cdrom',
>> +            'host_device', 'http', 'https', 'luks', 'null-aio', 'null-co',
>> +            'parallels', 'qcow', 'qcow2', 'qed', 'quorum', 'raw', 'tftp',
>> +            'vdi', 'vhdx', 'vmdk', 'vpc', 'vvfat' ] }
>>
>>  ##
>>  # @BlockdevOptionsFile
>> @@ -2057,6 +2058,89 @@
>>              '*read-pattern': 'QuorumReadPattern' } }
>>
>>  ##
>> +# @GlusterTransport
>> +#
>> +# An enumeration of Gluster transport type
>> +#
>> +# @tcp:   TCP   - Transmission Control Protocol
>> +#
>> +# @unix:  UNIX  - Unix domain socket
>> +#
>> +# Since: 2.7
>> +##
>> +{ 'enum': 'GlusterTransport',
>> +  'data': [ 'unix', 'tcp' ] }
>> +
>> +##
>> +# @GlusterUnixSocketAddress
>> +#
>> +# Captures a socket address in the local ("Unix socket") namespace.
>> +#
>> +# @socket:   absolute path to socket file
>> +#
>> +# Since 2.7
>> +##
>> +{ 'struct': 'GlusterUnixSocketAddress',
>> +  'data': { 'socket': 'str' } }
>
> Compare:
>
>    ##
>    # @UnixSocketAddress
>    #
>    # Captures a socket address in the local ("Unix socket") namespace.
>    #
>    # @path: filesystem path to use
>    #
>    # Since 1.3
>    ##
>    { 'struct': 'UnixSocketAddress',
>      'data': {
>        'path': 'str' } }
>
>> +
>> +##
>> +# @GlusterInetSocketAddress
>> +#
>> +# Captures a socket address or address range in the Internet namespace.
>> +#
>> +# @host:  host part of the address
>> +#
>> +# @port:  #optional port part of the address, or lowest port if @to is present
>
> There is no @to.
>
> What's the default port?

#define GLUSTER_DEFAULT_PORT        24007

>
>> +#
>> +# Since 2.7
>> +##
>> +{ 'struct': 'GlusterInetSocketAddress',
>> +  'data': { 'host': 'str',
>> +            '*port': 'uint16' } }
>
> Compare:
>
>    ##
>    # @InetSocketAddress
>    #
>    # Captures a socket address or address range in the Internet namespace.
>    #
>    # @host: host part of the address
>    #
>    # @port: port part of the address, or lowest port if @to is present
>    #
>    # @to: highest port to try
>    #
>    # @ipv4: whether to accept IPv4 addresses, default try both IPv4 and IPv6
>    #        #optional
>    #
>    # @ipv6: whether to accept IPv6 addresses, default try both IPv4 and IPv6
>    #        #optional
>    #
>    # Since 1.3
>    ##
>    { 'struct': 'InetSocketAddress',
>      'data': {
>        'host': 'str',
>        'port': 'str',
>        '*to': 'uint16',
>        '*ipv4': 'bool',
>        '*ipv6': 'bool' } }
>
>> +
>> +##
>> +# @GlusterServer
>> +#
>> +# Captures the address of a socket
>> +#
>> +# Details for connecting to a gluster server
>> +#
>> +# @type:       Transport type used for gluster connection
>> +#
>> +# @unix:       socket file
>> +#
>> +# @tcp:        host address and port number
>> +#
>> +# Since: 2.7
>> +##
>> +{ 'union': 'GlusterServer',
>> +  'base': { 'type': 'GlusterTransport' },
>> +  'discriminator': 'type',
>> +  'data': { 'unix': 'GlusterUnixSocketAddress',
>> +            'tcp': 'GlusterInetSocketAddress' } }
>> +
>
> Compare:
>
>    ##
>    # @SocketAddress
>    #
>    # Captures the address of a socket, which could also be a named file descriptor
>    #
>    # Since 1.3
>    ##
>    { 'union': 'SocketAddress',
>      'data': {
>        'inet': 'InetSocketAddress',
>        'unix': 'UnixSocketAddress',
>        'fd': 'String' } }
>
> You cleaned up the confusing abuse of @host as Unix domain socket file
> name.  Good.
>
> You're still defining your own QAPI types instead of using the existing
> ones.  To see whether that's a good idea, let's compare your definitions
> to the existing ones:
>
> * GlusterUnixSocketAddress vs. UnixSocketAddress
>
>   Identical but for the member name.  Why can't we use
>   UnixSocketAddress?

Maybe you are right; it may not be worth a separate type just for a member name.

>
> * GlusterInetSocketAddress vs. InetSocketAddress
>
>   Changes in GlusterInetSocketAddress over InetSocketAddress:
>
>   - @port is optional
>
>     Convenience feature.  We can discuss making it optional in
>     InetSocketAddress, too.

sure.

>
>   - @port has type 'uint16' instead of 'str'
>
>     No support for service names.  Why is that a good idea?

I honestly do not understand tying service names to ports.
As far as I understand, at least for the gluster use-case, it is just
an integer ranging from 0 - 65535 (mostly 24007).
I am happy to learn more about this.

>
>   - Lacks @to
>
>     No support for trying a range of ports.  I guess that simply makes
>     no sense for a Gluster client.  I guess makes no sense in some other
>     uses of InetSocketAddress, too.  Eric has proposed to split off
>     InetSocketAddressRange off InetSocketAddress.
I still don't understand the point: why would we need to loop through a range of ports?

>
>   - Lacks @ipv4, @ipv6

Gluster doesn't support ipv6 addresses (some work is needed in its rpc code).
Do you think it is worth having control over ipv4/ipv6 just to
restrict ipv6 addresses?

>
>     No control over IPv4 vs. IPv6 use.  Immediate show-stopper.
>
>   Can we use InetSocketAddress?

Sorry, I don't have an answer, since I was unclear in my comments above.

>
> * GlusterServer vs. SocketAddress
>
>   GlusterServer lacks case 'fd'.  Do file descriptors make no sense
>   here, or is it merely an implementation restriction?

Again, Gluster doesn't support them.

>
>   GlusterServer is a flat union, SocketAddress is a simple union.  The flat
>   unions is nicer on the wire:
>       { "type": "tcp", "host": "gluster.example.com", ... }
>   vs.
>       { "type": "tcp", "data": { "host": "gluster.example.com", ... }
>
>   Perhaps we should use a flat union for new interfaces.
>
>> +##
>> +# @BlockdevOptionsGluster
>> +#
>> +# Driver specific block device options for Gluster
>> +#
>> +# @volume:      name of gluster volume where VM image resides
>> +#
>> +# @path:        absolute path to image file in gluster volume
>> +#
>> +# @server:      gluster server description
>> +#
>> +# @debug_level: #optional libgfapi log level (default '4' which is Error)
>> +#
>> +# Since: 2.7
>> +##
>> +{ 'struct': 'BlockdevOptionsGluster',
>> +  'data': { 'volume': 'str',
>> +            'path': 'str',
>> +            'server': 'GlusterServer',
>> +            '*debug_level': 'int' } }
>> +
>> +##
>>  # @BlockdevOptions
>>  #
>>  # Options for creating a block device.  Many options are available for all
>> @@ -2119,7 +2203,7 @@
>>        'file':       'BlockdevOptionsFile',
>>        'ftp':        'BlockdevOptionsFile',
>>        'ftps':       'BlockdevOptionsFile',
>> -# TODO gluster: Wait for structured options
>> +      'gluster':    'BlockdevOptionsGluster',
>>        'host_cdrom': 'BlockdevOptionsFile',
>>        'host_device':'BlockdevOptionsFile',
>>        'http':       'BlockdevOptionsFile',

Thanks Markus.

--
Prasanna

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [Qemu-devel] [PATCH v19 5/5] block/gluster: add support for multiple gluster servers
  2016-07-18 10:17   ` Markus Armbruster
@ 2016-07-18 11:51     ` Prasanna Kalever
  2016-07-18 14:39       ` Markus Armbruster
  0 siblings, 1 reply; 23+ messages in thread
From: Prasanna Kalever @ 2016-07-18 11:51 UTC (permalink / raw)
  To: Markus Armbruster
  Cc: Prasanna Kumar Kalever, qemu-devel, Vijay Bellur, Jeffrey Cody,
	Raghavendra Talur

On Mon, Jul 18, 2016 at 3:47 PM, Markus Armbruster <armbru@redhat.com> wrote:
> Prasanna Kumar Kalever <prasanna.kalever@redhat.com> writes:
>
>> This patch adds a way to specify multiple volfile servers to the gluster
>> block backend of QEMU with tcp|rdma transport types and their port numbers.
>>
>> Problem:
>>
>> Currently VM Image on gluster volume is specified like this:
>>
>> file=gluster[+tcp]://host[:port]/testvol/a.img
>>
>> Say we have three hosts in a trusted pool with replica 3 volume in action.
>> When the host mentioned in the command above goes down for some reason,
>> the other two hosts are still available. But there's currently no way
>> to tell QEMU about them.
>>
>> Solution:
>>
>> New way of specifying VM Image on gluster volume with volfile servers:
>> (We still support old syntax to maintain backward compatibility)
>>
>> Basic command line syntax looks like:
>>
>> Pattern I:
>>  -drive driver=gluster,
>>         volume=testvol,path=/path/a.raw,[debug=N,]
>>         server.0.type=tcp,
>>         server.0.host=1.2.3.4,
>>        [server.0.port=24007,]
>>         server.1.type=unix,
>>         server.1.socket=/path/socketfile
>>
>> Pattern II:
>>  'json:{"driver":"qcow2","file":{"driver":"gluster",
>>        "volume":"testvol","path":"/path/a.qcow2",["debug":N,]
>>        "server":[{hostinfo_1}, ...{hostinfo_N}]}}'
>>
>>    driver      => 'gluster' (protocol name)
>>    volume      => name of gluster volume where our VM image resides
>>    path        => absolute path of image in gluster volume
>>   [debug]      => libgfapi loglevel [(0 - 9) default 4 -> Error]
>>
>>   {hostinfo}   => {{type:"tcp",host:"1.2.3.4"[,port=24007]},
>>                    {type:"unix",socket:"/path/sockfile"}}
>>
>>    type        => transport type used to connect to gluster management daemon,
>>                   it can be tcp|unix
>>    host        => host address (hostname/ipv4/ipv6 addresses/socket path)
>>   [port]       => port number on which glusterd is listening. (default 24007)
>>    socket      => path to socket file
>>
>> Examples:
>> 1.
>>  -drive driver=qcow2,file.driver=gluster,
>>         file.volume=testvol,file.path=/path/a.qcow2,file.debug=9,
>>         file.server.0.type=tcp,
>>         file.server.0.host=1.2.3.4,
>>         file.server.0.port=24007,
>> +         file.server.1.type=unix,
>>         file.server.1.socket=/var/run/glusterd.socket
>> 2.
>>  'json:{"driver":"qcow2","file":{"driver":"gluster","volume":"testvol",
>>          "path":"/path/a.qcow2","debug":9,"server":
>>          [{type:"tcp",host:"1.2.3.4",port=24007},
>>           {type:"unix",socket:"/var/run/glusterd.socket"}] } }'
>>
>> This patch gives a mechanism to provide all the server addresses, which are in
>> replica set, so in case host1 is down VM can still boot from any of the
>> active hosts.
>>
>> This is equivalent to the backup-volfile-servers option supported by
>> mount.glusterfs (FUSE way of mounting gluster volume)
>>
>> credits: sincere thanks to all the supporters
>>
>> Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
>> ---
>>  block/gluster.c      | 347 +++++++++++++++++++++++++++++++++++++++++++++------
>>  qapi/block-core.json |   2 +-
>>  2 files changed, 307 insertions(+), 42 deletions(-)
>>
>> diff --git a/block/gluster.c b/block/gluster.c
>> index ff1e783..fd2279d 100644
>> --- a/block/gluster.c
>> +++ b/block/gluster.c
>> @@ -12,8 +12,16 @@
>>  #include "block/block_int.h"
>>  #include "qapi/error.h"
>>  #include "qemu/uri.h"
>> +#include "qemu/error-report.h"
>>
>>  #define GLUSTER_OPT_FILENAME        "filename"
>> +#define GLUSTER_OPT_VOLUME          "volume"
>> +#define GLUSTER_OPT_PATH            "path"
>> +#define GLUSTER_OPT_TYPE            "type"
>> +#define GLUSTER_OPT_SERVER_PATTERN  "server."
>> +#define GLUSTER_OPT_HOST            "host"
>> +#define GLUSTER_OPT_PORT            "port"
>> +#define GLUSTER_OPT_SOCKET          "socket"
>>  #define GLUSTER_OPT_DEBUG           "debug"
>>  #define GLUSTER_DEFAULT_PORT        24007
>>  #define GLUSTER_DEBUG_DEFAULT       4
>> @@ -82,6 +90,77 @@ static QemuOptsList runtime_opts = {
>>      },
>>  };
>>
>> +static QemuOptsList runtime_json_opts = {
>> +    .name = "gluster_json",
>> +    .head = QTAILQ_HEAD_INITIALIZER(runtime_json_opts.head),
>> +    .desc = {
>> +        {
>> +            .name = GLUSTER_OPT_VOLUME,
>> +            .type = QEMU_OPT_STRING,
>> +            .help = "name of gluster volume where VM image resides",
>> +        },
>> +        {
>> +            .name = GLUSTER_OPT_PATH,
>> +            .type = QEMU_OPT_STRING,
>> +            .help = "absolute path to image file in gluster volume",
>> +        },
>> +        {
>> +            .name = GLUSTER_OPT_DEBUG,
>> +            .type = QEMU_OPT_NUMBER,
>> +            .help = "Gluster log level, valid range is 0-9",
>> +        },
>> +        { /* end of list */ }
>> +    },
>> +};
>> +
>> +static QemuOptsList runtime_type_opts = {
>> +    .name = "gluster_type",
>> +    .head = QTAILQ_HEAD_INITIALIZER(runtime_type_opts.head),
>> +    .desc = {
>> +        {
>> +            .name = GLUSTER_OPT_TYPE,
>> +            .type = QEMU_OPT_STRING,
>> +            .help = "tcp|unix",
>> +        },
>> +        { /* end of list */ }
>> +    },
>> +};
>> +
>> +static QemuOptsList runtime_unix_opts = {
>> +    .name = "gluster_unix",
>> +    .head = QTAILQ_HEAD_INITIALIZER(runtime_unix_opts.head),
>> +    .desc = {
>> +        {
>> +            .name = GLUSTER_OPT_SOCKET,
>> +            .type = QEMU_OPT_STRING,
>> +            .help = "socket file path",
>> +        },
>> +        { /* end of list */ }
>> +    },
>> +};
>> +
>> +static QemuOptsList runtime_tcp_opts = {
>> +    .name = "gluster_tcp",
>> +    .head = QTAILQ_HEAD_INITIALIZER(runtime_tcp_opts.head),
>> +    .desc = {
>> +        {
>> +            .name = GLUSTER_OPT_TYPE,
>> +            .type = QEMU_OPT_STRING,
>> +            .help = "tcp|unix",
>> +        },
>> +        {
>> +            .name = GLUSTER_OPT_HOST,
>> +            .type = QEMU_OPT_STRING,
>> +            .help = "host address (hostname/ipv4/ipv6 addresses)",
>> +        },
>> +        {
>> +            .name = GLUSTER_OPT_PORT,
>> +            .type = QEMU_OPT_NUMBER,
>> +            .help = "port number on which glusterd is listening (default 24007)",
>> +        },
>> +        { /* end of list */ }
>> +    },
>> +};
>>
>>  static int parse_volume_options(BlockdevOptionsGluster *gconf, char *path)
>>  {
>> @@ -157,7 +236,8 @@ static int qemu_gluster_parse_uri(BlockdevOptionsGluster *gconf,
>>          return -EINVAL;
>>      }
>>
>> -    gconf->server = gsconf = g_new0(GlusterServer, 1);
>> +    gconf->server = g_new0(GlusterServerList, 1);
>> +    gconf->server->value = gsconf = g_new0(GlusterServer, 1);
>>
>>      /* transport */
>>      if (!uri->scheme || !strcmp(uri->scheme, "gluster")) {
>> @@ -211,38 +291,34 @@ out:
>>      return ret;
>>  }
>>
>> -static struct glfs *qemu_gluster_init(BlockdevOptionsGluster *gconf,
>> -                                      const char *filename, Error **errp)
>> +static struct glfs *qemu_gluster_glfs_init(BlockdevOptionsGluster *gconf,
>> +                                           Error **errp)
>>  {
>> -    struct glfs *glfs = NULL;
>> +    struct glfs *glfs;
>>      int ret;
>>      int old_errno;
>> -
>> -    ret = qemu_gluster_parse_uri(gconf, filename);
>> -    if (ret < 0) {
>> -        error_setg(errp, "Usage: file=gluster[+transport]://[host[:port]]/"
>> -                         "volume/path[?socket=...]");
>> -        errno = -ret;
>> -        goto out;
>> -    }
>> +    GlusterServerList *server;
>>
>>      glfs = glfs_new(gconf->volume);
>>      if (!glfs) {
>>          goto out;
>>      }
>>
>> -    if (gconf->server->type == GLUSTER_TRANSPORT_UNIX) {
>> -        ret = glfs_set_volfile_server(glfs,
>> -                                      GlusterTransport_lookup[gconf->server->type],
>> -                                      gconf->server->u.q_unix.socket, 0);
>> -    } else {
>> -        ret = glfs_set_volfile_server(glfs,
>> -                                      GlusterTransport_lookup[gconf->server->type],
>> -                                      gconf->server->u.tcp.host,
>> -                                      gconf->server->u.tcp.port);
>> -    }
>> -    if (ret < 0) {
>> -        goto out;
>> +    for (server = gconf->server; server; server = server->next) {
>> +        if (server->value->type  == GLUSTER_TRANSPORT_UNIX) {
>> +            ret = glfs_set_volfile_server(glfs,
>> +                                          GlusterTransport_lookup[server->value->type],
>> +                                          server->value->u.q_unix.socket, 0);
>> +        } else {
>> +            ret = glfs_set_volfile_server(glfs,
>> +                                          GlusterTransport_lookup[server->value->type],
>> +                                          server->value->u.tcp.host,
>> +                                          server->value->u.tcp.port);
>
> server->value.u.tcp.port is optional.  Using it without checking
> server->value.u.tcp.has_port relies on the default value being zero.  We
> don't actually document that.  Perhaps we should.

I have made sure to fill in the port in the code, and also set
gsconf->u.tcp.has_port = true;

>
>> +        }
>> +
>> +        if (ret < 0) {
>> +            goto out;
>> +        }
>>      }
>>
>>      ret = glfs_set_logging(glfs, "-", gconf->debug_level);
>> @@ -252,19 +328,19 @@ static struct glfs *qemu_gluster_init(BlockdevOptionsGluster *gconf,
>>
>>      ret = glfs_init(glfs);
>>      if (ret) {
>> -        if (gconf->server->type == GLUSTER_TRANSPORT_UNIX) {
>> -            error_setg(errp,
>> -                       "Gluster connection for volume: %s, path: %s "
>> -                       "failed to connect on socket %s ",
>> -                       gconf->volume, gconf->path,
>> -                       gconf->server->u.q_unix.socket);
>> -        } else {
>> -            error_setg(errp,
>> -                       "Gluster connection for volume: %s, path: %s "
>> -                       "failed to connect  host  %s and port %d ",
>> -                       gconf->volume, gconf->path,
>> -                       gconf->server->u.tcp.host, gconf->server->u.tcp.port);
>> +        error_setg(errp, "Gluster connection for volume: %s, path: %s ",
>> +                   gconf->volume, gconf->path);
>> +        for (server = gconf->server; server; server = server->next) {
>> +            if (server->value->type  == GLUSTER_TRANSPORT_UNIX) {
>> +                error_append_hint(errp, "failed to connect on socket %s ",
>> +                                  server->value->u.q_unix.socket);
>> +            } else {
>> +                error_append_hint(errp, "failed to connect host %s and port %d ",
>> +                                  server->value->u.tcp.host,
>> +                                  server->value->u.tcp.port);
>> +            }
>>          }
>> +        error_append_hint(errp, "Please refer to gluster logs for more info ");
>
> Your code produces the error message "Gluster connection for volume:
> VOLUME, path: PATH ", which makes no sense.
>
> It also produces a hint that is a concatenation of one or more "failed
> to connect on FOO", followed by "Please refer to ..." without any
> punctuation, but with a trailing space.
>
> The error message must make sense on its own, without the hint.
>
> A fixed error message could look like this:
>
>     Gluster connection for volume VOLUME, path PATH failed to connect on FOO, on BAR, and on BAZ
>
> or with a little less effort
>
>     Gluster connection for volume VOLUME, path PATH failed to connect on FOO, BAR, BAZ
>
> or simply
>
>     Can't connect to Gluster volume VOLUME, path PATH at FOO, BAR, BAZ
>
> You can't build up the error message with error_append_hint().  Using it
> to append a hint pointing to Gluster logs is fine.

okay.

>
>>
>>          /* glfs_init sometimes doesn't set errno although docs suggest that */
>>          if (errno == 0) {
>> @@ -284,6 +360,195 @@ out:
>>      return NULL;
>>  }
>>
>> +static int qapi_enum_parse(const char *opt)
>> +{
>> +    int i;
>> +
>> +    if (!opt) {
>> +        return GLUSTER_TRANSPORT__MAX;
>> +    }
>> +
>> +    for (i = 0; i < GLUSTER_TRANSPORT__MAX; i++) {
>> +        if (!strcmp(opt, GlusterTransport_lookup[i])) {
>> +            return i;
>> +        }
>> +    }
>> +
>> +    return i;
>> +}
>> +
>> +/*
>> + * Convert the json formatted command line into qapi.
>> +*/
>> +static int qemu_gluster_parse_json(BlockdevOptionsGluster *gconf,
>> +                                  QDict *options, Error **errp)
>> +{
>> +    QemuOpts *opts;
>> +    GlusterServer *gsconf;
>> +    GlusterServerList *curr = NULL;
>> +    QDict *backing_options = NULL;
>> +    Error *local_err = NULL;
>> +    char *str = NULL;
>> +    const char *ptr;
>> +    size_t num_servers;
>> +    int i;
>> +
>> +    /* create opts info from runtime_json_opts list */
>> +    opts = qemu_opts_create(&runtime_json_opts, NULL, 0, &error_abort);
>> +    qemu_opts_absorb_qdict(opts, options, &local_err);
>> +    if (local_err) {
>> +        goto out;
>> +    }
>> +
>> +    num_servers = qdict_array_entries(options, GLUSTER_OPT_SERVER_PATTERN);
>> +    if (num_servers < 1) {
>> +        error_setg(&local_err, "please provide 'server' option with valid "
>> +                               "fields in array of hostinfo ");
>
> This isn't an error message, it's instructions what to do.  Such
> instructions can be useful when they're correct, but they can't replace
> an error message.  The error message should state what's wrong.  No
> less, no more.

Please let me know how to log the hint from a leaf function, where the Error
object has not been created yet.
I also feel it is more like an error in the usage of the json command.

>
> Moreover, avoid prefixes like "qemu_gluster:".  Usually, the fact that
> this is about Gluster is obvious.  When it isn't, providing context is
> the caller's job.

I think I have removed almost all 'qemu_gluster:' prefixes in v19,
following your comments on v18.

>
>> +        goto out;
>> +    }
>> +
>> +    ptr = qemu_opt_get(opts, GLUSTER_OPT_VOLUME);
>> +    if (!ptr) {
>> +        error_setg(&local_err, "please provide 'volume' option ");
>
> Not an error message.

Also same here ...

>
>> +        goto out;
>> +    }
>> +    gconf->volume = g_strdup(ptr);
>> +
>> +    ptr = qemu_opt_get(opts, GLUSTER_OPT_PATH);
>> +    if (!ptr) {
>> +        error_setg(&local_err, "please provide 'path' option ");
>
> Not an error message.

here ...

>
>> +        goto out;
>> +    }
>> +    gconf->path = g_strdup(ptr);
>> +    qemu_opts_del(opts);
>> +
>> +    for (i = 0; i < num_servers; i++) {
>> +        str = g_strdup_printf(GLUSTER_OPT_SERVER_PATTERN"%d.", i);
>> +        qdict_extract_subqdict(options, &backing_options, str);
>> +
>> +        /* create opts info from runtime_type_opts list */
>> +        opts = qemu_opts_create(&runtime_type_opts, NULL, 0, &error_abort);
>> +        qemu_opts_absorb_qdict(opts, backing_options, &local_err);
>> +        if (local_err) {
>> +            goto out;
>> +        }
>> +
>> +        ptr = qemu_opt_get(opts, GLUSTER_OPT_TYPE);
>> +        gsconf = g_new0(GlusterServer, 1);
>> +        gsconf->type = qapi_enum_parse(ptr);
>> +        if (!ptr) {
>> +            error_setg(&local_err, "please provide 'type' in hostinfo.%d as "
>> +                                   "tcp|unix ", i);
>
> Not an error message.

and here ...
How do I say what's really wrong in the command, which could be long
(if the user provides N servers in the list)?

>
>> +            goto out;
>> +
>> +        }
>> +        if (gsconf->type == GLUSTER_TRANSPORT__MAX) {
>> +            error_setg(&local_err, "'type' in hostinfo.%d can be tcp|unix ", i);
>> +            goto out;
>> +        }
>> +        qemu_opts_del(opts);
>> +
>> +        if (gsconf->type == GLUSTER_TRANSPORT_TCP) {
>> +            /* create opts info from runtime_tcp_opts list */
>> +            opts = qemu_opts_create(&runtime_tcp_opts, NULL, 0, &error_abort);
>> +            qemu_opts_absorb_qdict(opts, backing_options, &local_err);
>> +            if (local_err) {
>> +                goto out;
>> +            }
>> +
>> +            ptr = qemu_opt_get(opts, GLUSTER_OPT_HOST);
>> +            if (!ptr) {
>> +                error_setg(&local_err, "please provide 'host' in hostinfo.%d ",
>> +                           i);
>> +                goto out;
>> +            }
>> +            gsconf->u.tcp.host = g_strdup(ptr);
>> +            ptr = qemu_opt_get(opts, GLUSTER_OPT_PORT);
>> +            gsconf->u.tcp.port = qemu_opt_get_number(opts, GLUSTER_OPT_PORT,
>> +                                               GLUSTER_DEFAULT_PORT);
>> +            gsconf->u.tcp.has_port = true;
>> +            qemu_opts_del(opts);
>> +        } else {
>> +            /* create opts info from runtime_unix_opts list */
>> +            opts = qemu_opts_create(&runtime_unix_opts, NULL, 0, &error_abort);
>> +            qemu_opts_absorb_qdict(opts, backing_options, &local_err);
>> +            if (local_err) {
>> +                goto out;
>> +            }
>> +
>> +            ptr = qemu_opt_get(opts, GLUSTER_OPT_SOCKET);
>> +            if (!ptr) {
>> +                error_setg(&local_err, "please provide 'socket' in hostinfo.%d ",
>> +                           i);
>> +                goto out;
>> +            }
>> +            gsconf->u.q_unix.socket = g_strdup(ptr);
>> +            qemu_opts_del(opts);
>> +        }
>> +
>> +        if (gconf->server == NULL) {
>> +            gconf->server = g_new0(GlusterServerList, 1);
>> +            gconf->server->value = gsconf;
>> +            curr = gconf->server;
>> +        } else {
>> +            curr->next = g_new0(GlusterServerList, 1);
>> +            curr->next->value = gsconf;
>> +            curr = curr->next;
>> +        }
>> +
>> +        qdict_del(backing_options, str);
>> +        g_free(str);
>> +        str = NULL;
>> +    }
>> +
>> +    return 0;
>> +
>> +out:
>> +    if (local_err) {
>> +        error_propagate(errp, local_err);
>> +    }
>
> error_propagate() does the right thing when its second argument is
> null.  Please drop the conditional.

Sure

>
>> +    qemu_opts_del(opts);
>> +    if (str) {
>> +        qdict_del(backing_options, str);
>> +        g_free(str);
>> +    }
>> +    errno = EINVAL;
>> +    return -errno;
>> +}
>> +
>> +static struct glfs *qemu_gluster_init(BlockdevOptionsGluster *gconf,
>> +                                      const char *filename,
>> +                                      QDict *options, Error **errp)
>> +{
>> +    int ret;
>> +    if (filename) {
>> +        ret = qemu_gluster_parse_uri(gconf, filename);
>> +        if (ret < 0) {
>> +            error_setg(errp, "Usage: file=gluster[+transport]://[host[:port]]/"
>> +                             "volume/path[?socket=...]");
>
> Not an error message.

Can you suggest the change required here for displaying hints from a
leaf function? I am worried about missing your intention here, since
most of your v18 comments were about this.

IIRC, the leaf function which wants to emit a hint/msg/error
should have an Error object, which is created by error_setg(); then
error_append_hint() appends to it.

Sorry for disappointing you Markus.

Thanks,
--
Prasanna


>
>> +            errno = -ret;
>> +            return NULL;
>> +        }
>> +    } else {
>> +        ret = qemu_gluster_parse_json(gconf, options, errp);
>> +        if (ret < 0) {
>> +            error_append_hint(errp, "Usage: "
>> +                             "-drive driver=qcow2,file.driver=gluster,"
>> +                             "file.volume=testvol,file.path=/path/a.qcow2"
>> +                             "[,file.debug=9],file.server.0.type=tcp,"
>> +                             "file.server.0.host=1.2.3.4,"
>> +                             "[file.server.0.port=24007,]"
>> +                             "file.server.1.transport=unix,"
>> +                             "file.server.1.socket=/var/run/glusterd.socket ...");
>> +            errno = -ret;
>> +            return NULL;
>> +        }
>> +
>> +    }
>> +
>> +    return qemu_gluster_glfs_init(gconf, errp);
>> +}
>> +
>>  static void qemu_gluster_complete_aio(void *opaque)
>>  {
>>      GlusterAIOCB *acb = (GlusterAIOCB *)opaque;
>> @@ -383,7 +648,7 @@ static int qemu_gluster_open(BlockDriverState *bs,  QDict *options,
>>      gconf = g_new0(BlockdevOptionsGluster, 1);
>>      gconf->debug_level = s->debug_level;
>>      gconf->has_debug_level = true;
>> -    s->glfs = qemu_gluster_init(gconf, filename, errp);
>> +    s->glfs = qemu_gluster_init(gconf, filename, options, errp);
>>      if (!s->glfs) {
>>          ret = -errno;
>>          goto out;
>> @@ -454,7 +719,7 @@ static int qemu_gluster_reopen_prepare(BDRVReopenState *state,
>>      gconf = g_new0(BlockdevOptionsGluster, 1);
>>      gconf->debug_level = s->debug_level;
>>      gconf->has_debug_level = true;
>> -    reop_s->glfs = qemu_gluster_init(gconf, state->bs->filename, errp);
>> +    reop_s->glfs = qemu_gluster_init(gconf, state->bs->filename, NULL, errp);
>>      if (reop_s->glfs == NULL) {
>>          ret = -errno;
>>          goto exit;
>> @@ -601,7 +866,7 @@ static int qemu_gluster_create(const char *filename,
>>      }
>>      gconf->has_debug_level = true;
>>
>> -    glfs = qemu_gluster_init(gconf, filename, errp);
>> +    glfs = qemu_gluster_init(gconf, filename, NULL, errp);
>>      if (!glfs) {
>>          ret = -errno;
>>          goto out;
>> @@ -981,7 +1246,7 @@ static BlockDriver bdrv_gluster = {
>>      .format_name                  = "gluster",
>>      .protocol_name                = "gluster",
>>      .instance_size                = sizeof(BDRVGlusterState),
>> -    .bdrv_needs_filename          = true,
>> +    .bdrv_needs_filename          = false,
>>      .bdrv_file_open               = qemu_gluster_open,
>>      .bdrv_reopen_prepare          = qemu_gluster_reopen_prepare,
>>      .bdrv_reopen_commit           = qemu_gluster_reopen_commit,
>> @@ -1009,7 +1274,7 @@ static BlockDriver bdrv_gluster_tcp = {
>>      .format_name                  = "gluster",
>>      .protocol_name                = "gluster+tcp",
>>      .instance_size                = sizeof(BDRVGlusterState),
>> -    .bdrv_needs_filename          = true,
>> +    .bdrv_needs_filename          = false,
>>      .bdrv_file_open               = qemu_gluster_open,
>>      .bdrv_reopen_prepare          = qemu_gluster_reopen_prepare,
>>      .bdrv_reopen_commit           = qemu_gluster_reopen_commit,
>> diff --git a/qapi/block-core.json b/qapi/block-core.json
>> index d7b5c76..5557f1c 100644
>> --- a/qapi/block-core.json
>> +++ b/qapi/block-core.json
>> @@ -2137,7 +2137,7 @@
>>  { 'struct': 'BlockdevOptionsGluster',
>>    'data': { 'volume': 'str',
>>              'path': 'str',
>> -            'server': 'GlusterServer',
>> +            'server': ['GlusterServer'],
>>              '*debug_level': 'int' } }
>>
>>  ##

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [Qemu-devel] [PATCH v19 3/5] block/gluster: remove rdma transport
  2016-07-18 11:12     ` Prasanna Kalever
@ 2016-07-18 12:18       ` Markus Armbruster
  2016-07-18 13:35         ` Raghavendra Talur
  0 siblings, 1 reply; 23+ messages in thread
From: Markus Armbruster @ 2016-07-18 12:18 UTC (permalink / raw)
  To: Prasanna Kalever
  Cc: Raghavendra Talur, Jeffrey Cody, Prasanna Kumar Kalever,
	qemu-devel, Vijay Bellur

Prasanna Kalever <pkalever@redhat.com> writes:

> On Mon, Jul 18, 2016 at 2:23 PM, Markus Armbruster <armbru@redhat.com> wrote:
>> Prasanna Kumar Kalever <prasanna.kalever@redhat.com> writes:
>>
>>> gluster volfile server fetch happens through unix and/or tcp, it doesn't
>>> support volfile fetch over rdma, hence removing the dead code
>>>
>>> Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
>>> ---
>>>  block/gluster.c | 35 +----------------------------------
>>>  1 file changed, 1 insertion(+), 34 deletions(-)
>>>
>>> diff --git a/block/gluster.c b/block/gluster.c
>>> index 40ee852..59f77bb 100644
>>> --- a/block/gluster.c
>>> +++ b/block/gluster.c
>>> @@ -134,8 +134,7 @@ static int parse_volume_options(GlusterConf *gconf, char *path)
>>>   *
>>>   * 'transport' specifies the transport type used to connect to gluster
>>>   * management daemon (glusterd). Valid transport types are
>>> - * tcp, unix and rdma. If a transport type isn't specified, then tcp
>>> - * type is assumed.
>>> + * tcp, unix. If a transport type isn't specified, then tcp type is assumed.
>>>   *
>>>   * 'host' specifies the host where the volume file specification for
>>>   * the given volume resides. This can be either hostname, ipv4 address
>>> @@ -162,7 +161,6 @@ static int parse_volume_options(GlusterConf *gconf, char *path)
>>>   * file=gluster+tcp://[1:2:3:4:5:6:7:8]:24007/testvol/dir/a.img
>>>   * file=gluster+tcp://host.domain.com:24007/testvol/dir/a.img
>>>   * file=gluster+unix:///testvol/dir/a.img?socket=/tmp/glusterd.socket
>>> - * file=gluster+rdma://1.2.3.4:24007/testvol/a.img
>>>   */
>>>  static int qemu_gluster_parseuri(GlusterConf *gconf, const char *filename)
>>>  {
>>> @@ -184,8 +182,6 @@ static int qemu_gluster_parseuri(GlusterConf *gconf, const char *filename)
>>>      } else if (!strcmp(uri->scheme, "gluster+unix")) {
>>>          gconf->transport = g_strdup("unix");
>>
>> Outside this patch's scope: string literals would be just fine for
>> gconf->transport.
>
> If we remove rdma support, the comments here may again drag people into confusion.
> Do you recommend having this as a separate patch?

I'm afraid we're talking about totally different things.  But it doesn't
actually matter, because I now see that I got confused.  Simply ignore
this comment.

>>
>>>          is_unix = true;
>>> -    } else if (!strcmp(uri->scheme, "gluster+rdma")) {
>>> -        gconf->transport = g_strdup("rdma");
>>>      } else {
>>>          ret = -EINVAL;
>>>          goto out;
>>> @@ -1048,37 +1044,8 @@ static BlockDriver bdrv_gluster_unix = {
>>>      .create_opts                  = &qemu_gluster_create_opts,
>>>  };
>>>
>>> -static BlockDriver bdrv_gluster_rdma = {
>>> -    .format_name                  = "gluster",
>>> -    .protocol_name                = "gluster+rdma",
>>> -    .instance_size                = sizeof(BDRVGlusterState),
>>> -    .bdrv_needs_filename          = true,
>>> -    .bdrv_file_open               = qemu_gluster_open,
>>> -    .bdrv_reopen_prepare          = qemu_gluster_reopen_prepare,
>>> -    .bdrv_reopen_commit           = qemu_gluster_reopen_commit,
>>> -    .bdrv_reopen_abort            = qemu_gluster_reopen_abort,
>>> -    .bdrv_close                   = qemu_gluster_close,
>>> -    .bdrv_create                  = qemu_gluster_create,
>>> -    .bdrv_getlength               = qemu_gluster_getlength,
>>> -    .bdrv_get_allocated_file_size = qemu_gluster_allocated_file_size,
>>> -    .bdrv_truncate                = qemu_gluster_truncate,
>>> -    .bdrv_co_readv                = qemu_gluster_co_readv,
>>> -    .bdrv_co_writev               = qemu_gluster_co_writev,
>>> -    .bdrv_co_flush_to_disk        = qemu_gluster_co_flush_to_disk,
>>> -    .bdrv_has_zero_init           = qemu_gluster_has_zero_init,
>>> -#ifdef CONFIG_GLUSTERFS_DISCARD
>>> -    .bdrv_co_discard              = qemu_gluster_co_discard,
>>> -#endif
>>> -#ifdef CONFIG_GLUSTERFS_ZEROFILL
>>> -    .bdrv_co_pwrite_zeroes        = qemu_gluster_co_pwrite_zeroes,
>>> -#endif
>>> -    .bdrv_co_get_block_status     = qemu_gluster_co_get_block_status,
>>> -    .create_opts                  = &qemu_gluster_create_opts,
>>> -};
>>> -
>>>  static void bdrv_gluster_init(void)
>>>  {
>>> -    bdrv_register(&bdrv_gluster_rdma);
>>>      bdrv_register(&bdrv_gluster_unix);
>>>      bdrv_register(&bdrv_gluster_tcp);
>>>      bdrv_register(&bdrv_gluster);
>>
>> This is fine if gluster+rdma never actually worked.  I tried to find out
>> at https://github.com/gluster/glusterfs/blob/master/api/src/glfs.h.
>> Transport rdma is mentioned there.  Does it work?
>
> this is the transport used for fetching the volume file from the nodes.
> Previously it was tcp only; now it also supports the unix transport.
>
> More response from gluster community
> @http://www.gluster.org/pipermail/gluster-devel/2016-July/050112.html

Quote Raghavendra Talur's reply:

    > My understanding is that @transport argument in
    > glfs_set_volfile_server() is meant for specifying transport used in
    > fetching volfile server,
    >

    Yes, @transport arg here is transport to use for fetching volfile.


    > IIRC which currently supports tcp and unix only...
    >
    Yes, this is correct too.

Here, Raghavendra seems to confirm that only tcp and unix are supported.

    >
    > The doc here
    > https://github.com/gluster/glusterfs/blob/master/api/src/glfs.h
    > +166 shows the rdma as well, which is something I cannot digest.
    >
    This is doc written with assumption that rdma would work too.


    >
    >
    > Can someone correct me ?
    >
    > Have we ever supported volfile fetch over rdma ?
    >

    I think no. To test, you would have to set rdma as only transport option in
    glusterd.vol and see what happens in volfile fetch.

But here, it sounds like it might work anyway!

    IMO, fetching volfile over rdma is an overkill and would not be required.
    RDMA should be kept only for IO operations.

    We should just remove it from the docs.

Don't misunderstand me, I'm very much in favor of removing the rdma
transport here.  All I'm trying to do is figure out what backward
compatibility ramifications that might have.

If protocol gluster+rdma has never actually worked, we can safely remove
it.

But if it happens to work even though it isn't really supported, the
situation is complicated.  Dropping it might break working setups.
Could perhaps be justified by "your setup works, but it's unsupported,
and I'm forcing you to switch to a supported setup now, before you come
to grief."

If it used to work but no more, or if it will stop working, it's
differently complicated.  Dropping it would still break working setups,
but they'd be bound to break anyway.

Thus, my questions: does protocol gluster+rdma work before your patch?
If no, did it ever work?  "I don't know" is an acceptable answer to the
latter question, but not so much to the former, because that one is
easily testable.

Once we have answers to these questions, we can decide what needs to be
done for compatibility, if anything.


* Re: [Qemu-devel] [PATCH v19 4/5] block/gluster: using new qapi schema
  2016-07-18 11:29     ` Prasanna Kalever
@ 2016-07-18 13:11       ` Markus Armbruster
  2016-07-18 18:28         ` Prasanna Kalever
  0 siblings, 1 reply; 23+ messages in thread
From: Markus Armbruster @ 2016-07-18 13:11 UTC (permalink / raw)
  To: Prasanna Kalever
  Cc: Raghavendra Talur, Jeffrey Cody, Prasanna Kumar Kalever,
	qemu-devel, Vijay Bellur

Prasanna Kalever <pkalever@redhat.com> writes:

> On Mon, Jul 18, 2016 at 2:59 PM, Markus Armbruster <armbru@redhat.com> wrote:
>> Prasanna Kumar Kalever <prasanna.kalever@redhat.com> writes:
>>
>>> this patch adds 'GlusterServer' related schema in qapi/block-core.json
>>>
>>> Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
>>> ---
>>>  block/gluster.c      | 111 +++++++++++++++++++++++++++++----------------------
>>>  qapi/block-core.json |  94 ++++++++++++++++++++++++++++++++++++++++---
>>>  2 files changed, 153 insertions(+), 52 deletions(-)
>> [Skipping ahead to QAPI schema...]
>>> diff --git a/qapi/block-core.json b/qapi/block-core.json
>>> index a7fdb28..d7b5c76 100644
>>> --- a/qapi/block-core.json
>>> +++ b/qapi/block-core.json
>>> @@ -1658,13 +1658,14 @@
>>>  # @host_device, @host_cdrom: Since 2.1
>>>  #
>>>  # Since: 2.0
>>> +# @gluster: Since 2.7
>>>  ##
>>>  { 'enum': 'BlockdevDriver',
>>>    'data': [ 'archipelago', 'blkdebug', 'blkverify', 'bochs', 'cloop',
>>> -            'dmg', 'file', 'ftp', 'ftps', 'host_cdrom', 'host_device',
>>> -            'http', 'https', 'luks', 'null-aio', 'null-co', 'parallels',
>>> -            'qcow', 'qcow2', 'qed', 'quorum', 'raw', 'tftp', 'vdi', 'vhdx',
>>> -            'vmdk', 'vpc', 'vvfat' ] }
>>> +            'dmg', 'file', 'ftp', 'ftps', 'gluster', 'host_cdrom',
>>> +            'host_device', 'http', 'https', 'luks', 'null-aio', 'null-co',
>>> +            'parallels', 'qcow', 'qcow2', 'qed', 'quorum', 'raw', 'tftp',
>>> +            'vdi', 'vhdx', 'vmdk', 'vpc', 'vvfat' ] }
>>>
>>>  ##
>>>  # @BlockdevOptionsFile
>>> @@ -2057,6 +2058,89 @@
>>>              '*read-pattern': 'QuorumReadPattern' } }
>>>
>>>  ##
>>> +# @GlusterTransport
>>> +#
>>> +# An enumeration of Gluster transport type
>>> +#
>>> +# @tcp:   TCP   - Transmission Control Protocol
>>> +#
>>> +# @unix:  UNIX  - Unix domain socket
>>> +#
>>> +# Since: 2.7
>>> +##
>>> +{ 'enum': 'GlusterTransport',
>>> +  'data': [ 'unix', 'tcp' ] }
>>> +
>>> +##
>>> +# @GlusterUnixSocketAddress
>>> +#
>>> +# Captures a socket address in the local ("Unix socket") namespace.
>>> +#
>>> +# @socket:   absolute path to socket file
>>> +#
>>> +# Since 2.7
>>> +##
>>> +{ 'struct': 'GlusterUnixSocketAddress',
>>> +  'data': { 'socket': 'str' } }
>>
>> Compare:
>>
>>    ##
>>    # @UnixSocketAddress
>>    #
>>    # Captures a socket address in the local ("Unix socket") namespace.
>>    #
>>    # @path: filesystem path to use
>>    #
>>    # Since 1.3
>>    ##
>>    { 'struct': 'UnixSocketAddress',
>>      'data': {
>>        'path': 'str' } }
>>
>>> +
>>> +##
>>> +# @GlusterInetSocketAddress
>>> +#
>>> +# Captures a socket address or address range in the Internet namespace.
>>> +#
>>> +# @host:  host part of the address
>>> +#
>>> +# @port:  #optional port part of the address, or lowest port if @to is present
>>
>> There is no @to.
>>
>> What's the default port?
>
> #define GLUSTER_DEFAULT_PORT        24007

I know, but the poor reader of the docs may not, so the docs better
spell it out :)

>>> +#
>>> +# Since 2.7
>>> +##
>>> +{ 'struct': 'GlusterInetSocketAddress',
>>> +  'data': { 'host': 'str',
>>> +            '*port': 'uint16' } }
>>
>> Compare:
>>
>>    ##
>>    # @InetSocketAddress
>>    #
>>    # Captures a socket address or address range in the Internet namespace.
>>    #
>>    # @host: host part of the address
>>    #
>>    # @port: port part of the address, or lowest port if @to is present
>>    #
>>    # @to: highest port to try
>>    #
>>    # @ipv4: whether to accept IPv4 addresses, default try both IPv4 and IPv6
>>    #        #optional
>>    #
>>    # @ipv6: whether to accept IPv6 addresses, default try both IPv4 and IPv6
>>    #        #optional
>>    #
>>    # Since 1.3
>>    ##
>>    { 'struct': 'InetSocketAddress',
>>      'data': {
>>        'host': 'str',
>>        'port': 'str',
>>        '*to': 'uint16',
>>        '*ipv4': 'bool',
>>        '*ipv6': 'bool' } }
>>
>>> +
>>> +##
>>> +# @GlusterServer
>>> +#
>>> +# Captures the address of a socket
>>> +#
>>> +# Details for connecting to a gluster server
>>> +#
>>> +# @type:       Transport type used for gluster connection
>>> +#
>>> +# @unix:       socket file
>>> +#
>>> +# @tcp:        host address and port number
>>> +#
>>> +# Since: 2.7
>>> +##
>>> +{ 'union': 'GlusterServer',
>>> +  'base': { 'type': 'GlusterTransport' },
>>> +  'discriminator': 'type',
>>> +  'data': { 'unix': 'GlusterUnixSocketAddress',
>>> +            'tcp': 'GlusterInetSocketAddress' } }
>>> +
>>
>> Compare:
>>
>>    ##
>>    # @SocketAddress
>>    #
>>    # Captures the address of a socket, which could also be a named file descriptor
>>    #
>>    # Since 1.3
>>    ##
>>    { 'union': 'SocketAddress',
>>      'data': {
>>        'inet': 'InetSocketAddress',
>>        'unix': 'UnixSocketAddress',
>>        'fd': 'String' } }
>>
>> You cleaned up the confusing abuse of @host as Unix domain socket file
>> name.  Good.
>>
>> You're still defining your own QAPI types instead of using the existing
>> ones.  To see whether that's a good idea, let's compare your definitions
>> to the existing ones:

I've since been gently referred to this note in the cover letter:

    patch 4/5 (i.e. 3/4 in v18) now uses union discriminator, I have
    made a choice to use gluster with custom schema since
    @UnixSocketAddress uses 'path' as key, which may be confusing with
    gluster,

Can you briefly explain why 'path' may be confusing?

    and in @InetSocketAddress port was str again I have made a choice to
    keep it uint16 which really make sense.

A port can be given in symbolic form (service name) and in numeric form
(port number), just like a host.  For instance, TCP service name "smtp"
means port number 25.  See also services(5).  Naturally, a symbolic name
only exists for sufficiently well-known ports.

Interfaces should accept both service name and port.  InetSocketAddress
does, in the same way as getaddrinfo(): it takes a string, which may
either be a service name or a port number.  Perhaps it should take an
alternate of int16 and string instead, but that's a separate question.
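Markus's point can be seen directly with getaddrinfo(): the same string port argument accepts either a service name or a number, which is why InetSocketAddress keeps @port as a string. A standalone sketch; `resolve_port()` is a hypothetical helper for illustration, not a QEMU or Gluster API:

```c
#include <assert.h>
#include <netdb.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Resolve a port given either a number ("24007") or a service name
 * ("smtp"), exactly as getaddrinfo()-based code handles a string port. */
static int resolve_port(const char *service)
{
    struct addrinfo hints = { 0 }, *res;
    hints.ai_family = AF_INET;
    hints.ai_socktype = SOCK_STREAM;
    hints.ai_flags = AI_PASSIVE;
    if (getaddrinfo(NULL, service, &hints, &res) != 0) {
        return -1;
    }
    int port = ntohs(((struct sockaddr_in *)res->ai_addr)->sin_port);
    freeaddrinfo(res);
    return port;
}

int main(void)
{
    /* A numeric string always works... */
    assert(resolve_port("24007") == 24007);
    /* ...and a well-known service name maps to its port (smtp -> 25
     * per services(5)) where the services database is present. */
    printf("smtp -> %d\n", resolve_port("smtp"));
    return 0;
}
```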

    Hmmm.. As Markus suggested in v18 qemu_gluster_parseuri() is
    *parse_uri() same with *parse_json() (in 5/5)

Not sure I got that.  Do you mean "I renamed qemu_gluster_parseuri() to
qemu_gluster_parse_uri() for consistency with
qemu_gluster_parse_json()"?

>>
>> * GlusterUnixSocketAddress vs. UnixSocketAddress
>>
>>   Identical but for the member name.  Why can't we use
>>   UnixSocketAddress?
>
> Maybe you are right; it may not be worth it just for a member name.

Can't say, yet; that's why I ask you to explain why it may be confusing.

>> * GlusterInetSocketAddress vs. InetSocketAddress
>>
>>   Changes in GlusterInetSocketAddress over InetSocketAddress:
>>
>>   - @port is optional
>>
>>     Convenience feature.  We can discuss making it optional in
>>     InetSocketAddress, too.
>
> sure.
>
>>
>>   - @port has type 'uint16' instead of 'str'
>>
>>     No support for service names.  Why is that a good idea?
>
> I honestly do not understand tying service names to ports.
> As far as I understand, at least from the gluster use-case, it's just
> an integer ranging from 0 - 65535 (mostly 24007).
> I am happy to learn about this.

Hope I was able to explain this above.

>>   - Lacks @to
>>
>>     No support for trying a range of ports.  I guess that simply makes
>>     no sense for a Gluster client.  I guess makes no sense in some other
>>     uses of InetSocketAddress, too.  Eric has proposed to split off
>>     InetSocketAddressRange off InetSocketAddress.
> I still don't understand the essence: why do we need to loop through the ports?

Best explained by looking at a use of this feature.  With -display vnc,
we start a VNC server.  By default, it listens on port 5900.  If this
port isn't available, it fails like this:

    qemu-system-x86_64:-display vnc=:0: Failed to start VNC server: Failed to bind socket: Address already in use

If you don't care about the port, you can use something like "-display
vnc=:0,to=9" to bind the first free port in the range of 5900 to 5909.

>>   - Lacks @ipv4, @ipv6
>
> Gluster doesn't support ipv6 addresses (there is some work needed in its rpc code).
> Do you think it's worth having control over ipv4/ipv6 just to
> reject ipv6 addresses?

In that case, it's not a show-stopper.  When Gluster acquires IPv6
support, we'll need them.  Until then, we can omit them.  If we don't
omit them, the gluster driver should reject "ipv4": false.

>>     No control over IPv4 vs. IPv6 use.  Immediate show-stopper.
>>
>>   Can we use InetSocketAddress?
>
> sorry, I don't have an answer, since I was unclear in my comments above.
>
>>
>> * GlusterServer vs. SocketAddress
>>
>>   GlusterServer lacks case 'fd'.  Do file descriptors make no sense
>>   here, or is it merely an implementation restriction?
>
> Again, Gluster doesn't support it.

Yes, the library we use to talk to Gluster doesn't let you pass in a
file descriptor today.

My question is whether this *could* be supported.  The answer is
probably "yes".

Fd support is usually desirable for privilege separation.  There, we
want to run QEMU with the least possible privileges.  Ideally no way to
open TCP connections.  Instead, the management application does the
opening, and passes the open fd to QEMU.  Makes sense because it limits
what a malicious guest can gain by attacking QEMU.

>>   GlusterServer is a flat union, SocketAddress is a simple union.  The flat
>>   unions is nicer on the wire:
>>       { "type": "tcp", "host": "gluster.example.com", ... }
>>   vs.
>>       { "type": "tcp", "data": { "host": "gluster.example.com", ... }
>>
>>   Perhaps we should use a flat union for new interfaces.
>>
>>> +##
>>> +# @BlockdevOptionsGluster
>>> +#
>>> +# Driver specific block device options for Gluster
>>> +#
>>> +# @volume:      name of gluster volume where VM image resides
>>> +#
>>> +# @path:        absolute path to image file in gluster volume
>>> +#
>>> +# @server:      gluster server description
>>> +#
>>> +# @debug_level: #optional libgfapi log level (default '4' which is Error)
>>> +#
>>> +# Since: 2.7
>>> +##
>>> +{ 'struct': 'BlockdevOptionsGluster',
>>> +  'data': { 'volume': 'str',
>>> +            'path': 'str',
>>> +            'server': 'GlusterServer',
>>> +            '*debug_level': 'int' } }
>>> +
>>> +##
>>>  # @BlockdevOptions
>>>  #
>>>  # Options for creating a block device.  Many options are available for all
>>> @@ -2119,7 +2203,7 @@
>>>        'file':       'BlockdevOptionsFile',
>>>        'ftp':        'BlockdevOptionsFile',
>>>        'ftps':       'BlockdevOptionsFile',
>>> -# TODO gluster: Wait for structured options
>>> +      'gluster':    'BlockdevOptionsGluster',
>>>        'host_cdrom': 'BlockdevOptionsFile',
>>>        'host_device':'BlockdevOptionsFile',
>>>        'http':       'BlockdevOptionsFile',
>
> Thanks Markus.
>
> --
> Prasanna


* Re: [Qemu-devel] [PATCH v19 3/5] block/gluster: remove rdma transport
  2016-07-18 12:18       ` Markus Armbruster
@ 2016-07-18 13:35         ` Raghavendra Talur
  2016-07-18 13:50           ` Prasanna Kalever
  2016-07-18 14:02           ` Markus Armbruster
  0 siblings, 2 replies; 23+ messages in thread
From: Raghavendra Talur @ 2016-07-18 13:35 UTC (permalink / raw)
  To: Markus Armbruster
  Cc: Prasanna Kalever, Jeffrey Cody, Prasanna Kumar Kalever,
	qemu-devel, Vijay Bellur, Rafi Kavungal Chundattu Parambil,
	Poornima Gurusiddaiah

On Mon, Jul 18, 2016 at 5:48 PM, Markus Armbruster <armbru@redhat.com>
wrote:

> Prasanna Kalever <pkalever@redhat.com> writes:
>
> > On Mon, Jul 18, 2016 at 2:23 PM, Markus Armbruster <armbru@redhat.com> wrote:
> >> Prasanna Kumar Kalever <prasanna.kalever@redhat.com> writes:
> >>
> >>> gluster volfile server fetch happens through unix and/or tcp, it doesn't
> >>> support volfile fetch over rdma, hence removing the dead code
> >>>
> >>> Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
> >>> ---
> >>>  block/gluster.c | 35 +----------------------------------
> >>>  1 file changed, 1 insertion(+), 34 deletions(-)
> >>>
> [...]
> >>
> >> This is fine if gluster+rdma never actually worked.  I tried to find out
> >> at https://github.com/gluster/glusterfs/blob/master/api/src/glfs.h.
> >> Transport rdma is mentioned there.  Does it work?
> >
> > this is the transport used for fetching the volume file from the nodes.
> > Previously it was tcp only; now it also supports the unix transport.
> >
> > More response from gluster community
> > @http://www.gluster.org/pipermail/gluster-devel/2016-July/050112.html
>
> Quote Raghavendra Talur's reply:
>
>     > My understanding is that @transport argument in
>     > glfs_set_volfile_server() is meant for specifying transport used in
>     > fetching volfile server,
>     >
>
>     Yes, @transport arg here is transport to use for fetching volfile.
>
>
>     > IIRC which currently supports tcp and unix only...
>     >
>     Yes, this is correct too.
>
> Here, Raghavendra seems to confirm that only tcp and unix are supported.
>
>     >
>     > The doc here
>     > https://github.com/gluster/glusterfs/blob/master/api/src/glfs.h
>     > +166 shows the rdma as well, which is something I cannot digest.
>     >
>     This is doc written with assumption that rdma would work too.
>
>
>     >
>     >
>     > Can someone correct me ?
>     >
>     > Have we ever supported volfile fetch over rdma ?
>     >
>
>     I think no. To test, you would have to set rdma as only transport option in
>     glusterd.vol and see what happens in volfile fetch.
>
> But here, it sounds like it might work anyway!
>

Prasanna, Rafi and I tested this. When the rdma option is specified,
gluster silently falls back to tcp. I do not like this behavior.


>     IMO, fetching volfile over rdma is an overkill and would not be required.
>     RDMA should be kept only for IO operations.
>
>     We should just remove it from the docs.
>
> Don't misunderstand me, I'm very much in favor of removing the rdma
> transport here.  All I'm trying to do is figure out what backward
> compatibility ramifications that might have.
>
> If protocol gluster+rdma has never actually worked, we can safely remove
> it.
>
> But if it happens to work even though it isn't really supported, the
> situation is complicated.  Dropping it might break working setups.
> Could perhaps be justified by "your setup works, but it's unsupported,
> and I'm forcing you to switch to a supported setup now, before you come
> to grief."
>
> If it used to work but no more, or if it will stop working, it's
> differently complicated.  Dropping it would still break working setups,
> but they'd be bound to break anyway.
>
> Thus, my questions: does protocol gluster+rdma work before your patch?
> If no, did it ever work?  "I don't know" is an acceptable answer to the
> latter question, but not so much to the former, because that one is
> easily testable.
>

Yes, it appeared to the user that the option worked, and removing it
would break such setups. I agree with Markus that removing the option
right away would hurt users; we should follow a deprecation strategy
for backward compatibility.


>
> Once we have answers to these questions, we can decide what needs to be
> done for compatibility, if anything.
>


* Re: [Qemu-devel] [PATCH v19 3/5] block/gluster: remove rdma transport
  2016-07-18 13:35         ` Raghavendra Talur
@ 2016-07-18 13:50           ` Prasanna Kalever
  2016-07-18 14:02           ` Markus Armbruster
  1 sibling, 0 replies; 23+ messages in thread
From: Prasanna Kalever @ 2016-07-18 13:50 UTC (permalink / raw)
  To: Raghavendra Talur, Markus Armbruster
  Cc: Jeffrey Cody, Prasanna Kumar Kalever, qemu-devel, Vijay Bellur,
	Rafi Kavungal Chundattu Parambil, Poornima Gurusiddaiah

On Mon, Jul 18, 2016 at 7:05 PM, Raghavendra Talur <rtalur@redhat.com> wrote:
>
>
> On Mon, Jul 18, 2016 at 5:48 PM, Markus Armbruster <armbru@redhat.com>
> wrote:
>>
>> Prasanna Kalever <pkalever@redhat.com> writes:
>>
>> > On Mon, Jul 18, 2016 at 2:23 PM, Markus Armbruster <armbru@redhat.com>
>> > wrote:
>> >> Prasanna Kumar Kalever <prasanna.kalever@redhat.com> writes:
>> >>
>> >>> gluster volfile server fetch happens through unix and/or tcp, it
>> >>> doesn't
>> >>> support volfile fetch over rdma, hence removing the dead code
>> >>>
>> >>> Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
>> >>> ---
>> >>>  block/gluster.c | 35 +----------------------------------
>> >>>  1 file changed, 1 insertion(+), 34 deletions(-)
>> >>>
>> >>> diff --git a/block/gluster.c b/block/gluster.c
>> >>> index 40ee852..59f77bb 100644
>> >>> --- a/block/gluster.c
>> >>> +++ b/block/gluster.c
>> >>> @@ -134,8 +134,7 @@ static int parse_volume_options(GlusterConf
>> >>> *gconf, char *path)
>> >>>   *
>> >>>   * 'transport' specifies the transport type used to connect to
>> >>> gluster
>> >>>   * management daemon (glusterd). Valid transport types are
>> >>> - * tcp, unix and rdma. If a transport type isn't specified, then tcp
>> >>> - * type is assumed.
>> >>> + * tcp, unix. If a transport type isn't specified, then tcp type is
>> >>> assumed.
>> >>>   *
>> >>>   * 'host' specifies the host where the volume file specification for
>> >>>   * the given volume resides. This can be either hostname, ipv4
>> >>> address
>> >>> @@ -162,7 +161,6 @@ static int parse_volume_options(GlusterConf
>> >>> *gconf, char *path)
>> >>>   * file=gluster+tcp://[1:2:3:4:5:6:7:8]:24007/testvol/dir/a.img
>> >>>   * file=gluster+tcp://host.domain.com:24007/testvol/dir/a.img
>> >>>   * file=gluster+unix:///testvol/dir/a.img?socket=/tmp/glusterd.socket
>> >>> - * file=gluster+rdma://1.2.3.4:24007/testvol/a.img
>> >>>   */
>> >>>  static int qemu_gluster_parseuri(GlusterConf *gconf, const char
>> >>> *filename)
>> >>>  {
>> >>> @@ -184,8 +182,6 @@ static int qemu_gluster_parseuri(GlusterConf
>> >>> *gconf, const char *filename)
>> >>>      } else if (!strcmp(uri->scheme, "gluster+unix")) {
>> >>>          gconf->transport = g_strdup("unix");
>> >>
>> >> Outside this patch's scope: string literals would be just fine for
>> >> gconf->transport.
>> >
>> > If we remove rdma support, again comments here may drag people into
>> > confusion.
>> > Do you recommend to have this as a separate patch ?
>>
>> I'm afraid we're talking about totally different things.  But it doesn't
>> actually matter, because I now see that I got confused.  Simply ignore
>> this comment.
>>
>> >>
>> >>>          is_unix = true;
>> >>> -    } else if (!strcmp(uri->scheme, "gluster+rdma")) {
>> >>> -        gconf->transport = g_strdup("rdma");
>> >>>      } else {
>> >>>          ret = -EINVAL;
>> >>>          goto out;
>> >>> @@ -1048,37 +1044,8 @@ static BlockDriver bdrv_gluster_unix = {
>> >>>      .create_opts                  = &qemu_gluster_create_opts,
>> >>>  };
>> >>>
>> >>> -static BlockDriver bdrv_gluster_rdma = {
>> >>> -    .format_name                  = "gluster",
>> >>> -    .protocol_name                = "gluster+rdma",
>> >>> -    .instance_size                = sizeof(BDRVGlusterState),
>> >>> -    .bdrv_needs_filename          = true,
>> >>> -    .bdrv_file_open               = qemu_gluster_open,
>> >>> -    .bdrv_reopen_prepare          = qemu_gluster_reopen_prepare,
>> >>> -    .bdrv_reopen_commit           = qemu_gluster_reopen_commit,
>> >>> -    .bdrv_reopen_abort            = qemu_gluster_reopen_abort,
>> >>> -    .bdrv_close                   = qemu_gluster_close,
>> >>> -    .bdrv_create                  = qemu_gluster_create,
>> >>> -    .bdrv_getlength               = qemu_gluster_getlength,
>> >>> -    .bdrv_get_allocated_file_size = qemu_gluster_allocated_file_size,
>> >>> -    .bdrv_truncate                = qemu_gluster_truncate,
>> >>> -    .bdrv_co_readv                = qemu_gluster_co_readv,
>> >>> -    .bdrv_co_writev               = qemu_gluster_co_writev,
>> >>> -    .bdrv_co_flush_to_disk        = qemu_gluster_co_flush_to_disk,
>> >>> -    .bdrv_has_zero_init           = qemu_gluster_has_zero_init,
>> >>> -#ifdef CONFIG_GLUSTERFS_DISCARD
>> >>> -    .bdrv_co_discard              = qemu_gluster_co_discard,
>> >>> -#endif
>> >>> -#ifdef CONFIG_GLUSTERFS_ZEROFILL
>> >>> -    .bdrv_co_pwrite_zeroes        = qemu_gluster_co_pwrite_zeroes,
>> >>> -#endif
>> >>> -    .bdrv_co_get_block_status     = qemu_gluster_co_get_block_status,
>> >>> -    .create_opts                  = &qemu_gluster_create_opts,
>> >>> -};
>> >>> -
>> >>>  static void bdrv_gluster_init(void)
>> >>>  {
>> >>> -    bdrv_register(&bdrv_gluster_rdma);
>> >>>      bdrv_register(&bdrv_gluster_unix);
>> >>>      bdrv_register(&bdrv_gluster_tcp);
>> >>>      bdrv_register(&bdrv_gluster);
>> >>
>> >> This is fine if gluster+rdma never actually worked.  I tried to find
>> >> out
>> >> at https://github.com/gluster/glusterfs/blob/master/api/src/glfs.h.
>> >> Transport rdma is mentioned there.  Does it work?
>> >
>> > this is transport used for fetching the volume file from the nodes.
>> > Which is of type tcp previously and then now it also supports the unix
>> > transport.
>> >
>> > More response from gluster community
>> > @http://www.gluster.org/pipermail/gluster-devel/2016-July/050112.html
>>
>> Quote Raghavendra Talur's reply:
>>
>>     > My understanding is that @transport argumet in
>>     > glfs_set_volfile_server() is meant for specifying transport used in
>>     > fetching volfile server,
>>     >
>>
>>     Yes, @transport arg here is transport to use for fetching volfile.
>>
>>
>>     > IIRC which currently supports tcp and unix only...
>>     >
>>     Yes, this is correct too.
>>
>> Here, Raghavendra seems to confirm that only tcp and unix are supported.
>>
>>     >
>>     > The doc here
>>     > https://github.com/gluster/glusterfs/blob/master/api/src/glfs.h
>>     > +166 shows the rdma as well, which is something I cannot digest.
>>     >
>>     This is doc written with assumption that rdma would work too.
>>
>>
>>     >
>>     >
>>     > Can someone correct me ?
>>     >
>>     > Have we ever supported volfile fetch over rdma ?
>>     >
>>
>>     I think no. To test, you would have to set rdma as only transport
>> option in
>>     glusterd.vol and see what happens in volfile fetch.
>>
>> But here, it sounds like it might work anyway!
>
>
> Prasanna, Rafi and I tested this. When rdma option is specified, gluster
> defaults to tcp silently. I do not like this behavior.
>
>>
>>     IMO, fetching volfile over rdma is an overkill and would not be
>> required.
>>     RDMA should be kept only for IO operations.
>>
>>     We should just remove it from the docs.
>>
>> Don't misunderstand me, I'm very much in favor of removing the rdma
>> transport here.  All I'm trying to do is figure out what backward
>> compatibility ramifications that might have.
>>
>> If protocol gluster+rdma has never actually worked, we can safely remove
>> it.
>>
>> But if it happens to work even though it isn't really supported, the
>> situation is complicated.  Dropping it might break working setups.
>> Could perhaps be justified by "your setup works, but it's unsupported,
>> and I'm forcing you to switch to a supported setup now, before you come
>> to grief."
>>
>> If it used to work but no more, or if it will stop working, it's
>> differently complicated.  Dropping it would still break working setups,
>> but they'd be bound to break anyway.
>>
>> Thus, my questions: does protocol gluster+rdma work before your patch?
>> If no, did it ever work?  "I don't know" is an acceptable answer to the
>> latter question, but not so much to the former, because that one is
>> easily testable.
>
>
> Yes, it appeared to user that the option worked and removing the option
> would break such setups. I agree with Markus that removing the option right
> away would hurt users and we should follow a deprecation strategy for
> backward compatibility.

I agree with Raghavendra and Markus here.

Hence, just for backward compatibility, I would like to redirect the
rdma transport to tcp, possibly with a warning saying that it is
deprecated and is something gluster won't even support after 4.0
(strict checks).

Markus, thanks for pointing me to the doc links; I shall remove rdma
from glfs.h right away.

In any case, I am not in favor of providing rdma as part of the Gluster
transport schema.

Any suggestions are much appreciated.

Thanks,
--
Prasanna

>
>>
>>
>> Once we have answers to these questions, we can decide what needs to be
>> done for compatibility, if anything.
>
>


* Re: [Qemu-devel] [PATCH v19 3/5] block/gluster: remove rdma transport
  2016-07-18 13:35         ` Raghavendra Talur
  2016-07-18 13:50           ` Prasanna Kalever
@ 2016-07-18 14:02           ` Markus Armbruster
  1 sibling, 0 replies; 23+ messages in thread
From: Markus Armbruster @ 2016-07-18 14:02 UTC (permalink / raw)
  To: Raghavendra Talur
  Cc: Prasanna Kumar Kalever, Poornima Gurusiddaiah, Vijay Bellur,
	Jeffrey Cody, qemu-devel, Prasanna Kalever,
	Rafi Kavungal Chundattu Parambil

Raghavendra Talur <rtalur@redhat.com> writes:

> On Mon, Jul 18, 2016 at 5:48 PM, Markus Armbruster <armbru@redhat.com>
> wrote:
>
>> Prasanna Kalever <pkalever@redhat.com> writes:
>>
>> > On Mon, Jul 18, 2016 at 2:23 PM, Markus Armbruster <armbru@redhat.com>
[...]
>> >> This is fine if gluster+rdma never actually worked.  I tried to find out
>> >> at https://github.com/gluster/glusterfs/blob/master/api/src/glfs.h.
>> >> Transport rdma is mentioned there.  Does it work?
>> >
>> > this is transport used for fetching the volume file from the nodes.
>> > Which is of type tcp previously and then now it also supports the unix
>> > transport.
>> >
>> > More response from gluster community
>> > @http://www.gluster.org/pipermail/gluster-devel/2016-July/050112.html
>>
>> Quote Raghavendra Talur's reply:
>>
>>     > My understanding is that @transport argumet in
>>     > glfs_set_volfile_server() is meant for specifying transport used in
>>     > fetching volfile server,
>>     >
>>
>>     Yes, @transport arg here is transport to use for fetching volfile.
>>
>>
>>     > IIRC which currently supports tcp and unix only...
>>     >
>>     Yes, this is correct too.
>>
>> Here, Raghavendra seems to confirm that only tcp and unix are supported.
>>
>>     >
>>     > The doc here
>>     > https://github.com/gluster/glusterfs/blob/master/api/src/glfs.h
>>     > +166 shows the rdma as well, which is something I cannot digest.
>>     >
>>     This is doc written with assumption that rdma would work too.
>>
>>
>>     >
>>     >
>>     > Can someone correct me ?
>>     >
>>     > Have we ever supported volfile fetch over rdma ?
>>     >
>>
>>     I think no. To test, you would have to set rdma as only transport
>> option in
>>     glusterd.vol and see what happens in volfile fetch.
>>
>> But here, it sounds like it might work anyway!
>>
>
> Prasanna, Rafi and I tested this. When rdma option is specified, gluster
> defaults to tcp silently.

Good to know.  Thanks!

>                           I do not like this behavior.

Understandable :)

>>     IMO, fetching volfile over rdma is an overkill and would not be
>> required.
>>     RDMA should be kept only for IO operations.
>>
>>     We should just remove it from the docs.
>>
>> Don't misunderstand me, I'm very much in favor of removing the rdma
>> transport here.  All I'm trying to do is figure out what backward
>> compatibility ramifications that might have.
>>
>> If protocol gluster+rdma has never actually worked, we can safely remove
>> it.
>>
>> But if it happens to work even though it isn't really supported, the
>> situation is complicated.  Dropping it might break working setups.
>> Could perhaps be justified by "your setup works, but it's unsupported,
>> and I'm forcing you to switch to a supported setup now, before you come
>> to grief."
>>
>> If it used to work but no more, or if it will stop working, it's
>> differently complicated.  Dropping it would still break working setups,
>> but they'd be bound to break anyway.
>>
>> Thus, my questions: does protocol gluster+rdma work before your patch?
>> If no, did it ever work?  "I don't know" is an acceptable answer to the
>> latter question, but not so much to the former, because that one is
>> easily testable.
>>
>
> Yes, it appeared to user that the option worked and removing the option
> would break such setups. I agree with Markus that removing the option right
> away would hurt users and we should follow a deprecation strategy for
> backward compatibility.

Since Gluster maps rdma to tcp, I think the following should do:

* If the user asks for file=gluster+rdma:..., print a warning and use
  file=gluster+tcp:... instead.  Deprecate this usage.

* Don't add transport 'rdma' to the QAPI schema.

What do you think?

[...]
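Markus's proposal above (accept gluster+rdma, print a warning, and fall back to tcp) could be sketched like this. This is a hypothetical, self-contained plain-C illustration, not the actual patch; the real change would live in the scheme check inside qemu_gluster_parseuri() in block/gluster.c:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical sketch of the proposed deprecation fallback: map the
 * "gluster+rdma" URI scheme to the tcp transport and warn the user,
 * since Gluster silently uses tcp for volfile fetch anyway.
 * Returns the transport name to use, or NULL for an unknown scheme
 * (the real parser would return -EINVAL instead). */
static const char *gluster_transport_for_scheme(const char *scheme)
{
    if (!scheme || !strcmp(scheme, "gluster") ||
        !strcmp(scheme, "gluster+tcp")) {
        return "tcp";
    }
    if (!strcmp(scheme, "gluster+unix")) {
        return "unix";
    }
    if (!strcmp(scheme, "gluster+rdma")) {
        /* deprecated usage: keep it working, but tell the user */
        fprintf(stderr,
                "warning: gluster+rdma is deprecated, using tcp instead\n");
        return "tcp";
    }
    return NULL;
}
```

With this, existing gluster+rdma setups keep booting while the QAPI schema itself never grows an 'rdma' transport value.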


* Re: [Qemu-devel] [PATCH v19 5/5] block/gluster: add support for multiple gluster servers
  2016-07-18 11:51     ` Prasanna Kalever
@ 2016-07-18 14:39       ` Markus Armbruster
  2016-07-18 19:02         ` Prasanna Kalever
  0 siblings, 1 reply; 23+ messages in thread
From: Markus Armbruster @ 2016-07-18 14:39 UTC (permalink / raw)
  To: Prasanna Kalever
  Cc: Raghavendra Talur, Jeffrey Cody, Prasanna Kumar Kalever,
	qemu-devel, Vijay Bellur

Prasanna Kalever <pkalever@redhat.com> writes:

> On Mon, Jul 18, 2016 at 3:47 PM, Markus Armbruster <armbru@redhat.com> wrote:
>> Prasanna Kumar Kalever <prasanna.kalever@redhat.com> writes:
>>
>>> This patch adds a way to specify multiple volfile servers to the gluster
>>> block backend of QEMU with tcp|rdma transport types and their port numbers.
>>>
>>> Problem:
>>>
>>> Currently VM Image on gluster volume is specified like this:
>>>
>>> file=gluster[+tcp]://host[:port]/testvol/a.img
>>>
>>> Say we have three hosts in a trusted pool with replica 3 volume in action.
>>> When the host mentioned in the command above goes down for some reason,
>>> the other two hosts are still available. But there's currently no way
>>> to tell QEMU about them.
>>>
>>> Solution:
>>>
>>> New way of specifying VM Image on gluster volume with volfile servers:
>>> (We still support old syntax to maintain backward compatibility)
>>>
>>> Basic command line syntax looks like:
>>>
>>> Pattern I:
>>>  -drive driver=gluster,
>>>         volume=testvol,path=/path/a.raw,[debug=N,]
>>>         server.0.type=tcp,
>>>         server.0.host=1.2.3.4,
>>>        [server.0.port=24007,]
>>>         server.1.type=unix,
>>>         server.1.socket=/path/socketfile
>>>
>>> Pattern II:
>>>  'json:{"driver":"qcow2","file":{"driver":"gluster",
>>>        "volume":"testvol","path":"/path/a.qcow2",["debug":N,]
>>>        "server":[{hostinfo_1}, ...{hostinfo_N}]}}'
>>>
>>>    driver      => 'gluster' (protocol name)
>>>    volume      => name of gluster volume where our VM image resides
>>>    path        => absolute path of image in gluster volume
>>>   [debug]      => libgfapi loglevel [(0 - 9) default 4 -> Error]
>>>
>>>   {hostinfo}   => {{type:"tcp",host:"1.2.3.4"[,port=24007]},
>>>                    {type:"unix",socket:"/path/sockfile"}}
>>>
>>>    type        => transport type used to connect to gluster management daemon,
>>>                   it can be tcp|unix
>>>    host        => host address (hostname/ipv4/ipv6 addresses/socket path)
>>>   [port]       => port number on which glusterd is listening. (default 24007)
>>>    socket      => path to socket file
>>>
>>> Examples:
>>> 1.
>>>  -drive driver=qcow2,file.driver=gluster,
>>>         file.volume=testvol,file.path=/path/a.qcow2,file.debug=9,
>>>         file.server.0.type=tcp,
>>>         file.server.0.host=1.2.3.4,
>>>         file.server.0.port=24007,
>>>         file.server.1.type=tcp,
>>>         file.server.1.socket=/var/run/glusterd.socket
>>> 2.
>>>  'json:{"driver":"qcow2","file":{"driver":"gluster","volume":"testvol",
>>>          "path":"/path/a.qcow2","debug":9,"server":
>>>          [{type:"tcp",host:"1.2.3.4",port=24007},
>>>           {type:"unix",socket:"/var/run/glusterd.socket"}] } }'
>>>
>>> This patch gives a mechanism to provide all the server addresses, which are in
>>> replica set, so in case host1 is down VM can still boot from any of the
>>> active hosts.
>>>
>>> This is equivalent to the backup-volfile-servers option supported by
>>> mount.glusterfs (FUSE way of mounting gluster volume)
>>>
>>> credits: sincere thanks to all the supporters
>>>
>>> Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
>>> ---
>>>  block/gluster.c      | 347 +++++++++++++++++++++++++++++++++++++++++++++------
>>>  qapi/block-core.json |   2 +-
>>>  2 files changed, 307 insertions(+), 42 deletions(-)
>>>
>>> diff --git a/block/gluster.c b/block/gluster.c
>>> index ff1e783..fd2279d 100644
>>> --- a/block/gluster.c
>>> +++ b/block/gluster.c
>>> @@ -12,8 +12,16 @@
>>>  #include "block/block_int.h"
>>>  #include "qapi/error.h"
>>>  #include "qemu/uri.h"
>>> +#include "qemu/error-report.h"
>>>
>>>  #define GLUSTER_OPT_FILENAME        "filename"
>>> +#define GLUSTER_OPT_VOLUME          "volume"
>>> +#define GLUSTER_OPT_PATH            "path"
>>> +#define GLUSTER_OPT_TYPE            "type"
>>> +#define GLUSTER_OPT_SERVER_PATTERN  "server."
>>> +#define GLUSTER_OPT_HOST            "host"
>>> +#define GLUSTER_OPT_PORT            "port"
>>> +#define GLUSTER_OPT_SOCKET          "socket"
>>>  #define GLUSTER_OPT_DEBUG           "debug"
>>>  #define GLUSTER_DEFAULT_PORT        24007
>>>  #define GLUSTER_DEBUG_DEFAULT       4
>>> @@ -82,6 +90,77 @@ static QemuOptsList runtime_opts = {
>>>      },
>>>  };
>>>
>>> +static QemuOptsList runtime_json_opts = {
>>> +    .name = "gluster_json",
>>> +    .head = QTAILQ_HEAD_INITIALIZER(runtime_json_opts.head),
>>> +    .desc = {
>>> +        {
>>> +            .name = GLUSTER_OPT_VOLUME,
>>> +            .type = QEMU_OPT_STRING,
>>> +            .help = "name of gluster volume where VM image resides",
>>> +        },
>>> +        {
>>> +            .name = GLUSTER_OPT_PATH,
>>> +            .type = QEMU_OPT_STRING,
>>> +            .help = "absolute path to image file in gluster volume",
>>> +        },
>>> +        {
>>> +            .name = GLUSTER_OPT_DEBUG,
>>> +            .type = QEMU_OPT_NUMBER,
>>> +            .help = "Gluster log level, valid range is 0-9",
>>> +        },
>>> +        { /* end of list */ }
>>> +    },
>>> +};
>>> +
>>> +static QemuOptsList runtime_type_opts = {
>>> +    .name = "gluster_type",
>>> +    .head = QTAILQ_HEAD_INITIALIZER(runtime_type_opts.head),
>>> +    .desc = {
>>> +        {
>>> +            .name = GLUSTER_OPT_TYPE,
>>> +            .type = QEMU_OPT_STRING,
>>> +            .help = "tcp|unix",
>>> +        },
>>> +        { /* end of list */ }
>>> +    },
>>> +};
>>> +
>>> +static QemuOptsList runtime_unix_opts = {
>>> +    .name = "gluster_unix",
>>> +    .head = QTAILQ_HEAD_INITIALIZER(runtime_unix_opts.head),
>>> +    .desc = {
>>> +        {
>>> +            .name = GLUSTER_OPT_SOCKET,
>>> +            .type = QEMU_OPT_STRING,
>>> +            .help = "socket file path)",
>>> +        },
>>> +        { /* end of list */ }
>>> +    },
>>> +};
>>> +
>>> +static QemuOptsList runtime_tcp_opts = {
>>> +    .name = "gluster_tcp",
>>> +    .head = QTAILQ_HEAD_INITIALIZER(runtime_tcp_opts.head),
>>> +    .desc = {
>>> +        {
>>> +            .name = GLUSTER_OPT_TYPE,
>>> +            .type = QEMU_OPT_STRING,
>>> +            .help = "tcp|unix",
>>> +        },
>>> +        {
>>> +            .name = GLUSTER_OPT_HOST,
>>> +            .type = QEMU_OPT_STRING,
>>> +            .help = "host address (hostname/ipv4/ipv6 addresses)",
>>> +        },
>>> +        {
>>> +            .name = GLUSTER_OPT_PORT,
>>> +            .type = QEMU_OPT_NUMBER,
>>> +            .help = "port number on which glusterd is listening (default 24007)",
>>> +        },
>>> +        { /* end of list */ }
>>> +    },
>>> +};
>>>
>>>  static int parse_volume_options(BlockdevOptionsGluster *gconf, char *path)
>>>  {
>>> @@ -157,7 +236,8 @@ static int qemu_gluster_parse_uri(BlockdevOptionsGluster *gconf,
>>>          return -EINVAL;
>>>      }
>>>
>>> -    gconf->server = gsconf = g_new0(GlusterServer, 1);
>>> +    gconf->server = g_new0(GlusterServerList, 1);
>>> +    gconf->server->value = gsconf = g_new0(GlusterServer, 1);
>>>
>>>      /* transport */
>>>      if (!uri->scheme || !strcmp(uri->scheme, "gluster")) {
>>> @@ -211,38 +291,34 @@ out:
>>>      return ret;
>>>  }
>>>
>>> -static struct glfs *qemu_gluster_init(BlockdevOptionsGluster *gconf,
>>> -                                      const char *filename, Error **errp)
>>> +static struct glfs *qemu_gluster_glfs_init(BlockdevOptionsGluster *gconf,
>>> +                                           Error **errp)
>>>  {
>>> -    struct glfs *glfs = NULL;
>>> +    struct glfs *glfs;
>>>      int ret;
>>>      int old_errno;
>>> -
>>> -    ret = qemu_gluster_parse_uri(gconf, filename);
>>> -    if (ret < 0) {
>>> -        error_setg(errp, "Usage: file=gluster[+transport]://[host[:port]]/"
>>> -                         "volume/path[?socket=...]");
>>> -        errno = -ret;
>>> -        goto out;
>>> -    }
>>> +    GlusterServerList *server;
>>>
>>>      glfs = glfs_new(gconf->volume);
>>>      if (!glfs) {
>>>          goto out;
>>>      }
>>>
>>> -    if (gconf->server->type == GLUSTER_TRANSPORT_UNIX) {
>>> -        ret = glfs_set_volfile_server(glfs,
>>> -                                      GlusterTransport_lookup[gconf->server->type],
>>> -                                      gconf->server->u.q_unix.socket, 0);
>>> -    } else {
>>> -        ret = glfs_set_volfile_server(glfs,
>>> -                                      GlusterTransport_lookup[gconf->server->type],
>>> -                                      gconf->server->u.tcp.host,
>>> -                                      gconf->server->u.tcp.port);
>>> -    }
>>> -    if (ret < 0) {
>>> -        goto out;
>>> +    for (server = gconf->server; server; server = server->next) {
>>> +        if (server->value->type  == GLUSTER_TRANSPORT_UNIX) {
>>> +            ret = glfs_set_volfile_server(glfs,
>>> +                                          GlusterTransport_lookup[server->value->type],
>>> +                                          server->value->u.q_unix.socket, 0);
>>> +        } else {
>>> +            ret = glfs_set_volfile_server(glfs,
>>> +                                          GlusterTransport_lookup[server->value->type],
>>> +                                          server->value->u.tcp.host,
>>> +                                          server->value->u.tcp.port);
>>
>> server->value.u.tcp.port is optional.  Using it without checking
>> server->value.u.tcp.has_port relies on the default value being zero.  We
>> don't actually document that.  Perhaps we should.
>
> I have made sure to fill it in the code, also marked
> gsconf->u.tcp.has_port = true;
>
>>
>>> +        }
>>> +
>>> +        if (ret < 0) {
>>> +            goto out;
>>> +        }
>>>      }
>>>
>>>      ret = glfs_set_logging(glfs, "-", gconf->debug_level);
>>> @@ -252,19 +328,19 @@ static struct glfs *qemu_gluster_init(BlockdevOptionsGluster *gconf,
>>>
>>>      ret = glfs_init(glfs);
>>>      if (ret) {
>>> -        if (gconf->server->type == GLUSTER_TRANSPORT_UNIX) {
>>> -            error_setg(errp,
>>> -                       "Gluster connection for volume: %s, path: %s "
>>> -                       "failed to connect on socket %s ",
>>> -                       gconf->volume, gconf->path,
>>> -                       gconf->server->u.q_unix.socket);
>>> -        } else {
>>> -            error_setg(errp,
>>> -                       "Gluster connection for volume: %s, path: %s "
>>> -                       "failed to connect  host  %s and port %d ",
>>> -                       gconf->volume, gconf->path,
>>> -                       gconf->server->u.tcp.host, gconf->server->u.tcp.port);
>>> +        error_setg(errp, "Gluster connection for volume: %s, path: %s ",
>>> +                   gconf->volume, gconf->path);
>>> +        for (server = gconf->server; server; server = server->next) {
>>> +            if (server->value->type  == GLUSTER_TRANSPORT_UNIX) {
>>> +                error_append_hint(errp, "failed to connect on socket %s ",
>>> +                                  server->value->u.q_unix.socket);
>>> +            } else {
>>> +                error_append_hint(errp, "failed to connect host %s and port %d ",
>>> +                                  server->value->u.tcp.host,
>>> +                                  server->value->u.tcp.port);
>>> +            }
>>>          }
>>> +        error_append_hint(errp, "Please refer to gluster logs for more info ");
>>
>> Your code produces the error message "Gluster connection for volume:
>> VOLUME, path: PATH ", which makes no sense.
>>
>> It also produces a hint that is a concatenation of one or more "failed
>> to connect on FOO", followed by "Please refer to ..." without any
>> punctuation, but with a trailing space.
>>
>> The error message must make sense on its own, without the hint.
>>
>> A fixed error message could look like this:
>>
>>     Gluster connection for volume VOLUME, path PATH failed to connect on FOO, on BAR, and on BAZ
>>
>> or with a little less effort
>>
>>     Gluster connection for volume VOLUME, path PATH failed to connect on FOO, BAR, BAZ
>>
>> or simply
>>
>>     Can't connect to Gluster volume VOLUME, path PATH at FOO, BAR, BAZ
>>
>> You can't build up the error message with error_append_hint().  Using it
>> to append a hint pointing to Gluster logs is fine.
>
> okay.
>
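One way to implement Markus's suggestion of a single self-contained error message listing all servers is to accumulate the list into a buffer before the one error_setg() call. The sketch below is plain C with a fixed buffer standing in for QEMU's error API; the struct and function names are illustrative only:

```c
#include <stdio.h>
#include <string.h>

struct server_desc {
    const char *desc;   /* "host:port" for tcp, socket path for unix */
};

/* Build "Can't connect to Gluster volume VOL, path PATH at A, B, C"
 * into buf: a sketch of collecting the whole server list first, so the
 * error message makes sense on its own and any "see gluster logs" text
 * can stay a separate hint. */
static void format_connect_error(char *buf, size_t len,
                                 const char *volume, const char *path,
                                 const struct server_desc *srv, int nsrv)
{
    size_t off = snprintf(buf, len,
                          "Can't connect to Gluster volume %s, path %s at",
                          volume, path);
    for (int i = 0; i < nsrv && off < len; i++) {
        off += snprintf(buf + off, len - off, "%s %s",
                        i ? "," : "", srv[i].desc);
    }
}
```

In the real code the finished string would be handed to error_setg(), and only the "Please refer to gluster logs" text would go through error_append_hint().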
>>
>>>
>>>          /* glfs_init sometimes doesn't set errno although docs suggest that */
>>>          if (errno == 0) {
>>> @@ -284,6 +360,195 @@ out:
>>>      return NULL;
>>>  }
>>>
>>> +static int qapi_enum_parse(const char *opt)
>>> +{
>>> +    int i;
>>> +
>>> +    if (!opt) {
>>> +        return GLUSTER_TRANSPORT__MAX;
>>> +    }
>>> +
>>> +    for (i = 0; i < GLUSTER_TRANSPORT__MAX; i++) {
>>> +        if (!strcmp(opt, GlusterTransport_lookup[i])) {
>>> +            return i;
>>> +        }
>>> +    }
>>> +
>>> +    return i;
>>> +}
>>> +
>>> +/*
>>> + * Convert the json formatted command line into qapi.
>>> +*/
>>> +static int qemu_gluster_parse_json(BlockdevOptionsGluster *gconf,
>>> +                                  QDict *options, Error **errp)
>>> +{
>>> +    QemuOpts *opts;
>>> +    GlusterServer *gsconf;
>>> +    GlusterServerList *curr = NULL;
>>> +    QDict *backing_options = NULL;
>>> +    Error *local_err = NULL;
>>> +    char *str = NULL;
>>> +    const char *ptr;
>>> +    size_t num_servers;
>>> +    int i;
>>> +
>>> +    /* create opts info from runtime_json_opts list */
>>> +    opts = qemu_opts_create(&runtime_json_opts, NULL, 0, &error_abort);
>>> +    qemu_opts_absorb_qdict(opts, options, &local_err);
>>> +    if (local_err) {
>>> +        goto out;
>>> +    }
>>> +
>>> +    num_servers = qdict_array_entries(options, GLUSTER_OPT_SERVER_PATTERN);
>>> +    if (num_servers < 1) {
>>> +        error_setg(&local_err, "please provide 'server' option with valid "
>>> +                               "fields in array of hostinfo ");
>>
>> This isn't an error message, it's instructions what to do.  Such
>> instructions can be useful when they're correct, but they can't replace
>> an error message.  The error message should state what's wrong.  No
>> less, no more.
>
> Let me know how to log a hint from a leaf function, where the Error
> object is not created yet.
> I also feel it's more like an error in the usage of the json command.

The error we diagnose here is that @options either lacks a 'server' key,
or its value is invalid.  Since I'm too lazy to come up with an error
message for that, I grep for this kind of error (value of
qdict_array_entries() negative), and find one in quorum.c:

    s->num_children = qdict_array_entries(options, "children.");
    if (s->num_children < 0) {
        error_setg(&local_err, "Option children is not a valid array");
        ret = -EINVAL;
        goto exit;
    }

>> Moreover, avoid prefixes like "qemu_gluster:".  Usually, the fact that
>> this is about Gluster is obvious.  When it isn't, providing context is
>> the caller's job.
>
> I think I have removed almost all 'qemu_gluster:' prefixes in v19,
> respecting your comments on v18.

Pasto, sorry.

>>> +        goto out;
>>> +    }
>>> +
>>> +    ptr = qemu_opt_get(opts, GLUSTER_OPT_VOLUME);
>>> +    if (!ptr) {
>>> +        error_setg(&local_err, "please provide 'volume' option ");
>>
>> Not an error message.
>
> Also same here ...

The usual error message for a missing mandatory key 'volume' is
"Parameter 'volume' is missing".  For historical reasons, that's
commonly written as

    error_setg(errp, QERR_MISSING_PARAMETER, "volume");

If I didn't know that already, I'd try to find out the same way as for
the previous one: grep for this kind of error (value of qemu_opt_get()
null), stick to the most common way to report it.

Feel free to use a message that's more tailored to this particular
error.  My point here isn't that you should reuse existing error
messages (that's a complete non-requirement), only that an error message
should first and foremost tell the user what's wrong.  Telling him what
he might have to do to fix it is secondary.  It might be helpful, but in
practice it's often misleading, because users often screw up in more
ways than you anticipated when writing the error message.

>>> +        goto out;
>>> +    }
>>> +    gconf->volume = g_strdup(ptr);
>>> +
>>> +    ptr = qemu_opt_get(opts, GLUSTER_OPT_PATH);
>>> +    if (!ptr) {
>>> +        error_setg(&local_err, "please provide 'path' option ");
>>
>> Not an error message.
>
> here ...
>
>>
>>> +        goto out;
>>> +    }
>>> +    gconf->path = g_strdup(ptr);
>>> +    qemu_opts_del(opts);
>>> +
>>> +    for (i = 0; i < num_servers; i++) {
>>> +        str = g_strdup_printf(GLUSTER_OPT_SERVER_PATTERN"%d.", i);
>>> +        qdict_extract_subqdict(options, &backing_options, str);
>>> +
>>> +        /* create opts info from runtime_type_opts list */
>>> +        opts = qemu_opts_create(&runtime_type_opts, NULL, 0, &error_abort);
>>> +        qemu_opts_absorb_qdict(opts, backing_options, &local_err);
>>> +        if (local_err) {
>>> +            goto out;
>>> +        }
>>> +
>>> +        ptr = qemu_opt_get(opts, GLUSTER_OPT_TYPE);
>>> +        gsconf = g_new0(GlusterServer, 1);
>>> +        gsconf->type = qapi_enum_parse(ptr);
>>> +        if (!ptr) {
>>> +            error_setg(&local_err, "please provide 'type' in hostinfo.%d as "
>>> +                                   "tcp|unix ", i);
>>
>> Not an error message.
>
> and here ...
> How do I say what's really wrong in the command, which could be long
> (if one provides N servers in the list)?

The usual error message for a key 'type' having a value other than 'tcp'
or 'unix' would be "Parameter 'type' expects 'tcp' or 'unix'".  For
historical reasons, that's commonly written as

    error_setg(errp, QERR_INVALID_PARAMETER_VALUE, "type", "tcp or unix");

If I didn't know that already, I'd again grep.

Once again, feel free to improve on this message.
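Folding both 'type' checks above into one place might look like the plain-C sketch below. The enum, lookup table, and message buffer are my own stand-ins for the QAPI-generated GlusterTransport enum, GlusterTransport_lookup[], and QERR_INVALID_PARAMETER_VALUE:

```c
#include <stddef.h>
#include <stdio.h>
#include <string.h>

enum { TRANSPORT_TCP, TRANSPORT_UNIX, TRANSPORT__MAX };

static const char *const transport_lookup[TRANSPORT__MAX] = {
    "tcp", "unix"
};

/* Map a user-supplied 'type' value to a transport index.  A missing or
 * unrecognized value yields TRANSPORT__MAX plus an error message in the
 * usual "Parameter 'X' expects ..." form, so the caller needs only one
 * check instead of separate missing/invalid branches. */
static int parse_transport(const char *opt, char *errmsg, size_t len)
{
    if (opt) {
        for (int i = 0; i < TRANSPORT__MAX; i++) {
            if (!strcmp(opt, transport_lookup[i])) {
                return i;
            }
        }
    }
    snprintf(errmsg, len, "Parameter 'type' expects 'tcp' or 'unix'");
    return TRANSPORT__MAX;
}
```

The caller would then report the message (via error_setg() in the real code) and bail out whenever TRANSPORT__MAX comes back.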

>>> +            goto out;
>>> +
>>> +        }
>>> +        if (gsconf->type == GLUSTER_TRANSPORT__MAX) {
>>> +            error_setg(&local_err, "'type' in hostinfo.%d can be tcp|unix ", i);
>>> +            goto out;
>>> +        }
>>> +        qemu_opts_del(opts);
>>> +
>>> +        if (gsconf->type == GLUSTER_TRANSPORT_TCP) {
>>> +            /* create opts info from runtime_tcp_opts list */
>>> +            opts = qemu_opts_create(&runtime_tcp_opts, NULL, 0, &error_abort);
>>> +            qemu_opts_absorb_qdict(opts, backing_options, &local_err);
>>> +            if (local_err) {
>>> +                goto out;
>>> +            }
>>> +
>>> +            ptr = qemu_opt_get(opts, GLUSTER_OPT_HOST);
>>> +            if (!ptr) {
>>> +                error_setg(&local_err, "please provide 'host' in hostinfo.%d ",
>>> +                           i);
>>> +                goto out;
>>> +            }
>>> +            gsconf->u.tcp.host = g_strdup(ptr);
>>> +            ptr = qemu_opt_get(opts, GLUSTER_OPT_PORT);
>>> +            gsconf->u.tcp.port = qemu_opt_get_number(opts, GLUSTER_OPT_PORT,
>>> +                                               GLUSTER_DEFAULT_PORT);
>>> +            gsconf->u.tcp.has_port = true;
>>> +            qemu_opts_del(opts);
>>> +        } else {
>>> +            /* create opts info from runtime_unix_opts list */
>>> +            opts = qemu_opts_create(&runtime_unix_opts, NULL, 0, &error_abort);
>>> +            qemu_opts_absorb_qdict(opts, backing_options, &local_err);
>>> +            if (local_err) {
>>> +                goto out;
>>> +            }
>>> +
>>> +            ptr = qemu_opt_get(opts, GLUSTER_OPT_SOCKET);
>>> +            if (!ptr) {
>>> +                error_setg(&local_err, "please provide 'socket' in hostinfo.%d ",
>>> +                           i);
>>> +                goto out;
>>> +            }
>>> +            gsconf->u.q_unix.socket = g_strdup(ptr);
>>> +            qemu_opts_del(opts);
>>> +        }
>>> +
>>> +        if (gconf->server == NULL) {
>>> +            gconf->server = g_new0(GlusterServerList, 1);
>>> +            gconf->server->value = gsconf;
>>> +            curr = gconf->server;
>>> +        } else {
>>> +            curr->next = g_new0(GlusterServerList, 1);
>>> +            curr->next->value = gsconf;
>>> +            curr = curr->next;
>>> +        }
>>> +
>>> +        qdict_del(backing_options, str);
>>> +        g_free(str);
>>> +        str = NULL;
>>> +    }
>>> +
>>> +    return 0;
>>> +
>>> +out:
>>> +    if (local_err) {
>>> +        error_propagate(errp, local_err);
>>> +    }
>>
>> error_propagate() does the right thing when its second argument is
>> null.  Please drop the conditional.
>
> Sure
>
>>
>>> +    qemu_opts_del(opts);
>>> +    if (str) {
>>> +        qdict_del(backing_options, str);
>>> +        g_free(str);
>>> +    }
>>> +    errno = EINVAL;
>>> +    return -errno;
>>> +}
>>> +
>>> +static struct glfs *qemu_gluster_init(BlockdevOptionsGluster *gconf,
>>> +                                      const char *filename,
>>> +                                      QDict *options, Error **errp)
>>> +{
>>> +    int ret;
>>> +    if (filename) {
>>> +        ret = qemu_gluster_parse_uri(gconf, filename);
>>> +        if (ret < 0) {
>>> +            error_setg(errp, "Usage: file=gluster[+transport]://[host[:port]]/"
>>> +                             "volume/path[?socket=...]");
>>
>> Not an error message.
>
> Can you suggest the change required here for displaying hints from a
> leaf function? I am worried about missing your intention here, since you
> have most of your v18 comments here.
>
> IIRC, the leaf function which wants to display out a hint/msg/error
> should have an Error object, which is created by error_setg() and
> error_append_hint() appends to it.

The best you can do here is a generic error message like "invalid URI".
To be more specific, you have to create the error in
qemu_gluster_parse_uri(), because only there you know what exactly is
wrong.  Requires giving it an Error **errp parameter.  I'm *not* asking
you for that.  It's a separate improvement.

"invalid URI" isn't as horrible as you might think, because the actual
error message should come out like this:

    qemu-system-x86_64: -drive file=gluster+tcp://junk: invalid URI

which I find clearer than

    qemu-system-x86_64: -drive file=gluster+tcp://junk: Usage: file=gluster[+transport]://[server[:port]]/volname/image[?socket=...]

I reiterate my plea to *test* error messages.

We generally don't print usage information hints after errors.  If we
did, then the code would look like this:

            error_setg(errp, "invalid URI");
            error_append_hint(errp, "Usage: file=gluster[+transport]:"
                              "//[host[:port]]/volume/path[?socket=...]\n");

Should look like this:

    qemu-system-x86_64: -drive file=gluster+tcp://junk: invalid URI
    Usage: file=gluster[+transport]://[host[:port]]/volume/path[?socket=...]

By the way, not sure what [?socket...] means.

> Sorry for disappointing you Markus.

We'll get there :)

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [Qemu-devel] [PATCH v19 4/5] block/gluster: using new qapi schema
  2016-07-18 13:11       ` Markus Armbruster
@ 2016-07-18 18:28         ` Prasanna Kalever
  2016-07-19 11:12           ` Markus Armbruster
  0 siblings, 1 reply; 23+ messages in thread
From: Prasanna Kalever @ 2016-07-18 18:28 UTC (permalink / raw)
  To: Markus Armbruster
  Cc: Raghavendra Talur, Jeffrey Cody, Prasanna Kumar Kalever,
	qemu-devel, Vijay Bellur

On Mon, Jul 18, 2016 at 6:41 PM, Markus Armbruster <armbru@redhat.com> wrote:
> Prasanna Kalever <pkalever@redhat.com> writes:
>
>> On Mon, Jul 18, 2016 at 2:59 PM, Markus Armbruster <armbru@redhat.com> wrote:
>>> Prasanna Kumar Kalever <prasanna.kalever@redhat.com> writes:
>>>
>>>> this patch adds 'GlusterServer' related schema in qapi/block-core.json
>>>>
>>>> Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
>>>> ---
>>>>  block/gluster.c      | 111 +++++++++++++++++++++++++++++----------------------
>>>>  qapi/block-core.json |  94 ++++++++++++++++++++++++++++++++++++++++---
>>>>  2 files changed, 153 insertions(+), 52 deletions(-)
>>> [Skipping ahead to QAPI schema...]
>>>> diff --git a/qapi/block-core.json b/qapi/block-core.json
>>>> index a7fdb28..d7b5c76 100644
>>>> --- a/qapi/block-core.json
>>>> +++ b/qapi/block-core.json
>>>> @@ -1658,13 +1658,14 @@
>>>>  # @host_device, @host_cdrom: Since 2.1
>>>>  #
>>>>  # Since: 2.0
>>>> +# @gluster: Since 2.7
>>>>  ##
>>>>  { 'enum': 'BlockdevDriver',
>>>>    'data': [ 'archipelago', 'blkdebug', 'blkverify', 'bochs', 'cloop',
>>>> -            'dmg', 'file', 'ftp', 'ftps', 'host_cdrom', 'host_device',
>>>> -            'http', 'https', 'luks', 'null-aio', 'null-co', 'parallels',
>>>> -            'qcow', 'qcow2', 'qed', 'quorum', 'raw', 'tftp', 'vdi', 'vhdx',
>>>> -            'vmdk', 'vpc', 'vvfat' ] }
>>>> +            'dmg', 'file', 'ftp', 'ftps', 'gluster', 'host_cdrom',
>>>> +            'host_device', 'http', 'https', 'luks', 'null-aio', 'null-co',
>>>> +            'parallels', 'qcow', 'qcow2', 'qed', 'quorum', 'raw', 'tftp',
>>>> +            'vdi', 'vhdx', 'vmdk', 'vpc', 'vvfat' ] }
>>>>
>>>>  ##
>>>>  # @BlockdevOptionsFile
>>>> @@ -2057,6 +2058,89 @@
>>>>              '*read-pattern': 'QuorumReadPattern' } }
>>>>
>>>>  ##
>>>> +# @GlusterTransport
>>>> +#
>>>> +# An enumeration of Gluster transport type
>>>> +#
>>>> +# @tcp:   TCP   - Transmission Control Protocol
>>>> +#
>>>> +# @unix:  UNIX  - Unix domain socket
>>>> +#
>>>> +# Since: 2.7
>>>> +##
>>>> +{ 'enum': 'GlusterTransport',
>>>> +  'data': [ 'unix', 'tcp' ] }
>>>> +
>>>> +##
>>>> +# @GlusterUnixSocketAddress
>>>> +#
>>>> +# Captures a socket address in the local ("Unix socket") namespace.
>>>> +#
>>>> +# @socket:   absolute path to socket file
>>>> +#
>>>> +# Since 2.7
>>>> +##
>>>> +{ 'struct': 'GlusterUnixSocketAddress',
>>>> +  'data': { 'socket': 'str' } }
>>>
>>> Compare:
>>>
>>>    ##
>>>    # @UnixSocketAddress
>>>    #
>>>    # Captures a socket address in the local ("Unix socket") namespace.
>>>    #
>>>    # @path: filesystem path to use
>>>    #
>>>    # Since 1.3
>>>    ##
>>>    { 'struct': 'UnixSocketAddress',
>>>      'data': {
>>>        'path': 'str' } }
>>>
>>>> +
>>>> +##
>>>> +# @GlusterInetSocketAddress
>>>> +#
>>>> +# Captures a socket address or address range in the Internet namespace.
>>>> +#
>>>> +# @host:  host part of the address
>>>> +#
>>>> +# @port:  #optional port part of the address, or lowest port if @to is present
>>>
>>> There is no @to.
>>>
>>> What's the default port?
>>
>> #define GLUSTER_DEFAULT_PORT        24007
>
> I know, but the poor reader of the docs may not, so the docs better
> spell it out :)

:) Done

>
>>>> +#
>>>> +# Since 2.7
>>>> +##
>>>> +{ 'struct': 'GlusterInetSocketAddress',
>>>> +  'data': { 'host': 'str',
>>>> +            '*port': 'uint16' } }
>>>
>>> Compare:
>>>
>>>    ##
>>>    # @InetSocketAddress
>>>    #
>>>    # Captures a socket address or address range in the Internet namespace.
>>>    #
>>>    # @host: host part of the address
>>>    #
>>>    # @port: port part of the address, or lowest port if @to is present
>>>    #
>>>    # @to: highest port to try
>>>    #
>>>    # @ipv4: whether to accept IPv4 addresses, default try both IPv4 and IPv6
>>>    #        #optional
>>>    #
>>>    # @ipv6: whether to accept IPv6 addresses, default try both IPv4 and IPv6
>>>    #        #optional
>>>    #
>>>    # Since 1.3
>>>    ##
>>>    { 'struct': 'InetSocketAddress',
>>>      'data': {
>>>        'host': 'str',
>>>        'port': 'str',
>>>        '*to': 'uint16',
>>>        '*ipv4': 'bool',
>>>        '*ipv6': 'bool' } }
>>>
>>>> +
>>>> +##
>>>> +# @GlusterServer
>>>> +#
>>>> +# Captures the address of a socket
>>>> +#
>>>> +# Details for connecting to a gluster server
>>>> +#
>>>> +# @type:       Transport type used for gluster connection
>>>> +#
>>>> +# @unix:       socket file
>>>> +#
>>>> +# @tcp:        host address and port number
>>>> +#
>>>> +# Since: 2.7
>>>> +##
>>>> +{ 'union': 'GlusterServer',
>>>> +  'base': { 'type': 'GlusterTransport' },
>>>> +  'discriminator': 'type',
>>>> +  'data': { 'unix': 'GlusterUnixSocketAddress',
>>>> +            'tcp': 'GlusterInetSocketAddress' } }
>>>> +
>>>
>>> Compare:
>>>
>>>    ##
>>>    # @SocketAddress
>>>    #
>>>    # Captures the address of a socket, which could also be a named file descriptor
>>>    #
>>>    # Since 1.3
>>>    ##
>>>    { 'union': 'SocketAddress',
>>>      'data': {
>>>        'inet': 'InetSocketAddress',
>>>        'unix': 'UnixSocketAddress',
>>>        'fd': 'String' } }
>>>
>>> You cleaned up the confusing abuse of @host as Unix domain socket file
>>> name.  Good.
>>>
>>> You're still defining your own QAPI types instead of using the existing
>>> ones.  To see whether that's a good idea, let's compare your definitions
>>> to the existing ones:
>
> I've since been gently referred to this note in the cover letter:
>
>     patch 4/5 (i.e. 3/4 in v18) now uses union discriminator, I have
>     made a choice to use gluster with custom schema since
>     @UnixSocketAddress uses 'path' as key, which may be confusing with
>     gluster,
>
> Can you briefly explain why 'path' may be confusing?
>

As you would have noticed below in my previous reply about my ACK

>     and in @InetSocketAddress port was str again I have made a choice to
>     keep it uint16 which really make sense.
>
> A port can be given in symbolic form (service name) and in numeric form
> (port number), just like a host.  For instance, TCP service name "smtp"
> means port number 25.  See also services(5).  Naturally, a symbolic name
> only exists for sufficiently well-known ports.
>
> Interfaces should accept both service name and port.  InetSocketAddress
> does, in the same way as getaddrinfo(): it takes a string, which may
> either be a service name or a port number.  Perhaps it should take an
> alternate of int16 and string instead, but that's a separate question.

This really improved my understanding, thanks Markus.
Having agreed with that, I should point out that the underlying API,
glfs_set_volfile_server(), accepts only an int;

look at its signature:

int
pub_glfs_set_volfile_server (struct glfs *fs, const char *transport,
                             const char *host, int port)

So I hope you agree with keeping port as an int;

>
>     Hmmm.. As Markus suggested in v18 qemu_gluster_parseuri() is
>     *parse_uri() same with *parse_json() (in 5/5)
>
> Not sure I got that.  Do you mean "I renamed qemu_gluster_parseuri() to
> qemu_gluster_parse_uri() for consistency with
> qemu_gluster_parse_json()"?

In a comment you mentioned "parsejson isn't a word.  "parse_json"
would be two :)"

So, to keep parse_uri consistent with parse_json, I made the change.

If I have overreached, please excuse me :)

>
>>>
>>> * GlusterUnixSocketAddress vs. UnixSocketAddress
>>>
>>>   Identical but for the member name.  Why can't we use
>>>   UnixSocketAddress?
>>
>> May be you are right, it may not worth just for a member name.
>
> Can't say, yet; that's why I ask you to explain why it may be confusing.
>
>>> * GlusterInetSocketAddress vs. InetSocketAddress
>>>
>>>   Changes in GlusterInetSocketAddress over InetSocketAddress:
>>>
>>>   - @port is optional
>>>
>>>     Convenience feature.  We can discuss making it optional in
>>>     InetSocketAddress, too.
>>
>> sure.
>>
>>>
>>>   - @port has type 'uint16' instead of 'str'
>>>
>>>     No support for service names.  Why is that a good idea?
>>
>> I honestly do not understand tying service names to ports.
>> As far as I understand, at least for the gluster use case, a port is just
>> an integer ranging from 0 - 65535 (mostly 24007)
>> I am happy to learn this
>
> Hope I was able to explain this above.

yes!

>
>>>   - Lacks @to
>>>
>>>     No support for trying a range of ports.  I guess that simply makes
>>>     no sense for a Gluster client.  I guess makes no sense in some other
>>>     uses of InetSocketAddress, too.  Eric has proposed to split off
>>>     InetSocketAddressRange off InetSocketAddress.
>> I still don't understand why we need to loop through the ports.
>
> Best explained by looking at a use of this feature.  With -display vnc,
> we start a VNC server.  By default, it listens on port 5900.  If this
> port isn't available, it fails like this:
>
>     qemu-system-x86_64:-display vnc=:0: Failed to start VNC server: Failed to bind socket: Address already in use
>
> If you don't care about the port, you can use something like "-display
> vnc=:0,to=9" to bind the first free port in the range of 5900 to 5909.
>

In the gluster case, guessing which port is in use is non-trivial.
It is usually 24007, or some user-chosen value (say 5007), which is
unpredictable, so looping over a range may not be worthwhile.

So, IMO, port ranges do not apply to gluster port selection;

>>>   - Lacks @ipv4, @ipv6
>>
>> Gluster doesn't support IPv6 addresses (some work is needed in its RPC code).
>> Do you think it's worth having control over IPv4/IPv6 just to
>> restrict IPv6 addresses?
>
> In that case, it's not a show-stopper.  When Gluster acquires IPv6
> support, we'll need them.  Until then, we can omit them.  If we don't
> omit them, the gluster driver should reject "ipv4": false.
>

So I am dropping this for now; I will send a patch once gluster
fully supports IPv6 addressing.

>>>     No control over IPv4 vs. IPv6 use.  Immediate show-stopper.
>>>
>>>   Can we use InetSocketAddress?
>>
>> sorry, I don't have an answer, since I was unclear in my comments above.
>>
>>>
>>> * GlusterServer vs. SocketAddress
>>>
>>>   GlusterServer lacks case 'fd'.  Do file descriptors make no sense
>>>   here, or is it merely an implementation restriction?
>>
>> Again, Gluster doesn't support.
>
> Yes, the library we use to talk to Gluster doesn't let you pass in a
> file descriptor today.
>
> My question is whether this *could* be supported.  The answer is
> probably "yes".
>
> Fd support is usually desirable for privilege separation.  There, we
> want to run QEMU with the least possible privileges.  Ideally no way to
> open TCP connections.  Instead, the management application does the
> opening, and passes the open fd to QEMU.  Makes sense because it limits
> what a malicious guest can gain by attacking QEMU.
>

I got your point here;

Since we are clear that gluster doesn't support this, at least for now,
I have come to the opinion, from all the points described above, that we
don't need 'SocketAddress', for the same reasons that gluster needs a
tweaked 'InetSocketAddress'; so let's also keep 'fd' off for now.

{ 'struct': 'InetSocketAddress',
  'data': {
    'host': 'str',
    'port': 'str',
    '*to': 'uint16',
    '*ipv4': 'bool',
    '*ipv6': 'bool' } }

The tweaked version of InetSocketAddress, i.e. GlusterInetSocketAddress:
* doesn't require 'ipv4'/'ipv6'
* doesn't need 'to', and
* 'port' should be optional and an integer, as glfs_set_volfile_server() demands

so,

{ 'struct': 'GlusterInetSocketAddress',
  'data': {
    'host': 'str',
    '*port': 'uint16' }}

Hence,

{ 'union': 'SocketAddress',
  'data': {
    'inet': 'InetSocketAddress',
    'unix': 'UnixSocketAddress',
    'fd': 'String' } }

after removing the 'fd' part, which is not supported now, this looks like

{ 'union': 'GlusterSocketAddress',
  'data': {
    'inet': 'GlusterInetSocketAddress',
    'unix': 'UnixSocketAddress' } }

What do you think ?

Thanks,
--
Prasanna

>>>   GlusterServer is a flat union, SocketAddress is a simple union.  The flat
>>>   unions is nicer on the wire:
>>>       { "type": "tcp", "host": "gluster.example.com", ... }
>>>   vs.
>>>       { "type": "tcp", "data": { "host": "gluster.example.com", ... }
>>>
>>>   Perhaps we should use a flat union for new interfaces.
>>>
>>>> +##
>>>> +# @BlockdevOptionsGluster
>>>> +#
>>>> +# Driver specific block device options for Gluster
>>>> +#
>>>> +# @volume:      name of gluster volume where VM image resides
>>>> +#
>>>> +# @path:        absolute path to image file in gluster volume
>>>> +#
>>>> +# @server:      gluster server description
>>>> +#
>>>> +# @debug_level: #optional libgfapi log level (default '4' which is Error)
>>>> +#
>>>> +# Since: 2.7
>>>> +##
>>>> +{ 'struct': 'BlockdevOptionsGluster',
>>>> +  'data': { 'volume': 'str',
>>>> +            'path': 'str',
>>>> +            'server': 'GlusterServer',
>>>> +            '*debug_level': 'int' } }
>>>> +
>>>> +##
>>>>  # @BlockdevOptions
>>>>  #
>>>>  # Options for creating a block device.  Many options are available for all
>>>> @@ -2119,7 +2203,7 @@
>>>>        'file':       'BlockdevOptionsFile',
>>>>        'ftp':        'BlockdevOptionsFile',
>>>>        'ftps':       'BlockdevOptionsFile',
>>>> -# TODO gluster: Wait for structured options
>>>> +      'gluster':    'BlockdevOptionsGluster',
>>>>        'host_cdrom': 'BlockdevOptionsFile',
>>>>        'host_device':'BlockdevOptionsFile',
>>>>        'http':       'BlockdevOptionsFile',
>>
>> Thanks Markus.
>>
>> --
>> Prasanna


* Re: [Qemu-devel] [PATCH v19 5/5] block/gluster: add support for multiple gluster servers
  2016-07-18 14:39       ` Markus Armbruster
@ 2016-07-18 19:02         ` Prasanna Kalever
  0 siblings, 0 replies; 23+ messages in thread
From: Prasanna Kalever @ 2016-07-18 19:02 UTC (permalink / raw)
  To: Markus Armbruster
  Cc: Raghavendra Talur, Jeffrey Cody, Prasanna Kumar Kalever,
	qemu-devel, Vijay Bellur

On Mon, Jul 18, 2016 at 8:09 PM, Markus Armbruster <armbru@redhat.com> wrote:
> Prasanna Kalever <pkalever@redhat.com> writes:
>
>> On Mon, Jul 18, 2016 at 3:47 PM, Markus Armbruster <armbru@redhat.com> wrote:
>>> Prasanna Kumar Kalever <prasanna.kalever@redhat.com> writes:
>>>
>>>> This patch adds a way to specify multiple volfile servers to the gluster
>>>> block backend of QEMU with tcp|rdma transport types and their port numbers.
>>>>
>>>> Problem:
>>>>
>>>> Currently VM Image on gluster volume is specified like this:
>>>>
>>>> file=gluster[+tcp]://host[:port]/testvol/a.img
>>>>
>>>> Say we have three hosts in a trusted pool with replica 3 volume in action.
>>>> When the host mentioned in the command above goes down for some reason,
>>>> the other two hosts are still available. But there's currently no way
>>>> to tell QEMU about them.
>>>>
>>>> Solution:
>>>>
>>>> New way of specifying VM Image on gluster volume with volfile servers:
>>>> (We still support old syntax to maintain backward compatibility)
>>>>
>>>> Basic command line syntax looks like:
>>>>
>>>> Pattern I:
>>>>  -drive driver=gluster,
>>>>         volume=testvol,path=/path/a.raw,[debug=N,]
>>>>         server.0.type=tcp,
>>>>         server.0.host=1.2.3.4,
>>>>        [server.0.port=24007,]
>>>>         server.1.type=unix,
>>>>         server.1.socket=/path/socketfile
>>>>
>>>> Pattern II:
>>>>  'json:{"driver":"qcow2","file":{"driver":"gluster",
>>>>        "volume":"testvol","path":"/path/a.qcow2",["debug":N,]
>>>>        "server":[{hostinfo_1}, ...{hostinfo_N}]}}'
>>>>
>>>>    driver      => 'gluster' (protocol name)
>>>>    volume      => name of gluster volume where our VM image resides
>>>>    path        => absolute path of image in gluster volume
>>>>   [debug]      => libgfapi loglevel [(0 - 9) default 4 -> Error]
>>>>
>>>>   {hostinfo}   => {{type:"tcp",host:"1.2.3.4"[,port=24007]},
>>>>                    {type:"unix",socket:"/path/sockfile"}}
>>>>
>>>>    type        => transport type used to connect to gluster management daemon,
>>>>                   it can be tcp|unix
>>>>    host        => host address (hostname/ipv4/ipv6 addresses/socket path)
>>>>   [port]       => port number on which glusterd is listening. (default 24007)
>>>>    socket      => path to socket file
>>>>
>>>> Examples:
>>>> 1.
>>>>  -drive driver=qcow2,file.driver=gluster,
>>>>         file.volume=testvol,file.path=/path/a.qcow2,file.debug=9,
>>>>         file.server.0.type=tcp,
>>>>         file.server.0.host=1.2.3.4,
>>>>         file.server.0.port=24007,
>>>>         file.server.1.type=tcp,
>>>>         file.server.1.socket=/var/run/glusterd.socket
>>>> 2.
>>>>  'json:{"driver":"qcow2","file":{"driver":"gluster","volume":"testvol",
>>>>          "path":"/path/a.qcow2","debug":9,"server":
>>>>          [{type:"tcp",host:"1.2.3.4",port=24007},
>>>>           {type:"unix",socket:"/var/run/glusterd.socket"}] } }'
>>>>
>>>> This patch gives a mechanism to provide all the server addresses, which are in
>>>> replica set, so in case host1 is down VM can still boot from any of the
>>>> active hosts.
>>>>
>>>> This is equivalent to the backup-volfile-servers option supported by
>>>> mount.glusterfs (FUSE way of mounting gluster volume)
>>>>
>>>> credits: sincere thanks to all the supporters
>>>>
>>>> Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
>>>> ---
>>>>  block/gluster.c      | 347 +++++++++++++++++++++++++++++++++++++++++++++------
>>>>  qapi/block-core.json |   2 +-
>>>>  2 files changed, 307 insertions(+), 42 deletions(-)
>>>>
>>>> diff --git a/block/gluster.c b/block/gluster.c
>>>> index ff1e783..fd2279d 100644
>>>> --- a/block/gluster.c
>>>> +++ b/block/gluster.c
>>>> @@ -12,8 +12,16 @@
>>>>  #include "block/block_int.h"
>>>>  #include "qapi/error.h"
>>>>  #include "qemu/uri.h"
>>>> +#include "qemu/error-report.h"
>>>>
>>>>  #define GLUSTER_OPT_FILENAME        "filename"
>>>> +#define GLUSTER_OPT_VOLUME          "volume"
>>>> +#define GLUSTER_OPT_PATH            "path"
>>>> +#define GLUSTER_OPT_TYPE            "type"
>>>> +#define GLUSTER_OPT_SERVER_PATTERN  "server."
>>>> +#define GLUSTER_OPT_HOST            "host"
>>>> +#define GLUSTER_OPT_PORT            "port"
>>>> +#define GLUSTER_OPT_SOCKET          "socket"
>>>>  #define GLUSTER_OPT_DEBUG           "debug"
>>>>  #define GLUSTER_DEFAULT_PORT        24007
>>>>  #define GLUSTER_DEBUG_DEFAULT       4
>>>> @@ -82,6 +90,77 @@ static QemuOptsList runtime_opts = {
>>>>      },
>>>>  };
>>>>
>>>> +static QemuOptsList runtime_json_opts = {
>>>> +    .name = "gluster_json",
>>>> +    .head = QTAILQ_HEAD_INITIALIZER(runtime_json_opts.head),
>>>> +    .desc = {
>>>> +        {
>>>> +            .name = GLUSTER_OPT_VOLUME,
>>>> +            .type = QEMU_OPT_STRING,
>>>> +            .help = "name of gluster volume where VM image resides",
>>>> +        },
>>>> +        {
>>>> +            .name = GLUSTER_OPT_PATH,
>>>> +            .type = QEMU_OPT_STRING,
>>>> +            .help = "absolute path to image file in gluster volume",
>>>> +        },
>>>> +        {
>>>> +            .name = GLUSTER_OPT_DEBUG,
>>>> +            .type = QEMU_OPT_NUMBER,
>>>> +            .help = "Gluster log level, valid range is 0-9",
>>>> +        },
>>>> +        { /* end of list */ }
>>>> +    },
>>>> +};
>>>> +
>>>> +static QemuOptsList runtime_type_opts = {
>>>> +    .name = "gluster_type",
>>>> +    .head = QTAILQ_HEAD_INITIALIZER(runtime_type_opts.head),
>>>> +    .desc = {
>>>> +        {
>>>> +            .name = GLUSTER_OPT_TYPE,
>>>> +            .type = QEMU_OPT_STRING,
>>>> +            .help = "tcp|unix",
>>>> +        },
>>>> +        { /* end of list */ }
>>>> +    },
>>>> +};
>>>> +
>>>> +static QemuOptsList runtime_unix_opts = {
>>>> +    .name = "gluster_unix",
>>>> +    .head = QTAILQ_HEAD_INITIALIZER(runtime_unix_opts.head),
>>>> +    .desc = {
>>>> +        {
>>>> +            .name = GLUSTER_OPT_SOCKET,
>>>> +            .type = QEMU_OPT_STRING,
>>>> +            .help = "socket file path",
>>>> +        },
>>>> +        { /* end of list */ }
>>>> +    },
>>>> +};
>>>> +
>>>> +static QemuOptsList runtime_tcp_opts = {
>>>> +    .name = "gluster_tcp",
>>>> +    .head = QTAILQ_HEAD_INITIALIZER(runtime_tcp_opts.head),
>>>> +    .desc = {
>>>> +        {
>>>> +            .name = GLUSTER_OPT_TYPE,
>>>> +            .type = QEMU_OPT_STRING,
>>>> +            .help = "tcp|unix",
>>>> +        },
>>>> +        {
>>>> +            .name = GLUSTER_OPT_HOST,
>>>> +            .type = QEMU_OPT_STRING,
>>>> +            .help = "host address (hostname/ipv4/ipv6 addresses)",
>>>> +        },
>>>> +        {
>>>> +            .name = GLUSTER_OPT_PORT,
>>>> +            .type = QEMU_OPT_NUMBER,
>>>> +            .help = "port number on which glusterd is listening (default 24007)",
>>>> +        },
>>>> +        { /* end of list */ }
>>>> +    },
>>>> +};
>>>>
>>>>  static int parse_volume_options(BlockdevOptionsGluster *gconf, char *path)
>>>>  {
>>>> @@ -157,7 +236,8 @@ static int qemu_gluster_parse_uri(BlockdevOptionsGluster *gconf,
>>>>          return -EINVAL;
>>>>      }
>>>>
>>>> -    gconf->server = gsconf = g_new0(GlusterServer, 1);
>>>> +    gconf->server = g_new0(GlusterServerList, 1);
>>>> +    gconf->server->value = gsconf = g_new0(GlusterServer, 1);
>>>>
>>>>      /* transport */
>>>>      if (!uri->scheme || !strcmp(uri->scheme, "gluster")) {
>>>> @@ -211,38 +291,34 @@ out:
>>>>      return ret;
>>>>  }
>>>>
>>>> -static struct glfs *qemu_gluster_init(BlockdevOptionsGluster *gconf,
>>>> -                                      const char *filename, Error **errp)
>>>> +static struct glfs *qemu_gluster_glfs_init(BlockdevOptionsGluster *gconf,
>>>> +                                           Error **errp)
>>>>  {
>>>> -    struct glfs *glfs = NULL;
>>>> +    struct glfs *glfs;
>>>>      int ret;
>>>>      int old_errno;
>>>> -
>>>> -    ret = qemu_gluster_parse_uri(gconf, filename);
>>>> -    if (ret < 0) {
>>>> -        error_setg(errp, "Usage: file=gluster[+transport]://[host[:port]]/"
>>>> -                         "volume/path[?socket=...]");
>>>> -        errno = -ret;
>>>> -        goto out;
>>>> -    }
>>>> +    GlusterServerList *server;
>>>>
>>>>      glfs = glfs_new(gconf->volume);
>>>>      if (!glfs) {
>>>>          goto out;
>>>>      }
>>>>
>>>> -    if (gconf->server->type == GLUSTER_TRANSPORT_UNIX) {
>>>> -        ret = glfs_set_volfile_server(glfs,
>>>> -                                      GlusterTransport_lookup[gconf->server->type],
>>>> -                                      gconf->server->u.q_unix.socket, 0);
>>>> -    } else {
>>>> -        ret = glfs_set_volfile_server(glfs,
>>>> -                                      GlusterTransport_lookup[gconf->server->type],
>>>> -                                      gconf->server->u.tcp.host,
>>>> -                                      gconf->server->u.tcp.port);
>>>> -    }
>>>> -    if (ret < 0) {
>>>> -        goto out;
>>>> +    for (server = gconf->server; server; server = server->next) {
>>>> +        if (server->value->type  == GLUSTER_TRANSPORT_UNIX) {
>>>> +            ret = glfs_set_volfile_server(glfs,
>>>> +                                          GlusterTransport_lookup[server->value->type],
>>>> +                                          server->value->u.q_unix.socket, 0);
>>>> +        } else {
>>>> +            ret = glfs_set_volfile_server(glfs,
>>>> +                                          GlusterTransport_lookup[server->value->type],
>>>> +                                          server->value->u.tcp.host,
>>>> +                                          server->value->u.tcp.port);
>>>
>>> server->value.u.tcp.port is optional.  Using it without checking
>>> server->value.u.tcp.has_port relies on the default value being zero.  We
>>> don't actually document that.  Perhaps we should.
>>
>> I have made sure to fill it in the code, also marked
>> gsconf->u.tcp.has_port = true;
>>
>>>
>>>> +        }
>>>> +
>>>> +        if (ret < 0) {
>>>> +            goto out;
>>>> +        }
>>>>      }
>>>>
>>>>      ret = glfs_set_logging(glfs, "-", gconf->debug_level);
>>>> @@ -252,19 +328,19 @@ static struct glfs *qemu_gluster_init(BlockdevOptionsGluster *gconf,
>>>>
>>>>      ret = glfs_init(glfs);
>>>>      if (ret) {
>>>> -        if (gconf->server->type == GLUSTER_TRANSPORT_UNIX) {
>>>> -            error_setg(errp,
>>>> -                       "Gluster connection for volume: %s, path: %s "
>>>> -                       "failed to connect on socket %s ",
>>>> -                       gconf->volume, gconf->path,
>>>> -                       gconf->server->u.q_unix.socket);
>>>> -        } else {
>>>> -            error_setg(errp,
>>>> -                       "Gluster connection for volume: %s, path: %s "
>>>> -                       "failed to connect  host  %s and port %d ",
>>>> -                       gconf->volume, gconf->path,
>>>> -                       gconf->server->u.tcp.host, gconf->server->u.tcp.port);
>>>> +        error_setg(errp, "Gluster connection for volume: %s, path: %s ",
>>>> +                   gconf->volume, gconf->path);
>>>> +        for (server = gconf->server; server; server = server->next) {
>>>> +            if (server->value->type  == GLUSTER_TRANSPORT_UNIX) {
>>>> +                error_append_hint(errp, "failed to connect on socket %s ",
>>>> +                                  server->value->u.q_unix.socket);
>>>> +            } else {
>>>> +                error_append_hint(errp, "failed to connect host %s and port %d ",
>>>> +                                  server->value->u.tcp.host,
>>>> +                                  server->value->u.tcp.port);
>>>> +            }
>>>>          }
>>>> +        error_append_hint(errp, "Please refer to gluster logs for more info ");
>>>
>>> Your code produces the error message "Gluster connection for volume:
>>> VOLUME, path: PATH ", which makes no sense.
>>>
>>> It also produces a hint that is a concatenation of one or more "failed
>>> to connect on FOO", followed by "Please refer to ..." without any
>>> punctuation, but with a trailing space.
>>>
>>> The error message must make sense on its own, without the hint.
>>>
>>> A fixed error message could look like this:
>>>
>>>     Gluster connection for volume VOLUME, path PATH failed to connect on FOO, on BAR, and on BAZ
>>>
>>> or with a little less effort
>>>
>>>     Gluster connection for volume VOLUME, path PATH failed to connect on FOO, BAR, BAZ
>>>
>>> or simply
>>>
>>>     Can't connect to Gluster volume VOLUME, path PATH at FOO, BAR, BAZ
>>>
>>> You can't build up the error message with error_append_hint().  Using it
>>> to append a hint pointing to Gluster logs is fine.
>>
>> okay.
>>
>>>
>>>>
>>>>          /* glfs_init sometimes doesn't set errno although docs suggest that */
>>>>          if (errno == 0) {
>>>> @@ -284,6 +360,195 @@ out:
>>>>      return NULL;
>>>>  }
>>>>
>>>> +static int qapi_enum_parse(const char *opt)
>>>> +{
>>>> +    int i;
>>>> +
>>>> +    if (!opt) {
>>>> +        return GLUSTER_TRANSPORT__MAX;
>>>> +    }
>>>> +
>>>> +    for (i = 0; i < GLUSTER_TRANSPORT__MAX; i++) {
>>>> +        if (!strcmp(opt, GlusterTransport_lookup[i])) {
>>>> +            return i;
>>>> +        }
>>>> +    }
>>>> +
>>>> +    return i;
>>>> +}
>>>> +
>>>> +/*
>>>> + * Convert the json formatted command line into qapi.
>>>> +*/
>>>> +static int qemu_gluster_parse_json(BlockdevOptionsGluster *gconf,
>>>> +                                  QDict *options, Error **errp)
>>>> +{
>>>> +    QemuOpts *opts;
>>>> +    GlusterServer *gsconf;
>>>> +    GlusterServerList *curr = NULL;
>>>> +    QDict *backing_options = NULL;
>>>> +    Error *local_err = NULL;
>>>> +    char *str = NULL;
>>>> +    const char *ptr;
>>>> +    size_t num_servers;
>>>> +    int i;
>>>> +
>>>> +    /* create opts info from runtime_json_opts list */
>>>> +    opts = qemu_opts_create(&runtime_json_opts, NULL, 0, &error_abort);
>>>> +    qemu_opts_absorb_qdict(opts, options, &local_err);
>>>> +    if (local_err) {
>>>> +        goto out;
>>>> +    }
>>>> +
>>>> +    num_servers = qdict_array_entries(options, GLUSTER_OPT_SERVER_PATTERN);
>>>> +    if (num_servers < 1) {
>>>> +        error_setg(&local_err, "please provide 'server' option with valid "
>>>> +                               "fields in array of hostinfo ");
>>>
>>> This isn't an error message, it's instructions what to do.  Such
>>> instructions can be useful when they're correct, but they can't replace
>>> an error message.  The error message should state what's wrong.  No
>>> less, no more.
>>
>> Let me know how to log the hint from a leaf function, where the Error
>> object is not created yet.
>> I also feel it's more like an error in the usage of the json command
>
> The error we diagnose here is that @options either lacks a 'server' key,
> or its value is invalid.  Since I'm too lazy to come up with an error
> message for that, I grep for this kind of error (value of
> qdict_array_entries() negative), and find one in quorum.c:
>
>     s->num_children = qdict_array_entries(options, "children.");
>     if (s->num_children < 0) {
>         error_setg(&local_err, "Option children is not a valid array");
>         ret = -EINVAL;
>         goto exit;
>     }
>
>>> Moreover, avoid prefixes like "qemu_gluster:".  Usually, the fact that
>>> this is about Gluster is obvious.  When it isn't, providing context is
>>> the caller's job.
>>
>> I think I have removed almost all 'qemu_gluster:' prefixes in v19,
>> respecting your comments on v18
>
> Pasto, sorry.
>
>>>> +        goto out;
>>>> +    }
>>>> +
>>>> +    ptr = qemu_opt_get(opts, GLUSTER_OPT_VOLUME);
>>>> +    if (!ptr) {
>>>> +        error_setg(&local_err, "please provide 'volume' option ");
>>>
>>> Not an error message.
>>
>> Also same here ...
>
> The usual error message for a missing mandatory key 'volume' is
> "Parameter 'volume' is missing".  For historical reasons, that's
> commonly written as
>
>     error_setg(errp, QERR_MISSING_PARAMETER, "volume");
>
> If I didn't know that already, I'd try to find out the same way as for
> the previous one: grep for this kind of error (value of qemu_opt_get()
> null), stick to the most common way to report it.
>
> Feel free to use a message that's more tailored to this particular
> error.  My point here isn't that you should reuse existing error
> messages (that's a complete non-requirement), only that an error message
> should first and foremost tell the user what's wrong.  Telling him what
> he might have to do fix it is secondary.  It might be helpful, but in
> practice it's often misleading, because users often screw up in more
> ways than you anticipated when writing the error message.
>
>>>> +        goto out;
>>>> +    }
>>>> +    gconf->volume = g_strdup(ptr);
>>>> +
>>>> +    ptr = qemu_opt_get(opts, GLUSTER_OPT_PATH);
>>>> +    if (!ptr) {
>>>> +        error_setg(&local_err, "please provide 'path' option ");
>>>
>>> Not an error message.
>>
>> here ...
>>
>>>
>>>> +        goto out;
>>>> +    }
>>>> +    gconf->path = g_strdup(ptr);
>>>> +    qemu_opts_del(opts);
>>>> +
>>>> +    for (i = 0; i < num_servers; i++) {
>>>> +        str = g_strdup_printf(GLUSTER_OPT_SERVER_PATTERN"%d.", i);
>>>> +        qdict_extract_subqdict(options, &backing_options, str);
>>>> +
>>>> +        /* create opts info from runtime_type_opts list */
>>>> +        opts = qemu_opts_create(&runtime_type_opts, NULL, 0, &error_abort);
>>>> +        qemu_opts_absorb_qdict(opts, backing_options, &local_err);
>>>> +        if (local_err) {
>>>> +            goto out;
>>>> +        }
>>>> +
>>>> +        ptr = qemu_opt_get(opts, GLUSTER_OPT_TYPE);
>>>> +        gsconf = g_new0(GlusterServer, 1);
>>>> +        gsconf->type = qapi_enum_parse(ptr);
>>>> +        if (!ptr) {
>>>> +            error_setg(&local_err, "please provide 'type' in hostinfo.%d as "
>>>> +                                   "tcp|unix ", i);
>>>
>>> Not an error message.
>>
>> and here ...
>> How do I say what's really wrong in the command, which could be long
>> (if it provides N servers in the list)?
>
> The usual error message for a key 'type' having a value other than 'tcp'
> or 'unix' would be "Parameter 'type' expects 'tcp' or 'unix'".  For
> historical reasons, that's commonly written as
>
>     error_setg(errp, QERR_INVALID_PARAMETER_VALUE, "type", "tcp or unix");
>
> If I didn't know that already, I'd again grep.
>
> Once again, feel free to improve on this message.
>
>>>> +            goto out;
>>>> +
>>>> +        }
>>>> +        if (gsconf->type == GLUSTER_TRANSPORT__MAX) {
>>>> +            error_setg(&local_err, "'type' in hostinfo.%d can be tcp|unix ", i);
>>>> +            goto out;
>>>> +        }
>>>> +        qemu_opts_del(opts);
>>>> +
>>>> +        if (gsconf->type == GLUSTER_TRANSPORT_TCP) {
>>>> +            /* create opts info from runtime_tcp_opts list */
>>>> +            opts = qemu_opts_create(&runtime_tcp_opts, NULL, 0, &error_abort);
>>>> +            qemu_opts_absorb_qdict(opts, backing_options, &local_err);
>>>> +            if (local_err) {
>>>> +                goto out;
>>>> +            }
>>>> +
>>>> +            ptr = qemu_opt_get(opts, GLUSTER_OPT_HOST);
>>>> +            if (!ptr) {
>>>> +                error_setg(&local_err, "please provide 'host' in hostinfo.%d ",
>>>> +                           i);
>>>> +                goto out;
>>>> +            }
>>>> +            gsconf->u.tcp.host = g_strdup(ptr);
>>>> +            ptr = qemu_opt_get(opts, GLUSTER_OPT_PORT);
>>>> +            gsconf->u.tcp.port = qemu_opt_get_number(opts, GLUSTER_OPT_PORT,
>>>> +                                               GLUSTER_DEFAULT_PORT);
>>>> +            gsconf->u.tcp.has_port = true;
>>>> +            qemu_opts_del(opts);
>>>> +        } else {
>>>> +            /* create opts info from runtime_unix_opts list */
>>>> +            opts = qemu_opts_create(&runtime_unix_opts, NULL, 0, &error_abort);
>>>> +            qemu_opts_absorb_qdict(opts, backing_options, &local_err);
>>>> +            if (local_err) {
>>>> +                goto out;
>>>> +            }
>>>> +
>>>> +            ptr = qemu_opt_get(opts, GLUSTER_OPT_SOCKET);
>>>> +            if (!ptr) {
>>>> +                error_setg(&local_err, "please provide 'socket' in hostinfo.%d ",
>>>> +                           i);
>>>> +                goto out;
>>>> +            }
>>>> +            gsconf->u.q_unix.socket = g_strdup(ptr);
>>>> +            qemu_opts_del(opts);
>>>> +        }
>>>> +
>>>> +        if (gconf->server == NULL) {
>>>> +            gconf->server = g_new0(GlusterServerList, 1);
>>>> +            gconf->server->value = gsconf;
>>>> +            curr = gconf->server;
>>>> +        } else {
>>>> +            curr->next = g_new0(GlusterServerList, 1);
>>>> +            curr->next->value = gsconf;
>>>> +            curr = curr->next;
>>>> +        }
>>>> +
>>>> +        qdict_del(backing_options, str);
>>>> +        g_free(str);
>>>> +        str = NULL;
>>>> +    }
>>>> +
>>>> +    return 0;
>>>> +
>>>> +out:
>>>> +    if (local_err) {
>>>> +        error_propagate(errp, local_err);
>>>> +    }
>>>
>>> error_propagate() does the right thing when its second argument is
>>> null.  Please drop the conditional.
>>
>> Sure
>>
>>>
>>>> +    qemu_opts_del(opts);
>>>> +    if (str) {
>>>> +        qdict_del(backing_options, str);
>>>> +        g_free(str);
>>>> +    }
>>>> +    errno = EINVAL;
>>>> +    return -errno;
>>>> +}
>>>> +
>>>> +static struct glfs *qemu_gluster_init(BlockdevOptionsGluster *gconf,
>>>> +                                      const char *filename,
>>>> +                                      QDict *options, Error **errp)
>>>> +{
>>>> +    int ret;
>>>> +    if (filename) {
>>>> +        ret = qemu_gluster_parse_uri(gconf, filename);
>>>> +        if (ret < 0) {
>>>> +            error_setg(errp, "Usage: file=gluster[+transport]://[host[:port]]/"
>>>> +                             "volume/path[?socket=...]");
>>>
>>> Not an error message.
>>
>> Can you suggest the change required here for displaying hints from a
>> leaf function? I am worried about missing your intention here, since
>> most of your v18 comments are here.
>>
>> IIRC, the leaf function which wants to display a hint/msg/error
>> should have an Error object, which is created by error_setg(), and
>> error_append_hint() appends to it.
>
> The best you can do here is a generic error message like "invalid URI".
> To be more specific, you have to create the error in
> qemu_gluster_parse_uri(), because only there you know what exactly is
> wrong.  Requires giving it an Error **errp parameter.  I'm *not* asking
> you for that.  It's a separate improvement.
>
> "invalid URI" isn't as horrible as you might think, because the actual
> error message should come out like this:
>
>     qemu-system-x86_64: -drive file=gluster+tcp://junk: invalid URI
>
> which I find clearer than
>
>     qemu-system-x86_64: -drive file=gluster+tcp://junk: Usage: file=gluster[+transport]://[server[:port]]/volname/image[?socket=...]
>
> I reiterate my plea to *test* error messages.
>
> We generally don't print usage information hints after errors.  If we
> did, then the code would look like this:
>
>             error_setg(errp, "invalid URI");
>             error_append_hint(errp, "Usage: file=gluster[+transport]:"
>                               "//[host[:port]]/volume/path[?socket=...]\n");
>
> Should look like this:
>
>     qemu-system-x86_64: -drive file=gluster+tcp://junk: invalid URI
>     Usage: file=gluster[+transport]://[host[:port]]/volume/path[?socket=...]
>

Makes sense to me; I shall split them into a generic error message
(first, via error_setg()) plus the 'what went wrong' details (next, as
a hint via error_append_hint(), if required).

> By the way, not sure what [?socket...] means.
>

[?socket] is the URI query argument: the path of the Unix domain socket
on which glusterd is listening for the management connection.

example:
qemu-system-x86_64
gluster+unix:///volume/fedora23.qcow2?socket=/var/run/glusterd.socket

If the volfile resides on the local machine, it is fetched from there
directly, without involving the TCP loopback;

--
Prasanna

> Sorry for disappointing you Markus.
>
> We'll get there :)

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [Qemu-devel] [PATCH v19 4/5] block/gluster: using new qapi schema
  2016-07-18 18:28         ` Prasanna Kalever
@ 2016-07-19 11:12           ` Markus Armbruster
  2016-07-19 12:57             ` Eric Blake
  0 siblings, 1 reply; 23+ messages in thread
From: Markus Armbruster @ 2016-07-19 11:12 UTC (permalink / raw)
  To: Prasanna Kalever
  Cc: Raghavendra Talur, Jeffrey Cody, Prasanna Kumar Kalever,
	qemu-devel, Vijay Bellur, Eric Blake

Cc'ing Eric, because I'd like his advice on a couple of points.

Prasanna Kalever <pkalever@redhat.com> writes:

> On Mon, Jul 18, 2016 at 6:41 PM, Markus Armbruster <armbru@redhat.com> wrote:
>> Prasanna Kalever <pkalever@redhat.com> writes:
>>
>>> On Mon, Jul 18, 2016 at 2:59 PM, Markus Armbruster <armbru@redhat.com> wrote:
>>>> Prasanna Kumar Kalever <prasanna.kalever@redhat.com> writes:
>>>>
>>>>> this patch adds 'GlusterServer' related schema in qapi/block-core.json
>>>>>
>>>>> Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
>>>>> ---
>>>>>  block/gluster.c      | 111 +++++++++++++++++++++++++++++----------------------
>>>>>  qapi/block-core.json |  94 ++++++++++++++++++++++++++++++++++++++++---
>>>>>  2 files changed, 153 insertions(+), 52 deletions(-)
>>>> [Skipping ahead to QAPI schema...]
>>>>> diff --git a/qapi/block-core.json b/qapi/block-core.json
>>>>> index a7fdb28..d7b5c76 100644
>>>>> --- a/qapi/block-core.json
>>>>> +++ b/qapi/block-core.json
>>>>> @@ -1658,13 +1658,14 @@
>>>>>  # @host_device, @host_cdrom: Since 2.1
>>>>>  #
>>>>>  # Since: 2.0
>>>>> +# @gluster: Since 2.7
>>>>>  ##
>>>>>  { 'enum': 'BlockdevDriver',
>>>>>    'data': [ 'archipelago', 'blkdebug', 'blkverify', 'bochs', 'cloop',
>>>>> -            'dmg', 'file', 'ftp', 'ftps', 'host_cdrom', 'host_device',
>>>>> -            'http', 'https', 'luks', 'null-aio', 'null-co', 'parallels',
>>>>> -            'qcow', 'qcow2', 'qed', 'quorum', 'raw', 'tftp', 'vdi', 'vhdx',
>>>>> -            'vmdk', 'vpc', 'vvfat' ] }
>>>>> +            'dmg', 'file', 'ftp', 'ftps', 'gluster', 'host_cdrom',
>>>>> +            'host_device', 'http', 'https', 'luks', 'null-aio', 'null-co',
>>>>> +            'parallels', 'qcow', 'qcow2', 'qed', 'quorum', 'raw', 'tftp',
>>>>> +            'vdi', 'vhdx', 'vmdk', 'vpc', 'vvfat' ] }
>>>>>
>>>>>  ##
>>>>>  # @BlockdevOptionsFile
>>>>> @@ -2057,6 +2058,89 @@
>>>>>              '*read-pattern': 'QuorumReadPattern' } }
>>>>>
>>>>>  ##
>>>>> +# @GlusterTransport
>>>>> +#
>>>>> +# An enumeration of Gluster transport type
>>>>> +#
>>>>> +# @tcp:   TCP   - Transmission Control Protocol
>>>>> +#
>>>>> +# @unix:  UNIX  - Unix domain socket
>>>>> +#
>>>>> +# Since: 2.7
>>>>> +##
>>>>> +{ 'enum': 'GlusterTransport',
>>>>> +  'data': [ 'unix', 'tcp' ] }
>>>>> +
>>>>> +##
>>>>> +# @GlusterUnixSocketAddress
>>>>> +#
>>>>> +# Captures a socket address in the local ("Unix socket") namespace.
>>>>> +#
>>>>> +# @socket:   absolute path to socket file
>>>>> +#
>>>>> +# Since 2.7
>>>>> +##
>>>>> +{ 'struct': 'GlusterUnixSocketAddress',
>>>>> +  'data': { 'socket': 'str' } }
>>>>
>>>> Compare:
>>>>
>>>>    ##
>>>>    # @UnixSocketAddress
>>>>    #
>>>>    # Captures a socket address in the local ("Unix socket") namespace.
>>>>    #
>>>>    # @path: filesystem path to use
>>>>    #
>>>>    # Since 1.3
>>>>    ##
>>>>    { 'struct': 'UnixSocketAddress',
>>>>      'data': {
>>>>        'path': 'str' } }
>>>>
>>>>> +
>>>>> +##
>>>>> +# @GlusterInetSocketAddress
>>>>> +#
>>>>> +# Captures a socket address or address range in the Internet namespace.
>>>>> +#
>>>>> +# @host:  host part of the address
>>>>> +#
>>>>> +# @port:  #optional port part of the address, or lowest port if @to is present
>>>>
>>>> There is no @to.
>>>>
>>>> What's the default port?
>>>
>>> #define GLUSTER_DEFAULT_PORT        24007
>>
>> I know, but the poor reader of the docs may not, so the docs better
>> spell it out :)
>
> :) Done
>
>>
>>>>> +#
>>>>> +# Since 2.7
>>>>> +##
>>>>> +{ 'struct': 'GlusterInetSocketAddress',
>>>>> +  'data': { 'host': 'str',
>>>>> +            '*port': 'uint16' } }
>>>>
>>>> Compare:
>>>>
>>>>    ##
>>>>    # @InetSocketAddress
>>>>    #
>>>>    # Captures a socket address or address range in the Internet namespace.
>>>>    #
>>>>    # @host: host part of the address
>>>>    #
>>>>    # @port: port part of the address, or lowest port if @to is present
>>>>    #
>>>>    # @to: highest port to try
>>>>    #
>>>>    # @ipv4: whether to accept IPv4 addresses, default try both IPv4 and IPv6
>>>>    #        #optional
>>>>    #
>>>>    # @ipv6: whether to accept IPv6 addresses, default try both IPv4 and IPv6
>>>>    #        #optional
>>>>    #
>>>>    # Since 1.3
>>>>    ##
>>>>    { 'struct': 'InetSocketAddress',
>>>>      'data': {
>>>>        'host': 'str',
>>>>        'port': 'str',
>>>>        '*to': 'uint16',
>>>>        '*ipv4': 'bool',
>>>>        '*ipv6': 'bool' } }
>>>>
>>>>> +
>>>>> +##
>>>>> +# @GlusterServer
>>>>> +#
>>>>> +# Captures the address of a socket
>>>>> +#
>>>>> +# Details for connecting to a gluster server
>>>>> +#
>>>>> +# @type:       Transport type used for gluster connection
>>>>> +#
>>>>> +# @unix:       socket file
>>>>> +#
>>>>> +# @tcp:        host address and port number
>>>>> +#
>>>>> +# Since: 2.7
>>>>> +##
>>>>> +{ 'union': 'GlusterServer',
>>>>> +  'base': { 'type': 'GlusterTransport' },
>>>>> +  'discriminator': 'type',
>>>>> +  'data': { 'unix': 'GlusterUnixSocketAddress',
>>>>> +            'tcp': 'GlusterInetSocketAddress' } }
>>>>> +
>>>>
>>>> Compare:
>>>>
>>>>    ##
>>>>    # @SocketAddress
>>>>    #
>>>>    # Captures the address of a socket, which could also be a named file descriptor
>>>>    #
>>>>    # Since 1.3
>>>>    ##
>>>>    { 'union': 'SocketAddress',
>>>>      'data': {
>>>>        'inet': 'InetSocketAddress',
>>>>        'unix': 'UnixSocketAddress',
>>>>        'fd': 'String' } }
>>>>
>>>> You cleaned up the confusing abuse of @host as Unix domain socket file
>>>> name.  Good.
>>>>
>>>> You're still defining your own QAPI types instead of using the existing
>>>> ones.  To see whether that's a good idea, let's compare your definitions
>>>> to the existing ones:
>>
>> I've since been gently referred to this note in the cover letter:
>>
>>     patch 4/5 (i.e. 3/4 in v18) now uses union discriminator, I have
>>     made a choice to use gluster with custom schema since
>>     @UnixSocketAddress uses 'path' as key, which may be confusing with
>>     gluster,
>>
>> Can you briefly explain why 'path' may be confusing?
>>
>
> As you would have noticed below in my previous reply about my ACK
>
>>     and in @InetSocketAddress port was str again I have made a choice to
>>     keep it uint16 which really make sense.
>>
>> A port can be given in symbolic form (service name) and in numeric form
>> (port number), just like a host.  For instance, TCP service name "smtp"
>> means port number 25.  See also services(5).  Naturally, a symbolic name
>> only exists for sufficiently well-known ports.
>>
>> Interfaces should accept both service name and port.  InetSocketAddress
>> does, in the same way as getaddrinfo(): it takes a string, which may
>> either be a service name or a port number.  Perhaps it should take an
>> alternate of int16 and string instead, but that's a separate question.
>
> This really improved my understanding, thanks Markus
> Having agreed with that, I need to point out that the underlying API,
> glfs_set_volfile_server(), accepts only an int;
>
> look at the scaffolding here
>
> int
> pub_glfs_set_volfile_server (struct glfs *fs, const char *transport,
>                              const char *host, int port)
>
> So, I hope you stand with me, in making port as int;

If we consider just interfacing with (the current version of)
glusterfs-devel, int is the natural type of port.

However, we already have a related abstraction: InetSocketAddress.  If
it fits, we should probably reuse it.  If we decide not to reuse it now,
we may still want to minimize differences to keep the door open for
future reuse.

InetSocketAddress's port is str.  If we make GlusterInetSocketAddress's
port an integer, then a future unification with InetSocketAddress will
have to make port an alternate.  Not impossible, but why tie our hands
that way now?  But let's not get bogged down in this detail now.  We
first have to decide whether to reuse InetSocketAddress now (more on
that below).  If yes, the question is moot.
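For concreteness, getaddrinfo() treats the two spellings of a port
uniformly; a standalone sketch (the helper name service_to_port is made
up, not an existing QEMU function):

```c
#include <assert.h>
#include <netdb.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Resolve a service string with getaddrinfo() and return the port in
 * host byte order, or -1 on failure.  The point: the service argument
 * is a string, and a numeric port is just one valid spelling of it;
 * a name like "smtp" would be looked up in services(5) the same way. */
static int service_to_port(const char *service)
{
    struct addrinfo hints, *res = NULL;

    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_INET;
    hints.ai_socktype = SOCK_STREAM;
    hints.ai_flags = AI_PASSIVE;      /* NULL host -> wildcard address */

    if (getaddrinfo(NULL, service, &hints, &res) != 0) {
        return -1;
    }
    int port = ntohs(((struct sockaddr_in *)res->ai_addr)->sin_port);
    freeaddrinfo(res);
    return port;
}

int main(void)
{
    assert(service_to_port("24007") == 24007);   /* numeric spelling */
    printf("ok\n");
    return 0;
}
```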

>>     Hmmm.. As Markus suggested in v18 qemu_gluster_parseuri() is
>>     *parse_uri() same with *parse_json() (in 5/5)
>>
>> Not sure I got that.  Do you mean "I renamed qemu_gluster_parseuri() to
>> qemu_gluster_parse_uri() for consistency with
>> qemu_gluster_parse_json()"?
>
> In a comment you did mentioned "parsejson isn't a word.  "parse_json"
> would be two :)"
>
> So to maintain consistency for parseuri with parsejson, I made the changes.

Makes sense.

> If I have taken them too far, please excuse me for this :)

Nah, I just didn't understand your note, and wanted to make sure I
didn't miss anything.

>>>> * GlusterUnixSocketAddress vs. UnixSocketAddress
>>>>
>>>>   Identical but for the member name.  Why can't we use
>>>>   UnixSocketAddress?
>>>
>>> May be you are right, it may not worth just for a member name.
>>
>> Can't say, yet; that's why I ask you to explain why it may be confusing.

Okay, looks like GlusterUnixSocketAddress is not really necessary.

>>>> * GlusterInetSocketAddress vs. InetSocketAddress
>>>>
>>>>   Changes in GlusterInetSocketAddress over InetSocketAddress:
>>>>
>>>>   - @port is optional
>>>>
>>>>     Convenience feature.  We can discuss making it optional in
>>>>     InetSocketAddress, too.
>>>
>>> sure.

Of course, it's too late in the development cycle for making it optional
*now*.

If we reuse InetSocketAddress now, as is, then the QMP client has to
specify the Gluster port, even though it's usually 24007.  For command
line and HMP, we can still make it optional.  I think that's tolerable.
We can investigate making it optional later.

>>>>   - @port has type 'uint16' instead of 'str'
>>>>
>>>>     No support for service names.  Why is that a good idea?
>>>
>>> I honestly do not understand tying service names to ports.
>>> As far as I understand, at least from the gluster use-case, it is just
>>> an integer ranging from 0 - 65535 (mostly 24007)
>>> I am happy to learn about this
>>
>> Hope I was able to explain this above.
>
> yes!
>
>>
>>>>   - Lacks @to
>>>>
>>>>     No support for trying a range of ports.  I guess that simply makes
>>>>     no sense for a Gluster client.  I guess makes no sense in some other
>>>>     uses of InetSocketAddress, too.  Eric has proposed to split off
>>>>     InetSocketAddressRange off InetSocketAddress.
>>> I still don't understand the essence of this: why do we need to loop through the ports?
>>
>> Best explained by looking at a use of this feature.  With -display vnc,
>> we start a VNC server.  By default, it listens on port 5900.  If this
>> port isn't available, it fails like this:
>>
>>     qemu-system-x86_64:-display vnc=:0: Failed to start VNC server: Failed to bind socket: Address already in use
>>
>> If you don't care about the port, you can use something like "-display
>> vnc=:0,to=9" to bind the first free port in the range of 5900 to 5909.
>>
>
> In the gluster case there is no useful port range to walk: the port is
> almost always 24007, or some user choice (say 5007), which is
> unpredictable, so looping over a range would not be worthwhile;
>
> So, IMO port ranges do not apply to gluster port pickup;

Port ranges make sense for some users of InetSocketAddress, but not for
others.  Gluster is of the latter kind.

So far, the latter kind simply uses InetSocketAddress, and either
rejects @to or ignores it.  The latter would be a bug.

We could do the same for Gluster: if has_to, error out.

Alternatively, we could split InetSocketAddressRange (with @to) off
InetSocketAddress (delete @to).  Makes range support visible in
introspection.

If we use InetSocketAddress as is now, and have gluster.c reject @to, we
can still split off InetSocketAddressRange later.  The external
interface doesn't care whether @to is rejected as unknown parameter by
QAPI or as unsupported parameter by gluster.c.

Remember, the purpose of reviewing differences between
GlusterInetSocketAddress and InetSocketAddress is to figure out
whether we can use InetSocketAddress for Gluster, and if we can, whether
we should.  The result here is that @to doesn't preclude reuse of
InetSocketAddress.  We've already seen that @port doesn't preclude it,
either.

>>>>   - Lacks @ipv4, @ipv6
>>>
>>> Gluster doesn't support IPv6 addresses (some work is needed in its rpc code)
>>> Do you think it's worth having control over ipv4/ipv6 just to
>>> reject IPv6 addresses?
>>
>> In that case, it's not a show-stopper.  When Gluster acquires IPv6
>> support, we'll need them.  Until then, we can omit them.  If we don't
>> omit them, the gluster driver should reject "ipv4": false.
>>
>
> So I am dropping this for now; will send a patch soon after gluster
> fully supports ipv6 addressing

Like @to, @ipv4 and @ipv6 don't preclude reuse of InetSocketAddress: C
code can simply reject the ones it can't obey.

Now let me summarize.  We could do without GlusterInetSocketAddress,
because:

* InetSocketAddress's non-optional @port is a minor inconvenience which
  we can address in the future without breaking compatibility.

* InetSocketAddress's string @port is a minor inconvenience for the C code
  using it.  Keeping the external interface consistent (same type for
  TCP ports everywhere) is worth some inconvenience in the C code.

* InetSocketAddress's @to complicates the external interface, but the
  complication exists elsewhere already.  gluster.c can reject @to.  We
  can clean up the external interface in the future without breaking
  compatibility.

* InetSocketAddress's @ipv4, @ipv6 cannot be fully implemented by gluster.c,
  yet.  gluster.c can simply reject the settings it doesn't implement,
  until they get implemented.

Reasons for having a separate GlusterInetSocketAddress:

* Slightly less code in gluster.c.  I reject that reason.  Keeping the
  interface lean is worth some extra code.

  Note that extra schema definitions to avoid code complications may be
  fine as long as they don't affect external interfaces.

* Makes non-support of port ranges and IPv6 visible in introspection.
  That's a valid argument, but it's an argument for having
  Inet4SockAddress, not for GlusterInetSocketAddress.

  Eric, do you think there's a need for introspecting IPv6 support?

Any other reasons?

>>>>     No control over IPv4 vs. IPv6 use.  Immediate show-stopper.
>>>>
>>>>   Can we use InetSocketAddress?
>>>
>>> sorry, I don't have an answer, since I was unclear in my comments above.
>>>
>>>>
>>>> * GlusterServer vs. SocketAddress
>>>>
>>>>   GlusterServer lacks case 'fd'.  Do file descriptors make no sense
>>>>   here, or is it merely an implementation restriction?
>>>
>>> Again, Gluster doesn't support.
>>
>> Yes, the library we use to talk to Gluster doesn't let you pass in a
>> file descriptor today.
>>
>> My question is whether this *could* be supported.  The answer is
>> probably "yes".
>>
>> Fd support is usually desirable for privilege separation.  There, we
>> want to run QEMU with the least possible privileges.  Ideally no way to
>> open TCP connections.  Instead, the management application does the
>> opening, and passes the open fd to QEMU.  Makes sense because it limits
>> what a malicious guest can gain by attacking QEMU.
>>
>
> I got your point here;
>
> since we are clear that gluster doesn't support this, at least for now,
> I have come to the opinion, from all the points described above, that we
> don't need 'SocketAddress', for the same reasons that gluster needs a
> tweaked 'InetSocketAddress'; so let's also keep fd off for now.

Similar design question as for @ipv4, @ipv6: reuse of SocketAddress is
possible, but the C code has to reject options it doesn't implement,
i.e. "type": "fd".  Non-support of fd isn't visible in introspection
then.

Eric, do you think there's a need for introspecting fd support?

Additional design question: do we want to move away from SocketAddress's
pointless nesting on the wire for new interfaces?  I.e. move from
something like

    { "type": "tcp", "data": { "host": "gluster.example.com", ... }

to

    { "type": "tcp", "host": "gluster.example.com", ... }

This isn't a Gluster question, it's a general question.  Eric, what do
you think?

> { 'struct': 'InetSocketAddress',
>   'data': {
>     'host': 'str',
>     'port': 'str',
>     '*to': 'uint16',
>     '*ipv4': 'bool',
>     '*ipv6': 'bool' } }
>
> a tweaked version of InetSocketAddress, i.e. GlusterInetSocketAddress:
> * doesn't require 'ipv6'
> * doesn't need 'to', and
> * 'port' should be optional and an integer, as glfs_set_volfile_server() demands
>
> so,
>
> { 'struct': 'GlusterInetSocketAddress',
>   'data': {
>     'host': 'str',
>     '*port': 'str' }}
>
> Hence,
>
> { 'union': 'SocketAddress',
>   'data': {
>     'inet': 'InetSocketAddress',
>     'unix': 'UnixSocketAddress',
>     'fd': 'String' } }
>
> after removing 'fd' part which is not supported now, this look like
>
> { 'union': 'GlusterSocketAddress',
>   'data': {
>     'inet': 'GlusterInetSocketAddress',
>     'unix': 'UnixSocketAddress' } }
>
> What do you think ?

This is fine if

* we decide we want a new GlusterInetSocketAddress instead of reusing
  InetSocketAddress, and

* we decide we don't want to avoid nesting on the wire.

But we haven't settled these questions.  I'd like to settle them today,
taking Eric's advice into account.


* Re: [Qemu-devel] [PATCH v19 4/5] block/gluster: using new qapi schema
  2016-07-19 11:12           ` Markus Armbruster
@ 2016-07-19 12:57             ` Eric Blake
  2016-07-19 14:29               ` Markus Armbruster
  0 siblings, 1 reply; 23+ messages in thread
From: Eric Blake @ 2016-07-19 12:57 UTC (permalink / raw)
  To: Markus Armbruster, Prasanna Kalever
  Cc: Raghavendra Talur, Jeffrey Cody, Prasanna Kumar Kalever,
	qemu-devel, Vijay Bellur


On 07/19/2016 05:12 AM, Markus Armbruster wrote:
> Cc'ing Eric, because I'd like his advice on a couple of points.

I've been following the thread, but with this specific invitation, I'll
definitely chime in.


>> int
>> pub_glfs_set_volfile_server (struct glfs *fs, const char *transport,
>>                              const char *host, int port)
>>
>> So, I hope you stand with me, in making port as int;
> 
> If we consider just interfacing with (the current version of)
> glusterfs-devel, int is the natural type of port.
> 
> However, we already have a related abstraction: InetSocketAddress.  If
> it fits, we should probably reuse it.  If we decide not to reuse it now,
> we may still want to minimize differences to keep the door open for
> future reuse.

Furthermore, just because gluster code doesn't accept a string does not
prevent qemu from accepting a string, looking it up in the database of
named ports, and passing the corresponding number on to gluster.  I am
very strongly in favor of supporting named ports in the qemu interface,
even if it means more work under the hood for qemu to pass on to
gluster.  Whether we support that by an alternate between int and
string, or keep symmetry with existing interfaces being string only, is
where it becomes a judgment call (and going string-only for now can
always be improved later for convenience).

> 
> InetSocketAddress's port is str.  If we make GlusterInetSocketAddress's
> port an integer, then a future unification with InetSocketAddress will
> have to make port an alternate.  Not impossible, but why tie our hands
> that way now?  But let's not get bogged down in this detail now.  We
> first have to decide whether to reuse InetSocketAddress now (more on
> that below).  If yes, the question is moot.
> 

>>>>>   Changes in GlusterInetSocketAddress over InetSocketAddress:
>>>>>
>>>>>   - @port is optional
>>>>>
>>>>>     Convenience feature.  We can discuss making it optional in
>>>>>     InetSocketAddress, too.
>>>>
>>>> sure.
> 
> Of course, it's too late in the development cycle for making it optional
> *now*.
> 
> If we reuse InetSocketAddress now, as is, then the QMP client has to
> specify the Gluster port, even though it's usually 24007.  For command
> line and HMP, we can still make it optional.  I think that's tolerable.
> We can investigate making it optional later.

Agree - we're too late to make it optional in 2.7, but there is nothing
preventing us from making it optional in the future, and it's neither
too much burden on libvirt to supply a sane default if it is mandatory,
nor too hard for the command line and HMP to make it optional on top of
a mandatory QMP, for the sake of human interaction.


> 
> Port ranges make sense for some users of InetSocketAddress, but not for
> others.  Gluster is of the latter kind.
> 
> So far, the latter kind simply uses InetSocketAddress, and either
> rejects @to or ignores it.  The latter would be a bug.
> 
> We could do the same for Gluster: if has_to, error out.
> 
> Alternatively, we could split InetSocketAddressRange (with @to) off
> InetSocketAddress (delete @to).  Makes range support visible in
> introspection.

Ultimately, I think we should split InetSocketAddress. But we're too
late to do it in 2.7.

> 
> If we use InetSocketAddress as is now, and have gluster.c reject @to, we
> can still split off InetSocketAddressRange later.  The external
> interface doesn't care whether @to is rejected as unknown parameter by
> QAPI or as unsupported parameter by gluster.c.
> 

Agree - at this point, manually rejecting has_to will let us reuse the
type, and would then be part of the cleanup work when 2.8 splits
InetSocketAddress.


> Now let me summarize.  We could do without GlusterInetSocketAddress,
> because:
> 
> * SocketAddress's non-optional @port is a minor inconvenience which
>   we can address in the future without breaking compatibility.
> 
> * SocketAddress's string @port is a minor inconvenience for the C code
>   using it.  Keeping the external interface consistent (same type for
>   TCP ports everywhere) is worth some inconvenience in the C code.
> 
> * SocketAddress's @to complicates the external interface, but the
>   complication exists elsewhere already.  gluster.c can reject @to.  We
>   can clean up the external interface in the future without breaking
>   compatibility.

In fact, introspection will be nicer when we split InetSocketAddress to
no longer include 'to', as you will then be able to precisely determine
which uses support ranges.  For now, management apps just have to be
aware that ranges may be rejected.

> 
> * SocketAddress's @ipv4, @ipv6 cannot be fully implemented by gluster.c,
>   yet.  gluster.c can simply reject the settings it doesn't implement,
>   until they get implemented.
> 
> Reasons for having a separate GlusterInetSocketAddress:
> 
> * Slightly less code in gluster.c.  I reject that reason.  Keeping the
>   interface lean is worth some extra code.
> 
>   Note that extra schema definitions to avoid code complications may be
>   fine as long as they don't affect external interfaces.
> 
> * Makes non-support of port ranges and IPv6 visible in introspection.
>   That's a valid argument, but it's an argument for having
>   Inet4SockAddress, not for GlusterInetSocketAddress.
> 
>   Eric, do you think there's a need for introspecting IPv6 support?

I'm borderline on that one.  If the error message given when attempting
an IPv6 address is nice enough, then libvirt can just blindly try an
IPv6 address and inform the user that qemu/gluster is too old to support
it.  If the error message is lousy, then being able to introspect
whether IPv6 support is present would let libvirt provide a saner error
message.  But at the moment, I'm not convinced that introspection alone
is reason to create a struct without IPv6 now, where IPv6 is added later
once gluster supports it.

Furthermore, a system administrator that NEEDS to use IPv6 is very
likely to upgrade their system to support things before trying to use
IPv6.  So in this case, I'm leaning towards just reusing the type and
rejecting IPv6 in the gluster code until it works.


> 
> Similar design question as for @ipv4, @ipv6: reuse of SocketAddress is
> possible, but the C code has to reject options it doesn't implement,
> i.e. "type": "fd".  Non-support of fd isn't visible in introspection
> then.
> 
> Eric, do you think there's a need for introspecting fd support?

There's no easy way to declare one QAPI union that is a superset of
another (all the same branches as before, plus these additional
branches); it requires duplication of all the common branches.
Arguably, we could extend QAPI if we find ourselves doing that a lot, to
make it supported with less copy and paste; but manual duplication right
now doesn't hurt.
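
Concretely, the duplication looks something like this (union and member
names hypothetical, in the same single-quoted QAPI schema notation used
elsewhere in this thread):

```
{ 'union': 'SocketAddressNoFd',
  'data': { 'inet': 'InetSocketAddress',
            'unix': 'UnixSocketAddress' } }

{ 'union': 'SocketAddressWithFd',
  'data': { 'inet': 'InetSocketAddress',
            'unix': 'UnixSocketAddress',
            'fd': 'str' } }
```

Every branch of the smaller union has to be repeated verbatim in the
larger one; only 'fd' is actually new.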

Here, I think introspection is probably more useful, particularly since
there are security aspects in fd passing (where libvirt can pre-open the
connection) that are nicer to know up front if they will work, and where
it is less obvious to a system administrator that they need/want to use
fd passing under the hood.  IPv6 is very easy to say: "will my
deployment need IPv6 addresses - if so, upgrade gluster to a version
that supports IPv6".  But fd passing is not so straightforward.  So I
think being able to introspect this one is useful - but I also think
that having a union that shares the same underlying types for the common
branches is still slightly nicer than having different types for the
same-named branches (that is, since both the reduced union without 'fd'
support and the full union will have 'unix' and 'inet' branches, both
the 'unix' and 'inet' branch should use the same underlying type rather
than being subtly different).

> 
> Additional design question: do we want to move away from SocketAddress's
> pointless nesting on the wire for new interfaces?  I.e. move from
> something like
> 
>     { "type": "tcp", "data": { "host": "gluster.example.com", ... } }
> 
> to
> 
>     { "type": "tcp", "host": "gluster.example.com", ... }
> 
> This isn't a Gluster question, it's a general question.  Eric, what do
> you think?
> 

Nested vs. flat is somewhat cosmetic - machine-generated code (like
libvirt's interactions) can cope with either.  On the other hand, adding
a flat type now may make OTHER blockdev-add drivers easier to QAPIfy,
such as nbd and sheepdog, and I kind of like the simplicity afforded by
a flat type.

>> { 'union': 'SocketAddress',
>>   'data': {
>>     'inet': 'InetSocketAddress',
>>     'unix': 'UnixSocketAddress',
>>     'fd': 'String' } }

I also think that if we add a flat type, it would be nicer to use
'fd':'str' instead of 'fd':'String' (which is itself another pointless
nesting due to backwards compatibility), at least for the version of the
flat type where 'fd' is supported.

>>
>> after removing the 'fd' part, which is not supported now, this looks like
>>
>> { 'union': 'GlusterSocketAddress',
>>   'data': {
>>     'inet': 'GlusterInetSocketAddress',
>>     'unix': 'UnixSocketAddress' } }
>>
>> What do you think?
> 
> This is fine if
> 
> * we decide we want a new GlusterInetSocketAddress instead of reusing
>   InetSocketAddress, and
> 
> * we decide we don't want to avoid nesting on the wire.
> 
> But we haven't settled these questions.  I'd like to settle them today,
> taking Eric's advice into account.

I think sharing InetSocketAddress is reasonable, but having separate
GlusterSocketAddress (or more generically, FlatSocketAddress that can be
shared by gluster, sheepdog, and nbd), seems okay.  I'm also not fussed
about naming - naming it 'GlusterSocketAddress' now and renaming it
'FlatSocketAddress' later when it gets more use does not impact
introspection.

-- 
Eric Blake   eblake redhat com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org




* Re: [Qemu-devel] [PATCH v19 4/5] block/gluster: using new qapi schema
  2016-07-19 12:57             ` Eric Blake
@ 2016-07-19 14:29               ` Markus Armbruster
  0 siblings, 0 replies; 23+ messages in thread
From: Markus Armbruster @ 2016-07-19 14:29 UTC (permalink / raw)
  To: Eric Blake
  Cc: Prasanna Kalever, Raghavendra Talur, Jeffrey Cody,
	Prasanna Kumar Kalever, qemu-devel, Vijay Bellur

Eric Blake <eblake@redhat.com> writes:

> On 07/19/2016 05:12 AM, Markus Armbruster wrote:
>> Cc'ing Eric, because I'd like his advice on a couple of points.
>
> I've been following the thread, but with this specific invitation, I'll
> definitely chime in.
>
>
>>> int
>>> pub_glfs_set_volfile_server (struct glfs *fs, const char *transport,
>>>                              const char *host, int port)
>>>
>>> So, I hope you stand with me in making port an int;
>> 
>> If we consider just interfacing with (the current version of)
>> glusterfs-devel, int is the natural type of port.
>> 
>> However, we already have a related abstraction: InetSocketAddress.  If
>> it fits, we should probably reuse it.  If we decide not to reuse it now,
>> we may still want to minimize differences to keep the door open for
>> future reuse.
>
> Furthermore, just because gluster code doesn't accept a string does not
> prevent qemu from accepting a string, looking it up in the database of
> named ports, and passing the corresponding number on to gluster.  I am
> very strongly in favor of supporting named ports in the qemu interface,
> even if it means more work under the hood for qemu to pass on to
> gluster.  Whether we support that by an alternate between int and
> string, or keep symmetry with existing interfaces being string only, is
> where it becomes a judgment call (and going string-only for now can
> always be improved later for convenience).

Let's do just string now.  It's consistent with what we use elsewhere,
and doesn't tie our hands.

To avoid further delay, arbitrarily restrict the string to port number
for now.
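
(For later reference: named-port support would not need a schema change,
only a lookup on the C side.  A rough sketch — the helper name is made
up, getservbyname() is POSIX:)

```c
#include <netdb.h>        /* getservbyname */
#include <netinet/in.h>   /* struct servent port byte order */
#include <arpa/inet.h>    /* ntohs */
#include <stdlib.h>       /* strtol */

/* Hypothetical helper: accept either a numeric port string or a
 * service name like "http"; return -1 if neither resolves. */
static int port_from_str(const char *s)
{
    char *end;
    long n = strtol(s, &end, 10);

    if (*s != '\0' && *end == '\0') {
        /* Purely numeric: range-check it, don't fall back to names. */
        return (n >= 0 && n <= 65535) ? (int)n : -1;
    }
    struct servent *se = getservbyname(s, "tcp");
    return se ? ntohs(se->s_port) : -1;
}
```

The caller can then hand the resulting int straight to
glfs_set_volfile_server() while the external interface stays a string.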

>> InetSocketAddress's port is str.  If we make GlusterInetSocketAddress's
>> port an integer, then a future unification with InetSocketAddress will
>> have to make port an alternate.  Not impossible, but why tie our hands
>> that way now?  But let's not get bogged down in this detail now.  We
>> first have to decide whether to reuse InetSocketAddress now (more on
>> that below).  If yes, the question is moot.
>> 
>
>>>>>>   Changes in GlusterInetSocketAddress over InetSocketAddress:
>>>>>>
>>>>>>   - @port is optional
>>>>>>
>>>>>>     Convenience feature.  We can discuss making it optional in
>>>>>>     InetSocketAddress, too.
>>>>>
>>>>> sure.
>> 
>> Of course, it's too late in the development cycle for making it optional
>> *now*.
>> 
>> If we reuse InetSocketAddress now, as is, then the QMP client has to
>> specify the Gluster port, even though it's usually 24007.  For command
>> line and HMP, we can still make it optional.  I think that's tolerable.
>> We can investigate making it optional later.
>
> Agree - we're too late to make it optional in 2.7, but there is nothing
> preventing us from making it optional in the future, and it's neither
> too much burden on libvirt to supply a sane default if it is mandatory,
> nor too hard for the command line and HMP to make it optional on top of
> a mandatory QMP, for the sake of human interaction.

Agreed then.

>> Port ranges make sense for some users of InetSocketAddress, but not for
>> others.  Gluster is of the latter kind.
>> 
>> So far, the latter kind simply uses InetSocketAddress, and either
>> rejects @to or ignores it.  The latter would be a bug.
>> 
>> We could do the same for Gluster: if has_to, error out.
>> 
>> Alternatively, we could split InetSocketAddressRange (with @to) off
>> InetSocketAddress (delete @to).  Makes range support visible in
>> introspection.
>
> Ultimately, I think we should split InetSocketAddress. But we're too
> late to do it in 2.7.

Makes sense.

>> If we use InetSocketAddress as is now, and have gluster.c reject @to, we
>> can still split off InetSocketAddressRange later.  The external
>> interface doesn't care whether @to is rejected as unknown parameter by
>> QAPI or as unsupported parameter by gluster.c.
>> 
>
> Agree - at this point, manually rejecting has_to will let us reuse the
> type, and would then be part of the cleanup work when 2.8 splits
> InetSocketAddress.
>
>> Now let me summarize.  We could do without GlusterInetSocketAddress,
>> because:
>> 
>> * SocketAddress's non-optional @port is a minor inconvenience which
>>   we can address in the future without breaking compatibility.
>> 
>> * SocketAddress's string @port is a minor inconvenience for the C code
>>   using it.  Keeping the external interface consistent (same type for
>>   TCP ports everywhere) is worth some inconvenience in the C code.
>> 
>> * SocketAddress's @to complicates the external interface, but the
>>   complication exists elsewhere already.  gluster.c can reject @to.  We
>>   can clean up the external interface in the future without breaking
>>   compatibility.
>
> In fact, introspection will be nicer when we split InetSocketAddress to
> no longer include 'to', as you will then be able to precisely determine
> which uses support ranges.  For now, management apps just have to be
> aware that ranges may be rejected.
>
>> 
>> * SocketAddress's @ipv4, @ipv6 cannot be fully implemented by gluster.c,
>>   yet.  gluster.c can simply reject the settings it doesn't implement,
>>   until they get implemented.
>> 
>> Reasons for having a separate GlusterInetSocketAddress:
>> 
>> * Slightly less code in gluster.c.  I reject that reason.  Keeping the
>>   interface lean is worth some extra code.
>> 
>>   Note that extra schema definitions to avoid code complications may be
>>   fine as long as they don't affect external interfaces.
>> 
>> * Makes non-support of port ranges and IPv6 visible in introspection.
>>   That's a valid argument, but it's an argument for having
>>   Inet4SockAddress, not for GlusterInetSocketAddress.
>> 
>>   Eric, do you think there's a need for introspecting IPv6 support?
>
> I'm borderline on that one.  If the error message given when attempting
> an IPv6 address is nice enough, then libvirt can just blindly try an
> IPv6 address and inform the user that qemu/gluster is too old to support
> it.  If the error message is lousy, then being able to introspect
> whether IPv6 support is present would let libvirt provide a saner error
> message.  But at the moment, I'm not convinced that introspection alone
> is reason to create a struct without IPv6 now, where IPv6 is added later
> once gluster supports it.
>
> Furthermore, a system administrator that NEEDS to use IPv6 is very
> likely to upgrade their system to support things before trying to use
> IPv6.  So in this case, I'm leaning towards just reusing the type and
> rejecting IPv6 in the gluster code until it works.

Okay, let's reuse InetSocketAddress, and reject the following in C code:

* has_to

* has_ipv4 || has_ipv6

  This is overkill.  We only have to reject the combinations that
  normally result in v6, but don't with Gluster.
  inet_ai_family_from_address()'s function comment documents how we pick
  the address family.  I'm proposing overkill since this is at v19, and
  we need to make progress.
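
In code, the overkill check could look roughly like this (the struct is
a stand-in for the QAPI-generated InetSocketAddress, and the helper
returns an error string instead of going through Error, for brevity):

```c
#include <stdbool.h>
#include <stdint.h>
#include <stddef.h>

/* Stand-in for the QAPI-generated struct; field names follow the
 * has_FOO convention the generator uses for optional members. */
typedef struct InetSocketAddress {
    char *host;
    char *port;
    bool has_to;   uint16_t to;
    bool has_ipv4; bool ipv4;
    bool has_ipv6; bool ipv6;
} InetSocketAddress;

/* Sketch of the filtering discussed above: reuse the generic type,
 * but reject the options gluster.c does not implement yet. */
static const char *gluster_check_addr(const InetSocketAddress *addr)
{
    if (addr->has_to) {
        return "parameter 'to' is not supported by gluster";
    }
    if (addr->has_ipv4 || addr->has_ipv6) {
        return "parameters 'ipv4'/'ipv6' are not supported by gluster";
    }
    return NULL;   /* accepted */
}
```

Splitting InetSocketAddressRange off later then just deletes the first
check; the external interface doesn't change.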

>> Similar design question as for @ipv4, @ipv6: reuse of SocketAddress is
>> possible, but the C code has to reject options it doesn't implement,
>> i.e. "type": "fd".  Non-support of fd isn't visible in introspection
>> then.
>> 
>> Eric, do you think there's a need for introspecting fd support?
>
> There's no easy way to declare one QAPI union that is a superset of
> another (all the same branches as before, plus these additional
> branches); it requires duplication of all the common branches.

Yes.

> Arguably, we could extend QAPI if we find ourselves doing that a lot, to
> make it supported with less copy and paste; but manual duplication right
> now doesn't hurt.

Agreed.

> Here, I think introspection is probably more useful, particularly since
> there are security aspects in fd passing (where libvirt can pre-open the
> connection) that are nicer to know up front if they will work, and where
> it is less obvious to a system administrator that they need/want to use
> fd passing under the hood.  IPv6 is very easy to say: "will my
> deployment need IPv6 addresses - if so, upgrade gluster to a version
> that supports IPv6".  But fd passing is not so straightforward.  So I
> think being able to introspect this one is useful - but I also think
> that having a union that shares the same underlying types for the common
> branches is still slightly nicer than having different types for the
> same-named branches (that is, since both the reduced union without 'fd'
> support and the full union will have 'unix' and 'inet' branches, both
> the 'unix' and 'inet' branch should use the same underlying type rather
> than being subtly different).

Makes sense.

>> Additional design question: do we want to move away from SocketAddress's
>> pointless nesting on the wire for new interfaces?  I.e. move from
>> something like
>> 
>>     { "type": "tcp", "data": { "host": "gluster.example.com", ... } }
>> 
>> to
>> 
>>     { "type": "tcp", "host": "gluster.example.com", ... }
>> 
>> This isn't a Gluster question, it's a general question.  Eric, what do
>> you think?
>> 
>
> Nested vs. flat is somewhat cosmetic - machine-generated code (like
> libvirt's interactions) can cope with either.  On the other hand, adding
> a flat type now may make OTHER blockdev-add drivers easier to QAPIfy,
> such as nbd and sheepdog, and I kind of like the simplicity afforded by
> a flat type.

The longer I do QAPI, the more I dislike "simple" unions.

>>> { 'union': 'SocketAddress',
>>>   'data': {
>>>     'inet': 'InetSocketAddress',
>>>     'unix': 'UnixSocketAddress',
>>>     'fd': 'String' } }
>
> I also think that if we add a flat type, it would be nicer to use
> 'fd':'str' instead of 'fd':'String' (which is itself another pointless
> nesting due to backwards compatibility), at least for the version of the
> flat type where 'fd' is supported.

Oh yes.  String's documentation justifies its existence with "to be
embedded in lists."  Well, strList exists.

>>> after removing the 'fd' part, which is not supported now, this looks like
>>>
>>> { 'union': 'GlusterSocketAddress',
>>>   'data': {
>>>     'inet': 'GlusterInetSocketAddress',
>>>     'unix': 'UnixSocketAddress' } }
>>>
>>> What do you think?
>> 
>> This is fine if
>> 
>> * we decide we want a new GlusterInetSocketAddress instead of reusing
>>   InetSocketAddress, and
>> 
>> * we decide we don't want to avoid nesting on the wire.
>> 
>> But we haven't settled these questions.  I'd like to settle them today,
>> taking Eric's advice into account.
>
> I think sharing InetSocketAddress is reasonable, but having separate
> GlusterSocketAddress (or more generically, FlatSocketAddress that can be
> shared by gluster, sheepdog, and nbd), seems okay.  I'm also not fussed
> about naming - naming it 'GlusterSocketAddress' now and renaming it
> 'FlatSocketAddress' later when it gets more use does not impact
> introspection.

Renaming won't be a big deal because it should affect mostly just
gluster.c.  So let's start with

{ 'enum': 'GlusterTransport',
  'data': [ 'unix', 'tcp' ] }

{ 'union': 'GlusterServer',
  'base': { 'type': 'GlusterTransport' },
  'discriminator': 'type',
  'data': { 'unix': 'UnixSocketAddress',
            'tcp': 'InetSocketAddress' } }

I.e. the current patch with GlusterUnixSocketAddress and
GlusterSocketAddress dropped.  I'll discuss the next steps with Prasanna
on IRC right away.

This GlusterServer type can be compatibly evolved into a perfectly
generic replacement for the current SocketAddress type to be used in new
code.  The current SocketAddress is only used by commands
nbd-server-start and chardev-add.
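
For illustration, a blockdev options object under this schema could then
look like this (member names 'volume', 'path', 'server' as used in this
series; subject to change):

```
{ "driver": "gluster",
  "volume": "testvol",
  "path": "/path/a.qcow2",
  "server": [ { "type": "tcp",
                "host": "1.2.3.4",
                "port": "24007" },
              { "type": "unix",
                "path": "/var/run/glusterd.socket" } ] }
```

Note "type" doubling as the flat union's discriminator, and "port"
staying a string, per the discussion above.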


end of thread, other threads:[~2016-07-19 14:29 UTC | newest]

Thread overview: 23+ messages
2016-07-15 15:00 [Qemu-devel] [PATCH v19 0/5] block/gluster: add support for multiple gluster servers Prasanna Kumar Kalever
2016-07-15 15:00 ` [Qemu-devel] [PATCH v19 1/5] block/gluster: rename [server, volname, image] -> [host, volume, path] Prasanna Kumar Kalever
2016-07-15 15:00 ` [Qemu-devel] [PATCH v19 2/5] block/gluster: code cleanup Prasanna Kumar Kalever
2016-07-15 15:00 ` [Qemu-devel] [PATCH v19 3/5] block/gluster: remove rdma transport Prasanna Kumar Kalever
2016-07-18  8:53   ` Markus Armbruster
2016-07-18 11:12     ` Prasanna Kalever
2016-07-18 12:18       ` Markus Armbruster
2016-07-18 13:35         ` Raghavendra Talur
2016-07-18 13:50           ` Prasanna Kalever
2016-07-18 14:02           ` Markus Armbruster
2016-07-15 15:00 ` [Qemu-devel] [PATCH v19 4/5] block/gluster: using new qapi schema Prasanna Kumar Kalever
2016-07-18  9:29   ` Markus Armbruster
2016-07-18 11:29     ` Prasanna Kalever
2016-07-18 13:11       ` Markus Armbruster
2016-07-18 18:28         ` Prasanna Kalever
2016-07-19 11:12           ` Markus Armbruster
2016-07-19 12:57             ` Eric Blake
2016-07-19 14:29               ` Markus Armbruster
2016-07-15 15:00 ` [Qemu-devel] [PATCH v19 5/5] block/gluster: add support for multiple gluster servers Prasanna Kumar Kalever
2016-07-18 10:17   ` Markus Armbruster
2016-07-18 11:51     ` Prasanna Kalever
2016-07-18 14:39       ` Markus Armbruster
2016-07-18 19:02         ` Prasanna Kalever
