* [Qemu-devel] [PATCH v3 0/5] Fix handling of IPv4/IPv6 dual stack
@ 2017-06-01  8:29 Daniel P. Berrange
  2017-06-01  8:29 ` [Qemu-devel] [PATCH v3 1/5] sockets: ensure we can bind to both ipv4 & ipv6 separately Daniel P. Berrange
                   ` (6 more replies)
  0 siblings, 7 replies; 9+ messages in thread
From: Daniel P. Berrange @ 2017-06-01  8:29 UTC (permalink / raw)
  To: qemu-devel
  Cc: Gerd Hoffmann, Paolo Bonzini, Philippe Mathieu-Daudé,
	Eric Blake, Daniel P. Berrange

This is a followup to:

  v1: https://lists.nongnu.org/archive/html/qemu-devel/2017-04/msg05659.html
  v2: https://lists.nongnu.org/archive/html/qemu-devel/2017-05/msg04706.html

This series aims to fix a lot of bugs related to handling of IPv4 / IPv6
dual stack.

 - The VNC server mistakenly listened on two separate ports 5900+5901
   when the to= parameter was given
 - IPv6 sockets are accepting IPv4 clients even when IPv4 is set to
   be disabled
 - IPv6 sockets are failing to accept IPv4 clients when IPv4 is not set
   to be disabled
 - The VNC server was losing the ipv4=/ipv6= settings due to a bug
   in the DNS resolver

The behaviour of all this is really subtle and hard to get working correctly
across all the different network backends. Thus, the most important part of
this patch series is the last patch, which adds a test case covering the
backends for -vnc, -chardev tcp, -net socket, and -incoming socket, with
a 120-entry matrix.

IOW, if you think any of the first 4 patches apply the wrong logic,
then take a look at the last patch and indicate which test matrix entries
you believe define the wrong behaviour :-)

Changed in v3:

 - Clarified error message in commit message
 - Replace assert with exit (Eric)
 - Fix typo in test comment (Eric)
 - Fix weird copyright line (Eric)

Daniel P. Berrange (5):
  sockets: ensure we can bind to both ipv4 & ipv6 separately
  sockets: don't block IPv4 clients when listening on "::"
  sockets: ensure we don't accept IPv4 clients when IPv4 is disabled
  io: preserve ipv4/ipv6 flags when resolving InetSocketAddress
  tests: add functional test validating ipv4/ipv6 address flag handling

 io/dns-resolver.c          |   6 +-
 tests/.gitignore           |   1 +
 tests/Makefile.include     |   4 +
 tests/test-sockets-proto.c | 855 +++++++++++++++++++++++++++++++++++++++++++++
 util/qemu-sockets.c        |  71 +++-
 5 files changed, 916 insertions(+), 21 deletions(-)
 create mode 100644 tests/test-sockets-proto.c

-- 
2.9.3


* [Qemu-devel] [PATCH v3 1/5] sockets: ensure we can bind to both ipv4 & ipv6 separately
  2017-06-01  8:29 [Qemu-devel] [PATCH v3 0/5] Fix handling of IPv4/IPv6 dual stack Daniel P. Berrange
@ 2017-06-01  8:29 ` Daniel P. Berrange
  2017-06-01  8:29 ` [Qemu-devel] [PATCH v3 2/5] sockets: don't block IPv4 clients when listening on "::" Daniel P. Berrange
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 9+ messages in thread
From: Daniel P. Berrange @ 2017-06-01  8:29 UTC (permalink / raw)
  To: qemu-devel
  Cc: Gerd Hoffmann, Paolo Bonzini, Philippe Mathieu-Daudé,
	Eric Blake, Daniel P. Berrange

When binding to an IPv6 socket we currently force the
IPV6_V6ONLY flag to off. This means that the IPv6 socket
will accept both IPv4 & IPv6 sockets when QEMU is launched
with something like

  -vnc :::1

While this is good for that case, it is bad for other
cases. For example if an empty hostname is given,
getaddrinfo resolves it to 2 addresses 0.0.0.0 and ::,
in that order. We will thus bind to 0.0.0.0 first, and
then fail to bind to :: on the same port. The same
problem can happen if any other hostname lookup causes
the IPv4 address to be reported before the IPv6 address.

When we get an IPv6 bind failure, we should re-try the
same port, but with IPV6_V6ONLY turned on again, to
avoid clash with any IPv4 listener.

This ensures that

  -vnc :1

will bind successfully to both 0.0.0.0 and ::, and also
prevents

  -vnc :1,to=2

from mistakenly using a second port for the :: listener.

Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
---
 util/qemu-sockets.c | 31 +++++++++++++++++++++++--------
 1 file changed, 23 insertions(+), 8 deletions(-)

diff --git a/util/qemu-sockets.c b/util/qemu-sockets.c
index b39ae74..8720097 100644
--- a/util/qemu-sockets.c
+++ b/util/qemu-sockets.c
@@ -208,22 +208,37 @@ static int inet_listen_saddr(InetSocketAddress *saddr,
         }
 
         socket_set_fast_reuse(slisten);
-#ifdef IPV6_V6ONLY
-        if (e->ai_family == PF_INET6) {
-            /* listen on both ipv4 and ipv6 */
-            const int off = 0;
-            qemu_setsockopt(slisten, IPPROTO_IPV6, IPV6_V6ONLY, &off,
-                            sizeof(off));
-        }
-#endif
 
         port_min = inet_getport(e);
         port_max = saddr->has_to ? saddr->to + port_offset : port_min;
         for (p = port_min; p <= port_max; p++) {
+#ifdef IPV6_V6ONLY
+            /* listen on both ipv4 and ipv6 */
+            int v6only = 0;
+#endif
             inet_setport(e, p);
+#ifdef IPV6_V6ONLY
+        rebind:
+            if (e->ai_family == PF_INET6) {
+                qemu_setsockopt(slisten, IPPROTO_IPV6, IPV6_V6ONLY, &v6only,
+                                sizeof(v6only));
+            }
+#endif
             if (bind(slisten, e->ai_addr, e->ai_addrlen) == 0) {
                 goto listen;
             }
+
+#ifdef IPV6_V6ONLY
+            /* If we got EADDRINUSE from an IPv6 bind & V6ONLY is unset,
+             * it could be that the IPv4 port is already claimed, so retry
+             * with V6ONLY set
+             */
+            if (e->ai_family == PF_INET6 && errno == EADDRINUSE && !v6only) {
+                v6only = 1;
+                goto rebind;
+            }
+#endif
+
             if (p == port_max) {
                 if (!e->ai_next) {
                     error_setg_errno(errp, errno, "Failed to bind socket");
-- 
2.9.3


* [Qemu-devel] [PATCH v3 2/5] sockets: don't block IPv4 clients when listening on "::"
  2017-06-01  8:29 [Qemu-devel] [PATCH v3 0/5] Fix handling of IPv4/IPv6 dual stack Daniel P. Berrange
  2017-06-01  8:29 ` [Qemu-devel] [PATCH v3 1/5] sockets: ensure we can bind to both ipv4 & ipv6 separately Daniel P. Berrange
@ 2017-06-01  8:29 ` Daniel P. Berrange
  2017-06-01  8:29 ` [Qemu-devel] [PATCH v3 3/5] sockets: ensure we don't accept IPv4 clients when IPv4 is disabled Daniel P. Berrange
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 9+ messages in thread
From: Daniel P. Berrange @ 2017-06-01  8:29 UTC (permalink / raw)
  To: qemu-devel
  Cc: Gerd Hoffmann, Paolo Bonzini, Philippe Mathieu-Daudé,
	Eric Blake, Daniel P. Berrange

When inet_parse() parses the hostname, it forces on the
has_ipv6 & ipv6 flags if the address contains a ":". This
means that if the user had set the ipv4=on flag, to try to
restrict the listener to just IPv4, an error would not have
been raised.  e.g.

   -incoming tcp:[::]:9000,ipv4

should have raised an error because listening for IPv4
on "::" is a nonsensical combination. With this removed,
we now call getaddrinfo() on "::" passing PF_INET and
so getaddrinfo reports an error about the hostname being
incompatible with the requested protocol:

 qemu-system-x86_64: -incoming tcp:[::]:9000,ipv4: address resolution
    failed for :::9000: Address family for hostname not supported

Likewise it explicitly sets the has_ipv4 & ipv4 flags
when the address contains only digits + '.'. This has
no ill-effect, but also has no benefit, so it is removed.
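As an illustration (not part of the patch), a minimal getaddrinfo()
call mirrors what the code now does for ipv4=on with a "::" host.
The exact error string varies by libc (glibc reports "Address family
for hostname not supported"), so only the failure itself is asserted:

```c
/* Illustrative sketch: resolving "::" while restricting the family
 * to AF_INET fails, which is the error the patch lets surface. */
#include <stdio.h>
#include <string.h>
#include <netdb.h>
#include <sys/socket.h>

int main(void)
{
    struct addrinfo hints, *res = NULL;
    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_INET;       /* what ipv4=on now selects */
    hints.ai_socktype = SOCK_STREAM;
    hints.ai_flags = AI_PASSIVE;
    int rc = getaddrinfo("::", "9000", &hints, &res);
    if (rc != 0) {
        printf("resolution failed: %s\n", gai_strerror(rc));
    } else {
        printf("unexpectedly resolved\n");
        freeaddrinfo(res);
    }
    return 0;
}
```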

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
---
 util/qemu-sockets.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/util/qemu-sockets.c b/util/qemu-sockets.c
index 8720097..81bc8de 100644
--- a/util/qemu-sockets.c
+++ b/util/qemu-sockets.c
@@ -618,16 +618,12 @@ int inet_parse(InetSocketAddress *addr, const char *str, Error **errp)
             error_setg(errp, "error parsing IPv6 address '%s'", str);
             return -1;
         }
-        addr->ipv6 = addr->has_ipv6 = true;
     } else {
         /* hostname or IPv4 addr */
         if (sscanf(str, "%64[^:]:%32[^,]%n", host, port, &pos) != 2) {
             error_setg(errp, "error parsing address '%s'", str);
             return -1;
         }
-        if (host[strspn(host, "0123456789.")] == '\0') {
-            addr->ipv4 = addr->has_ipv4 = true;
-        }
     }
 
     addr->host = g_strdup(host);
-- 
2.9.3


* [Qemu-devel] [PATCH v3 3/5] sockets: ensure we don't accept IPv4 clients when IPv4 is disabled
  2017-06-01  8:29 [Qemu-devel] [PATCH v3 0/5] Fix handling of IPv4/IPv6 dual stack Daniel P. Berrange
  2017-06-01  8:29 ` [Qemu-devel] [PATCH v3 1/5] sockets: ensure we can bind to both ipv4 & ipv6 separately Daniel P. Berrange
  2017-06-01  8:29 ` [Qemu-devel] [PATCH v3 2/5] sockets: don't block IPv4 clients when listening on "::" Daniel P. Berrange
@ 2017-06-01  8:29 ` Daniel P. Berrange
  2017-06-02  4:45   ` Philippe Mathieu-Daudé
  2017-06-01  8:29 ` [Qemu-devel] [PATCH v3 4/5] io: preserve ipv4/ipv6 flags when resolving InetSocketAddress Daniel P. Berrange
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 9+ messages in thread
From: Daniel P. Berrange @ 2017-06-01  8:29 UTC (permalink / raw)
  To: qemu-devel
  Cc: Gerd Hoffmann, Paolo Bonzini, Philippe Mathieu-Daudé,
	Eric Blake, Daniel P. Berrange

Currently if you disable listening on IPv4 addresses, via the
CLI flag ipv4=off, we still mistakenly accept IPv4 clients via
the IPv6 listener socket, due to the IPV6_V6ONLY flag being unset.

We must ensure IPV6_V6ONLY is always set if ipv4=off.

This fixes the following scenarios:

  -incoming tcp::9000,ipv6=on
  -incoming tcp:[::]:9000,ipv6=on
  -chardev socket,id=cdev0,host=,port=9000,server,nowait,ipv4=off
  -chardev socket,id=cdev0,host=,port=9000,server,nowait,ipv6=on
  -chardev socket,id=cdev0,host=::,port=9000,server,nowait,ipv4=off
  -chardev socket,id=cdev0,host=::,port=9000,server,nowait,ipv6=on

which all mistakenly accepted IPv4 clients

Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
---
 util/qemu-sockets.c | 40 +++++++++++++++++++++++++++++++---------
 1 file changed, 31 insertions(+), 9 deletions(-)

diff --git a/util/qemu-sockets.c b/util/qemu-sockets.c
index 81bc8de..852773d 100644
--- a/util/qemu-sockets.c
+++ b/util/qemu-sockets.c
@@ -104,17 +104,16 @@ NetworkAddressFamily inet_netfamily(int family)
  *   f     t       PF_INET6
  *   t     -       PF_INET
  *   t     f       PF_INET
- *   t     t       PF_INET6
+ *   t     t       PF_INET6/PF_UNSPEC
  *
  * NB, this matrix is only about getting the necessary results
  * from getaddrinfo(). Some of the cases require further work
  * after reading results from getaddrinfo in order to fully
- * apply the logic the end user wants. eg with the last case
- * ipv4=t + ipv6=t + PF_INET6, getaddrinfo alone can only
- * guarantee the ipv6=t part of the request - we need more
- * checks to provide ipv4=t part of the guarantee. This is
- * outside scope of this method and not currently handled by
- * callers at all.
+ * apply the logic the end user wants.
+ *
+ * In the first and last cases, we must set IPV6_V6ONLY=0
+ * when binding, to allow a single listener to potentially
+ * accept both IPv4+6 addresses.
  */
 int inet_ai_family_from_address(InetSocketAddress *addr,
                                 Error **errp)
@@ -124,6 +123,23 @@ int inet_ai_family_from_address(InetSocketAddress *addr,
         error_setg(errp, "Cannot disable IPv4 and IPv6 at same time");
         return PF_UNSPEC;
     }
+    if ((addr->has_ipv6 && addr->ipv6) && (addr->has_ipv4 && addr->ipv4)) {
+        /*
+         * Some backends can only do a single listener. In that case
+         * we want empty hostname to resolve to "::" and then use the
+         * flag IPV6_V6ONLY==0 to get both protocols on 1 socket. This
+         * doesn't work for addresses other than "", so they're just
+         * inevitably broken until multiple listeners can be used,
+         * and thus we honour getaddrinfo automatic protocol detection
+         * Once all backends do multi-listener, remove the PF_INET6
+         * branch entirely.
+         */
+        if (!addr->host || g_str_equal(addr->host, "")) {
+            return PF_INET6;
+        } else {
+            return PF_UNSPEC;
+        }
+    }
     if ((addr->has_ipv6 && addr->ipv6) || (addr->has_ipv4 && !addr->ipv4)) {
         return PF_INET6;
     }
@@ -213,8 +229,14 @@ static int inet_listen_saddr(InetSocketAddress *saddr,
         port_max = saddr->has_to ? saddr->to + port_offset : port_min;
         for (p = port_min; p <= port_max; p++) {
 #ifdef IPV6_V6ONLY
-            /* listen on both ipv4 and ipv6 */
-            int v6only = 0;
+            /*
+             * Deals with first & last cases in matrix in comment
+             * for inet_ai_family_from_address().
+             */
+            int v6only =
+                ((!saddr->has_ipv4 && !saddr->has_ipv6) ||
+                 (saddr->has_ipv4 && saddr->ipv4 &&
+                  saddr->has_ipv6 && saddr->ipv6)) ? 0 : 1;
 #endif
             inet_setport(e, p);
 #ifdef IPV6_V6ONLY
-- 
2.9.3


* [Qemu-devel] [PATCH v3 4/5] io: preserve ipv4/ipv6 flags when resolving InetSocketAddress
  2017-06-01  8:29 [Qemu-devel] [PATCH v3 0/5] Fix handling of IPv4/IPv6 dual stack Daniel P. Berrange
                   ` (2 preceding siblings ...)
  2017-06-01  8:29 ` [Qemu-devel] [PATCH v3 3/5] sockets: ensure we don't accept IPv4 clients when IPv4 is disabled Daniel P. Berrange
@ 2017-06-01  8:29 ` Daniel P. Berrange
  2017-06-01  8:29 ` [Qemu-devel] [PATCH v3 5/5] tests: add functional test validating ipv4/ipv6 address flag handling Daniel P. Berrange
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 9+ messages in thread
From: Daniel P. Berrange @ 2017-06-01  8:29 UTC (permalink / raw)
  To: qemu-devel
  Cc: Gerd Hoffmann, Paolo Bonzini, Philippe Mathieu-Daudé,
	Eric Blake, Daniel P. Berrange

The original InetSocketAddress struct may have has_ipv4 and
has_ipv6 fields set, which will control both the ai_family
used during DNS resolution, and later use of the V6ONLY
flag.

Currently the standalone DNS resolver code drops the
has_ipv4 & has_ipv6 flags after resolving, which means
the later bind() code won't correctly set V6ONLY.

This fixes the following scenarios

  -vnc :0,ipv4=off
  -vnc :0,ipv6=on
  -vnc :::0,ipv4=off
  -vnc :::0,ipv6=on

which all mistakenly accepted IPv4 clients

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
---
 io/dns-resolver.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/io/dns-resolver.c b/io/dns-resolver.c
index 57a8896..c072d12 100644
--- a/io/dns-resolver.c
+++ b/io/dns-resolver.c
@@ -116,8 +116,10 @@ static int qio_dns_resolver_lookup_sync_inet(QIODNSResolver *resolver,
             .numeric = true,
             .has_to = iaddr->has_to,
             .to = iaddr->to,
-            .has_ipv4 = false,
-            .has_ipv6 = false,
+            .has_ipv4 = iaddr->has_ipv4,
+            .ipv4 = iaddr->ipv4,
+            .has_ipv6 = iaddr->has_ipv6,
+            .ipv6 = iaddr->ipv6,
         };
 
         (*addrs)[i] = newaddr;
-- 
2.9.3


* [Qemu-devel] [PATCH v3 5/5] tests: add functional test validating ipv4/ipv6 address flag handling
  2017-06-01  8:29 [Qemu-devel] [PATCH v3 0/5] Fix handling of IPv4/IPv6 dual stack Daniel P. Berrange
                   ` (3 preceding siblings ...)
  2017-06-01  8:29 ` [Qemu-devel] [PATCH v3 4/5] io: preserve ipv4/ipv6 flags when resolving InetSocketAddress Daniel P. Berrange
@ 2017-06-01  8:29 ` Daniel P. Berrange
  2017-06-06 10:32 ` [Qemu-devel] [PATCH v3 0/5] Fix handling of IPv4/IPv6 dual stack Gerd Hoffmann
  2017-06-10 16:06 ` Vlad Yasevich
  6 siblings, 0 replies; 9+ messages in thread
From: Daniel P. Berrange @ 2017-06-01  8:29 UTC (permalink / raw)
  To: qemu-devel
  Cc: Gerd Hoffmann, Paolo Bonzini, Philippe Mathieu-Daudé,
	Eric Blake, Daniel P. Berrange

The semantics around handling ipv4=on|off & ipv6=on|off are quite
subtle to understand in combination with the various hostname addresses
and backend types. Introduce a massive test matrix that launches QEMU
and validates the ability to connect a client on each protocol as
appropriate.

The test requires that the host has the ability to bind to both ::
and 0.0.0.0, on port 9000. If either protocol is not available, or if
something is already listening on that port, the test will skip.

Although it isn't using the QTest APIs, it expects the
QTEST_QEMU_BINARY env variable to be set.

Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
---
 tests/.gitignore           |   1 +
 tests/Makefile.include     |   4 +
 tests/test-sockets-proto.c | 855 +++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 860 insertions(+)
 create mode 100644 tests/test-sockets-proto.c

diff --git a/tests/.gitignore b/tests/.gitignore
index 40c2e3e..9fa14e5 100644
--- a/tests/.gitignore
+++ b/tests/.gitignore
@@ -76,6 +76,7 @@ test-qobject-output-visitor
 test-rcu-list
 test-replication
 test-shift128
+test-sockets-proto
 test-string-input-visitor
 test-string-output-visitor
 test-thread-pool
diff --git a/tests/Makefile.include b/tests/Makefile.include
index 7589383..4ae75a9 100644
--- a/tests/Makefile.include
+++ b/tests/Makefile.include
@@ -261,6 +261,7 @@ check-qtest-i386-y += tests/test-filter-redirector$(EXESUF)
 check-qtest-i386-y += tests/postcopy-test$(EXESUF)
 check-qtest-i386-y += tests/test-x86-cpuid-compat$(EXESUF)
 check-qtest-i386-y += tests/numa-test$(EXESUF)
+check-qtest-i386-y += tests/test-sockets-proto$(EXESUF)
 check-qtest-x86_64-y += $(check-qtest-i386-y)
 gcov-files-i386-y += i386-softmmu/hw/timer/mc146818rtc.c
 gcov-files-x86_64-y = $(subst i386-softmmu/,x86_64-softmmu/,$(gcov-files-i386-y))
@@ -287,6 +288,7 @@ check-qtest-ppc64-y += tests/prom-env-test$(EXESUF)
 check-qtest-ppc64-y += tests/pnv-xscom-test$(EXESUF)
 check-qtest-ppc64-y += tests/drive_del-test$(EXESUF)
 check-qtest-ppc64-y += tests/postcopy-test$(EXESUF)
+check-qtest-ppc64-y += tests/test-sockets-proto$(EXESUF)
 check-qtest-ppc64-y += tests/boot-serial-test$(EXESUF)
 check-qtest-ppc64-y += tests/rtas-test$(EXESUF)
 check-qtest-ppc64-y += tests/pxe-test$(EXESUF)
@@ -741,6 +743,8 @@ tests/usb-hcd-ehci-test$(EXESUF): tests/usb-hcd-ehci-test.o $(libqos-usb-obj-y)
 tests/usb-hcd-xhci-test$(EXESUF): tests/usb-hcd-xhci-test.o $(libqos-usb-obj-y)
 tests/pc-cpu-test$(EXESUF): tests/pc-cpu-test.o
 tests/postcopy-test$(EXESUF): tests/postcopy-test.o
+tests/test-sockets-proto$(EXESUF): tests/test-sockets-proto.o \
+	$(test-io-obj-y)
 tests/vhost-user-test$(EXESUF): tests/vhost-user-test.o $(test-util-obj-y) \
 	$(qtest-obj-y) $(test-io-obj-y) $(libqos-virtio-obj-y) $(libqos-pc-obj-y) \
 	$(chardev-obj-y)
diff --git a/tests/test-sockets-proto.c b/tests/test-sockets-proto.c
new file mode 100644
index 0000000..c6f7ee2
--- /dev/null
+++ b/tests/test-sockets-proto.c
@@ -0,0 +1,855 @@
+/*
+ * QTest for IPv4/IPv6 protocol setup
+ *
+ * Copyright (c) 2017 Red Hat, Inc.
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+
+#include "qemu/osdep.h"
+
+#include "io/channel-socket.h"
+#include "qapi/error.h"
+#include "qemu/cutils.h"
+
+typedef struct {
+    const char *name;
+    const char *args;
+    bool ipv4;
+    bool ipv6;
+    bool error;
+} QSocketsData;
+
+/*
+ * This is the giant matrix of combinations we need to consider.
+ * There are 3 axes we deal with
+ *
+ * Axis 1: Protocol flags:
+ *
+ *  ipv4=unset, ipv6=unset  -> v4 & v6 clients [1]
+ *  ipv4=unset, ipv6=off    -> v4 clients only
+ *  ipv4=unset, ipv6=on     -> v6 clients only
+ *  ipv4=off, ipv6=unset    -> v6 clients only
+ *  ipv4=off, ipv6=off      -> error - can't disable both [2]
+ *  ipv4=off, ipv6=on       -> v6 clients only
+ *  ipv4=on, ipv6=unset     -> v4 clients only
+ *  ipv4=on, ipv6=off       -> v4 clients only
+ *  ipv4=on, ipv6=on        -> v4 & v6 clients [3]
+ *
+ * Depending on the listening address, some of those combinations
+ * may result in errors. eg ipv4=off,ipv6=on combined with 0.0.0.0
+ * is nonsensical.
+ *
+ * [1] Some backends only support a single socket listener, so
+ *     will actually only allow v4 clients
+ * [2] QEMU should fail to startup in this case
+ * [3] If hostname is "" or "::", then we get a single listener
+ *     on IPv6 and thus can also accept v4 clients. For all other
+ *     hostnames, we have the same problem as [1].
+ *
+ * Axis 2: Listening address:
+ *
+ *  ""        - resolves to 0.0.0.0 and ::, in that order
+ *  "0.0.0.0" - v4 clients only
+ *  "::"      - Mostly v6 clients only. Some scenarios should
+ *              permit v4 clients too.
+ *
+ * Axis 3: Backend type:
+ *
+ *  Migration - restricted to a single listener. Also relies
+ *              on buggy inet_parse() which can't accept
+ *              =off/=on parameters to ipv4/ipv6 flags
+ *  Chardevs  - restricted to a single listener.
+ *  VNC       - supports multiple listeners. Also supports
+ *              socket ranges, so has extra set of tests
+ *              in the matrix
+ *
+ */
+static QSocketsData test_data[] = {
+    /* Migrate with "" address */
+    /* XXX all settings with =off are disabled due to inet_parse() bug */
+    /* XXX multilistener bug - should be .ipv6 = 1 */
+    { .ipv4 = 1, .ipv6 = 0, .error = 0,
+      .name = "/sockets/migrate/wildcard/all",
+      .args = "-incoming tcp::9000" },
+    { .ipv4 = 1, .ipv6 = 0, .error = 0,
+      .name = "/sockets/migrate/wildcard/ipv4",
+      .args = "-incoming tcp::9000,ipv4" },
+    { .ipv4 = 0, .ipv6 = 1, .error = 0,
+      .name = "/sockets/migrate/wildcard/ipv6",
+      .args = "-incoming tcp::9000,ipv6" },
+    { .ipv4 = 1, .ipv6 = 0, .error = 0,
+      .name = "/sockets/migrate/wildcard/ipv4on",
+      .args = "-incoming tcp::9000,ipv4=on" },
+    { .ipv4 = 0, .ipv6 = 1, .error = 0,
+      .name = "/sockets/migrate/wildcard/ipv6on",
+      .args = "-incoming tcp::9000,ipv6=on" },
+    /*
+    { .ipv4 = 0, .ipv6 = 1, .error = 0,
+      .name = "/sockets/migrate/wildcard/ipv4off",
+      .args = "-incoming tcp::9000,ipv4=off" },
+    { .ipv4 = 1, .ipv6 = 0, .error = 0,
+      .name = "/sockets/migrate/wildcard/ipv6off",
+      .args = "-incoming tcp::9000,ipv6=off" },
+    { .ipv4 = 1, .ipv6 = 0, .error = 0,
+      .name = "/sockets/migrate/wildcard/ipv4onipv6off",
+      .args = "-incoming tcp::9000,ipv4=on,ipv6=off" },
+    { .ipv4 = 0, .ipv6 = 1, .error = 0,
+      .name = "/sockets/migrate/wildcard/ipv4offipv6on",
+      .args = "-incoming tcp::9000,ipv4=off,ipv6=on" },
+    */
+    { .ipv4 = 1, .ipv6 = 1, .error = 0,
+      .name = "/sockets/migrate/wildcard/ipv4onipv6on",
+      .args = "-incoming tcp::9000,ipv4=on,ipv6=on" },
+    /*
+    { .ipv4 = 0, .ipv6 = 0, .error = 1,
+      .name = "/sockets/migrate/wildcard/ipv4offipv6off",
+      .args = "-incoming tcp::9000,ipv4=off,ipv6=off" },
+    */
+
+    /* Migrate with 0.0.0.0 address */
+    { .ipv4 = 1, .ipv6 = 0, .error = 0,
+      .name = "/sockets/migrate/0.0.0.0/all",
+      .args = "-incoming tcp:0.0.0.0:9000" },
+    { .ipv4 = 1, .ipv6 = 0, .error = 0,
+      .name = "/sockets/migrate/0.0.0.0/ipv4",
+      .args = "-incoming tcp:0.0.0.0:9000,ipv4" },
+    { .ipv4 = 0, .ipv6 = 0, .error = 1,
+      .name = "/sockets/migrate/0.0.0.0/ipv6",
+      .args = "-incoming tcp:0.0.0.0:9000,ipv6" },
+    { .ipv4 = 1, .ipv6 = 0, .error = 0,
+      .name = "/sockets/migrate/0.0.0.0/ipv4on",
+      .args = "-incoming tcp:0.0.0.0:9000,ipv4=on" },
+    { .ipv4 = 0, .ipv6 = 0, .error = 1,
+      .name = "/sockets/migrate/0.0.0.0/ipv6on",
+      .args = "-incoming tcp:0.0.0.0:9000,ipv6=on" },
+    /*
+    { .ipv4 = 0, .ipv6 = 0, .error = 1,
+      .name = "/sockets/migrate/0.0.0.0/ipv4off",
+      .args = "-incoming tcp:0.0.0.0:9000,ipv4=off" },
+    { .ipv4 = 1, .ipv6 = 0, .error = 0,
+      .name = "/sockets/migrate/0.0.0.0/ipv6off",
+      .args = "-incoming tcp:0.0.0.0:9000,ipv6=off" },
+    { .ipv4 = 1, .ipv6 = 0, .error = 0,
+      .name = "/sockets/migrate/0.0.0.0/ipv4onipv6off",
+      .args = "-incoming tcp:0.0.0.0:9000,ipv4=on,ipv6=off" },
+    { .ipv4 = 0, .ipv6 = 0, .error = 1,
+      .name = "/sockets/migrate/0.0.0.0/ipv4offipv6on",
+      .args = "-incoming tcp:0.0.0.0:9000,ipv4=off,ipv6=on" },
+    */
+    { .ipv4 = 1, .ipv6 = 0, .error = 0,
+      .name = "/sockets/migrate/0.0.0.0/ipv4onipv6on",
+      .args = "-incoming tcp:0.0.0.0:9000,ipv4=on,ipv6=on" },
+    /*
+    { .ipv4 = 0, .ipv6 = 0, .error = 1,
+      .name = "/sockets/migrate/0.0.0.0/ipv4offipv6off",
+      .args = "-incoming tcp:0.0.0.0:9000,ipv4=off,ipv6=off" },
+    */
+
+    /* Migrate with :: address */
+    { .ipv4 = 1, .ipv6 = 1, .error = 0,
+      .name = "/sockets/migrate/::/all",
+      .args = "-incoming tcp:[::]:9000" },
+    { .ipv4 = 0, .ipv6 = 0, .error = 1,
+      .name = "/sockets/migrate/::/ipv4",
+      .args = "-incoming tcp:[::]:9000,ipv4" },
+    { .ipv4 = 0, .ipv6 = 1, .error = 0,
+      .name = "/sockets/migrate/::/ipv6",
+      .args = "-incoming tcp:[::]:9000,ipv6" },
+    { .ipv4 = 0, .ipv6 = 0, .error = 1,
+      .name = "/sockets/migrate/::/ipv4on",
+      .args = "-incoming tcp:[::]:9000,ipv4=on" },
+    { .ipv4 = 0, .ipv6 = 1, .error = 0,
+      .name = "/sockets/migrate/::/ipv6on",
+      .args = "-incoming tcp:[::]:9000,ipv6=on" },
+    /*
+    { .ipv4 = 0, .ipv6 = 1, .error = 0,
+      .name = "/sockets/migrate/::/ipv4off",
+      .args = "-incoming tcp:[::]:9000,ipv4=off" },
+    { .ipv4 = 0, .ipv6 = 0, .error = 1,
+      .name = "/sockets/migrate/::/ipv6off",
+      .args = "-incoming tcp:[::]:9000,ipv6=off" },
+    { .ipv4 = 0, .ipv6 = 0, .error = 1,
+      .name = "/sockets/migrate/::/ipv4onipv6off",
+      .args = "-incoming tcp:[::]:9000,ipv4=on,ipv6=off" },
+    { .ipv4 = 0, .ipv6 = 1, .error = 0,
+      .name = "/sockets/migrate/::/ipv4offipv6on",
+      .args = "-incoming tcp:[::]:9000,ipv4=off,ipv6=on" },
+    */
+    { .ipv4 = 1, .ipv6 = 1, .error = 0,
+      .name = "/sockets/migrate/::/ipv4onipv6on",
+      .args = "-incoming tcp:[::]:9000,ipv4=on,ipv6=on" },
+    /*
+    { .ipv4 = 0, .ipv6 = 0, .error = 1,
+      .name = "/sockets/migrate/::/ipv4offipv6off",
+      .args = "-incoming tcp:[::]:9000,ipv4=off,ipv6=off" },
+    */
+
+
+
+    /* Chardev with "" address */
+    /* XXX multilistener bug - should be .ipv6 = 1 */
+    { .ipv4 = 1, .ipv6 = 0, .error = 0,
+      .name = "/sockets/chardev/wildcard/all",
+      .args = "-chardev socket,id=cdev0,host=,port=9000,server,nowait" },
+    { .ipv4 = 1, .ipv6 = 0, .error = 0,
+      .name = "/sockets/chardev/wildcard/ipv4",
+      .args = "-chardev socket,id=cdev0,host=,port=9000,server,nowait,"
+              "ipv4" },
+    { .ipv4 = 0, .ipv6 = 1, .error = 0,
+      .name = "/sockets/chardev/wildcard/ipv6",
+      .args = "-chardev socket,id=cdev0,host=,port=9000,server,nowait,"
+              "ipv6" },
+    { .ipv4 = 1, .ipv6 = 0, .error = 0,
+      .name = "/sockets/chardev/wildcard/ipv4on",
+      .args = "-chardev socket,id=cdev0,host=,port=9000,server,nowait,"
+              "ipv4=on" },
+    { .ipv4 = 0, .ipv6 = 1, .error = 0,
+      .name = "/sockets/chardev/wildcard/ipv6on",
+      .args = "-chardev socket,id=cdev0,host=,port=9000,server,nowait,"
+              "ipv6=on" },
+    { .ipv4 = 0, .ipv6 = 1, .error = 0,
+      .name = "/sockets/chardev/wildcard/ipv4off",
+      .args = "-chardev socket,id=cdev0,host=,port=9000,server,nowait,"
+              "ipv4=off" },
+    { .ipv4 = 1, .ipv6 = 0, .error = 0,
+      .name = "/sockets/chardev/wildcard/ipv6off",
+      .args = "-chardev socket,id=cdev0,host=,port=9000,server,nowait,"
+              "ipv6=off" },
+    { .ipv4 = 1, .ipv6 = 0, .error = 0,
+      .name = "/sockets/chardev/wildcard/ipv4onipv6off",
+      .args = "-chardev socket,id=cdev0,host=,port=9000,server,nowait,"
+              "ipv4=on,ipv6=off" },
+    { .ipv4 = 0, .ipv6 = 1, .error = 0,
+      .name = "/sockets/chardev/wildcard/ipv4offipv6on",
+      .args = "-chardev socket,id=cdev0,host=,port=9000,server,nowait,"
+              "ipv4=off,ipv6=on" },
+    { .ipv4 = 1, .ipv6 = 1, .error = 0,
+      .name = "/sockets/chardev/wildcard/ipv4onipv6on",
+      .args = "-chardev socket,id=cdev0,host=,port=9000,server,nowait,"
+              "ipv4=on,ipv6=on" },
+    { .ipv4 = 0, .ipv6 = 0, .error = 1,
+      .name = "/sockets/chardev/wildcard/ipv4offipv6off",
+      .args = "-chardev socket,id=cdev0,host=,port=9000,server,nowait,"
+              "ipv4=off,ipv6=off" },
+
+    /* Chardev with 0.0.0.0 address */
+    { .ipv4 = 1, .ipv6 = 0, .error = 0,
+      .name = "/sockets/chardev/0.0.0.0/all",
+      .args = "-chardev socket,id=cdev0,host=0.0.0.0,port=9000,server,nowait" },
+    { .ipv4 = 1, .ipv6 = 0, .error = 0,
+      .name = "/sockets/chardev/0.0.0.0/ipv4",
+      .args = "-chardev socket,id=cdev0,host=0.0.0.0,port=9000,server,nowait,"
+              "ipv4" },
+    { .ipv4 = 0, .ipv6 = 0, .error = 1,
+      .name = "/sockets/chardev/0.0.0.0/ipv6",
+      .args = "-chardev socket,id=cdev0,host=0.0.0.0,port=9000,server,nowait,"
+              "ipv6" },
+    { .ipv4 = 1, .ipv6 = 0, .error = 0,
+      .name = "/sockets/chardev/0.0.0.0/ipv4on",
+      .args = "-chardev socket,id=cdev0,host=0.0.0.0,port=9000,server,nowait,"
+              "ipv4=on" },
+    { .ipv4 = 0, .ipv6 = 0, .error = 1,
+      .name = "/sockets/chardev/0.0.0.0/ipv6on",
+      .args = "-chardev socket,id=cdev0,host=0.0.0.0,port=9000,server,nowait,"
+              "ipv6=on" },
+    { .ipv4 = 0, .ipv6 = 0, .error = 1,
+      .name = "/sockets/chardev/0.0.0.0/ipv4off",
+      .args = "-chardev socket,id=cdev0,host=0.0.0.0,port=9000,server,nowait,"
+              "ipv4=off" },
+    { .ipv4 = 1, .ipv6 = 0, .error = 0,
+      .name = "/sockets/chardev/0.0.0.0/ipv6off",
+      .args = "-chardev socket,id=cdev0,host=0.0.0.0,port=9000,server,nowait,"
+              "ipv6=off" },
+    { .ipv4 = 1, .ipv6 = 0, .error = 0,
+      .name = "/sockets/chardev/0.0.0.0/ipv4onipv6off",
+      .args = "-chardev socket,id=cdev0,host=0.0.0.0,port=9000,server,nowait,"
+              "ipv4=on,ipv6=off" },
+    { .ipv4 = 0, .ipv6 = 0, .error = 1,
+      .name = "/sockets/chardev/0.0.0.0/ipv4offipv6on",
+      .args = "-chardev socket,id=cdev0,host=0.0.0.0,port=9000,server,nowait,"
+              "ipv4=off,ipv6=on" },
+    { .ipv4 = 1, .ipv6 = 0, .error = 0,
+      .name = "/sockets/chardev/0.0.0.0/ipv4onipv6on",
+      .args = "-chardev socket,id=cdev0,host=0.0.0.0,port=9000,server,nowait,"
+              "ipv4=on,ipv6=on" },
+    { .ipv4 = 0, .ipv6 = 0, .error = 1,
+      .name = "/sockets/chardev/0.0.0.0/ipv4offipv6off",
+      .args = "-chardev socket,id=cdev0,host=0.0.0.0,port=9000,server,nowait,"
+              "ipv4=off,ipv6=off" },
+
+    /* Chardev with :: address */
+    { .ipv4 = 1, .ipv6 = 1, .error = 0,
+      .name = "/sockets/chardev/::/all",
+      .args = "-chardev socket,id=cdev0,host=::,port=9000,server,nowait" },
+    { .ipv4 = 0, .ipv6 = 0, .error = 1,
+      .name = "/sockets/chardev/::/ipv4",
+      .args = "-chardev socket,id=cdev0,host=::,port=9000,server,nowait,"
+              "ipv4" },
+    { .ipv4 = 0, .ipv6 = 1, .error = 0,
+      .name = "/sockets/chardev/::/ipv6",
+      .args = "-chardev socket,id=cdev0,host=::,port=9000,server,nowait,"
+              "ipv6" },
+    { .ipv4 = 0, .ipv6 = 0, .error = 1,
+      .name = "/sockets/chardev/::/ipv4on",
+      .args = "-chardev socket,id=cdev0,host=::,port=9000,server,nowait,"
+              "ipv4=on" },
+    { .ipv4 = 0, .ipv6 = 1, .error = 0,
+      .name = "/sockets/chardev/::/ipv6on",
+      .args = "-chardev socket,id=cdev0,host=::,port=9000,server,nowait,"
+              "ipv6=on" },
+    { .ipv4 = 0, .ipv6 = 1, .error = 0,
+      .name = "/sockets/chardev/::/ipv4off",
+      .args = "-chardev socket,id=cdev0,host=::,port=9000,server,nowait,"
+              "ipv4=off" },
+    { .ipv4 = 0, .ipv6 = 0, .error = 1,
+      .name = "/sockets/chardev/::/ipv6off",
+      .args = "-chardev socket,id=cdev0,host=::,port=9000,server,nowait,"
+              "ipv6=off" },
+    { .ipv4 = 0, .ipv6 = 0, .error = 1,
+      .name = "/sockets/chardev/::/ipv4onipv6off",
+      .args = "-chardev socket,id=cdev0,host=::,port=9000,server,nowait,"
+              "ipv4=on,ipv6=off" },
+    { .ipv4 = 0, .ipv6 = 1, .error = 0,
+      .name = "/sockets/chardev/::/ipv4offipv6on",
+      .args = "-chardev socket,id=cdev0,host=::,port=9000,server,nowait,"
+              "ipv4=off,ipv6=on" },
+    { .ipv4 = 1, .ipv6 = 1, .error = 0,
+      .name = "/sockets/chardev/::/ipv4onipv6on",
+      .args = "-chardev socket,id=cdev0,host=::,port=9000,server,nowait,"
+              "ipv4=on,ipv6=on" },
+    { .ipv4 = 0, .ipv6 = 0, .error = 1,
+      .name = "/sockets/chardev/::/ipv4offipv6off",
+      .args = "-chardev socket,id=cdev0,host=::,port=9000,server,nowait,"
+              "ipv4=off,ipv6=off" },
+
+
+
+    /* Net with "" address */
+    /* XXX does not yet support ipv4/ipv6 flags at all */
+    /* XXX multilistener bug - should be .ipv6 = 1 */
+    { .ipv4 = 1, .ipv6 = 0, .error = 0,
+      .name = "/sockets/net/wildcard/all",
+      .args = "-netdev socket,id=net0,listen=:9000" },
+    /*
+    { .ipv4 = 1, .ipv6 = 0, .error = 0,
+      .name = "/sockets/net/wildcard/ipv4",
+      .args = "-netdev socket,id=net0,listen=:9000,ipv4" },
+    { .ipv4 = 0, .ipv6 = 1, .error = 0,
+      .name = "/sockets/net/wildcard/ipv6",
+      .args = "-netdev socket,id=net0,listen=:9000,ipv6" },
+    { .ipv4 = 1, .ipv6 = 0, .error = 0,
+      .name = "/sockets/net/wildcard/ipv4on",
+      .args = "-netdev socket,id=net0,listen=:9000,ipv4=on" },
+    { .ipv4 = 0, .ipv6 = 1, .error = 0,
+      .name = "/sockets/net/wildcard/ipv6on",
+      .args = "-netdev socket,id=net0,listen=:9000,ipv6=on" },
+    { .ipv4 = 0, .ipv6 = 1, .error = 0,
+      .name = "/sockets/net/wildcard/ipv4off",
+      .args = "-netdev socket,id=net0,listen=:9000,ipv4=off" },
+    { .ipv4 = 1, .ipv6 = 0, .error = 0,
+      .name = "/sockets/net/wildcard/ipv6off",
+      .args = "-netdev socket,id=net0,listen=:9000,ipv6=off" },
+    { .ipv4 = 1, .ipv6 = 0, .error = 0,
+      .name = "/sockets/net/wildcard/ipv4onipv6off",
+      .args = "-netdev socket,id=net0,listen=:9000,ipv4=on,ipv6=off" },
+    { .ipv4 = 0, .ipv6 = 1, .error = 0,
+      .name = "/sockets/net/wildcard/ipv4offipv6on",
+      .args = "-netdev socket,id=net0,listen=:9000,ipv4=off,ipv6=on" },
+    { .ipv4 = 1, .ipv6 = 1, .error = 0,
+      .name = "/sockets/net/wildcard/ipv4onipv6on",
+      .args = "-netdev socket,id=net0,listen=:9000,ipv4=on,ipv6=on" },
+    { .ipv4 = 0, .ipv6 = 0, .error = 1,
+      .name = "/sockets/net/wildcard/ipv4offipv6off",
+      .args = "-netdev socket,id=net0,listen=:9000,ipv4=off,ipv6=off" },
+    */
+
+    /* Net with 0.0.0.0 address */
+    { .ipv4 = 1, .ipv6 = 0, .error = 0,
+      .name = "/sockets/net/0.0.0.0/all",
+      .args = "-netdev socket,id=net0,listen=0.0.0.0:9000" },
+    /*
+    { .ipv4 = 1, .ipv6 = 0, .error = 0,
+      .name = "/sockets/net/0.0.0.0/ipv4",
+      .args = "-netdev socket,id=net0,listen=0.0.0.0:9000,ipv4" },
+    { .ipv4 = 0, .ipv6 = 0, .error = 1,
+      .name = "/sockets/net/0.0.0.0/ipv6",
+      .args = "-netdev socket,id=net0,listen=0.0.0.0:9000,ipv6" },
+    { .ipv4 = 1, .ipv6 = 0, .error = 0,
+      .name = "/sockets/net/0.0.0.0/ipv4on",
+      .args = "-netdev socket,id=net0,listen=0.0.0.0:9000,ipv4=on" },
+    { .ipv4 = 0, .ipv6 = 0, .error = 1,
+      .name = "/sockets/net/0.0.0.0/ipv6on",
+      .args = "-netdev socket,id=net0,listen=0.0.0.0:9000,ipv6=on" },
+    { .ipv4 = 0, .ipv6 = 0, .error = 1,
+      .name = "/sockets/net/0.0.0.0/ipv4off",
+      .args = "-netdev socket,id=net0,listen=0.0.0.0:9000,ipv4=off" },
+    { .ipv4 = 1, .ipv6 = 0, .error = 0,
+      .name = "/sockets/net/0.0.0.0/ipv6off",
+      .args = "-netdev socket,id=net0,listen=0.0.0.0:9000,ipv6=off" },
+    { .ipv4 = 1, .ipv6 = 0, .error = 0,
+      .name = "/sockets/net/0.0.0.0/ipv4onipv6off",
+      .args = "-netdev socket,id=net0,listen=0.0.0.0:9000,ipv4=on,ipv6=off" },
+    { .ipv4 = 0, .ipv6 = 0, .error = 1,
+      .name = "/sockets/net/0.0.0.0/ipv4offipv6on",
+      .args = "-netdev socket,id=net0,listen=0.0.0.0:9000,ipv4=off,ipv6=on" },
+    { .ipv4 = 1, .ipv6 = 0, .error = 0,
+      .name = "/sockets/net/0.0.0.0/ipv4onipv6on",
+      .args = "-netdev socket,id=net0,listen=0.0.0.0:9000,ipv4=on,ipv6=on" },
+    { .ipv4 = 0, .ipv6 = 0, .error = 1,
+      .name = "/sockets/net/0.0.0.0/ipv4offipv6off",
+      .args = "-netdev socket,id=net0,listen=0.0.0.0:9000,ipv4=off,ipv6=off" },
+    */
+
+    /* Net with :: address */
+    { .ipv4 = 1, .ipv6 = 1, .error = 0,
+      .name = "/sockets/net/::/all",
+      .args = "-netdev socket,id=net0,listen=[::]:9000" },
+    /*
+    { .ipv4 = 0, .ipv6 = 0, .error = 1,
+      .name = "/sockets/net/::/ipv4",
+      .args = "-netdev socket,id=net0,listen=[::]:9000,ipv4" },
+    { .ipv4 = 0, .ipv6 = 1, .error = 0,
+      .name = "/sockets/net/::/ipv6",
+      .args = "-netdev socket,id=net0,listen=[::]:9000,ipv6" },
+    { .ipv4 = 0, .ipv6 = 0, .error = 1,
+      .name = "/sockets/net/::/ipv4on",
+      .args = "-netdev socket,id=net0,listen=[::]:9000,ipv4=on" },
+    { .ipv4 = 0, .ipv6 = 1, .error = 0,
+      .name = "/sockets/net/::/ipv6on",
+      .args = "-netdev socket,id=net0,listen=[::]:9000,ipv6=on" },
+    { .ipv4 = 0, .ipv6 = 1, .error = 0,
+      .name = "/sockets/net/::/ipv4off",
+      .args = "-netdev socket,id=net0,listen=[::]:9000,ipv4=off" },
+    { .ipv4 = 0, .ipv6 = 0, .error = 1,
+      .name = "/sockets/net/::/ipv6off",
+      .args = "-netdev socket,id=net0,listen=[::]:9000,ipv6=off" },
+    { .ipv4 = 0, .ipv6 = 0, .error = 1,
+      .name = "/sockets/net/::/ipv4onipv6off",
+      .args = "-netdev socket,id=net0,listen=[::]:9000,ipv4=on,ipv6=off" },
+    { .ipv4 = 0, .ipv6 = 1, .error = 0,
+      .name = "/sockets/net/::/ipv4offipv6on",
+      .args = "-netdev socket,id=net0,listen=[::]:9000,ipv4=off,ipv6=on" },
+    { .ipv4 = 1, .ipv6 = 1, .error = 0,
+      .name = "/sockets/net/::/ipv4onipv6on",
+      .args = "-netdev socket,id=net0,listen=[::]:9000,ipv4=on,ipv6=on" },
+    { .ipv4 = 0, .ipv6 = 0, .error = 1,
+      .name = "/sockets/net/::/ipv4offipv6off",
+      .args = "-netdev socket,id=net0,listen=[::]:9000,ipv4=off,ipv6=off" },
+    */
+
+
+    /* VNC with "" address */
+    { .ipv4 = 1, .ipv6 = 1, .error = 0,
+      .name = "/sockets/vnc/wildcard/all",
+      .args = "-vnc :3100" },
+    { .ipv4 = 1, .ipv6 = 0, .error = 0,
+      .name = "/sockets/vnc/wildcard/ipv4",
+      .args = "-vnc :3100,ipv4" },
+    { .ipv4 = 0, .ipv6 = 1, .error = 0,
+      .name = "/sockets/vnc/wildcard/ipv6",
+      .args = "-vnc :3100,ipv6" },
+    { .ipv4 = 1, .ipv6 = 0, .error = 0,
+      .name = "/sockets/vnc/wildcard/ipv4on",
+      .args = "-vnc :3100,ipv4=on" },
+    { .ipv4 = 0, .ipv6 = 1, .error = 0,
+      .name = "/sockets/vnc/wildcard/ipv6on",
+      .args = "-vnc :3100,ipv6=on" },
+    { .ipv4 = 0, .ipv6 = 1, .error = 0,
+      .name = "/sockets/vnc/wildcard/ipv4off",
+      .args = "-vnc :3100,ipv4=off" },
+    { .ipv4 = 1, .ipv6 = 0, .error = 0,
+      .name = "/sockets/vnc/wildcard/ipv6off",
+      .args = "-vnc :3100,ipv6=off" },
+    { .ipv4 = 1, .ipv6 = 0, .error = 0,
+      .name = "/sockets/vnc/wildcard/ipv4onipv6off",
+      .args = "-vnc :3100,ipv4=on,ipv6=off" },
+    { .ipv4 = 0, .ipv6 = 1, .error = 0,
+      .name = "/sockets/vnc/wildcard/ipv4offipv6on",
+      .args = "-vnc :3100,ipv4=off,ipv6=on" },
+    { .ipv4 = 1, .ipv6 = 1, .error = 0,
+      .name = "/sockets/vnc/wildcard/ipv4onipv6on",
+      .args = "-vnc :3100,ipv4=on,ipv6=on" },
+    { .ipv4 = 0, .ipv6 = 0, .error = 1,
+      .name = "/sockets/vnc/wildcard/ipv4offipv6off",
+      .args = "-vnc :3100,ipv4=off,ipv6=off" },
+
+    /* VNC with 0.0.0.0 address */
+    { .ipv4 = 1, .ipv6 = 0, .error = 0,
+      .name = "/sockets/vnc/0.0.0.0/all",
+      .args = "-vnc 0.0.0.0:3100" },
+    { .ipv4 = 1, .ipv6 = 0, .error = 0,
+      .name = "/sockets/vnc/0.0.0.0/ipv4",
+      .args = "-vnc 0.0.0.0:3100,ipv4" },
+    { .ipv4 = 0, .ipv6 = 0, .error = 1,
+      .name = "/sockets/vnc/0.0.0.0/ipv6",
+      .args = "-vnc 0.0.0.0:3100,ipv6" },
+    { .ipv4 = 1, .ipv6 = 0, .error = 0,
+      .name = "/sockets/vnc/0.0.0.0/ipv4on",
+      .args = "-vnc 0.0.0.0:3100,ipv4=on" },
+    { .ipv4 = 0, .ipv6 = 0, .error = 1,
+      .name = "/sockets/vnc/0.0.0.0/ipv6on",
+      .args = "-vnc 0.0.0.0:3100,ipv6=on" },
+    { .ipv4 = 0, .ipv6 = 0, .error = 1,
+      .name = "/sockets/vnc/0.0.0.0/ipv4off",
+      .args = "-vnc 0.0.0.0:3100,ipv4=off" },
+    { .ipv4 = 1, .ipv6 = 0, .error = 0,
+      .name = "/sockets/vnc/0.0.0.0/ipv6off",
+      .args = "-vnc 0.0.0.0:3100,ipv6=off" },
+    { .ipv4 = 1, .ipv6 = 0, .error = 0,
+      .name = "/sockets/vnc/0.0.0.0/ipv4onipv6off",
+      .args = "-vnc 0.0.0.0:3100,ipv4=on,ipv6=off" },
+    { .ipv4 = 0, .ipv6 = 0, .error = 1,
+      .name = "/sockets/vnc/0.0.0.0/ipv4offipv6on",
+      .args = "-vnc 0.0.0.0:3100,ipv4=off,ipv6=on" },
+    { .ipv4 = 1, .ipv6 = 0, .error = 0,
+      .name = "/sockets/vnc/0.0.0.0/ipv4onipv6on",
+      .args = "-vnc 0.0.0.0:3100,ipv4=on,ipv6=on" },
+    { .ipv4 = 0, .ipv6 = 0, .error = 1,
+      .name = "/sockets/vnc/0.0.0.0/ipv4offipv6off",
+      .args = "-vnc 0.0.0.0:3100,ipv4=off,ipv6=off" },
+
+    /* VNC with :: address */
+    { .ipv4 = 1, .ipv6 = 1, .error = 0,
+      .name = "/sockets/vnc/::/all",
+      .args = "-vnc :::3100" },
+    { .ipv4 = 0, .ipv6 = 0, .error = 1,
+      .name = "/sockets/vnc/::/ipv4",
+      .args = "-vnc :::3100,ipv4" },
+    { .ipv4 = 0, .ipv6 = 1, .error = 0,
+      .name = "/sockets/vnc/::/ipv6",
+      .args = "-vnc :::3100,ipv6" },
+    { .ipv4 = 0, .ipv6 = 0, .error = 1,
+      .name = "/sockets/vnc/::/ipv4on",
+      .args = "-vnc :::3100,ipv4=on" },
+    { .ipv4 = 0, .ipv6 = 1, .error = 0,
+      .name = "/sockets/vnc/::/ipv6on",
+      .args = "-vnc :::3100,ipv6=on" },
+    { .ipv4 = 0, .ipv6 = 1, .error = 0,
+      .name = "/sockets/vnc/::/ipv4off",
+      .args = "-vnc :::3100,ipv4=off" },
+    { .ipv4 = 0, .ipv6 = 0, .error = 1,
+      .name = "/sockets/vnc/::/ipv6off",
+      .args = "-vnc :::3100,ipv6=off" },
+    { .ipv4 = 0, .ipv6 = 0, .error = 1,
+      .name = "/sockets/vnc/::/ipv4onipv6off",
+      .args = "-vnc :::3100,ipv4=on,ipv6=off" },
+    { .ipv4 = 0, .ipv6 = 1, .error = 0,
+      .name = "/sockets/vnc/::/ipv4offipv6on",
+      .args = "-vnc :::3100,ipv4=off,ipv6=on" },
+    { .ipv4 = 1, .ipv6 = 1, .error = 0,
+      .name = "/sockets/vnc/::/ipv4onipv6on",
+      .args = "-vnc :::3100,ipv4=on,ipv6=on" },
+    { .ipv4 = 0, .ipv6 = 0, .error = 1,
+      .name = "/sockets/vnc/::/ipv4offipv6off",
+      .args = "-vnc :::3100,ipv4=off,ipv6=off" },
+
+
+
+    /* VNC with "" address and range */
+    { .ipv4 = 1, .ipv6 = 1, .error = 0,
+      .name = "/sockets/vnc-to/wildcard/all",
+      .args = "-vnc :3100,to=9005" },
+    { .ipv4 = 1, .ipv6 = 0, .error = 0,
+      .name = "/sockets/vnc-to/wildcard/ipv4",
+      .args = "-vnc :3100,to=9005,ipv4" },
+    { .ipv4 = 0, .ipv6 = 1, .error = 0,
+      .name = "/sockets/vnc-to/wildcard/ipv6",
+      .args = "-vnc :3100,to=9005,ipv6" },
+    { .ipv4 = 1, .ipv6 = 0, .error = 0,
+      .name = "/sockets/vnc-to/wildcard/ipv4on",
+      .args = "-vnc :3100,to=9005,ipv4=on" },
+    { .ipv4 = 0, .ipv6 = 1, .error = 0,
+      .name = "/sockets/vnc-to/wildcard/ipv6on",
+      .args = "-vnc :3100,to=9005,ipv6=on" },
+    { .ipv4 = 0, .ipv6 = 1, .error = 0,
+      .name = "/sockets/vnc-to/wildcard/ipv4off",
+      .args = "-vnc :3100,to=9005,ipv4=off" },
+    { .ipv4 = 1, .ipv6 = 0, .error = 0,
+      .name = "/sockets/vnc-to/wildcard/ipv6off",
+      .args = "-vnc :3100,to=9005,ipv6=off" },
+    { .ipv4 = 1, .ipv6 = 0, .error = 0,
+      .name = "/sockets/vnc-to/wildcard/ipv4onipv6off",
+      .args = "-vnc :3100,to=9005,ipv4=on,ipv6=off" },
+    { .ipv4 = 0, .ipv6 = 1, .error = 0,
+      .name = "/sockets/vnc-to/wildcard/ipv4offipv6on",
+      .args = "-vnc :3100,to=9005,ipv4=off,ipv6=on" },
+    { .ipv4 = 1, .ipv6 = 1, .error = 0,
+      .name = "/sockets/vnc-to/wildcard/ipv4onipv6on",
+      .args = "-vnc :3100,to=9005,ipv4=on,ipv6=on" },
+    { .ipv4 = 0, .ipv6 = 0, .error = 1,
+      .name = "/sockets/vnc-to/wildcard/ipv4offipv6off",
+      .args = "-vnc :3100,to=9005,ipv4=off,ipv6=off" },
+
+    /* VNC with 0.0.0.0 address and range */
+    { .ipv4 = 1, .ipv6 = 0, .error = 0,
+      .name = "/sockets/vnc-to/0.0.0.0/all",
+      .args = "-vnc 0.0.0.0:3100,to=9005" },
+    { .ipv4 = 1, .ipv6 = 0, .error = 0,
+      .name = "/sockets/vnc-to/0.0.0.0/ipv4",
+      .args = "-vnc 0.0.0.0:3100,to=9005,ipv4" },
+    { .ipv4 = 0, .ipv6 = 0, .error = 1,
+      .name = "/sockets/vnc-to/0.0.0.0/ipv6",
+      .args = "-vnc 0.0.0.0:3100,to=9005,ipv6" },
+    { .ipv4 = 1, .ipv6 = 0, .error = 0,
+      .name = "/sockets/vnc-to/0.0.0.0/ipv4on",
+      .args = "-vnc 0.0.0.0:3100,to=9005,ipv4=on" },
+    { .ipv4 = 0, .ipv6 = 0, .error = 1,
+      .name = "/sockets/vnc-to/0.0.0.0/ipv6on",
+      .args = "-vnc 0.0.0.0:3100,to=9005,ipv6=on" },
+    { .ipv4 = 0, .ipv6 = 0, .error = 1,
+      .name = "/sockets/vnc-to/0.0.0.0/ipv4off",
+      .args = "-vnc 0.0.0.0:3100,to=9005,ipv4=off" },
+    { .ipv4 = 1, .ipv6 = 0, .error = 0,
+      .name = "/sockets/vnc-to/0.0.0.0/ipv6off",
+      .args = "-vnc 0.0.0.0:3100,to=9005,ipv6=off" },
+    { .ipv4 = 1, .ipv6 = 0, .error = 0,
+      .name = "/sockets/vnc-to/0.0.0.0/ipv4onipv6off",
+      .args = "-vnc 0.0.0.0:3100,to=9005,ipv4=on,ipv6=off" },
+    { .ipv4 = 0, .ipv6 = 0, .error = 1,
+      .name = "/sockets/vnc-to/0.0.0.0/ipv4offipv6on",
+      .args = "-vnc 0.0.0.0:3100,to=9005,ipv4=off,ipv6=on" },
+    { .ipv4 = 1, .ipv6 = 0, .error = 0,
+      .name = "/sockets/vnc-to/0.0.0.0/ipv4onipv6on",
+      .args = "-vnc 0.0.0.0:3100,to=9005,ipv4=on,ipv6=on" },
+    { .ipv4 = 0, .ipv6 = 0, .error = 1,
+      .name = "/sockets/vnc-to/0.0.0.0/ipv4offipv6off",
+      .args = "-vnc 0.0.0.0:3100,to=9005,ipv4=off,ipv6=off" },
+
+    /* VNC with :: address and range */
+    { .ipv4 = 1, .ipv6 = 1, .error = 0,
+      .name = "/sockets/vnc-to/::/all",
+      .args = "-vnc :::3100,to=9005" },
+    { .ipv4 = 0, .ipv6 = 0, .error = 1,
+      .name = "/sockets/vnc-to/::/ipv4",
+      .args = "-vnc :::3100,to=9005,ipv4" },
+    { .ipv4 = 0, .ipv6 = 1, .error = 0,
+      .name = "/sockets/vnc-to/::/ipv6",
+      .args = "-vnc :::3100,to=9005,ipv6" },
+    { .ipv4 = 0, .ipv6 = 0, .error = 1,
+      .name = "/sockets/vnc-to/::/ipv4on",
+      .args = "-vnc :::3100,to=9005,ipv4=on" },
+    { .ipv4 = 0, .ipv6 = 1, .error = 0,
+      .name = "/sockets/vnc-to/::/ipv6on",
+      .args = "-vnc :::3100,to=9005,ipv6=on" },
+    { .ipv4 = 0, .ipv6 = 1, .error = 0,
+      .name = "/sockets/vnc-to/::/ipv4off",
+      .args = "-vnc :::3100,to=9005,ipv4=off" },
+    { .ipv4 = 0, .ipv6 = 0, .error = 1,
+      .name = "/sockets/vnc-to/::/ipv6off",
+      .args = "-vnc :::3100,to=9005,ipv6=off" },
+    { .ipv4 = 0, .ipv6 = 0, .error = 1,
+      .name = "/sockets/vnc-to/::/ipv4onipv6off",
+      .args = "-vnc :::3100,to=9005,ipv4=on,ipv6=off" },
+    { .ipv4 = 0, .ipv6 = 1, .error = 0,
+      .name = "/sockets/vnc-to/::/ipv4offipv6on",
+      .args = "-vnc :::3100,to=9005,ipv4=off,ipv6=on" },
+    { .ipv4 = 1, .ipv6 = 1, .error = 0,
+      .name = "/sockets/vnc-to/::/ipv4onipv6on",
+      .args = "-vnc :::3100,to=9005,ipv4=on,ipv6=on" },
+    { .ipv4 = 0, .ipv6 = 0, .error = 1,
+      .name = "/sockets/vnc-to/::/ipv4offipv6off",
+      .args = "-vnc :::3100,to=9005,ipv4=off,ipv6=off" },
+};
+
+static int check_bind(const char *hostname)
+{
+    int fd = -1;
+    struct addrinfo ai, *res = NULL;
+    int rc;
+    int ret = -1;
+
+    memset(&ai, 0, sizeof(ai));
+    ai.ai_flags = AI_CANONNAME | AI_ADDRCONFIG;
+    ai.ai_family = AF_UNSPEC;
+    ai.ai_socktype = SOCK_STREAM;
+
+    /* lookup */
+    rc = getaddrinfo(hostname, "9000", &ai, &res);
+    if (rc != 0) {
+        goto cleanup;
+    }
+
+    fd = qemu_socket(res->ai_family, res->ai_socktype, res->ai_protocol);
+    if (fd < 0) {
+        goto cleanup;
+    }
+
+    if (bind(fd, res->ai_addr, res->ai_addrlen) < 0) {
+        goto cleanup;
+    }
+
+    ret = 0;
+
+ cleanup:
+    if (fd != -1) {
+        close(fd);
+    }
+    if (res) {
+        freeaddrinfo(res);
+    }
+    return ret;
+}
+
+static int check_protocol_support(void)
+{
+    if (check_bind("0.0.0.0") < 0) {
+        return -1;
+    }
+    if (check_bind("::") < 0) {
+        return -1;
+    }
+
+    return 0;
+}
+
+static pid_t run_qemu(const char *args)
+{
+    const char *pidfile = "test-sockets-proto.pid";
+    char *pidstr;
+    pid_t child;
+    int status;
+    pid_t ret;
+    const char *binary = getenv("QTEST_QEMU_BINARY");
+    long pid;
+    if (binary == NULL) {
+        g_printerr("Missing QTEST_QEMU_BINARY env variable");
+        exit(1);
+    }
+
+    unlink(pidfile);
+    child = fork();
+    if (child == 0) {
+        setenv("QEMU_AUDIO_DRV", "none", true);
+        char *cmd = g_strdup_printf(
+            "exec %s -pidfile %s -daemonize -nodefconfig -nodefaults "
+            "-display none %s 1>/dev/null 2>&1",
+            binary, pidfile, args);
+        execlp("/bin/sh", "sh", "-c", cmd, NULL);
+        _exit(1);
+    }
+
+    do {
+        ret = waitpid(child, &status, 0);
+    } while (ret == -1 && errno == EINTR);
+
+    if (WEXITSTATUS(status) != 0) {
+        return 0;
+    }
+
+    if (!g_file_get_contents(pidfile, &pidstr, NULL, NULL)) {
+        return 0;
+    }
+
+    qemu_strtol(pidstr, NULL, 0, &pid);
+    return (pid_t)pid;
+}
+
+static void test_listen(const void *opaque)
+{
+    const QSocketsData *data = opaque;
+    QIOChannelSocket *sioc;
+    SocketAddress *saddr;
+    Error *err = NULL;
+    pid_t child;
+
+    /* First test IPv4 */
+    saddr = g_new(SocketAddress, 1);
+    saddr->type = SOCKET_ADDRESS_TYPE_INET;
+    saddr->u.inet.host = g_strdup("127.0.0.1");
+    saddr->u.inet.port = g_strdup("9000");
+    saddr->u.inet.has_ipv4 = true;
+    saddr->u.inet.ipv4 = true;
+    saddr->u.inet.has_ipv6 = true;
+    saddr->u.inet.ipv6 = false;
+
+    child = run_qemu(data->args);
+
+    if (!child) {
+        /* QEMU failed to start, so make sure we are expecting
+         * this scenario to fail
+         */
+        g_assert(data->error);
+        goto cleanup;
+    } else {
+        g_assert(!data->error);
+    }
+
+    sioc = qio_channel_socket_new();
+    qio_channel_socket_connect_sync(sioc, saddr, &err);
+
+    if (err != NULL) {
+        /* We failed to connect to IPv4, make sure that
+         * matches the scenario expectation
+         */
+        g_assert(data->ipv4 == 0);
+        error_free(err);
+        err = NULL;
+    } else {
+        g_assert(data->ipv4 != 0);
+        object_unref(OBJECT(sioc));
+    }
+
+    kill(child, SIGKILL);
+
+
+    /* Now test IPv6 */
+    child = run_qemu(data->args);
+
+    /*
+     * The child should always succeed, because it's the
+     * same config as the successful run we just did above.
+     */
+    g_assert(child != 0);
+
+    g_free(saddr->u.inet.host);
+    saddr->u.inet.host = g_strdup("::1");
+    saddr->u.inet.ipv4 = false;
+    saddr->u.inet.ipv6 = true;
+
+    sioc = qio_channel_socket_new();
+    qio_channel_socket_connect_sync(sioc, saddr, &err);
+
+    if (err != NULL) {
+        /* We failed to connect to IPv6, make sure that
+         * matches the scenario expectation
+         */
+        g_assert(data->ipv6 == 0);
+        error_free(err);
+        err = NULL;
+    } else {
+        g_assert(data->ipv6 != 0);
+        object_unref(OBJECT(sioc));
+    }
+    kill(child, SIGKILL);
+
+ cleanup:
+    qapi_free_SocketAddress(saddr);
+}
+
+
+int main(int argc, char **argv)
+{
+    int ret;
+    gsize i;
+
+    if (check_protocol_support() < 0) {
+        return 0; /* Skip test if we can't bind */
+    }
+
+    signal(SIGPIPE, SIG_IGN);
+
+    module_call_init(MODULE_INIT_QOM);
+    g_test_init(&argc, &argv, NULL);
+
+    for (i = 0; i < G_N_ELEMENTS(test_data); i++) {
+        const QSocketsData *data = &test_data[i];
+        g_test_add_data_func(data->name, data, test_listen);
+    }
+
+    ret = g_test_run();
+
+    return ret == 0 ? EXIT_SUCCESS : EXIT_FAILURE;
+}
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 9+ messages in thread
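[Editor's note: the test above skips itself when the host lacks IPv4 or IPv6
support, using check_bind() to probe each wildcard address. The same probe
pattern can be sketched outside QEMU with plain POSIX calls. This is an
illustrative stand-alone program, not part of the patch; it binds to port 0
(kernel-chosen) rather than the test's fixed port 9000, to avoid collisions.]

```c
/* probe.c - hedged sketch of the check_bind()/check_protocol_support()
 * pattern: try to resolve and bind a wildcard address per family. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netdb.h>

static int can_bind(const char *host)
{
    struct addrinfo hints, *res = NULL;
    int fd = -1, ok = 0;

    memset(&hints, 0, sizeof(hints));
    hints.ai_flags = AI_ADDRCONFIG;   /* skip families with no local address */
    hints.ai_family = AF_UNSPEC;
    hints.ai_socktype = SOCK_STREAM;

    /* port "0" lets the kernel pick a free port, unlike the test's 9000 */
    if (getaddrinfo(host, "0", &hints, &res) != 0) {
        return 0;
    }
    fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd >= 0 && bind(fd, res->ai_addr, res->ai_addrlen) == 0) {
        ok = 1;
    }
    if (fd >= 0) {
        close(fd);
    }
    freeaddrinfo(res);
    return ok;
}

int main(void)
{
    printf("ipv4=%s\n", can_bind("0.0.0.0") ? "yes" : "no");
    printf("ipv6=%s\n", can_bind("::") ? "yes" : "no");
    return 0;
}
```

A test harness would skip the whole suite if either line reports "no",
mirroring check_protocol_support() returning -1.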

* Re: [Qemu-devel] [PATCH v3 3/5] sockets: ensure we don't accept IPv4 clients when IPv4 is disabled
  2017-06-01  8:29 ` [Qemu-devel] [PATCH v3 3/5] sockets: ensure we don't accept IPv4 clients when IPv4 is disabled Daniel P. Berrange
@ 2017-06-02  4:45   ` Philippe Mathieu-Daudé
  0 siblings, 0 replies; 9+ messages in thread
From: Philippe Mathieu-Daudé @ 2017-06-02  4:45 UTC (permalink / raw)
  To: Daniel P. Berrange, qemu-devel; +Cc: Gerd Hoffmann, Paolo Bonzini, Eric Blake

On 06/01/2017 05:29 AM, Daniel P. Berrange wrote:
> Currently if you disable listening on IPv4 addresses, via the
> CLI flag ipv4=off, we still mistakenly accept IPv4 clients via
> the IPv6 listener socket due to IPV6_V6ONLY flag being unset.
>
> We must ensure IPV6_V6ONLY is always set if ipv4=off
>
> This fixes the following scenarios
>
>   -incoming tcp::9000,ipv6=on
>   -incoming tcp:[::]:9000,ipv6=on
>   -chardev socket,id=cdev0,host=,port=9000,server,nowait,ipv4=off
>   -chardev socket,id=cdev0,host=,port=9000,server,nowait,ipv6=on
>   -chardev socket,id=cdev0,host=::,port=9000,server,nowait,ipv4=off
>   -chardev socket,id=cdev0,host=::,port=9000,server,nowait,ipv6=on
>
> which all mistakenly accepted IPv4 clients
>
> Reviewed-by: Eric Blake <eblake@redhat.com>
> Signed-off-by: Daniel P. Berrange <berrange@redhat.com>

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

> ---
>  util/qemu-sockets.c | 40 +++++++++++++++++++++++++++++++---------
>  1 file changed, 31 insertions(+), 9 deletions(-)
>
> diff --git a/util/qemu-sockets.c b/util/qemu-sockets.c
> index 81bc8de..852773d 100644
> --- a/util/qemu-sockets.c
> +++ b/util/qemu-sockets.c
> @@ -104,17 +104,16 @@ NetworkAddressFamily inet_netfamily(int family)
>   *   f     t       PF_INET6
>   *   t     -       PF_INET
>   *   t     f       PF_INET
> - *   t     t       PF_INET6
> + *   t     t       PF_INET6/PF_UNSPEC
>   *
>   * NB, this matrix is only about getting the necessary results
>   * from getaddrinfo(). Some of the cases require further work
>   * after reading results from getaddrinfo in order to fully
> - * apply the logic the end user wants. eg with the last case
> - * ipv4=t + ipv6=t + PF_INET6, getaddrinfo alone can only
> - * guarantee the ipv6=t part of the request - we need more
> - * checks to provide ipv4=t part of the guarantee. This is
> - * outside scope of this method and not currently handled by
> - * callers at all.
> + * apply the logic the end user wants.
> + *
> + * In the first and last cases, we must set IPV6_V6ONLY=0
> + * when binding, to allow a single listener to potentially
> + * accept both IPv4+6 addresses.
>   */
>  int inet_ai_family_from_address(InetSocketAddress *addr,
>                                  Error **errp)
> @@ -124,6 +123,23 @@ int inet_ai_family_from_address(InetSocketAddress *addr,
>          error_setg(errp, "Cannot disable IPv4 and IPv6 at same time");
>          return PF_UNSPEC;
>      }
> +    if ((addr->has_ipv6 && addr->ipv6) && (addr->has_ipv4 && addr->ipv4)) {
> +        /*
> +         * Some backends can only do a single listener. In that case
> +         * we want empty hostname to resolve to "::" and then use the
> +         * flag IPV6_V6ONLY==0 to get both protocols on 1 socket. This
> +         * doesn't work for addresses other than "", so they're just
> +         * inevitably broken until multiple listeners can be used,
> + * and thus we honour getaddrinfo automatic protocol detection.
> +         * Once all backends do multi-listener, remove the PF_INET6
> +         * branch entirely.
> +         */
> +        if (!addr->host || g_str_equal(addr->host, "")) {
> +            return PF_INET6;
> +        } else {
> +            return PF_UNSPEC;
> +        }
> +    }
>      if ((addr->has_ipv6 && addr->ipv6) || (addr->has_ipv4 && !addr->ipv4)) {
>          return PF_INET6;
>      }
> @@ -213,8 +229,14 @@ static int inet_listen_saddr(InetSocketAddress *saddr,
>          port_max = saddr->has_to ? saddr->to + port_offset : port_min;
>          for (p = port_min; p <= port_max; p++) {
>  #ifdef IPV6_V6ONLY
> -            /* listen on both ipv4 and ipv6 */
> -            int v6only = 0;
> +            /*
> +             * Deals with first & last cases in matrix in comment
> +             * for inet_ai_family_from_address().
> +             */
> +            int v6only =
> +                ((!saddr->has_ipv4 && !saddr->has_ipv6) ||
> +                 (saddr->has_ipv4 && saddr->ipv4 &&
> +                  saddr->has_ipv6 && saddr->ipv6)) ? 0 : 1;
>  #endif
>              inet_setport(e, p);
>  #ifdef IPV6_V6ONLY
>


* Re: [Qemu-devel] [PATCH v3 0/5] Fix handling of IPv4/IPv6 dual stack
  2017-06-01  8:29 [Qemu-devel] [PATCH v3 0/5] Fix handling of IPv4/IPv6 dual stack Daniel P. Berrange
                   ` (4 preceding siblings ...)
  2017-06-01  8:29 ` [Qemu-devel] [PATCH v3 5/5] tests: add functional test validating ipv4/ipv6 address flag handling Daniel P. Berrange
@ 2017-06-06 10:32 ` Gerd Hoffmann
  2017-06-10 16:06 ` Vlad Yasevich
  6 siblings, 0 replies; 9+ messages in thread
From: Gerd Hoffmann @ 2017-06-06 10:32 UTC (permalink / raw)
  To: Daniel P. Berrange, qemu-devel; +Cc: Paolo Bonzini, Philippe Mathieu-Daudé

On Thu, 2017-06-01 at 09:29 +0100, Daniel P. Berrange wrote:
> This series aims to fix a lot of bugs related to handling of IPv4 /
> IPv6
> dual stack.
> 
>  - The VNC server mistakenly listened on two separate ports 5900+5901
>    when the to= parameter was given
>  - IPv6 sockets are accepting IPv4 clients even when IPv4 is set to
>    be disabled
>  - IPv6 sockets are failing to accept IPv4 clients when IPv4 is not
> set
>    to be disabled
>  - The VNC server was loosing the ipv4=/ipv6= settings due to a bug
>    in the DNS resolver
> 
> The behaviour of all this is really subtle and hard to get working
> correctly
> across all the different network backends. Thus, the most important
> part of
> this patch series is the last patch which adds a test case covering
> the
> backends for -vnc, -chardev tcp, -net socket, and -incoming socket,
> with
> a 120 entry matrix.

Acked-by: Gerd Hoffmann <kraxel@gmail.com>


* Re: [Qemu-devel] [PATCH v3 0/5] Fix handling of IPv4/IPv6 dual stack
  2017-06-01  8:29 [Qemu-devel] [PATCH v3 0/5] Fix handling of IPv4/IPv6 dual stack Daniel P. Berrange
                   ` (5 preceding siblings ...)
  2017-06-06 10:32 ` [Qemu-devel] [PATCH v3 0/5] Fix handling of IPv4/IPv6 dual stack Gerd Hoffmann
@ 2017-06-10 16:06 ` Vlad Yasevich
  6 siblings, 0 replies; 9+ messages in thread
From: Vlad Yasevich @ 2017-06-10 16:06 UTC (permalink / raw)
  To: Daniel P. Berrange, qemu-devel
  Cc: Paolo Bonzini, Gerd Hoffmann, Philippe Mathieu-Daudé

On 06/01/2017 04:29 AM, Daniel P. Berrange wrote:
> This is a followup to:
> 
>   v1: https://lists.nongnu.org/archive/html/qemu-devel/2017-04/msg05659.html
>   v2: https://lists.nongnu.org/archive/html/qemu-devel/2017-05/msg04706.html
> 
> This series aims to fix a lot of bugs related to handling of IPv4 / IPv6
> dual stack.
> 
>  - The VNC server mistakenly listened on two separate ports 5900+5901
>    when the to= parameter was given
>  - IPv6 sockets are accepting IPv4 clients even when IPv4 is set to
>    be disabled
>  - IPv6 sockets are failing to accept IPv4 clients when IPv4 is not set
>    to be disabled
>  - The VNC server was losing the ipv4=/ipv6= settings due to a bug
>    in the DNS resolver
> 
> The behaviour of all this is really subtle and hard to get working correctly
> across all the different network backends. Thus, the most important part of
> this patch series is the last patch which adds a test case covering the
> backends for -vnc, -chardev tcp, -net socket, and -incoming socket, with
> a 120 entry matrix.
> 
> IOW, if you think any of the first 4 patches are applying the wrong logic,
> then take a look at the last patch and indicate which test matrix entries
> are believed to be defining wrong behaviour :-)
> 
> Changed in v2:
> 
>  - Clarified error message in commit message
>  - Replace assert with exit (Eric)
>  - Fix typo in test comment (Eric)
>  - Fix weird copyright line (Eric)
> 
> Daniel P. Berrange (5):
>   sockets: ensure we can bind to both ipv4 & ipv6 separately
>   sockets: don't block IPv4 clients when listening on "::"
>   sockets: ensure we don't accept IPv4 clients when IPv4 is disabled
>   io: preserve ipv4/ipv6 flags when resolving InetSocketAddress
>   tests: add functional test validating ipv4/ipv6 address flag handling
> 
>  io/dns-resolver.c          |   6 +-
>  tests/.gitignore           |   1 +
>  tests/Makefile.include     |   4 +
>  tests/test-sockets-proto.c | 855 +++++++++++++++++++++++++++++++++++++++++++++
>  util/qemu-sockets.c        |  71 +++-
>  5 files changed, 916 insertions(+), 21 deletions(-)
>  create mode 100644 tests/test-sockets-proto.c
> 

Series Reviewed-by: Vlad Yasevich <vyasevic@redhat.com>

-vlad



Thread overview: 9+ messages
2017-06-01  8:29 [Qemu-devel] [PATCH v3 0/5] Fix handling of IPv4/IPv6 dual stack Daniel P. Berrange
2017-06-01  8:29 ` [Qemu-devel] [PATCH v3 1/5] sockets: ensure we can bind to both ipv4 & ipv6 separately Daniel P. Berrange
2017-06-01  8:29 ` [Qemu-devel] [PATCH v3 2/5] sockets: don't block IPv4 clients when listening on "::" Daniel P. Berrange
2017-06-01  8:29 ` [Qemu-devel] [PATCH v3 3/5] sockets: ensure we don't accept IPv4 clients when IPv4 is disabled Daniel P. Berrange
2017-06-02  4:45   ` Philippe Mathieu-Daudé
2017-06-01  8:29 ` [Qemu-devel] [PATCH v3 4/5] io: preserve ipv4/ipv6 flags when resolving InetSocketAddress Daniel P. Berrange
2017-06-01  8:29 ` [Qemu-devel] [PATCH v3 5/5] tests: add functional test validating ipv4/ipv6 address flag handling Daniel P. Berrange
2017-06-06 10:32 ` [Qemu-devel] [PATCH v3 0/5] Fix handling of IPv4/IPv6 dual stack Gerd Hoffmann
2017-06-10 16:06 ` Vlad Yasevich
