* [Qemu-devel] [PATCH v1 00/14] port network layer onto glib
From: Liu Ping Fan @ 2013-05-07  5:46 UTC (permalink / raw)
  To: qemu-devel
  Cc: mdroth, Jan Kiszka, Stefan Hajnoczi, Anthony Liguori, Paolo Bonzini

Summary:
  patch 1:       GSource event abstraction
  patches 2~6:   port the network backends onto glib
  patches 7~10:  make the network core re-entrant
  patches 11~14: port the slirp backend onto glib

The deadlock problem around slirp->lock has been eliminated and the code now works fine.
The other components also look more stable, so I am dropping the RFC prefix and sending this as v1.




rfcv5->v1:
  1. rearrange the logic in net/socket.c

rfcv4->rfcv5:
  1. use GList to reimplement EventsGSource
  2. make readable()/writable() return the events the backend is interested in
  3. fix the potential deadlock on slirp->lock

rfcv3->rfcv4:
  1. separate the GSource event code into a dedicated file
  2. integrate with the net core re-entrancy work
  3. make slirp/ re-entrant

rfcv2->rfcv3:
  1. drop the hub and the frontend (virtio-net)
  2. split the patch for NetClientSource

rfcv1->rfcv2:
  1. a NetClientState can be associated with up to two GSources (for virtio-net,
     one for tx and one for rx), so the vqs can run on different threads
  2. port the network front-end onto glib, currently the virtio-net dataplane



Liu Ping Fan (14):
  util: introduce gsource event abstraction
  net: introduce bind_ctx to NetClientInfo
  net: port vde onto GSource
  net: port socket to GSource
  net: port tap onto GSource
  net: port tap-win32 onto GSource
  net: hub use lock to protect ports list
  net: introduce lock to protect NetQueue
  net: introduce lock to protect NetClientState's peer's access
  net: make netclient re-entrant with refcnt
  slirp: make timeout local
  slirp: make slirp event dispatch based on slirp instance, not global
  slirp: handle race condition
  slirp: use lock to protect the slirp_instances

 hw/qdev-properties-system.c |   14 +
 include/net/net.h           |   12 +
 include/qemu/module.h       |    2 +
 main-loop.c                 |    4 -
 net/hub.c                   |   28 ++-
 net/net.c                   |  123 ++++++++-
 net/queue.c                 |   15 +-
 net/slirp.c                 |   35 +++-
 net/socket.c                |  194 +++++++++++---
 net/tap-win32.c             |   31 ++-
 net/tap.c                   |   64 ++++-
 net/vde.c                   |   31 ++-
 slirp/if.c                  |   57 +++-
 slirp/libslirp.h            |    7 +-
 slirp/main.h                |    3 +-
 slirp/mbuf.h                |    2 +
 slirp/slirp.c               |  670 ++++++++++++++++++++++---------------------
 slirp/slirp.h               |   11 +-
 slirp/socket.c              |    2 +
 slirp/socket.h              |    1 +
 stubs/slirp.c               |    8 -
 util/Makefile.objs          |    1 +
 util/event_gsource.c        |  157 ++++++++++
 util/event_gsource.h        |   49 ++++
 24 files changed, 1105 insertions(+), 416 deletions(-)
 create mode 100644 util/event_gsource.c
 create mode 100644 util/event_gsource.h

-- 
1.7.4.4


* [Qemu-devel] [PATCH v1 01/14] util: introduce gsource event abstraction
From: Liu Ping Fan @ 2013-05-07  5:46 UTC (permalink / raw)
  To: qemu-devel
  Cc: mdroth, Jan Kiszka, Stefan Hajnoczi, Anthony Liguori, Paolo Bonzini

From: Liu Ping Fan <pingfank@linux.vnet.ibm.com>

Introduce two structs: EventGSource and EventsGSource.
EventGSource abstracts an event source backed by a single file descriptor.
EventsGSource abstracts an event source whose backing file descriptors
change dynamically, e.g. slirp.
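
For illustration, a backend built on the single-fd variant would be wired up
roughly like this (a minimal sketch; my_readable and my_handler are
placeholder names, not part of this patch; the real users follow in the
later backend patches):

    static gushort my_readable(void *opaque)
    {
        /* events the backend currently cares about */
        return G_IO_IN | G_IO_HUP | G_IO_ERR;
    }

    static gboolean my_handler(gpointer data)
    {
        EventGSource *nsrc = data;

        if (nsrc->gfd.revents & (G_IO_IN | G_IO_HUP | G_IO_ERR)) {
            /* read from nsrc->gfd.fd and hand the data to nsrc->opaque */
        }
        return true;
    }

    /* at backend init time */
    s->nsrc = event_source_new(fd, my_handler, s);
    s->nsrc->readable = my_readable;
    g_source_attach(&s->nsrc->source, NULL);  /* default main context for now */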

Signed-off-by: Liu Ping Fan <pingfank@linux.vnet.ibm.com>
---
 util/Makefile.objs   |    1 +
 util/event_gsource.c |  157 ++++++++++++++++++++++++++++++++++++++++++++++++++
 util/event_gsource.h |   49 ++++++++++++++++
 3 files changed, 207 insertions(+), 0 deletions(-)
 create mode 100644 util/event_gsource.c
 create mode 100644 util/event_gsource.h

diff --git a/util/Makefile.objs b/util/Makefile.objs
index 495a178..a676d7d 100644
--- a/util/Makefile.objs
+++ b/util/Makefile.objs
@@ -8,3 +8,4 @@ util-obj-y += error.o qemu-error.o
 util-obj-$(CONFIG_POSIX) += compatfd.o
 util-obj-y += iov.o aes.o qemu-config.o qemu-sockets.o uri.o notify.o
 util-obj-y += qemu-option.o qemu-progress.o
+util-obj-y += event_gsource.o
diff --git a/util/event_gsource.c b/util/event_gsource.c
new file mode 100644
index 0000000..12b5967
--- /dev/null
+++ b/util/event_gsource.c
@@ -0,0 +1,157 @@
+/*
+ *  Copyright (C) 2013 IBM
+ *
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of the GNU General Public License as published by
+ *  the Free Software Foundation; under version 2 of the License.
+ *
+ *  This program is distributed in the hope that it will be useful,
+ *  but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *  GNU General Public License for more details.
+ *
+ *  You should have received a copy of the GNU General Public License
+ *  along with this program; if not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include "event_gsource.h"
+#include "qemu/bitops.h"
+
+static gboolean prepare(GSource *src, gint *time)
+{
+    EventGSource *nsrc = (EventGSource *)src;
+    int events = 0;
+
+    if (!nsrc->readable && !nsrc->writable) {
+        return false;
+    }
+    if (nsrc->readable) {
+        events = nsrc->readable(nsrc->opaque);
+    }
+    if ((nsrc->writable)) {
+        events |= nsrc->writable(nsrc->opaque);
+    }
+    nsrc->gfd.events = events;
+
+    return false;
+}
+
+static gboolean check(GSource *src)
+{
+    EventGSource *nsrc = (EventGSource *)src;
+
+    if (nsrc->gfd.revents & nsrc->gfd.events) {
+        return true;
+    }
+    return false;
+}
+
+static gboolean dispatch(GSource *src, GSourceFunc cb, gpointer data)
+{
+    gboolean ret = false;
+
+    if (cb) {
+        ret = cb(data);
+    }
+    return ret;
+}
+
+static GSourceFuncs net_gsource_funcs = {
+    prepare,
+    check,
+    dispatch,
+    NULL
+};
+
+EventGSource *event_source_new(int fd, GSourceFunc dispatch_cb, void *opaque)
+{
+    EventGSource *nsrc = (EventGSource *)g_source_new(&net_gsource_funcs,
+                                                    sizeof(EventGSource));
+    nsrc->gfd.fd = fd;
+    nsrc->opaque = opaque;
+    g_source_set_callback(&nsrc->source, dispatch_cb, nsrc, NULL);
+    g_source_add_poll(&nsrc->source, &nsrc->gfd);
+
+    return nsrc;
+}
+
+void event_source_release(EventGSource *src)
+{
+    g_source_destroy(&src->source);
+}
+
+GPollFD *events_source_add_gfd(EventsGSource *src, int fd)
+{
+    GPollFD *retfd;
+
+    retfd = g_slice_alloc(sizeof(GPollFD));
+    retfd->events = 0;
+    retfd->fd = fd;
+    src->pollfds_list = g_list_append(src->pollfds_list, retfd);
+    if (fd >= 0) {
+        g_source_add_poll(&src->source, retfd);
+    }
+
+    return retfd;
+}
+
+void events_source_remove_gfd(EventsGSource *src, GPollFD *pollfd)
+{
+    g_source_remove_poll(&src->source, pollfd);
+    src->pollfds_list = g_list_remove(src->pollfds_list, pollfd);
+    g_slice_free(GPollFD, pollfd);
+}
+
+static gboolean events_source_check(GSource *src)
+{
+    EventsGSource *nsrc = (EventsGSource *)src;
+    GList *cur;
+    GPollFD *gfd;
+
+    cur = nsrc->pollfds_list;
+    while (cur) {
+        gfd = cur->data;
+        if (gfd->fd >= 0 && (gfd->revents & gfd->events)) {
+            return true;
+        }
+        cur = g_list_next(cur);
+    }
+
+    return false;
+}
+
+static gboolean events_source_dispatch(GSource *src, GSourceFunc cb,
+    gpointer data)
+{
+    gboolean ret = false;
+
+    if (cb) {
+        ret = cb(data);
+    }
+    return ret;
+}
+
+EventsGSource *events_source_new(GPrepare prepare, GSourceFunc dispatch_cb,
+    void *opaque)
+{
+    EventsGSource *src;
+    GSourceFuncs *gfuncs = g_new0(GSourceFuncs, 1);
+    gfuncs->prepare = prepare;
+    gfuncs->check = events_source_check;
+    gfuncs->dispatch = events_source_dispatch;
+
+    src = (EventsGSource *)g_source_new(gfuncs, sizeof(EventsGSource));
+    src->gfuncs = gfuncs;
+    src->pollfds_list = NULL;
+    src->opaque = opaque;
+    g_source_set_callback(&src->source, dispatch_cb, src, NULL);
+
+    return src;
+}
+
+void events_source_release(EventsGSource *src)
+{
+    assert(!src->pollfds_list);
+    g_free(src->gfuncs);
+    g_source_destroy(&src->source);
+}
diff --git a/util/event_gsource.h b/util/event_gsource.h
new file mode 100644
index 0000000..25b15d7
--- /dev/null
+++ b/util/event_gsource.h
@@ -0,0 +1,49 @@
+/*
+ *  Copyright (C) 2013 IBM
+ *
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of the GNU General Public License as published by
+ *  the Free Software Foundation; under version 2 of the License.
+ *
+ *  This program is distributed in the hope that it will be useful,
+ *  but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *  GNU General Public License for more details.
+ *
+ *  You should have received a copy of the GNU General Public License
+ *  along with this program; if not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef EVENT_GSOURCE_H
+#define EVENT_GSOURCE_H
+#include "qemu-common.h"
+
+typedef gushort (*Pollable)(void *opaque);
+typedef gboolean (*GPrepare)(GSource *source, gint *timeout_);
+
+/* GSource driven by a single fd */
+typedef struct EventGSource {
+    GSource source;
+    GPollFD gfd;
+    Pollable readable;
+    Pollable writable;
+    void *opaque;
+} EventGSource;
+
+EventGSource *event_source_new(int fd, GSourceFunc dispatch_cb, void *opaque);
+void event_source_release(EventGSource *src);
+
+/* GSource driven by multiple, dynamically changing fds */
+typedef struct EventsGSource {
+    GSource source;
+    GList *pollfds_list;
+    GSourceFuncs *gfuncs;
+    void *opaque;
+} EventsGSource;
+
+EventsGSource *events_source_new(GPrepare prepare, GSourceFunc dispatch_cb,
+    void *opaque);
+void events_source_release(EventsGSource *src);
+GPollFD *events_source_add_gfd(EventsGSource *src, int fd);
+void events_source_remove_gfd(EventsGSource *src, GPollFD *pollfd);
+#endif
-- 
1.7.4.4


* [Qemu-devel] [PATCH v1 02/14] net: introduce bind_ctx to NetClientInfo
From: Liu Ping Fan @ 2013-05-07  5:46 UTC (permalink / raw)
  To: qemu-devel
  Cc: mdroth, Jan Kiszka, Stefan Hajnoczi, Anthony Liguori, Paolo Bonzini

From: Liu Ping Fan <pingfank@linux.vnet.ibm.com>

Introduce a bind_ctx callback in NetClientInfo. It lets a NetClientState
attach its GSource to a GMainContext. For now these GSources are attached
to the default context, but in the future, once all the race conditions in
the network layer are resolved, NetClientStates will be able to run on
different threads.
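
For a backend, the expected implementation is simply to attach its GSource
to the given context, along these lines (a rough sketch; MyState is a
placeholder, the concrete versions appear in the following backend patches):

    static void my_bind_ctx(NetClientState *nc, GMainContext *ctx)
    {
        MyState *s = DO_UPCAST(MyState, nc, nc);

        /* ctx == NULL attaches to the default GMainContext */
        g_source_attach(&s->nsrc->source, ctx);
    }

The backend's init code then calls nc->info->bind_ctx(nc, NULL) once its
EventGSource has been created.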

Signed-off-by: Liu Ping Fan <pingfank@linux.vnet.ibm.com>
---
 include/net/net.h |    2 ++
 1 files changed, 2 insertions(+), 0 deletions(-)

diff --git a/include/net/net.h b/include/net/net.h
index cb049a1..88332d2 100644
--- a/include/net/net.h
+++ b/include/net/net.h
@@ -44,6 +44,7 @@ typedef ssize_t (NetReceiveIOV)(NetClientState *, const struct iovec *, int);
 typedef void (NetCleanup) (NetClientState *);
 typedef void (LinkStatusChanged)(NetClientState *);
 typedef void (NetClientDestructor)(NetClientState *);
+typedef void (NetClientBindCtx)(NetClientState *, GMainContext *);
 
 typedef struct NetClientInfo {
     NetClientOptionsKind type;
@@ -55,6 +56,7 @@ typedef struct NetClientInfo {
     NetCleanup *cleanup;
     LinkStatusChanged *link_status_changed;
     NetPoll *poll;
+    NetClientBindCtx *bind_ctx;
 } NetClientInfo;
 
 struct NetClientState {
-- 
1.7.4.4


* [Qemu-devel] [PATCH v1 03/14] net: port vde onto GSource
From: Liu Ping Fan @ 2013-05-07  5:46 UTC (permalink / raw)
  To: qemu-devel
  Cc: mdroth, Jan Kiszka, Stefan Hajnoczi, Anthony Liguori, Paolo Bonzini

From: Liu Ping Fan <pingfank@linux.vnet.ibm.com>

Signed-off-by: Liu Ping Fan <pingfank@linux.vnet.ibm.com>
---
 net/vde.c |   31 +++++++++++++++++++++++++++++--
 1 files changed, 29 insertions(+), 2 deletions(-)

diff --git a/net/vde.c b/net/vde.c
index 4dea32d..fe763dd 100644
--- a/net/vde.c
+++ b/net/vde.c
@@ -30,10 +30,12 @@
 #include "qemu-common.h"
 #include "qemu/option.h"
 #include "qemu/main-loop.h"
+#include "util/event_gsource.h"
 
 typedef struct VDEState {
     NetClientState nc;
     VDECONN *vde;
+    EventGSource *nsrc;
 } VDEState;
 
 static void vde_to_qemu(void *opaque)
@@ -60,20 +62,43 @@ static ssize_t vde_receive(NetClientState *nc, const uint8_t *buf, size_t size)
     return ret;
 }
 
+static gboolean vde_handler(gpointer data)
+{
+    EventGSource *nsrc = (EventGSource *)data;
+
+    if (nsrc->gfd.revents & (G_IO_IN | G_IO_HUP | G_IO_ERR)) {
+        vde_to_qemu(nsrc->opaque);
+    }
+    return true;
+}
+
 static void vde_cleanup(NetClientState *nc)
 {
     VDEState *s = DO_UPCAST(VDEState, nc, nc);
-    qemu_set_fd_handler(vde_datafd(s->vde), NULL, NULL, NULL);
+    event_source_release(s->nsrc);
     vde_close(s->vde);
 }
 
+static void vde_bind_ctx(NetClientState *nc, GMainContext *ctx)
+{
+    VDEState *s = DO_UPCAST(VDEState, nc, nc);
+
+    g_source_attach(&s->nsrc->source, ctx);
+}
+
 static NetClientInfo net_vde_info = {
     .type = NET_CLIENT_OPTIONS_KIND_VDE,
     .size = sizeof(VDEState),
     .receive = vde_receive,
     .cleanup = vde_cleanup,
+    .bind_ctx = vde_bind_ctx,
 };
 
+static gushort readable(void *opaque)
+{
+    return G_IO_IN | G_IO_HUP | G_IO_ERR;
+}
+
 static int net_vde_init(NetClientState *peer, const char *model,
                         const char *name, const char *sock,
                         int port, const char *group, int mode)
@@ -104,7 +129,9 @@ static int net_vde_init(NetClientState *peer, const char *model,
 
     s->vde = vde;
 
-    qemu_set_fd_handler(vde_datafd(s->vde), vde_to_qemu, NULL, s);
+    s->nsrc = event_source_new(vde_datafd(vde), vde_handler, s);
+    s->nsrc->readable = readable;
+    nc->info->bind_ctx(nc, NULL);
 
     return 0;
 }
-- 
1.7.4.4


* [Qemu-devel] [PATCH v1 04/14] net: port socket to GSource
From: Liu Ping Fan @ 2013-05-07  5:46 UTC (permalink / raw)
  To: qemu-devel
  Cc: mdroth, Jan Kiszka, Stefan Hajnoczi, Anthony Liguori, Paolo Bonzini

From: Liu Ping Fan <pingfank@linux.vnet.ibm.com>

Port NetSocketState onto GSource. The only special thing is that, owing to
the socket's state-machine transitions, we need to switch handlers. We do
that by destroying the old EventGSource and attaching a new one to the
NetSocketState.
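
Condensed, the pattern on a state change looks like this (this is what
net_socket_connect_handler and net_socket_listen_handler below do):

    /* the old handler no longer matches the socket state: replace the source */
    event_source_release(s->nsrc);
    s->nsrc = event_source_new(s->fd, net_socket_establish_handler, s);
    s->nsrc->readable = socket_establish_readable;
    s->nsrc->writable = socket_establish_writable;
    s->nc.info->bind_ctx(&s->nc, NULL);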

Signed-off-by: Liu Ping Fan <pingfank@linux.vnet.ibm.com>
---
 net/socket.c |  194 ++++++++++++++++++++++++++++++++++++++++++++++++----------
 1 files changed, 161 insertions(+), 33 deletions(-)

diff --git a/net/socket.c b/net/socket.c
index 396dc8c..d52991d 100644
--- a/net/socket.c
+++ b/net/socket.c
@@ -31,6 +31,8 @@
 #include "qemu/option.h"
 #include "qemu/sockets.h"
 #include "qemu/iov.h"
+#include "util/event_gsource.h"
+
 
 typedef struct NetSocketState {
     NetClientState nc;
@@ -42,13 +44,15 @@ typedef struct NetSocketState {
     unsigned int send_index;      /* number of bytes sent (only SOCK_STREAM) */
     uint8_t buf[4096];
     struct sockaddr_in dgram_dst; /* contains inet host and port destination iff connectionless (SOCK_DGRAM) */
-    IOHandler *send_fn;           /* differs between SOCK_STREAM/SOCK_DGRAM */
     bool read_poll;               /* waiting to receive data? */
     bool write_poll;              /* waiting to transmit data? */
+    EventGSource *nsrc;
 } NetSocketState;
 
-static void net_socket_accept(void *opaque);
 static void net_socket_writable(void *opaque);
+static gboolean net_socket_listen_handler(gpointer data);
+static gboolean net_socket_establish_handler(gpointer data);
+
 
 /* Only read packets from socket when peer can receive them */
 static int net_socket_can_send(void *opaque)
@@ -58,25 +62,14 @@ static int net_socket_can_send(void *opaque)
     return qemu_can_send_packet(&s->nc);
 }
 
-static void net_socket_update_fd_handler(NetSocketState *s)
-{
-    qemu_set_fd_handler2(s->fd,
-                         s->read_poll  ? net_socket_can_send : NULL,
-                         s->read_poll  ? s->send_fn : NULL,
-                         s->write_poll ? net_socket_writable : NULL,
-                         s);
-}
-
 static void net_socket_read_poll(NetSocketState *s, bool enable)
 {
     s->read_poll = enable;
-    net_socket_update_fd_handler(s);
 }
 
 static void net_socket_write_poll(NetSocketState *s, bool enable)
 {
     s->write_poll = enable;
-    net_socket_update_fd_handler(s);
 }
 
 static void net_socket_writable(void *opaque)
@@ -141,6 +134,59 @@ static ssize_t net_socket_receive_dgram(NetClientState *nc, const uint8_t *buf,
     return ret;
 }
 
+static gushort socket_connecting_writable(void *opaque)
+{
+    return G_IO_OUT | G_IO_ERR;
+}
+
+static gushort socket_listen_readable(void *opaque)
+{
+    return G_IO_IN | G_IO_HUP | G_IO_ERR;
+}
+
+static gushort socket_establish_readable(void *opaque)
+{
+    NetSocketState *s = opaque;
+
+    /* rely on net_socket_send to handle err */
+    if (s->read_poll && net_socket_can_send(s)) {
+        return G_IO_IN | G_IO_HUP | G_IO_ERR;
+    }
+    return 0;
+}
+
+static gushort socket_establish_writable(void *opaque)
+{
+    NetSocketState *s = opaque;
+
+    if (s->write_poll) {
+        return G_IO_OUT | G_IO_HUP | G_IO_ERR;
+    }
+    return 0;
+}
+
+static gushort socket_dgram_readable(void *opaque)
+{
+    NetSocketState *s = opaque;
+
+    /* rely on net_socket_send_dgram to handle err */
+    if (s->read_poll && net_socket_can_send(s)) {
+        return G_IO_IN | G_IO_ERR;
+    }
+    return 0;
+}
+
+static gushort socket_dgram_writable(void *opaque)
+{
+    NetSocketState *s = opaque;
+
+    if (s->write_poll) {
+        return G_IO_OUT | G_IO_ERR;
+    }
+    return 0;
+}
+
+/* common handler for the accepted-connection and connecting cases */
 static void net_socket_send(void *opaque)
 {
     NetSocketState *s = opaque;
@@ -159,8 +205,13 @@ static void net_socket_send(void *opaque)
     eoc:
         net_socket_read_poll(s, false);
         net_socket_write_poll(s, false);
+        /* listen_fd != -1 tells the accepted case from the connecting one */
         if (s->listen_fd != -1) {
-            qemu_set_fd_handler(s->listen_fd, net_socket_accept, NULL, s);
+            event_source_release(s->nsrc);
+            s->nsrc = event_source_new(s->listen_fd, net_socket_listen_handler,
+                                s);
+            s->nsrc->readable = socket_listen_readable;
+            s->nc.info->bind_ctx(&s->nc, NULL);
         }
         closesocket(s->fd);
 
@@ -231,6 +282,8 @@ static void net_socket_send_dgram(void *opaque)
         /* end of connection */
         net_socket_read_poll(s, false);
         net_socket_write_poll(s, false);
+        /* on dgram error, stop polling this fd */
+        g_source_remove_poll(&s->nsrc->source, &s->nsrc->gfd);
         return;
     }
     qemu_send_packet(&s->nc, s->buf, size);
@@ -331,6 +384,14 @@ static void net_socket_cleanup(NetClientState *nc)
         closesocket(s->listen_fd);
         s->listen_fd = -1;
     }
+    event_source_release(s->nsrc);
+}
+
+static void net_socket_bind_ctx(NetClientState *nc, GMainContext *ctx)
+{
+    NetSocketState *s = DO_UPCAST(NetSocketState, nc, nc);
+
+    g_source_attach(&s->nsrc->source, ctx);
 }
 
 static NetClientInfo net_dgram_socket_info = {
@@ -338,8 +399,24 @@ static NetClientInfo net_dgram_socket_info = {
     .size = sizeof(NetSocketState),
     .receive = net_socket_receive_dgram,
     .cleanup = net_socket_cleanup,
+    .bind_ctx = net_socket_bind_ctx,
 };
 
+static gboolean net_socket_dgram_handler(gpointer data)
+{
+    EventGSource *nsrc = (EventGSource *)data;
+    NetSocketState *s = nsrc->opaque;
+
+    /* errors are handled inside net_socket_send_dgram */
+    if (nsrc->gfd.revents & (G_IO_IN | G_IO_ERR)) {
+        net_socket_send_dgram(s);
+    }
+    if (nsrc->gfd.revents & (G_IO_OUT | G_IO_ERR)) {
+        net_socket_writable(s);
+    }
+    return true;
+}
+
 static NetSocketState *net_socket_fd_init_dgram(NetClientState *peer,
                                                 const char *model,
                                                 const char *name,
@@ -393,8 +470,12 @@ static NetSocketState *net_socket_fd_init_dgram(NetClientState *peer,
 
     s->fd = fd;
     s->listen_fd = -1;
-    s->send_fn = net_socket_send_dgram;
+    s->nsrc = event_source_new(fd, net_socket_dgram_handler, s);
+    s->nsrc->readable = socket_dgram_readable;
+    s->nsrc->writable = socket_dgram_writable;
+    nc->info->bind_ctx(nc, NULL);
     net_socket_read_poll(s, true);
+    net_socket_write_poll(s, true);
 
     /* mcast: save bound address as dst */
     if (is_connected) {
@@ -408,20 +489,31 @@ err:
     return NULL;
 }
 
-static void net_socket_connect(void *opaque)
-{
-    NetSocketState *s = opaque;
-    s->send_fn = net_socket_send;
-    net_socket_read_poll(s, true);
-}
-
 static NetClientInfo net_socket_info = {
     .type = NET_CLIENT_OPTIONS_KIND_SOCKET,
     .size = sizeof(NetSocketState),
     .receive = net_socket_receive,
     .cleanup = net_socket_cleanup,
+    .bind_ctx = net_socket_bind_ctx,
 };
 
+static gboolean net_socket_connect_handler(gpointer data)
+{
+    EventGSource *nsrc = data;
+    NetSocketState *s = nsrc->opaque;
+
+    event_source_release(s->nsrc);
+    /* error handling is deferred to net_socket_establish_handler */
+    s->nsrc = event_source_new(s->fd, net_socket_establish_handler, s);
+    s->nsrc->readable = socket_establish_readable;
+    s->nsrc->writable = socket_establish_writable;
+    s->nc.info->bind_ctx(&s->nc, NULL);
+    net_socket_read_poll(s, true);
+    net_socket_write_poll(s, true);
+
+    return true;
+}
+
 static NetSocketState *net_socket_fd_init_stream(NetClientState *peer,
                                                  const char *model,
                                                  const char *name,
@@ -440,9 +532,20 @@ static NetSocketState *net_socket_fd_init_stream(NetClientState *peer,
     s->listen_fd = -1;
 
     if (is_connected) {
-        net_socket_connect(s);
+        assert(!s->nsrc);
+        s->nsrc = event_source_new(fd, net_socket_establish_handler, s);
+        s->nsrc->readable = socket_establish_readable;
+        s->nsrc->writable = socket_establish_writable;
+        nc->info->bind_ctx(nc, NULL);
+        net_socket_read_poll(s, true);
+        net_socket_write_poll(s, true);
     } else {
-        qemu_set_fd_handler(s->fd, NULL, net_socket_connect, s);
+        assert(!s->nsrc);
+        s->nsrc = event_source_new(fd, net_socket_connect_handler, s);
+        s->nsrc->writable = socket_connecting_writable;
+        nc->info->bind_ctx(nc, NULL);
+        net_socket_read_poll(s, false);
+        net_socket_write_poll(s, true);
     }
     return s;
 }
@@ -473,30 +576,52 @@ static NetSocketState *net_socket_fd_init(NetClientState *peer,
     return NULL;
 }
 
-static void net_socket_accept(void *opaque)
+static gboolean net_socket_establish_handler(gpointer data)
 {
-    NetSocketState *s = opaque;
+    EventGSource *nsrc = (EventGSource *)data;
+    NetSocketState *s = nsrc->opaque;
+
+    if (nsrc->gfd.revents & (G_IO_IN | G_IO_HUP | G_IO_ERR)) {
+        net_socket_send(s);
+    }
+    if ((nsrc->gfd.revents & (G_IO_OUT | G_IO_HUP | G_IO_ERR))) {
+        net_socket_writable(s);
+    }
+    return true;
+}
+
+static gboolean net_socket_listen_handler(gpointer data)
+{
+    EventGSource *nsrc = data;
+    NetSocketState *s = nsrc->opaque;
     struct sockaddr_in saddr;
     socklen_t len;
     int fd;
 
-    for(;;) {
-        len = sizeof(saddr);
+    len = sizeof(saddr);
+    do {
         fd = qemu_accept(s->listen_fd, (struct sockaddr *)&saddr, &len);
         if (fd < 0 && errno != EINTR) {
-            return;
+            return false;
         } else if (fd >= 0) {
-            qemu_set_fd_handler(s->listen_fd, NULL, NULL, NULL);
             break;
         }
-    }
+    } while (true);
 
     s->fd = fd;
     s->nc.link_down = false;
-    net_socket_connect(s);
+    /* prevent more than one connect req */
+    event_source_release(s->nsrc);
+    s->nsrc = event_source_new(fd, net_socket_establish_handler, s);
+    s->nsrc->readable = socket_establish_readable;
+    s->nsrc->writable = socket_establish_writable;
+    s->nc.info->bind_ctx(&s->nc, NULL);
+    net_socket_read_poll(s, true);
     snprintf(s->nc.info_str, sizeof(s->nc.info_str),
              "socket: connection from %s:%d",
              inet_ntoa(saddr.sin_addr), ntohs(saddr.sin_port));
+
+    return true;
 }
 
 static int net_socket_listen_init(NetClientState *peer,
@@ -542,7 +667,10 @@ static int net_socket_listen_init(NetClientState *peer,
     s->listen_fd = fd;
     s->nc.link_down = true;
 
-    qemu_set_fd_handler(s->listen_fd, net_socket_accept, NULL, s);
+    s->nsrc = event_source_new(fd, net_socket_listen_handler, s);
+    s->nsrc->readable = socket_listen_readable;
+    nc->info->bind_ctx(nc, NULL);
+
     return 0;
 }
 
-- 
1.7.4.4


* [Qemu-devel] [PATCH v1 05/14] net: port tap onto GSource
From: Liu Ping Fan @ 2013-05-07  5:46 UTC (permalink / raw)
  To: qemu-devel
  Cc: mdroth, Jan Kiszka, Stefan Hajnoczi, Anthony Liguori, Paolo Bonzini

From: Liu Ping Fan <pingfank@linux.vnet.ibm.com>

Signed-off-by: Liu Ping Fan <pingfank@linux.vnet.ibm.com>
---
 net/tap.c |   64 +++++++++++++++++++++++++++++++++++++++++++++++++++---------
 1 files changed, 54 insertions(+), 10 deletions(-)

diff --git a/net/tap.c b/net/tap.c
index daab350..5f4d59f 100644
--- a/net/tap.c
+++ b/net/tap.c
@@ -41,6 +41,7 @@
 #include "qemu/error-report.h"
 
 #include "net/tap.h"
+#include "util/event_gsource.h"
 
 #include "hw/vhost_net.h"
 
@@ -62,6 +63,7 @@ typedef struct TAPState {
     bool enabled;
     VHostNetState *vhost_net;
     unsigned host_vnet_hdr_len;
+    EventGSource *nsrc;
 } TAPState;
 
 static int launch_script(const char *setup_script, const char *ifname, int fd);
@@ -70,25 +72,48 @@ static int tap_can_send(void *opaque);
 static void tap_send(void *opaque);
 static void tap_writable(void *opaque);
 
-static void tap_update_fd_handler(TAPState *s)
+static gushort readable(void *opaque)
 {
-    qemu_set_fd_handler2(s->fd,
-                         s->read_poll && s->enabled ? tap_can_send : NULL,
-                         s->read_poll && s->enabled ? tap_send     : NULL,
-                         s->write_poll && s->enabled ? tap_writable : NULL,
-                         s);
+    TAPState *s = opaque;
+
+    if (s->enabled && s->read_poll &&
+        tap_can_send(s)) {
+        return G_IO_IN;
+    }
+    return 0;
+}
+
+static gushort writable(void *opaque)
+{
+    TAPState *s = opaque;
+
+    if (s->enabled && s->write_poll) {
+        return G_IO_OUT;
+    }
+    return 0;
+}
+
+static gboolean tap_handler(gpointer data)
+{
+    EventGSource *nsrc = data;
+
+    if (nsrc->gfd.revents & G_IO_IN) {
+        tap_send(nsrc->opaque);
+    }
+    if (nsrc->gfd.revents & G_IO_OUT) {
+        tap_writable(nsrc->opaque);
+    }
+    return true;
 }
 
 static void tap_read_poll(TAPState *s, bool enable)
 {
     s->read_poll = enable;
-    tap_update_fd_handler(s);
 }
 
 static void tap_write_poll(TAPState *s, bool enable)
 {
     s->write_poll = enable;
-    tap_update_fd_handler(s);
 }
 
 static void tap_writable(void *opaque)
@@ -291,6 +316,7 @@ static void tap_cleanup(NetClientState *nc)
 
     tap_read_poll(s, false);
     tap_write_poll(s, false);
+    event_source_release(s->nsrc);
     close(s->fd);
     s->fd = -1;
 }
@@ -300,6 +326,12 @@ static void tap_poll(NetClientState *nc, bool enable)
     TAPState *s = DO_UPCAST(TAPState, nc, nc);
     tap_read_poll(s, enable);
     tap_write_poll(s, enable);
+
+    if (!enable) {
+        g_source_remove_poll(&s->nsrc->source, &s->nsrc->gfd);
+    } else {
+        g_source_add_poll(&s->nsrc->source, &s->nsrc->gfd);
+    }
 }
 
 int tap_get_fd(NetClientState *nc)
@@ -309,6 +341,13 @@ int tap_get_fd(NetClientState *nc)
     return s->fd;
 }
 
+static void tap_bind_ctx(NetClientState *nc, GMainContext *ctx)
+{
+    TAPState *s = DO_UPCAST(TAPState, nc, nc);
+
+    g_source_attach(&s->nsrc->source, ctx);
+}
+
 /* fd support */
 
 static NetClientInfo net_tap_info = {
@@ -319,6 +358,7 @@ static NetClientInfo net_tap_info = {
     .receive_iov = tap_receive_iov,
     .poll = tap_poll,
     .cleanup = tap_cleanup,
+    .bind_ctx = tap_bind_ctx,
 };
 
 static TAPState *net_tap_fd_init(NetClientState *peer,
@@ -596,6 +636,7 @@ static int net_init_tap_one(const NetdevTapOptions *tap, NetClientState *peer,
                             int vnet_hdr, int fd)
 {
     TAPState *s;
+    EventGSource *nsrc;
 
     s = net_tap_fd_init(peer, model, name, fd, vnet_hdr);
     if (!s) {
@@ -606,6 +647,11 @@ static int net_init_tap_one(const NetdevTapOptions *tap, NetClientState *peer,
     if (tap_set_sndbuf(s->fd, tap) < 0) {
         return -1;
     }
+    nsrc = event_source_new(s->fd, tap_handler, s);
+    nsrc->readable = readable;
+    nsrc->writable = writable;
+    s->nsrc = nsrc;
+    s->nc.info->bind_ctx(&s->nc, NULL);
 
     if (tap->has_fd || tap->has_fds) {
         snprintf(s->nc.info_str, sizeof(s->nc.info_str), "fd=%d", fd);
@@ -844,7 +890,6 @@ int tap_enable(NetClientState *nc)
         ret = tap_fd_enable(s->fd);
         if (ret == 0) {
             s->enabled = true;
-            tap_update_fd_handler(s);
         }
         return ret;
     }
@@ -862,7 +907,6 @@ int tap_disable(NetClientState *nc)
         if (ret == 0) {
             qemu_purge_queued_packets(nc);
             s->enabled = false;
-            tap_update_fd_handler(s);
         }
         return ret;
     }
-- 
1.7.4.4


* [Qemu-devel] [PATCH v1 06/14] net: port tap-win32 onto GSource
From: Liu Ping Fan @ 2013-05-07  5:46 UTC (permalink / raw)
  To: qemu-devel
  Cc: mdroth, Jan Kiszka, Stefan Hajnoczi, Anthony Liguori, Paolo Bonzini

From: Liu Ping Fan <pingfank@linux.vnet.ibm.com>

Signed-off-by: Liu Ping Fan <pingfank@linux.vnet.ibm.com>
---
 net/tap-win32.c |   31 +++++++++++++++++++++++++++++--
 1 files changed, 29 insertions(+), 2 deletions(-)

diff --git a/net/tap-win32.c b/net/tap-win32.c
index 91e9e84..7a84195 100644
--- a/net/tap-win32.c
+++ b/net/tap-win32.c
@@ -635,13 +635,14 @@ static int tap_win32_open(tap_win32_overlapped_t **phandle,
  typedef struct TAPState {
      NetClientState nc;
      tap_win32_overlapped_t *handle;
+     EventGSource *nsrc;
  } TAPState;
 
 static void tap_cleanup(NetClientState *nc)
 {
     TAPState *s = DO_UPCAST(TAPState, nc, nc);
 
-    qemu_del_wait_object(s->handle->tap_semaphore, NULL, NULL);
+    event_source_release(s->nsrc);
 
     /* FIXME: need to kill thread and close file handle:
        tap_win32_close(s);
@@ -669,13 +670,37 @@ static void tap_win32_send(void *opaque)
     }
 }
 
+static void tap_bind_ctx(NetClientState *nc, GMainContext *ctx)
+{
+    TAPState *s = DO_UPCAST(TAPState, nc, nc);
+
+    g_source_attach(&s->nsrc->source, ctx);
+}
+
 static NetClientInfo net_tap_win32_info = {
     .type = NET_CLIENT_OPTIONS_KIND_TAP,
     .size = sizeof(TAPState),
     .receive = tap_receive,
     .cleanup = tap_cleanup,
+    .bind_ctx = tap_bind_ctx,
 };
 
+static gboolean tap_win32_handler(gpointer data)
+{
+    EventGSource *nsrc = data;
+    TAPState *s = nsrc->opaque;
+
+    if (nsrc->gfd.revents & G_IO_IN) {
+        tap_win32_send(s);
+    }
+    return true;
+}
+
+static gushort readable(void *opaque)
+{
+    return G_IO_IN;
+}
+
 static int tap_win32_init(NetClientState *peer, const char *model,
                           const char *name, const char *ifname)
 {
@@ -697,7 +722,9 @@ static int tap_win32_init(NetClientState *peer, const char *model,
 
     s->handle = handle;
 
-    qemu_add_wait_object(s->handle->tap_semaphore, tap_win32_send, s);
+    s->nsrc = event_source_new(s->handle->tap_semaphore, tap_win32_handler, s);
+    s->nsrc->readable = readable;
+    nc->info->bind_ctx(&s->nc, NULL);
 
     return 0;
 }
-- 
1.7.4.4


* [Qemu-devel] [PATCH v1 07/14] net: hub use lock to protect ports list
From: Liu Ping Fan @ 2013-05-07  5:46 UTC (permalink / raw)
  To: qemu-devel
  Cc: mdroth, Jan Kiszka, Stefan Hajnoczi, Anthony Liguori, Paolo Bonzini

From: Liu Ping Fan <pingfank@linux.vnet.ibm.com>

Hub ports will run on multiple threads, so use a lock to protect the hub's
ports list.
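
Every traversal of hub->ports is now bracketed by the new lock; e.g. the
receive path takes the shape below (see net_hub_receive() in the diff):

    qemu_mutex_lock(&hub->ports_lock);
    QLIST_FOREACH(port, &hub->ports, next) {
        if (port == source_port) {
            continue;
        }
        qemu_send_packet(&port->nc, buf, len);
    }
    qemu_mutex_unlock(&hub->ports_lock);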

Signed-off-by: Liu Ping Fan <pingfank@linux.vnet.ibm.com>
---
 net/hub.c |   25 ++++++++++++++++++++++++-
 1 files changed, 24 insertions(+), 1 deletions(-)

diff --git a/net/hub.c b/net/hub.c
index df32074..812a6dc 100644
--- a/net/hub.c
+++ b/net/hub.c
@@ -37,6 +37,7 @@ struct NetHub {
     int id;
     QLIST_ENTRY(NetHub) next;
     int num_ports;
+    QemuMutex ports_lock;
     QLIST_HEAD(, NetHubPort) ports;
 };
 
@@ -47,6 +48,7 @@ static ssize_t net_hub_receive(NetHub *hub, NetHubPort *source_port,
 {
     NetHubPort *port;
 
+    qemu_mutex_lock(&hub->ports_lock);
     QLIST_FOREACH(port, &hub->ports, next) {
         if (port == source_port) {
             continue;
@@ -54,6 +56,7 @@ static ssize_t net_hub_receive(NetHub *hub, NetHubPort *source_port,
 
         qemu_send_packet(&port->nc, buf, len);
     }
+    qemu_mutex_unlock(&hub->ports_lock);
     return len;
 }
 
@@ -63,6 +66,7 @@ static ssize_t net_hub_receive_iov(NetHub *hub, NetHubPort *source_port,
     NetHubPort *port;
     ssize_t len = iov_size(iov, iovcnt);
 
+    qemu_mutex_lock(&hub->ports_lock);
     QLIST_FOREACH(port, &hub->ports, next) {
         if (port == source_port) {
             continue;
@@ -70,6 +74,7 @@ static ssize_t net_hub_receive_iov(NetHub *hub, NetHubPort *source_port,
 
         qemu_sendv_packet(&port->nc, iov, iovcnt);
     }
+    qemu_mutex_unlock(&hub->ports_lock);
     return len;
 }
 
@@ -80,6 +85,7 @@ static NetHub *net_hub_new(int id)
     hub = g_malloc(sizeof(*hub));
     hub->id = id;
     hub->num_ports = 0;
+    qemu_mutex_init(&hub->ports_lock);
     QLIST_INIT(&hub->ports);
 
     QLIST_INSERT_HEAD(&hubs, hub, next);
@@ -93,16 +99,19 @@ static int net_hub_port_can_receive(NetClientState *nc)
     NetHubPort *src_port = DO_UPCAST(NetHubPort, nc, nc);
     NetHub *hub = src_port->hub;
 
+    qemu_mutex_lock(&hub->ports_lock);
     QLIST_FOREACH(port, &hub->ports, next) {
         if (port == src_port) {
             continue;
         }
 
         if (qemu_can_send_packet(&port->nc)) {
+            qemu_mutex_unlock(&hub->ports_lock);
             return 1;
         }
     }
 
+    qemu_mutex_unlock(&hub->ports_lock);
     return 0;
 }
 
@@ -155,8 +164,9 @@ static NetHubPort *net_hub_port_new(NetHub *hub, const char *name)
     port = DO_UPCAST(NetHubPort, nc, nc);
     port->id = id;
     port->hub = hub;
-
+    qemu_mutex_lock(&hub->ports_lock);
     QLIST_INSERT_HEAD(&hub->ports, port, next);
+    qemu_mutex_unlock(&hub->ports_lock);
 
     return port;
 }
@@ -197,13 +207,16 @@ NetClientState *net_hub_find_client_by_name(int hub_id, const char *name)
 
     QLIST_FOREACH(hub, &hubs, next) {
         if (hub->id == hub_id) {
+            qemu_mutex_lock(&hub->ports_lock);
             QLIST_FOREACH(port, &hub->ports, next) {
                 peer = port->nc.peer;
 
                 if (peer && strcmp(peer->name, name) == 0) {
+                    qemu_mutex_unlock(&hub->ports_lock);
                     return peer;
                 }
             }
+            qemu_mutex_unlock(&hub->ports_lock);
         }
     }
     return NULL;
@@ -220,12 +233,15 @@ NetClientState *net_hub_port_find(int hub_id)
 
     QLIST_FOREACH(hub, &hubs, next) {
         if (hub->id == hub_id) {
+            qemu_mutex_lock(&hub->ports_lock);
             QLIST_FOREACH(port, &hub->ports, next) {
                 nc = port->nc.peer;
                 if (!nc) {
+                    qemu_mutex_unlock(&hub->ports_lock);
                     return &(port->nc);
                 }
             }
+            qemu_mutex_unlock(&hub->ports_lock);
             break;
         }
     }
@@ -244,12 +260,14 @@ void net_hub_info(Monitor *mon)
 
     QLIST_FOREACH(hub, &hubs, next) {
         monitor_printf(mon, "hub %d\n", hub->id);
+        qemu_mutex_lock(&hub->ports_lock);
         QLIST_FOREACH(port, &hub->ports, next) {
             if (port->nc.peer) {
                 monitor_printf(mon, " \\ ");
                 print_net_client(mon, port->nc.peer);
             }
         }
+        qemu_mutex_unlock(&hub->ports_lock);
     }
 }
 
@@ -306,6 +324,7 @@ void net_hub_check_clients(void)
     QLIST_FOREACH(hub, &hubs, next) {
         int has_nic = 0, has_host_dev = 0;
 
+        qemu_mutex_lock(&hub->ports_lock);
         QLIST_FOREACH(port, &hub->ports, next) {
             peer = port->nc.peer;
             if (!peer) {
@@ -328,6 +347,7 @@ void net_hub_check_clients(void)
                 break;
             }
         }
+        qemu_mutex_unlock(&hub->ports_lock);
         if (has_host_dev && !has_nic) {
             fprintf(stderr, "Warning: vlan %d with no nics\n", hub->id);
         }
@@ -343,12 +363,15 @@ bool net_hub_flush(NetClientState *nc)
 {
     NetHubPort *port;
     NetHubPort *source_port = DO_UPCAST(NetHubPort, nc, nc);
+    NetHub *hub = source_port->hub;
     int ret = 0;
 
+    qemu_mutex_lock(&hub->ports_lock);
     QLIST_FOREACH(port, &source_port->hub->ports, next) {
         if (port != source_port) {
             ret += qemu_net_queue_flush(port->nc.send_queue);
         }
     }
+    qemu_mutex_unlock(&hub->ports_lock);
     return ret ? true : false;
 }
-- 
1.7.4.4


* [Qemu-devel] [PATCH v1 08/14] net: introduce lock to protect NetQueue
From: Liu Ping Fan @ 2013-05-07  5:46 UTC (permalink / raw)
  To: qemu-devel
  Cc: mdroth, Jan Kiszka, Stefan Hajnoczi, Anthony Liguori, Paolo Bonzini

From: Liu Ping Fan <pingfank@linux.vnet.ibm.com>

A NetQueue can be accessed by its owning NetClientState and by that client's
peers at the same time, so a lock is needed to protect it.

Signed-off-by: Liu Ping Fan <pingfank@linux.vnet.ibm.com>
---
 net/queue.c |   11 +++++++++++
 1 files changed, 11 insertions(+), 0 deletions(-)

diff --git a/net/queue.c b/net/queue.c
index 859d02a..2856c1d 100644
--- a/net/queue.c
+++ b/net/queue.c
@@ -53,6 +53,7 @@ struct NetQueue {
     uint32_t nq_maxlen;
     uint32_t nq_count;
 
+    QemuMutex lock;
     QTAILQ_HEAD(packets, NetPacket) packets;
 
     unsigned delivering : 1;
@@ -68,6 +69,7 @@ NetQueue *qemu_new_net_queue(void *opaque)
     queue->nq_maxlen = 10000;
     queue->nq_count = 0;
 
+    qemu_mutex_init(&queue->lock);
     QTAILQ_INIT(&queue->packets);
 
     queue->delivering = 0;
@@ -107,7 +109,9 @@ static void qemu_net_queue_append(NetQueue *queue,
     memcpy(packet->data, buf, size);
 
     queue->nq_count++;
+    qemu_mutex_lock(&queue->lock);
     QTAILQ_INSERT_TAIL(&queue->packets, packet, entry);
+    qemu_mutex_unlock(&queue->lock);
 }
 
 static void qemu_net_queue_append_iov(NetQueue *queue,
@@ -142,7 +146,9 @@ static void qemu_net_queue_append_iov(NetQueue *queue,
     }
 
     queue->nq_count++;
+    qemu_mutex_lock(&queue->lock);
     QTAILQ_INSERT_TAIL(&queue->packets, packet, entry);
+    qemu_mutex_unlock(&queue->lock);
 }
 
 static ssize_t qemu_net_queue_deliver(NetQueue *queue,
@@ -229,6 +235,7 @@ void qemu_net_queue_purge(NetQueue *queue, NetClientState *from)
 {
     NetPacket *packet, *next;
 
+    qemu_mutex_lock(&queue->lock);
     QTAILQ_FOREACH_SAFE(packet, &queue->packets, entry, next) {
         if (packet->sender == from) {
             QTAILQ_REMOVE(&queue->packets, packet, entry);
@@ -236,10 +243,12 @@ void qemu_net_queue_purge(NetQueue *queue, NetClientState *from)
             g_free(packet);
         }
     }
+    qemu_mutex_unlock(&queue->lock);
 }
 
 bool qemu_net_queue_flush(NetQueue *queue)
 {
+    qemu_mutex_lock(&queue->lock);
     while (!QTAILQ_EMPTY(&queue->packets)) {
         NetPacket *packet;
         int ret;
@@ -256,6 +265,7 @@ bool qemu_net_queue_flush(NetQueue *queue)
         if (ret == 0) {
             queue->nq_count++;
             QTAILQ_INSERT_HEAD(&queue->packets, packet, entry);
+            qemu_mutex_unlock(&queue->lock);
             return false;
         }
 
@@ -265,5 +275,6 @@ bool qemu_net_queue_flush(NetQueue *queue)
 
         g_free(packet);
     }
+    qemu_mutex_unlock(&queue->lock);
     return true;
 }
-- 
1.7.4.4


* [Qemu-devel] [PATCH v1 09/14] net: introduce lock to protect NetClientState's peer's access
From: Liu Ping Fan @ 2013-05-07  5:46 UTC (permalink / raw)
  To: qemu-devel
  Cc: mdroth, Jan Kiszka, Stefan Hajnoczi, Anthony Liguori, Paolo Bonzini

From: Liu Ping Fan <pingfank@linux.vnet.ibm.com>

Introduce nc->peer_lock to shield against the race between readers and the
deleter of nc->peer. With it, once the deleter has finished, no new
qemu_send_packet_*() call can append a packet to peer->send_queue, and
therefore no new reference from packet->sender to nc can show up in
nc->peer->send_queue.

Signed-off-by: Liu Ping Fan <pingfank@linux.vnet.ibm.com>
---
 include/net/net.h |    7 +++++
 net/net.c         |   79 ++++++++++++++++++++++++++++++++++++++++++++++++++---
 net/queue.c       |    4 +-
 3 files changed, 84 insertions(+), 6 deletions(-)

diff --git a/include/net/net.h b/include/net/net.h
index 88332d2..54f91ea 100644
--- a/include/net/net.h
+++ b/include/net/net.h
@@ -5,6 +5,7 @@
 #include "qemu-common.h"
 #include "qapi/qmp/qdict.h"
 #include "qemu/option.h"
+#include "qemu/thread.h"
 #include "net/queue.h"
 #include "migration/vmstate.h"
 #include "qapi-types.h"
@@ -63,6 +64,10 @@ struct NetClientState {
     NetClientInfo *info;
     int link_down;
     QTAILQ_ENTRY(NetClientState) next;
+    /* Protect only the race on peer access between readers and the writer;
+     * writer vs. writer races are still resolved by the big lock.
+     */
+    QemuMutex peer_lock;
     NetClientState *peer;
     NetQueue *send_queue;
     char *model;
@@ -75,6 +80,7 @@ struct NetClientState {
 
 typedef struct NICState {
     NetClientState *ncs;
+    NetClientState **pending_peer;
     NICConf *conf;
     void *opaque;
     bool peer_deleted;
@@ -102,6 +108,7 @@ NetClientState *qemu_find_vlan_client_by_name(Monitor *mon, int vlan_id,
                                               const char *client_str);
 typedef void (*qemu_nic_foreach)(NICState *nic, void *opaque);
 void qemu_foreach_nic(qemu_nic_foreach func, void *opaque);
+int qemu_can_send_packet_nolock(NetClientState *sender);
 int qemu_can_send_packet(NetClientState *nc);
 ssize_t qemu_sendv_packet(NetClientState *nc, const struct iovec *iov,
                           int iovcnt);
diff --git a/net/net.c b/net/net.c
index f3d67f8..7619762 100644
--- a/net/net.c
+++ b/net/net.c
@@ -207,6 +207,7 @@ static void qemu_net_client_setup(NetClientState *nc,
         nc->peer = peer;
         peer->peer = nc;
     }
+    qemu_mutex_init(&nc->peer_lock);
     QTAILQ_INSERT_TAIL(&net_clients, nc, next);
 
     nc->send_queue = qemu_new_net_queue(nc);
@@ -246,6 +247,7 @@ NICState *qemu_new_nic(NetClientInfo *info,
     nic->ncs = (void *)nic + info->size;
     nic->conf = conf;
     nic->opaque = opaque;
+    nic->pending_peer = g_malloc0(sizeof(NetClientState *) * queues);
 
     for (i = 0; i < queues; i++) {
         qemu_net_client_setup(&nic->ncs[i], info, peers[i], model, name,
@@ -304,6 +306,38 @@ static void qemu_free_net_client(NetClientState *nc)
     }
 }
 
+/* Eliminate the reference and synchronize with the exit of rx/tx actions,
+ * and flush out the peer's queue.
+ */
+static void qemu_net_client_detach_flush(NetClientState *nc)
+{
+    NetClientState *peer;
+
+    /* Reader of our own peer field.  FIXME: the deleters are not concurrent,
+     * so this lock/unlock pair could probably be dropped.
+     */
+    qemu_mutex_lock(&nc->peer_lock);
+    peer = nc->peer;
+    qemu_mutex_unlock(&nc->peer_lock);
+
+    /* writer of the peer's peer field */
+    if (peer) {
+        /* exclude the race with tx to @nc */
+        qemu_mutex_lock(&peer->peer_lock);
+        peer->peer = NULL;
+        qemu_mutex_unlock(&peer->peer_lock);
+    }
+
+    /* writer of our own peer field */
+    /* exclude the race with tx from @nc */
+    qemu_mutex_lock(&nc->peer_lock);
+    nc->peer = NULL;
+    if (peer) {
+        qemu_net_queue_purge(peer->send_queue, nc);
+    }
+    qemu_mutex_unlock(&nc->peer_lock);
+}
+
 void qemu_del_net_client(NetClientState *nc)
 {
     NetClientState *ncs[MAX_QUEUE_NUM];
@@ -334,7 +368,9 @@ void qemu_del_net_client(NetClientState *nc)
         }
 
         for (i = 0; i < queues; i++) {
+            qemu_net_client_detach_flush(ncs[i]);
             qemu_cleanup_net_client(ncs[i]);
+            nic->pending_peer[i] = ncs[i];
         }
 
         return;
@@ -343,6 +379,7 @@ void qemu_del_net_client(NetClientState *nc)
     assert(nc->info->type != NET_CLIENT_OPTIONS_KIND_NIC);
 
     for (i = 0; i < queues; i++) {
+        qemu_net_client_detach_flush(ncs[i]);
         qemu_cleanup_net_client(ncs[i]);
         qemu_free_net_client(ncs[i]);
     }
@@ -355,17 +392,19 @@ void qemu_del_nic(NICState *nic)
     /* If this is a peer NIC and peer has already been deleted, free it now. */
     if (nic->peer_deleted) {
         for (i = 0; i < queues; i++) {
-            qemu_free_net_client(qemu_get_subqueue(nic, i)->peer);
+            qemu_free_net_client(nic->pending_peer[i]);
         }
     }
 
     for (i = queues - 1; i >= 0; i--) {
         NetClientState *nc = qemu_get_subqueue(nic, i);
 
+        qemu_net_client_detach_flush(nc);
         qemu_cleanup_net_client(nc);
         qemu_free_net_client(nc);
     }
 
+    g_free(nic->pending_peer);
     g_free(nic);
 }
 
@@ -382,7 +421,7 @@ void qemu_foreach_nic(qemu_nic_foreach func, void *opaque)
     }
 }
 
-int qemu_can_send_packet(NetClientState *sender)
+int qemu_can_send_packet_nolock(NetClientState *sender)
 {
     if (!sender->peer) {
         return 1;
@@ -397,6 +436,28 @@ int qemu_can_send_packet(NetClientState *sender)
     return 1;
 }
 
+int qemu_can_send_packet(NetClientState *sender)
+{
+    int ret = 1;
+
+    qemu_mutex_lock(&sender->peer_lock);
+    if (!sender->peer) {
+        goto unlock;
+    }
+
+    if (sender->peer->receive_disabled) {
+        ret = 0;
+        goto unlock;
+    } else if (sender->peer->info->can_receive &&
+               !sender->peer->info->can_receive(sender->peer)) {
+        ret = 0;
+        goto unlock;
+    }
+unlock:
+    qemu_mutex_unlock(&sender->peer_lock);
+    return ret;
+}
+
 ssize_t qemu_deliver_packet(NetClientState *sender,
                             unsigned flags,
                             const uint8_t *data,
@@ -460,19 +521,24 @@ static ssize_t qemu_send_packet_async_with_flags(NetClientState *sender,
                                                  NetPacketSent *sent_cb)
 {
     NetQueue *queue;
+    ssize_t sz;
 
 #ifdef DEBUG_NET
     printf("qemu_send_packet_async:\n");
     hex_dump(stdout, buf, size);
 #endif
 
+    qemu_mutex_lock(&sender->peer_lock);
     if (sender->link_down || !sender->peer) {
+        qemu_mutex_unlock(&sender->peer_lock);
         return size;
     }
 
     queue = sender->peer->send_queue;
 
-    return qemu_net_queue_send(queue, sender, flags, buf, size, sent_cb);
+    sz = qemu_net_queue_send(queue, sender, flags, buf, size, sent_cb);
+    qemu_mutex_unlock(&sender->peer_lock);
+    return sz;
 }
 
 ssize_t qemu_send_packet_async(NetClientState *sender,
@@ -540,16 +606,21 @@ ssize_t qemu_sendv_packet_async(NetClientState *sender,
                                 NetPacketSent *sent_cb)
 {
     NetQueue *queue;
+    ssize_t sz;
 
+    qemu_mutex_lock(&sender->peer_lock);
     if (sender->link_down || !sender->peer) {
+        qemu_mutex_unlock(&sender->peer_lock);
         return iov_size(iov, iovcnt);
     }
 
     queue = sender->peer->send_queue;
 
-    return qemu_net_queue_send_iov(queue, sender,
+    sz = qemu_net_queue_send_iov(queue, sender,
                                    QEMU_NET_PACKET_FLAG_NONE,
                                    iov, iovcnt, sent_cb);
+    qemu_mutex_unlock(&sender->peer_lock);
+    return sz;
 }
 
 ssize_t
diff --git a/net/queue.c b/net/queue.c
index 2856c1d..123c338 100644
--- a/net/queue.c
+++ b/net/queue.c
@@ -190,7 +190,7 @@ ssize_t qemu_net_queue_send(NetQueue *queue,
 {
     ssize_t ret;
 
-    if (queue->delivering || !qemu_can_send_packet(sender)) {
+    if (queue->delivering || !qemu_can_send_packet_nolock(sender)) {
         qemu_net_queue_append(queue, sender, flags, data, size, sent_cb);
         return 0;
     }
@@ -215,7 +215,7 @@ ssize_t qemu_net_queue_send_iov(NetQueue *queue,
 {
     ssize_t ret;
 
-    if (queue->delivering || !qemu_can_send_packet(sender)) {
+    if (queue->delivering || !qemu_can_send_packet_nolock(sender)) {
         qemu_net_queue_append_iov(queue, sender, flags, iov, iovcnt, sent_cb);
         return 0;
     }
-- 
1.7.4.4


* [Qemu-devel] [PATCH v1 10/14] net: make netclient re-entrant with refcnt
From: Liu Ping Fan @ 2013-05-07  5:46 UTC (permalink / raw)
  To: qemu-devel
  Cc: mdroth, Jan Kiszka, Stefan Hajnoczi, Anthony Liguori, Paolo Bonzini

From: Liu Ping Fan <pingfank@linux.vnet.ibm.com>

With a refcount, users of a NetClientState can safely run against its deleter.
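
The rule is the usual one: whoever hands out or stores a NetClientState
pointer takes a reference, and the holder drops it when it is done with the
pointer. A sketch of the holder's side (the hub lookup below really does
return with a reference held; the explicit unref is only for illustration):

    NetClientState *nc = net_hub_port_find(id);  /* returns with a reference held */
    if (nc) {
        /* ... use nc, possibly concurrently with a deleter ... */
        netclient_unref(nc);  /* drop the reference when done */
    }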

Signed-off-by: Liu Ping Fan <pingfank@linux.vnet.ibm.com>
---
 hw/qdev-properties-system.c |   14 +++++++++++++
 include/net/net.h           |    3 ++
 net/hub.c                   |    3 ++
 net/net.c                   |   46 ++++++++++++++++++++++++++++++++++++++++--
 net/slirp.c                 |    3 +-
 5 files changed, 65 insertions(+), 4 deletions(-)

diff --git a/hw/qdev-properties-system.c b/hw/qdev-properties-system.c
index ce3af22..14c6d49 100644
--- a/hw/qdev-properties-system.c
+++ b/hw/qdev-properties-system.c
@@ -301,6 +301,7 @@ static void set_vlan(Object *obj, Visitor *v, void *opaque,
         return;
     }
 
+    /* take a reference; released when the property is unset */
     hubport = net_hub_port_find(id);
     if (!hubport) {
         error_set(errp, QERR_INVALID_PARAMETER_VALUE,
@@ -310,11 +311,24 @@ static void set_vlan(Object *obj, Visitor *v, void *opaque,
     *ptr = hubport;
 }
 
+static void release_vlan(Object *obj, const char *name, void *opaque)
+{
+    DeviceState *dev = DEVICE(obj);
+    Property *prop = opaque;
+    NICPeers *peers_ptr = qdev_get_prop_ptr(dev, prop);
+    NetClientState **ptr = &peers_ptr->ncs[0];
+
+    if (*ptr) {
+        netclient_unref(*ptr);
+    }
+}
+
 PropertyInfo qdev_prop_vlan = {
     .name  = "vlan",
     .print = print_vlan,
     .get   = get_vlan,
     .set   = set_vlan,
+    .release = release_vlan,
 };
 
 int qdev_prop_set_drive(DeviceState *dev, const char *name,
diff --git a/include/net/net.h b/include/net/net.h
index 54f91ea..ef4137d 100644
--- a/include/net/net.h
+++ b/include/net/net.h
@@ -61,6 +61,7 @@ typedef struct NetClientInfo {
 } NetClientInfo;
 
 struct NetClientState {
+    int ref;
     NetClientInfo *info;
     int link_down;
     QTAILQ_ENTRY(NetClientState) next;
@@ -89,6 +90,8 @@ typedef struct NICState {
 NetClientState *qemu_find_netdev(const char *id);
 int qemu_find_net_clients_except(const char *id, NetClientState **ncs,
                                  NetClientOptionsKind type, int max);
+void netclient_ref(NetClientState *nc);
+void netclient_unref(NetClientState *nc);
 NetClientState *qemu_new_net_client(NetClientInfo *info,
                                     NetClientState *peer,
                                     const char *model,
diff --git a/net/hub.c b/net/hub.c
index 812a6dc..2970f8e 100644
--- a/net/hub.c
+++ b/net/hub.c
@@ -212,6 +212,7 @@ NetClientState *net_hub_find_client_by_name(int hub_id, const char *name)
                 peer = port->nc.peer;
 
                 if (peer && strcmp(peer->name, name) == 0) {
+                    netclient_ref(peer);
                     qemu_mutex_unlock(&hub->ports_lock);
                     return peer;
                 }
@@ -237,6 +238,7 @@ NetClientState *net_hub_port_find(int hub_id)
             QLIST_FOREACH(port, &hub->ports, next) {
                 nc = port->nc.peer;
                 if (!nc) {
+                    netclient_ref(&port->nc);
                     qemu_mutex_unlock(&hub->ports_lock);
                     return &(port->nc);
                 }
@@ -247,6 +249,7 @@ NetClientState *net_hub_port_find(int hub_id)
     }
 
     nc = net_hub_add_port(hub_id, NULL);
+    netclient_ref(nc);
     return nc;
 }
 
diff --git a/net/net.c b/net/net.c
index 7619762..ac859ff 100644
--- a/net/net.c
+++ b/net/net.c
@@ -45,6 +45,7 @@
 # define CONFIG_NET_BRIDGE
 #endif
 
+static QemuMutex net_clients_lock;
 static QTAILQ_HEAD(, NetClientState) net_clients;
 
 int default_net = 1;
@@ -166,6 +167,7 @@ static char *assign_name(NetClientState *nc1, const char *model)
     char buf[256];
     int id = 0;
 
+    qemu_mutex_lock(&net_clients_lock);
     QTAILQ_FOREACH(nc, &net_clients, next) {
         if (nc == nc1) {
             continue;
@@ -176,6 +178,7 @@ static char *assign_name(NetClientState *nc1, const char *model)
             id++;
         }
     }
+    qemu_mutex_unlock(&net_clients_lock);
 
     snprintf(buf, sizeof(buf), "%s.%d", model, id);
 
@@ -206,9 +209,13 @@ static void qemu_net_client_setup(NetClientState *nc,
         assert(!peer->peer);
         nc->peer = peer;
         peer->peer = nc;
+        netclient_ref(peer);
+        netclient_ref(nc);
     }
     qemu_mutex_init(&nc->peer_lock);
+    qemu_mutex_lock(&net_clients_lock);
     QTAILQ_INSERT_TAIL(&net_clients, nc, next);
+    qemu_mutex_unlock(&net_clients_lock);
 
     nc->send_queue = qemu_new_net_queue(nc);
     nc->destructor = destructor;
@@ -224,6 +231,7 @@ NetClientState *qemu_new_net_client(NetClientInfo *info,
     assert(info->size >= sizeof(NetClientState));
 
     nc = g_malloc0(info->size);
+    netclient_ref(nc);
     qemu_net_client_setup(nc, info, peer, model, name,
                           qemu_net_client_destructor);
 
@@ -284,7 +292,9 @@ void *qemu_get_nic_opaque(NetClientState *nc)
 
 static void qemu_cleanup_net_client(NetClientState *nc)
 {
+    qemu_mutex_lock(&net_clients_lock);
     QTAILQ_REMOVE(&net_clients, nc, next);
+    qemu_mutex_unlock(&net_clients_lock);
 
     if (nc->info->cleanup) {
         nc->info->cleanup(nc);
@@ -306,6 +316,18 @@ static void qemu_free_net_client(NetClientState *nc)
     }
 }
 
+void netclient_ref(NetClientState *nc)
+{
+    __sync_add_and_fetch(&nc->ref, 1);
+}
+
+void netclient_unref(NetClientState *nc)
+{
+    if (__sync_sub_and_fetch(&nc->ref, 1) == 0) {
+        qemu_free_net_client(nc);
+    }
+}
+
 /* elimate the reference and sync with exit of rx/tx action.
  * And flush out peer's queue.
  */
@@ -334,8 +356,10 @@ static void qemu_net_client_detach_flush(NetClientState *nc)
     nc->peer = NULL;
     if (peer) {
         qemu_net_queue_purge(peer->send_queue, nc);
+        netclient_unref(peer);
     }
     qemu_mutex_unlock(&nc->peer_lock);
+    netclient_unref(nc);
 }
 
 void qemu_del_net_client(NetClientState *nc)
@@ -381,7 +405,7 @@ void qemu_del_net_client(NetClientState *nc)
     for (i = 0; i < queues; i++) {
         qemu_net_client_detach_flush(ncs[i]);
         qemu_cleanup_net_client(ncs[i]);
-        qemu_free_net_client(ncs[i]);
+        netclient_unref(ncs[i]);
     }
 }
 
@@ -392,7 +416,7 @@ void qemu_del_nic(NICState *nic)
     /* If this is a peer NIC and peer has already been deleted, free it now. */
     if (nic->peer_deleted) {
         for (i = 0; i < queues; i++) {
-            qemu_free_net_client(nic->pending_peer[i]);
+            netclient_unref(nic->pending_peer[i]);
         }
     }
 
@@ -401,7 +425,7 @@ void qemu_del_nic(NICState *nic)
 
         qemu_net_client_detach_flush(nc);
         qemu_cleanup_net_client(nc);
-        qemu_free_net_client(nc);
+        netclient_unref(nc);
     }
 
     g_free(nic->pending_peer);
@@ -412,6 +436,7 @@ void qemu_foreach_nic(qemu_nic_foreach func, void *opaque)
 {
     NetClientState *nc;
 
+    qemu_mutex_lock(&net_clients_lock);
     QTAILQ_FOREACH(nc, &net_clients, next) {
         if (nc->info->type == NET_CLIENT_OPTIONS_KIND_NIC) {
             if (nc->queue_index == 0) {
@@ -419,6 +444,7 @@ void qemu_foreach_nic(qemu_nic_foreach func, void *opaque)
             }
         }
     }
+    qemu_mutex_unlock(&net_clients_lock);
 }
 
 int qemu_can_send_packet_nolock(NetClientState *sender)
@@ -633,13 +659,17 @@ NetClientState *qemu_find_netdev(const char *id)
 {
     NetClientState *nc;
 
+    qemu_mutex_lock(&net_clients_lock);
     QTAILQ_FOREACH(nc, &net_clients, next) {
         if (nc->info->type == NET_CLIENT_OPTIONS_KIND_NIC)
             continue;
         if (!strcmp(nc->name, id)) {
+            netclient_ref(nc);
+            qemu_mutex_unlock(&net_clients_lock);
             return nc;
         }
     }
+    qemu_mutex_unlock(&net_clients_lock);
 
     return NULL;
 }
@@ -650,6 +680,7 @@ int qemu_find_net_clients_except(const char *id, NetClientState **ncs,
     NetClientState *nc;
     int ret = 0;
 
+    qemu_mutex_lock(&net_clients_lock);
     QTAILQ_FOREACH(nc, &net_clients, next) {
         if (nc->info->type == type) {
             continue;
@@ -661,6 +692,7 @@ int qemu_find_net_clients_except(const char *id, NetClientState **ncs,
             ret++;
         }
     }
+    qemu_mutex_unlock(&net_clients_lock);
 
     return ret;
 }
@@ -969,6 +1001,7 @@ void net_host_device_remove(Monitor *mon, const QDict *qdict)
         return;
     }
     qemu_del_net_client(nc);
+    netclient_unref(nc);
 }
 
 void netdev_add(QemuOpts *opts, Error **errp)
@@ -1024,6 +1057,7 @@ void qmp_netdev_del(const char *id, Error **errp)
     }
 
     qemu_del_net_client(nc);
+    netclient_unref(nc);
     qemu_opts_del(opts);
 }
 
@@ -1042,6 +1076,7 @@ void do_info_network(Monitor *mon, const QDict *qdict)
 
     net_hub_info(mon);
 
+    qemu_mutex_lock(&net_clients_lock);
     QTAILQ_FOREACH(nc, &net_clients, next) {
         peer = nc->peer;
         type = nc->info->type;
@@ -1059,6 +1094,7 @@ void do_info_network(Monitor *mon, const QDict *qdict)
             print_net_client(mon, peer);
         }
     }
+    qemu_mutex_unlock(&net_clients_lock);
 }
 
 void qmp_set_link(const char *name, bool up, Error **errp)
@@ -1112,6 +1148,7 @@ void net_cleanup(void)
             qemu_del_net_client(nc);
         }
     }
+    qemu_mutex_destroy(&net_clients_lock);
 }
 
 void net_check_clients(void)
@@ -1133,6 +1170,7 @@ void net_check_clients(void)
 
     net_hub_check_clients();
 
+    qemu_mutex_lock(&net_clients_lock);
     QTAILQ_FOREACH(nc, &net_clients, next) {
         if (!nc->peer) {
             fprintf(stderr, "Warning: %s %s has no peer\n",
@@ -1140,6 +1178,7 @@ void net_check_clients(void)
                     "nic" : "netdev", nc->name);
         }
     }
+    qemu_mutex_unlock(&net_clients_lock);
 
     /* Check that all NICs requested via -net nic actually got created.
      * NICs created via -device don't need to be checked here because
@@ -1197,6 +1236,7 @@ int net_init_clients(void)
 #endif
     }
 
+    qemu_mutex_init(&net_clients_lock);
     QTAILQ_INIT(&net_clients);
 
     if (qemu_opts_foreach(qemu_find_opts("netdev"), net_init_netdev, NULL, 1) == -1)
diff --git a/net/slirp.c b/net/slirp.c
index 4df550f..a6116d5 100644
--- a/net/slirp.c
+++ b/net/slirp.c
@@ -346,7 +346,7 @@ void net_slirp_hostfwd_remove(Monitor *mon, const QDict *qdict)
 
     err = slirp_remove_hostfwd(QTAILQ_FIRST(&slirp_stacks)->slirp, is_udp,
                                host_addr, host_port);
-
+    netclient_unref(&s->nc);
     monitor_printf(mon, "host forwarding rule for %s %s\n", src_str,
                    err ? "not found" : "removed");
     return;
@@ -437,6 +437,7 @@ void net_slirp_hostfwd_add(Monitor *mon, const QDict *qdict)
     }
     if (s) {
         slirp_hostfwd(s, redir_str, 0);
+        netclient_unref(&s->nc);
     }
 
 }
-- 
1.7.4.4

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [Qemu-devel] [PATCH v1 11/14] slirp: make timeout local
  2013-05-07  5:46 [Qemu-devel] [PATCH v1 00/14] port network layer onto glib Liu Ping Fan
                   ` (9 preceding siblings ...)
  2013-05-07  5:46 ` [Qemu-devel] [PATCH v1 10/14] net: make netclient re-entrant with refcnt Liu Ping Fan
@ 2013-05-07  5:46 ` Liu Ping Fan
  2013-05-07  5:47 ` [Qemu-devel] [PATCH v1 12/14] slirp: make slirp event dispatch based on slirp instance, not global Liu Ping Fan
                   ` (3 subsequent siblings)
  14 siblings, 0 replies; 20+ messages in thread
From: Liu Ping Fan @ 2013-05-07  5:46 UTC (permalink / raw)
  To: qemu-devel
  Cc: mdroth, Jan Kiszka, Stefan Hajnoczi, Anthony Liguori, Paolo Bonzini

From: Liu Ping Fan <pingfank@linux.vnet.ibm.com>

Each slirp instance has its own timer state to calculate timeouts.

Signed-off-by: Liu Ping Fan <pingfank@linux.vnet.ibm.com>
---
 slirp/slirp.c |   22 ++++++++++------------
 slirp/slirp.h |    3 +++
 2 files changed, 13 insertions(+), 12 deletions(-)

diff --git a/slirp/slirp.c b/slirp/slirp.c
index bd9b7cb..08c6b26 100644
--- a/slirp/slirp.c
+++ b/slirp/slirp.c
@@ -40,8 +40,6 @@ static const uint8_t special_ethaddr[ETH_ALEN] = {
 static const uint8_t zero_ethaddr[ETH_ALEN] = { 0, 0, 0, 0, 0, 0 };
 
 u_int curtime;
-static u_int time_fasttimo, last_slowtimo;
-static int do_slowtimo;
 
 static QTAILQ_HEAD(slirp_instances, Slirp) slirp_instances =
     QTAILQ_HEAD_INITIALIZER(slirp_instances);
@@ -278,14 +276,13 @@ void slirp_pollfds_fill(GArray *pollfds)
     /*
      * First, TCP sockets
      */
-    do_slowtimo = 0;
 
     QTAILQ_FOREACH(slirp, &slirp_instances, entry) {
         /*
          * *_slowtimo needs calling if there are IP fragments
          * in the fragment queue, or there are TCP connections active
          */
-        do_slowtimo |= ((slirp->tcb.so_next != &slirp->tcb) ||
+        slirp->do_slowtimo = ((slirp->tcb.so_next != &slirp->tcb) ||
                 (&slirp->ipq.ip_link != slirp->ipq.ip_link.next));
 
         for (so = slirp->tcb.so_next; so != &slirp->tcb;
@@ -299,8 +296,9 @@ void slirp_pollfds_fill(GArray *pollfds)
             /*
              * See if we need a tcp_fasttimo
              */
-            if (time_fasttimo == 0 && so->so_tcpcb->t_flags & TF_DELACK) {
-                time_fasttimo = curtime; /* Flag when we want a fasttimo */
+            if (slirp->time_fasttimo == 0 &&
+                so->so_tcpcb->t_flags & TF_DELACK) {
+                slirp->time_fasttimo = curtime; /* Flag when want a fasttimo */
             }
 
             /*
@@ -381,7 +379,7 @@ void slirp_pollfds_fill(GArray *pollfds)
                     udp_detach(so);
                     continue;
                 } else {
-                    do_slowtimo = 1; /* Let socket expire */
+                    slirp->do_slowtimo = 1; /* Let socket expire */
                 }
             }
 
@@ -422,7 +420,7 @@ void slirp_pollfds_fill(GArray *pollfds)
                     icmp_detach(so);
                     continue;
                 } else {
-                    do_slowtimo = 1; /* Let socket expire */
+                    slirp->do_slowtimo = 1; /* Let socket expire */
                 }
             }
 
@@ -454,14 +452,14 @@ void slirp_pollfds_poll(GArray *pollfds, int select_error)
         /*
          * See if anything has timed out
          */
-        if (time_fasttimo && ((curtime - time_fasttimo) >= 2)) {
+        if (slirp->time_fasttimo && ((curtime - slirp->time_fasttimo) >= 2)) {
             tcp_fasttimo(slirp);
-            time_fasttimo = 0;
+            slirp->time_fasttimo = 0;
         }
-        if (do_slowtimo && ((curtime - last_slowtimo) >= 499)) {
+        if (slirp->do_slowtimo && ((curtime - slirp->last_slowtimo) >= 499)) {
             ip_slowtimo(slirp);
             tcp_slowtimo(slirp);
-            last_slowtimo = curtime;
+            slirp->last_slowtimo = curtime;
         }
 
         /*
diff --git a/slirp/slirp.h b/slirp/slirp.h
index fe0e65d..008360e 100644
--- a/slirp/slirp.h
+++ b/slirp/slirp.h
@@ -203,6 +203,9 @@ bool arp_table_search(Slirp *slirp, uint32_t ip_addr,
 
 struct Slirp {
     QTAILQ_ENTRY(Slirp) entry;
+    u_int time_fasttimo;
+    u_int last_slowtimo;
+    int do_slowtimo;
 
     /* virtual network configuration */
     struct in_addr vnetwork_addr;
-- 
1.7.4.4

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [Qemu-devel] [PATCH v1 12/14] slirp: make slirp event dispatch based on slirp instance, not global
  2013-05-07  5:46 [Qemu-devel] [PATCH v1 00/14] port network layer onto glib Liu Ping Fan
                   ` (10 preceding siblings ...)
  2013-05-07  5:46 ` [Qemu-devel] [PATCH v1 11/14] slirp: make timeout local Liu Ping Fan
@ 2013-05-07  5:47 ` Liu Ping Fan
  2013-05-07  5:47 ` [Qemu-devel] [PATCH v1 13/14] slirp: handle race condition Liu Ping Fan
                   ` (2 subsequent siblings)
  14 siblings, 0 replies; 20+ messages in thread
From: Liu Ping Fan @ 2013-05-07  5:47 UTC (permalink / raw)
  To: qemu-devel
  Cc: mdroth, Jan Kiszka, Stefan Hajnoczi, Anthony Liguori, Paolo Bonzini

From: Liu Ping Fan <pingfank@linux.vnet.ibm.com>

Split the slirp_pollfds_fill/_poll work into each slirp instance, so that a
SlirpState can run on a dedicated context. Each slirp socket corresponds to a
GPollFD, and each SlirpState is backed by a GSource (EventsGSource). As a
result, different SlirpStates can run on different contexts.

The logic of slirp_pollfds_fill/_poll is unchanged, but since those functions
are dropped, the code is rearranged to follow the coding style. Other minor
changes follow the style of the nearby code.
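
For orientation, a prepare/dispatch pair plugs into a glib GSource roughly as
sketched below. This is a generic glib illustration, not the EventsGSource
helper from patch 1; the single pfd, the check() body and the "slirp" pointer
(the Slirp instance) are placeholders.

    static GPollFD pfd;                 /* one fd, for illustration only */

    static gboolean check(GSource *src)
    {
        return pfd.revents != 0;        /* ready once poll reported events */
    }

    static gboolean dispatch(GSource *src, GSourceFunc cb, gpointer data)
    {
        return cb(data);                /* ends up calling slirp_handler() */
    }

    static GSourceFuncs funcs = {
        .prepare  = slirp_prepare,      /* fills each socket's GPollFD->events */
        .check    = check,
        .dispatch = dispatch,
    };

    /* at setup time: */
    GSource *src = g_source_new(&funcs, sizeof(GSource));
    g_source_add_poll(src, &pfd);
    g_source_set_callback(src, slirp_handler, slirp, NULL);
    g_source_attach(src, NULL);         /* NULL: default main context */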

Signed-off-by: Liu Ping Fan <pingfank@linux.vnet.ibm.com>
---
 main-loop.c      |    4 -
 net/slirp.c      |   32 +++
 slirp/libslirp.h |    7 +-
 slirp/slirp.c    |  567 +++++++++++++++++++++++++-----------------------------
 slirp/socket.c   |    2 +
 slirp/socket.h   |    1 +
 stubs/slirp.c    |    8 -
 7 files changed, 299 insertions(+), 322 deletions(-)

diff --git a/main-loop.c b/main-loop.c
index 8c9b58c..970f25d 100644
--- a/main-loop.c
+++ b/main-loop.c
@@ -432,14 +432,10 @@ int main_loop_wait(int nonblocking)
     /* XXX: separate device handlers from system ones */
 #ifdef CONFIG_SLIRP
     slirp_update_timeout(&timeout);
-    slirp_pollfds_fill(gpollfds);
 #endif
     qemu_iohandler_fill(gpollfds);
     ret = os_host_main_loop_wait(timeout);
     qemu_iohandler_poll(gpollfds, ret);
-#ifdef CONFIG_SLIRP
-    slirp_pollfds_poll(gpollfds, (ret < 0));
-#endif
 
     qemu_run_all_timers();
 
diff --git a/net/slirp.c b/net/slirp.c
index a6116d5..6ff5ca8 100644
--- a/net/slirp.c
+++ b/net/slirp.c
@@ -36,6 +36,7 @@
 #include "qemu/sockets.h"
 #include "slirp/libslirp.h"
 #include "char/char.h"
+#include "util/event_gsource.h"
 
 static int get_str_sep(char *buf, int buf_size, const char **pp, int sep)
 {
@@ -76,6 +77,7 @@ typedef struct SlirpState {
 #ifndef _WIN32
     char smb_dir[128];
 #endif
+    EventsGSource *slirp_src;
 } SlirpState;
 
 static struct slirp_config_str *slirp_configs;
@@ -120,17 +122,44 @@ static void net_slirp_cleanup(NetClientState *nc)
     SlirpState *s = DO_UPCAST(SlirpState, nc, nc);
 
     slirp_cleanup(s->slirp);
+    events_source_release(s->slirp_src);
     slirp_smb_cleanup(s);
     QTAILQ_REMOVE(&slirp_stacks, s, entry);
 }
 
+static void net_slirp_bind_ctx(NetClientState *nc, GMainContext *ctx)
+{
+    SlirpState *s = DO_UPCAST(SlirpState, nc, nc);
+
+    g_source_attach(&s->slirp_src->source, ctx);
+}
+
 static NetClientInfo net_slirp_info = {
     .type = NET_CLIENT_OPTIONS_KIND_USER,
     .size = sizeof(SlirpState),
     .receive = net_slirp_receive,
     .cleanup = net_slirp_cleanup,
+    .bind_ctx = net_slirp_bind_ctx,
 };
 
+GPollFD *slirp_gsource_get_gfd(void *opaque, int fd)
+{
+    GPollFD *retfd;
+    SlirpState *s = opaque;
+    EventsGSource *src = s->slirp_src;
+    retfd = events_source_add_gfd(src, fd);
+
+    return retfd;
+}
+
+void slirp_gsource_close_gfd(void *opaque, GPollFD *pollfd)
+{
+    SlirpState *s = opaque;
+    EventsGSource *src = s->slirp_src;
+
+    events_source_remove_gfd(src, pollfd);
+}
+
 static int net_slirp_init(NetClientState *peer, const char *model,
                           const char *name, int restricted,
                           const char *vnetwork, const char *vhost,
@@ -244,6 +273,8 @@ static int net_slirp_init(NetClientState *peer, const char *model,
 
     s->slirp = slirp_init(restricted, net, mask, host, vhostname,
                           tftp_export, bootfile, dhcp, dns, dnssearch, s);
+    s->slirp_src = events_source_new(slirp_prepare, slirp_handler, s->slirp);
+
     QTAILQ_INSERT_TAIL(&slirp_stacks, s, entry);
 
     for (config = slirp_configs; config; config = config->next) {
@@ -266,6 +297,7 @@ static int net_slirp_init(NetClientState *peer, const char *model,
             goto error;
     }
 #endif
+    s->nc.info->bind_ctx(&s->nc, NULL);
 
     return 0;
 
diff --git a/slirp/libslirp.h b/slirp/libslirp.h
index ceabff8..1aad5a4 100644
--- a/slirp/libslirp.h
+++ b/slirp/libslirp.h
@@ -17,11 +17,10 @@ Slirp *slirp_init(int restricted, struct in_addr vnetwork,
 void slirp_cleanup(Slirp *slirp);
 
 void slirp_update_timeout(uint32_t *timeout);
-void slirp_pollfds_fill(GArray *pollfds);
-
-void slirp_pollfds_poll(GArray *pollfds, int select_error);
 
 void slirp_input(Slirp *slirp, const uint8_t *pkt, int pkt_len);
+gboolean slirp_prepare(GSource *source, gint *time);
+gboolean slirp_handler(gpointer data);
 
 /* you must provide the following functions: */
 void slirp_output(void *opaque, const uint8_t *pkt, int pkt_len);
@@ -40,5 +39,7 @@ void slirp_socket_recv(Slirp *slirp, struct in_addr guest_addr,
                        int guest_port, const uint8_t *buf, int size);
 size_t slirp_socket_can_recv(Slirp *slirp, struct in_addr guest_addr,
                              int guest_port);
+GPollFD *slirp_gsource_get_gfd(void *opaque, int fd);
+void slirp_gsource_close_gfd(void *opaque, GPollFD *pollfd);
 
 #endif
diff --git a/slirp/slirp.c b/slirp/slirp.c
index 08c6b26..691f82f 100644
--- a/slirp/slirp.c
+++ b/slirp/slirp.c
@@ -26,6 +26,7 @@
 #include "char/char.h"
 #include "slirp.h"
 #include "hw/hw.h"
+#include "util/event_gsource.h"
 
 /* host loopback address */
 struct in_addr loopback_addr;
@@ -262,386 +263,338 @@ void slirp_update_timeout(uint32_t *timeout)
     if (!QTAILQ_EMPTY(&slirp_instances)) {
         *timeout = MIN(1000, *timeout);
     }
+    curtime = qemu_get_clock_ms(rt_clock);
 }
 
-void slirp_pollfds_fill(GArray *pollfds)
+gboolean slirp_prepare(GSource *source, gint *time)
 {
-    Slirp *slirp;
+    EventsGSource *slirp_src = (EventsGSource *)source;
+    Slirp *slirp = slirp_src->opaque;
     struct socket *so, *so_next;
-
-    if (QTAILQ_EMPTY(&slirp_instances)) {
-        return;
-    }
+    int events = 0;
 
     /*
-     * First, TCP sockets
+     * *_slowtimo needs calling if there are IP fragments
+     * in the fragment queue, or there are TCP connections active
      */
+    slirp->do_slowtimo = ((slirp->tcb.so_next != &slirp->tcb) ||
+            (&slirp->ipq.ip_link != slirp->ipq.ip_link.next));
+
+    for (so = slirp->tcb.so_next; so != &slirp->tcb;
+            so = so_next) {
 
-    QTAILQ_FOREACH(slirp, &slirp_instances, entry) {
+        so_next = so->so_next;
+        if (so->pollfd->fd == -1 && so->s != -1) {
+            so->pollfd->fd = so->s;
+            g_source_add_poll(source, so->pollfd);
+        }
         /*
-         * *_slowtimo needs calling if there are IP fragments
-         * in the fragment queue, or there are TCP connections active
+         * See if we need a tcp_fasttimo
          */
-        slirp->do_slowtimo = ((slirp->tcb.so_next != &slirp->tcb) ||
-                (&slirp->ipq.ip_link != slirp->ipq.ip_link.next));
-
-        for (so = slirp->tcb.so_next; so != &slirp->tcb;
-                so = so_next) {
-            int events = 0;
-
-            so_next = so->so_next;
-
-            so->pollfds_idx = -1;
-
-            /*
-             * See if we need a tcp_fasttimo
-             */
-            if (slirp->time_fasttimo == 0 &&
-                so->so_tcpcb->t_flags & TF_DELACK) {
-                slirp->time_fasttimo = curtime; /* Flag when want a fasttimo */
-            }
-
-            /*
-             * NOFDREF can include still connecting to local-host,
-             * newly socreated() sockets etc. Don't want to select these.
-             */
-            if (so->so_state & SS_NOFDREF || so->s == -1) {
-                continue;
-            }
-
-            /*
-             * Set for reading sockets which are accepting
-             */
-            if (so->so_state & SS_FACCEPTCONN) {
-                GPollFD pfd = {
-                    .fd = so->s,
-                    .events = G_IO_IN | G_IO_HUP | G_IO_ERR,
-                };
-                so->pollfds_idx = pollfds->len;
-                g_array_append_val(pollfds, pfd);
-                continue;
-            }
+        if (slirp->time_fasttimo == 0 &&
+             so->so_tcpcb->t_flags & TF_DELACK) {
+            slirp->time_fasttimo = curtime; /* Flag when want a fasttimo */
+        }
 
-            /*
-             * Set for writing sockets which are connecting
-             */
-            if (so->so_state & SS_ISFCONNECTING) {
-                GPollFD pfd = {
-                    .fd = so->s,
-                    .events = G_IO_OUT | G_IO_ERR,
-                };
-                so->pollfds_idx = pollfds->len;
-                g_array_append_val(pollfds, pfd);
-                continue;
-            }
+        /*
+         * NOFDREF can include still connecting to local-host,
+         * newly socreated() sockets etc. Don't want to select these.
+         */
+        if (so->so_state & SS_NOFDREF || so->s == -1) {
+            continue;
+        }
 
-            /*
-             * Set for writing if we are connected, can send more, and
-             * we have something to send
-             */
-            if (CONN_CANFSEND(so) && so->so_rcv.sb_cc) {
-                events |= G_IO_OUT | G_IO_ERR;
-            }
+        /*
+         * Set for reading sockets which are accepting
+         */
+        if (so->so_state & SS_FACCEPTCONN) {
+            so->pollfd->events = G_IO_IN | G_IO_HUP | G_IO_ERR;
+            continue;
+        }
 
-            /*
-             * Set for reading (and urgent data) if we are connected, can
-             * receive more, and we have room for it XXX /2 ?
-             */
-            if (CONN_CANFRCV(so) &&
-                (so->so_snd.sb_cc < (so->so_snd.sb_datalen/2))) {
-                events |= G_IO_IN | G_IO_HUP | G_IO_ERR | G_IO_PRI;
-            }
+        /*
+         * Set for writing sockets which are connecting
+         */
+        if (so->so_state & SS_ISFCONNECTING) {
+            so->pollfd->events = G_IO_OUT | G_IO_ERR;
+            continue;
+        }
 
-            if (events) {
-                GPollFD pfd = {
-                    .fd = so->s,
-                    .events = events,
-                };
-                so->pollfds_idx = pollfds->len;
-                g_array_append_val(pollfds, pfd);
-            }
+        /*
+         * Set for writing if we are connected, can send more, and
+         * we have something to send
+         */
+        if (CONN_CANFSEND(so) && so->so_rcv.sb_cc) {
+            events |= G_IO_OUT | G_IO_ERR;
         }
 
         /*
-         * UDP sockets
+         * Set for reading (and urgent data) if we are connected, can
+         * receive more, and we have room for it XXX /2 ?
          */
-        for (so = slirp->udb.so_next; so != &slirp->udb;
-                so = so_next) {
-            so_next = so->so_next;
+        if (CONN_CANFRCV(so) &&
+            (so->so_snd.sb_cc < (so->so_snd.sb_datalen/2))) {
+            events |= G_IO_IN | G_IO_HUP | G_IO_ERR | G_IO_PRI;
+        }
 
-            so->pollfds_idx = -1;
+        if (events) {
+            so->pollfd->events = events;
+        }
+    }
 
-            /*
-             * See if it's timed out
-             */
-            if (so->so_expire) {
-                if (so->so_expire <= curtime) {
-                    udp_detach(so);
-                    continue;
-                } else {
-                    slirp->do_slowtimo = 1; /* Let socket expire */
-                }
-            }
+    /*
+     * UDP sockets
+     */
+    for (so = slirp->udb.so_next; so != &slirp->udb;
+            so = so_next) {
+        so_next = so->so_next;
 
-            /*
-             * When UDP packets are received from over the
-             * link, they're sendto()'d straight away, so
-             * no need for setting for writing
-             * Limit the number of packets queued by this session
-             * to 4.  Note that even though we try and limit this
-             * to 4 packets, the session could have more queued
-             * if the packets needed to be fragmented
-             * (XXX <= 4 ?)
-             */
-            if ((so->so_state & SS_ISFCONNECTED) && so->so_queued <= 4) {
-                GPollFD pfd = {
-                    .fd = so->s,
-                    .events = G_IO_IN | G_IO_HUP | G_IO_ERR,
-                };
-                so->pollfds_idx = pollfds->len;
-                g_array_append_val(pollfds, pfd);
+        /*
+         * See if it's timed out
+         */
+        if (so->so_expire) {
+            if (so->so_expire <= curtime) {
+                udp_detach(so);
+                continue;
+            } else {
+                slirp->do_slowtimo = 1; /* Let socket expire */
             }
         }
 
         /*
-         * ICMP sockets
+         * When UDP packets are received from over the
+         * link, they're sendto()'d straight away, so
+         * no need for setting for writing
+         * Limit the number of packets queued by this session
+         * to 4.  Note that even though we try and limit this
+         * to 4 packets, the session could have more queued
+         * if the packets needed to be fragmented
+         * (XXX <= 4 ?)
          */
-        for (so = slirp->icmp.so_next; so != &slirp->icmp;
-                so = so_next) {
-            so_next = so->so_next;
+        if ((so->so_state & SS_ISFCONNECTED) && so->so_queued <= 4) {
+            so->pollfd->events = G_IO_IN | G_IO_HUP | G_IO_ERR;
+        }
+    }
 
-            so->pollfds_idx = -1;
+    /*
+     * ICMP sockets
+     */
+    for (so = slirp->icmp.so_next; so != &slirp->icmp;
+            so = so_next) {
+        so_next = so->so_next;
 
-            /*
-             * See if it's timed out
-             */
-            if (so->so_expire) {
-                if (so->so_expire <= curtime) {
-                    icmp_detach(so);
-                    continue;
-                } else {
-                    slirp->do_slowtimo = 1; /* Let socket expire */
-                }
+        /*
+         * See if it's timed out
+         */
+        if (so->so_expire) {
+            if (so->so_expire <= curtime) {
+                icmp_detach(so);
+                continue;
+            } else {
+                slirp->do_slowtimo = 1; /* Let socket expire */
             }
+        }
 
-            if (so->so_state & SS_ISFCONNECTED) {
-                GPollFD pfd = {
-                    .fd = so->s,
-                    .events = G_IO_IN | G_IO_HUP | G_IO_ERR,
-                };
-                so->pollfds_idx = pollfds->len;
-                g_array_append_val(pollfds, pfd);
-            }
+        if (so->so_state & SS_ISFCONNECTED) {
+            so->pollfd->events = G_IO_IN | G_IO_HUP | G_IO_ERR;
         }
     }
+
+    return false;
 }
 
-void slirp_pollfds_poll(GArray *pollfds, int select_error)
+gboolean slirp_handler(gpointer data)
 {
-    Slirp *slirp;
+    EventsGSource *src = data;
+    Slirp *slirp = src->opaque;
     struct socket *so, *so_next;
     int ret;
 
-    if (QTAILQ_EMPTY(&slirp_instances)) {
-        return;
+    /*
+     * See if anything has timed out
+     */
+    if (slirp->time_fasttimo && ((curtime - slirp->time_fasttimo) >= 2)) {
+        tcp_fasttimo(slirp);
+        slirp->time_fasttimo = 0;
+    }
+    if (slirp->do_slowtimo && ((curtime - slirp->last_slowtimo) >= 499)) {
+        ip_slowtimo(slirp);
+        tcp_slowtimo(slirp);
+        slirp->last_slowtimo = curtime;
     }
 
-    curtime = qemu_get_clock_ms(rt_clock);
+    /*
+     * Check TCP sockets
+     */
+    for (so = slirp->tcb.so_next; so != &slirp->tcb;
+            so = so_next) {
+        int revents;
 
-    QTAILQ_FOREACH(slirp, &slirp_instances, entry) {
-        /*
-         * See if anything has timed out
-         */
-        if (slirp->time_fasttimo && ((curtime - slirp->time_fasttimo) >= 2)) {
-            tcp_fasttimo(slirp);
-            slirp->time_fasttimo = 0;
+        so_next = so->so_next;
+
+        revents = 0;
+        if (so->pollfd) {
+            revents = so->pollfd->revents;
         }
-        if (slirp->do_slowtimo && ((curtime - slirp->last_slowtimo) >= 499)) {
-            ip_slowtimo(slirp);
-            tcp_slowtimo(slirp);
-            slirp->last_slowtimo = curtime;
+        if (so->so_state & SS_NOFDREF || so->s == -1) {
+            continue;
         }
 
         /*
-         * Check sockets
+         * Check for URG data
+         * This will soread as well, so no need to
+         * test for G_IO_IN below if this succeeds
          */
-        if (!select_error) {
+        if (revents & G_IO_PRI) {
+            sorecvoob(so);
+        }
+        /*
+         * Check sockets for reading
+         */
+        else if (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR)) {
             /*
-             * Check TCP sockets
+             * Check for incoming connections
              */
-            for (so = slirp->tcb.so_next; so != &slirp->tcb;
-                    so = so_next) {
-                int revents;
-
-                so_next = so->so_next;
-
-                revents = 0;
-                if (so->pollfds_idx != -1) {
-                    revents = g_array_index(pollfds, GPollFD,
-                                            so->pollfds_idx).revents;
-                }
+            if (so->so_state & SS_FACCEPTCONN) {
+                tcp_connect(so);
+                continue;
+            } /* else */
+            ret = soread(so);
 
-                if (so->so_state & SS_NOFDREF || so->s == -1) {
-                    continue;
-                }
+            /* Output it if we read something */
+            if (ret > 0) {
+                tcp_output(sototcpcb(so));
+            }
+        }
 
-                /*
-                 * Check for URG data
-                 * This will soread as well, so no need to
-                 * test for G_IO_IN below if this succeeds
-                 */
-                if (revents & G_IO_PRI) {
-                    sorecvoob(so);
-                }
-                /*
-                 * Check sockets for reading
-                 */
-                else if (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR)) {
-                    /*
-                     * Check for incoming connections
-                     */
-                    if (so->so_state & SS_FACCEPTCONN) {
-                        tcp_connect(so);
+        /*
+         * Check sockets for writing
+         */
+        if (!(so->so_state & SS_NOFDREF) &&
+                (revents & (G_IO_OUT | G_IO_ERR))) {
+            /*
+             * Check for non-blocking, still-connecting sockets
+             */
+            if (so->so_state & SS_ISFCONNECTING) {
+                /* Connected */
+                so->so_state &= ~SS_ISFCONNECTING;
+
+                ret = send(so->s, (const void *) &ret, 0, 0);
+                if (ret < 0) {
+                    /* XXXXX Must fix, zero bytes is a NOP */
+                    if (errno == EAGAIN || errno == EWOULDBLOCK ||
+                        errno == EINPROGRESS || errno == ENOTCONN) {
                         continue;
-                    } /* else */
-                    ret = soread(so);
-
-                    /* Output it if we read something */
-                    if (ret > 0) {
-                        tcp_output(sototcpcb(so));
                     }
-                }
 
-                /*
-                 * Check sockets for writing
-                 */
-                if (!(so->so_state & SS_NOFDREF) &&
-                        (revents & (G_IO_OUT | G_IO_ERR))) {
-                    /*
-                     * Check for non-blocking, still-connecting sockets
-                     */
-                    if (so->so_state & SS_ISFCONNECTING) {
-                        /* Connected */
-                        so->so_state &= ~SS_ISFCONNECTING;
-
-                        ret = send(so->s, (const void *) &ret, 0, 0);
-                        if (ret < 0) {
-                            /* XXXXX Must fix, zero bytes is a NOP */
-                            if (errno == EAGAIN || errno == EWOULDBLOCK ||
-                                errno == EINPROGRESS || errno == ENOTCONN) {
-                                continue;
-                            }
-
-                            /* else failed */
-                            so->so_state &= SS_PERSISTENT_MASK;
-                            so->so_state |= SS_NOFDREF;
-                        }
-                        /* else so->so_state &= ~SS_ISFCONNECTING; */
-
-                        /*
-                         * Continue tcp_input
-                         */
-                        tcp_input((struct mbuf *)NULL, sizeof(struct ip), so);
-                        /* continue; */
-                    } else {
-                        ret = sowrite(so);
-                    }
-                    /*
-                     * XXXXX If we wrote something (a lot), there
-                     * could be a need for a window update.
-                     * In the worst case, the remote will send
-                     * a window probe to get things going again
-                     */
+                    /* else failed */
+                    so->so_state &= SS_PERSISTENT_MASK;
+                    so->so_state |= SS_NOFDREF;
                 }
+                /* else so->so_state &= ~SS_ISFCONNECTING; */
 
                 /*
-                 * Probe a still-connecting, non-blocking socket
-                 * to check if it's still alive
+                 * Continue tcp_input
                  */
-#ifdef PROBE_CONN
-                if (so->so_state & SS_ISFCONNECTING) {
-                    ret = qemu_recv(so->s, &ret, 0, 0);
-
-                    if (ret < 0) {
-                        /* XXX */
-                        if (errno == EAGAIN || errno == EWOULDBLOCK ||
-                            errno == EINPROGRESS || errno == ENOTCONN) {
-                            continue; /* Still connecting, continue */
-                        }
-
-                        /* else failed */
-                        so->so_state &= SS_PERSISTENT_MASK;
-                        so->so_state |= SS_NOFDREF;
-
-                        /* tcp_input will take care of it */
-                    } else {
-                        ret = send(so->s, &ret, 0, 0);
-                        if (ret < 0) {
-                            /* XXX */
-                            if (errno == EAGAIN || errno == EWOULDBLOCK ||
-                                errno == EINPROGRESS || errno == ENOTCONN) {
-                                continue;
-                            }
-                            /* else failed */
-                            so->so_state &= SS_PERSISTENT_MASK;
-                            so->so_state |= SS_NOFDREF;
-                        } else {
-                            so->so_state &= ~SS_ISFCONNECTING;
-                        }
-
-                    }
-                    tcp_input((struct mbuf *)NULL, sizeof(struct ip), so);
-                } /* SS_ISFCONNECTING */
-#endif
+                tcp_input((struct mbuf *)NULL, sizeof(struct ip), so);
+                /* continue; */
+            } else {
+                ret = sowrite(so);
             }
 
             /*
-             * Now UDP sockets.
-             * Incoming packets are sent straight away, they're not buffered.
-             * Incoming UDP data isn't buffered either.
+             * XXXXX If we wrote something (a lot), there
+             * could be a need for a window update.
+             * In the worst case, the remote will send
+             * a window probe to get things going again
              */
-            for (so = slirp->udb.so_next; so != &slirp->udb;
-                    so = so_next) {
-                int revents;
-
-                so_next = so->so_next;
+        }
 
-                revents = 0;
-                if (so->pollfds_idx != -1) {
-                    revents = g_array_index(pollfds, GPollFD,
-                            so->pollfds_idx).revents;
+        /*
+         * Probe a still-connecting, non-blocking socket
+         * to check if it's still alive
+         */
+#ifdef PROBE_CONN
+        if (so->so_state & SS_ISFCONNECTING) {
+            ret = qemu_recv(so->s, &ret, 0, 0);
+
+            if (ret < 0) {
+                /* XXX */
+                if (errno == EAGAIN || errno == EWOULDBLOCK ||
+                    errno == EINPROGRESS || errno == ENOTCONN) {
+                    continue; /* Still connecting, continue */
                 }
 
-                if (so->s != -1 &&
-                    (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR))) {
-                    sorecvfrom(so);
+                /* else failed */
+                so->so_state &= SS_PERSISTENT_MASK;
+                so->so_state |= SS_NOFDREF;
+
+                /* tcp_input will take care of it */
+            } else {
+                ret = send(so->s, &ret, 0, 0);
+                if (ret < 0) {
+                    /* XXX */
+                    if (errno == EAGAIN || errno == EWOULDBLOCK ||
+                        errno == EINPROGRESS || errno == ENOTCONN) {
+                        continue;
+                    }
+                    /* else failed */
+                    so->so_state &= SS_PERSISTENT_MASK;
+                    so->so_state |= SS_NOFDREF;
+                } else {
+                    so->so_state &= ~SS_ISFCONNECTING;
                 }
+
             }
+            tcp_input((struct mbuf *)NULL, sizeof(struct ip), so);
+        } /* SS_ISFCONNECTING */
+#endif
+    }
 
-            /*
-             * Check incoming ICMP relies.
-             */
-            for (so = slirp->icmp.so_next; so != &slirp->icmp;
-                    so = so_next) {
-                    int revents;
+    /*
+     * Now UDP sockets.
+     * Incoming packets are sent straight away, they're not buffered.
+     * Incoming UDP data isn't buffered either.
+     */
+    for (so = slirp->udb.so_next; so != &slirp->udb;
+            so = so_next) {
+        int revents;
 
-                    so_next = so->so_next;
+        so_next = so->so_next;
 
-                    revents = 0;
-                    if (so->pollfds_idx != -1) {
-                        revents = g_array_index(pollfds, GPollFD,
-                                                so->pollfds_idx).revents;
-                    }
+        revents = 0;
+        if (so->pollfd) {
+            revents = so->pollfd->revents;
+        }
 
-                    if (so->s != -1 &&
-                        (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR))) {
-                    icmp_receive(so);
-                }
-            }
+        if (so->s != -1 &&
+            (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR))) {
+            sorecvfrom(so);
         }
+    }
+
+    /*
+     * Check incoming ICMP relies.
+     */
+    for (so = slirp->icmp.so_next; so != &slirp->icmp;
+        so = so_next) {
+        int revents;
+
+        so_next = so->so_next;
 
-        if_start(slirp);
+        revents = 0;
+        if (so->pollfd) {
+            revents = so->pollfd->revents;
+        }
+
+        if (so->s != -1 &&
+            (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR))) {
+            icmp_receive(so);
+        }
     }
+
+    if_start(slirp);
+    return true;
 }
 
 static void arp_input(Slirp *slirp, const uint8_t *pkt, int pkt_len)
diff --git a/slirp/socket.c b/slirp/socket.c
index bb639ae..058d2e3 100644
--- a/slirp/socket.c
+++ b/slirp/socket.c
@@ -52,6 +52,7 @@ socreate(Slirp *slirp)
     so->s = -1;
     so->slirp = slirp;
     so->pollfds_idx = -1;
+    so->pollfd = slirp_gsource_get_gfd(slirp->opaque, so->s);
   }
   return(so);
 }
@@ -64,6 +65,7 @@ sofree(struct socket *so)
 {
   Slirp *slirp = so->slirp;
 
+  slirp_gsource_close_gfd(slirp->opaque, so->pollfd);
   if (so->so_emu==EMU_RSH && so->extra) {
 	sofree(so->extra);
 	so->extra=NULL;
diff --git a/slirp/socket.h b/slirp/socket.h
index 57e0407..522c5f0 100644
--- a/slirp/socket.h
+++ b/slirp/socket.h
@@ -21,6 +21,7 @@ struct socket {
   int s;                           /* The actual socket */
 
   int pollfds_idx;                 /* GPollFD GArray index */
+  GPollFD *pollfd;
 
   Slirp *slirp;			   /* managing slirp instance */
 
diff --git a/stubs/slirp.c b/stubs/slirp.c
index f1fc833..c343364 100644
--- a/stubs/slirp.c
+++ b/stubs/slirp.c
@@ -5,11 +5,3 @@ void slirp_update_timeout(uint32_t *timeout)
 {
 }
 
-void slirp_pollfds_fill(GArray *pollfds)
-{
-}
-
-void slirp_pollfds_poll(GArray *pollfds, int select_error)
-{
-}
-
-- 
1.7.4.4

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [Qemu-devel] [PATCH v1 13/14] slirp: handle race condition
  2013-05-07  5:46 [Qemu-devel] [PATCH v1 00/14] port network layer onto glib Liu Ping Fan
                   ` (11 preceding siblings ...)
  2013-05-07  5:47 ` [Qemu-devel] [PATCH v1 12/14] slirp: make slirp event dispatch based on slirp instance, not global Liu Ping Fan
@ 2013-05-07  5:47 ` Liu Ping Fan
  2013-05-07  5:47 ` [Qemu-devel] [PATCH v1 14/14] slirp: use lock to protect the slirp_instances Liu Ping Fan
  2013-05-15  8:10 ` [Qemu-devel] [PATCH v1 00/14] port network layer onto glib Stefan Hajnoczi
  14 siblings, 0 replies; 20+ messages in thread
From: Liu Ping Fan @ 2013-05-07  5:47 UTC (permalink / raw)
  To: qemu-devel
  Cc: mdroth, Jan Kiszka, Stefan Hajnoczi, Anthony Liguori, Paolo Bonzini

From: Liu Ping Fan <pingfank@linux.vnet.ibm.com>

Slirp and its peer can run on different contexts at the same time, so use a
lock to protect the slirp instance. Lock rule: no other lock may be held
after taking slirp->lock. This protects us from deadlock when calling out to
the peer.

As for coding style, the changes follow the nearby code's style.
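
A minimal sketch of the lock rule, mirroring what arp_input() does in this
patch (pkt/pkt_len are placeholders):

    qemu_mutex_lock(&slirp->lock);
    /* ... touch only slirp-internal state while the lock is held ... */
    qemu_mutex_unlock(&slirp->lock);
    /* the peer may take its own locks, so call out with slirp->lock dropped */
    slirp_output(slirp->opaque, pkt, pkt_len);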

Signed-off-by: Liu Ping Fan <pingfank@linux.vnet.ibm.com>
---
 slirp/if.c    |   57 ++++++++++++++++++++++++++++++++--------
 slirp/main.h  |    3 +-
 slirp/mbuf.h  |    2 +
 slirp/slirp.c |   81 ++++++++++++++++++++++++++++++++++++++++++---------------
 slirp/slirp.h |    6 +++-
 5 files changed, 115 insertions(+), 34 deletions(-)

diff --git a/slirp/if.c b/slirp/if.c
index dcd5faf..b6a30a8 100644
--- a/slirp/if.c
+++ b/slirp/if.c
@@ -132,12 +132,21 @@ diddit:
 		}
 	}
 
-#ifndef FULL_BOLT
-	/*
-	 * This prevents us from malloc()ing too many mbufs
-	 */
-	if_start(ifm->slirp);
-#endif
+}
+
+static void mbuf_free(gpointer data, gpointer user_data)
+{
+    struct mbuf *ifm = data;
+    m_free(ifm);
+}
+
+static void if_send_free(gpointer data, gpointer user_data)
+{
+    struct mbuf *ifm = data;
+    Slirp *slirp = user_data;
+
+    if_encap(slirp, ifm);
+    m_free(ifm);
 }
 
 /*
@@ -156,7 +165,10 @@ void if_start(Slirp *slirp)
 {
     uint64_t now = qemu_get_clock_ns(rt_clock);
     bool from_batchq, next_from_batchq;
-    struct mbuf *ifm, *ifm_next, *ifqt;
+    struct mbuf *ifm, *ifm_next, *ifqt, *mclone;
+    GList *drop_list, *send_list;
+    drop_list = send_list = NULL;
+    int ret;
 
     DEBUG_CALL("if_start");
 
@@ -192,9 +204,27 @@ void if_start(Slirp *slirp)
         }
 
         /* Try to send packet unless it already expired */
-        if (ifm->expiration_date >= now && !if_encap(slirp, ifm)) {
-            /* Packet is delayed due to pending ARP resolution */
-            continue;
+        if (ifm->expiration_date < now) {
+            drop_list = g_list_append(drop_list, ifm);
+        } else {
+            ret = if_query(slirp, ifm);
+            switch (ret) {
+            case 2:
+                send_list = g_list_append(send_list, ifm);
+                break;
+            case 1:
+                mclone = m_get(slirp);
+                m_copy(mclone, ifm, 0, ifm->m_len);
+                mclone->arp_requested = true;
+                send_list = g_list_append(send_list, mclone);
+                /* Packet is delayed due to pending ARP resolution */
+                continue;
+            case 0:
+                continue;
+            case -1:
+                drop_list = g_list_append(drop_list, ifm);
+                break;
+            }
         }
 
         if (ifm == slirp->next_m) {
@@ -230,8 +260,13 @@ void if_start(Slirp *slirp)
             ifm->ifq_so->so_nqueued = 0;
         }
 
-        m_free(ifm);
     }
 
     slirp->if_start_busy = false;
+    qemu_mutex_unlock(&slirp->lock);
+
+    g_list_foreach(drop_list, mbuf_free, NULL);
+    g_list_free(drop_list);
+    g_list_foreach(send_list, if_send_free, slirp);
+    g_list_free(send_list);
 }
diff --git a/slirp/main.h b/slirp/main.h
index f2e58cf..c0b7881 100644
--- a/slirp/main.h
+++ b/slirp/main.h
@@ -44,7 +44,8 @@ extern int tcp_keepintvl;
 #define PROTO_PPP 0x2
 #endif
 
-int if_encap(Slirp *slirp, struct mbuf *ifm);
+int if_query(Slirp *slirp, struct mbuf *ifm);
+void if_encap(Slirp *slirp, struct mbuf *ifm);
 ssize_t slirp_send(struct socket *so, const void *buf, size_t len, int flags);
 
 #endif
diff --git a/slirp/mbuf.h b/slirp/mbuf.h
index 3f3ab09..a61ab94 100644
--- a/slirp/mbuf.h
+++ b/slirp/mbuf.h
@@ -34,6 +34,7 @@
 #define _MBUF_H_
 
 #define MINCSIZE 4096	/* Amount to increase mbuf if too small */
+#define ETH_ALEN 6
 
 /*
  * Macros for type conversion
@@ -82,6 +83,7 @@ struct m_hdr {
 struct mbuf {
 	struct	m_hdr m_hdr;
 	Slirp *slirp;
+	uint8_t ethaddr[ETH_ALEN];
 	bool	arp_requested;
 	uint64_t expiration_date;
 	/* start of dynamic buffer area, must be last element */
diff --git a/slirp/slirp.c b/slirp/slirp.c
index 691f82f..8f5cbe0 100644
--- a/slirp/slirp.c
+++ b/slirp/slirp.c
@@ -206,6 +206,7 @@ Slirp *slirp_init(int restricted, struct in_addr vnetwork,
 
     slirp_init_once();
 
+    qemu_mutex_init(&slirp->lock);
     slirp->restricted = restricted;
 
     if_init(slirp);
@@ -248,6 +249,7 @@ void slirp_cleanup(Slirp *slirp)
 
     ip_cleanup(slirp);
     m_cleanup(slirp);
+    qemu_mutex_destroy(&slirp->lock);
 
     g_free(slirp->vdnssearch);
     g_free(slirp->tftp_prefix);
@@ -410,6 +412,7 @@ gboolean slirp_handler(gpointer data)
     struct socket *so, *so_next;
     int ret;
 
+    qemu_mutex_lock(&slirp->lock);
     /*
      * See if anything has timed out
      */
@@ -593,6 +596,7 @@ gboolean slirp_handler(gpointer data)
         }
     }
 
+    /* drop the slirp->lock inside it */
     if_start(slirp);
     return true;
 }
@@ -612,6 +616,7 @@ static void arp_input(Slirp *slirp, const uint8_t *pkt, int pkt_len)
         if (ah->ar_tip == ah->ar_sip) {
             /* Gratuitous ARP */
             arp_table_add(slirp, ah->ar_sip, ah->ar_sha);
+            qemu_mutex_unlock(&slirp->lock);
             return;
         }
 
@@ -624,6 +629,7 @@ static void arp_input(Slirp *slirp, const uint8_t *pkt, int pkt_len)
                 if (ex_ptr->ex_addr.s_addr == ah->ar_tip)
                     goto arp_ok;
             }
+            qemu_mutex_unlock(&slirp->lock);
             return;
         arp_ok:
             memset(arp_reply, 0, sizeof(arp_reply));
@@ -645,13 +651,19 @@ static void arp_input(Slirp *slirp, const uint8_t *pkt, int pkt_len)
             rah->ar_sip = ah->ar_tip;
             memcpy(rah->ar_tha, ah->ar_sha, ETH_ALEN);
             rah->ar_tip = ah->ar_sip;
+            qemu_mutex_unlock(&slirp->lock);
+            /* lock should be dropped before calling peer */
             slirp_output(slirp->opaque, arp_reply, sizeof(arp_reply));
+        } else {
+            qemu_mutex_unlock(&slirp->lock);
         }
         break;
     case ARPOP_REPLY:
         arp_table_add(slirp, ah->ar_sip, ah->ar_sha);
+        qemu_mutex_unlock(&slirp->lock);
         break;
     default:
+        qemu_mutex_unlock(&slirp->lock);
         break;
     }
 }
@@ -665,14 +677,18 @@ void slirp_input(Slirp *slirp, const uint8_t *pkt, int pkt_len)
         return;
 
     proto = ntohs(*(uint16_t *)(pkt + 12));
+
+    qemu_mutex_lock(&slirp->lock);
     switch(proto) {
     case ETH_P_ARP:
+        /* drop slirp->lock inside */
         arp_input(slirp, pkt, pkt_len);
-        break;
+        return;
     case ETH_P_IP:
         m = m_get(slirp);
-        if (!m)
-            return;
+        if (!m) {
+            break;
+        }
         /* Note: we add to align the IP header */
         if (M_FREEROOM(m) < pkt_len + 2) {
             m_inc(m, pkt_len + 2);
@@ -682,34 +698,51 @@ void slirp_input(Slirp *slirp, const uint8_t *pkt, int pkt_len)
 
         m->m_data += 2 + ETH_HLEN;
         m->m_len -= 2 + ETH_HLEN;
-
+        /* It just append packet, does not send immediately,
+                * so no need to drop slirp->lock inside.
+                */
         ip_input(m);
-        break;
+        /* drop slirp->lock inside */
+        if_start(slirp);
+        return;
     default:
         break;
     }
+    qemu_mutex_unlock(&slirp->lock);
 }
 
-/* Output the IP packet to the ethernet device. Returns 0 if the packet must be
- * re-queued.
- */
-int if_encap(Slirp *slirp, struct mbuf *ifm)
+/* -1 silent drop, 0 sllent, 1 need send out arp, 2 normal */
+int if_query(Slirp *slirp, struct mbuf *ifm)
 {
     uint8_t buf[1600];
-    struct ethhdr *eh = (struct ethhdr *)buf;
-    uint8_t ethaddr[ETH_ALEN];
     const struct ip *iph = (const struct ip *)ifm->m_data;
 
     if (ifm->m_len + ETH_HLEN > sizeof(buf)) {
+        return -1;
+    }
+    if (ifm->arp_requested) {
+        return 0;
+    }
+    if (!arp_table_search(slirp, iph->ip_dst.s_addr, ifm->ethaddr)) {
+        ifm->arp_requested = true;
         return 1;
     }
+    return 2;
+}
 
-    if (!arp_table_search(slirp, iph->ip_dst.s_addr, ethaddr)) {
+/* Output the IP packet to the ethernet device.
+ */
+void if_encap(Slirp *slirp, struct mbuf *ifm)
+{
+    uint8_t buf[1600];
+    struct ethhdr *eh = (struct ethhdr *)buf;
+    const struct ip *iph = (const struct ip *)ifm->m_data;
+
+    if (ifm->arp_requested) {
         uint8_t arp_req[ETH_HLEN + sizeof(struct arphdr)];
         struct ethhdr *reh = (struct ethhdr *)arp_req;
         struct arphdr *rah = (struct arphdr *)(arp_req + ETH_HLEN);
 
-        if (!ifm->arp_requested) {
             /* If the client addr is not known, send an ARP request */
             memset(reh->h_dest, 0xff, ETH_ALEN);
             memcpy(reh->h_source, special_ethaddr, ETH_ALEN - 4);
@@ -735,21 +768,17 @@ int if_encap(Slirp *slirp, struct mbuf *ifm)
             rah->ar_tip = iph->ip_dst.s_addr;
             slirp->client_ipaddr = iph->ip_dst;
             slirp_output(slirp->opaque, arp_req, sizeof(arp_req));
-            ifm->arp_requested = true;
 
             /* Expire request and drop outgoing packet after 1 second */
             ifm->expiration_date = qemu_get_clock_ns(rt_clock) + 1000000000ULL;
-        }
-        return 0;
     } else {
-        memcpy(eh->h_dest, ethaddr, ETH_ALEN);
+        memcpy(eh->h_dest, ifm->ethaddr, ETH_ALEN);
         memcpy(eh->h_source, special_ethaddr, ETH_ALEN - 4);
         /* XXX: not correct */
         memcpy(&eh->h_source[2], &slirp->vhost_addr, 4);
         eh->h_proto = htons(ETH_P_IP);
         memcpy(buf + sizeof(struct ethhdr), ifm->m_data, ifm->m_len);
         slirp_output(slirp->opaque, buf, ifm->m_len + ETH_HLEN);
-        return 1;
     }
 }
 
@@ -860,15 +889,25 @@ void slirp_socket_recv(Slirp *slirp, struct in_addr guest_addr, int guest_port,
                        const uint8_t *buf, int size)
 {
     int ret;
-    struct socket *so = slirp_find_ctl_socket(slirp, guest_addr, guest_port);
+    struct socket *so;
+
+    qemu_mutex_lock(&slirp->lock);
+    so = slirp_find_ctl_socket(slirp, guest_addr, guest_port);
 
-    if (!so)
+    if (!so) {
+        qemu_mutex_unlock(&slirp->lock);
         return;
+    }
 
     ret = soreadbuf(so, (const char *)buf, size);
 
-    if (ret > 0)
+    if (ret > 0) {
         tcp_output(sototcpcb(so));
+        /* release lock inside */
+        if_start(slirp);
+        return;
+    }
+    qemu_mutex_unlock(&slirp->lock);
 }
 
 static void slirp_tcp_save(QEMUFile *f, struct tcpcb *tp)
diff --git a/slirp/slirp.h b/slirp/slirp.h
index 008360e..8ec0888 100644
--- a/slirp/slirp.h
+++ b/slirp/slirp.h
@@ -135,6 +135,7 @@ void free(void *ptr);
 
 #include "qemu/queue.h"
 #include "qemu/sockets.h"
+#include "qemu/thread.h"
 
 #include "libslirp.h"
 #include "ip.h"
@@ -158,7 +159,6 @@ void free(void *ptr);
 #include "bootp.h"
 #include "tftp.h"
 
-#define ETH_ALEN 6
 #define ETH_HLEN 14
 
 #define ETH_P_IP  0x0800        /* Internet Protocol packet  */
@@ -207,6 +207,10 @@ struct Slirp {
     u_int last_slowtimo;
     int do_slowtimo;
 
+    /* lock to protect slirp running both on frontend or SlirpState context.
+         * Lock rule: biglock ->lock.  Should be dropped before calling peer.
+         */
+    QemuMutex lock;
     /* virtual network configuration */
     struct in_addr vnetwork_addr;
     struct in_addr vnetwork_mask;
-- 
1.7.4.4

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [Qemu-devel] [PATCH v1 14/14] slirp: use lock to protect the slirp_instances
  2013-05-07  5:46 [Qemu-devel] [PATCH v1 00/14] port network layer onto glib Liu Ping Fan
                   ` (12 preceding siblings ...)
  2013-05-07  5:47 ` [Qemu-devel] [PATCH v1 13/14] slirp: handle race condition Liu Ping Fan
@ 2013-05-07  5:47 ` Liu Ping Fan
  2013-05-15  8:10 ` [Qemu-devel] [PATCH v1 00/14] port network layer onto glib Stefan Hajnoczi
  14 siblings, 0 replies; 20+ messages in thread
From: Liu Ping Fan @ 2013-05-07  5:47 UTC (permalink / raw)
  To: qemu-devel
  Cc: mdroth, Jan Kiszka, Stefan Hajnoczi, Anthony Liguori, Paolo Bonzini

From: Liu Ping Fan <pingfank@linux.vnet.ibm.com>

Slirp instances will run on dedicated threads and dynamically join or leave
this list, so a lock is needed to protect the global list.
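
Any reader walking the list has to take the same lock, e.g. (a sketch of the
pattern; the loop body is a placeholder):

    qemu_mutex_lock(&slirp_instances_lock);
    QTAILQ_FOREACH(slirp, &slirp_instances, entry) {
        /* inspect or touch per-instance state */
    }
    qemu_mutex_unlock(&slirp_instances_lock);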

Signed-off-by: Liu Ping Fan <pingfank@linux.vnet.ibm.com>
---
 include/qemu/module.h |    2 ++
 slirp/slirp.c         |   20 ++++++++++++++++++++
 2 files changed, 22 insertions(+), 0 deletions(-)

diff --git a/include/qemu/module.h b/include/qemu/module.h
index c4ccd57..2720943 100644
--- a/include/qemu/module.h
+++ b/include/qemu/module.h
@@ -22,6 +22,7 @@ static void __attribute__((constructor)) do_qemu_init_ ## function(void) {  \
 
 typedef enum {
     MODULE_INIT_BLOCK,
+    MODULE_INIT_SLIRP,
     MODULE_INIT_MACHINE,
     MODULE_INIT_QAPI,
     MODULE_INIT_QOM,
@@ -29,6 +30,7 @@ typedef enum {
 } module_init_type;
 
 #define block_init(function) module_init(function, MODULE_INIT_BLOCK)
+#define slirplayer_init(function) module_init(function, MODULE_INIT_SLIRP)
 #define machine_init(function) module_init(function, MODULE_INIT_MACHINE)
 #define qapi_init(function) module_init(function, MODULE_INIT_QAPI)
 #define type_init(function) module_init(function, MODULE_INIT_QOM)
diff --git a/slirp/slirp.c b/slirp/slirp.c
index 8f5cbe0..3008c7b 100644
--- a/slirp/slirp.c
+++ b/slirp/slirp.c
@@ -42,6 +42,7 @@ static const uint8_t zero_ethaddr[ETH_ALEN] = { 0, 0, 0, 0, 0, 0 };
 
 u_int curtime;
 
+static QemuMutex slirp_instances_lock;
 static QTAILQ_HEAD(slirp_instances, Slirp) slirp_instances =
     QTAILQ_HEAD_INITIALIZER(slirp_instances);
 
@@ -236,14 +237,18 @@ Slirp *slirp_init(int restricted, struct in_addr vnetwork,
     register_savevm(NULL, "slirp", 0, 3,
                     slirp_state_save, slirp_state_load, slirp);
 
+    qemu_mutex_lock(&slirp_instances_lock);
     QTAILQ_INSERT_TAIL(&slirp_instances, slirp, entry);
+    qemu_mutex_unlock(&slirp_instances_lock);
 
     return slirp;
 }
 
 void slirp_cleanup(Slirp *slirp)
 {
+    qemu_mutex_lock(&slirp_instances_lock);
     QTAILQ_REMOVE(&slirp_instances, slirp, entry);
+    qemu_mutex_unlock(&slirp_instances_lock);
 
     unregister_savevm(NULL, "slirp", slirp);
 
@@ -262,9 +267,12 @@ void slirp_cleanup(Slirp *slirp)
 
 void slirp_update_timeout(uint32_t *timeout)
 {
+    qemu_mutex_lock(&slirp_instances_lock);
     if (!QTAILQ_EMPTY(&slirp_instances)) {
         *timeout = MIN(1000, *timeout);
     }
+    qemu_mutex_unlock(&slirp_instances_lock);
+
     curtime = qemu_get_clock_ms(rt_clock);
 }
 
@@ -1167,3 +1175,15 @@ static int slirp_state_load(QEMUFile *f, void *opaque, int version_id)
 
     return 0;
 }
+
+static void slirplayer_cleanup(void)
+{
+    qemu_mutex_destroy(&slirp_instances_lock);
+}
+
+static void slirplayer_bootup(void)
+{
+    qemu_mutex_init(&slirp_instances_lock);
+    atexit(&slirplayer_cleanup);
+}
+slirplayer_init(slirplayer_bootup)
-- 
1.7.4.4

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* Re: [Qemu-devel] [PATCH v1 00/14] port network layer onto glib
  2013-05-07  5:46 [Qemu-devel] [PATCH v1 00/14] port network layer onto glib Liu Ping Fan
                   ` (13 preceding siblings ...)
  2013-05-07  5:47 ` [Qemu-devel] [PATCH v1 14/14] slirp: use lock to protect the slirp_instances Liu Ping Fan
@ 2013-05-15  8:10 ` Stefan Hajnoczi
  2013-05-15  8:58   ` liu ping fan
  14 siblings, 1 reply; 20+ messages in thread
From: Stefan Hajnoczi @ 2013-05-15  8:10 UTC (permalink / raw)
  To: Liu Ping Fan
  Cc: mdroth, Jan Kiszka, qemu-devel, Anthony Liguori, Paolo Bonzini

On Tue, May 07, 2013 at 01:46:48PM +0800, Liu Ping Fan wrote:
> summary:
>   patch1: GSource event abstraction
>   patch2~6: port network backend to glib
>   patch7~10: make network core re-entrant
>   patch11~14:  port the slirp backend onto glib
> 
> The slirp->lock's deadlock problem has been eliminated and works fine.
> And other components seems more stable, so I change from RFCv- to v-
> 
> 
> 
> 
> rfcv5->v1:
>   1. re-arrange logic for net/socket.c
> 
> rfcv4->rfcv5:
>   1.use GList to reimplement EventsGSource
>   2.make readable()/writable() return events which the backend is interested in
>   3.fix the slirp->lock's potential deadlock issue
> 
> rfcv3->rfcv4:
>   1.separate GSource event to dedicated file
>   2.integrated with net core re-entrant
>   3.make slirp/  re-entrant
> 
> rfcv2->rfcv3:
>   1.drop hub and the frontend(virtio net)
>   2.split the patch for NetClientSource
> 
> rfcv1->rfcv2:
>   1.NetClientState can associate with up to 2 GSource, for virtio net, one for tx, one for rx,
>     so vq can run on different threads.
>   2.make network front-end onto glib, currently virtio net dataplane
> 
> 
> 
> Liu Ping Fan (14):
>   util: introduce gsource event abstraction
>   net: introduce bind_ctx to NetClientInfo
>   net: port vde onto GSource
>   net: port socket to GSource
>   net: port tap onto GSource
>   net: port tap-win32 onto GSource
>   net: hub use lock to protect ports list
>   net: introduce lock to protect NetQueue
>   net: introduce lock to protect NetClientState's peer's access
>   net: make netclient re-entrant with refcnt
>   slirp: make timeout local
>   slirp: make slirp event dispatch based on slirp instance, not global
>   slirp: handle race condition
>   slirp: use lock to protect the slirp_instances
> 
>  hw/qdev-properties-system.c |   14 +
>  include/net/net.h           |   12 +
>  include/qemu/module.h       |    2 +
>  main-loop.c                 |    4 -
>  net/hub.c                   |   28 ++-
>  net/net.c                   |  123 ++++++++-
>  net/queue.c                 |   15 +-
>  net/slirp.c                 |   35 +++-
>  net/socket.c                |  194 +++++++++++---
>  net/tap-win32.c             |   31 ++-
>  net/tap.c                   |   64 ++++-
>  net/vde.c                   |   31 ++-
>  slirp/if.c                  |   57 +++-
>  slirp/libslirp.h            |    7 +-
>  slirp/main.h                |    3 +-
>  slirp/mbuf.h                |    2 +
>  slirp/slirp.c               |  670 ++++++++++++++++++++++---------------------
>  slirp/slirp.h               |   11 +-
>  slirp/socket.c              |    2 +
>  slirp/socket.h              |    1 +
>  stubs/slirp.c               |    8 -
>  util/Makefile.objs          |    1 +
>  util/event_gsource.c        |  157 ++++++++++
>  util/event_gsource.h        |   49 ++++
>  24 files changed, 1105 insertions(+), 416 deletions(-)
>  create mode 100644 util/event_gsource.c
>  create mode 100644 util/event_gsource.h

Hi Ping Fan,
Sorry for the slow review, I'm focussing on QEMU 1.5 at the moment but
will look at this series soon.

Stefan

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Qemu-devel] [PATCH v1 00/14] port network layer onto glib
  2013-05-15  8:10 ` [Qemu-devel] [PATCH v1 00/14] port network layer onto glib Stefan Hajnoczi
@ 2013-05-15  8:58   ` liu ping fan
  0 siblings, 0 replies; 20+ messages in thread
From: liu ping fan @ 2013-05-15  8:58 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: mdroth, Jan Kiszka, qemu-devel, Anthony Liguori, Paolo Bonzini

On Wed, May 15, 2013 at 4:10 PM, Stefan Hajnoczi <stefanha@redhat.com> wrote:
> On Tue, May 07, 2013 at 01:46:48PM +0800, Liu Ping Fan wrote:
>> summary:
>>   patch1: GSource event abstraction
>>   patch2~6: port network backend to glib
>>   patch7~10: make network core re-entrant
>>   patch11~14:  port the slirp backend onto glib
>>
>> The slirp->lock's deadlock problem has been eliminated and works fine.
>> And other components seems more stable, so I change from RFCv- to v-
>>
>>
>>
>>
>> rfcv5->v1:
>>   1. re-arrange logic for net/socket.c
>>
>> rfcv4->rfcv5:
>>   1.use GList to reimplement EventsGSource
>>   2.make readable()/writable() return events which the backend is interested in
>>   3.fix the slirp->lock's potential deadlock issue
>>
>> rfcv3->rfcv4:
>>   1.separate GSource event to dedicated file
>>   2.integrated with net core re-entrant
>>   3.make slirp/  re-entrant
>>
>> rfcv2->rfcv3:
>>   1.drop hub and the frontend(virtio net)
>>   2.split the patch for NetClientSource
>>
>> rfcv1->rfcv2:
>>   1.NetClientState can associate with up to 2 GSource, for virtio net, one for tx, one for rx,
>>     so vq can run on different threads.
>>   2.make network front-end onto glib, currently virtio net dataplane
>>
>>
>>
>> Liu Ping Fan (14):
>>   util: introduce gsource event abstraction
>>   net: introduce bind_ctx to NetClientInfo
>>   net: port vde onto GSource
>>   net: port socket to GSource
>>   net: port tap onto GSource
>>   net: port tap-win32 onto GSource
>>   net: hub use lock to protect ports list
>>   net: introduce lock to protect NetQueue
>>   net: introduce lock to protect NetClientState's peer's access
>>   net: make netclient re-entrant with refcnt
>>   slirp: make timeout local
>>   slirp: make slirp event dispatch based on slirp instance, not global
>>   slirp: handle race condition
>>   slirp: use lock to protect the slirp_instances
>>
>>  hw/qdev-properties-system.c |   14 +
>>  include/net/net.h           |   12 +
>>  include/qemu/module.h       |    2 +
>>  main-loop.c                 |    4 -
>>  net/hub.c                   |   28 ++-
>>  net/net.c                   |  123 ++++++++-
>>  net/queue.c                 |   15 +-
>>  net/slirp.c                 |   35 +++-
>>  net/socket.c                |  194 +++++++++++---
>>  net/tap-win32.c             |   31 ++-
>>  net/tap.c                   |   64 ++++-
>>  net/vde.c                   |   31 ++-
>>  slirp/if.c                  |   57 +++-
>>  slirp/libslirp.h            |    7 +-
>>  slirp/main.h                |    3 +-
>>  slirp/mbuf.h                |    2 +
>>  slirp/slirp.c               |  670 ++++++++++++++++++++++---------------------
>>  slirp/slirp.h               |   11 +-
>>  slirp/socket.c              |    2 +
>>  slirp/socket.h              |    1 +
>>  stubs/slirp.c               |    8 -
>>  util/Makefile.objs          |    1 +
>>  util/event_gsource.c        |  157 ++++++++++
>>  util/event_gsource.h        |   49 ++++
>>  24 files changed, 1105 insertions(+), 416 deletions(-)
>>  create mode 100644 util/event_gsource.c
>>  create mode 100644 util/event_gsource.h
>
> Hi Ping Fan,
> Sorry for the slow review, I'm focussing on QEMU 1.5 at the moment but
> will look at this series soon.
>
Thank you very much :)
And as mentioned, please skip patch 2~6 (port network backend to glib).
I think mdroth's method is better than mine, and I will port onto his.

Regards,
Pingfan
> Stefan

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Qemu-devel] [PATCH v1 07/14] net: hub use lock to protect ports list
  2013-05-07  5:46 ` [Qemu-devel] [PATCH v1 07/14] net: hub use lock to protect ports list Liu Ping Fan
@ 2013-05-21 13:57   ` Stefan Hajnoczi
  2013-05-29  1:41     ` liu ping fan
  0 siblings, 1 reply; 20+ messages in thread
From: Stefan Hajnoczi @ 2013-05-21 13:57 UTC (permalink / raw)
  To: Liu Ping Fan
  Cc: mdroth, Jan Kiszka, qemu-devel, Anthony Liguori, Paolo Bonzini

On Tue, May 07, 2013 at 01:46:55PM +0800, Liu Ping Fan wrote:
> From: Liu Ping Fan <pingfank@linux.vnet.ibm.com>
> 
> Hub ports will run on multi-threads, so use lock to protect them.
> 
> Signed-off-by: Liu Ping Fan <pingfank@linux.vnet.ibm.com>
> ---
>  net/hub.c |   25 ++++++++++++++++++++++++-
>  1 files changed, 24 insertions(+), 1 deletions(-)

Does this rely on net queue re-entrancy detection?  Otherwise we'll
deadlock when slirp re-enters the net layer, for example for ARP
requests.
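
A minimal standalone sketch of the cycle I have in mind (the function names
are purely illustrative stand-ins, not the real QEMU call graph, and the hub
ports lock is modelled here as a plain non-recursive pthread mutex):

    #include <pthread.h>

    static pthread_mutex_t ports_lock = PTHREAD_MUTEX_INITIALIZER;

    static void hub_deliver(const char *frame);

    static void slirp_receive(const char *frame)
    {
        (void)frame;
        /* slirp turns the ARP request into a reply and re-enters the hub */
        hub_deliver("arp-reply");
    }

    static void hub_deliver(const char *frame)
    {
        pthread_mutex_lock(&ports_lock);   /* second entry never gets past here */
        slirp_receive(frame);
        pthread_mutex_unlock(&ports_lock);
    }

    int main(void)
    {
        hub_deliver("arp-request");        /* deadlocks against itself */
        return 0;
    }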

I suggest moving this later in the patch series when the net queue is
protected already.

Stefan

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Qemu-devel] [PATCH v1 08/14] net: introduce lock to protect NetQueue
  2013-05-07  5:46 ` [Qemu-devel] [PATCH v1 08/14] net: introduce lock to protect NetQueue Liu Ping Fan
@ 2013-05-21 14:04   ` Stefan Hajnoczi
  0 siblings, 0 replies; 20+ messages in thread
From: Stefan Hajnoczi @ 2013-05-21 14:04 UTC (permalink / raw)
  To: Liu Ping Fan
  Cc: mdroth, Jan Kiszka, qemu-devel, Anthony Liguori, Paolo Bonzini

On Tue, May 07, 2013 at 01:46:56PM +0800, Liu Ping Fan wrote:
> From: Liu Ping Fan <pingfank@linux.vnet.ibm.com>
> 
> NetQueue will be accessed by nc and its peers at the same time,
> need lock to protect it.
> 
> Signed-off-by: Liu Ping Fan <pingfank@linux.vnet.ibm.com>
> ---
>  net/queue.c |   11 +++++++++++
>  1 files changed, 11 insertions(+), 0 deletions(-)
> 
> diff --git a/net/queue.c b/net/queue.c
> index 859d02a..2856c1d 100644
> --- a/net/queue.c
> +++ b/net/queue.c
> @@ -53,6 +53,7 @@ struct NetQueue {
>      uint32_t nq_maxlen;
>      uint32_t nq_count;
>  
> +    QemuMutex lock;
>      QTAILQ_HEAD(packets, NetPacket) packets;
>  
>      unsigned delivering : 1;
> @@ -68,6 +69,7 @@ NetQueue *qemu_new_net_queue(void *opaque)
>      queue->nq_maxlen = 10000;
>      queue->nq_count = 0;
>  
> +    qemu_mutex_init(&queue->lock);
>      QTAILQ_INIT(&queue->packets);
>  
>      queue->delivering = 0;
> @@ -107,7 +109,9 @@ static void qemu_net_queue_append(NetQueue *queue,
>      memcpy(packet->data, buf, size);
>  
>      queue->nq_count++;
> +    qemu_mutex_lock(&queue->lock);
>      QTAILQ_INSERT_TAIL(&queue->packets, packet, entry);
> +    qemu_mutex_unlock(&queue->lock);

nq_count must be protected too.
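
One way that could look (only a sketch of the idea; the exact placement is of
course up to you) is to widen the critical section so the counter and the
list are updated together:

    qemu_mutex_lock(&queue->lock);
    queue->nq_count++;
    QTAILQ_INSERT_TAIL(&queue->packets, packet, entry);
    qemu_mutex_unlock(&queue->lock);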

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Qemu-devel] [PATCH v1 07/14] net: hub use lock to protect ports list
  2013-05-21 13:57   ` Stefan Hajnoczi
@ 2013-05-29  1:41     ` liu ping fan
  0 siblings, 0 replies; 20+ messages in thread
From: liu ping fan @ 2013-05-29  1:41 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: mdroth, Jan Kiszka, qemu-devel, Anthony Liguori, Paolo Bonzini

On Tue, May 21, 2013 at 9:57 PM, Stefan Hajnoczi <stefanha@redhat.com> wrote:
> On Tue, May 07, 2013 at 01:46:55PM +0800, Liu Ping Fan wrote:
>> From: Liu Ping Fan <pingfank@linux.vnet.ibm.com>
>>
>> Hub ports will run on multi-threads, so use lock to protect them.
>>
>> Signed-off-by: Liu Ping Fan <pingfank@linux.vnet.ibm.com>
>> ---
>>  net/hub.c |   25 ++++++++++++++++++++++++-
>>  1 files changed, 24 insertions(+), 1 deletions(-)
>
> Does this rely on net queue re-entrancy detection?  Otherwise we'll
> deadlock when slirp re-enters the net layer, for example for ARP
> requests.
>
Yes, you are right.

> I suggest moving this later in the patch series when the net queue is
> protected already.
>
Ok.

Thanks
Pingfan
> Stefan

^ permalink raw reply	[flat|nested] 20+ messages in thread

end of thread, other threads:[~2013-05-29  1:42 UTC | newest]

Thread overview: 20+ messages
2013-05-07  5:46 [Qemu-devel] [PATCH v1 00/14] port network layer onto glib Liu Ping Fan
2013-05-07  5:46 ` [Qemu-devel] [PATCH v1 01/14] util: introduce gsource event abstraction Liu Ping Fan
2013-05-07  5:46 ` [Qemu-devel] [PATCH v1 02/14] net: introduce bind_ctx to NetClientInfo Liu Ping Fan
2013-05-07  5:46 ` [Qemu-devel] [PATCH v1 03/14] net: port vde onto GSource Liu Ping Fan
2013-05-07  5:46 ` [Qemu-devel] [PATCH v1 04/14] net: port socket to GSource Liu Ping Fan
2013-05-07  5:46 ` [Qemu-devel] [PATCH v1 05/14] net: port tap onto GSource Liu Ping Fan
2013-05-07  5:46 ` [Qemu-devel] [PATCH v1 06/14] net: port tap-win32 " Liu Ping Fan
2013-05-07  5:46 ` [Qemu-devel] [PATCH v1 07/14] net: hub use lock to protect ports list Liu Ping Fan
2013-05-21 13:57   ` Stefan Hajnoczi
2013-05-29  1:41     ` liu ping fan
2013-05-07  5:46 ` [Qemu-devel] [PATCH v1 08/14] net: introduce lock to protect NetQueue Liu Ping Fan
2013-05-21 14:04   ` Stefan Hajnoczi
2013-05-07  5:46 ` [Qemu-devel] [PATCH v1 09/14] net: introduce lock to protect NetClientState's peer's access Liu Ping Fan
2013-05-07  5:46 ` [Qemu-devel] [PATCH v1 10/14] net: make netclient re-entrant with refcnt Liu Ping Fan
2013-05-07  5:46 ` [Qemu-devel] [PATCH v1 11/14] slirp: make timeout local Liu Ping Fan
2013-05-07  5:47 ` [Qemu-devel] [PATCH v1 12/14] slirp: make slirp event dispatch based on slirp instance, not global Liu Ping Fan
2013-05-07  5:47 ` [Qemu-devel] [PATCH v1 13/14] slirp: handle race condition Liu Ping Fan
2013-05-07  5:47 ` [Qemu-devel] [PATCH v1 14/14] slirp: use lock to protect the slirp_instances Liu Ping Fan
2013-05-15  8:10 ` [Qemu-devel] [PATCH v1 00/14] port network layer onto glib Stefan Hajnoczi
2013-05-15  8:58   ` liu ping fan
