* [Qemu-devel] [PATCH for-2.2 00/10] AioContext cleanups and Win32 socket support
@ 2014-07-09  9:53 Paolo Bonzini
  2014-07-09  9:53 ` [Qemu-devel] [PATCH 01/10] AioContext: take bottom halves into account when computing aio_poll timeout Paolo Bonzini
                   ` (11 more replies)
  0 siblings, 12 replies; 25+ messages in thread
From: Paolo Bonzini @ 2014-07-09  9:53 UTC (permalink / raw)
  To: qemu-devel

This series heavily simplifies aio_poll by splitting it into three
phases: prepare (aio_compute_timeout), poll, dispatch.  The resulting
code shares more logic between aio_poll and the GSource wrappers,
and makes it easier to add Win32 support for sockets.
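
In outline, the resulting shape of aio_poll is (a hedged sketch, not a
verbatim excerpt from the series; poll_or_wait stands in for the
OS-specific wait primitive):

    bool aio_poll(AioContext *ctx, bool blocking)
    {
        /* prepare: bottom halves and timers determine the timeout */
        int64_t timeout = blocking ? aio_compute_timeout(ctx) : 0;

        /* poll: qemu_poll_ns() on POSIX, WaitForMultipleObjects() on Win32 */
        poll_or_wait(ctx, timeout);

        /* dispatch: run bottom halves, ready handlers and expired timers */
        return aio_dispatch(ctx);
    }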

Win32 support for sockets is a prerequisite for moving the NBD server
into the BlockDriverState's attached AioContext.  It is done in the
final patch, based on earlier work from Or Goshen (from Intel).
I had to more or less rewrite it to fit the new framework, but you
can see parts of Or's work, as well as traces of aio-posix.c and
main-loop.c logic.

Tested with NBD boot under Wine.

Paolo

Paolo Bonzini (10):
  AioContext: take bottom halves into account when computing aio_poll
    timeout
  aio-win32: Evaluate timers after handles
  aio-win32: Factor out duplicate code into aio_dispatch_handlers
  AioContext: run bottom halves after polling
  AioContext: export and use aio_dispatch
  test-aio: test timers on Windows too
  aio-win32: add aio_set_dispatching optimization
  AioContext: introduce aio_prepare
  qemu-coroutine-io: fix for Win32
  aio-win32: add support for sockets

 aio-posix.c         |  58 ++++--------
 aio-win32.c         | 262 +++++++++++++++++++++++++++++++++++++++-------------
 async.c             |  39 +++++---
 block/Makefile.objs |   2 -
 include/block/aio.h |  25 ++++-
 nbd.c               |   2 +-
 qemu-coroutine-io.c |   4 +-
 tests/test-aio.c    |  48 +++-------
 8 files changed, 277 insertions(+), 163 deletions(-)

-- 
1.9.3


* [Qemu-devel] [PATCH 01/10] AioContext: take bottom halves into account when computing aio_poll timeout
  2014-07-09  9:53 [Qemu-devel] [PATCH for-2.2 00/10] AioContext cleanups and Win32 socket support Paolo Bonzini
@ 2014-07-09  9:53 ` Paolo Bonzini
  2014-08-01 14:34   ` Stefan Hajnoczi
  2014-07-09  9:53 ` [Qemu-devel] [PATCH 02/10] aio-win32: Evaluate timers after handles Paolo Bonzini
                   ` (10 subsequent siblings)
  11 siblings, 1 reply; 25+ messages in thread
From: Paolo Bonzini @ 2014-07-09  9:53 UTC (permalink / raw)
  To: qemu-devel

Right now, QEMU invokes aio_bh_poll before the "poll" phase
of aio_poll.  It is simpler to do it afterwards and skip the
"poll" phase altogether when the OS-dependent parts of AioContext
are invoked from GSource.  This way, AioContext behaves more
similarly when used as a GSource vs. when used as stand-alone.

As a start, take bottom halves into account when computing the
poll timeout.  If a bottom half is ready, do a non-blocking
poll.  As a side effect, this makes idle bottom halves work
with aio_poll; an improvement, but not really an important
one since they are deprecated.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 aio-posix.c         |  2 +-
 aio-win32.c         |  4 ++--
 async.c             | 32 ++++++++++++++++++--------------
 include/block/aio.h |  8 ++++++++
 4 files changed, 29 insertions(+), 17 deletions(-)

diff --git a/aio-posix.c b/aio-posix.c
index 2eada2e..55706f8 100644
--- a/aio-posix.c
+++ b/aio-posix.c
@@ -249,7 +249,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
     /* wait until next event */
     ret = qemu_poll_ns((GPollFD *)ctx->pollfds->data,
                          ctx->pollfds->len,
-                         blocking ? timerlistgroup_deadline_ns(&ctx->tlg) : 0);
+                         blocking ? aio_compute_timeout(ctx) : 0);
 
     /* if we have any readable fds, dispatch event */
     if (ret > 0) {
diff --git a/aio-win32.c b/aio-win32.c
index c12f61e..fe7ee5b 100644
--- a/aio-win32.c
+++ b/aio-win32.c
@@ -165,8 +165,8 @@ bool aio_poll(AioContext *ctx, bool blocking)
     while (count > 0) {
         int ret;
 
-        timeout = blocking ?
-            qemu_timeout_ns_to_ms(timerlistgroup_deadline_ns(&ctx->tlg)) : 0;
+        timeout = blocking
+            ? qemu_timeout_ns_to_ms(aio_compute_timeout(ctx)) : 0;
         ret = WaitForMultipleObjects(count, events, FALSE, timeout);
 
         /* if we have any signaled events, dispatch event */
diff --git a/async.c b/async.c
index 34af0b2..ac40eab 100644
--- a/async.c
+++ b/async.c
@@ -152,39 +152,43 @@ void qemu_bh_delete(QEMUBH *bh)
     bh->deleted = 1;
 }
 
-static gboolean
-aio_ctx_prepare(GSource *source, gint    *timeout)
+int
+aio_compute_timeout(AioContext *ctx)
 {
-    AioContext *ctx = (AioContext *) source;
+    int64_t deadline;
+    int timeout = -1;
     QEMUBH *bh;
-    int deadline;
 
-    /* We assume there is no timeout already supplied */
-    *timeout = -1;
     for (bh = ctx->first_bh; bh; bh = bh->next) {
         if (!bh->deleted && bh->scheduled) {
             if (bh->idle) {
                 /* idle bottom halves will be polled at least
                  * every 10ms */
-                *timeout = 10;
+                timeout = 10000000;
             } else {
                 /* non-idle bottom halves will be executed
                  * immediately */
-                *timeout = 0;
-                return true;
+                return 0;
             }
         }
     }
 
-    deadline = qemu_timeout_ns_to_ms(timerlistgroup_deadline_ns(&ctx->tlg));
+    deadline = timerlistgroup_deadline_ns(&ctx->tlg);
     if (deadline == 0) {
-        *timeout = 0;
-        return true;
+        return 0;
     } else {
-        *timeout = qemu_soonest_timeout(*timeout, deadline);
+        return qemu_soonest_timeout(timeout, deadline);
     }
+}
 
-    return false;
+static gboolean
+aio_ctx_prepare(GSource *source, gint    *timeout)
+{
+    AioContext *ctx = (AioContext *) source;
+
+    /* We assume there is no timeout already supplied */
+    *timeout = qemu_timeout_ns_to_ms(aio_compute_timeout(ctx));
+    return *timeout == 0;
 }
 
 static gboolean
diff --git a/include/block/aio.h b/include/block/aio.h
index c23de3c..7eeb961 100644
--- a/include/block/aio.h
+++ b/include/block/aio.h
@@ -303,4 +303,12 @@ static inline void aio_timer_init(AioContext *ctx,
     timer_init(ts, ctx->tlg.tl[type], scale, cb, opaque);
 }
 
+/**
+ * aio_compute_timeout:
+ * @ctx: the aio context
+ *
+ * Compute the timeout that a blocking aio_poll should use.
+ */
+int aio_compute_timeout(AioContext *ctx);
+
 #endif
-- 
1.9.3


* [Qemu-devel] [PATCH 02/10] aio-win32: Evaluate timers after handles
  2014-07-09  9:53 [Qemu-devel] [PATCH for-2.2 00/10] AioContext cleanups and Win32 socket support Paolo Bonzini
  2014-07-09  9:53 ` [Qemu-devel] [PATCH 01/10] AioContext: take bottom halves into account when computing aio_poll timeout Paolo Bonzini
@ 2014-07-09  9:53 ` Paolo Bonzini
  2014-07-09  9:53 ` [Qemu-devel] [PATCH 03/10] aio-win32: Factor out duplicate code into aio_dispatch_handlers Paolo Bonzini
                   ` (9 subsequent siblings)
  11 siblings, 0 replies; 25+ messages in thread
From: Paolo Bonzini @ 2014-07-09  9:53 UTC (permalink / raw)
  To: qemu-devel

This is similar to what aio_poll does in the stand-alone case.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 aio-win32.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/aio-win32.c b/aio-win32.c
index fe7ee5b..7b28411 100644
--- a/aio-win32.c
+++ b/aio-win32.c
@@ -109,9 +109,6 @@ bool aio_poll(AioContext *ctx, bool blocking)
         progress = true;
     }
 
-    /* Run timers */
-    progress |= timerlistgroup_run_timers(&ctx->tlg);
-
     /*
      * Then dispatch any pending callbacks from the GSource.
      *
@@ -145,6 +142,9 @@ bool aio_poll(AioContext *ctx, bool blocking)
         }
     }
 
+    /* Run timers */
+    progress |= timerlistgroup_run_timers(&ctx->tlg);
+
     if (progress && !blocking) {
         return true;
     }
-- 
1.9.3


* [Qemu-devel] [PATCH 03/10] aio-win32: Factor out duplicate code into aio_dispatch_handlers
  2014-07-09  9:53 [Qemu-devel] [PATCH for-2.2 00/10] AioContext cleanups and Win32 socket support Paolo Bonzini
  2014-07-09  9:53 ` [Qemu-devel] [PATCH 01/10] AioContext: take bottom halves into account when computing aio_poll timeout Paolo Bonzini
  2014-07-09  9:53 ` [Qemu-devel] [PATCH 02/10] aio-win32: Evaluate timers after handles Paolo Bonzini
@ 2014-07-09  9:53 ` Paolo Bonzini
  2014-07-09  9:53 ` [Qemu-devel] [PATCH 04/10] AioContext: run bottom halves after polling Paolo Bonzini
                   ` (8 subsequent siblings)
  11 siblings, 0 replies; 25+ messages in thread
From: Paolo Bonzini @ 2014-07-09  9:53 UTC (permalink / raw)
  To: qemu-devel

Later, the call to aio_dispatch will move into the GSource wrapper, while the
standalone case will still call the component functions aio_bh_poll,
aio_dispatch_handlers and timerlistgroup_run_timers.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 aio-win32.c | 89 +++++++++++++++++++++++++++----------------------------------
 1 file changed, 39 insertions(+), 50 deletions(-)

diff --git a/aio-win32.c b/aio-win32.c
index 7b28411..5e37b42 100644
--- a/aio-win32.c
+++ b/aio-win32.c
@@ -89,29 +89,12 @@ bool aio_pending(AioContext *ctx)
     return false;
 }
 
-bool aio_poll(AioContext *ctx, bool blocking)
+static bool aio_dispatch_handlers(AioContext *ctx, HANDLE event)
 {
     AioHandler *node;
-    HANDLE events[MAXIMUM_WAIT_OBJECTS + 1];
-    bool progress;
-    int count;
-    int timeout;
-
-    progress = false;
-
-    /*
-     * If there are callbacks left that have been queued, we need to call then.
-     * Do not call select in this case, because it is possible that the caller
-     * does not need a complete flush (as is the case for aio_poll loops).
-     */
-    if (aio_bh_poll(ctx)) {
-        blocking = false;
-        progress = true;
-    }
+    bool progress = false;
 
     /*
-     * Then dispatch any pending callbacks from the GSource.
-     *
      * We have to walk very carefully in case aio_set_fd_handler is
      * called while we're walking.
      */
@@ -121,7 +104,9 @@ bool aio_poll(AioContext *ctx, bool blocking)
 
         ctx->walking_handlers++;
 
-        if (node->pfd.revents && node->io_notify) {
+        if (!node->deleted &&
+            (node->pfd.revents || event_notifier_get_handle(node->e) == event) &&
+            node->io_notify) {
             node->pfd.revents = 0;
             node->io_notify(node->e);
 
@@ -142,8 +127,40 @@ bool aio_poll(AioContext *ctx, bool blocking)
         }
     }
 
-    /* Run timers */
+    return progress;
+}
+
+static bool aio_dispatch(AioContext *ctx)
+{
+    bool progress;
+
+    progress = aio_dispatch_handlers(ctx, INVALID_HANDLE_VALUE);
     progress |= timerlistgroup_run_timers(&ctx->tlg);
+    return progress;
+}
+
+bool aio_poll(AioContext *ctx, bool blocking)
+{
+    AioHandler *node;
+    HANDLE events[MAXIMUM_WAIT_OBJECTS + 1];
+    bool progress;
+    int count;
+    int timeout;
+
+    progress = false;
+
+    /*
+     * If there are callbacks left that have been queued, we need to call then.
+     * Do not call select in this case, because it is possible that the caller
+     * does not need a complete flush (as is the case for aio_poll loops).
+     */
+    if (aio_bh_poll(ctx)) {
+        blocking = false;
+        progress = true;
+    }
+
+    /* Dispatch any pending callbacks from the GSource.  */
+    progress |= aio_dispatch(ctx);
 
     if (progress && !blocking) {
         return true;
@@ -176,35 +193,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
 
         blocking = false;
 
-        /* we have to walk very carefully in case
-         * aio_set_fd_handler is called while we're walking */
-        node = QLIST_FIRST(&ctx->aio_handlers);
-        while (node) {
-            AioHandler *tmp;
-
-            ctx->walking_handlers++;
-
-            if (!node->deleted &&
-                event_notifier_get_handle(node->e) == events[ret - WAIT_OBJECT_0] &&
-                node->io_notify) {
-                node->io_notify(node->e);
-
-                /* aio_notify() does not count as progress */
-                if (node->e != &ctx->notifier) {
-                    progress = true;
-                }
-            }
-
-            tmp = node;
-            node = QLIST_NEXT(node, node);
-
-            ctx->walking_handlers--;
-
-            if (!ctx->walking_handlers && tmp->deleted) {
-                QLIST_REMOVE(tmp, node);
-                g_free(tmp);
-            }
-        }
+        progress |= aio_dispatch_handlers(ctx, events[ret - WAIT_OBJECT_0]);
 
         /* Try again, but only call each handler once.  */
         events[ret - WAIT_OBJECT_0] = events[--count];
-- 
1.9.3


* [Qemu-devel] [PATCH 04/10] AioContext: run bottom halves after polling
  2014-07-09  9:53 [Qemu-devel] [PATCH for-2.2 00/10] AioContext cleanups and Win32 socket support Paolo Bonzini
                   ` (2 preceding siblings ...)
  2014-07-09  9:53 ` [Qemu-devel] [PATCH 03/10] aio-win32: Factor out duplicate code into aio_dispatch_handlers Paolo Bonzini
@ 2014-07-09  9:53 ` Paolo Bonzini
  2014-07-09  9:53 ` [Qemu-devel] [PATCH 05/10] AioContext: export and use aio_dispatch Paolo Bonzini
                   ` (7 subsequent siblings)
  11 siblings, 0 replies; 25+ messages in thread
From: Paolo Bonzini @ 2014-07-09  9:53 UTC (permalink / raw)
  To: qemu-devel

Make the dispatching phase the same before blocking and afterwards.
The next patch will make aio_dispatch public and use it directly
for the GSource case, instead of aio_poll.  aio_poll can then be
simplified heavily.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 aio-posix.c | 4 ++++
 aio-win32.c | 8 +++++++-
 2 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/aio-posix.c b/aio-posix.c
index 55706f8..798a3ff 100644
--- a/aio-posix.c
+++ b/aio-posix.c
@@ -264,6 +264,10 @@ bool aio_poll(AioContext *ctx, bool blocking)
 
     /* Run dispatch even if there were no readable fds to run timers */
     aio_set_dispatching(ctx, true);
+    if (aio_bh_poll(ctx)) {
+        progress = true;
+    }
+
     if (aio_dispatch(ctx)) {
         progress = true;
     }
diff --git a/aio-win32.c b/aio-win32.c
index 5e37b42..2ac38a8 100644
--- a/aio-win32.c
+++ b/aio-win32.c
@@ -143,7 +143,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
 {
     AioHandler *node;
     HANDLE events[MAXIMUM_WAIT_OBJECTS + 1];
-    bool progress;
+    bool progress, first;
     int count;
     int timeout;
 
@@ -177,6 +177,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
     }
 
     ctx->walking_handlers--;
+    first = true;
 
     /* wait until next event */
     while (count > 0) {
@@ -186,6 +187,11 @@ bool aio_poll(AioContext *ctx, bool blocking)
             ? qemu_timeout_ns_to_ms(aio_compute_timeout(ctx)) : 0;
         ret = WaitForMultipleObjects(count, events, FALSE, timeout);
 
+        if (first && aio_bh_poll(ctx)) {
+            progress = true;
+        }
+        first = false;
+
         /* if we have any signaled events, dispatch event */
         if ((DWORD) (ret - WAIT_OBJECT_0) >= count) {
             break;
-- 
1.9.3


* [Qemu-devel] [PATCH 05/10] AioContext: export and use aio_dispatch
  2014-07-09  9:53 [Qemu-devel] [PATCH for-2.2 00/10] AioContext cleanups and Win32 socket support Paolo Bonzini
                   ` (3 preceding siblings ...)
  2014-07-09  9:53 ` [Qemu-devel] [PATCH 04/10] AioContext: run bottom halves after polling Paolo Bonzini
@ 2014-07-09  9:53 ` Paolo Bonzini
  2014-07-09  9:53 ` [Qemu-devel] [PATCH 06/10] test-aio: test timers on Windows too Paolo Bonzini
                   ` (6 subsequent siblings)
  11 siblings, 0 replies; 25+ messages in thread
From: Paolo Bonzini @ 2014-07-09  9:53 UTC (permalink / raw)
  To: qemu-devel

So far, aio_poll's scheme was dispatch/poll/dispatch, where
the first dispatch phase was used only in the GSource case in
order to avoid a blocking poll.  Earlier patches changed it to
dispatch/prepare/poll/dispatch, where prepare is aio_compute_timeout.

By making aio_dispatch public, we can remove the first dispatch
phase altogether, so that both aio_poll and the GSource use the same
prepare/poll/dispatch scheme.

This patch breaks the invariant that aio_poll(..., true) will not block
the first time it returns false.  This used to be fundamental for
qemu_aio_flush's implementation as "while (qemu_aio_wait()) {}" but
no code in QEMU relies on this invariant anymore.  The return value
of aio_poll() is now comparable with that of g_main_context_iteration.
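
As an illustration, a flush built on the old invariant looked like this
(the historical idiom mentioned above):

    while (qemu_aio_wait()) {
        /* returned false the first time no progress was made */
    }

whereas under the new semantics a caller must loop on its own completion
condition, e.g. (sketch; "done" is a hypothetical flag set by a callback):

    while (!done) {
        aio_poll(ctx, true);
    }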

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 aio-posix.c         | 55 +++++++++++++----------------------------------------
 aio-win32.c         | 31 ++++--------------------------
 async.c             |  2 +-
 include/block/aio.h |  6 ++++++
 4 files changed, 24 insertions(+), 70 deletions(-)

diff --git a/aio-posix.c b/aio-posix.c
index 798a3ff..0936b4f 100644
--- a/aio-posix.c
+++ b/aio-posix.c
@@ -119,12 +119,21 @@ bool aio_pending(AioContext *ctx)
     return false;
 }
 
-static bool aio_dispatch(AioContext *ctx)
+bool aio_dispatch(AioContext *ctx)
 {
     AioHandler *node;
     bool progress = false;
 
     /*
+     * If there are callbacks left that have been queued, we need to call them.
+     * Do not call select in this case, because it is possible that the caller
+     * does not need a complete flush (as is the case for aio_poll loops).
+     */
+    if (aio_bh_poll(ctx)) {
+        progress = true;
+    }
+
+    /*
      * We have to walk very carefully in case aio_set_fd_handler is
      * called while we're walking.
      */
@@ -184,22 +193,9 @@ bool aio_poll(AioContext *ctx, bool blocking)
 
     /* aio_notify can avoid the expensive event_notifier_set if
      * everything (file descriptors, bottom halves, timers) will
-     * be re-evaluated before the next blocking poll().  This happens
-     * in two cases:
-     *
-     * 1) when aio_poll is called with blocking == false
-     *
-     * 2) when we are called after poll().  If we are called before
-     *    poll(), bottom halves will not be re-evaluated and we need
-     *    aio_notify() if blocking == true.
-     *
-     * The first aio_dispatch() only does something when AioContext is
-     * running as a GSource, and in that case aio_poll is used only
-     * with blocking == false, so this optimization is already quite
-     * effective.  However, the code is ugly and should be restructured
-     * to have a single aio_dispatch() call.  To do this, we need to
-     * reorganize aio_poll into a prepare/poll/dispatch model like
-     * glib's.
+     * be re-evaluated before the next blocking poll().  This is
+     * already true when aio_poll is called with blocking == false;
+     * if blocking == true, it is only true after poll() returns.
      *
      * If we're in a nested event loop, ctx->dispatching might be true.
      * In that case we can restore it just before returning, but we
@@ -207,26 +203,6 @@ bool aio_poll(AioContext *ctx, bool blocking)
      */
     aio_set_dispatching(ctx, !blocking);
 
-    /*
-     * If there are callbacks left that have been queued, we need to call them.
-     * Do not call select in this case, because it is possible that the caller
-     * does not need a complete flush (as is the case for aio_poll loops).
-     */
-    if (aio_bh_poll(ctx)) {
-        blocking = false;
-        progress = true;
-    }
-
-    /* Re-evaluate condition (1) above.  */
-    aio_set_dispatching(ctx, !blocking);
-    if (aio_dispatch(ctx)) {
-        progress = true;
-    }
-
-    if (progress && !blocking) {
-        goto out;
-    }
-
     ctx->walking_handlers++;
 
     g_array_set_size(ctx->pollfds, 0);
@@ -264,15 +240,10 @@ bool aio_poll(AioContext *ctx, bool blocking)
 
     /* Run dispatch even if there were no readable fds to run timers */
     aio_set_dispatching(ctx, true);
-    if (aio_bh_poll(ctx)) {
-        progress = true;
-    }
-
     if (aio_dispatch(ctx)) {
         progress = true;
     }
 
-out:
     aio_set_dispatching(ctx, was_dispatching);
     return progress;
 }
diff --git a/aio-win32.c b/aio-win32.c
index 2ac38a8..1ec434a 100644
--- a/aio-win32.c
+++ b/aio-win32.c
@@ -130,11 +130,12 @@ static bool aio_dispatch_handlers(AioContext *ctx, HANDLE event)
     return progress;
 }
 
-static bool aio_dispatch(AioContext *ctx)
+bool aio_dispatch(AioContext *ctx)
 {
     bool progress;
 
-    progress = aio_dispatch_handlers(ctx, INVALID_HANDLE_VALUE);
+    progress = aio_bh_poll(ctx);
+    progress |= aio_dispatch_handlers(ctx, INVALID_HANDLE_VALUE);
     progress |= timerlistgroup_run_timers(&ctx->tlg);
     return progress;
 }
@@ -149,23 +150,6 @@ bool aio_poll(AioContext *ctx, bool blocking)
 
     progress = false;
 
-    /*
-     * If there are callbacks left that have been queued, we need to call then.
-     * Do not call select in this case, because it is possible that the caller
-     * does not need a complete flush (as is the case for aio_poll loops).
-     */
-    if (aio_bh_poll(ctx)) {
-        blocking = false;
-        progress = true;
-    }
-
-    /* Dispatch any pending callbacks from the GSource.  */
-    progress |= aio_dispatch(ctx);
-
-    if (progress && !blocking) {
-        return true;
-    }
-
     ctx->walking_handlers++;
 
     /* fill fd sets */
@@ -205,14 +189,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
         events[ret - WAIT_OBJECT_0] = events[--count];
     }
 
-    if (blocking) {
-        /* Run the timers a second time. We do this because otherwise aio_wait
-         * will not note progress - and will stop a drain early - if we have
-         * a timer that was not ready to run entering g_poll but is ready
-         * after g_poll. This will only do anything if a timer has expired.
-         */
-        progress |= timerlistgroup_run_timers(&ctx->tlg);
-    }
+    progress |= timerlistgroup_run_timers(&ctx->tlg);
 
     return progress;
 }
diff --git a/async.c b/async.c
index ac40eab..a5126ff 100644
--- a/async.c
+++ b/async.c
@@ -213,7 +213,7 @@ aio_ctx_dispatch(GSource     *source,
     AioContext *ctx = (AioContext *) source;
 
     assert(callback == NULL);
-    aio_poll(ctx, false);
+    aio_dispatch(ctx);
     return true;
 }
 
diff --git a/include/block/aio.h b/include/block/aio.h
index 7eeb961..45408f7 100644
--- a/include/block/aio.h
+++ b/include/block/aio.h
@@ -211,6 +211,12 @@ void qemu_bh_delete(QEMUBH *bh);
  */
 bool aio_pending(AioContext *ctx);
 
+/* Dispatch any pending callbacks from the GSource attached to the AioContext.
+ *
+ * This is used internally in the implementation of the GSource.
+ */
+bool aio_dispatch(AioContext *ctx);
+
 /* Progress in completing AIO work to occur.  This can issue new pending
  * aio as a result of executing I/O completion or bh callbacks.
  *
-- 
1.9.3


* [Qemu-devel] [PATCH 06/10] test-aio: test timers on Windows too
  2014-07-09  9:53 [Qemu-devel] [PATCH for-2.2 00/10] AioContext cleanups and Win32 socket support Paolo Bonzini
                   ` (4 preceding siblings ...)
  2014-07-09  9:53 ` [Qemu-devel] [PATCH 05/10] AioContext: export and use aio_dispatch Paolo Bonzini
@ 2014-07-09  9:53 ` Paolo Bonzini
  2014-07-09  9:53 ` [Qemu-devel] [PATCH 07/10] aio-win32: add aio_set_dispatching optimization Paolo Bonzini
                   ` (5 subsequent siblings)
  11 siblings, 0 replies; 25+ messages in thread
From: Paolo Bonzini @ 2014-07-09  9:53 UTC (permalink / raw)
  To: qemu-devel

Use EventNotifier instead of a pipe, which makes it trivial to test
timers on Windows.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 tests/test-aio.c | 48 +++++++++++-------------------------------------
 1 file changed, 11 insertions(+), 37 deletions(-)

diff --git a/tests/test-aio.c b/tests/test-aio.c
index 4c40a49..23ca10a 100644
--- a/tests/test-aio.c
+++ b/tests/test-aio.c
@@ -57,8 +57,6 @@ static void bh_test_cb(void *opaque)
     }
 }
 
-#if !defined(_WIN32)
-
 static void timer_test_cb(void *opaque)
 {
     TimerTestData *data = opaque;
@@ -68,12 +66,10 @@ static void timer_test_cb(void *opaque)
     }
 }
 
-static void dummy_io_handler_read(void *opaque)
+static void dummy_io_handler_read(EventNotifier *e)
 {
 }
 
-#endif /* !_WIN32 */
-
 static void bh_delete_cb(void *opaque)
 {
     BHTestData *data = opaque;
@@ -428,24 +424,18 @@ static void test_wait_event_notifier_noflush(void)
     event_notifier_cleanup(&data.e);
 }
 
-#if !defined(_WIN32)
-
 static void test_timer_schedule(void)
 {
     TimerTestData data = { .n = 0, .ctx = ctx, .ns = SCALE_MS * 750LL,
                            .max = 2,
                            .clock_type = QEMU_CLOCK_VIRTUAL };
-    int pipefd[2];
+    EventNotifier e;
 
     /* aio_poll will not block to wait for timers to complete unless it has
      * an fd to wait on. Fixing this breaks other tests. So create a dummy one.
      */
-    g_assert(!qemu_pipe(pipefd));
-    qemu_set_nonblock(pipefd[0]);
-    qemu_set_nonblock(pipefd[1]);
-
-    aio_set_fd_handler(ctx, pipefd[0],
-                       dummy_io_handler_read, NULL, NULL);
+    event_notifier_init(&e, false);
+    aio_set_event_notifier(ctx, &e, dummy_io_handler_read);
     aio_poll(ctx, false);
 
     aio_timer_init(ctx, &data.timer, data.clock_type,
@@ -484,15 +474,12 @@ static void test_timer_schedule(void)
     g_assert(!aio_poll(ctx, false));
     g_assert_cmpint(data.n, ==, 2);
 
-    aio_set_fd_handler(ctx, pipefd[0], NULL, NULL, NULL);
-    close(pipefd[0]);
-    close(pipefd[1]);
+    aio_set_event_notifier(ctx, &e, NULL);
+    event_notifier_cleanup(&e);
 
     timer_del(&data.timer);
 }
 
-#endif /* !_WIN32 */
-
 /* Now the same tests, using the context as a GSource.  They are
  * very similar to the ones above, with g_main_context_iteration
  * replacing aio_poll.  However:
@@ -775,25 +762,19 @@ static void test_source_wait_event_notifier_noflush(void)
     event_notifier_cleanup(&data.e);
 }
 
-#if !defined(_WIN32)
-
 static void test_source_timer_schedule(void)
 {
     TimerTestData data = { .n = 0, .ctx = ctx, .ns = SCALE_MS * 750LL,
                            .max = 2,
                            .clock_type = QEMU_CLOCK_VIRTUAL };
-    int pipefd[2];
+    EventNotifier e;
     int64_t expiry;
 
     /* aio_poll will not block to wait for timers to complete unless it has
      * an fd to wait on. Fixing this breaks other tests. So create a dummy one.
      */
-    g_assert(!qemu_pipe(pipefd));
-    qemu_set_nonblock(pipefd[0]);
-    qemu_set_nonblock(pipefd[1]);
-
-    aio_set_fd_handler(ctx, pipefd[0],
-                       dummy_io_handler_read, NULL, NULL);
+    event_notifier_init(&e, false);
+    aio_set_event_notifier(ctx, &e, dummy_io_handler_read);
     do {} while (g_main_context_iteration(NULL, false));
 
     aio_timer_init(ctx, &data.timer, data.clock_type,
@@ -818,15 +799,12 @@ static void test_source_timer_schedule(void)
     g_assert_cmpint(data.n, ==, 2);
     g_assert(qemu_clock_get_ns(data.clock_type) > expiry);
 
-    aio_set_fd_handler(ctx, pipefd[0], NULL, NULL, NULL);
-    close(pipefd[0]);
-    close(pipefd[1]);
+    aio_set_event_notifier(ctx, &e, NULL);
+    event_notifier_cleanup(&e);
 
     timer_del(&data.timer);
 }
 
-#endif /* !_WIN32 */
-
 
 /* End of tests.  */
 
@@ -857,9 +835,7 @@ int main(int argc, char **argv)
     g_test_add_func("/aio/event/wait",              test_wait_event_notifier);
     g_test_add_func("/aio/event/wait/no-flush-cb",  test_wait_event_notifier_noflush);
     g_test_add_func("/aio/event/flush",             test_flush_event_notifier);
-#if !defined(_WIN32)
     g_test_add_func("/aio/timer/schedule",          test_timer_schedule);
-#endif
 
     g_test_add_func("/aio-gsource/notify",                  test_source_notify);
     g_test_add_func("/aio-gsource/flush",                   test_source_flush);
@@ -874,8 +850,6 @@ int main(int argc, char **argv)
     g_test_add_func("/aio-gsource/event/wait",              test_source_wait_event_notifier);
     g_test_add_func("/aio-gsource/event/wait/no-flush-cb",  test_source_wait_event_notifier_noflush);
     g_test_add_func("/aio-gsource/event/flush",             test_source_flush_event_notifier);
-#if !defined(_WIN32)
     g_test_add_func("/aio-gsource/timer/schedule",          test_source_timer_schedule);
-#endif
     return g_test_run();
 }
-- 
1.9.3


* [Qemu-devel] [PATCH 07/10] aio-win32: add aio_set_dispatching optimization
  2014-07-09  9:53 [Qemu-devel] [PATCH for-2.2 00/10] AioContext cleanups and Win32 socket support Paolo Bonzini
                   ` (5 preceding siblings ...)
  2014-07-09  9:53 ` [Qemu-devel] [PATCH 06/10] test-aio: test timers on Windows too Paolo Bonzini
@ 2014-07-09  9:53 ` Paolo Bonzini
  2014-07-09  9:53 ` [Qemu-devel] [PATCH 08/10] AioContext: introduce aio_prepare Paolo Bonzini
                   ` (4 subsequent siblings)
  11 siblings, 0 replies; 25+ messages in thread
From: Paolo Bonzini @ 2014-07-09  9:53 UTC (permalink / raw)
  To: qemu-devel

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 aio-win32.c | 17 ++++++++++++++++-
 1 file changed, 16 insertions(+), 1 deletion(-)

diff --git a/aio-win32.c b/aio-win32.c
index 1ec434a..fd52686 100644
--- a/aio-win32.c
+++ b/aio-win32.c
@@ -144,12 +144,25 @@ bool aio_poll(AioContext *ctx, bool blocking)
 {
     AioHandler *node;
     HANDLE events[MAXIMUM_WAIT_OBJECTS + 1];
-    bool progress, first;
+    bool was_dispatching, progress, first;
     int count;
     int timeout;
 
+    was_dispatching = ctx->dispatching;
     progress = false;
 
+    /* aio_notify can avoid the expensive event_notifier_set if
+     * everything (file descriptors, bottom halves, timers) will
+     * be re-evaluated before the next blocking poll().  This is
+     * already true when aio_poll is called with blocking == false;
+     * if blocking == true, it is only true after poll() returns.
+     *
+     * If we're in a nested event loop, ctx->dispatching might be true.
+     * In that case we can restore it just before returning, but we
+     * have to clear it now.
+     */
+    aio_set_dispatching(ctx, !blocking);
+
     ctx->walking_handlers++;
 
     /* fill fd sets */
@@ -170,6 +183,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
         timeout = blocking
             ? qemu_timeout_ns_to_ms(aio_compute_timeout(ctx)) : 0;
         ret = WaitForMultipleObjects(count, events, FALSE, timeout);
+        aio_set_dispatching(ctx, true);
 
         if (first && aio_bh_poll(ctx)) {
             progress = true;
@@ -191,5 +205,6 @@ bool aio_poll(AioContext *ctx, bool blocking)
 
     progress |= timerlistgroup_run_timers(&ctx->tlg);
 
+    aio_set_dispatching(ctx, was_dispatching);
     return progress;
 }
-- 
1.9.3


* [Qemu-devel] [PATCH 08/10] AioContext: introduce aio_prepare
  2014-07-09  9:53 [Qemu-devel] [PATCH for-2.2 00/10] AioContext cleanups and Win32 socket support Paolo Bonzini
                   ` (6 preceding siblings ...)
  2014-07-09  9:53 ` [Qemu-devel] [PATCH 07/10] aio-win32: add aio_set_dispatching optimization Paolo Bonzini
@ 2014-07-09  9:53 ` Paolo Bonzini
  2014-07-09  9:53 ` [Qemu-devel] [PATCH 09/10] qemu-coroutine-io: fix for Win32 Paolo Bonzini
                   ` (3 subsequent siblings)
  11 siblings, 0 replies; 25+ messages in thread
From: Paolo Bonzini @ 2014-07-09  9:53 UTC (permalink / raw)
  To: qemu-devel

This will be used to implement socket polling on Windows.
On Windows, select() and g_poll() are completely different;
sockets are polled with select() before calling g_poll(), and
the g_poll() call must be non-blocking if select() says a
socket is ready.
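
The probe can be sketched as follows (illustrative only; the final patch
integrates this into aio_prepare over the AioContext's handler list):

    /* A zero-timeout select() never blocks, so it works as a
     * level-triggered readiness probe before a blocking g_poll(). */
    static bool socket_ready_for_read(SOCKET s)
    {
        fd_set rfds;
        struct timeval tv0 = { 0, 0 };

        FD_ZERO(&rfds);
        FD_SET(s, &rfds);
        /* the first argument is ignored by Winsock's select() */
        return select(0, &rfds, NULL, NULL, &tv0) > 0;
    }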

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 aio-posix.c         | 5 +++++
 aio-win32.c         | 5 +++++
 async.c             | 5 +++++
 include/block/aio.h | 9 ++++++++-
 4 files changed, 23 insertions(+), 1 deletion(-)

diff --git a/aio-posix.c b/aio-posix.c
index 0936b4f..d3ac06e 100644
--- a/aio-posix.c
+++ b/aio-posix.c
@@ -100,6 +100,11 @@ void aio_set_event_notifier(AioContext *ctx,
                        (IOHandler *)io_read, NULL, notifier);
 }
 
+bool aio_prepare(AioContext *ctx)
+{
+    return false;
+}
+
 bool aio_pending(AioContext *ctx)
 {
     AioHandler *node;
diff --git a/aio-win32.c b/aio-win32.c
index fd52686..4542270 100644
--- a/aio-win32.c
+++ b/aio-win32.c
@@ -76,6 +76,11 @@ void aio_set_event_notifier(AioContext *ctx,
     aio_notify(ctx);
 }
 
+bool aio_prepare(AioContext *ctx)
+{
+    return false;
+}
+
 bool aio_pending(AioContext *ctx)
 {
     AioHandler *node;
diff --git a/async.c b/async.c
index a5126ff..bcba052 100644
--- a/async.c
+++ b/async.c
@@ -188,6 +188,11 @@ aio_ctx_prepare(GSource *source, gint    *timeout)
 
     /* We assume there is no timeout already supplied */
     *timeout = qemu_timeout_ns_to_ms(aio_compute_timeout(ctx));
+
+    if (aio_prepare(ctx)) {
+        *timeout = 0;
+    }
+
     return *timeout == 0;
 }
 
diff --git a/include/block/aio.h b/include/block/aio.h
index 45408f7..d129e22 100644
--- a/include/block/aio.h
+++ b/include/block/aio.h
@@ -205,7 +205,14 @@ void qemu_bh_cancel(QEMUBH *bh);
 void qemu_bh_delete(QEMUBH *bh);
 
 /* Return whether there are any pending callbacks from the GSource
- * attached to the AioContext.
+ * attached to the AioContext, before g_poll is invoked.
+ *
+ * This is used internally in the implementation of the GSource.
+ */
+bool aio_prepare(AioContext *ctx);
+
+/* Return whether there are any pending callbacks from the GSource
+ * attached to the AioContext, after g_poll is invoked.
  *
  * This is used internally in the implementation of the GSource.
  */
-- 
1.9.3


* [Qemu-devel] [PATCH 09/10] qemu-coroutine-io: fix for Win32
  2014-07-09  9:53 [Qemu-devel] [PATCH for-2.2 00/10] AioContext cleanups and Win32 socket support Paolo Bonzini
                   ` (7 preceding siblings ...)
  2014-07-09  9:53 ` [Qemu-devel] [PATCH 08/10] AioContext: introduce aio_prepare Paolo Bonzini
@ 2014-07-09  9:53 ` Paolo Bonzini
  2014-07-09  9:53 ` [Qemu-devel] [PATCH 10/10] aio-win32: add support for sockets Paolo Bonzini
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 25+ messages in thread
From: Paolo Bonzini @ 2014-07-09  9:53 UTC (permalink / raw)
  To: qemu-devel

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 nbd.c               | 2 +-
 qemu-coroutine-io.c | 4 +++-
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/nbd.c b/nbd.c
index e7d1cee..5c28f71 100644
--- a/nbd.c
+++ b/nbd.c
@@ -156,7 +156,7 @@ ssize_t nbd_wr_sync(int fd, void *buffer, size_t size, bool do_read)
             err = socket_error();
 
             /* recoverable error */
-            if (err == EINTR || (offset > 0 && err == EAGAIN)) {
+            if (err == EINTR || (offset > 0 && (err == EAGAIN || err == EWOULDBLOCK))) {
                 continue;
             }
 
diff --git a/qemu-coroutine-io.c b/qemu-coroutine-io.c
index 054ca70..d404926 100644
--- a/qemu-coroutine-io.c
+++ b/qemu-coroutine-io.c
@@ -34,13 +34,15 @@ qemu_co_sendv_recvv(int sockfd, struct iovec *iov, unsigned iov_cnt,
 {
     size_t done = 0;
     ssize_t ret;
+    int err;
     while (done < bytes) {
         ret = iov_send_recv(sockfd, iov, iov_cnt,
                             offset + done, bytes - done, do_send);
         if (ret > 0) {
             done += ret;
         } else if (ret < 0) {
-            if (errno == EAGAIN) {
+            err = socket_error();
+            if (err == EAGAIN || err == EWOULDBLOCK) {
                 qemu_coroutine_yield();
             } else if (done == 0) {
                 return -1;
-- 
1.9.3


* [Qemu-devel] [PATCH 10/10] aio-win32: add support for sockets
  2014-07-09  9:53 [Qemu-devel] [PATCH for-2.2 00/10] AioContext cleanups and Win32 socket support Paolo Bonzini
                   ` (8 preceding siblings ...)
  2014-07-09  9:53 ` [Qemu-devel] [PATCH 09/10] qemu-coroutine-io: fix for Win32 Paolo Bonzini
@ 2014-07-09  9:53 ` Paolo Bonzini
  2014-09-12  1:39   ` TeLeMan
  2014-09-12  1:43   ` TeLeMan
  2014-08-01 14:52 ` [Qemu-devel] [PATCH for-2.2 00/10] AioContext cleanups and Win32 socket support Stefan Hajnoczi
  2014-08-28 14:00 ` Stefan Hajnoczi
  11 siblings, 2 replies; 25+ messages in thread
From: Paolo Bonzini @ 2014-07-09  9:53 UTC (permalink / raw)
  To: qemu-devel

Uses the same select/WSAEventSelect scheme as main-loop.c.
WSAEventSelect() is edge-triggered, so it cannot be used
directly, but it is still used as a way to exit from a
blocking g_poll().

Before g_poll() is called, we poll sockets with a non-blocking
select() to achieve the level-triggered semantics we require:
if a socket is ready, the g_poll() is made non-blocking too.

Based on a patch from Or Goshen.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 aio-win32.c         | 150 ++++++++++++++++++++++++++++++++++++++++++++++++++--
 block/Makefile.objs |   2 -
 include/block/aio.h |   2 -
 3 files changed, 145 insertions(+), 9 deletions(-)

diff --git a/aio-win32.c b/aio-win32.c
index 4542270..61e3d2d 100644
--- a/aio-win32.c
+++ b/aio-win32.c
@@ -22,12 +22,80 @@
 
 struct AioHandler {
     EventNotifier *e;
+    IOHandler *io_read;
+    IOHandler *io_write;
     EventNotifierHandler *io_notify;
     GPollFD pfd;
     int deleted;
+    void *opaque;
     QLIST_ENTRY(AioHandler) node;
 };
 
+void aio_set_fd_handler(AioContext *ctx,
+                        int fd,
+                        IOHandler *io_read,
+                        IOHandler *io_write,
+                        void *opaque)
+{
+    /* fd is a SOCKET in our case */
+    AioHandler *node;
+
+    QLIST_FOREACH(node, &ctx->aio_handlers, node) {
+        if (node->pfd.fd == fd && !node->deleted) {
+            break;
+        }
+    }
+
+    /* Are we deleting the fd handler? */
+    if (!io_read && !io_write) {
+        if (node) {
+            /* If the lock is held, just mark the node as deleted */
+            if (ctx->walking_handlers) {
+                node->deleted = 1;
+                node->pfd.revents = 0;
+            } else {
+                /* Otherwise, delete it for real.  We can't just mark it as
+                 * deleted because deleted nodes are only cleaned up after
+                 * releasing the walking_handlers lock.
+                 */
+                QLIST_REMOVE(node, node);
+                g_free(node);
+            }
+        }
+    } else {
+        HANDLE event;
+
+        if (node == NULL) {
+            /* Alloc and insert if it's not already there */
+            node = g_malloc0(sizeof(AioHandler));
+            node->pfd.fd = fd;
+            QLIST_INSERT_HEAD(&ctx->aio_handlers, node, node);
+        }
+
+        node->pfd.events = 0;
+        if (node->io_read) {
+            node->pfd.events |= G_IO_IN;
+        }
+        if (node->io_write) {
+            node->pfd.events |= G_IO_OUT;
+        }
+
+        node->e = &ctx->notifier;
+
+        /* Update handler with latest information */
+        node->opaque = opaque;
+        node->io_read = io_read;
+        node->io_write = io_write;
+
+        event = event_notifier_get_handle(&ctx->notifier);
+        WSAEventSelect(node->pfd.fd, event,
+                       FD_READ | FD_ACCEPT | FD_CLOSE |
+                       FD_CONNECT | FD_WRITE | FD_OOB);
+    }
+
+    aio_notify(ctx);
+}
+
 void aio_set_event_notifier(AioContext *ctx,
                             EventNotifier *e,
                             EventNotifierHandler *io_notify)
@@ -78,7 +146,39 @@ void aio_set_event_notifier(AioContext *ctx,
 
 bool aio_prepare(AioContext *ctx)
 {
-    return false;
+    static struct timeval tv0;
+    AioHandler *node;
+    bool have_select_revents = false;
+    fd_set rfds, wfds;
+
+    /* fill fd sets */
+    FD_ZERO(&rfds);
+    FD_ZERO(&wfds);
+    QLIST_FOREACH(node, &ctx->aio_handlers, node) {
+        if (node->io_read) {
+            FD_SET ((SOCKET)node->pfd.fd, &rfds);
+        }
+        if (node->io_write) {
+            FD_SET ((SOCKET)node->pfd.fd, &wfds);
+        }
+    }
+
+    if (select(0, &rfds, &wfds, NULL, &tv0) > 0) {
+        QLIST_FOREACH(node, &ctx->aio_handlers, node) {
+            node->pfd.revents = 0;
+            if (FD_ISSET(node->pfd.fd, &rfds)) {
+                node->pfd.revents |= G_IO_IN;
+                have_select_revents = true;
+            }
+
+            if (FD_ISSET(node->pfd.fd, &wfds)) {
+                node->pfd.revents |= G_IO_OUT;
+                have_select_revents = true;
+            }
+        }
+    }
+
+    return have_select_revents;
 }
 
 bool aio_pending(AioContext *ctx)
@@ -89,6 +189,13 @@ bool aio_pending(AioContext *ctx)
         if (node->pfd.revents && node->io_notify) {
             return true;
         }
+
+        if ((node->pfd.revents & G_IO_IN) && node->io_read) {
+            return true;
+        }
+        if ((node->pfd.revents & G_IO_OUT) && node->io_write) {
+            return true;
+        }
     }
 
     return false;
@@ -106,11 +213,12 @@ static bool aio_dispatch_handlers(AioContext *ctx, HANDLE event)
     node = QLIST_FIRST(&ctx->aio_handlers);
     while (node) {
         AioHandler *tmp;
+        int revents = node->pfd.revents;
 
         ctx->walking_handlers++;
 
         if (!node->deleted &&
-            (node->pfd.revents || event_notifier_get_handle(node->e) == event) &&
+            (revents || event_notifier_get_handle(node->e) == event) &&
             node->io_notify) {
             node->pfd.revents = 0;
             node->io_notify(node->e);
@@ -121,6 +229,28 @@ static bool aio_dispatch_handlers(AioContext *ctx, HANDLE event)
             }
         }
 
+        if (!node->deleted &&
+            (node->io_read || node->io_write)) {
+            node->pfd.revents = 0;
+            if ((revents & G_IO_IN) && node->io_read) {
+                node->io_read(node->opaque);
+                progress = true;
+            }
+            if ((revents & G_IO_OUT) && node->io_write) {
+                node->io_write(node->opaque);
+                progress = true;
+            }
+
+            /* if the next select() will return an event, we have progressed */
+            if (event == event_notifier_get_handle(&ctx->notifier)) {
+                WSANETWORKEVENTS ev;
+                WSAEnumNetworkEvents(node->pfd.fd, event, &ev);
+                if (ev.lNetworkEvents) {
+                    progress = true;
+                }
+            }
+        }
+
         tmp = node;
         node = QLIST_NEXT(node, node);
 
@@ -149,10 +279,15 @@ bool aio_poll(AioContext *ctx, bool blocking)
 {
     AioHandler *node;
     HANDLE events[MAXIMUM_WAIT_OBJECTS + 1];
-    bool was_dispatching, progress, first;
+    bool was_dispatching, progress, have_select_revents, first;
     int count;
     int timeout;
 
+    if (aio_prepare(ctx)) {
+        blocking = false;
+        have_select_revents = true;
+    }
+
     was_dispatching = ctx->dispatching;
     progress = false;
 
@@ -183,6 +318,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
 
     /* wait until next event */
     while (count > 0) {
+        HANDLE event;
         int ret;
 
         timeout = blocking
@@ -196,13 +332,17 @@ bool aio_poll(AioContext *ctx, bool blocking)
         first = false;
 
         /* if we have any signaled events, dispatch event */
-        if ((DWORD) (ret - WAIT_OBJECT_0) >= count) {
+        event = NULL;
+        if ((DWORD) (ret - WAIT_OBJECT_0) < count) {
+            event = events[ret - WAIT_OBJECT_0];
+        } else if (!have_select_revents) {
             break;
         }
 
+        have_select_revents = false;
         blocking = false;
 
-        progress |= aio_dispatch_handlers(ctx, events[ret - WAIT_OBJECT_0]);
+        progress |= aio_dispatch_handlers(ctx, event);
 
         /* Try again, but only call each handler once.  */
         events[ret - WAIT_OBJECT_0] = events[--count];
diff --git a/block/Makefile.objs b/block/Makefile.objs
index fd88c03..07eabf7 100644
--- a/block/Makefile.objs
+++ b/block/Makefile.objs
@@ -10,7 +10,6 @@ block-obj-$(CONFIG_WIN32) += raw-win32.o win32-aio.o
 block-obj-$(CONFIG_POSIX) += raw-posix.o
 block-obj-$(CONFIG_LINUX_AIO) += linux-aio.o
 
-ifeq ($(CONFIG_POSIX),y)
 block-obj-y += nbd.o nbd-client.o sheepdog.o
 block-obj-$(CONFIG_LIBISCSI) += iscsi.o
 block-obj-$(CONFIG_LIBNFS) += nfs.o
@@ -18,7 +17,6 @@ block-obj-$(CONFIG_CURL) += curl.o
 block-obj-$(CONFIG_RBD) += rbd.o
 block-obj-$(CONFIG_GLUSTERFS) += gluster.o
 block-obj-$(CONFIG_LIBSSH2) += ssh.o
-endif
 
 common-obj-y += stream.o
 common-obj-y += commit.o
diff --git a/include/block/aio.h b/include/block/aio.h
index d129e22..78fa2ee 100644
--- a/include/block/aio.h
+++ b/include/block/aio.h
@@ -239,7 +239,6 @@ bool aio_dispatch(AioContext *ctx);
  */
 bool aio_poll(AioContext *ctx, bool blocking);
 
-#ifdef CONFIG_POSIX
 /* Register a file descriptor and associated callbacks.  Behaves very similarly
  * to qemu_set_fd_handler2.  Unlike qemu_set_fd_handler2, these callbacks will
  * be invoked when using aio_poll().
@@ -252,7 +251,6 @@ void aio_set_fd_handler(AioContext *ctx,
                         IOHandler *io_read,
                         IOHandler *io_write,
                         void *opaque);
-#endif
 
 /* Register an event notifier and associated callbacks.  Behaves very similarly
  * to event_notifier_set_handler.  Unlike event_notifier_set_handler, these callbacks
-- 
1.9.3


* Re: [Qemu-devel] [PATCH 01/10] AioContext: take bottom halves into account when computing aio_poll timeout
  2014-07-09  9:53 ` [Qemu-devel] [PATCH 01/10] AioContext: take bottom halves into account when computing aio_poll timeout Paolo Bonzini
@ 2014-08-01 14:34   ` Stefan Hajnoczi
  2014-08-01 16:03     ` Paolo Bonzini
  0 siblings, 1 reply; 25+ messages in thread
From: Stefan Hajnoczi @ 2014-08-01 14:34 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: qemu-devel


On Wed, Jul 09, 2014 at 11:53:01AM +0200, Paolo Bonzini wrote:
> diff --git a/async.c b/async.c
> index 34af0b2..ac40eab 100644
> --- a/async.c
> +++ b/async.c
> @@ -152,39 +152,43 @@ void qemu_bh_delete(QEMUBH *bh)
>      bh->deleted = 1;
>  }
>  
> -static gboolean
> -aio_ctx_prepare(GSource *source, gint    *timeout)
> +int
> +aio_compute_timeout(AioContext *ctx)

The return value is now in nanoseconds, so a 32-bit int doesn't offer much
range (a signed int overflows after 2^31 - 1 ns, about 2.1 seconds).

Any reason to use int instead of int64_t as used by the timer API?



* Re: [Qemu-devel] [PATCH for-2.2 00/10] AioContext cleanups and Win32 socket support
  2014-07-09  9:53 [Qemu-devel] [PATCH for-2.2 00/10] AioContext cleanups and Win32 socket support Paolo Bonzini
                   ` (9 preceding siblings ...)
  2014-07-09  9:53 ` [Qemu-devel] [PATCH 10/10] aio-win32: add support for sockets Paolo Bonzini
@ 2014-08-01 14:52 ` Stefan Hajnoczi
  2014-08-01 15:07   ` Paolo Bonzini
  2014-08-28 14:00 ` Stefan Hajnoczi
  11 siblings, 1 reply; 25+ messages in thread
From: Stefan Hajnoczi @ 2014-08-01 14:52 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: qemu-devel


On Wed, Jul 09, 2014 at 11:53:00AM +0200, Paolo Bonzini wrote:
> This series heavily simplifies aio_poll by splitting it into three
> phases: prepare (aio_compute_timeout), poll, dispatch.  The resulting
> code shares more logic between aio_poll and the GSource wrappers,
> and makes it easier to add Win32 support for sockets.
> 
> Win32 support for sockets is a prerequisite for moving the NBD server
> into the BlockDriverState's attached AioContext.  It is done in the
> final patch, based on earlier work from Or Goshen (from Intel).
> I had to more or less rewrite it to fit the new framework, but you
> can see parts of Or's work, as well as traces of aio-posix.c and
> main-loop.c logic.
> 
> Tested with NBD boot under Wine.
> 
> Paolo
> 
> Paolo Bonzini (10):
>   AioContext: take bottom halves into account when computing aio_poll
>     timeout
>   aio-win32: Evaluate timers after handles
>   aio-win32: Factor out duplicate code into aio_dispatch_handlers
>   AioContext: run bottom halves after polling
>   AioContext: export and use aio_dispatch
>   test-aio: test timers on Windows too
>   aio-win32: add aio_set_dispatching optimization
>   AioContext: introduce aio_prepare
>   qemu-coroutine-io: fix for Win32
>   aio-win32: add support for sockets
> 
>  aio-posix.c         |  58 ++++--------
>  aio-win32.c         | 262 +++++++++++++++++++++++++++++++++++++++-------------
>  async.c             |  39 +++++---
>  block/Makefile.objs |   2 -
>  include/block/aio.h |  25 ++++-
>  nbd.c               |   2 +-
>  qemu-coroutine-io.c |   4 +-
>  tests/test-aio.c    |  48 +++-------
>  8 files changed, 277 insertions(+), 163 deletions(-)

I'm happy with this series except for my question about int vs int64_t
types for nanosecond time values.



* Re: [Qemu-devel] [PATCH for-2.2 00/10] AioContext cleanups and Win32 socket support
  2014-08-01 14:52 ` [Qemu-devel] [PATCH for-2.2 00/10] AioContext cleanups and Win32 socket support Stefan Hajnoczi
@ 2014-08-01 15:07   ` Paolo Bonzini
  0 siblings, 0 replies; 25+ messages in thread
From: Paolo Bonzini @ 2014-08-01 15:07 UTC (permalink / raw)
  To: Stefan Hajnoczi; +Cc: qemu-devel

On 01/08/2014 16:52, Stefan Hajnoczi wrote:
> I'm happy with this series except for my question about int vs int64_t
> types for nanosecond time values.

That was just an oversight, thanks for the review.  I'll take a look
next Monday.

Paolo


* Re: [Qemu-devel] [PATCH 01/10] AioContext: take bottom halves into account when computing aio_poll timeout
  2014-08-01 14:34   ` Stefan Hajnoczi
@ 2014-08-01 16:03     ` Paolo Bonzini
  0 siblings, 0 replies; 25+ messages in thread
From: Paolo Bonzini @ 2014-08-01 16:03 UTC (permalink / raw)
  To: Stefan Hajnoczi; +Cc: qemu-devel

On 01/08/2014 16:34, Stefan Hajnoczi wrote:
> On Wed, Jul 09, 2014 at 11:53:01AM +0200, Paolo Bonzini wrote:
>> diff --git a/async.c b/async.c
>> index 34af0b2..ac40eab 100644
>> --- a/async.c
>> +++ b/async.c
>> @@ -152,39 +152,43 @@ void qemu_bh_delete(QEMUBH *bh)
>>      bh->deleted = 1;
>>  }
>>  
>> -static gboolean
>> -aio_ctx_prepare(GSource *source, gint    *timeout)
>> +int
>> +aio_compute_timeout(AioContext *ctx)
> 
> The return value is now in nanoseconds, so a 32-bit int doesn't offer much
> range (a signed int overflows after 2^31 - 1 ns, about 2.1 seconds).
> 
> Any reason to use int instead of int64_t as used by the timer API?

Can you squash the obvious

diff --git a/async.c b/async.c
index ac40eab..09e09c6 100644
--- a/async.c
+++ b/async.c
@@ -152,7 +152,7 @@ void qemu_bh_delete(QEMUBH *bh)
     bh->deleted = 1;
 }

-int
+int64_t
 aio_compute_timeout(AioContext *ctx)
 {
     int64_t deadline;
diff --git a/include/block/aio.h b/include/block/aio.h
index 7eeb961..05b531c 100644
--- a/include/block/aio.h
+++ b/include/block/aio.h
@@ -309,6 +309,6 @@ static inline void aio_timer_init(AioContext *ctx,
  *
  * Compute the timeout that a blocking aio_poll should use.
  */
-int aio_compute_timeout(AioContext *ctx);
+int64_t aio_compute_timeout(AioContext *ctx);

 #endif


yourself, or should I send v2 of this patch or the whole series?

Paolo


* Re: [Qemu-devel] [PATCH for-2.2 00/10] AioContext cleanups and Win32 socket support
  2014-07-09  9:53 [Qemu-devel] [PATCH for-2.2 00/10] AioContext cleanups and Win32 socket support Paolo Bonzini
                   ` (10 preceding siblings ...)
  2014-08-01 14:52 ` [Qemu-devel] [PATCH for-2.2 00/10] AioContext cleanups and Win32 socket support Stefan Hajnoczi
@ 2014-08-28 14:00 ` Stefan Hajnoczi
  11 siblings, 0 replies; 25+ messages in thread
From: Stefan Hajnoczi @ 2014-08-28 14:00 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: qemu-devel


On Wed, Jul 09, 2014 at 11:53:00AM +0200, Paolo Bonzini wrote:
> This series heavily simplifies aio_poll by splitting it into three
> phases: prepare (aio_compute_timeout), poll, dispatch.  The resulting
> code shares more logic between aio_poll and the GSource wrappers,
> and makes it easier to add Win32 support for sockets.
> 
> Win32 support for sockets is a prerequisite for moving the NBD server
> into the BlockDriverState's attached AioContext.  It is done in the
> final patch, based on earlier work from Or Goshen (from Intel).
> I had to more or less rewrite it to fit the new framework, but you
> can see parts of Or's work, as well as traces of aio-posix.c and
> main-loop.c logic.
> 
> Tested with NBD boot under Wine.
> 
> Paolo
> 
> Paolo Bonzini (10):
>   AioContext: take bottom halves into account when computing aio_poll
>     timeout
>   aio-win32: Evaluate timers after handles
>   aio-win32: Factor out duplicate code into aio_dispatch_handlers
>   AioContext: run bottom halves after polling
>   AioContext: export and use aio_dispatch
>   test-aio: test timers on Windows too
>   aio-win32: add aio_set_dispatching optimization
>   AioContext: introduce aio_prepare
>   qemu-coroutine-io: fix for Win32
>   aio-win32: add support for sockets
> 
>  aio-posix.c         |  58 ++++--------
>  aio-win32.c         | 262 +++++++++++++++++++++++++++++++++++++++-------------
>  async.c             |  39 +++++---
>  block/Makefile.objs |   2 -
>  include/block/aio.h |  25 ++++-
>  nbd.c               |   2 +-
>  qemu-coroutine-io.c |   4 +-
>  tests/test-aio.c    |  48 +++-------
>  8 files changed, 277 insertions(+), 163 deletions(-)

Squashed your fix for Patch 1.

Thanks, applied to my block tree:
https://github.com/stefanha/qemu/commits/block

Stefan



* Re: [Qemu-devel] [PATCH 10/10] aio-win32: add support for sockets
  2014-07-09  9:53 ` [Qemu-devel] [PATCH 10/10] aio-win32: add support for sockets Paolo Bonzini
@ 2014-09-12  1:39   ` TeLeMan
  2014-09-12 10:05     ` Paolo Bonzini
  2014-09-12 12:51     ` Stefan Hajnoczi
  2014-09-12  1:43   ` TeLeMan
  1 sibling, 2 replies; 25+ messages in thread
From: TeLeMan @ 2014-09-12  1:39 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: qemu-devel

On Wed, Jul 9, 2014 at 5:53 PM, Paolo Bonzini <pbonzini@redhat.com> wrote:
> Uses the same select/WSAEventSelect scheme as main-loop.c.
> WSAEventSelect() is edge-triggered, so it cannot be used
> directly, but it is still used as a way to exit from a
> blocking g_poll().
>
> Before g_poll() is called, we poll sockets with a non-blocking
> select() to achieve the level-triggered semantics we require:
> if a socket is ready, the g_poll() is made non-blocking too.
>
> Based on a patch from Or Goshen.
>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  aio-win32.c         | 150 ++++++++++++++++++++++++++++++++++++++++++++++++++--
>  block/Makefile.objs |   2 -
>  include/block/aio.h |   2 -
>  3 files changed, 145 insertions(+), 9 deletions(-)
>
> diff --git a/aio-win32.c b/aio-win32.c
> index 4542270..61e3d2d 100644
> --- a/aio-win32.c
> +++ b/aio-win32.c

> @@ -183,6 +318,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
>
>      /* wait until next event */
>      while (count > 0) {
> +        HANDLE event;
>          int ret;
>
>          timeout = blocking
> @@ -196,13 +332,17 @@ bool aio_poll(AioContext *ctx, bool blocking)
>          first = false;
>
>          /* if we have any signaled events, dispatch event */
> -        if ((DWORD) (ret - WAIT_OBJECT_0) >= count) {
> +        event = NULL;
> +        if ((DWORD) (ret - WAIT_OBJECT_0) < count) {
> +            event = events[ret - WAIT_OBJECT_0];
> +        } else if (!have_select_revents) {

When (ret - WAIT_OBJECT_0) >= count and have_select_revents is true,
the subsequent events[ret - WAIT_OBJECT_0] access overflows the array.

>              break;
>          }
>
> +        have_select_revents = false;
>          blocking = false;
>
> -        progress |= aio_dispatch_handlers(ctx, events[ret - WAIT_OBJECT_0]);
> +        progress |= aio_dispatch_handlers(ctx, event);
>
>          /* Try again, but only call each handler once.  */
>          events[ret - WAIT_OBJECT_0] = events[--count];
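
The level-triggering trick quoted in the commit message boils down to a
zero-timeout select() probe before the blocking wait.  A standalone
sketch (the helper name and single-socket setup are illustrative only;
POSIX select() is shown, but the same call works on Win32 SOCKETs):

    #include <stdbool.h>
    #include <sys/select.h>

    /* Probe a socket without blocking: a zero timeout means "just ask". */
    static bool socket_ready(int fd)
    {
        fd_set rfds;
        struct timeval tv = { 0, 0 };

        FD_ZERO(&rfds);
        FD_SET(fd, &rfds);
        return select(fd + 1, &rfds, NULL, NULL, &tv) > 0;
    }

If the probe reports readiness, the subsequent g_poll() is given a zero
timeout as well, so a socket that became readable before WSAEventSelect()
armed its event is not lost to the edge-triggered semantics.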

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [Qemu-devel] [PATCH 10/10] aio-win32: add support for sockets
  2014-07-09  9:53 ` [Qemu-devel] [PATCH 10/10] aio-win32: add support for sockets Paolo Bonzini
  2014-09-12  1:39   ` TeLeMan
@ 2014-09-12  1:43   ` TeLeMan
  1 sibling, 0 replies; 25+ messages in thread
From: TeLeMan @ 2014-09-12  1:43 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: qemu-devel

On Wed, Jul 9, 2014 at 5:53 PM, Paolo Bonzini <pbonzini@redhat.com> wrote:
> Uses the same select/WSAEventSelect scheme as main-loop.c.
> WSAEventSelect() is edge-triggered, so it cannot be used
> directly, but it is still used as a way to exit from a
> blocking g_poll().
>
> Before g_poll() is called, we poll sockets with a non-blocking
> select() to achieve the level-triggered semantics we require:
> if a socket is ready, the g_poll() is made non-blocking too.
>
> Based on a patch from Or Goshen.
>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  aio-win32.c         | 150 ++++++++++++++++++++++++++++++++++++++++++++++++++--
>  block/Makefile.objs |   2 -
>  include/block/aio.h |   2 -
>  3 files changed, 145 insertions(+), 9 deletions(-)
>
> diff --git a/aio-win32.c b/aio-win32.c
> index 4542270..61e3d2d 100644
> --- a/aio-win32.c
> +++ b/aio-win32.c
>
> @@ -149,10 +279,15 @@ bool aio_poll(AioContext *ctx, bool blocking)
>  {
>      AioHandler *node;
>      HANDLE events[MAXIMUM_WAIT_OBJECTS + 1];
> -    bool was_dispatching, progress, first;
> +    bool was_dispatching, progress, have_select_revents, first;
have_select_revents has no initial value.

>      int count;
>      int timeout;
>
> +    if (aio_prepare(ctx)) {
> +        blocking = false;
> +        have_select_revents = true;
> +    }
> +
>      was_dispatching = ctx->dispatching;
>      progress = false;
>
> @@ -183,6 +318,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
>
>      /* wait until next event */
>      while (count > 0) {
> +        HANDLE event;
>          int ret;
>
>          timeout = blocking
> @@ -196,13 +332,17 @@ bool aio_poll(AioContext *ctx, bool blocking)
>          first = false;
>
>          /* if we have any signaled events, dispatch event */
> -        if ((DWORD) (ret - WAIT_OBJECT_0) >= count) {
> +        event = NULL;
> +        if ((DWORD) (ret - WAIT_OBJECT_0) < count) {
> +            event = events[ret - WAIT_OBJECT_0];
> +        } else if (!have_select_revents) {
>              break;
>          }
>
> +        have_select_revents = false;
>          blocking = false;
>
> -        progress |= aio_dispatch_handlers(ctx, events[ret - WAIT_OBJECT_0]);
> +        progress |= aio_dispatch_handlers(ctx, event);
>
>          /* Try again, but only call each handler once.  */
>          events[ret - WAIT_OBJECT_0] = events[--count];
> diff --git a/block/Makefile.objs b/block/Makefile.objs
> index fd88c03..07eabf7 100644
> --- a/block/Makefile.objs
> +++ b/block/Makefile.objs
> @@ -10,7 +10,6 @@ block-obj-$(CONFIG_WIN32) += raw-win32.o win32-aio.o
>  block-obj-$(CONFIG_POSIX) += raw-posix.o
>  block-obj-$(CONFIG_LINUX_AIO) += linux-aio.o
>
> -ifeq ($(CONFIG_POSIX),y)
>  block-obj-y += nbd.o nbd-client.o sheepdog.o
>  block-obj-$(CONFIG_LIBISCSI) += iscsi.o
>  block-obj-$(CONFIG_LIBNFS) += nfs.o
> @@ -18,7 +17,6 @@ block-obj-$(CONFIG_CURL) += curl.o
>  block-obj-$(CONFIG_RBD) += rbd.o
>  block-obj-$(CONFIG_GLUSTERFS) += gluster.o
>  block-obj-$(CONFIG_LIBSSH2) += ssh.o
> -endif
>
>  common-obj-y += stream.o
>  common-obj-y += commit.o
> diff --git a/include/block/aio.h b/include/block/aio.h
> index d129e22..78fa2ee 100644
> --- a/include/block/aio.h
> +++ b/include/block/aio.h
> @@ -239,7 +239,6 @@ bool aio_dispatch(AioContext *ctx);
>   */
>  bool aio_poll(AioContext *ctx, bool blocking);
>
> -#ifdef CONFIG_POSIX
>  /* Register a file descriptor and associated callbacks.  Behaves very similarly
>   * to qemu_set_fd_handler2.  Unlike qemu_set_fd_handler2, these callbacks will
>   * be invoked when using aio_poll().
> @@ -252,7 +251,6 @@ void aio_set_fd_handler(AioContext *ctx,
>                          IOHandler *io_read,
>                          IOHandler *io_write,
>                          void *opaque);
> -#endif
>
>  /* Register an event notifier and associated callbacks.  Behaves very similarly
>   * to event_notifier_set_handler.  Unlike event_notifier_set_handler, these callbacks
> --
> 1.9.3
>
>
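
The missing initialization reported above has a one-line repair; here is a
sketch of one possibility, seeding the flag from aio_prepare() (the fix
that was actually committed may differ):

    /* Sketch: seed the flag instead of leaving it uninitialized. */
    bool was_dispatching, progress, first;
    bool have_select_revents = aio_prepare(ctx);

    if (have_select_revents) {
        blocking = false;
    }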

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [Qemu-devel] [PATCH 10/10] aio-win32: add support for sockets
  2014-09-12  1:39   ` TeLeMan
@ 2014-09-12 10:05     ` Paolo Bonzini
  2014-09-13  2:22       ` TeLeMan
  2014-09-12 12:51     ` Stefan Hajnoczi
  1 sibling, 1 reply; 25+ messages in thread
From: Paolo Bonzini @ 2014-09-12 10:05 UTC (permalink / raw)
  To: TeLeMan; +Cc: qemu-devel

On 12/09/2014 03:39, TeLeMan wrote:
> On Wed, Jul 9, 2014 at 5:53 PM, Paolo Bonzini <pbonzini@redhat.com> wrote:
>> diff --git a/aio-win32.c b/aio-win32.c
>> index 4542270..61e3d2d 100644
>> --- a/aio-win32.c
>> +++ b/aio-win32.c
>> +    bool was_dispatching, progress, have_select_revents, first;
> have_select_revents has no initial value.

Good catch here...

> 
>> @@ -183,6 +318,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
>>
>>      /* wait until next event */
>>      while (count > 0) {
>> +        HANDLE event;
>>          int ret;
>>
>>          timeout = blocking
>> @@ -196,13 +332,17 @@ bool aio_poll(AioContext *ctx, bool blocking)
>>          first = false;
>>
>>          /* if we have any signaled events, dispatch event */
>> -        if ((DWORD) (ret - WAIT_OBJECT_0) >= count) {
>> +        event = NULL;
>> +        if ((DWORD) (ret - WAIT_OBJECT_0) < count) {
>> +            event = events[ret - WAIT_OBJECT_0];
>> +        } else if (!have_select_revents) {
> 
> When (ret - WAIT_OBJECT_0) >= count and have_select_revents is true,
> the subsequent events[ret - WAIT_OBJECT_0] access overflows the array.

... this instead is not a problem: ret - WAIT_OBJECT_0 can be at most
equal to count, and events[] is declared with MAXIMUM_WAIT_OBJECTS + 1
elements.  So the

	events[ret - WAIT_OBJECT_0] = events[--count];

is equal to

	events[count] = events[count - 1];
	--count;

and this is harmless.
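
To make the bounds concrete, here is a compilable sketch of the worst case
(the handle values and the wrapper are made up; only the index arithmetic
matters):

    #include <stdio.h>

    #define MAXIMUM_WAIT_OBJECTS 64   /* Win32 value, hard-coded here */
    #define WAIT_OBJECT_0        0

    int main(void)
    {
        /* one spare slot, as in aio-win32.c */
        int events[MAXIMUM_WAIT_OBJECTS + 1];
        int count = 3;                    /* pretend three handles are armed */
        int ret = WAIT_OBJECT_0 + count;  /* worst case: no handle signaled */
        int i;

        for (i = 0; i <= count; i++) {
            events[i] = i;
        }

        /* The compaction from the patch: this writes events[count], which
         * is still inside the array because count <= MAXIMUM_WAIT_OBJECTS. */
        events[ret - WAIT_OBJECT_0] = events[--count];

        printf("count is now %d\n", count);
        return 0;
    }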

Paolo

>>              break;
>>          }
>>
>> +        have_select_revents = false;
>>          blocking = false;
>>
>> -        progress |= aio_dispatch_handlers(ctx, events[ret - WAIT_OBJECT_0]);
>> +        progress |= aio_dispatch_handlers(ctx, event);
>>
>>          /* Try again, but only call each handler once.  */
>>          events[ret - WAIT_OBJECT_0] = events[--count];

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [Qemu-devel] [PATCH 10/10] aio-win32: add support for sockets
  2014-09-12  1:39   ` TeLeMan
  2014-09-12 10:05     ` Paolo Bonzini
@ 2014-09-12 12:51     ` Stefan Hajnoczi
  2014-09-12 12:52       ` Paolo Bonzini
  1 sibling, 1 reply; 25+ messages in thread
From: Stefan Hajnoczi @ 2014-09-12 12:51 UTC (permalink / raw)
  To: TeLeMan; +Cc: Paolo Bonzini, qemu-devel

On Fri, Sep 12, 2014 at 09:39:16AM +0800, TeLeMan wrote:
> On Wed, Jul 9, 2014 at 5:53 PM, Paolo Bonzini <pbonzini@redhat.com> wrote:
> > Uses the same select/WSAEventSelect scheme as main-loop.c.
> > WSAEventSelect() is edge-triggered, so it cannot be used
> > directly, but it is still used as a way to exit from a
> > blocking g_poll().
> >
> > Before g_poll() is called, we poll sockets with a non-blocking
> > select() to achieve the level-triggered semantics we require:
> > if a socket is ready, the g_poll() is made non-blocking too.
> >
> > Based on a patch from Or Goshen.
> >
> > Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> > ---
> >  aio-win32.c         | 150 ++++++++++++++++++++++++++++++++++++++++++++++++++--
> >  block/Makefile.objs |   2 -
> >  include/block/aio.h |   2 -
> >  3 files changed, 145 insertions(+), 9 deletions(-)
> >
> > diff --git a/aio-win32.c b/aio-win32.c
> > index 4542270..61e3d2d 100644
> > --- a/aio-win32.c
> > +++ b/aio-win32.c
> 
> > @@ -183,6 +318,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
> >
> >      /* wait until next event */
> >      while (count > 0) {
> > +        HANDLE event;
> >          int ret;
> >
> >          timeout = blocking
> > @@ -196,13 +332,17 @@ bool aio_poll(AioContext *ctx, bool blocking)
> >          first = false;
> >
> >          /* if we have any signaled events, dispatch event */
> > -        if ((DWORD) (ret - WAIT_OBJECT_0) >= count) {
> > +        event = NULL;
> > +        if ((DWORD) (ret - WAIT_OBJECT_0) < count) {
> > +            event = events[ret - WAIT_OBJECT_0];
> > +        } else if (!have_select_revents) {
> 
> When (ret - WAIT_OBJECT_0) >= count and have_select_revents is true,
> the subsequent events[ret - WAIT_OBJECT_0] access overflows the array.

Thanks for your review.  Paolo has hardware problems and is travelling
next week.

Are you able to send patches to fix these issues?

Thanks,
Stefan

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [Qemu-devel] [PATCH 10/10] aio-win32: add support for sockets
  2014-09-12 12:51     ` Stefan Hajnoczi
@ 2014-09-12 12:52       ` Paolo Bonzini
  0 siblings, 0 replies; 25+ messages in thread
From: Paolo Bonzini @ 2014-09-12 12:52 UTC (permalink / raw)
  To: Stefan Hajnoczi, TeLeMan; +Cc: qemu-devel

On 12/09/2014 14:51, Stefan Hajnoczi wrote:
>>> 
>>> when (ret - WAIT_OBJECT_0) >= count and have_select_revents is
>>> true, the following events[ret - WAIT_OBJECT_0] will be
>>> overflowed.
> Thanks for your review.  Paolo has hardware problems and is
> travelling next week.
> 
> Are you able to send patches to fix these issues?

I already sent a patch actually. :)  The hardware problems only affect
my kernel work.

Paolo

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [Qemu-devel] [PATCH 10/10] aio-win32: add support for sockets
  2014-09-12 10:05     ` Paolo Bonzini
@ 2014-09-13  2:22       ` TeLeMan
  2014-09-13 10:33         ` Paolo Bonzini
  0 siblings, 1 reply; 25+ messages in thread
From: TeLeMan @ 2014-09-13  2:22 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: qemu-devel

On Fri, Sep 12, 2014 at 6:05 PM, Paolo Bonzini <pbonzini@redhat.com> wrote:
> On 12/09/2014 03:39, TeLeMan wrote:
>> On Wed, Jul 9, 2014 at 5:53 PM, Paolo Bonzini <pbonzini@redhat.com> wrote:
>>> diff --git a/aio-win32.c b/aio-win32.c
>>> index 4542270..61e3d2d 100644
>>> --- a/aio-win32.c
>>> +++ b/aio-win32.c
>>> +    bool was_dispatching, progress, have_select_revents, first;
>> have_select_revents has no initial value.
>
> Good catch here...
>
>>
>>> @@ -183,6 +318,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
>>>
>>>      /* wait until next event */
>>>      while (count > 0) {
>>> +        HANDLE event;
>>>          int ret;
>>>
>>>          timeout = blocking
>>> @@ -196,13 +332,17 @@ bool aio_poll(AioContext *ctx, bool blocking)
>>>          first = false;
>>>
>>>          /* if we have any signaled events, dispatch event */
>>> -        if ((DWORD) (ret - WAIT_OBJECT_0) >= count) {
>>> +        event = NULL;
>>> +        if ((DWORD) (ret - WAIT_OBJECT_0) < count) {
>>> +            event = events[ret - WAIT_OBJECT_0];
>>> +        } else if (!have_select_revents) {
>>
>> When (ret - WAIT_OBJECT_0) >= count and have_select_revents is true,
>> the subsequent events[ret - WAIT_OBJECT_0] access overflows the array.
>
> ... this instead is not a problem: ret - WAIT_OBJECT_0 can be at most
> equal to count, and events[] is declared with MAXIMUM_WAIT_OBJECTS + 1
> elements.  So the
>
>         events[ret - WAIT_OBJECT_0] = events[--count];
>
> is equal to
>
>         events[count] = events[count - 1];
>         --count;
>
> and this is harmless.

WAIT_ABANDONED_0, WAIT_TIMEOUT and WAIT_FAILED are all larger than
MAXIMUM_WAIT_OBJECTS.
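
For reference, the winbase.h values involved (as documented by Microsoft;
sketched here as bare defines):

    /* winbase.h values relevant to the bound check: */
    #define MAXIMUM_WAIT_OBJECTS 64
    #define WAIT_OBJECT_0        0             /* first signaled handle  */
    #define WAIT_ABANDONED_0     0x00000080    /* 128, larger than 64    */
    #define WAIT_TIMEOUT         0x00000102    /* 258, larger than 64    */
    #define WAIT_FAILED          0xFFFFFFFF    /* (DWORD)-1              */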

> Paolo
>
>>>              break;
>>>          }
>>>
>>> +        have_select_revents = false;
>>>          blocking = false;
>>>
>>> -        progress |= aio_dispatch_handlers(ctx, events[ret - WAIT_OBJECT_0]);
>>> +        progress |= aio_dispatch_handlers(ctx, event);
>>>
>>>          /* Try again, but only call each handler once.  */
>>>          events[ret - WAIT_OBJECT_0] = events[--count];
>

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [Qemu-devel] [PATCH 10/10] aio-win32: add support for sockets
  2014-09-13  2:22       ` TeLeMan
@ 2014-09-13 10:33         ` Paolo Bonzini
  2014-09-15  1:18           ` TeLeMan
  0 siblings, 1 reply; 25+ messages in thread
From: Paolo Bonzini @ 2014-09-13 10:33 UTC (permalink / raw)
  To: TeLeMan; +Cc: qemu-devel

On 13/09/2014 04:22, TeLeMan wrote:
> On Fri, Sep 12, 2014 at 6:05 PM, Paolo Bonzini <pbonzini@redhat.com> wrote:
>> On 12/09/2014 03:39, TeLeMan wrote:
>>> On Wed, Jul 9, 2014 at 5:53 PM, Paolo Bonzini <pbonzini@redhat.com> wrote:
>>>> diff --git a/aio-win32.c b/aio-win32.c
>>>> index 4542270..61e3d2d 100644
>>>> --- a/aio-win32.c
>>>> +++ b/aio-win32.c
>>>> +    bool was_dispatching, progress, have_select_revents, first;
>>> have_select_revents has no initial value.
>>
>> Good catch here...
>>
>>>
>>>> @@ -183,6 +318,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
>>>>
>>>>      /* wait until next event */
>>>>      while (count > 0) {
>>>> +        HANDLE event;
>>>>          int ret;
>>>>
>>>>          timeout = blocking
>>>> @@ -196,13 +332,17 @@ bool aio_poll(AioContext *ctx, bool blocking)
>>>>          first = false;
>>>>
>>>>          /* if we have any signaled events, dispatch event */
>>>> -        if ((DWORD) (ret - WAIT_OBJECT_0) >= count) {
>>>> +        event = NULL;
>>>> +        if ((DWORD) (ret - WAIT_OBJECT_0) < count) {
>>>> +            event = events[ret - WAIT_OBJECT_0];
>>>> +        } else if (!have_select_revents) {
>>>
>>> When (ret - WAIT_OBJECT_0) >= count and have_select_revents is true,
>>> the subsequent events[ret - WAIT_OBJECT_0] access overflows the array.
>>
>> ... this instead is not a problem: ret - WAIT_OBJECT_0 can be at most
>> equal to count, and events[] is declared with MAXIMUM_WAIT_OBJECTS + 1
>> elements.  So the
>>
>>         events[ret - WAIT_OBJECT_0] = events[--count];
>>
>> is equal to
>>
>>         events[count] = events[count - 1];
>>         --count;
>>
>> and this is harmless.
> 
> WAIT_ABANDONED_0, WAIT_TIMEOUT and WAIT_FAILED are all larger than
> MAXIMUM_WAIT_OBJECTS.

WAIT_ABANDONED_0 and WAIT_FAILED cannot happen, but you're right about
WAIT_TIMEOUT.  Are you going to send a patch?
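
One possible shape for that patch (a sketch only; the committed fix may
look different):

    /* Only use ret as an index into events[] when it really denotes a
     * signaled handle; a bare WAIT_TIMEOUT with pending select() revents
     * must dispatch with event == NULL and skip the compaction. */
    bool signaled = (DWORD) (ret - WAIT_OBJECT_0) < count;

    event = NULL;
    if (signaled) {
        event = events[ret - WAIT_OBJECT_0];
    } else if (!have_select_revents) {
        break;
    }

    have_select_revents = false;
    blocking = false;

    progress |= aio_dispatch_handlers(ctx, event);

    /* Compact only when a real handle fired. */
    if (signaled) {
        events[ret - WAIT_OBJECT_0] = events[--count];
    }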

Paolo

>> Paolo
>>
>>>>              break;
>>>>          }
>>>>
>>>> +        have_select_revents = false;
>>>>          blocking = false;
>>>>
>>>> -        progress |= aio_dispatch_handlers(ctx, events[ret - WAIT_OBJECT_0]);
>>>> +        progress |= aio_dispatch_handlers(ctx, event);
>>>>
>>>>          /* Try again, but only call each handler once.  */
>>>>          events[ret - WAIT_OBJECT_0] = events[--count];
>>
> 
> 

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [Qemu-devel] [PATCH 10/10] aio-win32: add support for sockets
  2014-09-13 10:33         ` Paolo Bonzini
@ 2014-09-15  1:18           ` TeLeMan
  2014-09-15 15:16             ` Paolo Bonzini
  0 siblings, 1 reply; 25+ messages in thread
From: TeLeMan @ 2014-09-15  1:18 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: qemu-devel

On Sat, Sep 13, 2014 at 6:33 PM, Paolo Bonzini <pbonzini@redhat.com> wrote:
> On 13/09/2014 04:22, TeLeMan wrote:
>> On Fri, Sep 12, 2014 at 6:05 PM, Paolo Bonzini <pbonzini@redhat.com> wrote:
>>> On 12/09/2014 03:39, TeLeMan wrote:
>>>> On Wed, Jul 9, 2014 at 5:53 PM, Paolo Bonzini <pbonzini@redhat.com> wrote:
>>>>> diff --git a/aio-win32.c b/aio-win32.c
>>>>> index 4542270..61e3d2d 100644
>>>>> --- a/aio-win32.c
>>>>> +++ b/aio-win32.c
>>>>> +    bool was_dispatching, progress, have_select_revents, first;
>>>> have_select_revents has no initial value.
>>>
>>> Good catch here...
>>>
>>>>
>>>>> @@ -183,6 +318,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
>>>>>
>>>>>      /* wait until next event */
>>>>>      while (count > 0) {
>>>>> +        HANDLE event;
>>>>>          int ret;
>>>>>
>>>>>          timeout = blocking
>>>>> @@ -196,13 +332,17 @@ bool aio_poll(AioContext *ctx, bool blocking)
>>>>>          first = false;
>>>>>
>>>>>          /* if we have any signaled events, dispatch event */
>>>>> -        if ((DWORD) (ret - WAIT_OBJECT_0) >= count) {
>>>>> +        event = NULL;
>>>>> +        if ((DWORD) (ret - WAIT_OBJECT_0) < count) {
>>>>> +            event = events[ret - WAIT_OBJECT_0];
>>>>> +        } else if (!have_select_revents) {
>>>>
>>>> When (ret - WAIT_OBJECT_0) >= count and have_select_revents is true,
>>>> the subsequent events[ret - WAIT_OBJECT_0] access overflows the array.
>>>
>>> ... this instead is not a problem: ret - WAIT_OBJECT_0 can be at most
>>> equal to count, and events[] is declared with MAXIMUM_WAIT_OBJECTS + 1
>>> elements.  So the
>>>
>>>         events[ret - WAIT_OBJECT_0] = events[--count];
>>>
>>> is equal to
>>>
>>>         events[count] = events[count - 1];
>>>         --count;
>>>
>>> and this is harmless.
>>
>> WAIT_ABANDONED_0, WAIT_TIMEOUT and WAIT_FAILED are all larger than
>> MAXIMUM_WAIT_OBJECTS.
>
> WAIT_ABANDONED_0 and WAIT_FAILED cannot happen, but you're right about
> WAIT_TIMEOUT.  Are you going to send a patch?

No: my patch submissions have been rejected before, so I just report the issues.

> Paolo
>
>>> Paolo
>>>
>>>>>              break;
>>>>>          }
>>>>>
>>>>> +        have_select_revents = false;
>>>>>          blocking = false;
>>>>>
>>>>> -        progress |= aio_dispatch_handlers(ctx, events[ret - WAIT_OBJECT_0]);
>>>>> +        progress |= aio_dispatch_handlers(ctx, event);
>>>>>
>>>>>          /* Try again, but only call each handler once.  */
>>>>>          events[ret - WAIT_OBJECT_0] = events[--count];
>>>
>>
>>
>

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [Qemu-devel] [PATCH 10/10] aio-win32: add support for sockets
  2014-09-15  1:18           ` TeLeMan
@ 2014-09-15 15:16             ` Paolo Bonzini
  0 siblings, 0 replies; 25+ messages in thread
From: Paolo Bonzini @ 2014-09-15 15:16 UTC (permalink / raw)
  To: TeLeMan; +Cc: qemu-devel

On 15/09/2014 03:18, TeLeMan wrote:
> On Sat, Sep 13, 2014 at 6:33 PM, Paolo Bonzini <pbonzini@redhat.com> wrote:
>> On 13/09/2014 04:22, TeLeMan wrote:
>>> On Fri, Sep 12, 2014 at 6:05 PM, Paolo Bonzini <pbonzini@redhat.com> wrote:
>>>> On 12/09/2014 03:39, TeLeMan wrote:
>>>>> On Wed, Jul 9, 2014 at 5:53 PM, Paolo Bonzini <pbonzini@redhat.com> wrote:
>>>>>> diff --git a/aio-win32.c b/aio-win32.c
>>>>>> index 4542270..61e3d2d 100644
>>>>>> --- a/aio-win32.c
>>>>>> +++ b/aio-win32.c
>>>>>> +    bool was_dispatching, progress, have_select_revents, first;
>>>>> have_select_revents has no initial value.
>>>>
>>>> Good catch here...
>>>>
>>>>>
>>>>>> @@ -183,6 +318,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
>>>>>>
>>>>>>      /* wait until next event */
>>>>>>      while (count > 0) {
>>>>>> +        HANDLE event;
>>>>>>          int ret;
>>>>>>
>>>>>>          timeout = blocking
>>>>>> @@ -196,13 +332,17 @@ bool aio_poll(AioContext *ctx, bool blocking)
>>>>>>          first = false;
>>>>>>
>>>>>>          /* if we have any signaled events, dispatch event */
>>>>>> -        if ((DWORD) (ret - WAIT_OBJECT_0) >= count) {
>>>>>> +        event = NULL;
>>>>>> +        if ((DWORD) (ret - WAIT_OBJECT_0) < count) {
>>>>>> +            event = events[ret - WAIT_OBJECT_0];
>>>>>> +        } else if (!have_select_revents) {
>>>>>
>>>>> When (ret - WAIT_OBJECT_0) >= count and have_select_revents is true,
>>>>> the subsequent events[ret - WAIT_OBJECT_0] access overflows the array.
>>>>
>>>> ... this instead is not a problem: ret - WAIT_OBJECT_0 can be at most
>>>> equal to count, and events[] is declared with MAXIMUM_WAIT_OBJECTS + 1
>>>> elements.  So the
>>>>
>>>>         events[ret - WAIT_OBJECT_0] = events[--count];
>>>>
>>>> is equal to
>>>>
>>>>         events[count] = events[count - 1];
>>>>         --count;
>>>>
>>>> and this is harmless.
>>>
>>> WAIT_ABANDONED_0, WAIT_TIMEOUT and WAIT_FAILED are all larger than
>>> MAXIMUM_WAIT_OBJECTS.
>>
>> WAIT_ABANDONED_0 and WAIT_FAILED cannot happen, but you're right about
>> WAIT_TIMEOUT.  Are you going to send a patch?
> 
> No: my patch submissions have been rejected before, so I just report the issues.

If this is because of the pseudonym, I think what was done with Blue
Swirl was that he told someone his real name.  You can do the same if
you wish to contribute more than just bug reports.  Can you review
patches too?  The "Reviewed-by" can be done with a pseudonym.

Paolo

^ permalink raw reply	[flat|nested] 25+ messages in thread

end of thread, other threads:[~2014-09-15 15:16 UTC | newest]

Thread overview: 25+ messages
2014-07-09  9:53 [Qemu-devel] [PATCH for-2.2 00/10] AioContext cleanups and Win32 socket support Paolo Bonzini
2014-07-09  9:53 ` [Qemu-devel] [PATCH 01/10] AioContext: take bottom halves into account when computing aio_poll timeout Paolo Bonzini
2014-08-01 14:34   ` Stefan Hajnoczi
2014-08-01 16:03     ` Paolo Bonzini
2014-07-09  9:53 ` [Qemu-devel] [PATCH 02/10] aio-win32: Evaluate timers after handles Paolo Bonzini
2014-07-09  9:53 ` [Qemu-devel] [PATCH 03/10] aio-win32: Factor out duplicate code into aio_dispatch_handlers Paolo Bonzini
2014-07-09  9:53 ` [Qemu-devel] [PATCH 04/10] AioContext: run bottom halves after polling Paolo Bonzini
2014-07-09  9:53 ` [Qemu-devel] [PATCH 05/10] AioContext: export and use aio_dispatch Paolo Bonzini
2014-07-09  9:53 ` [Qemu-devel] [PATCH 06/10] test-aio: test timers on Windows too Paolo Bonzini
2014-07-09  9:53 ` [Qemu-devel] [PATCH 07/10] aio-win32: add aio_set_dispatching optimization Paolo Bonzini
2014-07-09  9:53 ` [Qemu-devel] [PATCH 08/10] AioContext: introduce aio_prepare Paolo Bonzini
2014-07-09  9:53 ` [Qemu-devel] [PATCH 09/10] qemu-coroutine-io: fix for Win32 Paolo Bonzini
2014-07-09  9:53 ` [Qemu-devel] [PATCH 10/10] aio-win32: add support for sockets Paolo Bonzini
2014-09-12  1:39   ` TeLeMan
2014-09-12 10:05     ` Paolo Bonzini
2014-09-13  2:22       ` TeLeMan
2014-09-13 10:33         ` Paolo Bonzini
2014-09-15  1:18           ` TeLeMan
2014-09-15 15:16             ` Paolo Bonzini
2014-09-12 12:51     ` Stefan Hajnoczi
2014-09-12 12:52       ` Paolo Bonzini
2014-09-12  1:43   ` TeLeMan
2014-08-01 14:52 ` [Qemu-devel] [PATCH for-2.2 00/10] AioContext cleanups and Win32 socket support Stefan Hajnoczi
2014-08-01 15:07   ` Paolo Bonzini
2014-08-28 14:00 ` Stefan Hajnoczi
