* [Qemu-devel] [PATCH v4 00/17] aio: drop io_flush()
@ 2013-06-14 11:43 Stefan Hajnoczi
  2013-06-14 11:43 ` [Qemu-devel] [PATCH v4 01/17] block: stop relying on io_flush() in bdrv_drain_all() Stefan Hajnoczi
                   ` (18 more replies)
  0 siblings, 19 replies; 29+ messages in thread
From: Stefan Hajnoczi @ 2013-06-14 11:43 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Paolo Bonzini, Anthony Liguori, Ping Fan Liu,
	Stefan Hajnoczi

v4:
 * Ensure pending BHs are processed in bdrv_drain_all() [bonzini]

v3:
 * I forgot about this series, time to push it again!
 * Rebase onto qemu.git/master
 * Drop now-unused AioFlushHandler typedef [bonzini]

This series gets rid of aio's .io_flush() callback.  It's based on Paolo's
insight that bdrv_drain_all() can be implemented using the bs->tracked_requests
list.  io_flush() is redundant since the block layer already knows if requests
are pending.

The point of this effort is to simplify our event loop(s).  If we can remove
custom features like io_flush(), it becomes easier to unify event loops and
to reuse glib or other existing event loop implementations.

This is also important to me for dataplane, since bdrv_drain_all() is one of
the synchronization points between threads.  QEMU monitor commands invoke
bdrv_drain_all() while the block device is accessed from a dataplane thread.

Background on io_flush() semantics:

The io_flush() handler must return 1 if this aio fd handler is active.  That
is, requests are pending and we'd like to make progress by monitoring the fd.

If io_flush() returns 0, the aio event loop skips monitoring this fd.  This is
critical for block drivers like iscsi where we have an idle TCP socket which we
want to block on *only* when there are pending requests.
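
For illustration, an .io_flush() handler under the old convention typically
looked like this (a simplified sketch with a made-up driver state, not code
from any of the drivers touched below):

    static int my_driver_aio_flush(void *opaque)
    {
        MyDriverState *s = opaque;    /* hypothetical driver state */

        /* 1: requests pending, keep monitoring this fd;
         * 0: idle, the event loop may skip this fd */
        return s->pending_requests > 0;
    }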

The series works as follows:

Part 1 - stop relying on .io_flush()

The first patches change aio_poll() callers to check their termination
condition themselves instead of relying on .io_flush() (a sketch of the
new caller pattern follows the commit list):

  76f9848 block: stop relying on io_flush() in bdrv_drain_all()
  42c7aac dataplane/virtio-blk: check exit conditions before aio_poll()
  b7c8b9a tests: adjust test-aio to new aio_poll() semantics
  55c7714 tests: adjust test-thread-pool to new aio_poll() semantics
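
In sketch form, the caller pattern changes roughly like this (simplified,
not the exact code from any single patch; requests_pending() stands in for
whatever condition the caller actually waits on):

    /* old: rely on qemu_aio_wait() returning false once every .io_flush()
     * handler reports that nothing is pending */
    while (qemu_aio_wait()) {
        /* do nothing */
    }

    /* new: the caller owns its termination condition */
    while (requests_pending()) {
        aio_poll(ctx, true);    /* block until at least one handler runs */
    }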

Part 2 - stop calling .io_flush() from aio_poll()

The semantic change to aio_poll() is made here:

  8f188ac aio: stop using .io_flush()

Part 3 - drop io_flush() handlers, just pass NULL for the io_flush argument

Remove the now dead code from all .io_flush() users:

  5967680 block/curl: drop curl_aio_flush()
  16ee264 block/gluster: drop qemu_gluster_aio_flush_cb()
  9563d0f block/iscsi: drop iscsi_process_flush()
  6690260 block/linux-aio: drop qemu_laio_completion_cb()
  3c30643 block/nbd: drop nbd_have_request()
  d47edee block/rbd: drop qemu_rbd_aio_flush_cb()
  edbf4b5 block/sheepdog: drop have_co_req() and aio_flush_request()
  3fee517 block/ssh: drop return_true()
  d59650d dataplane/virtio-blk: drop flush_true() and flush_io()
  0492713 thread-pool: drop thread_pool_active()
  77b3518 tests: drop event_active_cb()

Part 4 - remove io_flush arguments from aio functions

The big, mechanical patch that drops the io_flush argument:

  5938cf4 aio: drop io_flush argument

Note that I split Part 3 from Part 4 so it's easy to review individual block
drivers.
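
For reference, the shape of the API change (paraphrased from
include/block/aio.h; argument names are approximate):

    /* before */
    void aio_set_fd_handler(AioContext *ctx, int fd,
                            IOHandler *io_read, IOHandler *io_write,
                            AioFlushHandler *io_flush, void *opaque);

    /* after */
    void aio_set_fd_handler(AioContext *ctx, int fd,
                            IOHandler *io_read, IOHandler *io_write,
                            void *opaque);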

Stefan Hajnoczi (17):
  block: stop relying on io_flush() in bdrv_drain_all()
  dataplane/virtio-blk: check exit conditions before aio_poll()
  tests: adjust test-aio to new aio_poll() semantics
  tests: adjust test-thread-pool to new aio_poll() semantics
  aio: stop using .io_flush()
  block/curl: drop curl_aio_flush()
  block/gluster: drop qemu_gluster_aio_flush_cb()
  block/iscsi: drop iscsi_process_flush()
  block/linux-aio: drop qemu_laio_completion_cb()
  block/nbd: drop nbd_have_request()
  block/rbd: drop qemu_rbd_aio_flush_cb()
  block/sheepdog: drop have_co_req() and aio_flush_request()
  block/ssh: drop return_true()
  dataplane/virtio-blk: drop flush_true() and flush_io()
  thread-pool: drop thread_pool_active()
  tests: drop event_active_cb()
  aio: drop io_flush argument

 aio-posix.c                     | 36 ++++++------------
 aio-win32.c                     | 37 ++++++++-----------
 async.c                         |  4 +-
 block.c                         | 50 ++++++++++++++++++-------
 block/curl.c                    | 25 ++-----------
 block/gluster.c                 | 21 ++---------
 block/iscsi.c                   | 10 +----
 block/linux-aio.c               | 18 +--------
 block/nbd.c                     | 18 ++-------
 block/rbd.c                     | 16 +-------
 block/sheepdog.c                | 33 ++++-------------
 block/ssh.c                     | 12 +-----
 hw/block/dataplane/virtio-blk.c | 25 +++----------
 include/block/aio.h             | 14 +------
 main-loop.c                     |  9 ++---
 tests/test-aio.c                | 82 +++++++++++++++++++++--------------------
 tests/test-thread-pool.c        | 24 ++++++------
 thread-pool.c                   | 11 +-----
 18 files changed, 158 insertions(+), 287 deletions(-)

-- 
1.8.1.4


* [Qemu-devel] [PATCH v4 01/17] block: stop relying on io_flush() in bdrv_drain_all()
  2013-06-14 11:43 [Qemu-devel] [PATCH v4 00/17] aio: drop io_flush() Stefan Hajnoczi
@ 2013-06-14 11:43 ` Stefan Hajnoczi
  2013-06-27 13:13   ` Paolo Bonzini
  2013-06-14 11:43 ` [Qemu-devel] [PATCH v4 02/17] dataplane/virtio-blk: check exit conditions before aio_poll() Stefan Hajnoczi
                   ` (17 subsequent siblings)
  18 siblings, 1 reply; 29+ messages in thread
From: Stefan Hajnoczi @ 2013-06-14 11:43 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Paolo Bonzini, Anthony Liguori, Ping Fan Liu,
	Stefan Hajnoczi

If a block driver has no file descriptors to monitor but there are still
active requests, it can return 1 from .io_flush().  This is used to spin
during synchronous I/O.

Stop relying on .io_flush() and instead check
QLIST_EMPTY(&bs->tracked_requests) to decide whether there are active
requests.

This is the first step in removing .io_flush() so that event loops no
longer need to have the concept of synchronous I/O.  Eventually we may
be able to kill synchronous I/O completely by running everything in a
coroutine, but that is future work.

Note that this patch moves bs->throttled_reqs initialization to bdrv_new()
so that bdrv_requests_pending(bs) can safely access it.  In practice bs is
allocated with g_malloc0() so the memory is already zeroed, but it's safer
to initialize the queue properly.

In bdrv_delete() make sure to call bdrv_make_anon() *after* bdrv_close()
so that the device is still seen by bdrv_drain_all() when iterating
bdrv_states.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block.c | 50 +++++++++++++++++++++++++++++++++++++-------------
 1 file changed, 37 insertions(+), 13 deletions(-)

diff --git a/block.c b/block.c
index 79ad33d..04821d8 100644
--- a/block.c
+++ b/block.c
@@ -148,7 +148,6 @@ static void bdrv_block_timer(void *opaque)
 
 void bdrv_io_limits_enable(BlockDriverState *bs)
 {
-    qemu_co_queue_init(&bs->throttled_reqs);
     bs->block_timer = qemu_new_timer_ns(vm_clock, bdrv_block_timer, bs);
     bs->io_limits_enabled = true;
 }
@@ -305,6 +304,7 @@ BlockDriverState *bdrv_new(const char *device_name)
     }
     bdrv_iostatus_disable(bs);
     notifier_list_init(&bs->close_notifiers);
+    qemu_co_queue_init(&bs->throttled_reqs);
 
     return bs;
 }
@@ -1412,6 +1412,35 @@ void bdrv_close_all(void)
     }
 }
 
+/* Check if any requests are in-flight (including throttled requests) */
+static bool bdrv_requests_pending(BlockDriverState *bs)
+{
+    if (!QLIST_EMPTY(&bs->tracked_requests)) {
+        return true;
+    }
+    if (!qemu_co_queue_empty(&bs->throttled_reqs)) {
+        return true;
+    }
+    if (bs->file && bdrv_requests_pending(bs->file)) {
+        return true;
+    }
+    if (bs->backing_hd && bdrv_requests_pending(bs->backing_hd)) {
+        return true;
+    }
+    return false;
+}
+
+static bool bdrv_requests_pending_all(void)
+{
+    BlockDriverState *bs;
+    QTAILQ_FOREACH(bs, &bdrv_states, list) {
+        if (bdrv_requests_pending(bs)) {
+            return true;
+        }
+    }
+    return false;
+}
+
 /*
  * Wait for pending requests to complete across all BlockDriverStates
  *
@@ -1426,27 +1455,22 @@ void bdrv_close_all(void)
  */
 void bdrv_drain_all(void)
 {
+    /* Always run first iteration so any pending completion BHs run */
+    bool busy = true;
     BlockDriverState *bs;
-    bool busy;
-
-    do {
-        busy = qemu_aio_wait();
 
+    while (busy) {
         /* FIXME: We do not have timer support here, so this is effectively
          * a busy wait.
          */
         QTAILQ_FOREACH(bs, &bdrv_states, list) {
             if (!qemu_co_queue_empty(&bs->throttled_reqs)) {
                 qemu_co_queue_restart_all(&bs->throttled_reqs);
-                busy = true;
             }
         }
-    } while (busy);
 
-    /* If requests are still pending there is a bug somewhere */
-    QTAILQ_FOREACH(bs, &bdrv_states, list) {
-        assert(QLIST_EMPTY(&bs->tracked_requests));
-        assert(qemu_co_queue_empty(&bs->throttled_reqs));
+        busy = bdrv_requests_pending_all();
+        busy |= aio_poll(qemu_get_aio_context(), busy);
     }
 }
 
@@ -1591,11 +1615,11 @@ void bdrv_delete(BlockDriverState *bs)
     assert(!bs->job);
     assert(!bs->in_use);
 
+    bdrv_close(bs);
+
     /* remove from list, if necessary */
     bdrv_make_anon(bs);
 
-    bdrv_close(bs);
-
     g_free(bs);
 }
 
-- 
1.8.1.4


* [Qemu-devel] [PATCH v4 02/17] dataplane/virtio-blk: check exit conditions before aio_poll()
  2013-06-14 11:43 [Qemu-devel] [PATCH v4 00/17] aio: drop io_flush() Stefan Hajnoczi
  2013-06-14 11:43 ` [Qemu-devel] [PATCH v4 01/17] block: stop relying on io_flush() in bdrv_drain_all() Stefan Hajnoczi
@ 2013-06-14 11:43 ` Stefan Hajnoczi
  2013-06-27 13:14   ` Paolo Bonzini
  2013-06-14 11:43 ` [Qemu-devel] [PATCH v4 03/17] tests: adjust test-aio to new aio_poll() semantics Stefan Hajnoczi
                   ` (16 subsequent siblings)
  18 siblings, 1 reply; 29+ messages in thread
From: Stefan Hajnoczi @ 2013-06-14 11:43 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Paolo Bonzini, Anthony Liguori, Ping Fan Liu,
	Stefan Hajnoczi

Check exit conditions before entering blocking aio_poll().  This is
mainly for consistency since it's unlikely that we are stopping in the
first event loop iteration.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 hw/block/dataplane/virtio-blk.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index 0356665..0509d3f 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -379,9 +379,9 @@ static void *data_plane_thread(void *opaque)
 {
     VirtIOBlockDataPlane *s = opaque;
 
-    do {
+    while (!s->stopping || s->num_reqs > 0) {
         aio_poll(s->ctx, true);
-    } while (!s->stopping || s->num_reqs > 0);
+    }
     return NULL;
 }
 
-- 
1.8.1.4


* [Qemu-devel] [PATCH v4 03/17] tests: adjust test-aio to new aio_poll() semantics
  2013-06-14 11:43 [Qemu-devel] [PATCH v4 00/17] aio: drop io_flush() Stefan Hajnoczi
  2013-06-14 11:43 ` [Qemu-devel] [PATCH v4 01/17] block: stop relying on io_flush() in bdrv_drain_all() Stefan Hajnoczi
  2013-06-14 11:43 ` [Qemu-devel] [PATCH v4 02/17] dataplane/virtio-blk: check exit conditions before aio_poll() Stefan Hajnoczi
@ 2013-06-14 11:43 ` Stefan Hajnoczi
  2013-06-27 13:15   ` Paolo Bonzini
  2013-06-27 13:28   ` Paolo Bonzini
  2013-06-14 11:43 ` [Qemu-devel] [PATCH v4 04/17] tests: adjust test-thread-pool " Stefan Hajnoczi
                   ` (15 subsequent siblings)
  18 siblings, 2 replies; 29+ messages in thread
From: Stefan Hajnoczi @ 2013-06-14 11:43 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Paolo Bonzini, Anthony Liguori, Ping Fan Liu,
	Stefan Hajnoczi

aio_poll(ctx, true) will soon block if any fd handlers have been set.
Previously it would only block when .io_flush() returned true.

This means that callers must check their wait condition *before*
aio_poll() to avoid deadlock.
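
To illustrate the deadlock risk (a generic sketch, not the test code itself;
'condition' stands for whatever the caller is waiting on):

    /* risky with the new semantics: if the condition already holds and no
     * further events arrive, this blocks forever inside aio_poll() */
    do {
        aio_poll(ctx, true);
    } while (!condition);

    /* safe: check the condition first and only block while actually waiting */
    while (!condition) {
        aio_poll(ctx, true);
    }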

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 tests/test-aio.c | 26 +++++++++++++++++---------
 1 file changed, 17 insertions(+), 9 deletions(-)

diff --git a/tests/test-aio.c b/tests/test-aio.c
index c173870..20bf5e6 100644
--- a/tests/test-aio.c
+++ b/tests/test-aio.c
@@ -15,6 +15,13 @@
 
 AioContext *ctx;
 
+typedef struct {
+    EventNotifier e;
+    int n;
+    int active;
+    bool auto_set;
+} EventNotifierTestData;
+
 /* Wait until there are no more BHs or AIO requests */
 static void wait_for_aio(void)
 {
@@ -23,6 +30,14 @@ static void wait_for_aio(void)
     }
 }
 
+/* Wait until event notifier becomes inactive */
+static void wait_until_inactive(EventNotifierTestData *data)
+{
+    while (data->active > 0) {
+        aio_poll(ctx, true);
+    }
+}
+
 /* Simple callbacks for testing.  */
 
 typedef struct {
@@ -50,13 +65,6 @@ static void bh_delete_cb(void *opaque)
     }
 }
 
-typedef struct {
-    EventNotifier e;
-    int n;
-    int active;
-    bool auto_set;
-} EventNotifierTestData;
-
 static int event_active_cb(EventNotifier *e)
 {
     EventNotifierTestData *data = container_of(e, EventNotifierTestData, e);
@@ -281,7 +289,7 @@ static void test_flush_event_notifier(void)
     g_assert_cmpint(data.active, ==, 9);
     g_assert(aio_poll(ctx, false));
 
-    wait_for_aio();
+    wait_until_inactive(&data);
     g_assert_cmpint(data.n, ==, 10);
     g_assert_cmpint(data.active, ==, 0);
     g_assert(!aio_poll(ctx, false));
@@ -325,7 +333,7 @@ static void test_wait_event_notifier_noflush(void)
     g_assert_cmpint(data.n, ==, 2);
 
     event_notifier_set(&dummy.e);
-    wait_for_aio();
+    wait_until_inactive(&dummy);
     g_assert_cmpint(data.n, ==, 2);
     g_assert_cmpint(dummy.n, ==, 1);
     g_assert_cmpint(dummy.active, ==, 0);
-- 
1.8.1.4


* [Qemu-devel] [PATCH v4 04/17] tests: adjust test-thread-pool to new aio_poll() semantics
  2013-06-14 11:43 [Qemu-devel] [PATCH v4 00/17] aio: drop io_flush() Stefan Hajnoczi
                   ` (2 preceding siblings ...)
  2013-06-14 11:43 ` [Qemu-devel] [PATCH v4 03/17] tests: adjust test-aio to new aio_poll() semantics Stefan Hajnoczi
@ 2013-06-14 11:43 ` Stefan Hajnoczi
  2013-06-27 13:16   ` Paolo Bonzini
  2013-06-14 11:43 ` [Qemu-devel] [PATCH v4 05/17] aio: stop using .io_flush() Stefan Hajnoczi
                   ` (14 subsequent siblings)
  18 siblings, 1 reply; 29+ messages in thread
From: Stefan Hajnoczi @ 2013-06-14 11:43 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Paolo Bonzini, Anthony Liguori, Ping Fan Liu,
	Stefan Hajnoczi

aio_poll(ctx, true) will soon block when fd handlers have been set.
Previously aio_poll() would return early if all .io_flush() handlers
returned false.  This means we need to check the equivalent of the
.io_flush() condition *before* calling aio_poll(ctx, true) to avoid
deadlock.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 tests/test-thread-pool.c | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/tests/test-thread-pool.c b/tests/test-thread-pool.c
index 22915aa..f0b2ef1 100644
--- a/tests/test-thread-pool.c
+++ b/tests/test-thread-pool.c
@@ -40,19 +40,13 @@ static void done_cb(void *opaque, int ret)
     active--;
 }
 
-/* Wait until all aio and bh activity has finished */
-static void qemu_aio_wait_all(void)
-{
-    while (aio_poll(ctx, true)) {
-        /* Do nothing */
-    }
-}
-
 static void test_submit(void)
 {
     WorkerTestData data = { .n = 0 };
     thread_pool_submit(pool, worker_cb, &data);
-    qemu_aio_wait_all();
+    while (data.n == 0) {
+        aio_poll(ctx, true);
+    }
     g_assert_cmpint(data.n, ==, 1);
 }
 
@@ -65,7 +59,9 @@ static void test_submit_aio(void)
     /* The callbacks are not called until after the first wait.  */
     active = 1;
     g_assert_cmpint(data.ret, ==, -EINPROGRESS);
-    qemu_aio_wait_all();
+    while (data.ret == -EINPROGRESS) {
+        aio_poll(ctx, true);
+    }
     g_assert_cmpint(active, ==, 0);
     g_assert_cmpint(data.n, ==, 1);
     g_assert_cmpint(data.ret, ==, 0);
@@ -103,7 +99,9 @@ static void test_submit_co(void)
 
     /* qemu_aio_wait_all will execute the rest of the coroutine.  */
 
-    qemu_aio_wait_all();
+    while (data.ret == -EINPROGRESS) {
+        aio_poll(ctx, true);
+    }
 
     /* Back here after the coroutine has finished.  */
 
@@ -187,7 +185,9 @@ static void test_cancel(void)
     }
 
     /* Finish execution and execute any remaining callbacks.  */
-    qemu_aio_wait_all();
+    while (active > 0) {
+        aio_poll(ctx, true);
+    }
     g_assert_cmpint(active, ==, 0);
     for (i = 0; i < 100; i++) {
         if (data[i].n == 3) {
-- 
1.8.1.4


* [Qemu-devel] [PATCH v4 05/17] aio: stop using .io_flush()
  2013-06-14 11:43 [Qemu-devel] [PATCH v4 00/17] aio: drop io_flush() Stefan Hajnoczi
                   ` (3 preceding siblings ...)
  2013-06-14 11:43 ` [Qemu-devel] [PATCH v4 04/17] tests: adjust test-thread-pool " Stefan Hajnoczi
@ 2013-06-14 11:43 ` Stefan Hajnoczi
  2013-06-27 13:19   ` Paolo Bonzini
  2013-06-14 11:43 ` [Qemu-devel] [PATCH v4 06/17] block/curl: drop curl_aio_flush() Stefan Hajnoczi
                   ` (13 subsequent siblings)
  18 siblings, 1 reply; 29+ messages in thread
From: Stefan Hajnoczi @ 2013-06-14 11:43 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Paolo Bonzini, Anthony Liguori, Ping Fan Liu,
	Stefan Hajnoczi

Now that aio_poll() users check their termination condition themselves,
it is no longer necessary to call .io_flush() handlers.

The behavior of aio_poll() changes as follows:

1. .io_flush() is no longer invoked and file descriptors are *always*
monitored.  Previously returning 0 from .io_flush() would skip this file
descriptor.

Due to this change it is essential to check that requests are pending
before calling qemu_aio_wait().  Failure to do so means we block, for
example, waiting for an idle iSCSI socket to become readable when there
are no requests.  Currently all qemu_aio_wait()/aio_poll() callers check
before calling.

2. aio_poll() now returns true if progress was made (BH or fd handlers
executed) and false otherwise.  Previously it would return true whenever it
was 'busy', i.e. whenever any .io_flush() handler returned true.  The 'busy'
concept no longer exists, so only progress is returned.

Due to this change we need to update tests/test-aio.c which asserts
aio_poll() return values.  Note that QEMU doesn't actually rely on these
return values so only tests/test-aio.c cares.

Note that ctx->notifier, the EventNotifier fd used for aio_notify(), is
now handled as a special case.  This is a little ugly but maintains
aio_poll() semantics, i.e. aio_notify() does not count as 'progress' and
aio_poll() avoids blocking when the user has not set any fd handlers yet.

Patches after this remove .io_flush() handler code until we can finally
drop the io_flush arguments to aio_set_fd_handler() and friends.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 aio-posix.c      | 29 +++++++++--------------------
 aio-win32.c      | 34 ++++++++++++++--------------------
 tests/test-aio.c | 10 +++++-----
 3 files changed, 28 insertions(+), 45 deletions(-)

diff --git a/aio-posix.c b/aio-posix.c
index b68eccd..7d66048 100644
--- a/aio-posix.c
+++ b/aio-posix.c
@@ -23,7 +23,6 @@ struct AioHandler
     GPollFD pfd;
     IOHandler *io_read;
     IOHandler *io_write;
-    AioFlushHandler *io_flush;
     int deleted;
     int pollfds_idx;
     void *opaque;
@@ -84,7 +83,6 @@ void aio_set_fd_handler(AioContext *ctx,
         /* Update handler with latest information */
         node->io_read = io_read;
         node->io_write = io_write;
-        node->io_flush = io_flush;
         node->opaque = opaque;
         node->pollfds_idx = -1;
 
@@ -147,7 +145,11 @@ static bool aio_dispatch(AioContext *ctx)
             (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR)) &&
             node->io_read) {
             node->io_read(node->opaque);
-            progress = true;
+
+            /* aio_notify() does not count as progress */
+            if (node->opaque != &ctx->notifier) {
+                progress = true;
+            }
         }
         if (!node->deleted &&
             (revents & (G_IO_OUT | G_IO_ERR)) &&
@@ -173,7 +175,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
 {
     AioHandler *node;
     int ret;
-    bool busy, progress;
+    bool progress;
 
     progress = false;
 
@@ -200,20 +202,8 @@ bool aio_poll(AioContext *ctx, bool blocking)
     g_array_set_size(ctx->pollfds, 0);
 
     /* fill pollfds */
-    busy = false;
     QLIST_FOREACH(node, &ctx->aio_handlers, node) {
         node->pollfds_idx = -1;
-
-        /* If there aren't pending AIO operations, don't invoke callbacks.
-         * Otherwise, if there are no AIO requests, qemu_aio_wait() would
-         * wait indefinitely.
-         */
-        if (!node->deleted && node->io_flush) {
-            if (node->io_flush(node->opaque) == 0) {
-                continue;
-            }
-            busy = true;
-        }
         if (!node->deleted && node->pfd.events) {
             GPollFD pfd = {
                 .fd = node->pfd.fd,
@@ -226,8 +216,8 @@ bool aio_poll(AioContext *ctx, bool blocking)
 
     ctx->walking_handlers--;
 
-    /* No AIO operations?  Get us out of here */
-    if (!busy) {
+    /* early return if we only have the aio_notify() fd */
+    if (ctx->pollfds->len == 1) {
         return progress;
     }
 
@@ -250,6 +240,5 @@ bool aio_poll(AioContext *ctx, bool blocking)
         }
     }
 
-    assert(progress || busy);
-    return true;
+    return progress;
 }
diff --git a/aio-win32.c b/aio-win32.c
index 38723bf..4309c16 100644
--- a/aio-win32.c
+++ b/aio-win32.c
@@ -23,7 +23,6 @@
 struct AioHandler {
     EventNotifier *e;
     EventNotifierHandler *io_notify;
-    AioFlushEventNotifierHandler *io_flush;
     GPollFD pfd;
     int deleted;
     QLIST_ENTRY(AioHandler) node;
@@ -73,7 +72,6 @@ void aio_set_event_notifier(AioContext *ctx,
         }
         /* Update handler with latest information */
         node->io_notify = io_notify;
-        node->io_flush = io_flush;
     }
 
     aio_notify(ctx);
@@ -96,7 +94,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
 {
     AioHandler *node;
     HANDLE events[MAXIMUM_WAIT_OBJECTS + 1];
-    bool busy, progress;
+    bool progress;
     int count;
 
     progress = false;
@@ -126,7 +124,11 @@ bool aio_poll(AioContext *ctx, bool blocking)
         if (node->pfd.revents && node->io_notify) {
             node->pfd.revents = 0;
             node->io_notify(node->e);
-            progress = true;
+
+            /* aio_notify() does not count as progress */
+            if (node->opaque != &ctx->notifier) {
+                progress = true;
+            }
         }
 
         tmp = node;
@@ -147,19 +149,8 @@ bool aio_poll(AioContext *ctx, bool blocking)
     ctx->walking_handlers++;
 
     /* fill fd sets */
-    busy = false;
     count = 0;
     QLIST_FOREACH(node, &ctx->aio_handlers, node) {
-        /* If there aren't pending AIO operations, don't invoke callbacks.
-         * Otherwise, if there are no AIO requests, qemu_aio_wait() would
-         * wait indefinitely.
-         */
-        if (!node->deleted && node->io_flush) {
-            if (node->io_flush(node->e) == 0) {
-                continue;
-            }
-            busy = true;
-        }
         if (!node->deleted && node->io_notify) {
             events[count++] = event_notifier_get_handle(node->e);
         }
@@ -167,8 +158,8 @@ bool aio_poll(AioContext *ctx, bool blocking)
 
     ctx->walking_handlers--;
 
-    /* No AIO operations?  Get us out of here */
-    if (!busy) {
+    /* early return if we only have the aio_notify() fd */
+    if (count == 1) {
         return progress;
     }
 
@@ -196,7 +187,11 @@ bool aio_poll(AioContext *ctx, bool blocking)
                 event_notifier_get_handle(node->e) == events[ret - WAIT_OBJECT_0] &&
                 node->io_notify) {
                 node->io_notify(node->e);
-                progress = true;
+
+                /* aio_notify() does not count as progress */
+                if (node->opaque != &ctx->notifier) {
+                    progress = true;
+                }
             }
 
             tmp = node;
@@ -214,6 +209,5 @@ bool aio_poll(AioContext *ctx, bool blocking)
         events[ret - WAIT_OBJECT_0] = events[--count];
     }
 
-    assert(progress || busy);
-    return true;
+    return progress;
 }
diff --git a/tests/test-aio.c b/tests/test-aio.c
index 20bf5e6..1251952 100644
--- a/tests/test-aio.c
+++ b/tests/test-aio.c
@@ -254,7 +254,7 @@ static void test_wait_event_notifier(void)
     EventNotifierTestData data = { .n = 0, .active = 1 };
     event_notifier_init(&data.e, false);
     aio_set_event_notifier(ctx, &data.e, event_ready_cb, event_active_cb);
-    g_assert(aio_poll(ctx, false));
+    g_assert(!aio_poll(ctx, false));
     g_assert_cmpint(data.n, ==, 0);
     g_assert_cmpint(data.active, ==, 1);
 
@@ -279,7 +279,7 @@ static void test_flush_event_notifier(void)
     EventNotifierTestData data = { .n = 0, .active = 10, .auto_set = true };
     event_notifier_init(&data.e, false);
     aio_set_event_notifier(ctx, &data.e, event_ready_cb, event_active_cb);
-    g_assert(aio_poll(ctx, false));
+    g_assert(!aio_poll(ctx, false));
     g_assert_cmpint(data.n, ==, 0);
     g_assert_cmpint(data.active, ==, 10);
 
@@ -313,7 +313,7 @@ static void test_wait_event_notifier_noflush(void)
     /* Until there is an active descriptor, aio_poll may or may not call
      * event_ready_cb.  Still, it must not block.  */
     event_notifier_set(&data.e);
-    g_assert(!aio_poll(ctx, true));
+    g_assert(aio_poll(ctx, true));
     data.n = 0;
 
     /* An active event notifier forces aio_poll to look at EventNotifiers.  */
@@ -323,13 +323,13 @@ static void test_wait_event_notifier_noflush(void)
     event_notifier_set(&data.e);
     g_assert(aio_poll(ctx, false));
     g_assert_cmpint(data.n, ==, 1);
-    g_assert(aio_poll(ctx, false));
+    g_assert(!aio_poll(ctx, false));
     g_assert_cmpint(data.n, ==, 1);
 
     event_notifier_set(&data.e);
     g_assert(aio_poll(ctx, false));
     g_assert_cmpint(data.n, ==, 2);
-    g_assert(aio_poll(ctx, false));
+    g_assert(!aio_poll(ctx, false));
     g_assert_cmpint(data.n, ==, 2);
 
     event_notifier_set(&dummy.e);
-- 
1.8.1.4


* [Qemu-devel] [PATCH v4 06/17] block/curl: drop curl_aio_flush()
  2013-06-14 11:43 [Qemu-devel] [PATCH v4 00/17] aio: drop io_flush() Stefan Hajnoczi
                   ` (4 preceding siblings ...)
  2013-06-14 11:43 ` [Qemu-devel] [PATCH v4 05/17] aio: stop using .io_flush() Stefan Hajnoczi
@ 2013-06-14 11:43 ` Stefan Hajnoczi
  2013-06-14 11:43 ` [Qemu-devel] [PATCH v4 07/17] block/gluster: drop qemu_gluster_aio_flush_cb() Stefan Hajnoczi
                   ` (12 subsequent siblings)
  18 siblings, 0 replies; 29+ messages in thread
From: Stefan Hajnoczi @ 2013-06-14 11:43 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Paolo Bonzini, Anthony Liguori, Ping Fan Liu,
	Stefan Hajnoczi

.io_flush() is no longer called so drop curl_aio_flush().  The acb[]
array that the function checks is still used elsewhere in block/curl.c,
so it cannot be removed.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/curl.c | 22 +++-------------------
 1 file changed, 3 insertions(+), 19 deletions(-)

diff --git a/block/curl.c b/block/curl.c
index b8935fd..2147076 100644
--- a/block/curl.c
+++ b/block/curl.c
@@ -85,7 +85,6 @@ typedef struct BDRVCURLState {
 
 static void curl_clean_state(CURLState *s);
 static void curl_multi_do(void *arg);
-static int curl_aio_flush(void *opaque);
 
 static int curl_sock_cb(CURL *curl, curl_socket_t fd, int action,
                         void *s, void *sp)
@@ -93,14 +92,14 @@ static int curl_sock_cb(CURL *curl, curl_socket_t fd, int action,
     DPRINTF("CURL (AIO): Sock action %d on fd %d\n", action, fd);
     switch (action) {
         case CURL_POLL_IN:
-            qemu_aio_set_fd_handler(fd, curl_multi_do, NULL, curl_aio_flush, s);
+            qemu_aio_set_fd_handler(fd, curl_multi_do, NULL, NULL, s);
             break;
         case CURL_POLL_OUT:
-            qemu_aio_set_fd_handler(fd, NULL, curl_multi_do, curl_aio_flush, s);
+            qemu_aio_set_fd_handler(fd, NULL, curl_multi_do, NULL, s);
             break;
         case CURL_POLL_INOUT:
             qemu_aio_set_fd_handler(fd, curl_multi_do, curl_multi_do,
-                                    curl_aio_flush, s);
+                                    NULL, s);
             break;
         case CURL_POLL_REMOVE:
             qemu_aio_set_fd_handler(fd, NULL, NULL, NULL, NULL);
@@ -479,21 +478,6 @@ out_noclean:
     return -EINVAL;
 }
 
-static int curl_aio_flush(void *opaque)
-{
-    BDRVCURLState *s = opaque;
-    int i, j;
-
-    for (i=0; i < CURL_NUM_STATES; i++) {
-        for(j=0; j < CURL_NUM_ACB; j++) {
-            if (s->states[i].acb[j]) {
-                return 1;
-            }
-        }
-    }
-    return 0;
-}
-
 static void curl_aio_cancel(BlockDriverAIOCB *blockacb)
 {
     // Do we have to implement canceling? Seems to work without...
-- 
1.8.1.4


* [Qemu-devel] [PATCH v4 07/17] block/gluster: drop qemu_gluster_aio_flush_cb()
  2013-06-14 11:43 [Qemu-devel] [PATCH v4 00/17] aio: drop io_flush() Stefan Hajnoczi
                   ` (5 preceding siblings ...)
  2013-06-14 11:43 ` [Qemu-devel] [PATCH v4 06/17] block/curl: drop curl_aio_flush() Stefan Hajnoczi
@ 2013-06-14 11:43 ` Stefan Hajnoczi
  2013-06-14 11:43 ` [Qemu-devel] [PATCH v4 08/17] block/iscsi: drop iscsi_process_flush() Stefan Hajnoczi
                   ` (11 subsequent siblings)
  18 siblings, 0 replies; 29+ messages in thread
From: Stefan Hajnoczi @ 2013-06-14 11:43 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Paolo Bonzini, Anthony Liguori, Ping Fan Liu,
	Stefan Hajnoczi

Since .io_flush() is no longer called, we do not need
qemu_gluster_aio_flush_cb() anymore.  It turns out that qemu_aio_count is
now unused and can be dropped.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/gluster.c | 16 +---------------
 1 file changed, 1 insertion(+), 15 deletions(-)

diff --git a/block/gluster.c b/block/gluster.c
index 91acde2..7a69a12 100644
--- a/block/gluster.c
+++ b/block/gluster.c
@@ -32,7 +32,6 @@ typedef struct BDRVGlusterState {
     struct glfs *glfs;
     int fds[2];
     struct glfs_fd *fd;
-    int qemu_aio_count;
     int event_reader_pos;
     GlusterAIOCB *event_acb;
 } BDRVGlusterState;
@@ -247,7 +246,6 @@ static void qemu_gluster_complete_aio(GlusterAIOCB *acb, BDRVGlusterState *s)
         ret = -EIO; /* Partial read/write - fail it */
     }
 
-    s->qemu_aio_count--;
     qemu_aio_release(acb);
     cb(opaque, ret);
     if (finished) {
@@ -275,13 +273,6 @@ static void qemu_gluster_aio_event_reader(void *opaque)
     } while (ret < 0 && errno == EINTR);
 }
 
-static int qemu_gluster_aio_flush_cb(void *opaque)
-{
-    BDRVGlusterState *s = opaque;
-
-    return (s->qemu_aio_count > 0);
-}
-
 /* TODO Convert to fine grained options */
 static QemuOptsList runtime_opts = {
     .name = "gluster",
@@ -348,7 +339,7 @@ static int qemu_gluster_open(BlockDriverState *bs,  QDict *options,
     }
     fcntl(s->fds[GLUSTER_FD_READ], F_SETFL, O_NONBLOCK);
     qemu_aio_set_fd_handler(s->fds[GLUSTER_FD_READ],
-        qemu_gluster_aio_event_reader, NULL, qemu_gluster_aio_flush_cb, s);
+        qemu_gluster_aio_event_reader, NULL, NULL, s);
 
 out:
     qemu_opts_del(opts);
@@ -445,7 +436,6 @@ static void gluster_finish_aiocb(struct glfs_fd *fd, ssize_t ret, void *arg)
         qemu_mutex_lock_iothread(); /* We are in gluster thread context */
         acb->common.cb(acb->common.opaque, -EIO);
         qemu_aio_release(acb);
-        s->qemu_aio_count--;
         close(s->fds[GLUSTER_FD_READ]);
         close(s->fds[GLUSTER_FD_WRITE]);
         qemu_aio_set_fd_handler(s->fds[GLUSTER_FD_READ], NULL, NULL, NULL,
@@ -467,7 +457,6 @@ static BlockDriverAIOCB *qemu_gluster_aio_rw(BlockDriverState *bs,
 
     offset = sector_num * BDRV_SECTOR_SIZE;
     size = nb_sectors * BDRV_SECTOR_SIZE;
-    s->qemu_aio_count++;
 
     acb = qemu_aio_get(&gluster_aiocb_info, bs, cb, opaque);
     acb->size = size;
@@ -488,7 +477,6 @@ static BlockDriverAIOCB *qemu_gluster_aio_rw(BlockDriverState *bs,
     return &acb->common;
 
 out:
-    s->qemu_aio_count--;
     qemu_aio_release(acb);
     return NULL;
 }
@@ -518,7 +506,6 @@ static BlockDriverAIOCB *qemu_gluster_aio_flush(BlockDriverState *bs,
     acb->size = 0;
     acb->ret = 0;
     acb->finished = NULL;
-    s->qemu_aio_count++;
 
     ret = glfs_fsync_async(s->fd, &gluster_finish_aiocb, acb);
     if (ret < 0) {
@@ -527,7 +514,6 @@ static BlockDriverAIOCB *qemu_gluster_aio_flush(BlockDriverState *bs,
     return &acb->common;
 
 out:
-    s->qemu_aio_count--;
     qemu_aio_release(acb);
     return NULL;
 }
-- 
1.8.1.4


* [Qemu-devel] [PATCH v4 08/17] block/iscsi: drop iscsi_process_flush()
  2013-06-14 11:43 [Qemu-devel] [PATCH v4 00/17] aio: drop io_flush() Stefan Hajnoczi
                   ` (6 preceding siblings ...)
  2013-06-14 11:43 ` [Qemu-devel] [PATCH v4 07/17] block/gluster: drop qemu_gluster_aio_flush_cb() Stefan Hajnoczi
@ 2013-06-14 11:43 ` Stefan Hajnoczi
  2013-06-14 11:43 ` [Qemu-devel] [PATCH v4 09/17] block/linux-aio: drop qemu_laio_completion_cb() Stefan Hajnoczi
                   ` (10 subsequent siblings)
  18 siblings, 0 replies; 29+ messages in thread
From: Stefan Hajnoczi @ 2013-06-14 11:43 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Paolo Bonzini, Anthony Liguori, Ping Fan Liu,
	Stefan Hajnoczi

.io_flush() is no longer called so drop iscsi_process_flush().

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/iscsi.c | 9 +--------
 1 file changed, 1 insertion(+), 8 deletions(-)

diff --git a/block/iscsi.c b/block/iscsi.c
index f7199c1..e2041ca 100644
--- a/block/iscsi.c
+++ b/block/iscsi.c
@@ -147,13 +147,6 @@ static const AIOCBInfo iscsi_aiocb_info = {
 static void iscsi_process_read(void *arg);
 static void iscsi_process_write(void *arg);
 
-static int iscsi_process_flush(void *arg)
-{
-    IscsiLun *iscsilun = arg;
-
-    return iscsi_queue_length(iscsilun->iscsi) > 0;
-}
-
 static void
 iscsi_set_events(IscsiLun *iscsilun)
 {
@@ -167,7 +160,7 @@ iscsi_set_events(IscsiLun *iscsilun)
         qemu_aio_set_fd_handler(iscsi_get_fd(iscsi),
                       iscsi_process_read,
                       (ev & POLLOUT) ? iscsi_process_write : NULL,
-                      iscsi_process_flush,
+                      NULL,
                       iscsilun);
 
     }
-- 
1.8.1.4


* [Qemu-devel] [PATCH v4 09/17] block/linux-aio: drop qemu_laio_completion_cb()
  2013-06-14 11:43 [Qemu-devel] [PATCH v4 00/17] aio: drop io_flush() Stefan Hajnoczi
                   ` (7 preceding siblings ...)
  2013-06-14 11:43 ` [Qemu-devel] [PATCH v4 08/17] block/iscsi: drop iscsi_process_flush() Stefan Hajnoczi
@ 2013-06-14 11:43 ` Stefan Hajnoczi
  2013-06-14 11:43 ` [Qemu-devel] [PATCH v4 10/17] block/nbd: drop nbd_have_request() Stefan Hajnoczi
                   ` (9 subsequent siblings)
  18 siblings, 0 replies; 29+ messages in thread
From: Stefan Hajnoczi @ 2013-06-14 11:43 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Paolo Bonzini, Anthony Liguori, Ping Fan Liu,
	Stefan Hajnoczi

.io_flush() is no longer called so drop qemu_laio_flush_cb(), the
.io_flush() handler.  It turns out that count is now unused so drop that
too.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/linux-aio.c | 17 ++---------------
 1 file changed, 2 insertions(+), 15 deletions(-)

diff --git a/block/linux-aio.c b/block/linux-aio.c
index ee0f8d1..d9128f3 100644
--- a/block/linux-aio.c
+++ b/block/linux-aio.c
@@ -39,7 +39,6 @@ struct qemu_laiocb {
 struct qemu_laio_state {
     io_context_t ctx;
     EventNotifier e;
-    int count;
 };
 
 static inline ssize_t io_event_ret(struct io_event *ev)
@@ -55,8 +54,6 @@ static void qemu_laio_process_completion(struct qemu_laio_state *s,
 {
     int ret;
 
-    s->count--;
-
     ret = laiocb->ret;
     if (ret != -ECANCELED) {
         if (ret == laiocb->nbytes) {
@@ -101,13 +98,6 @@ static void qemu_laio_completion_cb(EventNotifier *e)
     }
 }
 
-static int qemu_laio_flush_cb(EventNotifier *e)
-{
-    struct qemu_laio_state *s = container_of(e, struct qemu_laio_state, e);
-
-    return (s->count > 0) ? 1 : 0;
-}
-
 static void laio_cancel(BlockDriverAIOCB *blockacb)
 {
     struct qemu_laiocb *laiocb = (struct qemu_laiocb *)blockacb;
@@ -177,14 +167,11 @@ BlockDriverAIOCB *laio_submit(BlockDriverState *bs, void *aio_ctx, int fd,
         goto out_free_aiocb;
     }
     io_set_eventfd(&laiocb->iocb, event_notifier_get_fd(&s->e));
-    s->count++;
 
     if (io_submit(s->ctx, 1, &iocbs) < 0)
-        goto out_dec_count;
+        goto out_free_aiocb;
     return &laiocb->common;
 
-out_dec_count:
-    s->count--;
 out_free_aiocb:
     qemu_aio_release(laiocb);
     return NULL;
@@ -204,7 +191,7 @@ void *laio_init(void)
     }
 
     qemu_aio_set_event_notifier(&s->e, qemu_laio_completion_cb,
-                                qemu_laio_flush_cb);
+                                NULL);
 
     return s;
 
-- 
1.8.1.4


* [Qemu-devel] [PATCH v4 10/17] block/nbd: drop nbd_have_request()
  2013-06-14 11:43 [Qemu-devel] [PATCH v4 00/17] aio: drop io_flush() Stefan Hajnoczi
                   ` (8 preceding siblings ...)
  2013-06-14 11:43 ` [Qemu-devel] [PATCH v4 09/17] block/linux-aio: drop qemu_laio_completion_cb() Stefan Hajnoczi
@ 2013-06-14 11:43 ` Stefan Hajnoczi
  2013-06-14 11:43 ` [Qemu-devel] [PATCH v4 11/17] block/rbd: drop qemu_rbd_aio_flush_cb() Stefan Hajnoczi
                   ` (8 subsequent siblings)
  18 siblings, 0 replies; 29+ messages in thread
From: Stefan Hajnoczi @ 2013-06-14 11:43 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Paolo Bonzini, Anthony Liguori, Ping Fan Liu,
	Stefan Hajnoczi

.io_flush() is no longer called so drop nbd_have_request().  We cannot
drop in_flight since it is still used by other block/nbd.c code.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/nbd.c | 13 +++----------
 1 file changed, 3 insertions(+), 10 deletions(-)

diff --git a/block/nbd.c b/block/nbd.c
index 30e3b78..80d2b31 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -270,13 +270,6 @@ static void nbd_coroutine_start(BDRVNBDState *s, struct nbd_request *request)
     request->handle = INDEX_TO_HANDLE(s, i);
 }
 
-static int nbd_have_request(void *opaque)
-{
-    BDRVNBDState *s = opaque;
-
-    return s->in_flight > 0;
-}
-
 static void nbd_reply_ready(void *opaque)
 {
     BDRVNBDState *s = opaque;
@@ -333,7 +326,7 @@ static int nbd_co_send_request(BDRVNBDState *s, struct nbd_request *request,
     qemu_co_mutex_lock(&s->send_mutex);
     s->send_coroutine = qemu_coroutine_self();
     qemu_aio_set_fd_handler(s->sock, nbd_reply_ready, nbd_restart_write,
-                            nbd_have_request, s);
+                            NULL, s);
     if (qiov) {
         if (!s->is_unix) {
             socket_set_cork(s->sock, 1);
@@ -353,7 +346,7 @@ static int nbd_co_send_request(BDRVNBDState *s, struct nbd_request *request,
         rc = nbd_send_request(s->sock, request);
     }
     qemu_aio_set_fd_handler(s->sock, nbd_reply_ready, NULL,
-                            nbd_have_request, s);
+                            NULL, s);
     s->send_coroutine = NULL;
     qemu_co_mutex_unlock(&s->send_mutex);
     return rc;
@@ -430,7 +423,7 @@ static int nbd_establish_connection(BlockDriverState *bs)
      * kick the reply mechanism.  */
     qemu_set_nonblock(sock);
     qemu_aio_set_fd_handler(sock, nbd_reply_ready, NULL,
-                            nbd_have_request, s);
+                            NULL, s);
 
     s->sock = sock;
     s->size = size;
-- 
1.8.1.4


* [Qemu-devel] [PATCH v4 11/17] block/rbd: drop qemu_rbd_aio_flush_cb()
  2013-06-14 11:43 [Qemu-devel] [PATCH v4 00/17] aio: drop io_flush() Stefan Hajnoczi
                   ` (9 preceding siblings ...)
  2013-06-14 11:43 ` [Qemu-devel] [PATCH v4 10/17] block/nbd: drop nbd_have_request() Stefan Hajnoczi
@ 2013-06-14 11:43 ` Stefan Hajnoczi
  2013-06-14 11:43 ` [Qemu-devel] [PATCH v4 12/17] block/sheepdog: drop have_co_req() and aio_flush_request() Stefan Hajnoczi
                   ` (7 subsequent siblings)
  18 siblings, 0 replies; 29+ messages in thread
From: Stefan Hajnoczi @ 2013-06-14 11:43 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Paolo Bonzini, Anthony Liguori, Ping Fan Liu,
	Stefan Hajnoczi

.io_flush() is no longer called so drop qemu_rbd_aio_flush_cb().
qemu_aio_count is unused now so drop it too.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/rbd.c | 14 +-------------
 1 file changed, 1 insertion(+), 13 deletions(-)

diff --git a/block/rbd.c b/block/rbd.c
index 0f2608b..40e5d55 100644
--- a/block/rbd.c
+++ b/block/rbd.c
@@ -100,7 +100,6 @@ typedef struct BDRVRBDState {
     rados_ioctx_t io_ctx;
     rbd_image_t image;
     char name[RBD_MAX_IMAGE_NAME_SIZE];
-    int qemu_aio_count;
     char *snap;
     int event_reader_pos;
     RADOSCB *event_rcb;
@@ -428,19 +427,11 @@ static void qemu_rbd_aio_event_reader(void *opaque)
             if (s->event_reader_pos == sizeof(s->event_rcb)) {
                 s->event_reader_pos = 0;
                 qemu_rbd_complete_aio(s->event_rcb);
-                s->qemu_aio_count--;
             }
         }
     } while (ret < 0 && errno == EINTR);
 }
 
-static int qemu_rbd_aio_flush_cb(void *opaque)
-{
-    BDRVRBDState *s = opaque;
-
-    return (s->qemu_aio_count > 0);
-}
-
 /* TODO Convert to fine grained options */
 static QemuOptsList runtime_opts = {
     .name = "rbd",
@@ -554,7 +545,7 @@ static int qemu_rbd_open(BlockDriverState *bs, QDict *options, int flags)
     fcntl(s->fds[0], F_SETFL, O_NONBLOCK);
     fcntl(s->fds[1], F_SETFL, O_NONBLOCK);
     qemu_aio_set_fd_handler(s->fds[RBD_FD_READ], qemu_rbd_aio_event_reader,
-                            NULL, qemu_rbd_aio_flush_cb, s);
+                            NULL, NULL, s);
 
 
     qemu_opts_del(opts);
@@ -741,8 +732,6 @@ static BlockDriverAIOCB *rbd_start_aio(BlockDriverState *bs,
     off = sector_num * BDRV_SECTOR_SIZE;
     size = nb_sectors * BDRV_SECTOR_SIZE;
 
-    s->qemu_aio_count++; /* All the RADOSCB */
-
     rcb = g_malloc(sizeof(RADOSCB));
     rcb->done = 0;
     rcb->acb = acb;
@@ -779,7 +768,6 @@ static BlockDriverAIOCB *rbd_start_aio(BlockDriverState *bs,
 
 failed:
     g_free(rcb);
-    s->qemu_aio_count--;
     qemu_aio_release(acb);
     return NULL;
 }
-- 
1.8.1.4


* [Qemu-devel] [PATCH v4 12/17] block/sheepdog: drop have_co_req() and aio_flush_request()
  2013-06-14 11:43 [Qemu-devel] [PATCH v4 00/17] aio: drop io_flush() Stefan Hajnoczi
                   ` (10 preceding siblings ...)
  2013-06-14 11:43 ` [Qemu-devel] [PATCH v4 11/17] block/rbd: drop qemu_rbd_aio_flush_cb() Stefan Hajnoczi
@ 2013-06-14 11:43 ` Stefan Hajnoczi
  2013-06-14 11:43 ` [Qemu-devel] [PATCH v4 13/17] block/ssh: drop return_true() Stefan Hajnoczi
                   ` (6 subsequent siblings)
  18 siblings, 0 replies; 29+ messages in thread
From: Stefan Hajnoczi @ 2013-06-14 11:43 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Paolo Bonzini, Anthony Liguori, Ping Fan Liu,
	Stefan Hajnoczi

.io_flush() is no longer called so drop have_co_req() and
aio_flush_request().

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/sheepdog.c | 25 +++++--------------------
 1 file changed, 5 insertions(+), 20 deletions(-)

diff --git a/block/sheepdog.c b/block/sheepdog.c
index 21a4edf..66918c6 100644
--- a/block/sheepdog.c
+++ b/block/sheepdog.c
@@ -509,13 +509,6 @@ static void restart_co_req(void *opaque)
     qemu_coroutine_enter(co, NULL);
 }
 
-static int have_co_req(void *opaque)
-{
-    /* this handler is set only when there is a pending request, so
-     * always returns 1. */
-    return 1;
-}
-
 typedef struct SheepdogReqCo {
     int sockfd;
     SheepdogReq *hdr;
@@ -538,14 +531,14 @@ static coroutine_fn void do_co_req(void *opaque)
     unsigned int *rlen = srco->rlen;
 
     co = qemu_coroutine_self();
-    qemu_aio_set_fd_handler(sockfd, NULL, restart_co_req, have_co_req, co);
+    qemu_aio_set_fd_handler(sockfd, NULL, restart_co_req, NULL, co);
 
     ret = send_co_req(sockfd, hdr, data, wlen);
     if (ret < 0) {
         goto out;
     }
 
-    qemu_aio_set_fd_handler(sockfd, restart_co_req, NULL, have_co_req, co);
+    qemu_aio_set_fd_handler(sockfd, restart_co_req, NULL, NULL, co);
 
     ret = qemu_co_recv(sockfd, hdr, sizeof(*hdr));
     if (ret < sizeof(*hdr)) {
@@ -796,14 +789,6 @@ static void co_write_request(void *opaque)
     qemu_coroutine_enter(s->co_send, NULL);
 }
 
-static int aio_flush_request(void *opaque)
-{
-    BDRVSheepdogState *s = opaque;
-
-    return !QLIST_EMPTY(&s->inflight_aio_head) ||
-        !QLIST_EMPTY(&s->pending_aio_head);
-}
-
 /*
  * Return a socket discriptor to read/write objects.
  *
@@ -819,7 +804,7 @@ static int get_sheep_fd(BDRVSheepdogState *s)
         return fd;
     }
 
-    qemu_aio_set_fd_handler(fd, co_read_response, NULL, aio_flush_request, s);
+    qemu_aio_set_fd_handler(fd, co_read_response, NULL, NULL, s);
     return fd;
 }
 
@@ -1070,7 +1055,7 @@ static int coroutine_fn add_aio_request(BDRVSheepdogState *s, AIOReq *aio_req,
     qemu_co_mutex_lock(&s->lock);
     s->co_send = qemu_coroutine_self();
     qemu_aio_set_fd_handler(s->fd, co_read_response, co_write_request,
-                            aio_flush_request, s);
+                            NULL, s);
     socket_set_cork(s->fd, 1);
 
     /* send a header */
@@ -1092,7 +1077,7 @@ static int coroutine_fn add_aio_request(BDRVSheepdogState *s, AIOReq *aio_req,
 
     socket_set_cork(s->fd, 0);
     qemu_aio_set_fd_handler(s->fd, co_read_response, NULL,
-                            aio_flush_request, s);
+                            NULL, s);
     qemu_co_mutex_unlock(&s->lock);
 
     return 0;
-- 
1.8.1.4


* [Qemu-devel] [PATCH v4 13/17] block/ssh: drop return_true()
  2013-06-14 11:43 [Qemu-devel] [PATCH v4 00/17] aio: drop io_flush() Stefan Hajnoczi
                   ` (11 preceding siblings ...)
  2013-06-14 11:43 ` [Qemu-devel] [PATCH v4 12/17] block/sheepdog: drop have_co_req() and aio_flush_request() Stefan Hajnoczi
@ 2013-06-14 11:43 ` Stefan Hajnoczi
  2013-06-14 11:44 ` [Qemu-devel] [PATCH v4 14/17] dataplane/virtio-blk: drop flush_true() and flush_io() Stefan Hajnoczi
                   ` (5 subsequent siblings)
  18 siblings, 0 replies; 29+ messages in thread
From: Stefan Hajnoczi @ 2013-06-14 11:43 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Paolo Bonzini, Anthony Liguori, Ping Fan Liu,
	Stefan Hajnoczi

.io_flush() is no longer called so drop return_true().

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/ssh.c | 10 +---------
 1 file changed, 1 insertion(+), 9 deletions(-)

diff --git a/block/ssh.c b/block/ssh.c
index 246a70d..ed525cc 100644
--- a/block/ssh.c
+++ b/block/ssh.c
@@ -725,14 +725,6 @@ static void restart_coroutine(void *opaque)
     qemu_coroutine_enter(co, NULL);
 }
 
-/* Always true because when we have called set_fd_handler there is
- * always a request being processed.
- */
-static int return_true(void *opaque)
-{
-    return 1;
-}
-
 static coroutine_fn void set_fd_handler(BDRVSSHState *s)
 {
     int r;
@@ -751,7 +743,7 @@ static coroutine_fn void set_fd_handler(BDRVSSHState *s)
     DPRINTF("s->sock=%d rd_handler=%p wr_handler=%p", s->sock,
             rd_handler, wr_handler);
 
-    qemu_aio_set_fd_handler(s->sock, rd_handler, wr_handler, return_true, co);
+    qemu_aio_set_fd_handler(s->sock, rd_handler, wr_handler, NULL, co);
 }
 
 static coroutine_fn void clear_fd_handler(BDRVSSHState *s)
-- 
1.8.1.4


* [Qemu-devel] [PATCH v4 14/17] dataplane/virtio-blk: drop flush_true() and flush_io()
  2013-06-14 11:43 [Qemu-devel] [PATCH v4 00/17] aio: drop io_flush() Stefan Hajnoczi
                   ` (12 preceding siblings ...)
  2013-06-14 11:43 ` [Qemu-devel] [PATCH v4 13/17] block/ssh: drop return_true() Stefan Hajnoczi
@ 2013-06-14 11:44 ` Stefan Hajnoczi
  2013-06-14 11:44 ` [Qemu-devel] [PATCH v4 15/17] thread-pool: drop thread_pool_active() Stefan Hajnoczi
                   ` (4 subsequent siblings)
  18 siblings, 0 replies; 29+ messages in thread
From: Stefan Hajnoczi @ 2013-06-14 11:44 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Paolo Bonzini, Anthony Liguori, Ping Fan Liu,
	Stefan Hajnoczi

.io_flush() is no longer called so drop flush_true() and flush_io().

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 hw/block/dataplane/virtio-blk.c | 17 ++---------------
 1 file changed, 2 insertions(+), 15 deletions(-)

diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index 0509d3f..9e6d32b 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -264,11 +264,6 @@ static int process_request(IOQueue *ioq, struct iovec iov[],
     }
 }
 
-static int flush_true(EventNotifier *e)
-{
-    return true;
-}
-
 static void handle_notify(EventNotifier *e)
 {
     VirtIOBlockDataPlane *s = container_of(e, VirtIOBlockDataPlane,
@@ -348,14 +343,6 @@ static void handle_notify(EventNotifier *e)
     }
 }
 
-static int flush_io(EventNotifier *e)
-{
-    VirtIOBlockDataPlane *s = container_of(e, VirtIOBlockDataPlane,
-                                           io_notifier);
-
-    return s->num_reqs > 0;
-}
-
 static void handle_io(EventNotifier *e)
 {
     VirtIOBlockDataPlane *s = container_of(e, VirtIOBlockDataPlane,
@@ -486,7 +473,7 @@ void virtio_blk_data_plane_start(VirtIOBlockDataPlane *s)
         exit(1);
     }
     s->host_notifier = *virtio_queue_get_host_notifier(vq);
-    aio_set_event_notifier(s->ctx, &s->host_notifier, handle_notify, flush_true);
+    aio_set_event_notifier(s->ctx, &s->host_notifier, handle_notify, NULL);
 
     /* Set up ioqueue */
     ioq_init(&s->ioqueue, s->fd, REQ_MAX);
@@ -494,7 +481,7 @@ void virtio_blk_data_plane_start(VirtIOBlockDataPlane *s)
         ioq_put_iocb(&s->ioqueue, &s->requests[i].iocb);
     }
     s->io_notifier = *ioq_get_notifier(&s->ioqueue);
-    aio_set_event_notifier(s->ctx, &s->io_notifier, handle_io, flush_io);
+    aio_set_event_notifier(s->ctx, &s->io_notifier, handle_io, NULL);
 
     s->started = true;
     trace_virtio_blk_data_plane_start(s);
-- 
1.8.1.4


* [Qemu-devel] [PATCH v4 15/17] thread-pool: drop thread_pool_active()
  2013-06-14 11:43 [Qemu-devel] [PATCH v4 00/17] aio: drop io_flush() Stefan Hajnoczi
                   ` (13 preceding siblings ...)
  2013-06-14 11:44 ` [Qemu-devel] [PATCH v4 14/17] dataplane/virtio-blk: drop flush_true() and flush_io() Stefan Hajnoczi
@ 2013-06-14 11:44 ` Stefan Hajnoczi
  2013-06-14 11:44 ` [Qemu-devel] [PATCH v4 16/17] tests: drop event_active_cb() Stefan Hajnoczi
                   ` (3 subsequent siblings)
  18 siblings, 0 replies; 29+ messages in thread
From: Stefan Hajnoczi @ 2013-06-14 11:44 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Paolo Bonzini, Anthony Liguori, Ping Fan Liu,
	Stefan Hajnoczi

.io_flush() is no longer called so drop thread_pool_active().  The block
layer is the only thread-pool.c user and it already tracks in-flight
requests, so thread_pool_active() is not needed.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 thread-pool.c | 8 +-------
 1 file changed, 1 insertion(+), 7 deletions(-)

diff --git a/thread-pool.c b/thread-pool.c
index 0ebd4c2..096f007 100644
--- a/thread-pool.c
+++ b/thread-pool.c
@@ -197,12 +197,6 @@ restart:
     }
 }
 
-static int thread_pool_active(EventNotifier *notifier)
-{
-    ThreadPool *pool = container_of(notifier, ThreadPool, notifier);
-    return !QLIST_EMPTY(&pool->head);
-}
-
 static void thread_pool_cancel(BlockDriverAIOCB *acb)
 {
     ThreadPoolElement *elem = (ThreadPoolElement *)acb;
@@ -310,7 +304,7 @@ static void thread_pool_init_one(ThreadPool *pool, AioContext *ctx)
     QTAILQ_INIT(&pool->request_list);
 
     aio_set_event_notifier(ctx, &pool->notifier, event_notifier_ready,
-                           thread_pool_active);
+                           NULL);
 }
 
 ThreadPool *thread_pool_new(AioContext *ctx)
-- 
1.8.1.4


* [Qemu-devel] [PATCH v4 16/17] tests: drop event_active_cb()
  2013-06-14 11:43 [Qemu-devel] [PATCH v4 00/17] aio: drop io_flush() Stefan Hajnoczi
                   ` (14 preceding siblings ...)
  2013-06-14 11:44 ` [Qemu-devel] [PATCH v4 15/17] thread-pool: drop thread_pool_active() Stefan Hajnoczi
@ 2013-06-14 11:44 ` Stefan Hajnoczi
  2013-06-27 13:20   ` Paolo Bonzini
  2013-06-14 11:44 ` [Qemu-devel] [PATCH v4 17/17] aio: drop io_flush argument Stefan Hajnoczi
                   ` (2 subsequent siblings)
  18 siblings, 1 reply; 29+ messages in thread
From: Stefan Hajnoczi @ 2013-06-14 11:44 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Paolo Bonzini, Anthony Liguori, Ping Fan Liu,
	Stefan Hajnoczi

.io_flush() is no longer called so drop event_active_cb().

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 tests/test-aio.c | 22 ++++++++--------------
 1 file changed, 8 insertions(+), 14 deletions(-)

diff --git a/tests/test-aio.c b/tests/test-aio.c
index 1251952..7b2892a 100644
--- a/tests/test-aio.c
+++ b/tests/test-aio.c
@@ -65,12 +65,6 @@ static void bh_delete_cb(void *opaque)
     }
 }
 
-static int event_active_cb(EventNotifier *e)
-{
-    EventNotifierTestData *data = container_of(e, EventNotifierTestData, e);
-    return data->active > 0;
-}
-
 static void event_ready_cb(EventNotifier *e)
 {
     EventNotifierTestData *data = container_of(e, EventNotifierTestData, e);
@@ -239,7 +233,7 @@ static void test_set_event_notifier(void)
 {
     EventNotifierTestData data = { .n = 0, .active = 0 };
     event_notifier_init(&data.e, false);
-    aio_set_event_notifier(ctx, &data.e, event_ready_cb, event_active_cb);
+    aio_set_event_notifier(ctx, &data.e, event_ready_cb, NULL);
     g_assert(!aio_poll(ctx, false));
     g_assert_cmpint(data.n, ==, 0);
 
@@ -253,7 +247,7 @@ static void test_wait_event_notifier(void)
 {
     EventNotifierTestData data = { .n = 0, .active = 1 };
     event_notifier_init(&data.e, false);
-    aio_set_event_notifier(ctx, &data.e, event_ready_cb, event_active_cb);
+    aio_set_event_notifier(ctx, &data.e, event_ready_cb, NULL);
     g_assert(!aio_poll(ctx, false));
     g_assert_cmpint(data.n, ==, 0);
     g_assert_cmpint(data.active, ==, 1);
@@ -278,7 +272,7 @@ static void test_flush_event_notifier(void)
 {
     EventNotifierTestData data = { .n = 0, .active = 10, .auto_set = true };
     event_notifier_init(&data.e, false);
-    aio_set_event_notifier(ctx, &data.e, event_ready_cb, event_active_cb);
+    aio_set_event_notifier(ctx, &data.e, event_ready_cb, NULL);
     g_assert(!aio_poll(ctx, false));
     g_assert_cmpint(data.n, ==, 0);
     g_assert_cmpint(data.active, ==, 10);
@@ -318,7 +312,7 @@ static void test_wait_event_notifier_noflush(void)
 
     /* An active event notifier forces aio_poll to look at EventNotifiers.  */
     event_notifier_init(&dummy.e, false);
-    aio_set_event_notifier(ctx, &dummy.e, event_ready_cb, event_active_cb);
+    aio_set_event_notifier(ctx, &dummy.e, event_ready_cb, NULL);
 
     event_notifier_set(&data.e);
     g_assert(aio_poll(ctx, false));
@@ -521,7 +515,7 @@ static void test_source_set_event_notifier(void)
 {
     EventNotifierTestData data = { .n = 0, .active = 0 };
     event_notifier_init(&data.e, false);
-    aio_set_event_notifier(ctx, &data.e, event_ready_cb, event_active_cb);
+    aio_set_event_notifier(ctx, &data.e, event_ready_cb, NULL);
     while (g_main_context_iteration(NULL, false));
     g_assert_cmpint(data.n, ==, 0);
 
@@ -535,7 +529,7 @@ static void test_source_wait_event_notifier(void)
 {
     EventNotifierTestData data = { .n = 0, .active = 1 };
     event_notifier_init(&data.e, false);
-    aio_set_event_notifier(ctx, &data.e, event_ready_cb, event_active_cb);
+    aio_set_event_notifier(ctx, &data.e, event_ready_cb, NULL);
     g_assert(g_main_context_iteration(NULL, false));
     g_assert_cmpint(data.n, ==, 0);
     g_assert_cmpint(data.active, ==, 1);
@@ -560,7 +554,7 @@ static void test_source_flush_event_notifier(void)
 {
     EventNotifierTestData data = { .n = 0, .active = 10, .auto_set = true };
     event_notifier_init(&data.e, false);
-    aio_set_event_notifier(ctx, &data.e, event_ready_cb, event_active_cb);
+    aio_set_event_notifier(ctx, &data.e, event_ready_cb, NULL);
     g_assert(g_main_context_iteration(NULL, false));
     g_assert_cmpint(data.n, ==, 0);
     g_assert_cmpint(data.active, ==, 10);
@@ -600,7 +594,7 @@ static void test_source_wait_event_notifier_noflush(void)
 
     /* An active event notifier forces aio_poll to look at EventNotifiers.  */
     event_notifier_init(&dummy.e, false);
-    aio_set_event_notifier(ctx, &dummy.e, event_ready_cb, event_active_cb);
+    aio_set_event_notifier(ctx, &dummy.e, event_ready_cb, NULL);
 
     event_notifier_set(&data.e);
     g_assert(g_main_context_iteration(NULL, false));
-- 
1.8.1.4


* [Qemu-devel] [PATCH v4 17/17] aio: drop io_flush argument
  2013-06-14 11:43 [Qemu-devel] [PATCH v4 00/17] aio: drop io_flush() Stefan Hajnoczi
                   ` (15 preceding siblings ...)
  2013-06-14 11:44 ` [Qemu-devel] [PATCH v4 16/17] tests: drop event_active_cb() Stefan Hajnoczi
@ 2013-06-14 11:44 ` Stefan Hajnoczi
  2013-06-27 13:21   ` Paolo Bonzini
  2013-06-27 12:23 ` [Qemu-devel] [PATCH v4 00/17] aio: drop io_flush() Stefan Hajnoczi
  2013-06-27 13:22 ` Paolo Bonzini
  18 siblings, 1 reply; 29+ messages in thread
From: Stefan Hajnoczi @ 2013-06-14 11:44 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Paolo Bonzini, Anthony Liguori, Ping Fan Liu,
	Stefan Hajnoczi

The .io_flush() handler no longer exists and has no users.  Drop the
io_flush argument to aio_set_fd_handler() and related functions.

The AioFlushEventNotifierHandler and AioFlushHandler typedefs are no
longer used and are dropped too.
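
For illustration, a minimal caller-side sketch of the change (the fd,
notifier, callback and opaque names below are hypothetical placeholders,
not code from this series):

    /* placeholder fd, callbacks and opaque pointer, for illustration only */
    /* before: an extra io_flush callback decided whether the fd was polled */
    qemu_aio_set_fd_handler(fd, my_read_cb, my_write_cb, my_flush_cb, opaque);
    aio_set_event_notifier(ctx, &my_notifier, my_notify_cb, my_flush_cb);

    /* after: the io_flush argument is gone; a registered fd is always
     * monitored, and callers check their own completion condition
     * around aio_poll()
     */
    qemu_aio_set_fd_handler(fd, my_read_cb, my_write_cb, opaque);
    aio_set_event_notifier(ctx, &my_notifier, my_notify_cb);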

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 aio-posix.c                     |  7 ++-----
 aio-win32.c                     |  3 +--
 async.c                         |  4 ++--
 block/curl.c                    |  9 ++++-----
 block/gluster.c                 |  7 +++----
 block/iscsi.c                   |  3 +--
 block/linux-aio.c               |  3 +--
 block/nbd.c                     | 11 ++++-------
 block/rbd.c                     |  4 ++--
 block/sheepdog.c                | 18 ++++++++----------
 block/ssh.c                     |  4 ++--
 hw/block/dataplane/virtio-blk.c |  8 ++++----
 include/block/aio.h             | 14 ++------------
 main-loop.c                     |  9 +++------
 tests/test-aio.c                | 40 ++++++++++++++++++++--------------------
 thread-pool.c                   |  5 ++---
 16 files changed, 61 insertions(+), 88 deletions(-)

diff --git a/aio-posix.c b/aio-posix.c
index 7d66048..2440eb9 100644
--- a/aio-posix.c
+++ b/aio-posix.c
@@ -46,7 +46,6 @@ void aio_set_fd_handler(AioContext *ctx,
                         int fd,
                         IOHandler *io_read,
                         IOHandler *io_write,
-                        AioFlushHandler *io_flush,
                         void *opaque)
 {
     AioHandler *node;
@@ -95,12 +94,10 @@ void aio_set_fd_handler(AioContext *ctx,
 
 void aio_set_event_notifier(AioContext *ctx,
                             EventNotifier *notifier,
-                            EventNotifierHandler *io_read,
-                            AioFlushEventNotifierHandler *io_flush)
+                            EventNotifierHandler *io_read)
 {
     aio_set_fd_handler(ctx, event_notifier_get_fd(notifier),
-                       (IOHandler *)io_read, NULL,
-                       (AioFlushHandler *)io_flush, notifier);
+                       (IOHandler *)io_read, NULL, notifier);
 }
 
 bool aio_pending(AioContext *ctx)
diff --git a/aio-win32.c b/aio-win32.c
index 4309c16..78b2801 100644
--- a/aio-win32.c
+++ b/aio-win32.c
@@ -30,8 +30,7 @@ struct AioHandler {
 
 void aio_set_event_notifier(AioContext *ctx,
                             EventNotifier *e,
-                            EventNotifierHandler *io_notify,
-                            AioFlushEventNotifierHandler *io_flush)
+                            EventNotifierHandler *io_notify)
 {
     AioHandler *node;
 
diff --git a/async.c b/async.c
index 90fe906..fe2c8bf 100644
--- a/async.c
+++ b/async.c
@@ -174,7 +174,7 @@ aio_ctx_finalize(GSource     *source)
     AioContext *ctx = (AioContext *) source;
 
     thread_pool_free(ctx->thread_pool);
-    aio_set_event_notifier(ctx, &ctx->notifier, NULL, NULL);
+    aio_set_event_notifier(ctx, &ctx->notifier, NULL);
     event_notifier_cleanup(&ctx->notifier);
     g_array_free(ctx->pollfds, TRUE);
 }
@@ -214,7 +214,7 @@ AioContext *aio_context_new(void)
     event_notifier_init(&ctx->notifier, false);
     aio_set_event_notifier(ctx, &ctx->notifier, 
                            (EventNotifierHandler *)
-                           event_notifier_test_and_clear, NULL);
+                           event_notifier_test_and_clear);
 
     return ctx;
 }
diff --git a/block/curl.c b/block/curl.c
index 2147076..e88621a 100644
--- a/block/curl.c
+++ b/block/curl.c
@@ -92,17 +92,16 @@ static int curl_sock_cb(CURL *curl, curl_socket_t fd, int action,
     DPRINTF("CURL (AIO): Sock action %d on fd %d\n", action, fd);
     switch (action) {
         case CURL_POLL_IN:
-            qemu_aio_set_fd_handler(fd, curl_multi_do, NULL, NULL, s);
+            qemu_aio_set_fd_handler(fd, curl_multi_do, NULL, s);
             break;
         case CURL_POLL_OUT:
-            qemu_aio_set_fd_handler(fd, NULL, curl_multi_do, NULL, s);
+            qemu_aio_set_fd_handler(fd, NULL, curl_multi_do, s);
             break;
         case CURL_POLL_INOUT:
-            qemu_aio_set_fd_handler(fd, curl_multi_do, curl_multi_do,
-                                    NULL, s);
+            qemu_aio_set_fd_handler(fd, curl_multi_do, curl_multi_do, s);
             break;
         case CURL_POLL_REMOVE:
-            qemu_aio_set_fd_handler(fd, NULL, NULL, NULL, NULL);
+            qemu_aio_set_fd_handler(fd, NULL, NULL, NULL);
             break;
     }
 
diff --git a/block/gluster.c b/block/gluster.c
index 7a69a12..3cff308 100644
--- a/block/gluster.c
+++ b/block/gluster.c
@@ -339,7 +339,7 @@ static int qemu_gluster_open(BlockDriverState *bs,  QDict *options,
     }
     fcntl(s->fds[GLUSTER_FD_READ], F_SETFL, O_NONBLOCK);
     qemu_aio_set_fd_handler(s->fds[GLUSTER_FD_READ],
-        qemu_gluster_aio_event_reader, NULL, NULL, s);
+        qemu_gluster_aio_event_reader, NULL, s);
 
 out:
     qemu_opts_del(opts);
@@ -438,8 +438,7 @@ static void gluster_finish_aiocb(struct glfs_fd *fd, ssize_t ret, void *arg)
         qemu_aio_release(acb);
         close(s->fds[GLUSTER_FD_READ]);
         close(s->fds[GLUSTER_FD_WRITE]);
-        qemu_aio_set_fd_handler(s->fds[GLUSTER_FD_READ], NULL, NULL, NULL,
-            NULL);
+        qemu_aio_set_fd_handler(s->fds[GLUSTER_FD_READ], NULL, NULL, NULL);
         bs->drv = NULL; /* Make the disk inaccessible */
         qemu_mutex_unlock_iothread();
     }
@@ -551,7 +550,7 @@ static void qemu_gluster_close(BlockDriverState *bs)
 
     close(s->fds[GLUSTER_FD_READ]);
     close(s->fds[GLUSTER_FD_WRITE]);
-    qemu_aio_set_fd_handler(s->fds[GLUSTER_FD_READ], NULL, NULL, NULL, NULL);
+    qemu_aio_set_fd_handler(s->fds[GLUSTER_FD_READ], NULL, NULL, NULL);
 
     if (s->fd) {
         glfs_close(s->fd);
diff --git a/block/iscsi.c b/block/iscsi.c
index e2041ca..721c7d9 100644
--- a/block/iscsi.c
+++ b/block/iscsi.c
@@ -160,7 +160,6 @@ iscsi_set_events(IscsiLun *iscsilun)
         qemu_aio_set_fd_handler(iscsi_get_fd(iscsi),
                       iscsi_process_read,
                       (ev & POLLOUT) ? iscsi_process_write : NULL,
-                      NULL,
                       iscsilun);
 
     }
@@ -1176,7 +1175,7 @@ static void iscsi_close(BlockDriverState *bs)
         qemu_del_timer(iscsilun->nop_timer);
         qemu_free_timer(iscsilun->nop_timer);
     }
-    qemu_aio_set_fd_handler(iscsi_get_fd(iscsi), NULL, NULL, NULL, NULL);
+    qemu_aio_set_fd_handler(iscsi_get_fd(iscsi), NULL, NULL, NULL);
     iscsi_destroy_context(iscsi);
     memset(iscsilun, 0, sizeof(IscsiLun));
 }
diff --git a/block/linux-aio.c b/block/linux-aio.c
index d9128f3..53434e2 100644
--- a/block/linux-aio.c
+++ b/block/linux-aio.c
@@ -190,8 +190,7 @@ void *laio_init(void)
         goto out_close_efd;
     }
 
-    qemu_aio_set_event_notifier(&s->e, qemu_laio_completion_cb,
-                                NULL);
+    qemu_aio_set_event_notifier(&s->e, qemu_laio_completion_cb);
 
     return s;
 
diff --git a/block/nbd.c b/block/nbd.c
index 80d2b31..f5350fb 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -325,8 +325,7 @@ static int nbd_co_send_request(BDRVNBDState *s, struct nbd_request *request,
 
     qemu_co_mutex_lock(&s->send_mutex);
     s->send_coroutine = qemu_coroutine_self();
-    qemu_aio_set_fd_handler(s->sock, nbd_reply_ready, nbd_restart_write,
-                            NULL, s);
+    qemu_aio_set_fd_handler(s->sock, nbd_reply_ready, nbd_restart_write, s);
     if (qiov) {
         if (!s->is_unix) {
             socket_set_cork(s->sock, 1);
@@ -345,8 +344,7 @@ static int nbd_co_send_request(BDRVNBDState *s, struct nbd_request *request,
     } else {
         rc = nbd_send_request(s->sock, request);
     }
-    qemu_aio_set_fd_handler(s->sock, nbd_reply_ready, NULL,
-                            NULL, s);
+    qemu_aio_set_fd_handler(s->sock, nbd_reply_ready, NULL, s);
     s->send_coroutine = NULL;
     qemu_co_mutex_unlock(&s->send_mutex);
     return rc;
@@ -422,8 +420,7 @@ static int nbd_establish_connection(BlockDriverState *bs)
     /* Now that we're connected, set the socket to be non-blocking and
      * kick the reply mechanism.  */
     qemu_set_nonblock(sock);
-    qemu_aio_set_fd_handler(sock, nbd_reply_ready, NULL,
-                            NULL, s);
+    qemu_aio_set_fd_handler(sock, nbd_reply_ready, NULL, s);
 
     s->sock = sock;
     s->size = size;
@@ -443,7 +440,7 @@ static void nbd_teardown_connection(BlockDriverState *bs)
     request.len = 0;
     nbd_send_request(s->sock, &request);
 
-    qemu_aio_set_fd_handler(s->sock, NULL, NULL, NULL, NULL);
+    qemu_aio_set_fd_handler(s->sock, NULL, NULL, NULL);
     closesocket(s->sock);
 }
 
diff --git a/block/rbd.c b/block/rbd.c
index 40e5d55..78b8564 100644
--- a/block/rbd.c
+++ b/block/rbd.c
@@ -545,7 +545,7 @@ static int qemu_rbd_open(BlockDriverState *bs, QDict *options, int flags)
     fcntl(s->fds[0], F_SETFL, O_NONBLOCK);
     fcntl(s->fds[1], F_SETFL, O_NONBLOCK);
     qemu_aio_set_fd_handler(s->fds[RBD_FD_READ], qemu_rbd_aio_event_reader,
-                            NULL, NULL, s);
+                            NULL, s);
 
 
     qemu_opts_del(opts);
@@ -569,7 +569,7 @@ static void qemu_rbd_close(BlockDriverState *bs)
 
     close(s->fds[0]);
     close(s->fds[1]);
-    qemu_aio_set_fd_handler(s->fds[RBD_FD_READ], NULL, NULL, NULL, NULL);
+    qemu_aio_set_fd_handler(s->fds[RBD_FD_READ], NULL, NULL, NULL);
 
     rbd_close(s->image);
     rados_ioctx_destroy(s->io_ctx);
diff --git a/block/sheepdog.c b/block/sheepdog.c
index 66918c6..be2a876 100644
--- a/block/sheepdog.c
+++ b/block/sheepdog.c
@@ -531,14 +531,14 @@ static coroutine_fn void do_co_req(void *opaque)
     unsigned int *rlen = srco->rlen;
 
     co = qemu_coroutine_self();
-    qemu_aio_set_fd_handler(sockfd, NULL, restart_co_req, NULL, co);
+    qemu_aio_set_fd_handler(sockfd, NULL, restart_co_req, co);
 
     ret = send_co_req(sockfd, hdr, data, wlen);
     if (ret < 0) {
         goto out;
     }
 
-    qemu_aio_set_fd_handler(sockfd, restart_co_req, NULL, NULL, co);
+    qemu_aio_set_fd_handler(sockfd, restart_co_req, NULL, co);
 
     ret = qemu_co_recv(sockfd, hdr, sizeof(*hdr));
     if (ret < sizeof(*hdr)) {
@@ -563,7 +563,7 @@ static coroutine_fn void do_co_req(void *opaque)
 out:
     /* there is at most one request for this sockfd, so it is safe to
      * set each handler to NULL. */
-    qemu_aio_set_fd_handler(sockfd, NULL, NULL, NULL, NULL);
+    qemu_aio_set_fd_handler(sockfd, NULL, NULL, NULL);
 
     srco->ret = ret;
     srco->finished = true;
@@ -804,7 +804,7 @@ static int get_sheep_fd(BDRVSheepdogState *s)
         return fd;
     }
 
-    qemu_aio_set_fd_handler(fd, co_read_response, NULL, NULL, s);
+    qemu_aio_set_fd_handler(fd, co_read_response, NULL, s);
     return fd;
 }
 
@@ -1054,8 +1054,7 @@ static int coroutine_fn add_aio_request(BDRVSheepdogState *s, AIOReq *aio_req,
 
     qemu_co_mutex_lock(&s->lock);
     s->co_send = qemu_coroutine_self();
-    qemu_aio_set_fd_handler(s->fd, co_read_response, co_write_request,
-                            NULL, s);
+    qemu_aio_set_fd_handler(s->fd, co_read_response, co_write_request, s);
     socket_set_cork(s->fd, 1);
 
     /* send a header */
@@ -1076,8 +1075,7 @@ static int coroutine_fn add_aio_request(BDRVSheepdogState *s, AIOReq *aio_req,
     }
 
     socket_set_cork(s->fd, 0);
-    qemu_aio_set_fd_handler(s->fd, co_read_response, NULL,
-                            NULL, s);
+    qemu_aio_set_fd_handler(s->fd, co_read_response, NULL, s);
     qemu_co_mutex_unlock(&s->lock);
 
     return 0;
@@ -1335,7 +1333,7 @@ static int sd_open(BlockDriverState *bs, QDict *options, int flags)
     g_free(buf);
     return 0;
 out:
-    qemu_aio_set_fd_handler(s->fd, NULL, NULL, NULL, NULL);
+    qemu_aio_set_fd_handler(s->fd, NULL, NULL, NULL);
     if (s->fd >= 0) {
         closesocket(s->fd);
     }
@@ -1563,7 +1561,7 @@ static void sd_close(BlockDriverState *bs)
         error_report("%s, %s", sd_strerror(rsp->result), s->name);
     }
 
-    qemu_aio_set_fd_handler(s->fd, NULL, NULL, NULL, NULL);
+    qemu_aio_set_fd_handler(s->fd, NULL, NULL, NULL);
     closesocket(s->fd);
     g_free(s->host_spec);
 }
diff --git a/block/ssh.c b/block/ssh.c
index ed525cc..b78253f 100644
--- a/block/ssh.c
+++ b/block/ssh.c
@@ -743,13 +743,13 @@ static coroutine_fn void set_fd_handler(BDRVSSHState *s)
     DPRINTF("s->sock=%d rd_handler=%p wr_handler=%p", s->sock,
             rd_handler, wr_handler);
 
-    qemu_aio_set_fd_handler(s->sock, rd_handler, wr_handler, NULL, co);
+    qemu_aio_set_fd_handler(s->sock, rd_handler, wr_handler, co);
 }
 
 static coroutine_fn void clear_fd_handler(BDRVSSHState *s)
 {
     DPRINTF("s->sock=%d", s->sock);
-    qemu_aio_set_fd_handler(s->sock, NULL, NULL, NULL, NULL);
+    qemu_aio_set_fd_handler(s->sock, NULL, NULL, NULL);
 }
 
 /* A non-blocking call returned EAGAIN, so yield, ensuring the
diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index 9e6d32b..bd1b51a 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -473,7 +473,7 @@ void virtio_blk_data_plane_start(VirtIOBlockDataPlane *s)
         exit(1);
     }
     s->host_notifier = *virtio_queue_get_host_notifier(vq);
-    aio_set_event_notifier(s->ctx, &s->host_notifier, handle_notify, NULL);
+    aio_set_event_notifier(s->ctx, &s->host_notifier, handle_notify);
 
     /* Set up ioqueue */
     ioq_init(&s->ioqueue, s->fd, REQ_MAX);
@@ -481,7 +481,7 @@ void virtio_blk_data_plane_start(VirtIOBlockDataPlane *s)
         ioq_put_iocb(&s->ioqueue, &s->requests[i].iocb);
     }
     s->io_notifier = *ioq_get_notifier(&s->ioqueue);
-    aio_set_event_notifier(s->ctx, &s->io_notifier, handle_io, NULL);
+    aio_set_event_notifier(s->ctx, &s->io_notifier, handle_io);
 
     s->started = true;
     trace_virtio_blk_data_plane_start(s);
@@ -513,10 +513,10 @@ void virtio_blk_data_plane_stop(VirtIOBlockDataPlane *s)
         qemu_thread_join(&s->thread);
     }
 
-    aio_set_event_notifier(s->ctx, &s->io_notifier, NULL, NULL);
+    aio_set_event_notifier(s->ctx, &s->io_notifier, NULL);
     ioq_cleanup(&s->ioqueue);
 
-    aio_set_event_notifier(s->ctx, &s->host_notifier, NULL, NULL);
+    aio_set_event_notifier(s->ctx, &s->host_notifier, NULL);
     k->set_host_notifier(qbus->parent, 0, false);
 
     aio_context_unref(s->ctx);
diff --git a/include/block/aio.h b/include/block/aio.h
index 1836793..e17066b 100644
--- a/include/block/aio.h
+++ b/include/block/aio.h
@@ -71,9 +71,6 @@ typedef struct AioContext {
     struct ThreadPool *thread_pool;
 } AioContext;
 
-/* Returns 1 if there are still outstanding AIO requests; 0 otherwise */
-typedef int (AioFlushEventNotifierHandler)(EventNotifier *e);
-
 /**
  * aio_context_new: Allocate a new AioContext.
  *
@@ -191,9 +188,6 @@ bool aio_pending(AioContext *ctx);
 bool aio_poll(AioContext *ctx, bool blocking);
 
 #ifdef CONFIG_POSIX
-/* Returns 1 if there are still outstanding AIO requests; 0 otherwise */
-typedef int (AioFlushHandler)(void *opaque);
-
 /* Register a file descriptor and associated callbacks.  Behaves very similarly
  * to qemu_set_fd_handler2.  Unlike qemu_set_fd_handler2, these callbacks will
  * be invoked when using qemu_aio_wait().
@@ -205,7 +199,6 @@ void aio_set_fd_handler(AioContext *ctx,
                         int fd,
                         IOHandler *io_read,
                         IOHandler *io_write,
-                        AioFlushHandler *io_flush,
                         void *opaque);
 #endif
 
@@ -218,8 +211,7 @@ void aio_set_fd_handler(AioContext *ctx,
  */
 void aio_set_event_notifier(AioContext *ctx,
                             EventNotifier *notifier,
-                            EventNotifierHandler *io_read,
-                            AioFlushEventNotifierHandler *io_flush);
+                            EventNotifierHandler *io_read);
 
 /* Return a GSource that lets the main loop poll the file descriptors attached
  * to this AioContext.
@@ -233,14 +225,12 @@ struct ThreadPool *aio_get_thread_pool(AioContext *ctx);
 
 bool qemu_aio_wait(void);
 void qemu_aio_set_event_notifier(EventNotifier *notifier,
-                                 EventNotifierHandler *io_read,
-                                 AioFlushEventNotifierHandler *io_flush);
+                                 EventNotifierHandler *io_read);
 
 #ifdef CONFIG_POSIX
 void qemu_aio_set_fd_handler(int fd,
                              IOHandler *io_read,
                              IOHandler *io_write,
-                             AioFlushHandler *io_flush,
                              void *opaque);
 #endif
 
diff --git a/main-loop.c b/main-loop.c
index cf36645..2581939 100644
--- a/main-loop.c
+++ b/main-loop.c
@@ -488,17 +488,14 @@ bool qemu_aio_wait(void)
 void qemu_aio_set_fd_handler(int fd,
                              IOHandler *io_read,
                              IOHandler *io_write,
-                             AioFlushHandler *io_flush,
                              void *opaque)
 {
-    aio_set_fd_handler(qemu_aio_context, fd, io_read, io_write, io_flush,
-                       opaque);
+    aio_set_fd_handler(qemu_aio_context, fd, io_read, io_write, opaque);
 }
 #endif
 
 void qemu_aio_set_event_notifier(EventNotifier *notifier,
-                                 EventNotifierHandler *io_read,
-                                 AioFlushEventNotifierHandler *io_flush)
+                                 EventNotifierHandler *io_read)
 {
-    aio_set_event_notifier(qemu_aio_context, notifier, io_read, io_flush);
+    aio_set_event_notifier(qemu_aio_context, notifier, io_read);
 }
diff --git a/tests/test-aio.c b/tests/test-aio.c
index 7b2892a..1ab5637 100644
--- a/tests/test-aio.c
+++ b/tests/test-aio.c
@@ -233,11 +233,11 @@ static void test_set_event_notifier(void)
 {
     EventNotifierTestData data = { .n = 0, .active = 0 };
     event_notifier_init(&data.e, false);
-    aio_set_event_notifier(ctx, &data.e, event_ready_cb, NULL);
+    aio_set_event_notifier(ctx, &data.e, event_ready_cb);
     g_assert(!aio_poll(ctx, false));
     g_assert_cmpint(data.n, ==, 0);
 
-    aio_set_event_notifier(ctx, &data.e, NULL, NULL);
+    aio_set_event_notifier(ctx, &data.e, NULL);
     g_assert(!aio_poll(ctx, false));
     g_assert_cmpint(data.n, ==, 0);
     event_notifier_cleanup(&data.e);
@@ -247,7 +247,7 @@ static void test_wait_event_notifier(void)
 {
     EventNotifierTestData data = { .n = 0, .active = 1 };
     event_notifier_init(&data.e, false);
-    aio_set_event_notifier(ctx, &data.e, event_ready_cb, NULL);
+    aio_set_event_notifier(ctx, &data.e, event_ready_cb);
     g_assert(!aio_poll(ctx, false));
     g_assert_cmpint(data.n, ==, 0);
     g_assert_cmpint(data.active, ==, 1);
@@ -261,7 +261,7 @@ static void test_wait_event_notifier(void)
     g_assert_cmpint(data.n, ==, 1);
     g_assert_cmpint(data.active, ==, 0);
 
-    aio_set_event_notifier(ctx, &data.e, NULL, NULL);
+    aio_set_event_notifier(ctx, &data.e, NULL);
     g_assert(!aio_poll(ctx, false));
     g_assert_cmpint(data.n, ==, 1);
 
@@ -272,7 +272,7 @@ static void test_flush_event_notifier(void)
 {
     EventNotifierTestData data = { .n = 0, .active = 10, .auto_set = true };
     event_notifier_init(&data.e, false);
-    aio_set_event_notifier(ctx, &data.e, event_ready_cb, NULL);
+    aio_set_event_notifier(ctx, &data.e, event_ready_cb);
     g_assert(!aio_poll(ctx, false));
     g_assert_cmpint(data.n, ==, 0);
     g_assert_cmpint(data.active, ==, 10);
@@ -288,7 +288,7 @@ static void test_flush_event_notifier(void)
     g_assert_cmpint(data.active, ==, 0);
     g_assert(!aio_poll(ctx, false));
 
-    aio_set_event_notifier(ctx, &data.e, NULL, NULL);
+    aio_set_event_notifier(ctx, &data.e, NULL);
     g_assert(!aio_poll(ctx, false));
     event_notifier_cleanup(&data.e);
 }
@@ -299,7 +299,7 @@ static void test_wait_event_notifier_noflush(void)
     EventNotifierTestData dummy = { .n = 0, .active = 1 };
 
     event_notifier_init(&data.e, false);
-    aio_set_event_notifier(ctx, &data.e, event_ready_cb, NULL);
+    aio_set_event_notifier(ctx, &data.e, event_ready_cb);
 
     g_assert(!aio_poll(ctx, false));
     g_assert_cmpint(data.n, ==, 0);
@@ -312,7 +312,7 @@ static void test_wait_event_notifier_noflush(void)
 
     /* An active event notifier forces aio_poll to look at EventNotifiers.  */
     event_notifier_init(&dummy.e, false);
-    aio_set_event_notifier(ctx, &dummy.e, event_ready_cb, NULL);
+    aio_set_event_notifier(ctx, &dummy.e, event_ready_cb);
 
     event_notifier_set(&data.e);
     g_assert(aio_poll(ctx, false));
@@ -332,10 +332,10 @@ static void test_wait_event_notifier_noflush(void)
     g_assert_cmpint(dummy.n, ==, 1);
     g_assert_cmpint(dummy.active, ==, 0);
 
-    aio_set_event_notifier(ctx, &dummy.e, NULL, NULL);
+    aio_set_event_notifier(ctx, &dummy.e, NULL);
     event_notifier_cleanup(&dummy.e);
 
-    aio_set_event_notifier(ctx, &data.e, NULL, NULL);
+    aio_set_event_notifier(ctx, &data.e, NULL);
     g_assert(!aio_poll(ctx, false));
     g_assert_cmpint(data.n, ==, 2);
 
@@ -515,11 +515,11 @@ static void test_source_set_event_notifier(void)
 {
     EventNotifierTestData data = { .n = 0, .active = 0 };
     event_notifier_init(&data.e, false);
-    aio_set_event_notifier(ctx, &data.e, event_ready_cb, NULL);
+    aio_set_event_notifier(ctx, &data.e, event_ready_cb);
     while (g_main_context_iteration(NULL, false));
     g_assert_cmpint(data.n, ==, 0);
 
-    aio_set_event_notifier(ctx, &data.e, NULL, NULL);
+    aio_set_event_notifier(ctx, &data.e, NULL);
     while (g_main_context_iteration(NULL, false));
     g_assert_cmpint(data.n, ==, 0);
     event_notifier_cleanup(&data.e);
@@ -529,7 +529,7 @@ static void test_source_wait_event_notifier(void)
 {
     EventNotifierTestData data = { .n = 0, .active = 1 };
     event_notifier_init(&data.e, false);
-    aio_set_event_notifier(ctx, &data.e, event_ready_cb, NULL);
+    aio_set_event_notifier(ctx, &data.e, event_ready_cb);
     g_assert(g_main_context_iteration(NULL, false));
     g_assert_cmpint(data.n, ==, 0);
     g_assert_cmpint(data.active, ==, 1);
@@ -543,7 +543,7 @@ static void test_source_wait_event_notifier(void)
     g_assert_cmpint(data.n, ==, 1);
     g_assert_cmpint(data.active, ==, 0);
 
-    aio_set_event_notifier(ctx, &data.e, NULL, NULL);
+    aio_set_event_notifier(ctx, &data.e, NULL);
     while (g_main_context_iteration(NULL, false));
     g_assert_cmpint(data.n, ==, 1);
 
@@ -554,7 +554,7 @@ static void test_source_flush_event_notifier(void)
 {
     EventNotifierTestData data = { .n = 0, .active = 10, .auto_set = true };
     event_notifier_init(&data.e, false);
-    aio_set_event_notifier(ctx, &data.e, event_ready_cb, NULL);
+    aio_set_event_notifier(ctx, &data.e, event_ready_cb);
     g_assert(g_main_context_iteration(NULL, false));
     g_assert_cmpint(data.n, ==, 0);
     g_assert_cmpint(data.active, ==, 10);
@@ -570,7 +570,7 @@ static void test_source_flush_event_notifier(void)
     g_assert_cmpint(data.active, ==, 0);
     g_assert(!g_main_context_iteration(NULL, false));
 
-    aio_set_event_notifier(ctx, &data.e, NULL, NULL);
+    aio_set_event_notifier(ctx, &data.e, NULL);
     while (g_main_context_iteration(NULL, false));
     event_notifier_cleanup(&data.e);
 }
@@ -581,7 +581,7 @@ static void test_source_wait_event_notifier_noflush(void)
     EventNotifierTestData dummy = { .n = 0, .active = 1 };
 
     event_notifier_init(&data.e, false);
-    aio_set_event_notifier(ctx, &data.e, event_ready_cb, NULL);
+    aio_set_event_notifier(ctx, &data.e, event_ready_cb);
 
     while (g_main_context_iteration(NULL, false));
     g_assert_cmpint(data.n, ==, 0);
@@ -594,7 +594,7 @@ static void test_source_wait_event_notifier_noflush(void)
 
     /* An active event notifier forces aio_poll to look at EventNotifiers.  */
     event_notifier_init(&dummy.e, false);
-    aio_set_event_notifier(ctx, &dummy.e, event_ready_cb, NULL);
+    aio_set_event_notifier(ctx, &dummy.e, event_ready_cb);
 
     event_notifier_set(&data.e);
     g_assert(g_main_context_iteration(NULL, false));
@@ -614,10 +614,10 @@ static void test_source_wait_event_notifier_noflush(void)
     g_assert_cmpint(dummy.n, ==, 1);
     g_assert_cmpint(dummy.active, ==, 0);
 
-    aio_set_event_notifier(ctx, &dummy.e, NULL, NULL);
+    aio_set_event_notifier(ctx, &dummy.e, NULL);
     event_notifier_cleanup(&dummy.e);
 
-    aio_set_event_notifier(ctx, &data.e, NULL, NULL);
+    aio_set_event_notifier(ctx, &data.e, NULL);
     while (g_main_context_iteration(NULL, false));
     g_assert_cmpint(data.n, ==, 2);
 
diff --git a/thread-pool.c b/thread-pool.c
index 096f007..5025567 100644
--- a/thread-pool.c
+++ b/thread-pool.c
@@ -303,8 +303,7 @@ static void thread_pool_init_one(ThreadPool *pool, AioContext *ctx)
     QLIST_INIT(&pool->head);
     QTAILQ_INIT(&pool->request_list);
 
-    aio_set_event_notifier(ctx, &pool->notifier, event_notifier_ready,
-                           NULL);
+    aio_set_event_notifier(ctx, &pool->notifier, event_notifier_ready);
 }
 
 ThreadPool *thread_pool_new(AioContext *ctx)
@@ -338,7 +337,7 @@ void thread_pool_free(ThreadPool *pool)
 
     qemu_mutex_unlock(&pool->lock);
 
-    aio_set_event_notifier(pool->ctx, &pool->notifier, NULL, NULL);
+    aio_set_event_notifier(pool->ctx, &pool->notifier, NULL);
     qemu_sem_destroy(&pool->sem);
     qemu_cond_destroy(&pool->check_cancel);
     qemu_cond_destroy(&pool->worker_stopped);
-- 
1.8.1.4


* Re: [Qemu-devel] [PATCH v4 00/17] aio: drop io_flush()
  2013-06-14 11:43 [Qemu-devel] [PATCH v4 00/17] aio: drop io_flush() Stefan Hajnoczi
                   ` (16 preceding siblings ...)
  2013-06-14 11:44 ` [Qemu-devel] [PATCH v4 17/17] aio: drop io_flush argument Stefan Hajnoczi
@ 2013-06-27 12:23 ` Stefan Hajnoczi
  2013-06-27 13:22 ` Paolo Bonzini
  18 siblings, 0 replies; 29+ messages in thread
From: Stefan Hajnoczi @ 2013-06-27 12:23 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Kevin Wolf, Paolo Bonzini, Anthony Liguori, Ping Fan Liu, qemu-devel

On Fri, Jun 14, 2013 at 1:43 PM, Stefan Hajnoczi <stefanha@redhat.com> wrote:
> v4:
>  * Ensure pending BHs are processed in bdrv_drain_all() [bonzini]
>
> v3:
>  * I forgot about this series, time to push it again!
>  * Rebase onto qemu.git/master
>  * Drop now-unused AioFlushHandler typedef [bonzini]
>
> This series gets rid of aio's .io_flush() callback.  It's based on Paolo's
> insight that bdrv_drain_all() can be implemented using the bs->tracked_requests
> list.  io_flush() is redundant since the block layer already knows if requests
> are pending.
>
> The point of this effort is to simplify our event loop(s).  If we can reduce
> custom features like io_flush() it becomes easier to unify event loops and
> reuse glib or other options.
>
> This is also important to me for dataplane, since bdrv_drain_all() is one of
> the synchronization points between threads.  QEMU monitor commands invoke
> bdrv_drain_all() while the block device is accessed from a dataplane thread.
>
> Background on io_flush() semantics:
>
> The io_flush() handler must return 1 if this aio fd handler is active.  That
> is, requests are pending and we'd like to make progress by monitoring the fd.
>
> If io_flush() returns 0, the aio event loop skips monitoring this fd.  This is
> critical for block drivers like iscsi where we have an idle TCP socket which we
> want to block on *only* when there are pending requests.
>
> The series works as follows:
>
> Part 1 - stop relying on .io_flush()
>
> The first patches change aio_poll() callers to check their termination
> condition themselves instead of relying on .io_flush():
>
>   76f9848 block: stop relying on io_flush() in bdrv_drain_all()
>   42c7aac dataplane/virtio-blk: check exit conditions before aio_poll()
>   b7c8b9a tests: adjust test-aio to new aio_poll() semantics
>   55c7714 tests: adjust test-thread-pool to new aio_poll() semantics
>
> Part 2 - stop calling .io_flush() from aio_poll()
>
> The semantic change to aio_poll() is made here:
>
>   8f188ac aio: stop using .io_flush()
>
> Part 3 - drop io_flush() handlers, just pass NULL for the io_flush argument
>
> Remove the now dead code from all .io_flush() users:
>
>   5967680 block/curl: drop curl_aio_flush()
>   16ee264 block/gluster: drop qemu_gluster_aio_flush_cb()
>   9563d0f block/iscsi: drop iscsi_process_flush()
>   6690260 block/linux-aio: drop qemu_laio_completion_cb()
>   3c30643 block/nbd: drop nbd_have_request()
>   d47edee block/rbd: drop qemu_rbd_aio_flush_cb()
>   edbf4b5 block/sheepdog: drop have_co_req() and aio_flush_request()
>   3fee517 block/ssh: drop return_true()
>   d59650d dataplane/virtio-blk: drop flush_true() and flush_io()
>   0492713 thread-pool: drop thread_pool_active()
>   77b3518 tests: drop event_active_cb()
>
> Part 4 - remove io_flush arguments from aio functions
>
> The big, mechanical patch that drops the io_flush argument:
>
>   5938cf4 aio: drop io_flush argument
>
> Note that I split Part 3 from Part 4 so it's easy to review individual block
> drivers.
>
> Stefan Hajnoczi (17):
>   block: stop relying on io_flush() in bdrv_drain_all()
>   dataplane/virtio-blk: check exit conditions before aio_poll()
>   tests: adjust test-aio to new aio_poll() semantics
>   tests: adjust test-thread-pool to new aio_poll() semantics
>   aio: stop using .io_flush()
>   block/curl: drop curl_aio_flush()
>   block/gluster: drop qemu_gluster_aio_flush_cb()
>   block/iscsi: drop iscsi_process_flush()
>   block/linux-aio: drop qemu_laio_completion_cb()
>   block/nbd: drop nbd_have_request()
>   block/rbd: drop qemu_rbd_aio_flush_cb()
>   block/sheepdog: drop have_co_req() and aio_flush_request()
>   block/ssh: drop return_true()
>   dataplane/virtio-blk: drop flush_true() and flush_io()
>   thread-pool: drop thread_pool_active()
>   tests: drop event_active_cb()
>   aio: drop io_flush argument
>
>  aio-posix.c                     | 36 ++++++------------
>  aio-win32.c                     | 37 ++++++++-----------
>  async.c                         |  4 +-
>  block.c                         | 50 ++++++++++++++++++-------
>  block/curl.c                    | 25 ++-----------
>  block/gluster.c                 | 21 ++---------
>  block/iscsi.c                   | 10 +----
>  block/linux-aio.c               | 18 +--------
>  block/nbd.c                     | 18 ++-------
>  block/rbd.c                     | 16 +-------
>  block/sheepdog.c                | 33 ++++-------------
>  block/ssh.c                     | 12 +-----
>  hw/block/dataplane/virtio-blk.c | 25 +++----------
>  include/block/aio.h             | 14 +------
>  main-loop.c                     |  9 ++---
>  tests/test-aio.c                | 82 +++++++++++++++++++++--------------------
>  tests/test-thread-pool.c        | 24 ++++++------
>  thread-pool.c                   | 11 +-----
>  18 files changed, 158 insertions(+), 287 deletions(-)

Ping?


* Re: [Qemu-devel] [PATCH v4 01/17] block: stop relying on io_flush() in bdrv_drain_all()
  2013-06-14 11:43 ` [Qemu-devel] [PATCH v4 01/17] block: stop relying on io_flush() in bdrv_drain_all() Stefan Hajnoczi
@ 2013-06-27 13:13   ` Paolo Bonzini
  2013-06-27 13:29     ` Stefan Hajnoczi
  0 siblings, 1 reply; 29+ messages in thread
From: Paolo Bonzini @ 2013-06-27 13:13 UTC (permalink / raw)
  To: Stefan Hajnoczi; +Cc: Kevin Wolf, Anthony Liguori, Ping Fan Liu, qemu-devel

On 14/06/2013 13:43, Stefan Hajnoczi wrote:
> If a block driver has no file descriptors to monitor but there are still
> active requests, it can return 1 from .io_flush().  This is used to spin
> during synchronous I/O.
> 
> Stop relying on .io_flush() and instead check
> QLIST_EMPTY(&bs->tracked_requests) to decide whether there are active
> requests.
> 
> This is the first step in removing .io_flush() so that event loops no
> longer need to have the concept of synchronous I/O.  Eventually we may
> be able to kill synchronous I/O completely by running everything in a
> coroutine, but that is future work.
> 
> Note this patch moves bs->throttled_reqs initialization to bdrv_new() so
> that bdrv_requests_pending(bs) can safely access it.  In practice bs is
> g_malloc0() so the memory is already zeroed but it's safer to initialize
> the queue properly.
> 
> In bdrv_delete() make sure to call bdrv_make_anon() *after* bdrv_close()
> so that the device is still seen by bdrv_drain_all() when iterating
> bdrv_states.

I wonder if this last change should be separated out and CCed to
qemu-stable.  It seems like a bug if you close a device that has pending
throttled operations.

Paolo

> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>  block.c | 50 +++++++++++++++++++++++++++++++++++++-------------
>  1 file changed, 37 insertions(+), 13 deletions(-)
> 
> diff --git a/block.c b/block.c
> index 79ad33d..04821d8 100644
> --- a/block.c
> +++ b/block.c
> @@ -148,7 +148,6 @@ static void bdrv_block_timer(void *opaque)
>  
>  void bdrv_io_limits_enable(BlockDriverState *bs)
>  {
> -    qemu_co_queue_init(&bs->throttled_reqs);
>      bs->block_timer = qemu_new_timer_ns(vm_clock, bdrv_block_timer, bs);
>      bs->io_limits_enabled = true;
>  }
> @@ -305,6 +304,7 @@ BlockDriverState *bdrv_new(const char *device_name)
>      }
>      bdrv_iostatus_disable(bs);
>      notifier_list_init(&bs->close_notifiers);
> +    qemu_co_queue_init(&bs->throttled_reqs);
>  
>      return bs;
>  }
> @@ -1412,6 +1412,35 @@ void bdrv_close_all(void)
>      }
>  }
>  
> +/* Check if any requests are in-flight (including throttled requests) */
> +static bool bdrv_requests_pending(BlockDriverState *bs)
> +{
> +    if (!QLIST_EMPTY(&bs->tracked_requests)) {
> +        return true;
> +    }
> +    if (!qemu_co_queue_empty(&bs->throttled_reqs)) {
> +        return true;
> +    }
> +    if (bs->file && bdrv_requests_pending(bs->file)) {
> +        return true;
> +    }
> +    if (bs->backing_hd && bdrv_requests_pending(bs->backing_hd)) {
> +        return true;
> +    }
> +    return false;
> +}
> +
> +static bool bdrv_requests_pending_all(void)
> +{
> +    BlockDriverState *bs;
> +    QTAILQ_FOREACH(bs, &bdrv_states, list) {
> +        if (bdrv_requests_pending(bs)) {
> +            return true;
> +        }
> +    }
> +    return false;
> +}
> +
>  /*
>   * Wait for pending requests to complete across all BlockDriverStates
>   *
> @@ -1426,27 +1455,22 @@ void bdrv_close_all(void)
>   */
>  void bdrv_drain_all(void)
>  {
> +    /* Always run first iteration so any pending completion BHs run */
> +    bool busy = true;
>      BlockDriverState *bs;
> -    bool busy;
> -
> -    do {
> -        busy = qemu_aio_wait();
>  
> +    while (busy) {
>          /* FIXME: We do not have timer support here, so this is effectively
>           * a busy wait.
>           */
>          QTAILQ_FOREACH(bs, &bdrv_states, list) {
>              if (!qemu_co_queue_empty(&bs->throttled_reqs)) {
>                  qemu_co_queue_restart_all(&bs->throttled_reqs);
> -                busy = true;
>              }
>          }
> -    } while (busy);
>  
> -    /* If requests are still pending there is a bug somewhere */
> -    QTAILQ_FOREACH(bs, &bdrv_states, list) {
> -        assert(QLIST_EMPTY(&bs->tracked_requests));
> -        assert(qemu_co_queue_empty(&bs->throttled_reqs));
> +        busy = bdrv_requests_pending_all();
> +        busy |= aio_poll(qemu_get_aio_context(), busy);
>      }
>  }
>  
> @@ -1591,11 +1615,11 @@ void bdrv_delete(BlockDriverState *bs)
>      assert(!bs->job);
>      assert(!bs->in_use);
>  
> +    bdrv_close(bs);
> +
>      /* remove from list, if necessary */
>      bdrv_make_anon(bs);
>  
> -    bdrv_close(bs);
> -
>      g_free(bs);
>  }
>  
> 


* Re: [Qemu-devel] [PATCH v4 02/17] dataplane/virtio-blk: check exit conditions before aio_poll()
  2013-06-14 11:43 ` [Qemu-devel] [PATCH v4 02/17] dataplane/virtio-blk: check exit conditions before aio_poll() Stefan Hajnoczi
@ 2013-06-27 13:14   ` Paolo Bonzini
  0 siblings, 0 replies; 29+ messages in thread
From: Paolo Bonzini @ 2013-06-27 13:14 UTC (permalink / raw)
  To: Stefan Hajnoczi; +Cc: Kevin Wolf, Anthony Liguori, Ping Fan Liu, qemu-devel

On 14/06/2013 13:43, Stefan Hajnoczi wrote:
> Check exit conditions before entering blocking aio_poll().  This is
> mainly for consistency since it's unlikely that we are stopping in the
> first event loop iteration.
> 
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>  hw/block/dataplane/virtio-blk.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
> index 0356665..0509d3f 100644
> --- a/hw/block/dataplane/virtio-blk.c
> +++ b/hw/block/dataplane/virtio-blk.c
> @@ -379,9 +379,9 @@ static void *data_plane_thread(void *opaque)
>  {
>      VirtIOBlockDataPlane *s = opaque;
>  
> -    do {
> +    while (!s->stopping || s->num_reqs > 0) {
>          aio_poll(s->ctx, true);
> -    } while (!s->stopping || s->num_reqs > 0);
> +    }
>      return NULL;
>  }
>  
> 

ACK

Paolo


* Re: [Qemu-devel] [PATCH v4 03/17] tests: adjust test-aio to new aio_poll() semantics
  2013-06-14 11:43 ` [Qemu-devel] [PATCH v4 03/17] tests: adjust test-aio to new aio_poll() semantics Stefan Hajnoczi
@ 2013-06-27 13:15   ` Paolo Bonzini
  2013-06-27 13:28   ` Paolo Bonzini
  1 sibling, 0 replies; 29+ messages in thread
From: Paolo Bonzini @ 2013-06-27 13:15 UTC (permalink / raw)
  To: Stefan Hajnoczi; +Cc: Kevin Wolf, Anthony Liguori, Ping Fan Liu, qemu-devel

On 14/06/2013 13:43, Stefan Hajnoczi wrote:
> aio_poll(ctx, true) will soon block if any fd handlers have been set.
> Previously it would only block when .io_flush() returned true.
> 
> This means that callers must check their wait condition *before*
> aio_poll() to avoid deadlock.
> 
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>  tests/test-aio.c | 26 +++++++++++++++++---------
>  1 file changed, 17 insertions(+), 9 deletions(-)
> 
> diff --git a/tests/test-aio.c b/tests/test-aio.c
> index c173870..20bf5e6 100644
> --- a/tests/test-aio.c
> +++ b/tests/test-aio.c
> @@ -15,6 +15,13 @@
>  
>  AioContext *ctx;
>  
> +typedef struct {
> +    EventNotifier e;
> +    int n;
> +    int active;
> +    bool auto_set;
> +} EventNotifierTestData;
> +
>  /* Wait until there are no more BHs or AIO requests */
>  static void wait_for_aio(void)
>  {
> @@ -23,6 +30,14 @@ static void wait_for_aio(void)
>      }
>  }
>  
> +/* Wait until event notifier becomes inactive */
> +static void wait_until_inactive(EventNotifierTestData *data)
> +{
> +    while (data->active > 0) {
> +        aio_poll(ctx, true);
> +    }
> +}
> +
>  /* Simple callbacks for testing.  */
>  
>  typedef struct {
> @@ -50,13 +65,6 @@ static void bh_delete_cb(void *opaque)
>      }
>  }
>  
> -typedef struct {
> -    EventNotifier e;
> -    int n;
> -    int active;
> -    bool auto_set;
> -} EventNotifierTestData;
> -
>  static int event_active_cb(EventNotifier *e)
>  {
>      EventNotifierTestData *data = container_of(e, EventNotifierTestData, e);
> @@ -281,7 +289,7 @@ static void test_flush_event_notifier(void)
>      g_assert_cmpint(data.active, ==, 9);
>      g_assert(aio_poll(ctx, false));
>  
> -    wait_for_aio();
> +    wait_until_inactive(&data);
>      g_assert_cmpint(data.n, ==, 10);
>      g_assert_cmpint(data.active, ==, 0);
>      g_assert(!aio_poll(ctx, false));
> @@ -325,7 +333,7 @@ static void test_wait_event_notifier_noflush(void)
>      g_assert_cmpint(data.n, ==, 2);
>  
>      event_notifier_set(&dummy.e);
> -    wait_for_aio();
> +    wait_until_inactive(&dummy);
>      g_assert_cmpint(data.n, ==, 2);
>      g_assert_cmpint(dummy.n, ==, 1);
>      g_assert_cmpint(dummy.active, ==, 0);
> 

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>


* Re: [Qemu-devel] [PATCH v4 04/17] tests: adjust test-thread-pool to new aio_poll() semantics
  2013-06-14 11:43 ` [Qemu-devel] [PATCH v4 04/17] tests: adjust test-thread-pool " Stefan Hajnoczi
@ 2013-06-27 13:16   ` Paolo Bonzini
  0 siblings, 0 replies; 29+ messages in thread
From: Paolo Bonzini @ 2013-06-27 13:16 UTC (permalink / raw)
  To: Stefan Hajnoczi; +Cc: Kevin Wolf, Anthony Liguori, Ping Fan Liu, qemu-devel

On 14/06/2013 13:43, Stefan Hajnoczi wrote:
>  
> -/* Wait until all aio and bh activity has finished */
> -static void qemu_aio_wait_all(void)
> -{
> -    while (aio_poll(ctx, true)) {
> -        /* Do nothing */
> -    }
> -}
> -
>  static void test_submit(void)
>  {
>      WorkerTestData data = { .n = 0 };
>      thread_pool_submit(pool, worker_cb, &data);
> -    qemu_aio_wait_all();
> +    while (data.n == 0) {
> +        aio_poll(ctx, true);
> +    }
>      g_assert_cmpint(data.n, ==, 1);

These will now loop forever if a broken thread_pool_submit() simply
returns without completing any work.  That is slightly worse for
debuggability, but thread-pool.c is not expected to see much churn.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>


* Re: [Qemu-devel] [PATCH v4 05/17] aio: stop using .io_flush()
  2013-06-14 11:43 ` [Qemu-devel] [PATCH v4 05/17] aio: stop using .io_flush() Stefan Hajnoczi
@ 2013-06-27 13:19   ` Paolo Bonzini
  0 siblings, 0 replies; 29+ messages in thread
From: Paolo Bonzini @ 2013-06-27 13:19 UTC (permalink / raw)
  To: Stefan Hajnoczi; +Cc: Kevin Wolf, Anthony Liguori, Ping Fan Liu, qemu-devel

On 14/06/2013 13:43, Stefan Hajnoczi wrote:
> Now that aio_poll() users check their termination condition themselves,
> it is no longer necessary to call .io_flush() handlers.
> 
> The behavior of aio_poll() changes as follows:
> 
> 1. .io_flush() is no longer invoked and file descriptors are *always*
> monitored.  Previously returning 0 from .io_flush() would skip this file
> descriptor.
> 
> Due to this change it is essential to check that requests are pending
> before calling qemu_aio_wait().  Failure to do so means we block, for
> example, waiting for an idle iSCSI socket to become readable when there
> are no requests.  Currently all qemu_aio_wait()/aio_poll() callers check
> before calling.
> 
> 2. aio_poll() now returns true if progress was made (BH or fd handlers
> executed) and false otherwise.  Previously it would return true whenever
> 'busy', which means that .io_flush() returned true.  The 'busy' concept
> no longer exists so just progress is returned.
> 
> Due to this change we need to update tests/test-aio.c which asserts
> aio_poll() return values.  Note that QEMU doesn't actually rely on these
> return values so only tests/test-aio.c cares.
> 
> Note that ctx->notifier, the EventNotifier fd used for aio_notify(), is
> now handled as a special case.  This is a little ugly but maintains
> aio_poll() semantics, i.e. aio_notify() does not count as 'progress' and
> aio_poll() avoids blocking when the user has not set any fd handlers yet.
> 
> Patches after this remove .io_flush() handler code until we can finally
> drop the io_flush arguments to aio_set_fd_handler() and friends.
> 
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>  aio-posix.c      | 29 +++++++++--------------------
>  aio-win32.c      | 34 ++++++++++++++--------------------
>  tests/test-aio.c | 10 +++++-----
>  3 files changed, 28 insertions(+), 45 deletions(-)
> 
> diff --git a/aio-posix.c b/aio-posix.c
> index b68eccd..7d66048 100644
> --- a/aio-posix.c
> +++ b/aio-posix.c
> @@ -23,7 +23,6 @@ struct AioHandler
>      GPollFD pfd;
>      IOHandler *io_read;
>      IOHandler *io_write;
> -    AioFlushHandler *io_flush;
>      int deleted;
>      int pollfds_idx;
>      void *opaque;
> @@ -84,7 +83,6 @@ void aio_set_fd_handler(AioContext *ctx,
>          /* Update handler with latest information */
>          node->io_read = io_read;
>          node->io_write = io_write;
> -        node->io_flush = io_flush;
>          node->opaque = opaque;
>          node->pollfds_idx = -1;
>  
> @@ -147,7 +145,11 @@ static bool aio_dispatch(AioContext *ctx)
>              (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR)) &&
>              node->io_read) {
>              node->io_read(node->opaque);
> -            progress = true;
> +
> +            /* aio_notify() does not count as progress */
> +            if (node->opaque != &ctx->notifier) {
> +                progress = true;
> +            }
>          }
>          if (!node->deleted &&
>              (revents & (G_IO_OUT | G_IO_ERR)) &&
> @@ -173,7 +175,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
>  {
>      AioHandler *node;
>      int ret;
> -    bool busy, progress;
> +    bool progress;
>  
>      progress = false;
>  
> @@ -200,20 +202,8 @@ bool aio_poll(AioContext *ctx, bool blocking)
>      g_array_set_size(ctx->pollfds, 0);
>  
>      /* fill pollfds */
> -    busy = false;
>      QLIST_FOREACH(node, &ctx->aio_handlers, node) {
>          node->pollfds_idx = -1;
> -
> -        /* If there aren't pending AIO operations, don't invoke callbacks.
> -         * Otherwise, if there are no AIO requests, qemu_aio_wait() would
> -         * wait indefinitely.
> -         */
> -        if (!node->deleted && node->io_flush) {
> -            if (node->io_flush(node->opaque) == 0) {
> -                continue;
> -            }
> -            busy = true;
> -        }
>          if (!node->deleted && node->pfd.events) {
>              GPollFD pfd = {
>                  .fd = node->pfd.fd,
> @@ -226,8 +216,8 @@ bool aio_poll(AioContext *ctx, bool blocking)
>  
>      ctx->walking_handlers--;
>  
> -    /* No AIO operations?  Get us out of here */
> -    if (!busy) {
> +    /* early return if we only have the aio_notify() fd */
> +    if (ctx->pollfds->len == 1) {
>          return progress;
>      }
>  
> @@ -250,6 +240,5 @@ bool aio_poll(AioContext *ctx, bool blocking)
>          }
>      }
>  
> -    assert(progress || busy);
> -    return true;
> +    return progress;
>  }
> diff --git a/aio-win32.c b/aio-win32.c
> index 38723bf..4309c16 100644
> --- a/aio-win32.c
> +++ b/aio-win32.c
> @@ -23,7 +23,6 @@
>  struct AioHandler {
>      EventNotifier *e;
>      EventNotifierHandler *io_notify;
> -    AioFlushEventNotifierHandler *io_flush;
>      GPollFD pfd;
>      int deleted;
>      QLIST_ENTRY(AioHandler) node;
> @@ -73,7 +72,6 @@ void aio_set_event_notifier(AioContext *ctx,
>          }
>          /* Update handler with latest information */
>          node->io_notify = io_notify;
> -        node->io_flush = io_flush;
>      }
>  
>      aio_notify(ctx);
> @@ -96,7 +94,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
>  {
>      AioHandler *node;
>      HANDLE events[MAXIMUM_WAIT_OBJECTS + 1];
> -    bool busy, progress;
> +    bool progress;
>      int count;
>  
>      progress = false;
> @@ -126,7 +124,11 @@ bool aio_poll(AioContext *ctx, bool blocking)
>          if (node->pfd.revents && node->io_notify) {
>              node->pfd.revents = 0;
>              node->io_notify(node->e);
> -            progress = true;
> +
> +            /* aio_notify() does not count as progress */
> +            if (node->e != &ctx->notifier) {
> +                progress = true;
> +            }
>          }
>  
>          tmp = node;
> @@ -147,19 +149,8 @@ bool aio_poll(AioContext *ctx, bool blocking)
>      ctx->walking_handlers++;
>  
>      /* fill fd sets */
> -    busy = false;
>      count = 0;
>      QLIST_FOREACH(node, &ctx->aio_handlers, node) {
> -        /* If there aren't pending AIO operations, don't invoke callbacks.
> -         * Otherwise, if there are no AIO requests, qemu_aio_wait() would
> -         * wait indefinitely.
> -         */
> -        if (!node->deleted && node->io_flush) {
> -            if (node->io_flush(node->e) == 0) {
> -                continue;
> -            }
> -            busy = true;
> -        }
>          if (!node->deleted && node->io_notify) {
>              events[count++] = event_notifier_get_handle(node->e);
>          }
> @@ -167,8 +158,8 @@ bool aio_poll(AioContext *ctx, bool blocking)
>  
>      ctx->walking_handlers--;
>  
> -    /* No AIO operations?  Get us out of here */
> -    if (!busy) {
> +    /* early return if we only have the aio_notify() fd */
> +    if (count == 1) {
>          return progress;
>      }
>  
> @@ -196,7 +187,11 @@ bool aio_poll(AioContext *ctx, bool blocking)
>                  event_notifier_get_handle(node->e) == events[ret - WAIT_OBJECT_0] &&
>                  node->io_notify) {
>                  node->io_notify(node->e);
> -                progress = true;
> +
> +                /* aio_notify() does not count as progress */
> +                if (node->e != &ctx->notifier) {
> +                    progress = true;
> +                }
>              }
>  
>              tmp = node;
> @@ -214,6 +209,5 @@ bool aio_poll(AioContext *ctx, bool blocking)
>          events[ret - WAIT_OBJECT_0] = events[--count];
>      }
>  
> -    assert(progress || busy);
> -    return true;
> +    return progress;
>  }
> diff --git a/tests/test-aio.c b/tests/test-aio.c
> index 20bf5e6..1251952 100644
> --- a/tests/test-aio.c
> +++ b/tests/test-aio.c
> @@ -254,7 +254,7 @@ static void test_wait_event_notifier(void)
>      EventNotifierTestData data = { .n = 0, .active = 1 };
>      event_notifier_init(&data.e, false);
>      aio_set_event_notifier(ctx, &data.e, event_ready_cb, event_active_cb);
> -    g_assert(aio_poll(ctx, false));
> +    g_assert(!aio_poll(ctx, false));
>      g_assert_cmpint(data.n, ==, 0);
>      g_assert_cmpint(data.active, ==, 1);
>  
> @@ -279,7 +279,7 @@ static void test_flush_event_notifier(void)
>      EventNotifierTestData data = { .n = 0, .active = 10, .auto_set = true };
>      event_notifier_init(&data.e, false);
>      aio_set_event_notifier(ctx, &data.e, event_ready_cb, event_active_cb);
> -    g_assert(aio_poll(ctx, false));
> +    g_assert(!aio_poll(ctx, false));
>      g_assert_cmpint(data.n, ==, 0);
>      g_assert_cmpint(data.active, ==, 10);
>  
> @@ -313,7 +313,7 @@ static void test_wait_event_notifier_noflush(void)
>      /* Until there is an active descriptor, aio_poll may or may not call
>       * event_ready_cb.  Still, it must not block.  */
>      event_notifier_set(&data.e);
> -    g_assert(!aio_poll(ctx, true));
> +    g_assert(aio_poll(ctx, true));
>      data.n = 0;
>  
>      /* An active event notifier forces aio_poll to look at EventNotifiers.  */
> @@ -323,13 +323,13 @@ static void test_wait_event_notifier_noflush(void)
>      event_notifier_set(&data.e);
>      g_assert(aio_poll(ctx, false));
>      g_assert_cmpint(data.n, ==, 1);
> -    g_assert(aio_poll(ctx, false));
> +    g_assert(!aio_poll(ctx, false));
>      g_assert_cmpint(data.n, ==, 1);
>  
>      event_notifier_set(&data.e);
>      g_assert(aio_poll(ctx, false));
>      g_assert_cmpint(data.n, ==, 2);
> -    g_assert(aio_poll(ctx, false));
> +    g_assert(!aio_poll(ctx, false));
>      g_assert_cmpint(data.n, ==, 2);
>  
>      event_notifier_set(&dummy.e);
> 

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>


* Re: [Qemu-devel] [PATCH v4 16/17] tests: drop event_active_cb()
  2013-06-14 11:44 ` [Qemu-devel] [PATCH v4 16/17] tests: drop event_active_cb() Stefan Hajnoczi
@ 2013-06-27 13:20   ` Paolo Bonzini
  0 siblings, 0 replies; 29+ messages in thread
From: Paolo Bonzini @ 2013-06-27 13:20 UTC (permalink / raw)
  To: Stefan Hajnoczi; +Cc: Kevin Wolf, Anthony Liguori, Ping Fan Liu, qemu-devel

On 14/06/2013 13:44, Stefan Hajnoczi wrote:
> The .io_flush() handler no longer exists and has no users.  Drop the
> io_flush argument to aio_set_fd_handler() and related functions.

Wrong commit message, but otherwise

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>


> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>  tests/test-aio.c | 22 ++++++++--------------
>  1 file changed, 8 insertions(+), 14 deletions(-)
> 
> diff --git a/tests/test-aio.c b/tests/test-aio.c
> index 1251952..7b2892a 100644
> --- a/tests/test-aio.c
> +++ b/tests/test-aio.c
> @@ -65,12 +65,6 @@ static void bh_delete_cb(void *opaque)
>      }
>  }
>  
> -static int event_active_cb(EventNotifier *e)
> -{
> -    EventNotifierTestData *data = container_of(e, EventNotifierTestData, e);
> -    return data->active > 0;
> -}
> -
>  static void event_ready_cb(EventNotifier *e)
>  {
>      EventNotifierTestData *data = container_of(e, EventNotifierTestData, e);
> @@ -239,7 +233,7 @@ static void test_set_event_notifier(void)
>  {
>      EventNotifierTestData data = { .n = 0, .active = 0 };
>      event_notifier_init(&data.e, false);
> -    aio_set_event_notifier(ctx, &data.e, event_ready_cb, event_active_cb);
> +    aio_set_event_notifier(ctx, &data.e, event_ready_cb, NULL);
>      g_assert(!aio_poll(ctx, false));
>      g_assert_cmpint(data.n, ==, 0);
>  
> @@ -253,7 +247,7 @@ static void test_wait_event_notifier(void)
>  {
>      EventNotifierTestData data = { .n = 0, .active = 1 };
>      event_notifier_init(&data.e, false);
> -    aio_set_event_notifier(ctx, &data.e, event_ready_cb, event_active_cb);
> +    aio_set_event_notifier(ctx, &data.e, event_ready_cb, NULL);
>      g_assert(!aio_poll(ctx, false));
>      g_assert_cmpint(data.n, ==, 0);
>      g_assert_cmpint(data.active, ==, 1);
> @@ -278,7 +272,7 @@ static void test_flush_event_notifier(void)
>  {
>      EventNotifierTestData data = { .n = 0, .active = 10, .auto_set = true };
>      event_notifier_init(&data.e, false);
> -    aio_set_event_notifier(ctx, &data.e, event_ready_cb, event_active_cb);
> +    aio_set_event_notifier(ctx, &data.e, event_ready_cb, NULL);
>      g_assert(!aio_poll(ctx, false));
>      g_assert_cmpint(data.n, ==, 0);
>      g_assert_cmpint(data.active, ==, 10);
> @@ -318,7 +312,7 @@ static void test_wait_event_notifier_noflush(void)
>  
>      /* An active event notifier forces aio_poll to look at EventNotifiers.  */
>      event_notifier_init(&dummy.e, false);
> -    aio_set_event_notifier(ctx, &dummy.e, event_ready_cb, event_active_cb);
> +    aio_set_event_notifier(ctx, &dummy.e, event_ready_cb, NULL);
>  
>      event_notifier_set(&data.e);
>      g_assert(aio_poll(ctx, false));
> @@ -521,7 +515,7 @@ static void test_source_set_event_notifier(void)
>  {
>      EventNotifierTestData data = { .n = 0, .active = 0 };
>      event_notifier_init(&data.e, false);
> -    aio_set_event_notifier(ctx, &data.e, event_ready_cb, event_active_cb);
> +    aio_set_event_notifier(ctx, &data.e, event_ready_cb, NULL);
>      while (g_main_context_iteration(NULL, false));
>      g_assert_cmpint(data.n, ==, 0);
>  
> @@ -535,7 +529,7 @@ static void test_source_wait_event_notifier(void)
>  {
>      EventNotifierTestData data = { .n = 0, .active = 1 };
>      event_notifier_init(&data.e, false);
> -    aio_set_event_notifier(ctx, &data.e, event_ready_cb, event_active_cb);
> +    aio_set_event_notifier(ctx, &data.e, event_ready_cb, NULL);
>      g_assert(g_main_context_iteration(NULL, false));
>      g_assert_cmpint(data.n, ==, 0);
>      g_assert_cmpint(data.active, ==, 1);
> @@ -560,7 +554,7 @@ static void test_source_flush_event_notifier(void)
>  {
>      EventNotifierTestData data = { .n = 0, .active = 10, .auto_set = true };
>      event_notifier_init(&data.e, false);
> -    aio_set_event_notifier(ctx, &data.e, event_ready_cb, event_active_cb);
> +    aio_set_event_notifier(ctx, &data.e, event_ready_cb, NULL);
>      g_assert(g_main_context_iteration(NULL, false));
>      g_assert_cmpint(data.n, ==, 0);
>      g_assert_cmpint(data.active, ==, 10);
> @@ -600,7 +594,7 @@ static void test_source_wait_event_notifier_noflush(void)
>  
>      /* An active event notifier forces aio_poll to look at EventNotifiers.  */
>      event_notifier_init(&dummy.e, false);
> -    aio_set_event_notifier(ctx, &dummy.e, event_ready_cb, event_active_cb);
> +    aio_set_event_notifier(ctx, &dummy.e, event_ready_cb, NULL);
>  
>      event_notifier_set(&data.e);
>      g_assert(g_main_context_iteration(NULL, false));
> 


* Re: [Qemu-devel] [PATCH v4 17/17] aio: drop io_flush argument
  2013-06-14 11:44 ` [Qemu-devel] [PATCH v4 17/17] aio: drop io_flush argument Stefan Hajnoczi
@ 2013-06-27 13:21   ` Paolo Bonzini
  0 siblings, 0 replies; 29+ messages in thread
From: Paolo Bonzini @ 2013-06-27 13:21 UTC (permalink / raw)
  To: Stefan Hajnoczi; +Cc: Kevin Wolf, Anthony Liguori, Ping Fan Liu, qemu-devel

On 14/06/2013 13:44, Stefan Hajnoczi wrote:
> The .io_flush() handler no longer exists and has no users.  Drop the
> io_flush argument to aio_set_fd_handler() and related functions.
> 
> The AioFlushEventNotifierHandler and AioFlushHandler typedefs are no
> longer used and are dropped too.
> 
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>  aio-posix.c                     |  7 ++-----
>  aio-win32.c                     |  3 +--
>  async.c                         |  4 ++--
>  block/curl.c                    |  9 ++++-----
>  block/gluster.c                 |  7 +++----
>  block/iscsi.c                   |  3 +--
>  block/linux-aio.c               |  3 +--
>  block/nbd.c                     | 11 ++++-------
>  block/rbd.c                     |  4 ++--
>  block/sheepdog.c                | 18 ++++++++----------
>  block/ssh.c                     |  4 ++--
>  hw/block/dataplane/virtio-blk.c |  8 ++++----
>  include/block/aio.h             | 14 ++------------
>  main-loop.c                     |  9 +++------
>  tests/test-aio.c                | 40 ++++++++++++++++++++--------------------
>  thread-pool.c                   |  5 ++---
>  16 files changed, 61 insertions(+), 88 deletions(-)
> 
> diff --git a/aio-posix.c b/aio-posix.c
> index 7d66048..2440eb9 100644
> --- a/aio-posix.c
> +++ b/aio-posix.c
> @@ -46,7 +46,6 @@ void aio_set_fd_handler(AioContext *ctx,
>                          int fd,
>                          IOHandler *io_read,
>                          IOHandler *io_write,
> -                        AioFlushHandler *io_flush,
>                          void *opaque)
>  {
>      AioHandler *node;
> @@ -95,12 +94,10 @@ void aio_set_fd_handler(AioContext *ctx,
>  
>  void aio_set_event_notifier(AioContext *ctx,
>                              EventNotifier *notifier,
> -                            EventNotifierHandler *io_read,
> -                            AioFlushEventNotifierHandler *io_flush)
> +                            EventNotifierHandler *io_read)
>  {
>      aio_set_fd_handler(ctx, event_notifier_get_fd(notifier),
> -                       (IOHandler *)io_read, NULL,
> -                       (AioFlushHandler *)io_flush, notifier);
> +                       (IOHandler *)io_read, NULL, notifier);
>  }
>  
>  bool aio_pending(AioContext *ctx)
> diff --git a/aio-win32.c b/aio-win32.c
> index 4309c16..78b2801 100644
> --- a/aio-win32.c
> +++ b/aio-win32.c
> @@ -30,8 +30,7 @@ struct AioHandler {
>  
>  void aio_set_event_notifier(AioContext *ctx,
>                              EventNotifier *e,
> -                            EventNotifierHandler *io_notify,
> -                            AioFlushEventNotifierHandler *io_flush)
> +                            EventNotifierHandler *io_notify)
>  {
>      AioHandler *node;
>  
> diff --git a/async.c b/async.c
> index 90fe906..fe2c8bf 100644
> --- a/async.c
> +++ b/async.c
> @@ -174,7 +174,7 @@ aio_ctx_finalize(GSource     *source)
>      AioContext *ctx = (AioContext *) source;
>  
>      thread_pool_free(ctx->thread_pool);
> -    aio_set_event_notifier(ctx, &ctx->notifier, NULL, NULL);
> +    aio_set_event_notifier(ctx, &ctx->notifier, NULL);
>      event_notifier_cleanup(&ctx->notifier);
>      g_array_free(ctx->pollfds, TRUE);
>  }
> @@ -214,7 +214,7 @@ AioContext *aio_context_new(void)
>      event_notifier_init(&ctx->notifier, false);
>      aio_set_event_notifier(ctx, &ctx->notifier, 
>                             (EventNotifierHandler *)
> -                           event_notifier_test_and_clear, NULL);
> +                           event_notifier_test_and_clear);
>  
>      return ctx;
>  }
> diff --git a/block/curl.c b/block/curl.c
> index 2147076..e88621a 100644
> --- a/block/curl.c
> +++ b/block/curl.c
> @@ -92,17 +92,16 @@ static int curl_sock_cb(CURL *curl, curl_socket_t fd, int action,
>      DPRINTF("CURL (AIO): Sock action %d on fd %d\n", action, fd);
>      switch (action) {
>          case CURL_POLL_IN:
> -            qemu_aio_set_fd_handler(fd, curl_multi_do, NULL, NULL, s);
> +            qemu_aio_set_fd_handler(fd, curl_multi_do, NULL, s);
>              break;
>          case CURL_POLL_OUT:
> -            qemu_aio_set_fd_handler(fd, NULL, curl_multi_do, NULL, s);
> +            qemu_aio_set_fd_handler(fd, NULL, curl_multi_do, s);
>              break;
>          case CURL_POLL_INOUT:
> -            qemu_aio_set_fd_handler(fd, curl_multi_do, curl_multi_do,
> -                                    NULL, s);
> +            qemu_aio_set_fd_handler(fd, curl_multi_do, curl_multi_do, s);
>              break;
>          case CURL_POLL_REMOVE:
> -            qemu_aio_set_fd_handler(fd, NULL, NULL, NULL, NULL);
> +            qemu_aio_set_fd_handler(fd, NULL, NULL, NULL);
>              break;
>      }
>  
> diff --git a/block/gluster.c b/block/gluster.c
> index 7a69a12..3cff308 100644
> --- a/block/gluster.c
> +++ b/block/gluster.c
> @@ -339,7 +339,7 @@ static int qemu_gluster_open(BlockDriverState *bs,  QDict *options,
>      }
>      fcntl(s->fds[GLUSTER_FD_READ], F_SETFL, O_NONBLOCK);
>      qemu_aio_set_fd_handler(s->fds[GLUSTER_FD_READ],
> -        qemu_gluster_aio_event_reader, NULL, NULL, s);
> +        qemu_gluster_aio_event_reader, NULL, s);
>  
>  out:
>      qemu_opts_del(opts);
> @@ -438,8 +438,7 @@ static void gluster_finish_aiocb(struct glfs_fd *fd, ssize_t ret, void *arg)
>          qemu_aio_release(acb);
>          close(s->fds[GLUSTER_FD_READ]);
>          close(s->fds[GLUSTER_FD_WRITE]);
> -        qemu_aio_set_fd_handler(s->fds[GLUSTER_FD_READ], NULL, NULL, NULL,
> -            NULL);
> +        qemu_aio_set_fd_handler(s->fds[GLUSTER_FD_READ], NULL, NULL, NULL);
>          bs->drv = NULL; /* Make the disk inaccessible */
>          qemu_mutex_unlock_iothread();
>      }
> @@ -551,7 +550,7 @@ static void qemu_gluster_close(BlockDriverState *bs)
>  
>      close(s->fds[GLUSTER_FD_READ]);
>      close(s->fds[GLUSTER_FD_WRITE]);
> -    qemu_aio_set_fd_handler(s->fds[GLUSTER_FD_READ], NULL, NULL, NULL, NULL);
> +    qemu_aio_set_fd_handler(s->fds[GLUSTER_FD_READ], NULL, NULL, NULL);
>  
>      if (s->fd) {
>          glfs_close(s->fd);
> diff --git a/block/iscsi.c b/block/iscsi.c
> index e2041ca..721c7d9 100644
> --- a/block/iscsi.c
> +++ b/block/iscsi.c
> @@ -160,7 +160,6 @@ iscsi_set_events(IscsiLun *iscsilun)
>          qemu_aio_set_fd_handler(iscsi_get_fd(iscsi),
>                        iscsi_process_read,
>                        (ev & POLLOUT) ? iscsi_process_write : NULL,
> -                      NULL,
>                        iscsilun);
>  
>      }
> @@ -1176,7 +1175,7 @@ static void iscsi_close(BlockDriverState *bs)
>          qemu_del_timer(iscsilun->nop_timer);
>          qemu_free_timer(iscsilun->nop_timer);
>      }
> -    qemu_aio_set_fd_handler(iscsi_get_fd(iscsi), NULL, NULL, NULL, NULL);
> +    qemu_aio_set_fd_handler(iscsi_get_fd(iscsi), NULL, NULL, NULL);
>      iscsi_destroy_context(iscsi);
>      memset(iscsilun, 0, sizeof(IscsiLun));
>  }
> diff --git a/block/linux-aio.c b/block/linux-aio.c
> index d9128f3..53434e2 100644
> --- a/block/linux-aio.c
> +++ b/block/linux-aio.c
> @@ -190,8 +190,7 @@ void *laio_init(void)
>          goto out_close_efd;
>      }
>  
> -    qemu_aio_set_event_notifier(&s->e, qemu_laio_completion_cb,
> -                                NULL);
> +    qemu_aio_set_event_notifier(&s->e, qemu_laio_completion_cb);
>  
>      return s;
>  
> diff --git a/block/nbd.c b/block/nbd.c
> index 80d2b31..f5350fb 100644
> --- a/block/nbd.c
> +++ b/block/nbd.c
> @@ -325,8 +325,7 @@ static int nbd_co_send_request(BDRVNBDState *s, struct nbd_request *request,
>  
>      qemu_co_mutex_lock(&s->send_mutex);
>      s->send_coroutine = qemu_coroutine_self();
> -    qemu_aio_set_fd_handler(s->sock, nbd_reply_ready, nbd_restart_write,
> -                            NULL, s);
> +    qemu_aio_set_fd_handler(s->sock, nbd_reply_ready, nbd_restart_write, s);
>      if (qiov) {
>          if (!s->is_unix) {
>              socket_set_cork(s->sock, 1);
> @@ -345,8 +344,7 @@ static int nbd_co_send_request(BDRVNBDState *s, struct nbd_request *request,
>      } else {
>          rc = nbd_send_request(s->sock, request);
>      }
> -    qemu_aio_set_fd_handler(s->sock, nbd_reply_ready, NULL,
> -                            NULL, s);
> +    qemu_aio_set_fd_handler(s->sock, nbd_reply_ready, NULL, s);
>      s->send_coroutine = NULL;
>      qemu_co_mutex_unlock(&s->send_mutex);
>      return rc;
> @@ -422,8 +420,7 @@ static int nbd_establish_connection(BlockDriverState *bs)
>      /* Now that we're connected, set the socket to be non-blocking and
>       * kick the reply mechanism.  */
>      qemu_set_nonblock(sock);
> -    qemu_aio_set_fd_handler(sock, nbd_reply_ready, NULL,
> -                            NULL, s);
> +    qemu_aio_set_fd_handler(sock, nbd_reply_ready, NULL, s);
>  
>      s->sock = sock;
>      s->size = size;
> @@ -443,7 +440,7 @@ static void nbd_teardown_connection(BlockDriverState *bs)
>      request.len = 0;
>      nbd_send_request(s->sock, &request);
>  
> -    qemu_aio_set_fd_handler(s->sock, NULL, NULL, NULL, NULL);
> +    qemu_aio_set_fd_handler(s->sock, NULL, NULL, NULL);
>      closesocket(s->sock);
>  }
>  
> diff --git a/block/rbd.c b/block/rbd.c
> index 40e5d55..78b8564 100644
> --- a/block/rbd.c
> +++ b/block/rbd.c
> @@ -545,7 +545,7 @@ static int qemu_rbd_open(BlockDriverState *bs, QDict *options, int flags)
>      fcntl(s->fds[0], F_SETFL, O_NONBLOCK);
>      fcntl(s->fds[1], F_SETFL, O_NONBLOCK);
>      qemu_aio_set_fd_handler(s->fds[RBD_FD_READ], qemu_rbd_aio_event_reader,
> -                            NULL, NULL, s);
> +                            NULL, s);
>  
>  
>      qemu_opts_del(opts);
> @@ -569,7 +569,7 @@ static void qemu_rbd_close(BlockDriverState *bs)
>  
>      close(s->fds[0]);
>      close(s->fds[1]);
> -    qemu_aio_set_fd_handler(s->fds[RBD_FD_READ], NULL, NULL, NULL, NULL);
> +    qemu_aio_set_fd_handler(s->fds[RBD_FD_READ], NULL, NULL, NULL);
>  
>      rbd_close(s->image);
>      rados_ioctx_destroy(s->io_ctx);
> diff --git a/block/sheepdog.c b/block/sheepdog.c
> index 66918c6..be2a876 100644
> --- a/block/sheepdog.c
> +++ b/block/sheepdog.c
> @@ -531,14 +531,14 @@ static coroutine_fn void do_co_req(void *opaque)
>      unsigned int *rlen = srco->rlen;
>  
>      co = qemu_coroutine_self();
> -    qemu_aio_set_fd_handler(sockfd, NULL, restart_co_req, NULL, co);
> +    qemu_aio_set_fd_handler(sockfd, NULL, restart_co_req, co);
>  
>      ret = send_co_req(sockfd, hdr, data, wlen);
>      if (ret < 0) {
>          goto out;
>      }
>  
> -    qemu_aio_set_fd_handler(sockfd, restart_co_req, NULL, NULL, co);
> +    qemu_aio_set_fd_handler(sockfd, restart_co_req, NULL, co);
>  
>      ret = qemu_co_recv(sockfd, hdr, sizeof(*hdr));
>      if (ret < sizeof(*hdr)) {
> @@ -563,7 +563,7 @@ static coroutine_fn void do_co_req(void *opaque)
>  out:
>      /* there is at most one request for this sockfd, so it is safe to
>       * set each handler to NULL. */
> -    qemu_aio_set_fd_handler(sockfd, NULL, NULL, NULL, NULL);
> +    qemu_aio_set_fd_handler(sockfd, NULL, NULL, NULL);
>  
>      srco->ret = ret;
>      srco->finished = true;
> @@ -804,7 +804,7 @@ static int get_sheep_fd(BDRVSheepdogState *s)
>          return fd;
>      }
>  
> -    qemu_aio_set_fd_handler(fd, co_read_response, NULL, NULL, s);
> +    qemu_aio_set_fd_handler(fd, co_read_response, NULL, s);
>      return fd;
>  }
>  
> @@ -1054,8 +1054,7 @@ static int coroutine_fn add_aio_request(BDRVSheepdogState *s, AIOReq *aio_req,
>  
>      qemu_co_mutex_lock(&s->lock);
>      s->co_send = qemu_coroutine_self();
> -    qemu_aio_set_fd_handler(s->fd, co_read_response, co_write_request,
> -                            NULL, s);
> +    qemu_aio_set_fd_handler(s->fd, co_read_response, co_write_request, s);
>      socket_set_cork(s->fd, 1);
>  
>      /* send a header */
> @@ -1076,8 +1075,7 @@ static int coroutine_fn add_aio_request(BDRVSheepdogState *s, AIOReq *aio_req,
>      }
>  
>      socket_set_cork(s->fd, 0);
> -    qemu_aio_set_fd_handler(s->fd, co_read_response, NULL,
> -                            NULL, s);
> +    qemu_aio_set_fd_handler(s->fd, co_read_response, NULL, s);
>      qemu_co_mutex_unlock(&s->lock);
>  
>      return 0;
> @@ -1335,7 +1333,7 @@ static int sd_open(BlockDriverState *bs, QDict *options, int flags)
>      g_free(buf);
>      return 0;
>  out:
> -    qemu_aio_set_fd_handler(s->fd, NULL, NULL, NULL, NULL);
> +    qemu_aio_set_fd_handler(s->fd, NULL, NULL, NULL);
>      if (s->fd >= 0) {
>          closesocket(s->fd);
>      }
> @@ -1563,7 +1561,7 @@ static void sd_close(BlockDriverState *bs)
>          error_report("%s, %s", sd_strerror(rsp->result), s->name);
>      }
>  
> -    qemu_aio_set_fd_handler(s->fd, NULL, NULL, NULL, NULL);
> +    qemu_aio_set_fd_handler(s->fd, NULL, NULL, NULL);
>      closesocket(s->fd);
>      g_free(s->host_spec);
>  }
> diff --git a/block/ssh.c b/block/ssh.c
> index ed525cc..b78253f 100644
> --- a/block/ssh.c
> +++ b/block/ssh.c
> @@ -743,13 +743,13 @@ static coroutine_fn void set_fd_handler(BDRVSSHState *s)
>      DPRINTF("s->sock=%d rd_handler=%p wr_handler=%p", s->sock,
>              rd_handler, wr_handler);
>  
> -    qemu_aio_set_fd_handler(s->sock, rd_handler, wr_handler, NULL, co);
> +    qemu_aio_set_fd_handler(s->sock, rd_handler, wr_handler, co);
>  }
>  
>  static coroutine_fn void clear_fd_handler(BDRVSSHState *s)
>  {
>      DPRINTF("s->sock=%d", s->sock);
> -    qemu_aio_set_fd_handler(s->sock, NULL, NULL, NULL, NULL);
> +    qemu_aio_set_fd_handler(s->sock, NULL, NULL, NULL);
>  }
>  
>  /* A non-blocking call returned EAGAIN, so yield, ensuring the
> diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
> index 9e6d32b..bd1b51a 100644
> --- a/hw/block/dataplane/virtio-blk.c
> +++ b/hw/block/dataplane/virtio-blk.c
> @@ -473,7 +473,7 @@ void virtio_blk_data_plane_start(VirtIOBlockDataPlane *s)
>          exit(1);
>      }
>      s->host_notifier = *virtio_queue_get_host_notifier(vq);
> -    aio_set_event_notifier(s->ctx, &s->host_notifier, handle_notify, NULL);
> +    aio_set_event_notifier(s->ctx, &s->host_notifier, handle_notify);
>  
>      /* Set up ioqueue */
>      ioq_init(&s->ioqueue, s->fd, REQ_MAX);
> @@ -481,7 +481,7 @@ void virtio_blk_data_plane_start(VirtIOBlockDataPlane *s)
>          ioq_put_iocb(&s->ioqueue, &s->requests[i].iocb);
>      }
>      s->io_notifier = *ioq_get_notifier(&s->ioqueue);
> -    aio_set_event_notifier(s->ctx, &s->io_notifier, handle_io, NULL);
> +    aio_set_event_notifier(s->ctx, &s->io_notifier, handle_io);
>  
>      s->started = true;
>      trace_virtio_blk_data_plane_start(s);
> @@ -513,10 +513,10 @@ void virtio_blk_data_plane_stop(VirtIOBlockDataPlane *s)
>          qemu_thread_join(&s->thread);
>      }
>  
> -    aio_set_event_notifier(s->ctx, &s->io_notifier, NULL, NULL);
> +    aio_set_event_notifier(s->ctx, &s->io_notifier, NULL);
>      ioq_cleanup(&s->ioqueue);
>  
> -    aio_set_event_notifier(s->ctx, &s->host_notifier, NULL, NULL);
> +    aio_set_event_notifier(s->ctx, &s->host_notifier, NULL);
>      k->set_host_notifier(qbus->parent, 0, false);
>  
>      aio_context_unref(s->ctx);
> diff --git a/include/block/aio.h b/include/block/aio.h
> index 1836793..e17066b 100644
> --- a/include/block/aio.h
> +++ b/include/block/aio.h
> @@ -71,9 +71,6 @@ typedef struct AioContext {
>      struct ThreadPool *thread_pool;
>  } AioContext;
>  
> -/* Returns 1 if there are still outstanding AIO requests; 0 otherwise */
> -typedef int (AioFlushEventNotifierHandler)(EventNotifier *e);
> -
>  /**
>   * aio_context_new: Allocate a new AioContext.
>   *
> @@ -191,9 +188,6 @@ bool aio_pending(AioContext *ctx);
>  bool aio_poll(AioContext *ctx, bool blocking);
>  
>  #ifdef CONFIG_POSIX
> -/* Returns 1 if there are still outstanding AIO requests; 0 otherwise */
> -typedef int (AioFlushHandler)(void *opaque);
> -
>  /* Register a file descriptor and associated callbacks.  Behaves very similarly
>   * to qemu_set_fd_handler2.  Unlike qemu_set_fd_handler2, these callbacks will
>   * be invoked when using qemu_aio_wait().
> @@ -205,7 +199,6 @@ void aio_set_fd_handler(AioContext *ctx,
>                          int fd,
>                          IOHandler *io_read,
>                          IOHandler *io_write,
> -                        AioFlushHandler *io_flush,
>                          void *opaque);
>  #endif
>  
> @@ -218,8 +211,7 @@ void aio_set_fd_handler(AioContext *ctx,
>   */
>  void aio_set_event_notifier(AioContext *ctx,
>                              EventNotifier *notifier,
> -                            EventNotifierHandler *io_read,
> -                            AioFlushEventNotifierHandler *io_flush);
> +                            EventNotifierHandler *io_read);
>  
>  /* Return a GSource that lets the main loop poll the file descriptors attached
>   * to this AioContext.
> @@ -233,14 +225,12 @@ struct ThreadPool *aio_get_thread_pool(AioContext *ctx);
>  
>  bool qemu_aio_wait(void);
>  void qemu_aio_set_event_notifier(EventNotifier *notifier,
> -                                 EventNotifierHandler *io_read,
> -                                 AioFlushEventNotifierHandler *io_flush);
> +                                 EventNotifierHandler *io_read);
>  
>  #ifdef CONFIG_POSIX
>  void qemu_aio_set_fd_handler(int fd,
>                               IOHandler *io_read,
>                               IOHandler *io_write,
> -                             AioFlushHandler *io_flush,
>                               void *opaque);
>  #endif
>  
> diff --git a/main-loop.c b/main-loop.c
> index cf36645..2581939 100644
> --- a/main-loop.c
> +++ b/main-loop.c
> @@ -488,17 +488,14 @@ bool qemu_aio_wait(void)
>  void qemu_aio_set_fd_handler(int fd,
>                               IOHandler *io_read,
>                               IOHandler *io_write,
> -                             AioFlushHandler *io_flush,
>                               void *opaque)
>  {
> -    aio_set_fd_handler(qemu_aio_context, fd, io_read, io_write, io_flush,
> -                       opaque);
> +    aio_set_fd_handler(qemu_aio_context, fd, io_read, io_write, opaque);
>  }
>  #endif
>  
>  void qemu_aio_set_event_notifier(EventNotifier *notifier,
> -                                 EventNotifierHandler *io_read,
> -                                 AioFlushEventNotifierHandler *io_flush)
> +                                 EventNotifierHandler *io_read)
>  {
> -    aio_set_event_notifier(qemu_aio_context, notifier, io_read, io_flush);
> +    aio_set_event_notifier(qemu_aio_context, notifier, io_read);
>  }
> diff --git a/tests/test-aio.c b/tests/test-aio.c
> index 7b2892a..1ab5637 100644
> --- a/tests/test-aio.c
> +++ b/tests/test-aio.c
> @@ -233,11 +233,11 @@ static void test_set_event_notifier(void)
>  {
>      EventNotifierTestData data = { .n = 0, .active = 0 };
>      event_notifier_init(&data.e, false);
> -    aio_set_event_notifier(ctx, &data.e, event_ready_cb, NULL);
> +    aio_set_event_notifier(ctx, &data.e, event_ready_cb);
>      g_assert(!aio_poll(ctx, false));
>      g_assert_cmpint(data.n, ==, 0);
>  
> -    aio_set_event_notifier(ctx, &data.e, NULL, NULL);
> +    aio_set_event_notifier(ctx, &data.e, NULL);
>      g_assert(!aio_poll(ctx, false));
>      g_assert_cmpint(data.n, ==, 0);
>      event_notifier_cleanup(&data.e);
> @@ -247,7 +247,7 @@ static void test_wait_event_notifier(void)
>  {
>      EventNotifierTestData data = { .n = 0, .active = 1 };
>      event_notifier_init(&data.e, false);
> -    aio_set_event_notifier(ctx, &data.e, event_ready_cb, NULL);
> +    aio_set_event_notifier(ctx, &data.e, event_ready_cb);
>      g_assert(!aio_poll(ctx, false));
>      g_assert_cmpint(data.n, ==, 0);
>      g_assert_cmpint(data.active, ==, 1);
> @@ -261,7 +261,7 @@ static void test_wait_event_notifier(void)
>      g_assert_cmpint(data.n, ==, 1);
>      g_assert_cmpint(data.active, ==, 0);
>  
> -    aio_set_event_notifier(ctx, &data.e, NULL, NULL);
> +    aio_set_event_notifier(ctx, &data.e, NULL);
>      g_assert(!aio_poll(ctx, false));
>      g_assert_cmpint(data.n, ==, 1);
>  
> @@ -272,7 +272,7 @@ static void test_flush_event_notifier(void)
>  {
>      EventNotifierTestData data = { .n = 0, .active = 10, .auto_set = true };
>      event_notifier_init(&data.e, false);
> -    aio_set_event_notifier(ctx, &data.e, event_ready_cb, NULL);
> +    aio_set_event_notifier(ctx, &data.e, event_ready_cb);
>      g_assert(!aio_poll(ctx, false));
>      g_assert_cmpint(data.n, ==, 0);
>      g_assert_cmpint(data.active, ==, 10);
> @@ -288,7 +288,7 @@ static void test_flush_event_notifier(void)
>      g_assert_cmpint(data.active, ==, 0);
>      g_assert(!aio_poll(ctx, false));
>  
> -    aio_set_event_notifier(ctx, &data.e, NULL, NULL);
> +    aio_set_event_notifier(ctx, &data.e, NULL);
>      g_assert(!aio_poll(ctx, false));
>      event_notifier_cleanup(&data.e);
>  }
> @@ -299,7 +299,7 @@ static void test_wait_event_notifier_noflush(void)
>      EventNotifierTestData dummy = { .n = 0, .active = 1 };
>  
>      event_notifier_init(&data.e, false);
> -    aio_set_event_notifier(ctx, &data.e, event_ready_cb, NULL);
> +    aio_set_event_notifier(ctx, &data.e, event_ready_cb);
>  
>      g_assert(!aio_poll(ctx, false));
>      g_assert_cmpint(data.n, ==, 0);
> @@ -312,7 +312,7 @@ static void test_wait_event_notifier_noflush(void)
>  
>      /* An active event notifier forces aio_poll to look at EventNotifiers.  */
>      event_notifier_init(&dummy.e, false);
> -    aio_set_event_notifier(ctx, &dummy.e, event_ready_cb, NULL);
> +    aio_set_event_notifier(ctx, &dummy.e, event_ready_cb);
>  
>      event_notifier_set(&data.e);
>      g_assert(aio_poll(ctx, false));
> @@ -332,10 +332,10 @@ static void test_wait_event_notifier_noflush(void)
>      g_assert_cmpint(dummy.n, ==, 1);
>      g_assert_cmpint(dummy.active, ==, 0);
>  
> -    aio_set_event_notifier(ctx, &dummy.e, NULL, NULL);
> +    aio_set_event_notifier(ctx, &dummy.e, NULL);
>      event_notifier_cleanup(&dummy.e);
>  
> -    aio_set_event_notifier(ctx, &data.e, NULL, NULL);
> +    aio_set_event_notifier(ctx, &data.e, NULL);
>      g_assert(!aio_poll(ctx, false));
>      g_assert_cmpint(data.n, ==, 2);
>  
> @@ -515,11 +515,11 @@ static void test_source_set_event_notifier(void)
>  {
>      EventNotifierTestData data = { .n = 0, .active = 0 };
>      event_notifier_init(&data.e, false);
> -    aio_set_event_notifier(ctx, &data.e, event_ready_cb, NULL);
> +    aio_set_event_notifier(ctx, &data.e, event_ready_cb);
>      while (g_main_context_iteration(NULL, false));
>      g_assert_cmpint(data.n, ==, 0);
>  
> -    aio_set_event_notifier(ctx, &data.e, NULL, NULL);
> +    aio_set_event_notifier(ctx, &data.e, NULL);
>      while (g_main_context_iteration(NULL, false));
>      g_assert_cmpint(data.n, ==, 0);
>      event_notifier_cleanup(&data.e);
> @@ -529,7 +529,7 @@ static void test_source_wait_event_notifier(void)
>  {
>      EventNotifierTestData data = { .n = 0, .active = 1 };
>      event_notifier_init(&data.e, false);
> -    aio_set_event_notifier(ctx, &data.e, event_ready_cb, NULL);
> +    aio_set_event_notifier(ctx, &data.e, event_ready_cb);
>      g_assert(g_main_context_iteration(NULL, false));
>      g_assert_cmpint(data.n, ==, 0);
>      g_assert_cmpint(data.active, ==, 1);
> @@ -543,7 +543,7 @@ static void test_source_wait_event_notifier(void)
>      g_assert_cmpint(data.n, ==, 1);
>      g_assert_cmpint(data.active, ==, 0);
>  
> -    aio_set_event_notifier(ctx, &data.e, NULL, NULL);
> +    aio_set_event_notifier(ctx, &data.e, NULL);
>      while (g_main_context_iteration(NULL, false));
>      g_assert_cmpint(data.n, ==, 1);
>  
> @@ -554,7 +554,7 @@ static void test_source_flush_event_notifier(void)
>  {
>      EventNotifierTestData data = { .n = 0, .active = 10, .auto_set = true };
>      event_notifier_init(&data.e, false);
> -    aio_set_event_notifier(ctx, &data.e, event_ready_cb, NULL);
> +    aio_set_event_notifier(ctx, &data.e, event_ready_cb);
>      g_assert(g_main_context_iteration(NULL, false));
>      g_assert_cmpint(data.n, ==, 0);
>      g_assert_cmpint(data.active, ==, 10);
> @@ -570,7 +570,7 @@ static void test_source_flush_event_notifier(void)
>      g_assert_cmpint(data.active, ==, 0);
>      g_assert(!g_main_context_iteration(NULL, false));
>  
> -    aio_set_event_notifier(ctx, &data.e, NULL, NULL);
> +    aio_set_event_notifier(ctx, &data.e, NULL);
>      while (g_main_context_iteration(NULL, false));
>      event_notifier_cleanup(&data.e);
>  }
> @@ -581,7 +581,7 @@ static void test_source_wait_event_notifier_noflush(void)
>      EventNotifierTestData dummy = { .n = 0, .active = 1 };
>  
>      event_notifier_init(&data.e, false);
> -    aio_set_event_notifier(ctx, &data.e, event_ready_cb, NULL);
> +    aio_set_event_notifier(ctx, &data.e, event_ready_cb);
>  
>      while (g_main_context_iteration(NULL, false));
>      g_assert_cmpint(data.n, ==, 0);
> @@ -594,7 +594,7 @@ static void test_source_wait_event_notifier_noflush(void)
>  
>      /* An active event notifier forces aio_poll to look at EventNotifiers.  */
>      event_notifier_init(&dummy.e, false);
> -    aio_set_event_notifier(ctx, &dummy.e, event_ready_cb, NULL);
> +    aio_set_event_notifier(ctx, &dummy.e, event_ready_cb);
>  
>      event_notifier_set(&data.e);
>      g_assert(g_main_context_iteration(NULL, false));
> @@ -614,10 +614,10 @@ static void test_source_wait_event_notifier_noflush(void)
>      g_assert_cmpint(dummy.n, ==, 1);
>      g_assert_cmpint(dummy.active, ==, 0);
>  
> -    aio_set_event_notifier(ctx, &dummy.e, NULL, NULL);
> +    aio_set_event_notifier(ctx, &dummy.e, NULL);
>      event_notifier_cleanup(&dummy.e);
>  
> -    aio_set_event_notifier(ctx, &data.e, NULL, NULL);
> +    aio_set_event_notifier(ctx, &data.e, NULL);
>      while (g_main_context_iteration(NULL, false));
>      g_assert_cmpint(data.n, ==, 2);
>  
> diff --git a/thread-pool.c b/thread-pool.c
> index 096f007..5025567 100644
> --- a/thread-pool.c
> +++ b/thread-pool.c
> @@ -303,8 +303,7 @@ static void thread_pool_init_one(ThreadPool *pool, AioContext *ctx)
>      QLIST_INIT(&pool->head);
>      QTAILQ_INIT(&pool->request_list);
>  
> -    aio_set_event_notifier(ctx, &pool->notifier, event_notifier_ready,
> -                           NULL);
> +    aio_set_event_notifier(ctx, &pool->notifier, event_notifier_ready);
>  }
>  
>  ThreadPool *thread_pool_new(AioContext *ctx)
> @@ -338,7 +337,7 @@ void thread_pool_free(ThreadPool *pool)
>  
>      qemu_mutex_unlock(&pool->lock);
>  
> -    aio_set_event_notifier(pool->ctx, &pool->notifier, NULL, NULL);
> +    aio_set_event_notifier(pool->ctx, &pool->notifier, NULL);
>      qemu_sem_destroy(&pool->sem);
>      qemu_cond_destroy(&pool->check_cancel);
>      qemu_cond_destroy(&pool->worker_stopped);
> 

Yay!

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>

Paolo
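
To make the new calling convention concrete, here is a minimal usage sketch
based on the post-series signatures visible in the aio.h and aio-posix.c hunks
above; the handler and helper names are invented for illustration and are not
taken from the series:

    #include "block/aio.h"

    /* Illustrative read handler; IOHandler now takes only the opaque pointer. */
    static void my_read_handler(void *opaque)
    {
        /* read from the fd stashed in opaque and complete pending requests */
    }

    static void my_attach(AioContext *ctx, int fd, void *my_state)
    {
        /* read handler only; no write handler, and no io_flush anymore */
        aio_set_fd_handler(ctx, fd, my_read_handler, NULL, my_state);
    }

    static void my_detach(AioContext *ctx, int fd)
    {
        /* all-NULL handlers unregister the fd from the AioContext */
        aio_set_fd_handler(ctx, fd, NULL, NULL, NULL);
    }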


* Re: [Qemu-devel] [PATCH v4 00/17] aio: drop io_flush()
  2013-06-14 11:43 [Qemu-devel] [PATCH v4 00/17] aio: drop io_flush() Stefan Hajnoczi
                   ` (17 preceding siblings ...)
  2013-06-27 12:23 ` [Qemu-devel] [PATCH v4 00/17] aio: drop io_flush() Stefan Hajnoczi
@ 2013-06-27 13:22 ` Paolo Bonzini
  18 siblings, 0 replies; 29+ messages in thread
From: Paolo Bonzini @ 2013-06-27 13:22 UTC (permalink / raw)
  To: Stefan Hajnoczi; +Cc: Kevin Wolf, Anthony Liguori, Ping Fan Liu, qemu-devel

On 14/06/2013 13:43, Stefan Hajnoczi wrote:
> v4:
>  * Ensure pending BHs are processed in bdrv_drain_all() [bonzini]
> 
> v3:
>  * I forgot about this series, time to push it again!
>  * Rebase onto qemu.git/master
>  * Drop now-unused AioFlushHandler typedef [bonzini]
> 
> This series gets rid of aio's .io_flush() callback.  It's based on Paolo's
> insight that bdrv_drain_all() can be implemented using the bs->tracked_requests
> list.  io_flush() is redundant since the block layer already knows if requests
> are pending.
> 
> The point of this effort is to simplify our event loop(s).  If we can reduce
> custom features like io_flush() it becomes easier to unify event loops and
> reuse glib or other options.
> 
> This is also important to me for dataplane, since bdrv_drain_all() is one of
> the synchronization points between threads.  QEMU monitor commands invoke
> bdrv_drain_all() while the block device is accessed from a dataplane thread.
> 
> Background on io_flush() semantics:
> 
> The io_flush() handler must return 1 if this aio fd handler is active.  That
> is, requests are pending and we'd like to make progress by monitoring the fd.
> 
> If io_flush() returns 0, the aio event loop skips monitoring this fd.  This is
> critical for block drivers like iscsi where we have an idle TCP socket which we
> want to block on *only* when there are pending requests.
> 
> The series works as follows:
> 
> Part 1 - stop relying on .io_flush()
> 
> The first patches change aio_poll() callers to check their termination
> condition themselves instead of relying on .io_flush():
> 
>   76f9848 block: stop relying on io_flush() in bdrv_drain_all()
>   42c7aac dataplane/virtio-blk: check exit conditions before aio_poll()
>   b7c8b9a tests: adjust test-aio to new aio_poll() semantics
>   55c7714 tests: adjust test-thread-pool to new aio_poll() semantics
> 
> Part 2 - stop calling .io_flush() from aio_poll()
> 
> The semantic change to aio_poll() is made here:
> 
>   8f188ac aio: stop using .io_flush()
> 
> Part 3 - drop io_flush() handlers, just pass NULL for the io_flush argument
> 
> Remove the now dead code from all .io_flush() users:
> 
>   5967680 block/curl: drop curl_aio_flush()
>   16ee264 block/gluster: drop qemu_gluster_aio_flush_cb()
>   9563d0f block/iscsi: drop iscsi_process_flush()
>   6690260 block/linux-aio: drop qemu_laio_completion_cb()
>   3c30643 block/nbd: drop nbd_have_request()
>   d47edee block/rbd: drop qemu_rbd_aio_flush_cb()
>   edbf4b5 block/sheepdog: drop have_co_req() and aio_flush_request()
>   3fee517 block/ssh: drop return_true()
>   d59650d dataplane/virtio-blk: drop flush_true() and flush_io()
>   0492713 thread-pool: drop thread_pool_active()
>   77b3518 tests: drop event_active_cb()
> 
> Part 4 - remove io_flush arguments from aio functions
> 
> The big, mechanical patch that drops the io_flush argument:
> 
>   5938cf4 aio: drop io_flush argument
> 
> Note that I split Part 3 from Part 4 so it's easy to review individual block
> drivers.
> 
> Stefan Hajnoczi (17):
>   block: stop relying on io_flush() in bdrv_drain_all()
>   dataplane/virtio-blk: check exit conditions before aio_poll()
>   tests: adjust test-aio to new aio_poll() semantics
>   tests: adjust test-thread-pool to new aio_poll() semantics
>   aio: stop using .io_flush()
>   block/curl: drop curl_aio_flush()
>   block/gluster: drop qemu_gluster_aio_flush_cb()
>   block/iscsi: drop iscsi_process_flush()
>   block/linux-aio: drop qemu_laio_completion_cb()
>   block/nbd: drop nbd_have_request()
>   block/rbd: drop qemu_rbd_aio_flush_cb()
>   block/sheepdog: drop have_co_req() and aio_flush_request()
>   block/ssh: drop return_true()
>   dataplane/virtio-blk: drop flush_true() and flush_io()
>   thread-pool: drop thread_pool_active()
>   tests: drop event_active_cb()
>   aio: drop io_flush argument
> 
>  aio-posix.c                     | 36 ++++++------------
>  aio-win32.c                     | 37 ++++++++-----------
>  async.c                         |  4 +-
>  block.c                         | 50 ++++++++++++++++++-------
>  block/curl.c                    | 25 ++-----------
>  block/gluster.c                 | 21 ++---------
>  block/iscsi.c                   | 10 +----
>  block/linux-aio.c               | 18 +--------
>  block/nbd.c                     | 18 ++-------
>  block/rbd.c                     | 16 +-------
>  block/sheepdog.c                | 33 ++++-------------
>  block/ssh.c                     | 12 +-----
>  hw/block/dataplane/virtio-blk.c | 25 +++----------
>  include/block/aio.h             | 14 +------
>  main-loop.c                     |  9 ++---
>  tests/test-aio.c                | 82 +++++++++++++++++++++--------------------
>  tests/test-thread-pool.c        | 24 ++++++------
>  thread-pool.c                   | 11 +-----
>  18 files changed, 158 insertions(+), 287 deletions(-)
> 

The series looks fine; just two nits: one patch to split out of the first,
and one commit message to adjust in patch 16.  Otherwise

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>

Paolo
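
As a reminder of the contract this series removes, here is a rough sketch of a
pre-series handler pair, following the old qemu_aio_set_fd_handler() signature
and the AioFlushHandler typedef shown in the patch 17 diff; the driver state
and function names are made up for illustration:

    #include "block/aio.h"

    typedef struct {
        int fd;
        int in_flight;              /* pending requests on this connection */
    } MyConnState;

    static void my_read_cb(void *opaque)
    {
        /* read replies and decrement in_flight as requests complete */
    }

    /* io_flush: tell the event loop whether this fd is worth monitoring */
    static int my_io_flush(void *opaque)
    {
        MyConnState *s = opaque;
        return s->in_flight > 0;    /* 1: requests pending, 0: skip this fd */
    }

    static void my_attach(MyConnState *s)
    {
        /* pre-series argument order: read, write, io_flush, opaque */
        qemu_aio_set_fd_handler(s->fd, my_read_cb, NULL, my_io_flush, s);
    }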


* Re: [Qemu-devel] [PATCH v4 03/17] tests: adjust test-aio to new aio_poll() semantics
  2013-06-14 11:43 ` [Qemu-devel] [PATCH v4 03/17] tests: adjust test-aio to new aio_poll() semantics Stefan Hajnoczi
  2013-06-27 13:15   ` Paolo Bonzini
@ 2013-06-27 13:28   ` Paolo Bonzini
  1 sibling, 0 replies; 29+ messages in thread
From: Paolo Bonzini @ 2013-06-27 13:28 UTC (permalink / raw)
  To: Stefan Hajnoczi; +Cc: Kevin Wolf, Anthony Liguori, Ping Fan Liu, qemu-devel

On 14/06/2013 13:43, Stefan Hajnoczi wrote:
> aio_poll(ctx, true) will soon block if any fd handlers have been set.
> Previously it would only block when .io_flush() returned true.
> 
> This means that callers must check their wait condition *before*
> aio_poll() to avoid deadlock.
> 
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>  tests/test-aio.c | 26 +++++++++++++++++---------
>  1 file changed, 17 insertions(+), 9 deletions(-)
> 
> diff --git a/tests/test-aio.c b/tests/test-aio.c
> index c173870..20bf5e6 100644
> --- a/tests/test-aio.c
> +++ b/tests/test-aio.c
> @@ -15,6 +15,13 @@
>  
>  AioContext *ctx;
>  
> +typedef struct {
> +    EventNotifier e;
> +    int n;
> +    int active;
> +    bool auto_set;
> +} EventNotifierTestData;
> +
>  /* Wait until there are no more BHs or AIO requests */
>  static void wait_for_aio(void)
>  {
> @@ -23,6 +30,14 @@ static void wait_for_aio(void)
>      }
>  }
>  
> +/* Wait until event notifier becomes inactive */
> +static void wait_until_inactive(EventNotifierTestData *data)
> +{
> +    while (data->active > 0) {
> +        aio_poll(ctx, true);
> +    }
> +}
> +
>  /* Simple callbacks for testing.  */
>  
>  typedef struct {
> @@ -50,13 +65,6 @@ static void bh_delete_cb(void *opaque)
>      }
>  }
>  
> -typedef struct {
> -    EventNotifier e;
> -    int n;
> -    int active;
> -    bool auto_set;
> -} EventNotifierTestData;
> -
>  static int event_active_cb(EventNotifier *e)
>  {
>      EventNotifierTestData *data = container_of(e, EventNotifierTestData, e);
> @@ -281,7 +289,7 @@ static void test_flush_event_notifier(void)
>      g_assert_cmpint(data.active, ==, 9);
>      g_assert(aio_poll(ctx, false));
>  
> -    wait_for_aio();
> +    wait_until_inactive(&data);
>      g_assert_cmpint(data.n, ==, 10);
>      g_assert_cmpint(data.active, ==, 0);
>      g_assert(!aio_poll(ctx, false));
> @@ -325,7 +333,7 @@ static void test_wait_event_notifier_noflush(void)
>      g_assert_cmpint(data.n, ==, 2);
>  
>      event_notifier_set(&dummy.e);
> -    wait_for_aio();
> +    wait_until_inactive(&dummy);
>      g_assert_cmpint(data.n, ==, 2);
>      g_assert_cmpint(dummy.n, ==, 1);
>      g_assert_cmpint(dummy.active, ==, 0);
> 

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>


* Re: [Qemu-devel] [PATCH v4 01/17] block: stop relying on io_flush() in bdrv_drain_all()
  2013-06-27 13:13   ` Paolo Bonzini
@ 2013-06-27 13:29     ` Stefan Hajnoczi
  0 siblings, 0 replies; 29+ messages in thread
From: Stefan Hajnoczi @ 2013-06-27 13:29 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: Kevin Wolf, Anthony Liguori, Ping Fan Liu, qemu-devel

On Thu, Jun 27, 2013 at 03:13:19PM +0200, Paolo Bonzini wrote:
> On 14/06/2013 13:43, Stefan Hajnoczi wrote:
> > If a block driver has no file descriptors to monitor but there are still
> > active requests, it can return 1 from .io_flush().  This is used to spin
> > during synchronous I/O.
> > 
> > Stop relying on .io_flush() and instead check
> > QLIST_EMPTY(&bs->tracked_requests) to decide whether there are active
> > requests.
> > 
> > This is the first step in removing .io_flush() so that event loops no
> > longer need to have the concept of synchronous I/O.  Eventually we may
> > be able to kill synchronous I/O completely by running everything in a
> > coroutine, but that is future work.
> > 
> > Note this patch moves bs->throttled_reqs initialization to bdrv_new() so
> > that bdrv_requests_pending(bs) can safely access it.  In practice bs is
> > g_malloc0() so the memory is already zeroed but it's safer to initialize
> > the queue properly.
> > 
> > In bdrv_delete() make sure to call bdrv_make_anon() *after* bdrv_close()
> > so that the device is still seen by bdrv_drain_all() when iterating
> > bdrv_states.
> 
> I wonder if this last change should be separated out and CCed to
> qemu-stable.  It seems like a bug if you close a device that has pending
> throttled operations.

Seems like a good idea.  I'll send the next version with that change split
out as a separate patch Cc'ed to qemu-stable@nongnu.org.

Stefan
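
For readers following the discussion, the check described in the quoted commit
message amounts to something like the sketch below.  This is not the exact hunk
from the patch; the throttled-queue helper is assumed to exist as shown:

    /* Does this BlockDriverState still have work in flight?  Consult the
     * block layer's own bookkeeping instead of asking the driver via
     * io_flush().
     */
    static bool bdrv_requests_pending(BlockDriverState *bs)
    {
        if (!QLIST_EMPTY(&bs->tracked_requests)) {
            return true;            /* request currently being processed */
        }
        if (!qemu_co_queue_empty(&bs->throttled_reqs)) {
            return true;            /* request queued by I/O throttling */
        }
        return false;
    }

bdrv_drain_all() can then keep processing bottom halves and calling aio_poll()
until no device reports pending requests.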


Thread overview: 29+ messages
2013-06-14 11:43 [Qemu-devel] [PATCH v4 00/17] aio: drop io_flush() Stefan Hajnoczi
2013-06-14 11:43 ` [Qemu-devel] [PATCH v4 01/17] block: stop relying on io_flush() in bdrv_drain_all() Stefan Hajnoczi
2013-06-27 13:13   ` Paolo Bonzini
2013-06-27 13:29     ` Stefan Hajnoczi
2013-06-14 11:43 ` [Qemu-devel] [PATCH v4 02/17] dataplane/virtio-blk: check exit conditions before aio_poll() Stefan Hajnoczi
2013-06-27 13:14   ` Paolo Bonzini
2013-06-14 11:43 ` [Qemu-devel] [PATCH v4 03/17] tests: adjust test-aio to new aio_poll() semantics Stefan Hajnoczi
2013-06-27 13:15   ` Paolo Bonzini
2013-06-27 13:28   ` Paolo Bonzini
2013-06-14 11:43 ` [Qemu-devel] [PATCH v4 04/17] tests: adjust test-thread-pool " Stefan Hajnoczi
2013-06-27 13:16   ` Paolo Bonzini
2013-06-14 11:43 ` [Qemu-devel] [PATCH v4 05/17] aio: stop using .io_flush() Stefan Hajnoczi
2013-06-27 13:19   ` Paolo Bonzini
2013-06-14 11:43 ` [Qemu-devel] [PATCH v4 06/17] block/curl: drop curl_aio_flush() Stefan Hajnoczi
2013-06-14 11:43 ` [Qemu-devel] [PATCH v4 07/17] block/gluster: drop qemu_gluster_aio_flush_cb() Stefan Hajnoczi
2013-06-14 11:43 ` [Qemu-devel] [PATCH v4 08/17] block/iscsi: drop iscsi_process_flush() Stefan Hajnoczi
2013-06-14 11:43 ` [Qemu-devel] [PATCH v4 09/17] block/linux-aio: drop qemu_laio_completion_cb() Stefan Hajnoczi
2013-06-14 11:43 ` [Qemu-devel] [PATCH v4 10/17] block/nbd: drop nbd_have_request() Stefan Hajnoczi
2013-06-14 11:43 ` [Qemu-devel] [PATCH v4 11/17] block/rbd: drop qemu_rbd_aio_flush_cb() Stefan Hajnoczi
2013-06-14 11:43 ` [Qemu-devel] [PATCH v4 12/17] block/sheepdog: drop have_co_req() and aio_flush_request() Stefan Hajnoczi
2013-06-14 11:43 ` [Qemu-devel] [PATCH v4 13/17] block/ssh: drop return_true() Stefan Hajnoczi
2013-06-14 11:44 ` [Qemu-devel] [PATCH v4 14/17] dataplane/virtio-blk: drop flush_true() and flush_io() Stefan Hajnoczi
2013-06-14 11:44 ` [Qemu-devel] [PATCH v4 15/17] thread-pool: drop thread_pool_active() Stefan Hajnoczi
2013-06-14 11:44 ` [Qemu-devel] [PATCH v4 16/17] tests: drop event_active_cb() Stefan Hajnoczi
2013-06-27 13:20   ` Paolo Bonzini
2013-06-14 11:44 ` [Qemu-devel] [PATCH v4 17/17] aio: drop io_flush argument Stefan Hajnoczi
2013-06-27 13:21   ` Paolo Bonzini
2013-06-27 12:23 ` [Qemu-devel] [PATCH v4 00/17] aio: drop io_flush() Stefan Hajnoczi
2013-06-27 13:22 ` Paolo Bonzini
