* [Qemu-devel] [PATCH v6 00/18] aio: drop io_flush()
@ 2013-07-25 15:18 Stefan Hajnoczi
  2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 01/18] block: ensure bdrv_drain_all() works during bdrv_delete() Stefan Hajnoczi
                   ` (18 more replies)
  0 siblings, 19 replies; 53+ messages in thread
From: Stefan Hajnoczi @ 2013-07-25 15:18 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Ping Fan Liu, alex, Michael Roth, Stefan Hajnoczi,
	Paolo Bonzini

v6:
 * Fix block/stream.c:close_unused_images() dangling pointer in Patch 2
 * Rebase onto qemu.git/master

v5:
 * Split out bdrv_delete() drain fix [bonzini]
 * Fix commit message  [bonzini]

v4:
 * Ensure pending BHs are processed in bdrv_drain_all() [bonzini]

v3:
 * I forgot about this series, time to push it again!
 * Rebase onto qemu.git/master
 * Drop now-unused AioFlushHandler typedef [bonzini]

This series gets rid of aio's .io_flush() callback.  It's based on Paolo's
insight that bdrv_drain_all() can be implemented using the bs->tracked_requests
list.  io_flush() is redundant since the block layer already knows if requests
are pending.

The point of this effort is to simplify our event loop(s).  If we can reduce
custom features like io_flush() it becomes possible to unify AioContext and
main-loop.c, and maybe even to replace them with glib's main loop.

This is also important to me for dataplane, since bdrv_drain_all() is one of
the synchronization points between threads.  QEMU monitor commands invoke
bdrv_drain_all() while the block device is accessed from a dataplane thread.

Background on io_flush() semantics:

The io_flush() handler must return 1 if this aio fd handler is active.  That
is, requests are pending and we'd like to make progress by monitoring the fd.

If io_flush() returns 0, the aio event loop skips monitoring this fd.  This is
critical for block drivers like iscsi, where we have an idle TCP socket that we
want to block on *only* when there are pending requests.
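
For illustration, a minimal sketch of that contract (MyDriverState,
my_io_flush and my_read_cb are hypothetical names, not taken from any real
driver):

    static int my_io_flush(void *opaque)
    {
        MyDriverState *s = opaque;

        /* 1: requests in flight, keep monitoring the fd;
         * 0: idle, skip this fd */
        return s->in_flight > 0;
    }

    /* registered through the io_flush argument this series removes */
    qemu_aio_set_fd_handler(fd, my_read_cb, NULL, my_io_flush, s);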

The series works as follows:

Part 1 - stop relying on .io_flush()

The first patches change aio_poll() callers to check their termination
condition themselves instead of relying on .io_flush() (a sketch of the old
and new loop shapes follows the commit list):

  bcfa688 block: ensure bdrv_drain_all() works during bdrv_delete()
  c1351f5 block: stop relying on io_flush() in bdrv_drain_all()
  b75d9e5 dataplane/virtio-blk: check exit conditions before aio_poll()
  18fc4d6 tests: adjust test-aio to new aio_poll() semantics
  4c52a7c tests: adjust test-thread-pool to new aio_poll() semantics
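
The shape of the change, as a minimal sketch (the termination condition
shown is illustrative; each caller substitutes its own):

    /* old: spin until the .io_flush() handlers report no pending work */
    while (qemu_aio_wait()) {
        /* do nothing */
    }

    /* new: the caller checks its own termination condition */
    while (bdrv_requests_pending(bs)) {
        aio_poll(ctx, true);
    }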

Part 2 - stop calling .io_flush() from aio_poll()

The semantic change to aio_poll() is made here:

  bacae7a aio: stop using .io_flush()

Part 3 - drop io_flush() handlers, just pass NULL for the io_flush argument

Remove the now dead code from all .io_flush() users:

  720e0ad block/curl: drop curl_aio_flush()
  2830cf3 block/gluster: drop qemu_gluster_aio_flush_cb()
  c683e8e block/iscsi: drop iscsi_process_flush()
  5019686 block/linux-aio: drop qemu_laio_completion_cb()
  b862bcf block/nbd: drop nbd_have_request()
  516e517 block/rbd: drop qemu_rbd_aio_flush_cb()
  177090b block/sheepdog: drop have_co_req() and aio_flush_request()
  e2e9e85 block/ssh: drop return_true()
  7cc7bac dataplane/virtio-blk: drop flush_true() and flush_io()
  db3cb18 thread-pool: drop thread_pool_active()
  10a783b tests: drop event_active_cb()

Part 4 - remove io_flush arguments from aio functions

The big, mechanical patch that drops the io_flush argument:

  4a8c36b aio: drop io_flush argument

Note that I split Part 3 from Part 4 so it's easy to review individual block
drivers.

Stefan Hajnoczi (18):
  block: ensure bdrv_drain_all() works during bdrv_delete()
  block: stop relying on io_flush() in bdrv_drain_all()
  dataplane/virtio-blk: check exit conditions before aio_poll()
  tests: adjust test-aio to new aio_poll() semantics
  tests: adjust test-thread-pool to new aio_poll() semantics
  aio: stop using .io_flush()
  block/curl: drop curl_aio_flush()
  block/gluster: drop qemu_gluster_aio_flush_cb()
  block/iscsi: drop iscsi_process_flush()
  block/linux-aio: drop qemu_laio_completion_cb()
  block/nbd: drop nbd_have_request()
  block/rbd: drop qemu_rbd_aio_flush_cb()
  block/sheepdog: drop have_co_req() and aio_flush_request()
  block/ssh: drop return_true()
  dataplane/virtio-blk: drop flush_true() and flush_io()
  thread-pool: drop thread_pool_active()
  tests: drop event_active_cb()
  aio: drop io_flush argument

 aio-posix.c                     | 36 ++++++------------
 aio-win32.c                     | 37 ++++++++-----------
 async.c                         |  4 +-
 block.c                         | 50 ++++++++++++++++++-------
 block/curl.c                    | 25 ++-----------
 block/gluster.c                 | 21 ++---------
 block/iscsi.c                   | 10 +----
 block/linux-aio.c               | 18 +--------
 block/nbd.c                     | 18 ++-------
 block/rbd.c                     | 16 +-------
 block/sheepdog.c                | 33 ++++-------------
 block/ssh.c                     | 12 +-----
 block/stream.c                  |  6 ++-
 hw/block/dataplane/virtio-blk.c | 25 +++----------
 include/block/aio.h             | 14 +------
 main-loop.c                     |  9 ++---
 tests/test-aio.c                | 82 +++++++++++++++++++++--------------------
 tests/test-thread-pool.c        | 24 ++++++------
 thread-pool.c                   | 11 +-----
 19 files changed, 163 insertions(+), 288 deletions(-)

-- 
1.8.1.4

* [Qemu-devel] [PATCH v6 01/18] block: ensure bdrv_drain_all() works during bdrv_delete()
  2013-07-25 15:18 [Qemu-devel] [PATCH v6 00/18] aio: drop io_flush() Stefan Hajnoczi
@ 2013-07-25 15:18 ` Stefan Hajnoczi
  2013-07-26  6:43   ` Wenchao Xia
  2013-08-07  7:42   ` Stefan Hajnoczi
  2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 02/18] block: stop relying on io_flush() in bdrv_drain_all() Stefan Hajnoczi
                   ` (17 subsequent siblings)
  18 siblings, 2 replies; 53+ messages in thread
From: Stefan Hajnoczi @ 2013-07-25 15:18 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Ping Fan Liu, alex, Michael Roth, qemu-stable,
	Stefan Hajnoczi, Paolo Bonzini

In bdrv_delete() make sure to call bdrv_make_anon() *after* bdrv_close()
so that the device is still seen by bdrv_drain_all() when iterating
bdrv_states.

Cc: qemu-stable@nongnu.org
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/block.c b/block.c
index 6cd39fa..9d68270 100644
--- a/block.c
+++ b/block.c
@@ -1600,11 +1600,11 @@ void bdrv_delete(BlockDriverState *bs)
     assert(!bs->job);
     assert(!bs->in_use);
 
+    bdrv_close(bs);
+
     /* remove from list, if necessary */
     bdrv_make_anon(bs);
 
-    bdrv_close(bs);
-
     g_free(bs);
 }
 
-- 
1.8.1.4

* [Qemu-devel] [PATCH v6 02/18] block: stop relying on io_flush() in bdrv_drain_all()
  2013-07-25 15:18 [Qemu-devel] [PATCH v6 00/18] aio: drop io_flush() Stefan Hajnoczi
  2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 01/18] block: ensure bdrv_drain_all() works during bdrv_delete() Stefan Hajnoczi
@ 2013-07-25 15:18 ` Stefan Hajnoczi
  2013-07-29  7:01   ` Wenchao Xia
  2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 03/18] dataplane/virtio-blk: check exit conditions before aio_poll() Stefan Hajnoczi
                   ` (16 subsequent siblings)
  18 siblings, 1 reply; 53+ messages in thread
From: Stefan Hajnoczi @ 2013-07-25 15:18 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Ping Fan Liu, alex, Michael Roth, Stefan Hajnoczi,
	Paolo Bonzini

If a block driver has no file descriptors to monitor but there are still
active requests, it can return 1 from .io_flush().  This is used to spin
during synchronous I/O.

Stop relying on .io_flush() and instead check
QLIST_EMPTY(&bs->tracked_requests) to decide whether there are active
requests.

This is the first step in removing .io_flush() so that event loops no
longer need to have the concept of synchronous I/O.  Eventually we may
be able to kill synchronous I/O completely by running everything in a
coroutine, but that is future work.

Note this patch moves bs->throttled_reqs initialization to bdrv_new() so
that bdrv_requests_pending(bs) can safely access it.  In practice bs is
allocated with g_malloc0() so the memory is already zeroed, but it is
safer to initialize the queue explicitly.

We also need to fix up block/stream.c:close_unused_images() to prevent
traversing a dangling pointer while it rearranges the backing file
chain.  This is necessary since the new bdrv_drain_all() traverses the
backing file chain.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block.c        | 46 +++++++++++++++++++++++++++++++++++-----------
 block/stream.c |  6 +++++-
 2 files changed, 40 insertions(+), 12 deletions(-)

diff --git a/block.c b/block.c
index 9d68270..3ab3333 100644
--- a/block.c
+++ b/block.c
@@ -148,7 +148,6 @@ static void bdrv_block_timer(void *opaque)
 
 void bdrv_io_limits_enable(BlockDriverState *bs)
 {
-    qemu_co_queue_init(&bs->throttled_reqs);
     bs->block_timer = qemu_new_timer_ns(vm_clock, bdrv_block_timer, bs);
     bs->io_limits_enabled = true;
 }
@@ -306,6 +305,7 @@ BlockDriverState *bdrv_new(const char *device_name)
     bdrv_iostatus_disable(bs);
     notifier_list_init(&bs->close_notifiers);
     notifier_with_return_list_init(&bs->before_write_notifiers);
+    qemu_co_queue_init(&bs->throttled_reqs);
 
     return bs;
 }
@@ -1421,6 +1421,35 @@ void bdrv_close_all(void)
     }
 }
 
+/* Check if any requests are in-flight (including throttled requests) */
+static bool bdrv_requests_pending(BlockDriverState *bs)
+{
+    if (!QLIST_EMPTY(&bs->tracked_requests)) {
+        return true;
+    }
+    if (!qemu_co_queue_empty(&bs->throttled_reqs)) {
+        return true;
+    }
+    if (bs->file && bdrv_requests_pending(bs->file)) {
+        return true;
+    }
+    if (bs->backing_hd && bdrv_requests_pending(bs->backing_hd)) {
+        return true;
+    }
+    return false;
+}
+
+static bool bdrv_requests_pending_all(void)
+{
+    BlockDriverState *bs;
+    QTAILQ_FOREACH(bs, &bdrv_states, list) {
+        if (bdrv_requests_pending(bs)) {
+            return true;
+        }
+    }
+    return false;
+}
+
 /*
  * Wait for pending requests to complete across all BlockDriverStates
  *
@@ -1435,27 +1464,22 @@ void bdrv_close_all(void)
  */
 void bdrv_drain_all(void)
 {
+    /* Always run first iteration so any pending completion BHs run */
+    bool busy = true;
     BlockDriverState *bs;
-    bool busy;
-
-    do {
-        busy = qemu_aio_wait();
 
+    while (busy) {
         /* FIXME: We do not have timer support here, so this is effectively
          * a busy wait.
          */
         QTAILQ_FOREACH(bs, &bdrv_states, list) {
             if (!qemu_co_queue_empty(&bs->throttled_reqs)) {
                 qemu_co_queue_restart_all(&bs->throttled_reqs);
-                busy = true;
             }
         }
-    } while (busy);
 
-    /* If requests are still pending there is a bug somewhere */
-    QTAILQ_FOREACH(bs, &bdrv_states, list) {
-        assert(QLIST_EMPTY(&bs->tracked_requests));
-        assert(qemu_co_queue_empty(&bs->throttled_reqs));
+        busy = bdrv_requests_pending_all();
+        busy |= aio_poll(qemu_get_aio_context(), busy);
     }
 }
 
diff --git a/block/stream.c b/block/stream.c
index 7fe9e48..db49b4d 100644
--- a/block/stream.c
+++ b/block/stream.c
@@ -57,6 +57,11 @@ static void close_unused_images(BlockDriverState *top, BlockDriverState *base,
     BlockDriverState *intermediate;
     intermediate = top->backing_hd;
 
+    /* Must assign before bdrv_delete() to prevent traversing dangling pointer
+     * while we delete backing image instances.
+     */
+    top->backing_hd = base;
+
     while (intermediate) {
         BlockDriverState *unused;
 
@@ -70,7 +75,6 @@ static void close_unused_images(BlockDriverState *top, BlockDriverState *base,
         unused->backing_hd = NULL;
         bdrv_delete(unused);
     }
-    top->backing_hd = base;
 }
 
 static void coroutine_fn stream_run(void *opaque)
-- 
1.8.1.4

* [Qemu-devel] [PATCH v6 03/18] dataplane/virtio-blk: check exit conditions before aio_poll()
  2013-07-25 15:18 [Qemu-devel] [PATCH v6 00/18] aio: drop io_flush() Stefan Hajnoczi
  2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 01/18] block: ensure bdrv_drain_all() works during bdrv_delete() Stefan Hajnoczi
  2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 02/18] block: stop relying on io_flush() in bdrv_drain_all() Stefan Hajnoczi
@ 2013-07-25 15:18 ` Stefan Hajnoczi
  2013-07-26 16:44   ` Jeff Cody
  2013-07-29  7:10   ` Wenchao Xia
  2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 04/18] tests: adjust test-aio to new aio_poll() semantics Stefan Hajnoczi
                   ` (15 subsequent siblings)
  18 siblings, 2 replies; 53+ messages in thread
From: Stefan Hajnoczi @ 2013-07-25 15:18 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Ping Fan Liu, alex, Michael Roth, Stefan Hajnoczi,
	Paolo Bonzini

Check exit conditions before entering blocking aio_poll().  This is
mainly for consistency since it's unlikely that we are stopping in the
first event loop iteration.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 hw/block/dataplane/virtio-blk.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index 2faed43..8d3e145 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -379,9 +379,9 @@ static void *data_plane_thread(void *opaque)
 {
     VirtIOBlockDataPlane *s = opaque;
 
-    do {
+    while (!s->stopping || s->num_reqs > 0) {
         aio_poll(s->ctx, true);
-    } while (!s->stopping || s->num_reqs > 0);
+    }
     return NULL;
 }
 
-- 
1.8.1.4

* [Qemu-devel] [PATCH v6 04/18] tests: adjust test-aio to new aio_poll() semantics
  2013-07-25 15:18 [Qemu-devel] [PATCH v6 00/18] aio: drop io_flush() Stefan Hajnoczi
                   ` (2 preceding siblings ...)
  2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 03/18] dataplane/virtio-blk: check exit conditions before aio_poll() Stefan Hajnoczi
@ 2013-07-25 15:18 ` Stefan Hajnoczi
  2013-07-29  7:39   ` Wenchao Xia
  2013-07-29  8:24   ` Wenchao Xia
  2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 05/18] tests: adjust test-thread-pool " Stefan Hajnoczi
                   ` (14 subsequent siblings)
  18 siblings, 2 replies; 53+ messages in thread
From: Stefan Hajnoczi @ 2013-07-25 15:18 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Ping Fan Liu, alex, Michael Roth, Stefan Hajnoczi,
	Paolo Bonzini

aio_poll(ctx, true) will soon block if any fd handlers have been set.
Previously it would only block when .io_flush() returned true.

This means that callers must check their wait condition *before*
aio_poll() to avoid deadlock.
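
In other words, callers now follow this pattern (a minimal sketch; the
condition stands in for whatever the caller is waiting on):

    while (data->active > 0) {   /* check the wait condition first... */
        aio_poll(ctx, true);     /* ...and only then block for progress */
    }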

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 tests/test-aio.c | 26 +++++++++++++++++---------
 1 file changed, 17 insertions(+), 9 deletions(-)

diff --git a/tests/test-aio.c b/tests/test-aio.c
index c173870..20bf5e6 100644
--- a/tests/test-aio.c
+++ b/tests/test-aio.c
@@ -15,6 +15,13 @@
 
 AioContext *ctx;
 
+typedef struct {
+    EventNotifier e;
+    int n;
+    int active;
+    bool auto_set;
+} EventNotifierTestData;
+
 /* Wait until there are no more BHs or AIO requests */
 static void wait_for_aio(void)
 {
@@ -23,6 +30,14 @@ static void wait_for_aio(void)
     }
 }
 
+/* Wait until event notifier becomes inactive */
+static void wait_until_inactive(EventNotifierTestData *data)
+{
+    while (data->active > 0) {
+        aio_poll(ctx, true);
+    }
+}
+
 /* Simple callbacks for testing.  */
 
 typedef struct {
@@ -50,13 +65,6 @@ static void bh_delete_cb(void *opaque)
     }
 }
 
-typedef struct {
-    EventNotifier e;
-    int n;
-    int active;
-    bool auto_set;
-} EventNotifierTestData;
-
 static int event_active_cb(EventNotifier *e)
 {
     EventNotifierTestData *data = container_of(e, EventNotifierTestData, e);
@@ -281,7 +289,7 @@ static void test_flush_event_notifier(void)
     g_assert_cmpint(data.active, ==, 9);
     g_assert(aio_poll(ctx, false));
 
-    wait_for_aio();
+    wait_until_inactive(&data);
     g_assert_cmpint(data.n, ==, 10);
     g_assert_cmpint(data.active, ==, 0);
     g_assert(!aio_poll(ctx, false));
@@ -325,7 +333,7 @@ static void test_wait_event_notifier_noflush(void)
     g_assert_cmpint(data.n, ==, 2);
 
     event_notifier_set(&dummy.e);
-    wait_for_aio();
+    wait_until_inactive(&dummy);
     g_assert_cmpint(data.n, ==, 2);
     g_assert_cmpint(dummy.n, ==, 1);
     g_assert_cmpint(dummy.active, ==, 0);
-- 
1.8.1.4

* [Qemu-devel] [PATCH v6 05/18] tests: adjust test-thread-pool to new aio_poll() semantics
  2013-07-25 15:18 [Qemu-devel] [PATCH v6 00/18] aio: drop io_flush() Stefan Hajnoczi
                   ` (3 preceding siblings ...)
  2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 04/18] tests: adjust test-aio to new aio_poll() semantics Stefan Hajnoczi
@ 2013-07-25 15:18 ` Stefan Hajnoczi
  2013-07-29  7:51   ` Wenchao Xia
  2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 06/18] aio: stop using .io_flush() Stefan Hajnoczi
                   ` (13 subsequent siblings)
  18 siblings, 1 reply; 53+ messages in thread
From: Stefan Hajnoczi @ 2013-07-25 15:18 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Ping Fan Liu, alex, Michael Roth, Stefan Hajnoczi,
	Paolo Bonzini

aio_poll(ctx, true) will soon block when fd handlers have been set.
Previously aio_poll() would return early if all .io_flush() handlers
returned false.  This means we need to check the equivalent of the
.io_flush() condition *before* calling aio_poll(ctx, true) to avoid
deadlock.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 tests/test-thread-pool.c | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/tests/test-thread-pool.c b/tests/test-thread-pool.c
index b62338f..8188d1a 100644
--- a/tests/test-thread-pool.c
+++ b/tests/test-thread-pool.c
@@ -40,19 +40,13 @@ static void done_cb(void *opaque, int ret)
     active--;
 }
 
-/* Wait until all aio and bh activity has finished */
-static void qemu_aio_wait_all(void)
-{
-    while (aio_poll(ctx, true)) {
-        /* Do nothing */
-    }
-}
-
 static void test_submit(void)
 {
     WorkerTestData data = { .n = 0 };
     thread_pool_submit(pool, worker_cb, &data);
-    qemu_aio_wait_all();
+    while (data.n == 0) {
+        aio_poll(ctx, true);
+    }
     g_assert_cmpint(data.n, ==, 1);
 }
 
@@ -65,7 +59,9 @@ static void test_submit_aio(void)
     /* The callbacks are not called until after the first wait.  */
     active = 1;
     g_assert_cmpint(data.ret, ==, -EINPROGRESS);
-    qemu_aio_wait_all();
+    while (data.ret == -EINPROGRESS) {
+        aio_poll(ctx, true);
+    }
     g_assert_cmpint(active, ==, 0);
     g_assert_cmpint(data.n, ==, 1);
     g_assert_cmpint(data.ret, ==, 0);
@@ -103,7 +99,9 @@ static void test_submit_co(void)
 
     /* qemu_aio_wait_all will execute the rest of the coroutine.  */
 
-    qemu_aio_wait_all();
+    while (data.ret == -EINPROGRESS) {
+        aio_poll(ctx, true);
+    }
 
     /* Back here after the coroutine has finished.  */
 
@@ -187,7 +185,9 @@ static void test_cancel(void)
     }
 
     /* Finish execution and execute any remaining callbacks.  */
-    qemu_aio_wait_all();
+    while (active > 0) {
+        aio_poll(ctx, true);
+    }
     g_assert_cmpint(active, ==, 0);
     for (i = 0; i < 100; i++) {
         if (data[i].n == 3) {
-- 
1.8.1.4

* [Qemu-devel] [PATCH v6 06/18] aio: stop using .io_flush()
  2013-07-25 15:18 [Qemu-devel] [PATCH v6 00/18] aio: drop io_flush() Stefan Hajnoczi
                   ` (4 preceding siblings ...)
  2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 05/18] tests: adjust test-thread-pool " Stefan Hajnoczi
@ 2013-07-25 15:18 ` Stefan Hajnoczi
  2013-07-26 16:10   ` Jeff Cody
                     ` (2 more replies)
  2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 07/18] block/curl: drop curl_aio_flush() Stefan Hajnoczi
                   ` (12 subsequent siblings)
  18 siblings, 3 replies; 53+ messages in thread
From: Stefan Hajnoczi @ 2013-07-25 15:18 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Ping Fan Liu, alex, Michael Roth, Stefan Hajnoczi,
	Paolo Bonzini

Now that aio_poll() users check their termination condition themselves,
it is no longer necessary to call .io_flush() handlers.

The behavior of aio_poll() changes as follows:

1. .io_flush() is no longer invoked and file descriptors are *always*
monitored.  Previously returning 0 from .io_flush() would skip this file
descriptor.

Due to this change it is essential to check that requests are pending
before calling qemu_aio_wait().  Failure to do so means we block, for
example, waiting for an idle iSCSI socket to become readable when there
are no requests.  Currently all qemu_aio_wait()/aio_poll() callers check
before calling.

2. aio_poll() now returns true if progress was made (BH or fd handlers
executed) and false otherwise.  Previously it would return true whenever
the loop was 'busy', i.e. whenever some .io_flush() handler returned
true.  The 'busy' concept no longer exists, so only progress is
reported.

Due to this change we need to update tests/test-aio.c which asserts
aio_poll() return values.  Note that QEMU doesn't actually rely on these
return values so only tests/test-aio.c cares.
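
As a sketch of the new contract (assuming a context whose only pending
work is one scheduled bottom half):

    qemu_bh_schedule(bh);
    g_assert(aio_poll(ctx, false));   /* BH callback ran: progress */
    g_assert(!aio_poll(ctx, false));  /* nothing left to do: no progress */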

Note that ctx->notifier, the EventNotifier fd used for aio_notify(), is
now handled as a special case.  This is a little ugly but maintains
aio_poll() semantics, i.e. aio_notify() does not count as 'progress' and
aio_poll() avoids blocking when the user has not set any fd handlers yet.

Patches after this remove .io_flush() handler code until we can finally
drop the io_flush arguments to aio_set_fd_handler() and friends.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 aio-posix.c      | 29 +++++++++--------------------
 aio-win32.c      | 34 ++++++++++++++--------------------
 tests/test-aio.c | 10 +++++-----
 3 files changed, 28 insertions(+), 45 deletions(-)

diff --git a/aio-posix.c b/aio-posix.c
index b68eccd..7d66048 100644
--- a/aio-posix.c
+++ b/aio-posix.c
@@ -23,7 +23,6 @@ struct AioHandler
     GPollFD pfd;
     IOHandler *io_read;
     IOHandler *io_write;
-    AioFlushHandler *io_flush;
     int deleted;
     int pollfds_idx;
     void *opaque;
@@ -84,7 +83,6 @@ void aio_set_fd_handler(AioContext *ctx,
         /* Update handler with latest information */
         node->io_read = io_read;
         node->io_write = io_write;
-        node->io_flush = io_flush;
         node->opaque = opaque;
         node->pollfds_idx = -1;
 
@@ -147,7 +145,11 @@ static bool aio_dispatch(AioContext *ctx)
             (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR)) &&
             node->io_read) {
             node->io_read(node->opaque);
-            progress = true;
+
+            /* aio_notify() does not count as progress */
+            if (node->opaque != &ctx->notifier) {
+                progress = true;
+            }
         }
         if (!node->deleted &&
             (revents & (G_IO_OUT | G_IO_ERR)) &&
@@ -173,7 +175,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
 {
     AioHandler *node;
     int ret;
-    bool busy, progress;
+    bool progress;
 
     progress = false;
 
@@ -200,20 +202,8 @@ bool aio_poll(AioContext *ctx, bool blocking)
     g_array_set_size(ctx->pollfds, 0);
 
     /* fill pollfds */
-    busy = false;
     QLIST_FOREACH(node, &ctx->aio_handlers, node) {
         node->pollfds_idx = -1;
-
-        /* If there aren't pending AIO operations, don't invoke callbacks.
-         * Otherwise, if there are no AIO requests, qemu_aio_wait() would
-         * wait indefinitely.
-         */
-        if (!node->deleted && node->io_flush) {
-            if (node->io_flush(node->opaque) == 0) {
-                continue;
-            }
-            busy = true;
-        }
         if (!node->deleted && node->pfd.events) {
             GPollFD pfd = {
                 .fd = node->pfd.fd,
@@ -226,8 +216,8 @@ bool aio_poll(AioContext *ctx, bool blocking)
 
     ctx->walking_handlers--;
 
-    /* No AIO operations?  Get us out of here */
-    if (!busy) {
+    /* early return if we only have the aio_notify() fd */
+    if (ctx->pollfds->len == 1) {
         return progress;
     }
 
@@ -250,6 +240,5 @@ bool aio_poll(AioContext *ctx, bool blocking)
         }
     }
 
-    assert(progress || busy);
-    return true;
+    return progress;
 }
diff --git a/aio-win32.c b/aio-win32.c
index 38723bf..4309c16 100644
--- a/aio-win32.c
+++ b/aio-win32.c
@@ -23,7 +23,6 @@
 struct AioHandler {
     EventNotifier *e;
     EventNotifierHandler *io_notify;
-    AioFlushEventNotifierHandler *io_flush;
     GPollFD pfd;
     int deleted;
     QLIST_ENTRY(AioHandler) node;
@@ -73,7 +72,6 @@ void aio_set_event_notifier(AioContext *ctx,
         }
         /* Update handler with latest information */
         node->io_notify = io_notify;
-        node->io_flush = io_flush;
     }
 
     aio_notify(ctx);
@@ -96,7 +94,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
 {
     AioHandler *node;
     HANDLE events[MAXIMUM_WAIT_OBJECTS + 1];
-    bool busy, progress;
+    bool progress;
     int count;
 
     progress = false;
@@ -126,7 +124,11 @@ bool aio_poll(AioContext *ctx, bool blocking)
         if (node->pfd.revents && node->io_notify) {
             node->pfd.revents = 0;
             node->io_notify(node->e);
-            progress = true;
+
+            /* aio_notify() does not count as progress */
+            if (node->e != &ctx->notifier) {
+                progress = true;
+            }
         }
 
         tmp = node;
@@ -147,19 +149,8 @@ bool aio_poll(AioContext *ctx, bool blocking)
     ctx->walking_handlers++;
 
     /* fill fd sets */
-    busy = false;
     count = 0;
     QLIST_FOREACH(node, &ctx->aio_handlers, node) {
-        /* If there aren't pending AIO operations, don't invoke callbacks.
-         * Otherwise, if there are no AIO requests, qemu_aio_wait() would
-         * wait indefinitely.
-         */
-        if (!node->deleted && node->io_flush) {
-            if (node->io_flush(node->e) == 0) {
-                continue;
-            }
-            busy = true;
-        }
         if (!node->deleted && node->io_notify) {
             events[count++] = event_notifier_get_handle(node->e);
         }
@@ -167,8 +158,8 @@ bool aio_poll(AioContext *ctx, bool blocking)
 
     ctx->walking_handlers--;
 
-    /* No AIO operations?  Get us out of here */
-    if (!busy) {
+    /* early return if we only have the aio_notify() fd */
+    if (count == 1) {
         return progress;
     }
 
@@ -196,7 +187,11 @@ bool aio_poll(AioContext *ctx, bool blocking)
                 event_notifier_get_handle(node->e) == events[ret - WAIT_OBJECT_0] &&
                 node->io_notify) {
                 node->io_notify(node->e);
-                progress = true;
+
+                /* aio_notify() does not count as progress */
+                if (node->e != &ctx->notifier) {
+                    progress = true;
+                }
             }
 
             tmp = node;
@@ -214,6 +209,5 @@ bool aio_poll(AioContext *ctx, bool blocking)
         events[ret - WAIT_OBJECT_0] = events[--count];
     }
 
-    assert(progress || busy);
-    return true;
+    return progress;
 }
diff --git a/tests/test-aio.c b/tests/test-aio.c
index 20bf5e6..1251952 100644
--- a/tests/test-aio.c
+++ b/tests/test-aio.c
@@ -254,7 +254,7 @@ static void test_wait_event_notifier(void)
     EventNotifierTestData data = { .n = 0, .active = 1 };
     event_notifier_init(&data.e, false);
     aio_set_event_notifier(ctx, &data.e, event_ready_cb, event_active_cb);
-    g_assert(aio_poll(ctx, false));
+    g_assert(!aio_poll(ctx, false));
     g_assert_cmpint(data.n, ==, 0);
     g_assert_cmpint(data.active, ==, 1);
 
@@ -279,7 +279,7 @@ static void test_flush_event_notifier(void)
     EventNotifierTestData data = { .n = 0, .active = 10, .auto_set = true };
     event_notifier_init(&data.e, false);
     aio_set_event_notifier(ctx, &data.e, event_ready_cb, event_active_cb);
-    g_assert(aio_poll(ctx, false));
+    g_assert(!aio_poll(ctx, false));
     g_assert_cmpint(data.n, ==, 0);
     g_assert_cmpint(data.active, ==, 10);
 
@@ -313,7 +313,7 @@ static void test_wait_event_notifier_noflush(void)
     /* Until there is an active descriptor, aio_poll may or may not call
      * event_ready_cb.  Still, it must not block.  */
     event_notifier_set(&data.e);
-    g_assert(!aio_poll(ctx, true));
+    g_assert(aio_poll(ctx, true));
     data.n = 0;
 
     /* An active event notifier forces aio_poll to look at EventNotifiers.  */
@@ -323,13 +323,13 @@ static void test_wait_event_notifier_noflush(void)
     event_notifier_set(&data.e);
     g_assert(aio_poll(ctx, false));
     g_assert_cmpint(data.n, ==, 1);
-    g_assert(aio_poll(ctx, false));
+    g_assert(!aio_poll(ctx, false));
     g_assert_cmpint(data.n, ==, 1);
 
     event_notifier_set(&data.e);
     g_assert(aio_poll(ctx, false));
     g_assert_cmpint(data.n, ==, 2);
-    g_assert(aio_poll(ctx, false));
+    g_assert(!aio_poll(ctx, false));
     g_assert_cmpint(data.n, ==, 2);
 
     event_notifier_set(&dummy.e);
-- 
1.8.1.4

* [Qemu-devel] [PATCH v6 07/18] block/curl: drop curl_aio_flush()
  2013-07-25 15:18 [Qemu-devel] [PATCH v6 00/18] aio: drop io_flush() Stefan Hajnoczi
                   ` (5 preceding siblings ...)
  2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 06/18] aio: stop using .io_flush() Stefan Hajnoczi
@ 2013-07-25 15:18 ` Stefan Hajnoczi
  2013-07-29  8:11   ` Wenchao Xia
  2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 08/18] block/gluster: drop qemu_gluster_aio_flush_cb() Stefan Hajnoczi
                   ` (11 subsequent siblings)
  18 siblings, 1 reply; 53+ messages in thread
From: Stefan Hajnoczi @ 2013-07-25 15:18 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Ping Fan Liu, alex, Michael Roth, Stefan Hajnoczi,
	Paolo Bonzini

.io_flush() is no longer called so drop curl_aio_flush().  The acb[]
array that the function checks is still used in other parts of
block/curl.c, so it cannot be removed.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/curl.c | 22 +++-------------------
 1 file changed, 3 insertions(+), 19 deletions(-)

diff --git a/block/curl.c b/block/curl.c
index 82d39ff..5999924 100644
--- a/block/curl.c
+++ b/block/curl.c
@@ -86,7 +86,6 @@ typedef struct BDRVCURLState {
 
 static void curl_clean_state(CURLState *s);
 static void curl_multi_do(void *arg);
-static int curl_aio_flush(void *opaque);
 
 static int curl_sock_cb(CURL *curl, curl_socket_t fd, int action,
                         void *s, void *sp)
@@ -94,14 +93,14 @@ static int curl_sock_cb(CURL *curl, curl_socket_t fd, int action,
     DPRINTF("CURL (AIO): Sock action %d on fd %d\n", action, fd);
     switch (action) {
         case CURL_POLL_IN:
-            qemu_aio_set_fd_handler(fd, curl_multi_do, NULL, curl_aio_flush, s);
+            qemu_aio_set_fd_handler(fd, curl_multi_do, NULL, NULL, s);
             break;
         case CURL_POLL_OUT:
-            qemu_aio_set_fd_handler(fd, NULL, curl_multi_do, curl_aio_flush, s);
+            qemu_aio_set_fd_handler(fd, NULL, curl_multi_do, NULL, s);
             break;
         case CURL_POLL_INOUT:
             qemu_aio_set_fd_handler(fd, curl_multi_do, curl_multi_do,
-                                    curl_aio_flush, s);
+                                    NULL, s);
             break;
         case CURL_POLL_REMOVE:
             qemu_aio_set_fd_handler(fd, NULL, NULL, NULL, NULL);
@@ -495,21 +494,6 @@ out_noclean:
     return -EINVAL;
 }
 
-static int curl_aio_flush(void *opaque)
-{
-    BDRVCURLState *s = opaque;
-    int i, j;
-
-    for (i=0; i < CURL_NUM_STATES; i++) {
-        for(j=0; j < CURL_NUM_ACB; j++) {
-            if (s->states[i].acb[j]) {
-                return 1;
-            }
-        }
-    }
-    return 0;
-}
-
 static void curl_aio_cancel(BlockDriverAIOCB *blockacb)
 {
     // Do we have to implement canceling? Seems to work without...
-- 
1.8.1.4

* [Qemu-devel] [PATCH v6 08/18] block/gluster: drop qemu_gluster_aio_flush_cb()
  2013-07-25 15:18 [Qemu-devel] [PATCH v6 00/18] aio: drop io_flush() Stefan Hajnoczi
                   ` (6 preceding siblings ...)
  2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 07/18] block/curl: drop curl_aio_flush() Stefan Hajnoczi
@ 2013-07-25 15:18 ` Stefan Hajnoczi
  2013-07-29  8:17   ` Wenchao Xia
  2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 09/18] block/iscsi: drop iscsi_process_flush() Stefan Hajnoczi
                   ` (10 subsequent siblings)
  18 siblings, 1 reply; 53+ messages in thread
From: Stefan Hajnoczi @ 2013-07-25 15:18 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Ping Fan Liu, alex, Michael Roth, Stefan Hajnoczi,
	Paolo Bonzini

Since .io_flush() is no longer called we do not need
qemu_gluster_aio_flush_cb() anymore.  It turns out that qemu_aio_count
is unused now and can be dropped.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/gluster.c | 16 +---------------
 1 file changed, 1 insertion(+), 15 deletions(-)

diff --git a/block/gluster.c b/block/gluster.c
index 6de418c..7c1a9f2 100644
--- a/block/gluster.c
+++ b/block/gluster.c
@@ -32,7 +32,6 @@ typedef struct BDRVGlusterState {
     struct glfs *glfs;
     int fds[2];
     struct glfs_fd *fd;
-    int qemu_aio_count;
     int event_reader_pos;
     GlusterAIOCB *event_acb;
 } BDRVGlusterState;
@@ -247,7 +246,6 @@ static void qemu_gluster_complete_aio(GlusterAIOCB *acb, BDRVGlusterState *s)
         ret = -EIO; /* Partial read/write - fail it */
     }
 
-    s->qemu_aio_count--;
     qemu_aio_release(acb);
     cb(opaque, ret);
     if (finished) {
@@ -275,13 +273,6 @@ static void qemu_gluster_aio_event_reader(void *opaque)
     } while (ret < 0 && errno == EINTR);
 }
 
-static int qemu_gluster_aio_flush_cb(void *opaque)
-{
-    BDRVGlusterState *s = opaque;
-
-    return (s->qemu_aio_count > 0);
-}
-
 /* TODO Convert to fine grained options */
 static QemuOptsList runtime_opts = {
     .name = "gluster",
@@ -348,7 +339,7 @@ static int qemu_gluster_open(BlockDriverState *bs,  QDict *options,
     }
     fcntl(s->fds[GLUSTER_FD_READ], F_SETFL, O_NONBLOCK);
     qemu_aio_set_fd_handler(s->fds[GLUSTER_FD_READ],
-        qemu_gluster_aio_event_reader, NULL, qemu_gluster_aio_flush_cb, s);
+        qemu_gluster_aio_event_reader, NULL, NULL, s);
 
 out:
     qemu_opts_del(opts);
@@ -445,7 +436,6 @@ static void gluster_finish_aiocb(struct glfs_fd *fd, ssize_t ret, void *arg)
         qemu_mutex_lock_iothread(); /* We are in gluster thread context */
         acb->common.cb(acb->common.opaque, -EIO);
         qemu_aio_release(acb);
-        s->qemu_aio_count--;
         close(s->fds[GLUSTER_FD_READ]);
         close(s->fds[GLUSTER_FD_WRITE]);
         qemu_aio_set_fd_handler(s->fds[GLUSTER_FD_READ], NULL, NULL, NULL,
@@ -467,7 +457,6 @@ static BlockDriverAIOCB *qemu_gluster_aio_rw(BlockDriverState *bs,
 
     offset = sector_num * BDRV_SECTOR_SIZE;
     size = nb_sectors * BDRV_SECTOR_SIZE;
-    s->qemu_aio_count++;
 
     acb = qemu_aio_get(&gluster_aiocb_info, bs, cb, opaque);
     acb->size = size;
@@ -488,7 +477,6 @@ static BlockDriverAIOCB *qemu_gluster_aio_rw(BlockDriverState *bs,
     return &acb->common;
 
 out:
-    s->qemu_aio_count--;
     qemu_aio_release(acb);
     return NULL;
 }
@@ -518,7 +506,6 @@ static BlockDriverAIOCB *qemu_gluster_aio_flush(BlockDriverState *bs,
     acb->size = 0;
     acb->ret = 0;
     acb->finished = NULL;
-    s->qemu_aio_count++;
 
     ret = glfs_fsync_async(s->fd, &gluster_finish_aiocb, acb);
     if (ret < 0) {
@@ -527,7 +514,6 @@ static BlockDriverAIOCB *qemu_gluster_aio_flush(BlockDriverState *bs,
     return &acb->common;
 
 out:
-    s->qemu_aio_count--;
     qemu_aio_release(acb);
     return NULL;
 }
-- 
1.8.1.4

* [Qemu-devel] [PATCH v6 09/18] block/iscsi: drop iscsi_process_flush()
  2013-07-25 15:18 [Qemu-devel] [PATCH v6 00/18] aio: drop io_flush() Stefan Hajnoczi
                   ` (7 preceding siblings ...)
  2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 08/18] block/gluster: drop qemu_gluster_aio_flush_cb() Stefan Hajnoczi
@ 2013-07-25 15:18 ` Stefan Hajnoczi
  2013-07-29  8:17   ` Wenchao Xia
  2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 10/18] block/linux-aio: drop qemu_laio_completion_cb() Stefan Hajnoczi
                   ` (9 subsequent siblings)
  18 siblings, 1 reply; 53+ messages in thread
From: Stefan Hajnoczi @ 2013-07-25 15:18 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Ping Fan Liu, alex, Michael Roth, Stefan Hajnoczi,
	Paolo Bonzini

.io_flush() is no longer called so drop iscsi_process_flush().

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/iscsi.c | 9 +--------
 1 file changed, 1 insertion(+), 8 deletions(-)

diff --git a/block/iscsi.c b/block/iscsi.c
index 5f28c6a..e2692d6 100644
--- a/block/iscsi.c
+++ b/block/iscsi.c
@@ -146,13 +146,6 @@ static const AIOCBInfo iscsi_aiocb_info = {
 static void iscsi_process_read(void *arg);
 static void iscsi_process_write(void *arg);
 
-static int iscsi_process_flush(void *arg)
-{
-    IscsiLun *iscsilun = arg;
-
-    return iscsi_queue_length(iscsilun->iscsi) > 0;
-}
-
 static void
 iscsi_set_events(IscsiLun *iscsilun)
 {
@@ -166,7 +159,7 @@ iscsi_set_events(IscsiLun *iscsilun)
         qemu_aio_set_fd_handler(iscsi_get_fd(iscsi),
                       iscsi_process_read,
                       (ev & POLLOUT) ? iscsi_process_write : NULL,
-                      iscsi_process_flush,
+                      NULL,
                       iscsilun);
 
     }
-- 
1.8.1.4

* [Qemu-devel] [PATCH v6 10/18] block/linux-aio: drop qemu_laio_completion_cb()
  2013-07-25 15:18 [Qemu-devel] [PATCH v6 00/18] aio: drop io_flush() Stefan Hajnoczi
                   ` (8 preceding siblings ...)
  2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 09/18] block/iscsi: drop iscsi_process_flush() Stefan Hajnoczi
@ 2013-07-25 15:18 ` Stefan Hajnoczi
  2013-07-29  8:19   ` Wenchao Xia
  2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 11/18] block/nbd: drop nbd_have_request() Stefan Hajnoczi
                   ` (8 subsequent siblings)
  18 siblings, 1 reply; 53+ messages in thread
From: Stefan Hajnoczi @ 2013-07-25 15:18 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Ping Fan Liu, alex, Michael Roth, Stefan Hajnoczi,
	Paolo Bonzini

.io_flush() is no longer called so drop qemu_laio_completion_cb().  It
turns out that count is now unused so drop that too.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/linux-aio.c | 17 ++---------------
 1 file changed, 2 insertions(+), 15 deletions(-)

diff --git a/block/linux-aio.c b/block/linux-aio.c
index ee0f8d1..d9128f3 100644
--- a/block/linux-aio.c
+++ b/block/linux-aio.c
@@ -39,7 +39,6 @@ struct qemu_laiocb {
 struct qemu_laio_state {
     io_context_t ctx;
     EventNotifier e;
-    int count;
 };
 
 static inline ssize_t io_event_ret(struct io_event *ev)
@@ -55,8 +54,6 @@ static void qemu_laio_process_completion(struct qemu_laio_state *s,
 {
     int ret;
 
-    s->count--;
-
     ret = laiocb->ret;
     if (ret != -ECANCELED) {
         if (ret == laiocb->nbytes) {
@@ -101,13 +98,6 @@ static void qemu_laio_completion_cb(EventNotifier *e)
     }
 }
 
-static int qemu_laio_flush_cb(EventNotifier *e)
-{
-    struct qemu_laio_state *s = container_of(e, struct qemu_laio_state, e);
-
-    return (s->count > 0) ? 1 : 0;
-}
-
 static void laio_cancel(BlockDriverAIOCB *blockacb)
 {
     struct qemu_laiocb *laiocb = (struct qemu_laiocb *)blockacb;
@@ -177,14 +167,11 @@ BlockDriverAIOCB *laio_submit(BlockDriverState *bs, void *aio_ctx, int fd,
         goto out_free_aiocb;
     }
     io_set_eventfd(&laiocb->iocb, event_notifier_get_fd(&s->e));
-    s->count++;
 
     if (io_submit(s->ctx, 1, &iocbs) < 0)
-        goto out_dec_count;
+        goto out_free_aiocb;
     return &laiocb->common;
 
-out_dec_count:
-    s->count--;
 out_free_aiocb:
     qemu_aio_release(laiocb);
     return NULL;
@@ -204,7 +191,7 @@ void *laio_init(void)
     }
 
     qemu_aio_set_event_notifier(&s->e, qemu_laio_completion_cb,
-                                qemu_laio_flush_cb);
+                                NULL);
 
     return s;
 
-- 
1.8.1.4

* [Qemu-devel] [PATCH v6 11/18] block/nbd: drop nbd_have_request()
  2013-07-25 15:18 [Qemu-devel] [PATCH v6 00/18] aio: drop io_flush() Stefan Hajnoczi
                   ` (9 preceding siblings ...)
  2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 10/18] block/linux-aio: drop qemu_laio_completion_cb() Stefan Hajnoczi
@ 2013-07-25 15:18 ` Stefan Hajnoczi
  2013-07-29  8:19   ` Wenchao Xia
  2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 12/18] block/rbd: drop qemu_rbd_aio_flush_cb() Stefan Hajnoczi
                   ` (7 subsequent siblings)
  18 siblings, 1 reply; 53+ messages in thread
From: Stefan Hajnoczi @ 2013-07-25 15:18 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Ping Fan Liu, alex, Michael Roth, Stefan Hajnoczi,
	Paolo Bonzini

.io_flush() is no longer called so drop nbd_have_request().  We cannot
drop in_flight since it is still used by other code in block/nbd.c.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/nbd.c | 13 +++----------
 1 file changed, 3 insertions(+), 10 deletions(-)

diff --git a/block/nbd.c b/block/nbd.c
index 9c480b8..f1619f9 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -279,13 +279,6 @@ static void nbd_coroutine_start(BDRVNBDState *s, struct nbd_request *request)
     request->handle = INDEX_TO_HANDLE(s, i);
 }
 
-static int nbd_have_request(void *opaque)
-{
-    BDRVNBDState *s = opaque;
-
-    return s->in_flight > 0;
-}
-
 static void nbd_reply_ready(void *opaque)
 {
     BDRVNBDState *s = opaque;
@@ -342,7 +335,7 @@ static int nbd_co_send_request(BDRVNBDState *s, struct nbd_request *request,
     qemu_co_mutex_lock(&s->send_mutex);
     s->send_coroutine = qemu_coroutine_self();
     qemu_aio_set_fd_handler(s->sock, nbd_reply_ready, nbd_restart_write,
-                            nbd_have_request, s);
+                            NULL, s);
     if (qiov) {
         if (!s->is_unix) {
             socket_set_cork(s->sock, 1);
@@ -362,7 +355,7 @@ static int nbd_co_send_request(BDRVNBDState *s, struct nbd_request *request,
         rc = nbd_send_request(s->sock, request);
     }
     qemu_aio_set_fd_handler(s->sock, nbd_reply_ready, NULL,
-                            nbd_have_request, s);
+                            NULL, s);
     s->send_coroutine = NULL;
     qemu_co_mutex_unlock(&s->send_mutex);
     return rc;
@@ -439,7 +432,7 @@ static int nbd_establish_connection(BlockDriverState *bs)
      * kick the reply mechanism.  */
     qemu_set_nonblock(sock);
     qemu_aio_set_fd_handler(sock, nbd_reply_ready, NULL,
-                            nbd_have_request, s);
+                            NULL, s);
 
     s->sock = sock;
     s->size = size;
-- 
1.8.1.4

* [Qemu-devel] [PATCH v6 12/18] block/rbd: drop qemu_rbd_aio_flush_cb()
  2013-07-25 15:18 [Qemu-devel] [PATCH v6 00/18] aio: drop io_flush() Stefan Hajnoczi
                   ` (10 preceding siblings ...)
  2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 11/18] block/nbd: drop nbd_have_request() Stefan Hajnoczi
@ 2013-07-25 15:18 ` Stefan Hajnoczi
  2013-07-29  8:20   ` Wenchao Xia
  2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 13/18] block/sheepdog: drop have_co_req() and aio_flush_request() Stefan Hajnoczi
                   ` (6 subsequent siblings)
  18 siblings, 1 reply; 53+ messages in thread
From: Stefan Hajnoczi @ 2013-07-25 15:18 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Ping Fan Liu, alex, Michael Roth, Stefan Hajnoczi,
	Paolo Bonzini

.io_flush() is no longer called so drop qemu_rbd_aio_flush_cb().
qemu_aio_count is unused now so drop it too.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/rbd.c | 14 +-------------
 1 file changed, 1 insertion(+), 13 deletions(-)

diff --git a/block/rbd.c b/block/rbd.c
index cb71751..71b4a0c 100644
--- a/block/rbd.c
+++ b/block/rbd.c
@@ -100,7 +100,6 @@ typedef struct BDRVRBDState {
     rados_ioctx_t io_ctx;
     rbd_image_t image;
     char name[RBD_MAX_IMAGE_NAME_SIZE];
-    int qemu_aio_count;
     char *snap;
     int event_reader_pos;
     RADOSCB *event_rcb;
@@ -428,19 +427,11 @@ static void qemu_rbd_aio_event_reader(void *opaque)
             if (s->event_reader_pos == sizeof(s->event_rcb)) {
                 s->event_reader_pos = 0;
                 qemu_rbd_complete_aio(s->event_rcb);
-                s->qemu_aio_count--;
             }
         }
     } while (ret < 0 && errno == EINTR);
 }
 
-static int qemu_rbd_aio_flush_cb(void *opaque)
-{
-    BDRVRBDState *s = opaque;
-
-    return (s->qemu_aio_count > 0);
-}
-
 /* TODO Convert to fine grained options */
 static QemuOptsList runtime_opts = {
     .name = "rbd",
@@ -554,7 +545,7 @@ static int qemu_rbd_open(BlockDriverState *bs, QDict *options, int flags)
     fcntl(s->fds[0], F_SETFL, O_NONBLOCK);
     fcntl(s->fds[1], F_SETFL, O_NONBLOCK);
     qemu_aio_set_fd_handler(s->fds[RBD_FD_READ], qemu_rbd_aio_event_reader,
-                            NULL, qemu_rbd_aio_flush_cb, s);
+                            NULL, NULL, s);
 
 
     qemu_opts_del(opts);
@@ -741,8 +732,6 @@ static BlockDriverAIOCB *rbd_start_aio(BlockDriverState *bs,
     off = sector_num * BDRV_SECTOR_SIZE;
     size = nb_sectors * BDRV_SECTOR_SIZE;
 
-    s->qemu_aio_count++; /* All the RADOSCB */
-
     rcb = g_malloc(sizeof(RADOSCB));
     rcb->done = 0;
     rcb->acb = acb;
@@ -779,7 +768,6 @@ static BlockDriverAIOCB *rbd_start_aio(BlockDriverState *bs,
 
 failed:
     g_free(rcb);
-    s->qemu_aio_count--;
     qemu_aio_release(acb);
     return NULL;
 }
-- 
1.8.1.4

* [Qemu-devel] [PATCH v6 13/18] block/sheepdog: drop have_co_req() and aio_flush_request()
  2013-07-25 15:18 [Qemu-devel] [PATCH v6 00/18] aio: drop io_flush() Stefan Hajnoczi
                   ` (11 preceding siblings ...)
  2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 12/18] block/rbd: drop qemu_rbd_aio_flush_cb() Stefan Hajnoczi
@ 2013-07-25 15:18 ` Stefan Hajnoczi
  2013-07-26  5:50   ` MORITA Kazutaka
  2013-07-29  8:24   ` Wenchao Xia
  2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 14/18] block/ssh: drop return_true() Stefan Hajnoczi
                   ` (5 subsequent siblings)
  18 siblings, 2 replies; 53+ messages in thread
From: Stefan Hajnoczi @ 2013-07-25 15:18 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Ping Fan Liu, alex, Michael Roth, Stefan Hajnoczi,
	Paolo Bonzini

.io_flush() is no longer called so drop have_co_req() and
aio_flush_request().

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/sheepdog.c | 25 +++++--------------------
 1 file changed, 5 insertions(+), 20 deletions(-)

diff --git a/block/sheepdog.c b/block/sheepdog.c
index 6a41ad9..e216047 100644
--- a/block/sheepdog.c
+++ b/block/sheepdog.c
@@ -509,13 +509,6 @@ static void restart_co_req(void *opaque)
     qemu_coroutine_enter(co, NULL);
 }
 
-static int have_co_req(void *opaque)
-{
-    /* this handler is set only when there is a pending request, so
-     * always returns 1. */
-    return 1;
-}
-
 typedef struct SheepdogReqCo {
     int sockfd;
     SheepdogReq *hdr;
@@ -538,14 +531,14 @@ static coroutine_fn void do_co_req(void *opaque)
     unsigned int *rlen = srco->rlen;
 
     co = qemu_coroutine_self();
-    qemu_aio_set_fd_handler(sockfd, NULL, restart_co_req, have_co_req, co);
+    qemu_aio_set_fd_handler(sockfd, NULL, restart_co_req, NULL, co);
 
     ret = send_co_req(sockfd, hdr, data, wlen);
     if (ret < 0) {
         goto out;
     }
 
-    qemu_aio_set_fd_handler(sockfd, restart_co_req, NULL, have_co_req, co);
+    qemu_aio_set_fd_handler(sockfd, restart_co_req, NULL, NULL, co);
 
     ret = qemu_co_recv(sockfd, hdr, sizeof(*hdr));
     if (ret < sizeof(*hdr)) {
@@ -796,14 +789,6 @@ static void co_write_request(void *opaque)
     qemu_coroutine_enter(s->co_send, NULL);
 }
 
-static int aio_flush_request(void *opaque)
-{
-    BDRVSheepdogState *s = opaque;
-
-    return !QLIST_EMPTY(&s->inflight_aio_head) ||
-        !QLIST_EMPTY(&s->pending_aio_head);
-}
-
 /*
  * Return a socket discriptor to read/write objects.
  *
@@ -819,7 +804,7 @@ static int get_sheep_fd(BDRVSheepdogState *s)
         return fd;
     }
 
-    qemu_aio_set_fd_handler(fd, co_read_response, NULL, aio_flush_request, s);
+    qemu_aio_set_fd_handler(fd, co_read_response, NULL, NULL, s);
     return fd;
 }
 
@@ -1070,7 +1055,7 @@ static int coroutine_fn add_aio_request(BDRVSheepdogState *s, AIOReq *aio_req,
     qemu_co_mutex_lock(&s->lock);
     s->co_send = qemu_coroutine_self();
     qemu_aio_set_fd_handler(s->fd, co_read_response, co_write_request,
-                            aio_flush_request, s);
+                            NULL, s);
     socket_set_cork(s->fd, 1);
 
     /* send a header */
@@ -1092,7 +1077,7 @@ static int coroutine_fn add_aio_request(BDRVSheepdogState *s, AIOReq *aio_req,
 
     socket_set_cork(s->fd, 0);
     qemu_aio_set_fd_handler(s->fd, co_read_response, NULL,
-                            aio_flush_request, s);
+                            NULL, s);
     qemu_co_mutex_unlock(&s->lock);
 
     return 0;
-- 
1.8.1.4

* [Qemu-devel] [PATCH v6 14/18] block/ssh: drop return_true()
  2013-07-25 15:18 [Qemu-devel] [PATCH v6 00/18] aio: drop io_flush() Stefan Hajnoczi
                   ` (12 preceding siblings ...)
  2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 13/18] block/sheepdog: drop have_co_req() and aio_flush_request() Stefan Hajnoczi
@ 2013-07-25 15:18 ` Stefan Hajnoczi
  2013-07-29  8:25   ` Wenchao Xia
  2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 15/18] dataplane/virtio-blk: drop flush_true() and flush_io() Stefan Hajnoczi
                   ` (4 subsequent siblings)
  18 siblings, 1 reply; 53+ messages in thread
From: Stefan Hajnoczi @ 2013-07-25 15:18 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Ping Fan Liu, alex, Michael Roth, Stefan Hajnoczi,
	Paolo Bonzini

.io_flush() is no longer called so drop return_true().

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/ssh.c | 10 +---------
 1 file changed, 1 insertion(+), 9 deletions(-)

diff --git a/block/ssh.c b/block/ssh.c
index d7e7bf8..e149da9 100644
--- a/block/ssh.c
+++ b/block/ssh.c
@@ -740,14 +740,6 @@ static void restart_coroutine(void *opaque)
     qemu_coroutine_enter(co, NULL);
 }
 
-/* Always true because when we have called set_fd_handler there is
- * always a request being processed.
- */
-static int return_true(void *opaque)
-{
-    return 1;
-}
-
 static coroutine_fn void set_fd_handler(BDRVSSHState *s)
 {
     int r;
@@ -766,7 +758,7 @@ static coroutine_fn void set_fd_handler(BDRVSSHState *s)
     DPRINTF("s->sock=%d rd_handler=%p wr_handler=%p", s->sock,
             rd_handler, wr_handler);
 
-    qemu_aio_set_fd_handler(s->sock, rd_handler, wr_handler, return_true, co);
+    qemu_aio_set_fd_handler(s->sock, rd_handler, wr_handler, NULL, co);
 }
 
 static coroutine_fn void clear_fd_handler(BDRVSSHState *s)
-- 
1.8.1.4

* [Qemu-devel] [PATCH v6 15/18] dataplane/virtio-blk: drop flush_true() and flush_io()
  2013-07-25 15:18 [Qemu-devel] [PATCH v6 00/18] aio: drop io_flush() Stefan Hajnoczi
                   ` (13 preceding siblings ...)
  2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 14/18] block/ssh: drop return_true() Stefan Hajnoczi
@ 2013-07-25 15:18 ` Stefan Hajnoczi
  2013-07-29  8:32   ` Wenchao Xia
  2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 16/18] thread-pool: drop thread_pool_active() Stefan Hajnoczi
                   ` (3 subsequent siblings)
  18 siblings, 1 reply; 53+ messages in thread
From: Stefan Hajnoczi @ 2013-07-25 15:18 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Ping Fan Liu, alex, Michael Roth, Stefan Hajnoczi,
	Paolo Bonzini

.io_flush() is no longer called so drop flush_true() and flush_io().

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 hw/block/dataplane/virtio-blk.c | 17 ++---------------
 1 file changed, 2 insertions(+), 15 deletions(-)

diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index 8d3e145..f8624d1 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -264,11 +264,6 @@ static int process_request(IOQueue *ioq, struct iovec iov[],
     }
 }
 
-static int flush_true(EventNotifier *e)
-{
-    return true;
-}
-
 static void handle_notify(EventNotifier *e)
 {
     VirtIOBlockDataPlane *s = container_of(e, VirtIOBlockDataPlane,
@@ -348,14 +343,6 @@ static void handle_notify(EventNotifier *e)
     }
 }
 
-static int flush_io(EventNotifier *e)
-{
-    VirtIOBlockDataPlane *s = container_of(e, VirtIOBlockDataPlane,
-                                           io_notifier);
-
-    return s->num_reqs > 0;
-}
-
 static void handle_io(EventNotifier *e)
 {
     VirtIOBlockDataPlane *s = container_of(e, VirtIOBlockDataPlane,
@@ -486,7 +473,7 @@ void virtio_blk_data_plane_start(VirtIOBlockDataPlane *s)
         exit(1);
     }
     s->host_notifier = *virtio_queue_get_host_notifier(vq);
-    aio_set_event_notifier(s->ctx, &s->host_notifier, handle_notify, flush_true);
+    aio_set_event_notifier(s->ctx, &s->host_notifier, handle_notify, NULL);
 
     /* Set up ioqueue */
     ioq_init(&s->ioqueue, s->fd, REQ_MAX);
@@ -494,7 +481,7 @@ void virtio_blk_data_plane_start(VirtIOBlockDataPlane *s)
         ioq_put_iocb(&s->ioqueue, &s->requests[i].iocb);
     }
     s->io_notifier = *ioq_get_notifier(&s->ioqueue);
-    aio_set_event_notifier(s->ctx, &s->io_notifier, handle_io, flush_io);
+    aio_set_event_notifier(s->ctx, &s->io_notifier, handle_io, NULL);
 
     s->started = true;
     trace_virtio_blk_data_plane_start(s);
-- 
1.8.1.4

* [Qemu-devel] [PATCH v6 16/18] thread-pool: drop thread_pool_active()
  2013-07-25 15:18 [Qemu-devel] [PATCH v6 00/18] aio: drop io_flush() Stefan Hajnoczi
                   ` (14 preceding siblings ...)
  2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 15/18] dataplane/virtio-blk: drop flush_true() and flush_io() Stefan Hajnoczi
@ 2013-07-25 15:18 ` Stefan Hajnoczi
  2013-07-29  8:33   ` Wenchao Xia
  2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 17/18] tests: drop event_active_cb() Stefan Hajnoczi
                   ` (2 subsequent siblings)
  18 siblings, 1 reply; 53+ messages in thread
From: Stefan Hajnoczi @ 2013-07-25 15:18 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Ping Fan Liu, alex, Michael Roth, Stefan Hajnoczi,
	Paolo Bonzini

.io_flush() is no longer called so drop thread_pool_active().  The block
layer is the only thread-pool.c user and it already tracks in-flight
requests, so thread_pool_active() is unnecessary.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 thread-pool.c | 8 +-------
 1 file changed, 1 insertion(+), 7 deletions(-)

diff --git a/thread-pool.c b/thread-pool.c
index 0ebd4c2..096f007 100644
--- a/thread-pool.c
+++ b/thread-pool.c
@@ -197,12 +197,6 @@ restart:
     }
 }
 
-static int thread_pool_active(EventNotifier *notifier)
-{
-    ThreadPool *pool = container_of(notifier, ThreadPool, notifier);
-    return !QLIST_EMPTY(&pool->head);
-}
-
 static void thread_pool_cancel(BlockDriverAIOCB *acb)
 {
     ThreadPoolElement *elem = (ThreadPoolElement *)acb;
@@ -310,7 +304,7 @@ static void thread_pool_init_one(ThreadPool *pool, AioContext *ctx)
     QTAILQ_INIT(&pool->request_list);
 
     aio_set_event_notifier(ctx, &pool->notifier, event_notifier_ready,
-                           thread_pool_active);
+                           NULL);
 }
 
 ThreadPool *thread_pool_new(AioContext *ctx)
-- 
1.8.1.4


* [Qemu-devel] [PATCH v6 17/18] tests: drop event_active_cb()
  2013-07-25 15:18 [Qemu-devel] [PATCH v6 00/18] aio: drop io_flush() Stefan Hajnoczi
                   ` (15 preceding siblings ...)
  2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 16/18] thread-pool: drop thread_pool_active() Stefan Hajnoczi
@ 2013-07-25 15:18 ` Stefan Hajnoczi
  2013-07-29  8:34   ` Wenchao Xia
  2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 18/18] aio: drop io_flush argument Stefan Hajnoczi
  2013-08-06 15:07 ` [Qemu-devel] [PATCH v6 00/18] aio: drop io_flush() Stefan Hajnoczi
  18 siblings, 1 reply; 53+ messages in thread
From: Stefan Hajnoczi @ 2013-07-25 15:18 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Ping Fan Liu, alex, Michael Roth, Stefan Hajnoczi,
	Paolo Bonzini

event_active_cb() is no longer called, so drop it and pass NULL as the
io_flush argument to aio_set_event_notifier() in the tests.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 tests/test-aio.c | 22 ++++++++--------------
 1 file changed, 8 insertions(+), 14 deletions(-)

diff --git a/tests/test-aio.c b/tests/test-aio.c
index 1251952..7b2892a 100644
--- a/tests/test-aio.c
+++ b/tests/test-aio.c
@@ -65,12 +65,6 @@ static void bh_delete_cb(void *opaque)
     }
 }
 
-static int event_active_cb(EventNotifier *e)
-{
-    EventNotifierTestData *data = container_of(e, EventNotifierTestData, e);
-    return data->active > 0;
-}
-
 static void event_ready_cb(EventNotifier *e)
 {
     EventNotifierTestData *data = container_of(e, EventNotifierTestData, e);
@@ -239,7 +233,7 @@ static void test_set_event_notifier(void)
 {
     EventNotifierTestData data = { .n = 0, .active = 0 };
     event_notifier_init(&data.e, false);
-    aio_set_event_notifier(ctx, &data.e, event_ready_cb, event_active_cb);
+    aio_set_event_notifier(ctx, &data.e, event_ready_cb, NULL);
     g_assert(!aio_poll(ctx, false));
     g_assert_cmpint(data.n, ==, 0);
 
@@ -253,7 +247,7 @@ static void test_wait_event_notifier(void)
 {
     EventNotifierTestData data = { .n = 0, .active = 1 };
     event_notifier_init(&data.e, false);
-    aio_set_event_notifier(ctx, &data.e, event_ready_cb, event_active_cb);
+    aio_set_event_notifier(ctx, &data.e, event_ready_cb, NULL);
     g_assert(!aio_poll(ctx, false));
     g_assert_cmpint(data.n, ==, 0);
     g_assert_cmpint(data.active, ==, 1);
@@ -278,7 +272,7 @@ static void test_flush_event_notifier(void)
 {
     EventNotifierTestData data = { .n = 0, .active = 10, .auto_set = true };
     event_notifier_init(&data.e, false);
-    aio_set_event_notifier(ctx, &data.e, event_ready_cb, event_active_cb);
+    aio_set_event_notifier(ctx, &data.e, event_ready_cb, NULL);
     g_assert(!aio_poll(ctx, false));
     g_assert_cmpint(data.n, ==, 0);
     g_assert_cmpint(data.active, ==, 10);
@@ -318,7 +312,7 @@ static void test_wait_event_notifier_noflush(void)
 
     /* An active event notifier forces aio_poll to look at EventNotifiers.  */
     event_notifier_init(&dummy.e, false);
-    aio_set_event_notifier(ctx, &dummy.e, event_ready_cb, event_active_cb);
+    aio_set_event_notifier(ctx, &dummy.e, event_ready_cb, NULL);
 
     event_notifier_set(&data.e);
     g_assert(aio_poll(ctx, false));
@@ -521,7 +515,7 @@ static void test_source_set_event_notifier(void)
 {
     EventNotifierTestData data = { .n = 0, .active = 0 };
     event_notifier_init(&data.e, false);
-    aio_set_event_notifier(ctx, &data.e, event_ready_cb, event_active_cb);
+    aio_set_event_notifier(ctx, &data.e, event_ready_cb, NULL);
     while (g_main_context_iteration(NULL, false));
     g_assert_cmpint(data.n, ==, 0);
 
@@ -535,7 +529,7 @@ static void test_source_wait_event_notifier(void)
 {
     EventNotifierTestData data = { .n = 0, .active = 1 };
     event_notifier_init(&data.e, false);
-    aio_set_event_notifier(ctx, &data.e, event_ready_cb, event_active_cb);
+    aio_set_event_notifier(ctx, &data.e, event_ready_cb, NULL);
     g_assert(g_main_context_iteration(NULL, false));
     g_assert_cmpint(data.n, ==, 0);
     g_assert_cmpint(data.active, ==, 1);
@@ -560,7 +554,7 @@ static void test_source_flush_event_notifier(void)
 {
     EventNotifierTestData data = { .n = 0, .active = 10, .auto_set = true };
     event_notifier_init(&data.e, false);
-    aio_set_event_notifier(ctx, &data.e, event_ready_cb, event_active_cb);
+    aio_set_event_notifier(ctx, &data.e, event_ready_cb, NULL);
     g_assert(g_main_context_iteration(NULL, false));
     g_assert_cmpint(data.n, ==, 0);
     g_assert_cmpint(data.active, ==, 10);
@@ -600,7 +594,7 @@ static void test_source_wait_event_notifier_noflush(void)
 
     /* An active event notifier forces aio_poll to look at EventNotifiers.  */
     event_notifier_init(&dummy.e, false);
-    aio_set_event_notifier(ctx, &dummy.e, event_ready_cb, event_active_cb);
+    aio_set_event_notifier(ctx, &dummy.e, event_ready_cb, NULL);
 
     event_notifier_set(&data.e);
     g_assert(g_main_context_iteration(NULL, false));
-- 
1.8.1.4


* [Qemu-devel] [PATCH v6 18/18] aio: drop io_flush argument
  2013-07-25 15:18 [Qemu-devel] [PATCH v6 00/18] aio: drop io_flush() Stefan Hajnoczi
                   ` (16 preceding siblings ...)
  2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 17/18] tests: drop event_active_cb() Stefan Hajnoczi
@ 2013-07-25 15:18 ` Stefan Hajnoczi
  2013-07-29  8:37   ` Wenchao Xia
  2013-08-06 15:07 ` [Qemu-devel] [PATCH v6 00/18] aio: drop io_flush() Stefan Hajnoczi
  18 siblings, 1 reply; 53+ messages in thread
From: Stefan Hajnoczi @ 2013-07-25 15:18 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Ping Fan Liu, alex, Michael Roth, Stefan Hajnoczi,
	Paolo Bonzini

The .io_flush() handler no longer exists and has no users.  Drop the
io_flush argument to aio_set_fd_handler() and related functions.

The AioFlushEventNotifierHandler and AioFlushHandler typedefs are no
longer used and are dropped too.
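
Concretely, the calling convention changes as follows (sketch, using the
prototypes from include/block/aio.h below):

    /* before */
    aio_set_fd_handler(ctx, fd, io_read, io_write, io_flush, opaque);
    aio_set_event_notifier(ctx, notifier, io_read, io_flush);

    /* after */
    aio_set_fd_handler(ctx, fd, io_read, io_write, opaque);
    aio_set_event_notifier(ctx, notifier, io_read);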

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 aio-posix.c                     |  7 ++-----
 aio-win32.c                     |  3 +--
 async.c                         |  4 ++--
 block/curl.c                    |  9 ++++-----
 block/gluster.c                 |  7 +++----
 block/iscsi.c                   |  3 +--
 block/linux-aio.c               |  3 +--
 block/nbd.c                     | 11 ++++-------
 block/rbd.c                     |  4 ++--
 block/sheepdog.c                | 18 ++++++++----------
 block/ssh.c                     |  4 ++--
 hw/block/dataplane/virtio-blk.c |  8 ++++----
 include/block/aio.h             | 14 ++------------
 main-loop.c                     |  9 +++------
 tests/test-aio.c                | 40 ++++++++++++++++++++--------------------
 thread-pool.c                   |  5 ++---
 16 files changed, 61 insertions(+), 88 deletions(-)

diff --git a/aio-posix.c b/aio-posix.c
index 7d66048..2440eb9 100644
--- a/aio-posix.c
+++ b/aio-posix.c
@@ -46,7 +46,6 @@ void aio_set_fd_handler(AioContext *ctx,
                         int fd,
                         IOHandler *io_read,
                         IOHandler *io_write,
-                        AioFlushHandler *io_flush,
                         void *opaque)
 {
     AioHandler *node;
@@ -95,12 +94,10 @@ void aio_set_fd_handler(AioContext *ctx,
 
 void aio_set_event_notifier(AioContext *ctx,
                             EventNotifier *notifier,
-                            EventNotifierHandler *io_read,
-                            AioFlushEventNotifierHandler *io_flush)
+                            EventNotifierHandler *io_read)
 {
     aio_set_fd_handler(ctx, event_notifier_get_fd(notifier),
-                       (IOHandler *)io_read, NULL,
-                       (AioFlushHandler *)io_flush, notifier);
+                       (IOHandler *)io_read, NULL, notifier);
 }
 
 bool aio_pending(AioContext *ctx)
diff --git a/aio-win32.c b/aio-win32.c
index 4309c16..78b2801 100644
--- a/aio-win32.c
+++ b/aio-win32.c
@@ -30,8 +30,7 @@ struct AioHandler {
 
 void aio_set_event_notifier(AioContext *ctx,
                             EventNotifier *e,
-                            EventNotifierHandler *io_notify,
-                            AioFlushEventNotifierHandler *io_flush)
+                            EventNotifierHandler *io_notify)
 {
     AioHandler *node;
 
diff --git a/async.c b/async.c
index 5ce3633..9791d8e 100644
--- a/async.c
+++ b/async.c
@@ -201,7 +201,7 @@ aio_ctx_finalize(GSource     *source)
     AioContext *ctx = (AioContext *) source;
 
     thread_pool_free(ctx->thread_pool);
-    aio_set_event_notifier(ctx, &ctx->notifier, NULL, NULL);
+    aio_set_event_notifier(ctx, &ctx->notifier, NULL);
     event_notifier_cleanup(&ctx->notifier);
     qemu_mutex_destroy(&ctx->bh_lock);
     g_array_free(ctx->pollfds, TRUE);
@@ -243,7 +243,7 @@ AioContext *aio_context_new(void)
     event_notifier_init(&ctx->notifier, false);
     aio_set_event_notifier(ctx, &ctx->notifier, 
                            (EventNotifierHandler *)
-                           event_notifier_test_and_clear, NULL);
+                           event_notifier_test_and_clear);
 
     return ctx;
 }
diff --git a/block/curl.c b/block/curl.c
index 5999924..e566855 100644
--- a/block/curl.c
+++ b/block/curl.c
@@ -93,17 +93,16 @@ static int curl_sock_cb(CURL *curl, curl_socket_t fd, int action,
     DPRINTF("CURL (AIO): Sock action %d on fd %d\n", action, fd);
     switch (action) {
         case CURL_POLL_IN:
-            qemu_aio_set_fd_handler(fd, curl_multi_do, NULL, NULL, s);
+            qemu_aio_set_fd_handler(fd, curl_multi_do, NULL, s);
             break;
         case CURL_POLL_OUT:
-            qemu_aio_set_fd_handler(fd, NULL, curl_multi_do, NULL, s);
+            qemu_aio_set_fd_handler(fd, NULL, curl_multi_do, s);
             break;
         case CURL_POLL_INOUT:
-            qemu_aio_set_fd_handler(fd, curl_multi_do, curl_multi_do,
-                                    NULL, s);
+            qemu_aio_set_fd_handler(fd, curl_multi_do, curl_multi_do, s);
             break;
         case CURL_POLL_REMOVE:
-            qemu_aio_set_fd_handler(fd, NULL, NULL, NULL, NULL);
+            qemu_aio_set_fd_handler(fd, NULL, NULL, NULL);
             break;
     }
 
diff --git a/block/gluster.c b/block/gluster.c
index 7c1a9f2..79a10ec 100644
--- a/block/gluster.c
+++ b/block/gluster.c
@@ -339,7 +339,7 @@ static int qemu_gluster_open(BlockDriverState *bs,  QDict *options,
     }
     fcntl(s->fds[GLUSTER_FD_READ], F_SETFL, O_NONBLOCK);
     qemu_aio_set_fd_handler(s->fds[GLUSTER_FD_READ],
-        qemu_gluster_aio_event_reader, NULL, NULL, s);
+        qemu_gluster_aio_event_reader, NULL, s);
 
 out:
     qemu_opts_del(opts);
@@ -438,8 +438,7 @@ static void gluster_finish_aiocb(struct glfs_fd *fd, ssize_t ret, void *arg)
         qemu_aio_release(acb);
         close(s->fds[GLUSTER_FD_READ]);
         close(s->fds[GLUSTER_FD_WRITE]);
-        qemu_aio_set_fd_handler(s->fds[GLUSTER_FD_READ], NULL, NULL, NULL,
-            NULL);
+        qemu_aio_set_fd_handler(s->fds[GLUSTER_FD_READ], NULL, NULL, NULL);
         bs->drv = NULL; /* Make the disk inaccessible */
         qemu_mutex_unlock_iothread();
     }
@@ -584,7 +583,7 @@ static void qemu_gluster_close(BlockDriverState *bs)
 
     close(s->fds[GLUSTER_FD_READ]);
     close(s->fds[GLUSTER_FD_WRITE]);
-    qemu_aio_set_fd_handler(s->fds[GLUSTER_FD_READ], NULL, NULL, NULL, NULL);
+    qemu_aio_set_fd_handler(s->fds[GLUSTER_FD_READ], NULL, NULL, NULL);
 
     if (s->fd) {
         glfs_close(s->fd);
diff --git a/block/iscsi.c b/block/iscsi.c
index e2692d6..a9cf297 100644
--- a/block/iscsi.c
+++ b/block/iscsi.c
@@ -159,7 +159,6 @@ iscsi_set_events(IscsiLun *iscsilun)
         qemu_aio_set_fd_handler(iscsi_get_fd(iscsi),
                       iscsi_process_read,
                       (ev & POLLOUT) ? iscsi_process_write : NULL,
-                      NULL,
                       iscsilun);
 
     }
@@ -1206,7 +1205,7 @@ static void iscsi_close(BlockDriverState *bs)
         qemu_del_timer(iscsilun->nop_timer);
         qemu_free_timer(iscsilun->nop_timer);
     }
-    qemu_aio_set_fd_handler(iscsi_get_fd(iscsi), NULL, NULL, NULL, NULL);
+    qemu_aio_set_fd_handler(iscsi_get_fd(iscsi), NULL, NULL, NULL);
     iscsi_destroy_context(iscsi);
     memset(iscsilun, 0, sizeof(IscsiLun));
 }
diff --git a/block/linux-aio.c b/block/linux-aio.c
index d9128f3..53434e2 100644
--- a/block/linux-aio.c
+++ b/block/linux-aio.c
@@ -190,8 +190,7 @@ void *laio_init(void)
         goto out_close_efd;
     }
 
-    qemu_aio_set_event_notifier(&s->e, qemu_laio_completion_cb,
-                                NULL);
+    qemu_aio_set_event_notifier(&s->e, qemu_laio_completion_cb);
 
     return s;
 
diff --git a/block/nbd.c b/block/nbd.c
index f1619f9..691066f 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -334,8 +334,7 @@ static int nbd_co_send_request(BDRVNBDState *s, struct nbd_request *request,
 
     qemu_co_mutex_lock(&s->send_mutex);
     s->send_coroutine = qemu_coroutine_self();
-    qemu_aio_set_fd_handler(s->sock, nbd_reply_ready, nbd_restart_write,
-                            NULL, s);
+    qemu_aio_set_fd_handler(s->sock, nbd_reply_ready, nbd_restart_write, s);
     if (qiov) {
         if (!s->is_unix) {
             socket_set_cork(s->sock, 1);
@@ -354,8 +353,7 @@ static int nbd_co_send_request(BDRVNBDState *s, struct nbd_request *request,
     } else {
         rc = nbd_send_request(s->sock, request);
     }
-    qemu_aio_set_fd_handler(s->sock, nbd_reply_ready, NULL,
-                            NULL, s);
+    qemu_aio_set_fd_handler(s->sock, nbd_reply_ready, NULL, s);
     s->send_coroutine = NULL;
     qemu_co_mutex_unlock(&s->send_mutex);
     return rc;
@@ -431,8 +429,7 @@ static int nbd_establish_connection(BlockDriverState *bs)
     /* Now that we're connected, set the socket to be non-blocking and
      * kick the reply mechanism.  */
     qemu_set_nonblock(sock);
-    qemu_aio_set_fd_handler(sock, nbd_reply_ready, NULL,
-                            NULL, s);
+    qemu_aio_set_fd_handler(sock, nbd_reply_ready, NULL, s);
 
     s->sock = sock;
     s->size = size;
@@ -452,7 +449,7 @@ static void nbd_teardown_connection(BlockDriverState *bs)
     request.len = 0;
     nbd_send_request(s->sock, &request);
 
-    qemu_aio_set_fd_handler(s->sock, NULL, NULL, NULL, NULL);
+    qemu_aio_set_fd_handler(s->sock, NULL, NULL, NULL);
     closesocket(s->sock);
 }
 
diff --git a/block/rbd.c b/block/rbd.c
index 71b4a0c..e798e19 100644
--- a/block/rbd.c
+++ b/block/rbd.c
@@ -545,7 +545,7 @@ static int qemu_rbd_open(BlockDriverState *bs, QDict *options, int flags)
     fcntl(s->fds[0], F_SETFL, O_NONBLOCK);
     fcntl(s->fds[1], F_SETFL, O_NONBLOCK);
     qemu_aio_set_fd_handler(s->fds[RBD_FD_READ], qemu_rbd_aio_event_reader,
-                            NULL, NULL, s);
+                            NULL, s);
 
 
     qemu_opts_del(opts);
@@ -569,7 +569,7 @@ static void qemu_rbd_close(BlockDriverState *bs)
 
     close(s->fds[0]);
     close(s->fds[1]);
-    qemu_aio_set_fd_handler(s->fds[RBD_FD_READ], NULL, NULL, NULL, NULL);
+    qemu_aio_set_fd_handler(s->fds[RBD_FD_READ], NULL, NULL, NULL);
 
     rbd_close(s->image);
     rados_ioctx_destroy(s->io_ctx);
diff --git a/block/sheepdog.c b/block/sheepdog.c
index e216047..2848b60 100644
--- a/block/sheepdog.c
+++ b/block/sheepdog.c
@@ -531,14 +531,14 @@ static coroutine_fn void do_co_req(void *opaque)
     unsigned int *rlen = srco->rlen;
 
     co = qemu_coroutine_self();
-    qemu_aio_set_fd_handler(sockfd, NULL, restart_co_req, NULL, co);
+    qemu_aio_set_fd_handler(sockfd, NULL, restart_co_req, co);
 
     ret = send_co_req(sockfd, hdr, data, wlen);
     if (ret < 0) {
         goto out;
     }
 
-    qemu_aio_set_fd_handler(sockfd, restart_co_req, NULL, NULL, co);
+    qemu_aio_set_fd_handler(sockfd, restart_co_req, NULL, co);
 
     ret = qemu_co_recv(sockfd, hdr, sizeof(*hdr));
     if (ret < sizeof(*hdr)) {
@@ -563,7 +563,7 @@ static coroutine_fn void do_co_req(void *opaque)
 out:
     /* there is at most one request for this sockfd, so it is safe to
      * set each handler to NULL. */
-    qemu_aio_set_fd_handler(sockfd, NULL, NULL, NULL, NULL);
+    qemu_aio_set_fd_handler(sockfd, NULL, NULL, NULL);
 
     srco->ret = ret;
     srco->finished = true;
@@ -804,7 +804,7 @@ static int get_sheep_fd(BDRVSheepdogState *s)
         return fd;
     }
 
-    qemu_aio_set_fd_handler(fd, co_read_response, NULL, NULL, s);
+    qemu_aio_set_fd_handler(fd, co_read_response, NULL, s);
     return fd;
 }
 
@@ -1054,8 +1054,7 @@ static int coroutine_fn add_aio_request(BDRVSheepdogState *s, AIOReq *aio_req,
 
     qemu_co_mutex_lock(&s->lock);
     s->co_send = qemu_coroutine_self();
-    qemu_aio_set_fd_handler(s->fd, co_read_response, co_write_request,
-                            NULL, s);
+    qemu_aio_set_fd_handler(s->fd, co_read_response, co_write_request, s);
     socket_set_cork(s->fd, 1);
 
     /* send a header */
@@ -1076,8 +1075,7 @@ static int coroutine_fn add_aio_request(BDRVSheepdogState *s, AIOReq *aio_req,
     }
 
     socket_set_cork(s->fd, 0);
-    qemu_aio_set_fd_handler(s->fd, co_read_response, NULL,
-                            NULL, s);
+    qemu_aio_set_fd_handler(s->fd, co_read_response, NULL, s);
     qemu_co_mutex_unlock(&s->lock);
 
     return 0;
@@ -1335,7 +1333,7 @@ static int sd_open(BlockDriverState *bs, QDict *options, int flags)
     g_free(buf);
     return 0;
 out:
-    qemu_aio_set_fd_handler(s->fd, NULL, NULL, NULL, NULL);
+    qemu_aio_set_fd_handler(s->fd, NULL, NULL, NULL);
     if (s->fd >= 0) {
         closesocket(s->fd);
     }
@@ -1563,7 +1561,7 @@ static void sd_close(BlockDriverState *bs)
         error_report("%s, %s", sd_strerror(rsp->result), s->name);
     }
 
-    qemu_aio_set_fd_handler(s->fd, NULL, NULL, NULL, NULL);
+    qemu_aio_set_fd_handler(s->fd, NULL, NULL, NULL);
     closesocket(s->fd);
     g_free(s->host_spec);
 }
diff --git a/block/ssh.c b/block/ssh.c
index e149da9..27691b4 100644
--- a/block/ssh.c
+++ b/block/ssh.c
@@ -758,13 +758,13 @@ static coroutine_fn void set_fd_handler(BDRVSSHState *s)
     DPRINTF("s->sock=%d rd_handler=%p wr_handler=%p", s->sock,
             rd_handler, wr_handler);
 
-    qemu_aio_set_fd_handler(s->sock, rd_handler, wr_handler, NULL, co);
+    qemu_aio_set_fd_handler(s->sock, rd_handler, wr_handler, co);
 }
 
 static coroutine_fn void clear_fd_handler(BDRVSSHState *s)
 {
     DPRINTF("s->sock=%d", s->sock);
-    qemu_aio_set_fd_handler(s->sock, NULL, NULL, NULL, NULL);
+    qemu_aio_set_fd_handler(s->sock, NULL, NULL, NULL);
 }
 
 /* A non-blocking call returned EAGAIN, so yield, ensuring the
diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index f8624d1..fb356ee 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -473,7 +473,7 @@ void virtio_blk_data_plane_start(VirtIOBlockDataPlane *s)
         exit(1);
     }
     s->host_notifier = *virtio_queue_get_host_notifier(vq);
-    aio_set_event_notifier(s->ctx, &s->host_notifier, handle_notify, NULL);
+    aio_set_event_notifier(s->ctx, &s->host_notifier, handle_notify);
 
     /* Set up ioqueue */
     ioq_init(&s->ioqueue, s->fd, REQ_MAX);
@@ -481,7 +481,7 @@ void virtio_blk_data_plane_start(VirtIOBlockDataPlane *s)
         ioq_put_iocb(&s->ioqueue, &s->requests[i].iocb);
     }
     s->io_notifier = *ioq_get_notifier(&s->ioqueue);
-    aio_set_event_notifier(s->ctx, &s->io_notifier, handle_io, NULL);
+    aio_set_event_notifier(s->ctx, &s->io_notifier, handle_io);
 
     s->started = true;
     trace_virtio_blk_data_plane_start(s);
@@ -513,10 +513,10 @@ void virtio_blk_data_plane_stop(VirtIOBlockDataPlane *s)
         qemu_thread_join(&s->thread);
     }
 
-    aio_set_event_notifier(s->ctx, &s->io_notifier, NULL, NULL);
+    aio_set_event_notifier(s->ctx, &s->io_notifier, NULL);
     ioq_cleanup(&s->ioqueue);
 
-    aio_set_event_notifier(s->ctx, &s->host_notifier, NULL, NULL);
+    aio_set_event_notifier(s->ctx, &s->host_notifier, NULL);
     k->set_host_notifier(qbus->parent, 0, false);
 
     aio_context_unref(s->ctx);
diff --git a/include/block/aio.h b/include/block/aio.h
index cc77771..5743bf1 100644
--- a/include/block/aio.h
+++ b/include/block/aio.h
@@ -74,9 +74,6 @@ typedef struct AioContext {
     struct ThreadPool *thread_pool;
 } AioContext;
 
-/* Returns 1 if there are still outstanding AIO requests; 0 otherwise */
-typedef int (AioFlushEventNotifierHandler)(EventNotifier *e);
-
 /**
  * aio_context_new: Allocate a new AioContext.
  *
@@ -198,9 +195,6 @@ bool aio_pending(AioContext *ctx);
 bool aio_poll(AioContext *ctx, bool blocking);
 
 #ifdef CONFIG_POSIX
-/* Returns 1 if there are still outstanding AIO requests; 0 otherwise */
-typedef int (AioFlushHandler)(void *opaque);
-
 /* Register a file descriptor and associated callbacks.  Behaves very similarly
  * to qemu_set_fd_handler2.  Unlike qemu_set_fd_handler2, these callbacks will
  * be invoked when using qemu_aio_wait().
@@ -212,7 +206,6 @@ void aio_set_fd_handler(AioContext *ctx,
                         int fd,
                         IOHandler *io_read,
                         IOHandler *io_write,
-                        AioFlushHandler *io_flush,
                         void *opaque);
 #endif
 
@@ -225,8 +218,7 @@ void aio_set_fd_handler(AioContext *ctx,
  */
 void aio_set_event_notifier(AioContext *ctx,
                             EventNotifier *notifier,
-                            EventNotifierHandler *io_read,
-                            AioFlushEventNotifierHandler *io_flush);
+                            EventNotifierHandler *io_read);
 
 /* Return a GSource that lets the main loop poll the file descriptors attached
  * to this AioContext.
@@ -240,14 +232,12 @@ struct ThreadPool *aio_get_thread_pool(AioContext *ctx);
 
 bool qemu_aio_wait(void);
 void qemu_aio_set_event_notifier(EventNotifier *notifier,
-                                 EventNotifierHandler *io_read,
-                                 AioFlushEventNotifierHandler *io_flush);
+                                 EventNotifierHandler *io_read);
 
 #ifdef CONFIG_POSIX
 void qemu_aio_set_fd_handler(int fd,
                              IOHandler *io_read,
                              IOHandler *io_write,
-                             AioFlushHandler *io_flush,
                              void *opaque);
 #endif
 
diff --git a/main-loop.c b/main-loop.c
index a44fff6..2d9774e 100644
--- a/main-loop.c
+++ b/main-loop.c
@@ -489,17 +489,14 @@ bool qemu_aio_wait(void)
 void qemu_aio_set_fd_handler(int fd,
                              IOHandler *io_read,
                              IOHandler *io_write,
-                             AioFlushHandler *io_flush,
                              void *opaque)
 {
-    aio_set_fd_handler(qemu_aio_context, fd, io_read, io_write, io_flush,
-                       opaque);
+    aio_set_fd_handler(qemu_aio_context, fd, io_read, io_write, opaque);
 }
 #endif
 
 void qemu_aio_set_event_notifier(EventNotifier *notifier,
-                                 EventNotifierHandler *io_read,
-                                 AioFlushEventNotifierHandler *io_flush)
+                                 EventNotifierHandler *io_read)
 {
-    aio_set_event_notifier(qemu_aio_context, notifier, io_read, io_flush);
+    aio_set_event_notifier(qemu_aio_context, notifier, io_read);
 }
diff --git a/tests/test-aio.c b/tests/test-aio.c
index 7b2892a..1ab5637 100644
--- a/tests/test-aio.c
+++ b/tests/test-aio.c
@@ -233,11 +233,11 @@ static void test_set_event_notifier(void)
 {
     EventNotifierTestData data = { .n = 0, .active = 0 };
     event_notifier_init(&data.e, false);
-    aio_set_event_notifier(ctx, &data.e, event_ready_cb, NULL);
+    aio_set_event_notifier(ctx, &data.e, event_ready_cb);
     g_assert(!aio_poll(ctx, false));
     g_assert_cmpint(data.n, ==, 0);
 
-    aio_set_event_notifier(ctx, &data.e, NULL, NULL);
+    aio_set_event_notifier(ctx, &data.e, NULL);
     g_assert(!aio_poll(ctx, false));
     g_assert_cmpint(data.n, ==, 0);
     event_notifier_cleanup(&data.e);
@@ -247,7 +247,7 @@ static void test_wait_event_notifier(void)
 {
     EventNotifierTestData data = { .n = 0, .active = 1 };
     event_notifier_init(&data.e, false);
-    aio_set_event_notifier(ctx, &data.e, event_ready_cb, NULL);
+    aio_set_event_notifier(ctx, &data.e, event_ready_cb);
     g_assert(!aio_poll(ctx, false));
     g_assert_cmpint(data.n, ==, 0);
     g_assert_cmpint(data.active, ==, 1);
@@ -261,7 +261,7 @@ static void test_wait_event_notifier(void)
     g_assert_cmpint(data.n, ==, 1);
     g_assert_cmpint(data.active, ==, 0);
 
-    aio_set_event_notifier(ctx, &data.e, NULL, NULL);
+    aio_set_event_notifier(ctx, &data.e, NULL);
     g_assert(!aio_poll(ctx, false));
     g_assert_cmpint(data.n, ==, 1);
 
@@ -272,7 +272,7 @@ static void test_flush_event_notifier(void)
 {
     EventNotifierTestData data = { .n = 0, .active = 10, .auto_set = true };
     event_notifier_init(&data.e, false);
-    aio_set_event_notifier(ctx, &data.e, event_ready_cb, NULL);
+    aio_set_event_notifier(ctx, &data.e, event_ready_cb);
     g_assert(!aio_poll(ctx, false));
     g_assert_cmpint(data.n, ==, 0);
     g_assert_cmpint(data.active, ==, 10);
@@ -288,7 +288,7 @@ static void test_flush_event_notifier(void)
     g_assert_cmpint(data.active, ==, 0);
     g_assert(!aio_poll(ctx, false));
 
-    aio_set_event_notifier(ctx, &data.e, NULL, NULL);
+    aio_set_event_notifier(ctx, &data.e, NULL);
     g_assert(!aio_poll(ctx, false));
     event_notifier_cleanup(&data.e);
 }
@@ -299,7 +299,7 @@ static void test_wait_event_notifier_noflush(void)
     EventNotifierTestData dummy = { .n = 0, .active = 1 };
 
     event_notifier_init(&data.e, false);
-    aio_set_event_notifier(ctx, &data.e, event_ready_cb, NULL);
+    aio_set_event_notifier(ctx, &data.e, event_ready_cb);
 
     g_assert(!aio_poll(ctx, false));
     g_assert_cmpint(data.n, ==, 0);
@@ -312,7 +312,7 @@ static void test_wait_event_notifier_noflush(void)
 
     /* An active event notifier forces aio_poll to look at EventNotifiers.  */
     event_notifier_init(&dummy.e, false);
-    aio_set_event_notifier(ctx, &dummy.e, event_ready_cb, NULL);
+    aio_set_event_notifier(ctx, &dummy.e, event_ready_cb);
 
     event_notifier_set(&data.e);
     g_assert(aio_poll(ctx, false));
@@ -332,10 +332,10 @@ static void test_wait_event_notifier_noflush(void)
     g_assert_cmpint(dummy.n, ==, 1);
     g_assert_cmpint(dummy.active, ==, 0);
 
-    aio_set_event_notifier(ctx, &dummy.e, NULL, NULL);
+    aio_set_event_notifier(ctx, &dummy.e, NULL);
     event_notifier_cleanup(&dummy.e);
 
-    aio_set_event_notifier(ctx, &data.e, NULL, NULL);
+    aio_set_event_notifier(ctx, &data.e, NULL);
     g_assert(!aio_poll(ctx, false));
     g_assert_cmpint(data.n, ==, 2);
 
@@ -515,11 +515,11 @@ static void test_source_set_event_notifier(void)
 {
     EventNotifierTestData data = { .n = 0, .active = 0 };
     event_notifier_init(&data.e, false);
-    aio_set_event_notifier(ctx, &data.e, event_ready_cb, NULL);
+    aio_set_event_notifier(ctx, &data.e, event_ready_cb);
     while (g_main_context_iteration(NULL, false));
     g_assert_cmpint(data.n, ==, 0);
 
-    aio_set_event_notifier(ctx, &data.e, NULL, NULL);
+    aio_set_event_notifier(ctx, &data.e, NULL);
     while (g_main_context_iteration(NULL, false));
     g_assert_cmpint(data.n, ==, 0);
     event_notifier_cleanup(&data.e);
@@ -529,7 +529,7 @@ static void test_source_wait_event_notifier(void)
 {
     EventNotifierTestData data = { .n = 0, .active = 1 };
     event_notifier_init(&data.e, false);
-    aio_set_event_notifier(ctx, &data.e, event_ready_cb, NULL);
+    aio_set_event_notifier(ctx, &data.e, event_ready_cb);
     g_assert(g_main_context_iteration(NULL, false));
     g_assert_cmpint(data.n, ==, 0);
     g_assert_cmpint(data.active, ==, 1);
@@ -543,7 +543,7 @@ static void test_source_wait_event_notifier(void)
     g_assert_cmpint(data.n, ==, 1);
     g_assert_cmpint(data.active, ==, 0);
 
-    aio_set_event_notifier(ctx, &data.e, NULL, NULL);
+    aio_set_event_notifier(ctx, &data.e, NULL);
     while (g_main_context_iteration(NULL, false));
     g_assert_cmpint(data.n, ==, 1);
 
@@ -554,7 +554,7 @@ static void test_source_flush_event_notifier(void)
 {
     EventNotifierTestData data = { .n = 0, .active = 10, .auto_set = true };
     event_notifier_init(&data.e, false);
-    aio_set_event_notifier(ctx, &data.e, event_ready_cb, NULL);
+    aio_set_event_notifier(ctx, &data.e, event_ready_cb);
     g_assert(g_main_context_iteration(NULL, false));
     g_assert_cmpint(data.n, ==, 0);
     g_assert_cmpint(data.active, ==, 10);
@@ -570,7 +570,7 @@ static void test_source_flush_event_notifier(void)
     g_assert_cmpint(data.active, ==, 0);
     g_assert(!g_main_context_iteration(NULL, false));
 
-    aio_set_event_notifier(ctx, &data.e, NULL, NULL);
+    aio_set_event_notifier(ctx, &data.e, NULL);
     while (g_main_context_iteration(NULL, false));
     event_notifier_cleanup(&data.e);
 }
@@ -581,7 +581,7 @@ static void test_source_wait_event_notifier_noflush(void)
     EventNotifierTestData dummy = { .n = 0, .active = 1 };
 
     event_notifier_init(&data.e, false);
-    aio_set_event_notifier(ctx, &data.e, event_ready_cb, NULL);
+    aio_set_event_notifier(ctx, &data.e, event_ready_cb);
 
     while (g_main_context_iteration(NULL, false));
     g_assert_cmpint(data.n, ==, 0);
@@ -594,7 +594,7 @@ static void test_source_wait_event_notifier_noflush(void)
 
     /* An active event notifier forces aio_poll to look at EventNotifiers.  */
     event_notifier_init(&dummy.e, false);
-    aio_set_event_notifier(ctx, &dummy.e, event_ready_cb, NULL);
+    aio_set_event_notifier(ctx, &dummy.e, event_ready_cb);
 
     event_notifier_set(&data.e);
     g_assert(g_main_context_iteration(NULL, false));
@@ -614,10 +614,10 @@ static void test_source_wait_event_notifier_noflush(void)
     g_assert_cmpint(dummy.n, ==, 1);
     g_assert_cmpint(dummy.active, ==, 0);
 
-    aio_set_event_notifier(ctx, &dummy.e, NULL, NULL);
+    aio_set_event_notifier(ctx, &dummy.e, NULL);
     event_notifier_cleanup(&dummy.e);
 
-    aio_set_event_notifier(ctx, &data.e, NULL, NULL);
+    aio_set_event_notifier(ctx, &data.e, NULL);
     while (g_main_context_iteration(NULL, false));
     g_assert_cmpint(data.n, ==, 2);
 
diff --git a/thread-pool.c b/thread-pool.c
index 096f007..5025567 100644
--- a/thread-pool.c
+++ b/thread-pool.c
@@ -303,8 +303,7 @@ static void thread_pool_init_one(ThreadPool *pool, AioContext *ctx)
     QLIST_INIT(&pool->head);
     QTAILQ_INIT(&pool->request_list);
 
-    aio_set_event_notifier(ctx, &pool->notifier, event_notifier_ready,
-                           NULL);
+    aio_set_event_notifier(ctx, &pool->notifier, event_notifier_ready);
 }
 
 ThreadPool *thread_pool_new(AioContext *ctx)
@@ -338,7 +337,7 @@ void thread_pool_free(ThreadPool *pool)
 
     qemu_mutex_unlock(&pool->lock);
 
-    aio_set_event_notifier(pool->ctx, &pool->notifier, NULL, NULL);
+    aio_set_event_notifier(pool->ctx, &pool->notifier, NULL);
     qemu_sem_destroy(&pool->sem);
     qemu_cond_destroy(&pool->check_cancel);
     qemu_cond_destroy(&pool->worker_stopped);
-- 
1.8.1.4


* Re: [Qemu-devel] [PATCH v6 13/18] block/sheepdog: drop have_co_req() and aio_flush_request()
  2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 13/18] block/sheepdog: drop have_co_req() and aio_flush_request() Stefan Hajnoczi
@ 2013-07-26  5:50   ` MORITA Kazutaka
  2013-07-29  8:24   ` Wenchao Xia
  1 sibling, 0 replies; 53+ messages in thread
From: MORITA Kazutaka @ 2013-07-26  5:50 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Kevin Wolf, Ping Fan Liu, qemu-devel, Michael Roth, alex, Paolo Bonzini

At Thu, 25 Jul 2013 17:18:20 +0200,
Stefan Hajnoczi wrote:
> 
> .io_flush() is no longer called so drop have_co_req() and
> aio_flush_request().
> 
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>  block/sheepdog.c | 25 +++++--------------------
>  1 file changed, 5 insertions(+), 20 deletions(-)

Reviewed-by: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp>


* Re: [Qemu-devel] [PATCH v6 01/18] block: ensure bdrv_drain_all() works during bdrv_delete()
  2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 01/18] block: ensure bdrv_drain_all() works during bdrv_delete() Stefan Hajnoczi
@ 2013-07-26  6:43   ` Wenchao Xia
  2013-08-06 15:06     ` Stefan Hajnoczi
  2013-08-07  7:42   ` Stefan Hajnoczi
  1 sibling, 1 reply; 53+ messages in thread
From: Wenchao Xia @ 2013-07-26  6:43 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Kevin Wolf, Ping Fan Liu, Michael Roth, qemu-stable, qemu-devel,
	alex, Paolo Bonzini

Reviewed-by: Wenchao Xia <xiawenc@linux.vnet.ibm.com>

One question: with the old ordering, bdrv_drain_all() missed this device
during bdrv_close().  Was that a bug?
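
For reference, the ordering in the quoted patch below boils down to this
(simplified sketch, comments mine):

    void bdrv_delete(BlockDriverState *bs)
    {
        bdrv_close(bs);      /* may drain; bs is still on bdrv_states */
        bdrv_make_anon(bs);  /* only now remove bs from bdrv_states */
        g_free(bs);
    }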

> In bdrv_delete() make sure to call bdrv_make_anon() *after* bdrv_close()
> so that the device is still seen by bdrv_drain_all() when iterating
> bdrv_states.
> 
> Cc: qemu-stable@nongnu.org
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>   block.c | 4 ++--
>   1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/block.c b/block.c
> index 6cd39fa..9d68270 100644
> --- a/block.c
> +++ b/block.c
> @@ -1600,11 +1600,11 @@ void bdrv_delete(BlockDriverState *bs)
>       assert(!bs->job);
>       assert(!bs->in_use);
> 
> +    bdrv_close(bs);
> +
>       /* remove from list, if necessary */
>       bdrv_make_anon(bs);
> 
> -    bdrv_close(bs);
> -
>       g_free(bs);
>   }
> 


-- 
Best Regards

Wenchao Xia


* Re: [Qemu-devel] [PATCH v6 06/18] aio: stop using .io_flush()
  2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 06/18] aio: stop using .io_flush() Stefan Hajnoczi
@ 2013-07-26 16:10   ` Jeff Cody
  2013-07-26 16:25     ` Paolo Bonzini
  2013-07-26 16:43   ` Jeff Cody
  2013-07-29  8:08   ` Wenchao Xia
  2 siblings, 1 reply; 53+ messages in thread
From: Jeff Cody @ 2013-07-26 16:10 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Kevin Wolf, Ping Fan Liu, qemu-devel, Michael Roth, alex, Paolo Bonzini

On Thu, Jul 25, 2013 at 05:18:13PM +0200, Stefan Hajnoczi wrote:
> Now that aio_poll() users check their termination condition themselves,
> it is no longer necessary to call .io_flush() handlers.
> 
> The behavior of aio_poll() changes as follows:
> 
> 1. .io_flush() is no longer invoked and file descriptors are *always*
> monitored.  Previously returning 0 from .io_flush() would skip this file
> descriptor.
> 
> Due to this change it is essential to check that requests are pending
> before calling qemu_aio_wait().  Failure to do so means we block, for
> example, waiting for an idle iSCSI socket to become readable when there
> are no requests.  Currently all qemu_aio_wait()/aio_poll() callers check
> before calling.
> 
> 2. aio_poll() now returns true if progress was made (BH or fd handlers
> executed) and false otherwise.  Previously it would return true whenever
> 'busy', which means that .io_flush() returned true.  The 'busy' concept
> no longer exists so just progress is returned.
> 
> Due to this change we need to update tests/test-aio.c which asserts
> aio_poll() return values.  Note that QEMU doesn't actually rely on these
> return values so only tests/test-aio.c cares.
> 
> Note that ctx->notifier, the EventNotifier fd used for aio_notify(), is
> now handled as a special case.  This is a little ugly but maintains
> aio_poll() semantics, i.e. aio_notify() does not count as 'progress' and
> aio_poll() avoids blocking when the user has not set any fd handlers yet.
> 
> Patches after this remove .io_flush() handler code until we can finally
> drop the io_flush arguments to aio_set_fd_handler() and friends.
> 
> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>  aio-posix.c      | 29 +++++++++--------------------
>  aio-win32.c      | 34 ++++++++++++++--------------------
>  tests/test-aio.c | 10 +++++-----
>  3 files changed, 28 insertions(+), 45 deletions(-)
> 
> diff --git a/aio-posix.c b/aio-posix.c
> index b68eccd..7d66048 100644
> --- a/aio-posix.c
> +++ b/aio-posix.c
> @@ -23,7 +23,6 @@ struct AioHandler
>      GPollFD pfd;
>      IOHandler *io_read;
>      IOHandler *io_write;
> -    AioFlushHandler *io_flush;
>      int deleted;
>      int pollfds_idx;
>      void *opaque;
> @@ -84,7 +83,6 @@ void aio_set_fd_handler(AioContext *ctx,
>          /* Update handler with latest information */
>          node->io_read = io_read;
>          node->io_write = io_write;
> -        node->io_flush = io_flush;
>          node->opaque = opaque;
>          node->pollfds_idx = -1;
>  
> @@ -147,7 +145,11 @@ static bool aio_dispatch(AioContext *ctx)
>              (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR)) &&
>              node->io_read) {
>              node->io_read(node->opaque);
> -            progress = true;
> +
> +            /* aio_notify() does not count as progress */
> +            if (node->opaque != &ctx->notifier) {
> +                progress = true;
> +            }
>          }
>          if (!node->deleted &&
>              (revents & (G_IO_OUT | G_IO_ERR)) &&
> @@ -173,7 +175,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
>  {
>      AioHandler *node;
>      int ret;
> -    bool busy, progress;
> +    bool progress;
>  
>      progress = false;
>  
> @@ -200,20 +202,8 @@ bool aio_poll(AioContext *ctx, bool blocking)
>      g_array_set_size(ctx->pollfds, 0);
>  
>      /* fill pollfds */
> -    busy = false;
>      QLIST_FOREACH(node, &ctx->aio_handlers, node) {
>          node->pollfds_idx = -1;
> -
> -        /* If there aren't pending AIO operations, don't invoke callbacks.
> -         * Otherwise, if there are no AIO requests, qemu_aio_wait() would
> -         * wait indefinitely.
> -         */
> -        if (!node->deleted && node->io_flush) {
> -            if (node->io_flush(node->opaque) == 0) {
> -                continue;
> -            }
> -            busy = true;
> -        }
>          if (!node->deleted && node->pfd.events) {
>              GPollFD pfd = {
>                  .fd = node->pfd.fd,
> @@ -226,8 +216,8 @@ bool aio_poll(AioContext *ctx, bool blocking)
>  
>      ctx->walking_handlers--;
>  
> -    /* No AIO operations?  Get us out of here */
> -    if (!busy) {
> +    /* early return if we only have the aio_notify() fd */
> +    if (ctx->pollfds->len == 1) {
>          return progress;
>      }
>  
> @@ -250,6 +240,5 @@ bool aio_poll(AioContext *ctx, bool blocking)
>          }
>      }
>  
> -    assert(progress || busy);
> -    return true;
> +    return progress;
>  }
> diff --git a/aio-win32.c b/aio-win32.c
> index 38723bf..4309c16 100644
> --- a/aio-win32.c
> +++ b/aio-win32.c
> @@ -23,7 +23,6 @@
>  struct AioHandler {
>      EventNotifier *e;
>      EventNotifierHandler *io_notify;
> -    AioFlushEventNotifierHandler *io_flush;
>      GPollFD pfd;
>      int deleted;
>      QLIST_ENTRY(AioHandler) node;
> @@ -73,7 +72,6 @@ void aio_set_event_notifier(AioContext *ctx,
>          }
>          /* Update handler with latest information */
>          node->io_notify = io_notify;
> -        node->io_flush = io_flush;
>      }
>  
>      aio_notify(ctx);
> @@ -96,7 +94,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
>  {
>      AioHandler *node;
>      HANDLE events[MAXIMUM_WAIT_OBJECTS + 1];
> -    bool busy, progress;
> +    bool progress;
>      int count;
>  
>      progress = false;
> @@ -126,7 +124,11 @@ bool aio_poll(AioContext *ctx, bool blocking)
>          if (node->pfd.revents && node->io_notify) {
>              node->pfd.revents = 0;
>              node->io_notify(node->e);
> -            progress = true;
> +
> +            /* aio_notify() does not count as progress */
> +            if (node->e != &ctx->notifier) {
> +                progress = true;
> +            }
>          }
>  
>          tmp = node;
> @@ -147,19 +149,8 @@ bool aio_poll(AioContext *ctx, bool blocking)
>      ctx->walking_handlers++;
>  
>      /* fill fd sets */
> -    busy = false;
>      count = 0;
>      QLIST_FOREACH(node, &ctx->aio_handlers, node) {
> -        /* If there aren't pending AIO operations, don't invoke callbacks.
> -         * Otherwise, if there are no AIO requests, qemu_aio_wait() would
> -         * wait indefinitely.
> -         */
> -        if (!node->deleted && node->io_flush) {
> -            if (node->io_flush(node->e) == 0) {
> -                continue;
> -            }
> -            busy = true;
> -        }
>          if (!node->deleted && node->io_notify) {
>              events[count++] = event_notifier_get_handle(node->e);
>          }
> @@ -167,8 +158,8 @@ bool aio_poll(AioContext *ctx, bool blocking)
>  
>      ctx->walking_handlers--;
>  
> -    /* No AIO operations?  Get us out of here */
> -    if (!busy) {
> +    /* early return if we only have the aio_notify() fd */
> +    if (count == 1) {
>          return progress;
>      }

Minor point, but could we simplify this a bit by removing the above 3
lines and changing the following while() loop (not shown in the diff
context) from while (count > 0) to while (count > 1)?  I.e.:

-    /* No AIO operations?  Get us out of here */
-    if (!busy) {
-        return progress;
-    }

-   /* wait until next event */
-   while (count > 0) {
+   /* wait until next event that is not aio_notify() */
+   while (count > 1) {

This would assume of course that aio_notify() is always first in the
list.


>  
> @@ -196,7 +187,11 @@ bool aio_poll(AioContext *ctx, bool blocking)
>                  event_notifier_get_handle(node->e) == events[ret - WAIT_OBJECT_0] &&
>                  node->io_notify) {
>                  node->io_notify(node->e);
> -                progress = true;
> +
> +                /* aio_notify() does not count as progress */
> +                if (node->e != &ctx->notifier) {
> +                    progress = true;
> +                }

^ We could then drop this part of the patch too, right?


>              }
>  
>              tmp = node;
> @@ -214,6 +209,5 @@ bool aio_poll(AioContext *ctx, bool blocking)
>          events[ret - WAIT_OBJECT_0] = events[--count];
>      }
>  
> -    assert(progress || busy);
> -    return true;
> +    return progress;
>  }
> diff --git a/tests/test-aio.c b/tests/test-aio.c
> index 20bf5e6..1251952 100644
> --- a/tests/test-aio.c
> +++ b/tests/test-aio.c
> @@ -254,7 +254,7 @@ static void test_wait_event_notifier(void)
>      EventNotifierTestData data = { .n = 0, .active = 1 };
>      event_notifier_init(&data.e, false);
>      aio_set_event_notifier(ctx, &data.e, event_ready_cb, event_active_cb);
> -    g_assert(aio_poll(ctx, false));
> +    g_assert(!aio_poll(ctx, false));
>      g_assert_cmpint(data.n, ==, 0);
>      g_assert_cmpint(data.active, ==, 1);
>  
> @@ -279,7 +279,7 @@ static void test_flush_event_notifier(void)
>      EventNotifierTestData data = { .n = 0, .active = 10, .auto_set = true };
>      event_notifier_init(&data.e, false);
>      aio_set_event_notifier(ctx, &data.e, event_ready_cb, event_active_cb);
> -    g_assert(aio_poll(ctx, false));
> +    g_assert(!aio_poll(ctx, false));
>      g_assert_cmpint(data.n, ==, 0);
>      g_assert_cmpint(data.active, ==, 10);
>  
> @@ -313,7 +313,7 @@ static void test_wait_event_notifier_noflush(void)
>      /* Until there is an active descriptor, aio_poll may or may not call
>       * event_ready_cb.  Still, it must not block.  */
>      event_notifier_set(&data.e);
> -    g_assert(!aio_poll(ctx, true));
> +    g_assert(aio_poll(ctx, true));
>      data.n = 0;
>  
>      /* An active event notifier forces aio_poll to look at EventNotifiers.  */
> @@ -323,13 +323,13 @@ static void test_wait_event_notifier_noflush(void)
>      event_notifier_set(&data.e);
>      g_assert(aio_poll(ctx, false));
>      g_assert_cmpint(data.n, ==, 1);
> -    g_assert(aio_poll(ctx, false));
> +    g_assert(!aio_poll(ctx, false));
>      g_assert_cmpint(data.n, ==, 1);
>  
>      event_notifier_set(&data.e);
>      g_assert(aio_poll(ctx, false));
>      g_assert_cmpint(data.n, ==, 2);
> -    g_assert(aio_poll(ctx, false));
> +    g_assert(!aio_poll(ctx, false));
>      g_assert_cmpint(data.n, ==, 2);
>  
>      event_notifier_set(&dummy.e);
> -- 
> 1.8.1.4
> 
> 


* Re: [Qemu-devel] [PATCH v6 06/18] aio: stop using .io_flush()
  2013-07-26 16:10   ` Jeff Cody
@ 2013-07-26 16:25     ` Paolo Bonzini
  2013-07-26 16:36       ` Jeff Cody
  0 siblings, 1 reply; 53+ messages in thread
From: Paolo Bonzini @ 2013-07-26 16:25 UTC (permalink / raw)
  To: Jeff Cody
  Cc: Kevin Wolf, Ping Fan Liu, Stefan Hajnoczi, qemu-devel,
	Michael Roth, alex

On 26/07/2013 18:10, Jeff Cody wrote:
> +   /* wait until next event that is not aio_notify() */
> +   while (count > 1) {
> 
> This would assume of course that aio_notify() is always first in the
> list.

No, you cannot assume that.  Any bdrv can be opened early (with -drive)
and then associated to an AioContext that is created later (when the
dataplane thread start).

Paolo


* Re: [Qemu-devel] [PATCH v6 06/18] aio: stop using .io_flush()
  2013-07-26 16:25     ` Paolo Bonzini
@ 2013-07-26 16:36       ` Jeff Cody
  0 siblings, 0 replies; 53+ messages in thread
From: Jeff Cody @ 2013-07-26 16:36 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Kevin Wolf, Ping Fan Liu, Stefan Hajnoczi, qemu-devel,
	Michael Roth, alex

On Fri, Jul 26, 2013 at 06:25:18PM +0200, Paolo Bonzini wrote:
> On 26/07/2013 18:10, Jeff Cody wrote:
> > +   /* wait until next event that is not aio_notify() */
> > +   while (count > 1) {
> > 
> > This would assume of course that aio_notify() is always first in the
> > list.
> 
> No, you cannot assume that.  Any bdrv can be opened early (with -drive)
> and then associated to an AioContext that is created later (when the
> dataplane thread start).
> 
> Paolo
>

OK, good point, thanks.

Jeff


* Re: [Qemu-devel] [PATCH v6 06/18] aio: stop using .io_flush()
  2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 06/18] aio: stop using .io_flush() Stefan Hajnoczi
  2013-07-26 16:10   ` Jeff Cody
@ 2013-07-26 16:43   ` Jeff Cody
  2013-07-29  8:08   ` Wenchao Xia
  2 siblings, 0 replies; 53+ messages in thread
From: Jeff Cody @ 2013-07-26 16:43 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Kevin Wolf, Ping Fan Liu, qemu-devel, Michael Roth, alex, Paolo Bonzini

On Thu, Jul 25, 2013 at 05:18:13PM +0200, Stefan Hajnoczi wrote:
> Now that aio_poll() users check their termination condition themselves,
> it is no longer necessary to call .io_flush() handlers.
> 
> The behavior of aio_poll() changes as follows:
> 
> 1. .io_flush() is no longer invoked and file descriptors are *always*
> monitored.  Previously returning 0 from .io_flush() would skip this file
> descriptor.
> 
> Due to this change it is essential to check that requests are pending
> before calling qemu_aio_wait().  Failure to do so means we block, for
> example, waiting for an idle iSCSI socket to become readable when there
> are no requests.  Currently all qemu_aio_wait()/aio_poll() callers check
> before calling.
> 
> 2. aio_poll() now returns true if progress was made (BH or fd handlers
> executed) and false otherwise.  Previously it would return true whenever
> 'busy', which means that .io_flush() returned true.  The 'busy' concept
> no longer exists so just progress is returned.
> 
> Due to this change we need to update tests/test-aio.c which asserts
> aio_poll() return values.  Note that QEMU doesn't actually rely on these
> return values so only tests/test-aio.c cares.
> 
> Note that ctx->notifier, the EventNotifier fd used for aio_notify(), is
> now handled as a special case.  This is a little ugly but maintains
> aio_poll() semantics, i.e. aio_notify() does not count as 'progress' and
> aio_poll() avoids blocking when the user has not set any fd handlers yet.
> 
> Patches after this remove .io_flush() handler code until we can finally
> drop the io_flush arguments to aio_set_fd_handler() and friends.
> 
> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>  aio-posix.c      | 29 +++++++++--------------------
>  aio-win32.c      | 34 ++++++++++++++--------------------
>  tests/test-aio.c | 10 +++++-----
>  3 files changed, 28 insertions(+), 45 deletions(-)
> 

Reviewed-by: Jeff Cody <jcody@redhat.com>


* Re: [Qemu-devel] [PATCH v6 03/18] dataplane/virtio-blk: check exit conditions before aio_poll()
  2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 03/18] dataplane/virtio-blk: check exit conditions before aio_poll() Stefan Hajnoczi
@ 2013-07-26 16:44   ` Jeff Cody
  2013-07-29  7:10   ` Wenchao Xia
  1 sibling, 0 replies; 53+ messages in thread
From: Jeff Cody @ 2013-07-26 16:44 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Kevin Wolf, Ping Fan Liu, qemu-devel, Michael Roth, alex, Paolo Bonzini

On Thu, Jul 25, 2013 at 05:18:10PM +0200, Stefan Hajnoczi wrote:
> Check exit conditions before entering blocking aio_poll().  This is
> mainly for consistency since it's unlikely that we are stopping in the
> first event loop iteration.
> 
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>  hw/block/dataplane/virtio-blk.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
> index 2faed43..8d3e145 100644
> --- a/hw/block/dataplane/virtio-blk.c
> +++ b/hw/block/dataplane/virtio-blk.c
> @@ -379,9 +379,9 @@ static void *data_plane_thread(void *opaque)
>  {
>      VirtIOBlockDataPlane *s = opaque;
>  
> -    do {
> +    while (!s->stopping || s->num_reqs > 0) {
>          aio_poll(s->ctx, true);
> -    } while (!s->stopping || s->num_reqs > 0);
> +    }
>      return NULL;
>  }
>  
> -- 
> 1.8.1.4
> 
>

Reviewed-by: Jeff Cody <jcody@redhat.com>


* Re: [Qemu-devel] [PATCH v6 02/18] block: stop relying on io_flush() in bdrv_drain_all()
  2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 02/18] block: stop relying on io_flush() in bdrv_drain_all() Stefan Hajnoczi
@ 2013-07-29  7:01   ` Wenchao Xia
  0 siblings, 0 replies; 53+ messages in thread
From: Wenchao Xia @ 2013-07-29  7:01 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Kevin Wolf, Ping Fan Liu, qemu-devel, Michael Roth, alex, Paolo Bonzini

Reviewed-by: Wenchao Xia <xiawenc@linux.vnet.ibm.com>



-- 
Best Regards

Wenchao Xia


* Re: [Qemu-devel] [PATCH v6 03/18] dataplane/virtio-blk: check exit conditions before aio_poll()
  2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 03/18] dataplane/virtio-blk: check exit conditions before aio_poll() Stefan Hajnoczi
  2013-07-26 16:44   ` Jeff Cody
@ 2013-07-29  7:10   ` Wenchao Xia
  1 sibling, 0 replies; 53+ messages in thread
From: Wenchao Xia @ 2013-07-29  7:10 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Kevin Wolf, Ping Fan Liu, qemu-devel, Michael Roth, alex, Paolo Bonzini

> Check exit conditions before entering blocking aio_poll().  This is
> mainly for consistency since it's unlikely that we are stopping in the
> first event loop iteration.
> 
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>   hw/block/dataplane/virtio-blk.c | 4 ++--
>   1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
> index 2faed43..8d3e145 100644
> --- a/hw/block/dataplane/virtio-blk.c
> +++ b/hw/block/dataplane/virtio-blk.c
> @@ -379,9 +379,9 @@ static void *data_plane_thread(void *opaque)
>   {
>       VirtIOBlockDataPlane *s = opaque;
> 
> -    do {
> +    while (!s->stopping || s->num_reqs > 0) {
>           aio_poll(s->ctx, true);
> -    } while (!s->stopping || s->num_reqs > 0);
> +    }
>       return NULL;
>   }
>
This seems more like a bug fix than just a cleanup.

Reviewed-by: Wenchao Xia <xiawenc@linux.vnet.ibm.com>

-- 
Best Regards

Wenchao Xia


* Re: [Qemu-devel] [PATCH v6 04/18] tests: adjust test-aio to new aio_poll() semantics
  2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 04/18] tests: adjust test-aio to new aio_poll() semantics Stefan Hajnoczi
@ 2013-07-29  7:39   ` Wenchao Xia
  2013-07-29  8:24   ` Wenchao Xia
  1 sibling, 0 replies; 53+ messages in thread
From: Wenchao Xia @ 2013-07-29  7:39 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Kevin Wolf, Ping Fan Liu, qemu-devel, Michael Roth, alex, Paolo Bonzini

Reviewed-by: Wenchao Xia <xiawenc@linux.vnet.ibm.com>
> aio_poll(ctx, true) will soon block if any fd handlers have been set.
> Previously it would only block when .io_flush() returned true.
> 
> This means that callers must check their wait condition *before*
> aio_poll() to avoid deadlock.
> 
> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>   tests/test-aio.c | 26 +++++++++++++++++---------
>   1 file changed, 17 insertions(+), 9 deletions(-)
> 
> diff --git a/tests/test-aio.c b/tests/test-aio.c
> index c173870..20bf5e6 100644
> --- a/tests/test-aio.c
> +++ b/tests/test-aio.c
> @@ -15,6 +15,13 @@
> 
>   AioContext *ctx;
> 
> +typedef struct {
> +    EventNotifier e;
> +    int n;
> +    int active;
> +    bool auto_set;
> +} EventNotifierTestData;
> +
>   /* Wait until there are no more BHs or AIO requests */
>   static void wait_for_aio(void)
>   {
> @@ -23,6 +30,14 @@ static void wait_for_aio(void)
>       }
>   }
> 
> +/* Wait until event notifier becomes inactive */
> +static void wait_until_inactive(EventNotifierTestData *data)
> +{
> +    while (data->active > 0) {
> +        aio_poll(ctx, true);
> +    }
> +}
> +
>   /* Simple callbacks for testing.  */
> 
>   typedef struct {
> @@ -50,13 +65,6 @@ static void bh_delete_cb(void *opaque)
>       }
>   }
> 
> -typedef struct {
> -    EventNotifier e;
> -    int n;
> -    int active;
> -    bool auto_set;
> -} EventNotifierTestData;
> -
>   static int event_active_cb(EventNotifier *e)
>   {
>       EventNotifierTestData *data = container_of(e, EventNotifierTestData, e);
> @@ -281,7 +289,7 @@ static void test_flush_event_notifier(void)
>       g_assert_cmpint(data.active, ==, 9);
>       g_assert(aio_poll(ctx, false));
> 
> -    wait_for_aio();
> +    wait_until_inactive(&data);
>       g_assert_cmpint(data.n, ==, 10);
>       g_assert_cmpint(data.active, ==, 0);
>       g_assert(!aio_poll(ctx, false));
> @@ -325,7 +333,7 @@ static void test_wait_event_notifier_noflush(void)
>       g_assert_cmpint(data.n, ==, 2);
> 
>       event_notifier_set(&dummy.e);
> -    wait_for_aio();
> +    wait_until_inactive(&dummy);
>       g_assert_cmpint(data.n, ==, 2);
>       g_assert_cmpint(dummy.n, ==, 1);
>       g_assert_cmpint(dummy.active, ==, 0);
> 


-- 
Best Regards

Wenchao Xia


* Re: [Qemu-devel] [PATCH v6 05/18] tests: adjust test-thread-pool to new aio_poll() semantics
  2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 05/18] tests: adjust test-thread-pool " Stefan Hajnoczi
@ 2013-07-29  7:51   ` Wenchao Xia
  2013-07-29 14:26     ` Stefan Hajnoczi
  0 siblings, 1 reply; 53+ messages in thread
From: Wenchao Xia @ 2013-07-29  7:51 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Kevin Wolf, Ping Fan Liu, qemu-devel, Michael Roth, alex, Paolo Bonzini

I was confused for some time about the reason for this change; I think
it can be summarized as: "The semantics of aio_poll() will change, so
update the callers in this test case."

Reviewed-by: Wenchao Xia <xiawenc@linux.vnet.ibm.com>


> aio_poll(ctx, true) will soon block when fd handlers have been set.
> Previously aio_poll() would return early if all .io_flush() returned
> false.  This means we need to check the equivalent of the .io_flush()
> condition *before* calling aio_poll(ctx, true) to avoid deadlock.
> 
> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>   tests/test-thread-pool.c | 24 ++++++++++++------------
>   1 file changed, 12 insertions(+), 12 deletions(-)
> 
> diff --git a/tests/test-thread-pool.c b/tests/test-thread-pool.c
> index b62338f..8188d1a 100644
> --- a/tests/test-thread-pool.c
> +++ b/tests/test-thread-pool.c
> @@ -40,19 +40,13 @@ static void done_cb(void *opaque, int ret)
>       active--;
>   }
> 
> -/* Wait until all aio and bh activity has finished */
> -static void qemu_aio_wait_all(void)
> -{
> -    while (aio_poll(ctx, true)) {
> -        /* Do nothing */
> -    }
> -}
> -
>   static void test_submit(void)
>   {
>       WorkerTestData data = { .n = 0 };
>       thread_pool_submit(pool, worker_cb, &data);
> -    qemu_aio_wait_all();
> +    while (data.n == 0) {
> +        aio_poll(ctx, true);
> +    }
>       g_assert_cmpint(data.n, ==, 1);
>   }
> 
> @@ -65,7 +59,9 @@ static void test_submit_aio(void)
>       /* The callbacks are not called until after the first wait.  */
>       active = 1;
>       g_assert_cmpint(data.ret, ==, -EINPROGRESS);
> -    qemu_aio_wait_all();
> +    while (data.ret == -EINPROGRESS) {
> +        aio_poll(ctx, true);
> +    }
>       g_assert_cmpint(active, ==, 0);
>       g_assert_cmpint(data.n, ==, 1);
>       g_assert_cmpint(data.ret, ==, 0);
> @@ -103,7 +99,9 @@ static void test_submit_co(void)
> 
>       /* qemu_aio_wait_all will execute the rest of the coroutine.  */
> 
> -    qemu_aio_wait_all();
> +    while (data.ret == -EINPROGRESS) {
> +        aio_poll(ctx, true);
> +    }
> 
>       /* Back here after the coroutine has finished.  */
> 
> @@ -187,7 +185,9 @@ static void test_cancel(void)
>       }
> 
>       /* Finish execution and execute any remaining callbacks.  */
> -    qemu_aio_wait_all();
> +    while (active > 0) {
> +        aio_poll(ctx, true);
> +    }
>       g_assert_cmpint(active, ==, 0);
>       for (i = 0; i < 100; i++) {
>           if (data[i].n == 3) {
> 


-- 
Best Regards

Wenchao Xia


* Re: [Qemu-devel] [PATCH v6 06/18] aio: stop using .io_flush()
  2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 06/18] aio: stop using .io_flush() Stefan Hajnoczi
  2013-07-26 16:10   ` Jeff Cody
  2013-07-26 16:43   ` Jeff Cody
@ 2013-07-29  8:08   ` Wenchao Xia
  2013-07-29 14:32     ` Stefan Hajnoczi
  2 siblings, 1 reply; 53+ messages in thread
From: Wenchao Xia @ 2013-07-29  8:08 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Kevin Wolf, Ping Fan Liu, qemu-devel, Michael Roth, alex, Paolo Bonzini

> Now that aio_poll() users check their termination condition themselves,
> it is no longer necessary to call .io_flush() handlers.
> 
> The behavior of aio_poll() changes as follows:
> 
> 1. .io_flush() is no longer invoked and file descriptors are *always*
> monitored.  Previously returning 0 from .io_flush() would skip this file
> descriptor.
> 
> Due to this change it is essential to check that requests are pending
> before calling qemu_aio_wait().  Failure to do so means we block, for
> example, waiting for an idle iSCSI socket to become readable when there
> are no requests.  Currently all qemu_aio_wait()/aio_poll() callers check
> before calling.
> 
> 2. aio_poll() now returns true if progress was made (BH or fd handlers
> executed) and false otherwise.  Previously it would return true whenever
> 'busy', which means that .io_flush() returned true.  The 'busy' concept
> no longer exists so just progress is returned.
> 
> Due to this change we need to update tests/test-aio.c which asserts
> aio_poll() return values.  Note that QEMU doesn't actually rely on these
> return values so only tests/test-aio.c cares.
> 
> Note that ctx->notifier, the EventNotifier fd used for aio_notify(), is
> now handled as a special case.  This is a little ugly but maintains
> aio_poll() semantics, i.e. aio_notify() does not count as 'progress' and
> aio_poll() avoids blocking when the user has not set any fd handlers yet.
> 
  I guess the goal is to distinguish QEMU's internally used fd from the
real, meaningful external fds such as sockets? How about distinguishing
them with different GSources?


> Patches after this remove .io_flush() handler code until we can finally
> drop the io_flush arguments to aio_set_fd_handler() and friends.
> 
> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>   aio-posix.c      | 29 +++++++++--------------------
>   aio-win32.c      | 34 ++++++++++++++--------------------
>   tests/test-aio.c | 10 +++++-----
>   3 files changed, 28 insertions(+), 45 deletions(-)
> 


-- 
Best Regards

Wenchao Xia
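
Point 2 of the quoted commit message changes what a drain loop may rely
on.  Two sketches of the resulting styles (bdrv_requests_pending() here
stands in for a caller-side condition; the exact helper name is an
assumption, not taken from this series):

    /* Preferred after this series: loop on an explicit caller-side
     * condition, blocking only while work is known to be pending. */
    while (bdrv_requests_pending(bs)) {
        aio_poll(ctx, true);
    }

    /* Looping on the return value is still possible, but it now means
     * "until no BH or fd handler ran", i.e. until no progress is made.
     * Use blocking=false so the loop cannot hang on an idle fd. */
    while (aio_poll(ctx, false)) {
        /* nothing */
    }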


* Re: [Qemu-devel] [PATCH v6 07/18] block/curl: drop curl_aio_flush()
  2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 07/18] block/curl: drop curl_aio_flush() Stefan Hajnoczi
@ 2013-07-29  8:11   ` Wenchao Xia
  0 siblings, 0 replies; 53+ messages in thread
From: Wenchao Xia @ 2013-07-29  8:11 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Kevin Wolf, Ping Fan Liu, qemu-devel, Michael Roth, alex, Paolo Bonzini

Reviewed-by: Wenchao Xia <xiawenc@linux.vnet.ibm.com>

> .io_flush() is no longer called so drop curl_aio_flush().  The acb[]
> array that the function checks is still used in other parts of
> block/curl.c.  Therefore we cannot remove acb[], it is needed.
> 
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>   block/curl.c | 22 +++-------------------
>   1 file changed, 3 insertions(+), 19 deletions(-)
> 


-- 
Best Regards

Wenchao Xia


* Re: [Qemu-devel] [PATCH v6 08/18] block/gluster: drop qemu_gluster_aio_flush_cb()
  2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 08/18] block/gluster: drop qemu_gluster_aio_flush_cb() Stefan Hajnoczi
@ 2013-07-29  8:17   ` Wenchao Xia
  0 siblings, 0 replies; 53+ messages in thread
From: Wenchao Xia @ 2013-07-29  8:17 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Kevin Wolf, Ping Fan Liu, qemu-devel, Michael Roth, alex, Paolo Bonzini

Reviewed-by: Wenchao Xia <xiawenc@linux.vnet.ibm.com>

> Since .io_flush() is no longer called we do not need
> qemu_gluster_aio_flush_cb() anymore.  It turns out that qemu_aio_count
> is unused now and can be dropped.
> 
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>   block/gluster.c | 16 +---------------
>   1 file changed, 1 insertion(+), 15 deletions(-)
> 


-- 
Best Regards

Wenchao Xia


* Re: [Qemu-devel] [PATCH v6 09/18] block/iscsi: drop iscsi_process_flush()
  2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 09/18] block/iscsi: drop iscsi_process_flush() Stefan Hajnoczi
@ 2013-07-29  8:17   ` Wenchao Xia
  0 siblings, 0 replies; 53+ messages in thread
From: Wenchao Xia @ 2013-07-29  8:17 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Kevin Wolf, Ping Fan Liu, qemu-devel, Michael Roth, alex, Paolo Bonzini

Reviewed-by: Wenchao Xia <xiawenc@linux.vnet.ibm.com>

> .io_flush() is no longer called so drop iscsi_process_flush().
> 
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>   block/iscsi.c | 9 +--------
>   1 file changed, 1 insertion(+), 8 deletions(-)
> 
> diff --git a/block/iscsi.c b/block/iscsi.c
> index 5f28c6a..e2692d6 100644
> --- a/block/iscsi.c
> +++ b/block/iscsi.c
> @@ -146,13 +146,6 @@ static const AIOCBInfo iscsi_aiocb_info = {
>   static void iscsi_process_read(void *arg);
>   static void iscsi_process_write(void *arg);
> 
> -static int iscsi_process_flush(void *arg)
> -{
> -    IscsiLun *iscsilun = arg;
> -
> -    return iscsi_queue_length(iscsilun->iscsi) > 0;
> -}
> -
>   static void
>   iscsi_set_events(IscsiLun *iscsilun)
>   {
> @@ -166,7 +159,7 @@ iscsi_set_events(IscsiLun *iscsilun)
>           qemu_aio_set_fd_handler(iscsi_get_fd(iscsi),
>                         iscsi_process_read,
>                         (ev & POLLOUT) ? iscsi_process_write : NULL,
> -                      iscsi_process_flush,
> +                      NULL,
>                         iscsilun);
> 
>       }
> 


-- 
Best Regards

Wenchao Xia


* Re: [Qemu-devel] [PATCH v6 10/18] block/linux-aio: drop qemu_laio_completion_cb()
  2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 10/18] block/linux-aio: drop qemu_laio_completion_cb() Stefan Hajnoczi
@ 2013-07-29  8:19   ` Wenchao Xia
  0 siblings, 0 replies; 53+ messages in thread
From: Wenchao Xia @ 2013-07-29  8:19 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Kevin Wolf, Ping Fan Liu, qemu-devel, Michael Roth, alex, Paolo Bonzini

Reviewed-by: Wenchao Xia <xiawenc@linux.vnet.ibm.com>

> .io_flush() is no longer called so drop qemu_laio_completion_cb().  It
> turns out that count is now unused so drop that too.
> 
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>   block/linux-aio.c | 17 ++---------------
>   1 file changed, 2 insertions(+), 15 deletions(-)
> 
> diff --git a/block/linux-aio.c b/block/linux-aio.c
> index ee0f8d1..d9128f3 100644
> --- a/block/linux-aio.c
> +++ b/block/linux-aio.c
> @@ -39,7 +39,6 @@ struct qemu_laiocb {
>   struct qemu_laio_state {
>       io_context_t ctx;
>       EventNotifier e;
> -    int count;
>   };
> 
>   static inline ssize_t io_event_ret(struct io_event *ev)
> @@ -55,8 +54,6 @@ static void qemu_laio_process_completion(struct qemu_laio_state *s,
>   {
>       int ret;
> 
> -    s->count--;
> -
>       ret = laiocb->ret;
>       if (ret != -ECANCELED) {
>           if (ret == laiocb->nbytes) {
> @@ -101,13 +98,6 @@ static void qemu_laio_completion_cb(EventNotifier *e)
>       }
>   }
> 
> -static int qemu_laio_flush_cb(EventNotifier *e)
> -{
> -    struct qemu_laio_state *s = container_of(e, struct qemu_laio_state, e);
> -
> -    return (s->count > 0) ? 1 : 0;
> -}
> -
>   static void laio_cancel(BlockDriverAIOCB *blockacb)
>   {
>       struct qemu_laiocb *laiocb = (struct qemu_laiocb *)blockacb;
> @@ -177,14 +167,11 @@ BlockDriverAIOCB *laio_submit(BlockDriverState *bs, void *aio_ctx, int fd,
>           goto out_free_aiocb;
>       }
>       io_set_eventfd(&laiocb->iocb, event_notifier_get_fd(&s->e));
> -    s->count++;
> 
>       if (io_submit(s->ctx, 1, &iocbs) < 0)
> -        goto out_dec_count;
> +        goto out_free_aiocb;
>       return &laiocb->common;
> 
> -out_dec_count:
> -    s->count--;
>   out_free_aiocb:
>       qemu_aio_release(laiocb);
>       return NULL;
> @@ -204,7 +191,7 @@ void *laio_init(void)
>       }
> 
>       qemu_aio_set_event_notifier(&s->e, qemu_laio_completion_cb,
> -                                qemu_laio_flush_cb);
> +                                NULL);
> 
>       return s;
> 


-- 
Best Regards

Wenchao Xia


* Re: [Qemu-devel] [PATCH v6 11/18] block/nbd: drop nbd_have_request()
  2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 11/18] block/nbd: drop nbd_have_request() Stefan Hajnoczi
@ 2013-07-29  8:19   ` Wenchao Xia
  0 siblings, 0 replies; 53+ messages in thread
From: Wenchao Xia @ 2013-07-29  8:19 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Kevin Wolf, Ping Fan Liu, qemu-devel, Michael Roth, alex, Paolo Bonzini

Reviewed-by: Wenchao Xia <xiawenc@linux.vnet.ibm.com>

> .io_flush() is no longer called so drop nbd_have_request().  We cannot
> drop in_flight since it is still used by other block/nbd.c code.
> 
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>   block/nbd.c | 13 +++----------
>   1 file changed, 3 insertions(+), 10 deletions(-)
> 
> diff --git a/block/nbd.c b/block/nbd.c
> index 9c480b8..f1619f9 100644
> --- a/block/nbd.c
> +++ b/block/nbd.c
> @@ -279,13 +279,6 @@ static void nbd_coroutine_start(BDRVNBDState *s, struct nbd_request *request)
>       request->handle = INDEX_TO_HANDLE(s, i);
>   }
> 
> -static int nbd_have_request(void *opaque)
> -{
> -    BDRVNBDState *s = opaque;
> -
> -    return s->in_flight > 0;
> -}
> -
>   static void nbd_reply_ready(void *opaque)
>   {
>       BDRVNBDState *s = opaque;
> @@ -342,7 +335,7 @@ static int nbd_co_send_request(BDRVNBDState *s, struct nbd_request *request,
>       qemu_co_mutex_lock(&s->send_mutex);
>       s->send_coroutine = qemu_coroutine_self();
>       qemu_aio_set_fd_handler(s->sock, nbd_reply_ready, nbd_restart_write,
> -                            nbd_have_request, s);
> +                            NULL, s);
>       if (qiov) {
>           if (!s->is_unix) {
>               socket_set_cork(s->sock, 1);
> @@ -362,7 +355,7 @@ static int nbd_co_send_request(BDRVNBDState *s, struct nbd_request *request,
>           rc = nbd_send_request(s->sock, request);
>       }
>       qemu_aio_set_fd_handler(s->sock, nbd_reply_ready, NULL,
> -                            nbd_have_request, s);
> +                            NULL, s);
>       s->send_coroutine = NULL;
>       qemu_co_mutex_unlock(&s->send_mutex);
>       return rc;
> @@ -439,7 +432,7 @@ static int nbd_establish_connection(BlockDriverState *bs)
>        * kick the reply mechanism.  */
>       qemu_set_nonblock(sock);
>       qemu_aio_set_fd_handler(sock, nbd_reply_ready, NULL,
> -                            nbd_have_request, s);
> +                            NULL, s);
> 
>       s->sock = sock;
>       s->size = size;
> 


-- 
Best Regards

Wenchao Xia


* Re: [Qemu-devel] [PATCH v6 12/18] block/rbd: drop qemu_rbd_aio_flush_cb()
  2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 12/18] block/rbd: drop qemu_rbd_aio_flush_cb() Stefan Hajnoczi
@ 2013-07-29  8:20   ` Wenchao Xia
  0 siblings, 0 replies; 53+ messages in thread
From: Wenchao Xia @ 2013-07-29  8:20 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Kevin Wolf, Ping Fan Liu, qemu-devel, Michael Roth, alex, Paolo Bonzini

Reviewed-by: Wenchao Xia <xiawenc@linux.vnet.ibm.com>

> .io_flush() is no longer called so drop qemu_rbd_aio_flush_cb().
> qemu_aio_count is unused now so drop it too.
> 
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>   block/rbd.c | 14 +-------------
>   1 file changed, 1 insertion(+), 13 deletions(-)
> 
> diff --git a/block/rbd.c b/block/rbd.c
> index cb71751..71b4a0c 100644
> --- a/block/rbd.c
> +++ b/block/rbd.c
> @@ -100,7 +100,6 @@ typedef struct BDRVRBDState {
>       rados_ioctx_t io_ctx;
>       rbd_image_t image;
>       char name[RBD_MAX_IMAGE_NAME_SIZE];
> -    int qemu_aio_count;
>       char *snap;
>       int event_reader_pos;
>       RADOSCB *event_rcb;
> @@ -428,19 +427,11 @@ static void qemu_rbd_aio_event_reader(void *opaque)
>               if (s->event_reader_pos == sizeof(s->event_rcb)) {
>                   s->event_reader_pos = 0;
>                   qemu_rbd_complete_aio(s->event_rcb);
> -                s->qemu_aio_count--;
>               }
>           }
>       } while (ret < 0 && errno == EINTR);
>   }
> 
> -static int qemu_rbd_aio_flush_cb(void *opaque)
> -{
> -    BDRVRBDState *s = opaque;
> -
> -    return (s->qemu_aio_count > 0);
> -}
> -
>   /* TODO Convert to fine grained options */
>   static QemuOptsList runtime_opts = {
>       .name = "rbd",
> @@ -554,7 +545,7 @@ static int qemu_rbd_open(BlockDriverState *bs, QDict *options, int flags)
>       fcntl(s->fds[0], F_SETFL, O_NONBLOCK);
>       fcntl(s->fds[1], F_SETFL, O_NONBLOCK);
>       qemu_aio_set_fd_handler(s->fds[RBD_FD_READ], qemu_rbd_aio_event_reader,
> -                            NULL, qemu_rbd_aio_flush_cb, s);
> +                            NULL, NULL, s);
> 
> 
>       qemu_opts_del(opts);
> @@ -741,8 +732,6 @@ static BlockDriverAIOCB *rbd_start_aio(BlockDriverState *bs,
>       off = sector_num * BDRV_SECTOR_SIZE;
>       size = nb_sectors * BDRV_SECTOR_SIZE;
> 
> -    s->qemu_aio_count++; /* All the RADOSCB */
> -
>       rcb = g_malloc(sizeof(RADOSCB));
>       rcb->done = 0;
>       rcb->acb = acb;
> @@ -779,7 +768,6 @@ static BlockDriverAIOCB *rbd_start_aio(BlockDriverState *bs,
> 
>   failed:
>       g_free(rcb);
> -    s->qemu_aio_count--;
>       qemu_aio_release(acb);
>       return NULL;
>   }
> 


-- 
Best Regards

Wenchao Xia


* Re: [Qemu-devel] [PATCH v6 04/18] tests: adjust test-aio to new aio_poll() semantics
  2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 04/18] tests: adjust test-aio to new aio_poll() semantics Stefan Hajnoczi
  2013-07-29  7:39   ` Wenchao Xia
@ 2013-07-29  8:24   ` Wenchao Xia
  1 sibling, 0 replies; 53+ messages in thread
From: Wenchao Xia @ 2013-07-29  8:24 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Kevin Wolf, Ping Fan Liu, qemu-devel, Michael Roth, alex, Paolo Bonzini

Reviewed-by: Wenchao Xia <xiawenc@linux.vnet.ibm.com>

> aio_poll(ctx, true) will soon block if any fd handlers have been set.
> Previously it would only block when .io_flush() returned true.
> 
> This means that callers must check their wait condition *before*
> aio_poll() to avoid deadlock.
> 
> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>   tests/test-aio.c | 26 +++++++++++++++++---------
>   1 file changed, 17 insertions(+), 9 deletions(-)
> 
> diff --git a/tests/test-aio.c b/tests/test-aio.c
> index c173870..20bf5e6 100644
> --- a/tests/test-aio.c
> +++ b/tests/test-aio.c
> @@ -15,6 +15,13 @@
> 
>   AioContext *ctx;
> 
> +typedef struct {
> +    EventNotifier e;
> +    int n;
> +    int active;
> +    bool auto_set;
> +} EventNotifierTestData;
> +
>   /* Wait until there are no more BHs or AIO requests */
>   static void wait_for_aio(void)
>   {
> @@ -23,6 +30,14 @@ static void wait_for_aio(void)
>       }
>   }
> 
> +/* Wait until event notifier becomes inactive */
> +static void wait_until_inactive(EventNotifierTestData *data)
> +{
> +    while (data->active > 0) {
> +        aio_poll(ctx, true);
> +    }
> +}
> +
>   /* Simple callbacks for testing.  */
> 
>   typedef struct {
> @@ -50,13 +65,6 @@ static void bh_delete_cb(void *opaque)
>       }
>   }
> 
> -typedef struct {
> -    EventNotifier e;
> -    int n;
> -    int active;
> -    bool auto_set;
> -} EventNotifierTestData;
> -
>   static int event_active_cb(EventNotifier *e)
>   {
>       EventNotifierTestData *data = container_of(e, EventNotifierTestData, e);
> @@ -281,7 +289,7 @@ static void test_flush_event_notifier(void)
>       g_assert_cmpint(data.active, ==, 9);
>       g_assert(aio_poll(ctx, false));
> 
> -    wait_for_aio();
> +    wait_until_inactive(&data);
>       g_assert_cmpint(data.n, ==, 10);
>       g_assert_cmpint(data.active, ==, 0);
>       g_assert(!aio_poll(ctx, false));
> @@ -325,7 +333,7 @@ static void test_wait_event_notifier_noflush(void)
>       g_assert_cmpint(data.n, ==, 2);
> 
>       event_notifier_set(&dummy.e);
> -    wait_for_aio();
> +    wait_until_inactive(&dummy);
>       g_assert_cmpint(data.n, ==, 2);
>       g_assert_cmpint(dummy.n, ==, 1);
>       g_assert_cmpint(dummy.active, ==, 0);
> 


-- 
Best Regards

Wenchao Xia


* Re: [Qemu-devel] [PATCH v6 13/18] block/sheepdog: drop have_co_req() and aio_flush_request()
  2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 13/18] block/sheepdog: drop have_co_req() and aio_flush_request() Stefan Hajnoczi
  2013-07-26  5:50   ` MORITA Kazutaka
@ 2013-07-29  8:24   ` Wenchao Xia
  1 sibling, 0 replies; 53+ messages in thread
From: Wenchao Xia @ 2013-07-29  8:24 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Kevin Wolf, Ping Fan Liu, qemu-devel, Michael Roth, alex, Paolo Bonzini

Reviewed-by: Wenchao Xia <xiawenc@linux.vnet.ibm.com>

> .io_flush() is no longer called so drop have_co_req() and
> aio_flush_request().
> 
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>   block/sheepdog.c | 25 +++++--------------------
>   1 file changed, 5 insertions(+), 20 deletions(-)
> 
> diff --git a/block/sheepdog.c b/block/sheepdog.c
> index 6a41ad9..e216047 100644
> --- a/block/sheepdog.c
> +++ b/block/sheepdog.c
> @@ -509,13 +509,6 @@ static void restart_co_req(void *opaque)
>       qemu_coroutine_enter(co, NULL);
>   }
> 
> -static int have_co_req(void *opaque)
> -{
> -    /* this handler is set only when there is a pending request, so
> -     * always returns 1. */
> -    return 1;
> -}
> -
>   typedef struct SheepdogReqCo {
>       int sockfd;
>       SheepdogReq *hdr;
> @@ -538,14 +531,14 @@ static coroutine_fn void do_co_req(void *opaque)
>       unsigned int *rlen = srco->rlen;
> 
>       co = qemu_coroutine_self();
> -    qemu_aio_set_fd_handler(sockfd, NULL, restart_co_req, have_co_req, co);
> +    qemu_aio_set_fd_handler(sockfd, NULL, restart_co_req, NULL, co);
> 
>       ret = send_co_req(sockfd, hdr, data, wlen);
>       if (ret < 0) {
>           goto out;
>       }
> 
> -    qemu_aio_set_fd_handler(sockfd, restart_co_req, NULL, have_co_req, co);
> +    qemu_aio_set_fd_handler(sockfd, restart_co_req, NULL, NULL, co);
> 
>       ret = qemu_co_recv(sockfd, hdr, sizeof(*hdr));
>       if (ret < sizeof(*hdr)) {
> @@ -796,14 +789,6 @@ static void co_write_request(void *opaque)
>       qemu_coroutine_enter(s->co_send, NULL);
>   }
> 
> -static int aio_flush_request(void *opaque)
> -{
> -    BDRVSheepdogState *s = opaque;
> -
> -    return !QLIST_EMPTY(&s->inflight_aio_head) ||
> -        !QLIST_EMPTY(&s->pending_aio_head);
> -}
> -
>   /*
>    * Return a socket descriptor to read/write objects.
>    *
> @@ -819,7 +804,7 @@ static int get_sheep_fd(BDRVSheepdogState *s)
>           return fd;
>       }
> 
> -    qemu_aio_set_fd_handler(fd, co_read_response, NULL, aio_flush_request, s);
> +    qemu_aio_set_fd_handler(fd, co_read_response, NULL, NULL, s);
>       return fd;
>   }
> 
> @@ -1070,7 +1055,7 @@ static int coroutine_fn add_aio_request(BDRVSheepdogState *s, AIOReq *aio_req,
>       qemu_co_mutex_lock(&s->lock);
>       s->co_send = qemu_coroutine_self();
>       qemu_aio_set_fd_handler(s->fd, co_read_response, co_write_request,
> -                            aio_flush_request, s);
> +                            NULL, s);
>       socket_set_cork(s->fd, 1);
> 
>       /* send a header */
> @@ -1092,7 +1077,7 @@ static int coroutine_fn add_aio_request(BDRVSheepdogState *s, AIOReq *aio_req,
> 
>       socket_set_cork(s->fd, 0);
>       qemu_aio_set_fd_handler(s->fd, co_read_response, NULL,
> -                            aio_flush_request, s);
> +                            NULL, s);
>       qemu_co_mutex_unlock(&s->lock);
> 
>       return 0;
> 


-- 
Best Regards

Wenchao Xia


* Re: [Qemu-devel] [PATCH v6 14/18] block/ssh: drop return_true()
  2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 14/18] block/ssh: drop return_true() Stefan Hajnoczi
@ 2013-07-29  8:25   ` Wenchao Xia
  0 siblings, 0 replies; 53+ messages in thread
From: Wenchao Xia @ 2013-07-29  8:25 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Kevin Wolf, Ping Fan Liu, qemu-devel, Michael Roth, alex, Paolo Bonzini

Reviewed-by: Wenchao Xia <xiawenc@linux.vnet.ibm.com>

> .io_flush() is no longer called so drop return_true().
> 
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>   block/ssh.c | 10 +---------
>   1 file changed, 1 insertion(+), 9 deletions(-)
> 
> diff --git a/block/ssh.c b/block/ssh.c
> index d7e7bf8..e149da9 100644
> --- a/block/ssh.c
> +++ b/block/ssh.c
> @@ -740,14 +740,6 @@ static void restart_coroutine(void *opaque)
>       qemu_coroutine_enter(co, NULL);
>   }
> 
> -/* Always true because when we have called set_fd_handler there is
> - * always a request being processed.
> - */
> -static int return_true(void *opaque)
> -{
> -    return 1;
> -}
> -
>   static coroutine_fn void set_fd_handler(BDRVSSHState *s)
>   {
>       int r;
> @@ -766,7 +758,7 @@ static coroutine_fn void set_fd_handler(BDRVSSHState *s)
>       DPRINTF("s->sock=%d rd_handler=%p wr_handler=%p", s->sock,
>               rd_handler, wr_handler);
> 
> -    qemu_aio_set_fd_handler(s->sock, rd_handler, wr_handler, return_true, co);
> +    qemu_aio_set_fd_handler(s->sock, rd_handler, wr_handler, NULL, co);
>   }
> 
>   static coroutine_fn void clear_fd_handler(BDRVSSHState *s)
> 


-- 
Best Regards

Wenchao Xia


* Re: [Qemu-devel] [PATCH v6 15/18] dataplane/virtio-blk: drop flush_true() and flush_io()
  2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 15/18] dataplane/virtio-blk: drop flush_true() and flush_io() Stefan Hajnoczi
@ 2013-07-29  8:32   ` Wenchao Xia
  0 siblings, 0 replies; 53+ messages in thread
From: Wenchao Xia @ 2013-07-29  8:32 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Kevin Wolf, Ping Fan Liu, qemu-devel, Michael Roth, alex, Paolo Bonzini

Reviewed-by: Wenchao Xia <xiawenc@linux.vnet.ibm.com>

> .io_flush() is no longer called so drop flush_true() and flush_io().
> 
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>   hw/block/dataplane/virtio-blk.c | 17 ++---------------
>   1 file changed, 2 insertions(+), 15 deletions(-)
> 
> diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
> index 8d3e145..f8624d1 100644
> --- a/hw/block/dataplane/virtio-blk.c
> +++ b/hw/block/dataplane/virtio-blk.c
> @@ -264,11 +264,6 @@ static int process_request(IOQueue *ioq, struct iovec iov[],
>       }
>   }
> 
> -static int flush_true(EventNotifier *e)
> -{
> -    return true;
> -}
> -
>   static void handle_notify(EventNotifier *e)
>   {
>       VirtIOBlockDataPlane *s = container_of(e, VirtIOBlockDataPlane,
> @@ -348,14 +343,6 @@ static void handle_notify(EventNotifier *e)
>       }
>   }
> 
> -static int flush_io(EventNotifier *e)
> -{
> -    VirtIOBlockDataPlane *s = container_of(e, VirtIOBlockDataPlane,
> -                                           io_notifier);
> -
> -    return s->num_reqs > 0;
> -}
> -
>   static void handle_io(EventNotifier *e)
>   {
>       VirtIOBlockDataPlane *s = container_of(e, VirtIOBlockDataPlane,
> @@ -486,7 +473,7 @@ void virtio_blk_data_plane_start(VirtIOBlockDataPlane *s)
>           exit(1);
>       }
>       s->host_notifier = *virtio_queue_get_host_notifier(vq);
> -    aio_set_event_notifier(s->ctx, &s->host_notifier, handle_notify, flush_true);
> +    aio_set_event_notifier(s->ctx, &s->host_notifier, handle_notify, NULL);
> 
>       /* Set up ioqueue */
>       ioq_init(&s->ioqueue, s->fd, REQ_MAX);
> @@ -494,7 +481,7 @@ void virtio_blk_data_plane_start(VirtIOBlockDataPlane *s)
>           ioq_put_iocb(&s->ioqueue, &s->requests[i].iocb);
>       }
>       s->io_notifier = *ioq_get_notifier(&s->ioqueue);
> -    aio_set_event_notifier(s->ctx, &s->io_notifier, handle_io, flush_io);
> +    aio_set_event_notifier(s->ctx, &s->io_notifier, handle_io, NULL);
> 
>       s->started = true;
>       trace_virtio_blk_data_plane_start(s);
> 


-- 
Best Regards

Wenchao Xia


* Re: [Qemu-devel] [PATCH v6 16/18] thread-pool: drop thread_pool_active()
  2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 16/18] thread-pool: drop thread_pool_active() Stefan Hajnoczi
@ 2013-07-29  8:33   ` Wenchao Xia
  0 siblings, 0 replies; 53+ messages in thread
From: Wenchao Xia @ 2013-07-29  8:33 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Kevin Wolf, Ping Fan Liu, qemu-devel, Michael Roth, alex, Paolo Bonzini

Reviewed-by: Wenchao Xia <xiawenc@linux.vnet.ibm.com>

> .io_flush() is no longer called so drop thread_pool_active().  The block
> layer is the only thread-pool.c user and it already tracks in-flight
> requests, therefore we do not need thread_pool_active().
> 
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>   thread-pool.c | 8 +-------
>   1 file changed, 1 insertion(+), 7 deletions(-)
> 
> diff --git a/thread-pool.c b/thread-pool.c
> index 0ebd4c2..096f007 100644
> --- a/thread-pool.c
> +++ b/thread-pool.c
> @@ -197,12 +197,6 @@ restart:
>       }
>   }
> 
> -static int thread_pool_active(EventNotifier *notifier)
> -{
> -    ThreadPool *pool = container_of(notifier, ThreadPool, notifier);
> -    return !QLIST_EMPTY(&pool->head);
> -}
> -
>   static void thread_pool_cancel(BlockDriverAIOCB *acb)
>   {
>       ThreadPoolElement *elem = (ThreadPoolElement *)acb;
> @@ -310,7 +304,7 @@ static void thread_pool_init_one(ThreadPool *pool, AioContext *ctx)
>       QTAILQ_INIT(&pool->request_list);
> 
>       aio_set_event_notifier(ctx, &pool->notifier, event_notifier_ready,
> -                           thread_pool_active);
> +                           NULL);
>   }
> 
>   ThreadPool *thread_pool_new(AioContext *ctx)
> 


-- 
Best Regards

Wenchao Xia


* Re: [Qemu-devel] [PATCH v6 17/18] tests: drop event_active_cb()
  2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 17/18] tests: drop event_active_cb() Stefan Hajnoczi
@ 2013-07-29  8:34   ` Wenchao Xia
  0 siblings, 0 replies; 53+ messages in thread
From: Wenchao Xia @ 2013-07-29  8:34 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Kevin Wolf, Ping Fan Liu, qemu-devel, Michael Roth, alex, Paolo Bonzini

Reviewed-by: Wenchao Xia <xiawenc@linux.vnet.ibm.com>

> Drop the io_flush argument to aio_set_event_notifier().
> 
> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>   tests/test-aio.c | 22 ++++++++--------------
>   1 file changed, 8 insertions(+), 14 deletions(-)
> 
> diff --git a/tests/test-aio.c b/tests/test-aio.c
> index 1251952..7b2892a 100644
> --- a/tests/test-aio.c
> +++ b/tests/test-aio.c
> @@ -65,12 +65,6 @@ static void bh_delete_cb(void *opaque)
>       }
>   }
> 
> -static int event_active_cb(EventNotifier *e)
> -{
> -    EventNotifierTestData *data = container_of(e, EventNotifierTestData, e);
> -    return data->active > 0;
> -}
> -
>   static void event_ready_cb(EventNotifier *e)
>   {
>       EventNotifierTestData *data = container_of(e, EventNotifierTestData, e);
> @@ -239,7 +233,7 @@ static void test_set_event_notifier(void)
>   {
>       EventNotifierTestData data = { .n = 0, .active = 0 };
>       event_notifier_init(&data.e, false);
> -    aio_set_event_notifier(ctx, &data.e, event_ready_cb, event_active_cb);
> +    aio_set_event_notifier(ctx, &data.e, event_ready_cb, NULL);
>       g_assert(!aio_poll(ctx, false));
>       g_assert_cmpint(data.n, ==, 0);
> 
> @@ -253,7 +247,7 @@ static void test_wait_event_notifier(void)
>   {
>       EventNotifierTestData data = { .n = 0, .active = 1 };
>       event_notifier_init(&data.e, false);
> -    aio_set_event_notifier(ctx, &data.e, event_ready_cb, event_active_cb);
> +    aio_set_event_notifier(ctx, &data.e, event_ready_cb, NULL);
>       g_assert(!aio_poll(ctx, false));
>       g_assert_cmpint(data.n, ==, 0);
>       g_assert_cmpint(data.active, ==, 1);
> @@ -278,7 +272,7 @@ static void test_flush_event_notifier(void)
>   {
>       EventNotifierTestData data = { .n = 0, .active = 10, .auto_set = true };
>       event_notifier_init(&data.e, false);
> -    aio_set_event_notifier(ctx, &data.e, event_ready_cb, event_active_cb);
> +    aio_set_event_notifier(ctx, &data.e, event_ready_cb, NULL);
>       g_assert(!aio_poll(ctx, false));
>       g_assert_cmpint(data.n, ==, 0);
>       g_assert_cmpint(data.active, ==, 10);
> @@ -318,7 +312,7 @@ static void test_wait_event_notifier_noflush(void)
> 
>       /* An active event notifier forces aio_poll to look at EventNotifiers.  */
>       event_notifier_init(&dummy.e, false);
> -    aio_set_event_notifier(ctx, &dummy.e, event_ready_cb, event_active_cb);
> +    aio_set_event_notifier(ctx, &dummy.e, event_ready_cb, NULL);
> 
>       event_notifier_set(&data.e);
>       g_assert(aio_poll(ctx, false));
> @@ -521,7 +515,7 @@ static void test_source_set_event_notifier(void)
>   {
>       EventNotifierTestData data = { .n = 0, .active = 0 };
>       event_notifier_init(&data.e, false);
> -    aio_set_event_notifier(ctx, &data.e, event_ready_cb, event_active_cb);
> +    aio_set_event_notifier(ctx, &data.e, event_ready_cb, NULL);
>       while (g_main_context_iteration(NULL, false));
>       g_assert_cmpint(data.n, ==, 0);
> 
> @@ -535,7 +529,7 @@ static void test_source_wait_event_notifier(void)
>   {
>       EventNotifierTestData data = { .n = 0, .active = 1 };
>       event_notifier_init(&data.e, false);
> -    aio_set_event_notifier(ctx, &data.e, event_ready_cb, event_active_cb);
> +    aio_set_event_notifier(ctx, &data.e, event_ready_cb, NULL);
>       g_assert(g_main_context_iteration(NULL, false));
>       g_assert_cmpint(data.n, ==, 0);
>       g_assert_cmpint(data.active, ==, 1);
> @@ -560,7 +554,7 @@ static void test_source_flush_event_notifier(void)
>   {
>       EventNotifierTestData data = { .n = 0, .active = 10, .auto_set = true };
>       event_notifier_init(&data.e, false);
> -    aio_set_event_notifier(ctx, &data.e, event_ready_cb, event_active_cb);
> +    aio_set_event_notifier(ctx, &data.e, event_ready_cb, NULL);
>       g_assert(g_main_context_iteration(NULL, false));
>       g_assert_cmpint(data.n, ==, 0);
>       g_assert_cmpint(data.active, ==, 10);
> @@ -600,7 +594,7 @@ static void test_source_wait_event_notifier_noflush(void)
> 
>       /* An active event notifier forces aio_poll to look at EventNotifiers.  */
>       event_notifier_init(&dummy.e, false);
> -    aio_set_event_notifier(ctx, &dummy.e, event_ready_cb, event_active_cb);
> +    aio_set_event_notifier(ctx, &dummy.e, event_ready_cb, NULL);
> 
>       event_notifier_set(&data.e);
>       g_assert(g_main_context_iteration(NULL, false));
> 


-- 
Best Regards

Wenchao Xia


* Re: [Qemu-devel] [PATCH v6 18/18] aio: drop io_flush argument
  2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 18/18] aio: drop io_flush argument Stefan Hajnoczi
@ 2013-07-29  8:37   ` Wenchao Xia
  0 siblings, 0 replies; 53+ messages in thread
From: Wenchao Xia @ 2013-07-29  8:37 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Kevin Wolf, Ping Fan Liu, qemu-devel, Michael Roth, alex, Paolo Bonzini

Reviewed-by: Wenchao Xia <xiawenc@linux.vnet.ibm.com>

> The .io_flush() handler no longer exists and has no users.  Drop the
> io_flush argument to aio_set_fd_handler() and related functions.
> 
> The AioFlushEventNotifierHandler and AioFlushHandler typedefs are no
> longer used and are dropped too.
> 
> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>   aio-posix.c                     |  7 ++-----
>   aio-win32.c                     |  3 +--
>   async.c                         |  4 ++--
>   block/curl.c                    |  9 ++++-----
>   block/gluster.c                 |  7 +++----
>   block/iscsi.c                   |  3 +--
>   block/linux-aio.c               |  3 +--
>   block/nbd.c                     | 11 ++++-------
>   block/rbd.c                     |  4 ++--
>   block/sheepdog.c                | 18 ++++++++----------
>   block/ssh.c                     |  4 ++--
>   hw/block/dataplane/virtio-blk.c |  8 ++++----
>   include/block/aio.h             | 14 ++------------
>   main-loop.c                     |  9 +++------
>   tests/test-aio.c                | 40 ++++++++++++++++++++--------------------
>   thread-pool.c                   |  5 ++---
>   16 files changed, 61 insertions(+), 88 deletions(-)
> 

-- 
Best Regards

Wenchao Xia


* Re: [Qemu-devel] [PATCH v6 05/18] tests: adjust test-thread-pool to new aio_poll() semantics
  2013-07-29  7:51   ` Wenchao Xia
@ 2013-07-29 14:26     ` Stefan Hajnoczi
  0 siblings, 0 replies; 53+ messages in thread
From: Stefan Hajnoczi @ 2013-07-29 14:26 UTC (permalink / raw)
  To: Wenchao Xia
  Cc: Kevin Wolf, Ping Fan Liu, Stefan Hajnoczi, qemu-devel,
	Michael Roth, alex, Paolo Bonzini

On Mon, Jul 29, 2013 at 03:51:08PM +0800, Wenchao Xia wrote:
> I was confused for some time about the reason for the change; I think
> it can be summarized as: "The meaning of aio_poll() will be changed, so
> change the caller in this test case."

Exactly, that's what the subject line means.

Stefan


* Re: [Qemu-devel] [PATCH v6 06/18] aio: stop using .io_flush()
  2013-07-29  8:08   ` Wenchao Xia
@ 2013-07-29 14:32     ` Stefan Hajnoczi
  0 siblings, 0 replies; 53+ messages in thread
From: Stefan Hajnoczi @ 2013-07-29 14:32 UTC (permalink / raw)
  To: Wenchao Xia
  Cc: Kevin Wolf, Ping Fan Liu, Stefan Hajnoczi, qemu-devel,
	Michael Roth, alex, Paolo Bonzini

On Mon, Jul 29, 2013 at 04:08:34PM +0800, Wenchao Xia wrote:
> > Now that aio_poll() users check their termination condition themselves,
> > it is no longer necessary to call .io_flush() handlers.
> > 
> > The behavior of aio_poll() changes as follows:
> > 
> > 1. .io_flush() is no longer invoked and file descriptors are *always*
> > monitored.  Previously returning 0 from .io_flush() would skip this file
> > descriptor.
> > 
> > Due to this change it is essential to check that requests are pending
> > before calling qemu_aio_wait().  Failure to do so means we block, for
> > example, waiting for an idle iSCSI socket to become readable when there
> > are no requests.  Currently all qemu_aio_wait()/aio_poll() callers check
> > before calling.
> > 
> > 2. aio_poll() now returns true if progress was made (BH or fd handlers
> > executed) and false otherwise.  Previously it would return true whenever
> > 'busy', which means that .io_flush() returned true.  The 'busy' concept
> > no longer exists so just progress is returned.
> > 
> > Due to this change we need to update tests/test-aio.c which asserts
> > aio_poll() return values.  Note that QEMU doesn't actually rely on these
> > return values so only tests/test-aio.c cares.
> > 
> > Note that ctx->notifier, the EventNotifier fd used for aio_notify(), is
> > now handled as a special case.  This is a little ugly but maintains
> > aio_poll() semantics, i.e. aio_notify() does not count as 'progress' and
> > aio_poll() avoids blocking when the user has not set any fd handlers yet.
> > 
>   I guess the goal is to distinguish QEMU's internally used fd from
> the real, meaningful external fds such as sockets?

Exactly, the AioContext's own internal EventNotifier should not count.
For example, aio_poll() must not block if the user has not added any
AioHandlers yet.

> How about distinguishing them with different GSources?

AioContext itself is designed as a GSource.  Internally it does not use
GSource or the glib event loop.

Introducing the glib event loop here would be a big change.  I hope in
the future we can unify main-loop.c and AioContext.  At that point we
might operate on GSources.

Stefan
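
One plausible shape of the ctx->notifier special case described above
(a simplified sketch of the idea, not the actual aio-posix.c code):

    /* Decide whether blocking is allowed: the context's own
     * EventNotifier used by aio_notify() is skipped, so an AioContext
     * with no user-registered handlers never blocks in aio_poll(). */
    bool busy = false;
    QLIST_FOREACH(node, &ctx->aio_handlers, node) {
        if (node->pfd.fd != event_notifier_get_fd(&ctx->notifier)) {
            busy = true;    /* at least one real handler exists */
        }
    }
    if (!busy) {
        return progress;    /* dispatching ctx->notifier is not progress */
    }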


* Re: [Qemu-devel] [PATCH v6 01/18] block: ensure bdrv_drain_all() works during bdrv_delete()
  2013-07-26  6:43   ` Wenchao Xia
@ 2013-08-06 15:06     ` Stefan Hajnoczi
  2013-08-07  2:42       ` Wenchao Xia
  0 siblings, 1 reply; 53+ messages in thread
From: Stefan Hajnoczi @ 2013-08-06 15:06 UTC (permalink / raw)
  To: Wenchao Xia
  Cc: Kevin Wolf, Ping Fan Liu, alex, qemu-stable, qemu-devel,
	Michael Roth, Stefan Hajnoczi, Paolo Bonzini

On Fri, Jul 26, 2013 at 02:43:44PM +0800, Wenchao Xia wrote:
> Reviewed-by: Wenchao Xia <xiawenc@linux.vnet.ibm.com>
> 
> One question: old code missed itself in bdrv_drain_all(), is that a bug?

Sorry, I don't understand the question.  Can you rephrase it?

> > In bdrv_delete() make sure to call bdrv_make_anon() *after* bdrv_close()
> > so that the device is still seen by bdrv_drain_all() when iterating
> > bdrv_states.
> > 
> > Cc: qemu-stable@nongnu.org
> > Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> > ---
> >   block.c | 4 ++--
> >   1 file changed, 2 insertions(+), 2 deletions(-)
> > 
> > diff --git a/block.c b/block.c
> > index 6cd39fa..9d68270 100644
> > --- a/block.c
> > +++ b/block.c
> > @@ -1600,11 +1600,11 @@ void bdrv_delete(BlockDriverState *bs)
> >       assert(!bs->job);
> >       assert(!bs->in_use);
> > 
> > +    bdrv_close(bs);
> > +
> >       /* remove from list, if necessary */
> >       bdrv_make_anon(bs);
> > 
> > -    bdrv_close(bs);
> > -
> >       g_free(bs);
> >   }
> > 
> 
> 
> -- 
> Best Regards
> 
> Wenchao Xia
> 
> 


* Re: [Qemu-devel] [PATCH v6 00/18] aio: drop io_flush()
  2013-07-25 15:18 [Qemu-devel] [PATCH v6 00/18] aio: drop io_flush() Stefan Hajnoczi
                   ` (17 preceding siblings ...)
  2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 18/18] aio: drop io_flush argument Stefan Hajnoczi
@ 2013-08-06 15:07 ` Stefan Hajnoczi
  2013-08-09  7:14   ` Wenchao Xia
  18 siblings, 1 reply; 53+ messages in thread
From: Stefan Hajnoczi @ 2013-08-06 15:07 UTC (permalink / raw)
  To: Kevin Wolf
  Cc: Ping Fan Liu, Stefan Hajnoczi, qemu-devel, Michael Roth, alex,
	Paolo Bonzini

On Thu, Jul 25, 2013 at 05:18:07PM +0200, Stefan Hajnoczi wrote:
> v6:
>  * Fix block/stream.c:close_unused_images() dangling pointer in Patch 2
>  * Rebase onto qemu.git/master
> 
> v5:
>  * Split out bdrv_delete() drain fix [bonzini]
>  * Fix commit message  [bonzini]
> 
> v4:
>  * Ensure pending BHs are processed in bdrv_drain_all() [bonzini]
> 
> v3:
>  * I forgot about this series, time to push it again!
>  * Rebase onto qemu.git/master
>  * Drop now-unused AioFlushHandler typedef [bonzini]
> 
> This series gets rid of aio's .io_flush() callback.  It's based on Paolo's
> insight that bdrv_drain_all() can be implemented using the bs->tracked_requests
> list.  io_flush() is redundant since the block layer already knows if requests
> are pending.
> 
> The point of this effort is to simplify our event loop(s).  If we can reduce
> custom features like io_flush() it becomes possible to unify AioContext and
> main-loop.c, maybe even to replace it with glib's main loop.
> 
> This is also important to me for dataplane, since bdrv_drain_all() is one of
> the synchronization points between threads.  QEMU monitor commands invoke
> bdrv_drain_all() while the block device is accessed from a dataplane thread.
> 
> Background on io_flush() semantics:
> 
> The io_flush() handler must return 1 if this aio fd handler is active.  That
> is, requests are pending and we'd like to make progress by monitoring the fd.
> 
> If io_flush() returns 0, the aio event loop skips monitoring this fd.  This is
> critical for block drivers like iscsi where we have an idle TCP socket which we
> want to block on *only* when there are pending requests.
> 
> The series works as follows:
> 
> Part 1 - stop relying on .io_flush()
> 
> The first patches change aio_poll() callers to check their termination
> condition themselves instead of relying on .io_flush():
> 
>   bcfa688 block: ensure bdrv_drain_all() works during bdrv_delete()
>   c1351f5 block: stop relying on io_flush() in bdrv_drain_all()
>   b75d9e5 dataplane/virtio-blk: check exit conditions before aio_poll()
>   18fc4d6 tests: adjust test-aio to new aio_poll() semantics
>   4c52a7c tests: adjust test-thread-pool to new aio_poll() semantics
> 
> Part 2 - stop calling .io_flush() from aio_poll()
> 
> The semantic change to aio_poll() is made here:
> 
>   bacae7a aio: stop using .io_flush()
> 
> Part 3 - drop io_flush() handlers, just pass NULL for the io_flush argument
> 
> Remove the now dead code from all .io_flush() users:
> 
>   720e0ad block/curl: drop curl_aio_flush()
>   2830cf3 block/gluster: drop qemu_gluster_aio_flush_cb()
>   c683e8e block/iscsi: drop iscsi_process_flush()
>   5019686 block/linux-aio: drop qemu_laio_completion_cb()
>   b862bcf block/nbd: drop nbd_have_request()
>   516e517 block/rbd: drop qemu_rbd_aio_flush_cb()
>   177090b block/sheepdog: drop have_co_req() and aio_flush_request()
>   e2e9e85 block/ssh: drop return_true()
>   7cc7bac dataplane/virtio-blk: drop flush_true() and flush_io()
>   db3cb18 thread-pool: drop thread_pool_active()
>   10a783b tests: drop event_active_cb()
> 
> Part 4 - remove io_flush arguments from aio functions
> 
> The big, mechanical patch that drops the io_flush argument:
> 
>   4a8c36b aio: drop io_flush argument
> 
> Note that I split Part 3 from Part 4 so it's easy to review individual block
> drivers.
> 
> Stefan Hajnoczi (18):
>   block: ensure bdrv_drain_all() works during bdrv_delete()
>   block: stop relying on io_flush() in bdrv_drain_all()
>   dataplane/virtio-blk: check exit conditions before aio_poll()
>   tests: adjust test-aio to new aio_poll() semantics
>   tests: adjust test-thread-pool to new aio_poll() semantics
>   aio: stop using .io_flush()
>   block/curl: drop curl_aio_flush()
>   block/gluster: drop qemu_gluster_aio_flush_cb()
>   block/iscsi: drop iscsi_process_flush()
>   block/linux-aio: drop qemu_laio_completion_cb()
>   block/nbd: drop nbd_have_request()
>   block/rbd: drop qemu_rbd_aio_flush_cb()
>   block/sheepdog: drop have_co_req() and aio_flush_request()
>   block/ssh: drop return_true()
>   dataplane/virtio-blk: drop flush_true() and flush_io()
>   thread-pool: drop thread_pool_active()
>   tests: drop event_active_cb()
>   aio: drop io_flush argument
> 
>  aio-posix.c                     | 36 ++++++------------
>  aio-win32.c                     | 37 ++++++++-----------
>  async.c                         |  4 +-
>  block.c                         | 50 ++++++++++++++++++-------
>  block/curl.c                    | 25 ++-----------
>  block/gluster.c                 | 21 ++---------
>  block/iscsi.c                   | 10 +----
>  block/linux-aio.c               | 18 +--------
>  block/nbd.c                     | 18 ++-------
>  block/rbd.c                     | 16 +-------
>  block/sheepdog.c                | 33 ++++-------------
>  block/ssh.c                     | 12 +-----
>  block/stream.c                  |  6 ++-
>  hw/block/dataplane/virtio-blk.c | 25 +++----------
>  include/block/aio.h             | 14 +------
>  main-loop.c                     |  9 ++---
>  tests/test-aio.c                | 82 +++++++++++++++++++++--------------------
>  tests/test-thread-pool.c        | 24 ++++++------
>  thread-pool.c                   | 11 +-----
>  19 files changed, 163 insertions(+), 288 deletions(-)

Ping?


* Re: [Qemu-devel] [PATCH v6 01/18] block: ensure bdrv_drain_all() works during bdrv_delete()
  2013-08-06 15:06     ` Stefan Hajnoczi
@ 2013-08-07  2:42       ` Wenchao Xia
  2013-08-07  7:31         ` Stefan Hajnoczi
  0 siblings, 1 reply; 53+ messages in thread
From: Wenchao Xia @ 2013-08-07  2:42 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Kevin Wolf, Ping Fan Liu, Stefan Hajnoczi, qemu-devel,
	Michael Roth, qemu-stable, alex, Paolo Bonzini

> On Fri, Jul 26, 2013 at 02:43:44PM +0800, Wenchao Xia wrote:
>> Reviewed-by: Wenchao Xia <xiawenc@linux.vnet.ibm.com>
>>
>> One question: old code missed itself in bdrv_drain_all(), is that a bug?
>
> Sorry, I don't understand the question.  Can you rephrase it?
>
   Before this patch, in the code path bdrv_close()->bdrv_drain_all(),
*bs was no longer present in bdrv_states, so the code missed the chance
to drain the requests on *bs.  That is a bug, and this patch is actually
a bug fix?


>>> In bdrv_delete() make sure to call bdrv_make_anon() *after* bdrv_close()
>>> so that the device is still seen by bdrv_drain_all() when iterating
>>> bdrv_states.
>>>
>>> Cc: qemu-stable@nongnu.org
>>> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
>>> ---
>>>    block.c | 4 ++--
>>>    1 file changed, 2 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/block.c b/block.c
>>> index 6cd39fa..9d68270 100644
>>> --- a/block.c
>>> +++ b/block.c
>>> @@ -1600,11 +1600,11 @@ void bdrv_delete(BlockDriverState *bs)
>>>        assert(!bs->job);
>>>        assert(!bs->in_use);
>>>
>>> +    bdrv_close(bs);
>>> +
>>>        /* remove from list, if necessary */
>>>        bdrv_make_anon(bs);
>>>
>>> -    bdrv_close(bs);
>>> -
>>>        g_free(bs);
>>>    }
>>>
>>
>>
>> --
>> Best Regards
>>
>> Wenchao Xia
>>
>>
>


-- 
Best Regards

Wenchao Xia


* Re: [Qemu-devel] [PATCH v6 01/18] block: ensure bdrv_drain_all() works during bdrv_delete()
  2013-08-07  2:42       ` Wenchao Xia
@ 2013-08-07  7:31         ` Stefan Hajnoczi
  0 siblings, 0 replies; 53+ messages in thread
From: Stefan Hajnoczi @ 2013-08-07  7:31 UTC (permalink / raw)
  To: Wenchao Xia
  Cc: Kevin Wolf, Ping Fan Liu, Stefan Hajnoczi, qemu-devel,
	Michael Roth, qemu-stable, alex, Paolo Bonzini

On Wed, Aug 07, 2013 at 10:42:26AM +0800, Wenchao Xia wrote:
> >On Fri, Jul 26, 2013 at 02:43:44PM +0800, Wenchao Xia wrote:
> >>Reviewed-by: Wenchao Xia <xiawenc@linux.vnet.ibm.com>
> >>
> >>One question: old code missed itself in bdrv_drain_all(), is that a bug?
> >
> >Sorry, I don't understand the question.  Can you rephrase it?
> >
>   Before this patch, in the code path bdrv_close()->bdrv_drain_all(),
> *bs was no longer present in bdrv_states, so the code missed the chance
> to drain the requests on *bs.  That is a bug, and this patch is actually
> a bug fix?

Yes, exactly.  It's a bug fix and therefore I CCed
qemu-stable@nongnu.org.

Stefan
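
The mechanism behind the fix, in outline: bdrv_drain_all() iterates the
global bdrv_states list, and bdrv_make_anon() removes the device from
that list.  A simplified sketch (not the exact block.c code):

    /* A BlockDriverState that has already been made anonymous is
     * invisible here, so its in-flight requests would never drain. */
    QTAILQ_FOREACH(bs, &bdrv_states, list) {
        /* wait for bs->tracked_requests to become empty */
    }

Calling bdrv_close() before bdrv_make_anon() keeps the device on the
list while its requests are drained.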


* Re: [Qemu-devel] [PATCH v6 01/18] block: ensure bdrv_drain_all() works during bdrv_delete()
  2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 01/18] block: ensure bdrv_drain_all() works during bdrv_delete() Stefan Hajnoczi
  2013-07-26  6:43   ` Wenchao Xia
@ 2013-08-07  7:42   ` Stefan Hajnoczi
  1 sibling, 0 replies; 53+ messages in thread
From: Stefan Hajnoczi @ 2013-08-07  7:42 UTC (permalink / raw)
  To: Kevin Wolf
  Cc: Ping Fan Liu, Stefan Hajnoczi, qemu-stable, qemu-devel, alex,
	Paolo Bonzini, Michael Roth

On Thu, Jul 25, 2013 at 05:18:08PM +0200, Stefan Hajnoczi wrote:
> In bdrv_delete() make sure to call bdrv_make_anon() *after* bdrv_close()
> so that the device is still seen by bdrv_drain_all() when iterating
> bdrv_states.
> 
> Cc: qemu-stable@nongnu.org
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>  block.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/block.c b/block.c
> index 6cd39fa..9d68270 100644
> --- a/block.c
> +++ b/block.c
> @@ -1600,11 +1600,11 @@ void bdrv_delete(BlockDriverState *bs)
>      assert(!bs->job);
>      assert(!bs->in_use);
>  
> +    bdrv_close(bs);
> +
>      /* remove from list, if necessary */
>      bdrv_make_anon(bs);
>  
> -    bdrv_close(bs);
> -
>      g_free(bs);
>  }

Kevin: Although I haven't seen reports of this bug, it might be a good
idea to merge this single patch for QEMU 1.6.

Stefan


* Re: [Qemu-devel] [PATCH v6 00/18] aio: drop io_flush()
  2013-08-06 15:07 ` [Qemu-devel] [PATCH v6 00/18] aio: drop io_flush() Stefan Hajnoczi
@ 2013-08-09  7:14   ` Wenchao Xia
  2013-08-09  8:31     ` Stefan Hajnoczi
  0 siblings, 1 reply; 53+ messages in thread
From: Wenchao Xia @ 2013-08-09  7:14 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Kevin Wolf, Ping Fan Liu, Stefan Hajnoczi, qemu-devel,
	Michael Roth, alex, Paolo Bonzini

On 2013-8-6 23:07, Stefan Hajnoczi wrote:
> On Thu, Jul 25, 2013 at 05:18:07PM +0200, Stefan Hajnoczi wrote:
>> v6:
>>   * Fix block/stream.c:close_unused_images() dangling pointer in Patch 2
>>   * Rebase onto qemu.git/master
>>
>> v5:
>>   * Split out bdrv_delete() drain fix [bonzini]
>>   * Fix commit message  [bonzini]
>>
>> v4:
>>   * Ensure pending BHs are processed in bdrv_drain_all() [bonzini]
>>
>> v3:
>>   * I forgot about this series, time to push it again!
>>   * Rebase onto qemu.git/master
>>   * Drop now-unused AioFlushHandler typedef [bonzini]
>>
>> This series gets rid of aio's .io_flush() callback.  It's based on Paolo's
>> insight that bdrv_drain_all() can be implemented using the bs->tracked_requests
>> list.  io_flush() is redundant since the block layer already knows if requests
>> are pending.
>>
>> The point of this effort is to simplify our event loop(s).  If we can reduce
>> custom features like io_flush() it becomes possible to unify AioContext and
>> main-loop.c, maybe even to replace it with glib's main loop.
>>
>> This is also important to me for dataplane, since bdrv_drain_all() is one of
>> the synchronization points between threads.  QEMU monitor commands invoke
>> bdrv_drain_all() while the block device is accessed from a dataplane thread.
>>
>> Background on io_flush() semantics:
>>
>> The io_flush() handler must return 1 if this aio fd handler is active.  That
>> is, requests are pending and we'd like to make progress by monitoring the fd.
>>
>> If io_flush() returns 0, the aio event loop skips monitoring this fd.  This is
>> critical for block drivers like iscsi where we have an idle TCP socket which we
>> want to block on *only* when there are pending requests.
>>
>> The series works as follows:
>>
>> Part 1 - stop relying on .io_flush()
>>
>> The first patches change aio_poll() callers to check their termination
>> condition themselves instead of relying on .io_flush():
>>
>>    bcfa688 block: ensure bdrv_drain_all() works during bdrv_delete()
>>    c1351f5 block: stop relying on io_flush() in bdrv_drain_all()
>>    b75d9e5 dataplane/virtio-blk: check exit conditions before aio_poll()
>>    18fc4d6 tests: adjust test-aio to new aio_poll() semantics
>>    4c52a7c tests: adjust test-thread-pool to new aio_poll() semantics
>>
>> Part 2 - stop calling .io_flush() from aio_poll()
>>
>> The semantic change to aio_poll() is made here:
>>
>>    bacae7a aio: stop using .io_flush()
>>
>> Part 3 - drop io_flush() handlers, just pass NULL for the io_flush argument
>>
>> Remove the now dead code from all .io_flush() users:
>>
>>    720e0ad block/curl: drop curl_aio_flush()
>>    2830cf3 block/gluster: drop qemu_gluster_aio_flush_cb()
>>    c683e8e block/iscsi: drop iscsi_process_flush()
>>    5019686 block/linux-aio: drop qemu_laio_completion_cb()
>>    b862bcf block/nbd: drop nbd_have_request()
>>    516e517 block/rbd: drop qemu_rbd_aio_flush_cb()
>>    177090b block/sheepdog: drop have_co_req() and aio_flush_request()
>>    e2e9e85 block/ssh: drop return_true()
>>    7cc7bac dataplane/virtio-blk: drop flush_true() and flush_io()
>>    db3cb18 thread-pool: drop thread_pool_active()
>>    10a783b tests: drop event_active_cb()
>>
>> Part 4 - remove io_flush arguments from aio functions
>>
>> The big, mechanical patch that drops the io_flush argument:
>>
>>    4a8c36b aio: drop io_flush argument
>>
>> Note that I split Part 3 from Part 4 so it's easy to review individual block
>> drivers.
>>
>> Stefan Hajnoczi (18):
>>    block: ensure bdrv_drain_all() works during bdrv_delete()
>>    block: stop relying on io_flush() in bdrv_drain_all()
>>    dataplane/virtio-blk: check exit conditions before aio_poll()
>>    tests: adjust test-aio to new aio_poll() semantics
>>    tests: adjust test-thread-pool to new aio_poll() semantics
>>    aio: stop using .io_flush()
>>    block/curl: drop curl_aio_flush()
>>    block/gluster: drop qemu_gluster_aio_flush_cb()
>>    block/iscsi: drop iscsi_process_flush()
>>    block/linux-aio: drop qemu_laio_completion_cb()
>>    block/nbd: drop nbd_have_request()
>>    block/rbd: drop qemu_rbd_aio_flush_cb()
>>    block/sheepdog: drop have_co_req() and aio_flush_request()
>>    block/ssh: drop return_true()
>>    dataplane/virtio-blk: drop flush_true() and flush_io()
>>    thread-pool: drop thread_pool_active()
>>    tests: drop event_active_cb()
>>    aio: drop io_flush argument
>>
>>   aio-posix.c                     | 36 ++++++------------
>>   aio-win32.c                     | 37 ++++++++-----------
>>   async.c                         |  4 +-
>>   block.c                         | 50 ++++++++++++++++++-------
>>   block/curl.c                    | 25 ++-----------
>>   block/gluster.c                 | 21 ++---------
>>   block/iscsi.c                   | 10 +----
>>   block/linux-aio.c               | 18 +--------
>>   block/nbd.c                     | 18 ++-------
>>   block/rbd.c                     | 16 +-------
>>   block/sheepdog.c                | 33 ++++-------------
>>   block/ssh.c                     | 12 +-----
>>   block/stream.c                  |  6 ++-
>>   hw/block/dataplane/virtio-blk.c | 25 +++----------
>>   include/block/aio.h             | 14 +------
>>   main-loop.c                     |  9 ++---
>>   tests/test-aio.c                | 82 +++++++++++++++++++++--------------------
>>   tests/test-thread-pool.c        | 24 ++++++------
>>   thread-pool.c                   | 11 +-----
>>   19 files changed, 163 insertions(+), 288 deletions(-)
>
> Ping?
>
  I tried to apply this series to do more work on top of it, but it
seems to conflict with upstream.

-- 
Best Regards

Wenchao Xia


* Re: [Qemu-devel] [PATCH v6 00/18] aio: drop io_flush()
  2013-08-09  7:14   ` Wenchao Xia
@ 2013-08-09  8:31     ` Stefan Hajnoczi
  0 siblings, 0 replies; 53+ messages in thread
From: Stefan Hajnoczi @ 2013-08-09  8:31 UTC (permalink / raw)
  To: Wenchao Xia
  Cc: Kevin Wolf, Ping Fan Liu, Stefan Hajnoczi, qemu-devel,
	Michael Roth, alex, Paolo Bonzini

On Fri, Aug 09, 2013 at 03:14:11PM +0800, Wenchao Xia wrote:
> On 2013-8-6 23:07, Stefan Hajnoczi wrote:
> >On Thu, Jul 25, 2013 at 05:18:07PM +0200, Stefan Hajnoczi wrote:
> >Ping?
> >
>   I tried to apply this series to do more work on top of it, but it
> seems the code conflicts with upstream.

I sent a new revision of this series based on Kevin's block-next tree:
git://github.com/stefanha/qemu.git bdrv_drain

Stefan



Thread overview: 53+ messages
2013-07-25 15:18 [Qemu-devel] [PATCH v6 00/18] aio: drop io_flush() Stefan Hajnoczi
2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 01/18] block: ensure bdrv_drain_all() works during bdrv_delete() Stefan Hajnoczi
2013-07-26  6:43   ` Wenchao Xia
2013-08-06 15:06     ` Stefan Hajnoczi
2013-08-07  2:42       ` Wenchao Xia
2013-08-07  7:31         ` Stefan Hajnoczi
2013-08-07  7:42   ` Stefan Hajnoczi
2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 02/18] block: stop relying on io_flush() in bdrv_drain_all() Stefan Hajnoczi
2013-07-29  7:01   ` Wenchao Xia
2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 03/18] dataplane/virtio-blk: check exit conditions before aio_poll() Stefan Hajnoczi
2013-07-26 16:44   ` Jeff Cody
2013-07-29  7:10   ` Wenchao Xia
2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 04/18] tests: adjust test-aio to new aio_poll() semantics Stefan Hajnoczi
2013-07-29  7:39   ` Wenchao Xia
2013-07-29  8:24   ` Wenchao Xia
2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 05/18] tests: adjust test-thread-pool " Stefan Hajnoczi
2013-07-29  7:51   ` Wenchao Xia
2013-07-29 14:26     ` Stefan Hajnoczi
2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 06/18] aio: stop using .io_flush() Stefan Hajnoczi
2013-07-26 16:10   ` Jeff Cody
2013-07-26 16:25     ` Paolo Bonzini
2013-07-26 16:36       ` Jeff Cody
2013-07-26 16:43   ` Jeff Cody
2013-07-29  8:08   ` Wenchao Xia
2013-07-29 14:32     ` Stefan Hajnoczi
2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 07/18] block/curl: drop curl_aio_flush() Stefan Hajnoczi
2013-07-29  8:11   ` Wenchao Xia
2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 08/18] block/gluster: drop qemu_gluster_aio_flush_cb() Stefan Hajnoczi
2013-07-29  8:17   ` Wenchao Xia
2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 09/18] block/iscsi: drop iscsi_process_flush() Stefan Hajnoczi
2013-07-29  8:17   ` Wenchao Xia
2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 10/18] block/linux-aio: drop qemu_laio_completion_cb() Stefan Hajnoczi
2013-07-29  8:19   ` Wenchao Xia
2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 11/18] block/nbd: drop nbd_have_request() Stefan Hajnoczi
2013-07-29  8:19   ` Wenchao Xia
2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 12/18] block/rbd: drop qemu_rbd_aio_flush_cb() Stefan Hajnoczi
2013-07-29  8:20   ` Wenchao Xia
2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 13/18] block/sheepdog: drop have_co_req() and aio_flush_request() Stefan Hajnoczi
2013-07-26  5:50   ` MORITA Kazutaka
2013-07-29  8:24   ` Wenchao Xia
2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 14/18] block/ssh: drop return_true() Stefan Hajnoczi
2013-07-29  8:25   ` Wenchao Xia
2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 15/18] dataplane/virtio-blk: drop flush_true() and flush_io() Stefan Hajnoczi
2013-07-29  8:32   ` Wenchao Xia
2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 16/18] thread-pool: drop thread_pool_active() Stefan Hajnoczi
2013-07-29  8:33   ` Wenchao Xia
2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 17/18] tests: drop event_active_cb() Stefan Hajnoczi
2013-07-29  8:34   ` Wenchao Xia
2013-07-25 15:18 ` [Qemu-devel] [PATCH v6 18/18] aio: drop io_flush argument Stefan Hajnoczi
2013-07-29  8:37   ` Wenchao Xia
2013-08-06 15:07 ` [Qemu-devel] [PATCH v6 00/18] aio: drop io_flush() Stefan Hajnoczi
2013-08-09  7:14   ` Wenchao Xia
2013-08-09  8:31     ` Stefan Hajnoczi
