* [Qemu-devel] [PATCH RFC 0/3] iothread: release iothread around aio_poll
@ 2015-02-20 16:26 Paolo Bonzini
2015-02-20 16:26 ` [Qemu-devel] [PATCH 1/3] aio-posix: move pollfds to thread-local storage Paolo Bonzini
` (4 more replies)
0 siblings, 5 replies; 17+ messages in thread
From: Paolo Bonzini @ 2015-02-20 16:26 UTC (permalink / raw)
To: qemu-devel; +Cc: famz, stefanha
Right now, iothreads are relying on a "contention callback" to release
the AioContext (e.g. for block device operations or to do bdrv_drain_all).
This is necessary because aio_poll needs to be called within an
aio_context_acquire.
This series drops this requirement for aio_poll, with two effects:
1) it makes it possible to remove the "contention callback" in RFifoLock
(and possibly to convert it to a normal GRecMutex, which is why I am not
including a patch to remove callbacks from RFifoLock).
2) it makes it possible to start working on making critical sections
for the block layer fine-grained.
In order to do this, some data is moved from AioContext to local storage.
Stack allocation has size limitations, so thread-local storage is used
instead. There are no reentrancy problems because the data is only live
throughout a small part of aio_poll, and in particular not during the
invocation of callbacks.
Comments?
Paolo
Paolo Bonzini (3):
aio-posix: move pollfds to thread-local storage
AioContext: acquire/release AioContext during aio_poll
iothread: release iothread around aio_poll
aio-posix.c | 86 ++++++++++++++++++++++++++++++++++++++++-------------
aio-win32.c | 8 +++++
async.c | 10 +------
include/block/aio.h | 18 +++++------
iothread.c | 11 ++-----
tests/test-aio.c | 21 +++++++------
6 files changed, 96 insertions(+), 58 deletions(-)
--
2.3.0
^ permalink raw reply [flat|nested] 17+ messages in thread
* [Qemu-devel] [PATCH 1/3] aio-posix: move pollfds to thread-local storage
2015-02-20 16:26 [Qemu-devel] [PATCH RFC 0/3] iothread: release iothread around aio_poll Paolo Bonzini
@ 2015-02-20 16:26 ` Paolo Bonzini
2015-03-06 16:50 ` Stefan Hajnoczi
2015-02-20 16:26 ` [Qemu-devel] [PATCH 2/3] AioContext: acquire/release AioContext during aio_poll Paolo Bonzini
` (3 subsequent siblings)
4 siblings, 1 reply; 17+ messages in thread
From: Paolo Bonzini @ 2015-02-20 16:26 UTC (permalink / raw)
To: qemu-devel; +Cc: famz, stefanha
By using thread-local storage, aio_poll can stop using global data during
qemu_poll_ns. This will make it possible to drop callbacks from rfifolock.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
aio-posix.c | 77 ++++++++++++++++++++++++++++++++++++++---------------
async.c | 2 --
include/block/aio.h | 3 ---
3 files changed, 56 insertions(+), 26 deletions(-)
diff --git a/aio-posix.c b/aio-posix.c
index cbd4c34..4a30b77 100644
--- a/aio-posix.c
+++ b/aio-posix.c
@@ -24,7 +24,6 @@ struct AioHandler
IOHandler *io_read;
IOHandler *io_write;
int deleted;
- int pollfds_idx;
void *opaque;
QLIST_ENTRY(AioHandler) node;
};
@@ -83,7 +82,6 @@ void aio_set_fd_handler(AioContext *ctx,
node->io_read = io_read;
node->io_write = io_write;
node->opaque = opaque;
- node->pollfds_idx = -1;
node->pfd.events = (io_read ? G_IO_IN | G_IO_HUP | G_IO_ERR : 0);
node->pfd.events |= (io_write ? G_IO_OUT | G_IO_ERR : 0);
@@ -186,12 +184,59 @@ bool aio_dispatch(AioContext *ctx)
return progress;
}
+/* These thread-local variables are used only in a small part of aio_poll
+ * around the call to the poll() system call. In particular they are not
+ * used while aio_poll is performing callbacks, which makes it much easier
+ * to think about reentrancy!
+ *
+ * Stack-allocated arrays would be perfect but they have size limitations;
+ * heap allocation is expensive enough that we want to reuse arrays across
+ * calls to aio_poll(). And because poll() has to be called without holding
+ * any lock, the arrays cannot be stored in AioContext. Thread-local data
+ * has none of the disadvantages of these three options.
+ */
+static __thread GPollFD *pollfds;
+static __thread AioHandler **nodes;
+static __thread unsigned npfd, nalloc;
+static __thread Notifier pollfds_cleanup_notifier;
+
+static void pollfds_cleanup(Notifier *n, void *unused)
+{
+ g_assert(npfd == 0);
+ g_free(pollfds);
+ g_free(nodes);
+ nalloc = 0;
+}
+
+static void add_pollfd(AioHandler *node)
+{
+ if (npfd == nalloc) {
+ if (nalloc == 0) {
+ pollfds_cleanup_notifier.notify = pollfds_cleanup;
+ qemu_thread_atexit_add(&pollfds_cleanup_notifier);
+ nalloc = 8;
+ } else {
+ g_assert(nalloc <= INT_MAX);
+ nalloc *= 2;
+ }
+ pollfds = g_renew(GPollFD, pollfds, nalloc);
+ nodes = g_renew(AioHandler *, nodes, nalloc);
+ }
+ nodes[npfd] = node;
+ pollfds[npfd] = (GPollFD) {
+ .fd = node->pfd.fd,
+ .events = node->pfd.events,
+ };
+ npfd++;
+}
+
bool aio_poll(AioContext *ctx, bool blocking)
{
AioHandler *node;
bool was_dispatching;
- int ret;
+ int i, ret;
bool progress;
+ int64_t timeout;
was_dispatching = ctx->dispatching;
progress = false;
@@ -210,39 +255,29 @@ bool aio_poll(AioContext *ctx, bool blocking)
ctx->walking_handlers++;
- g_array_set_size(ctx->pollfds, 0);
+ npfd = 0;
/* fill pollfds */
QLIST_FOREACH(node, &ctx->aio_handlers, node) {
- node->pollfds_idx = -1;
if (!node->deleted && node->pfd.events) {
- GPollFD pfd = {
- .fd = node->pfd.fd,
- .events = node->pfd.events,
- };
- node->pollfds_idx = ctx->pollfds->len;
- g_array_append_val(ctx->pollfds, pfd);
+ add_pollfd(node);
}
}
- ctx->walking_handlers--;
+ timeout = blocking ? aio_compute_timeout(ctx) : 0;
/* wait until next event */
- ret = qemu_poll_ns((GPollFD *)ctx->pollfds->data,
- ctx->pollfds->len,
- blocking ? aio_compute_timeout(ctx) : 0);
+ ret = qemu_poll_ns((GPollFD *)pollfds, npfd, timeout);
/* if we have any readable fds, dispatch event */
if (ret > 0) {
- QLIST_FOREACH(node, &ctx->aio_handlers, node) {
- if (node->pollfds_idx != -1) {
- GPollFD *pfd = &g_array_index(ctx->pollfds, GPollFD,
- node->pollfds_idx);
- node->pfd.revents = pfd->revents;
- }
+ for (i = 0; i < npfd; i++) {
+ nodes[i]->pfd.revents = pollfds[i].revents;
}
}
+ ctx->walking_handlers--;
+
/* Run dispatch even if there were no readable fds to run timers */
aio_set_dispatching(ctx, true);
if (aio_dispatch(ctx)) {
diff --git a/async.c b/async.c
index cf5bd33..0463dc4 100644
--- a/async.c
+++ b/async.c
@@ -239,7 +239,6 @@ aio_ctx_finalize(GSource *source)
event_notifier_cleanup(&ctx->notifier);
rfifolock_destroy(&ctx->lock);
qemu_mutex_destroy(&ctx->bh_lock);
- g_array_free(ctx->pollfds, TRUE);
timerlistgroup_deinit(&ctx->tlg);
}
@@ -311,7 +310,6 @@ AioContext *aio_context_new(Error **errp)
aio_set_event_notifier(ctx, &ctx->notifier,
(EventNotifierHandler *)
event_notifier_test_and_clear);
- ctx->pollfds = g_array_new(FALSE, FALSE, sizeof(GPollFD));
ctx->thread_pool = NULL;
qemu_mutex_init(&ctx->bh_lock);
rfifolock_init(&ctx->lock, aio_rfifolock_cb, ctx);
diff --git a/include/block/aio.h b/include/block/aio.h
index 656c41c..499efd0 100644
--- a/include/block/aio.h
+++ b/include/block/aio.h
@@ -82,9 +82,6 @@ struct AioContext {
/* Used for aio_notify. */
EventNotifier notifier;
- /* GPollFDs for aio_poll() */
- GArray *pollfds;
-
/* Thread pool for performing work and receiving completion callbacks */
struct ThreadPool *thread_pool;
--
2.3.0
* [Qemu-devel] [PATCH 2/3] AioContext: acquire/release AioContext during aio_poll
2015-02-20 16:26 [Qemu-devel] [PATCH RFC 0/3] iothread: release iothread around aio_poll Paolo Bonzini
2015-02-20 16:26 ` [Qemu-devel] [PATCH 1/3] aio-posix: move pollfds to thread-local storage Paolo Bonzini
@ 2015-02-20 16:26 ` Paolo Bonzini
2015-02-25 5:45 ` Fam Zheng
` (2 more replies)
2015-02-20 16:26 ` [Qemu-devel] [PATCH 3/3] iothread: release iothread around aio_poll Paolo Bonzini
` (2 subsequent siblings)
4 siblings, 3 replies; 17+ messages in thread
From: Paolo Bonzini @ 2015-02-20 16:26 UTC (permalink / raw)
To: qemu-devel; +Cc: famz, stefanha
This is the first step in pushing down acquire/release, and will let
rfifolock drop the contention callback feature.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
aio-posix.c | 9 +++++++++
aio-win32.c | 8 ++++++++
include/block/aio.h | 15 ++++++++-------
3 files changed, 25 insertions(+), 7 deletions(-)
diff --git a/aio-posix.c b/aio-posix.c
index 4a30b77..292ae84 100644
--- a/aio-posix.c
+++ b/aio-posix.c
@@ -238,6 +238,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
bool progress;
int64_t timeout;
+ aio_context_acquire(ctx);
was_dispatching = ctx->dispatching;
progress = false;
@@ -267,7 +268,13 @@ bool aio_poll(AioContext *ctx, bool blocking)
timeout = blocking ? aio_compute_timeout(ctx) : 0;
/* wait until next event */
+ if (timeout) {
+ aio_context_release(ctx);
+ }
ret = qemu_poll_ns((GPollFD *)pollfds, npfd, timeout);
+ if (timeout) {
+ aio_context_acquire(ctx);
+ }
/* if we have any readable fds, dispatch event */
if (ret > 0) {
@@ -285,5 +292,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
}
aio_set_dispatching(ctx, was_dispatching);
+ aio_context_release(ctx);
+
return progress;
}
diff --git a/aio-win32.c b/aio-win32.c
index e6f4ced..233d8f5 100644
--- a/aio-win32.c
+++ b/aio-win32.c
@@ -283,6 +283,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
int count;
int timeout;
+ aio_context_acquire(ctx);
have_select_revents = aio_prepare(ctx);
if (have_select_revents) {
blocking = false;
@@ -323,7 +324,13 @@ bool aio_poll(AioContext *ctx, bool blocking)
timeout = blocking
? qemu_timeout_ns_to_ms(aio_compute_timeout(ctx)) : 0;
+ if (timeout) {
+ aio_context_release(ctx);
+ }
ret = WaitForMultipleObjects(count, events, FALSE, timeout);
+ if (timeout) {
+ aio_context_acquire(ctx);
+ }
aio_set_dispatching(ctx, true);
if (first && aio_bh_poll(ctx)) {
@@ -349,5 +356,6 @@ bool aio_poll(AioContext *ctx, bool blocking)
progress |= timerlistgroup_run_timers(&ctx->tlg);
aio_set_dispatching(ctx, was_dispatching);
+ aio_context_release(ctx);
return progress;
}
diff --git a/include/block/aio.h b/include/block/aio.h
index 499efd0..e77409d 100644
--- a/include/block/aio.h
+++ b/include/block/aio.h
@@ -118,13 +118,14 @@ void aio_context_ref(AioContext *ctx);
void aio_context_unref(AioContext *ctx);
/* Take ownership of the AioContext. If the AioContext will be shared between
- * threads, a thread must have ownership when calling aio_poll().
- *
- * Note that multiple threads calling aio_poll() means timers, BHs, and
- * callbacks may be invoked from a different thread than they were registered
- * from. Therefore, code must use AioContext acquire/release or use
- * fine-grained synchronization to protect shared state if other threads will
- * be accessing it simultaneously.
+ * threads, and a thread does not want to be interrupted, it will have to
+ * take ownership around calls to aio_poll(). Otherwise, aio_poll()
+ * automatically takes care of calling aio_context_acquire and
+ * aio_context_release.
+ *
+ * Access to timers and BHs from a thread that has not acquired AioContext
+ * is possible. Access to callbacks for now must be done while the AioContext
+ * is owned by the thread (FIXME).
*/
void aio_context_acquire(AioContext *ctx);
--
2.3.0
* [Qemu-devel] [PATCH 3/3] iothread: release iothread around aio_poll
2015-02-20 16:26 [Qemu-devel] [PATCH RFC 0/3] iothread: release iothread around aio_poll Paolo Bonzini
2015-02-20 16:26 ` [Qemu-devel] [PATCH 1/3] aio-posix: move pollfds to thread-local storage Paolo Bonzini
2015-02-20 16:26 ` [Qemu-devel] [PATCH 2/3] AioContext: acquire/release AioContext during aio_poll Paolo Bonzini
@ 2015-02-20 16:26 ` Paolo Bonzini
2015-03-06 17:16 ` Stefan Hajnoczi
2015-03-31 10:35 ` [Qemu-devel] [PATCH RFC 0/3] " Paolo Bonzini
2015-03-31 14:03 ` Stefan Hajnoczi
4 siblings, 1 reply; 17+ messages in thread
From: Paolo Bonzini @ 2015-02-20 16:26 UTC (permalink / raw)
To: qemu-devel; +Cc: famz, stefanha
This is the first step towards having fine-grained critical sections in
dataplane threads, which resolves lock ordering problems between
address_space_* functions (which need the BQL when doing MMIO, even
after we complete RCU-based dispatch) and the AioContext.
Because AioContext does not use contention callbacks anymore, the
unit test has to be changed.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
async.c | 8 +-------
iothread.c | 11 ++---------
tests/test-aio.c | 21 ++++++++++++---------
3 files changed, 15 insertions(+), 25 deletions(-)
diff --git a/async.c b/async.c
index 0463dc4..e8a4c8b 100644
--- a/async.c
+++ b/async.c
@@ -289,12 +289,6 @@ static void aio_timerlist_notify(void *opaque)
aio_notify(opaque);
}
-static void aio_rfifolock_cb(void *opaque)
-{
- /* Kick owner thread in case they are blocked in aio_poll() */
- aio_notify(opaque);
-}
-
AioContext *aio_context_new(Error **errp)
{
int ret;
@@ -312,7 +306,7 @@ AioContext *aio_context_new(Error **errp)
event_notifier_test_and_clear);
ctx->thread_pool = NULL;
qemu_mutex_init(&ctx->bh_lock);
- rfifolock_init(&ctx->lock, aio_rfifolock_cb, ctx);
+ rfifolock_init(&ctx->lock, NULL, NULL);
timerlistgroup_init(&ctx->tlg, aio_timerlist_notify, ctx);
return ctx;
diff --git a/iothread.c b/iothread.c
index 342a23f..a1f9109 100644
--- a/iothread.c
+++ b/iothread.c
@@ -31,21 +31,14 @@ typedef ObjectClass IOThreadClass;
static void *iothread_run(void *opaque)
{
IOThread *iothread = opaque;
- bool blocking;
qemu_mutex_lock(&iothread->init_done_lock);
iothread->thread_id = qemu_get_thread_id();
qemu_cond_signal(&iothread->init_done_cond);
qemu_mutex_unlock(&iothread->init_done_lock);
- while (!iothread->stopping) {
- aio_context_acquire(iothread->ctx);
- blocking = true;
- while (!iothread->stopping && aio_poll(iothread->ctx, blocking)) {
- /* Progress was made, keep going */
- blocking = false;
- }
- aio_context_release(iothread->ctx);
+ while (!atomic_read(&iothread->stopping)) {
+ aio_poll(iothread->ctx, true);
}
return NULL;
}
diff --git a/tests/test-aio.c b/tests/test-aio.c
index a7cb5c9..f88a042 100644
--- a/tests/test-aio.c
+++ b/tests/test-aio.c
@@ -107,6 +107,7 @@ static void test_notify(void)
typedef struct {
QemuMutex start_lock;
+ EventNotifier notifier;
bool thread_acquired;
} AcquireTestData;
@@ -118,6 +119,8 @@ static void *test_acquire_thread(void *opaque)
qemu_mutex_lock(&data->start_lock);
qemu_mutex_unlock(&data->start_lock);
+ g_usleep(500000);
+ event_notifier_set(&data->notifier);
aio_context_acquire(ctx);
aio_context_release(ctx);
@@ -126,20 +129,19 @@ static void *test_acquire_thread(void *opaque)
return NULL;
}
-static void dummy_notifier_read(EventNotifier *unused)
+static void dummy_notifier_read(EventNotifier *n)
{
- g_assert(false); /* should never be invoked */
+ event_notifier_test_and_clear(n);
}
static void test_acquire(void)
{
QemuThread thread;
- EventNotifier notifier;
AcquireTestData data;
/* Dummy event notifier ensures aio_poll() will block */
- event_notifier_init(&notifier, false);
- aio_set_event_notifier(ctx, &notifier, dummy_notifier_read);
+ event_notifier_init(&data.notifier, false);
+ aio_set_event_notifier(ctx, &data.notifier, dummy_notifier_read);
g_assert(!aio_poll(ctx, false)); /* consume aio_notify() */
qemu_mutex_init(&data.start_lock);
@@ -150,12 +152,13 @@ static void test_acquire(void)
/* Block in aio_poll(), let other thread kick us and acquire context */
aio_context_acquire(ctx);
qemu_mutex_unlock(&data.start_lock); /* let the thread run */
- g_assert(!aio_poll(ctx, true));
+ g_assert(aio_poll(ctx, true));
+ g_assert(!data.thread_acquired);
aio_context_release(ctx);
qemu_thread_join(&thread);
- aio_set_event_notifier(ctx, &notifier, NULL);
- event_notifier_cleanup(&notifier);
+ aio_set_event_notifier(ctx, &data.notifier, NULL);
+ event_notifier_cleanup(&data.notifier);
g_assert(data.thread_acquired);
}
--
2.3.0
* Re: [Qemu-devel] [PATCH 2/3] AioContext: acquire/release AioContext during aio_poll
2015-02-20 16:26 ` [Qemu-devel] [PATCH 2/3] AioContext: acquire/release AioContext during aio_poll Paolo Bonzini
@ 2015-02-25 5:45 ` Fam Zheng
2015-02-26 13:21 ` Paolo Bonzini
2015-03-06 17:15 ` Stefan Hajnoczi
2015-07-08 2:18 ` Fam Zheng
2 siblings, 1 reply; 17+ messages in thread
From: Fam Zheng @ 2015-02-25 5:45 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: qemu-devel, stefanha
On Fri, 02/20 17:26, Paolo Bonzini wrote:
> This is the first step in pushing down acquire/release, and will let
> rfifolock drop the contention callback feature.
>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
> aio-posix.c | 9 +++++++++
> aio-win32.c | 8 ++++++++
> include/block/aio.h | 15 ++++++++-------
> 3 files changed, 25 insertions(+), 7 deletions(-)
>
> diff --git a/aio-posix.c b/aio-posix.c
> index 4a30b77..292ae84 100644
> --- a/aio-posix.c
> +++ b/aio-posix.c
> @@ -238,6 +238,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
> bool progress;
> int64_t timeout;
>
> + aio_context_acquire(ctx);
> was_dispatching = ctx->dispatching;
> progress = false;
>
> @@ -267,7 +268,13 @@ bool aio_poll(AioContext *ctx, bool blocking)
> timeout = blocking ? aio_compute_timeout(ctx) : 0;
>
> /* wait until next event */
> + if (timeout) {
> + aio_context_release(ctx);
> + }
> ret = qemu_poll_ns((GPollFD *)pollfds, npfd, timeout);
> + if (timeout) {
> + aio_context_acquire(ctx);
> + }
>
> /* if we have any readable fds, dispatch event */
> if (ret > 0) {
> @@ -285,5 +292,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
> }
>
> aio_set_dispatching(ctx, was_dispatching);
> + aio_context_release(ctx);
> +
> return progress;
> }
> diff --git a/aio-win32.c b/aio-win32.c
> index e6f4ced..233d8f5 100644
> --- a/aio-win32.c
> +++ b/aio-win32.c
> @@ -283,6 +283,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
> int count;
> int timeout;
>
> + aio_context_acquire(ctx);
> have_select_revents = aio_prepare(ctx);
> if (have_select_revents) {
> blocking = false;
> @@ -323,7 +324,13 @@ bool aio_poll(AioContext *ctx, bool blocking)
>
> timeout = blocking
> ? qemu_timeout_ns_to_ms(aio_compute_timeout(ctx)) : 0;
> + if (timeout) {
> + aio_context_release(ctx);
Why are the unlock/lock pairs around poll conditional?
Fam
> + }
> ret = WaitForMultipleObjects(count, events, FALSE, timeout);
> + if (timeout) {
> + aio_context_acquire(ctx);
> + }
> aio_set_dispatching(ctx, true);
>
> if (first && aio_bh_poll(ctx)) {
* Re: [Qemu-devel] [PATCH 2/3] AioContext: acquire/release AioContext during aio_poll
2015-02-25 5:45 ` Fam Zheng
@ 2015-02-26 13:21 ` Paolo Bonzini
0 siblings, 0 replies; 17+ messages in thread
From: Paolo Bonzini @ 2015-02-26 13:21 UTC (permalink / raw)
To: Fam Zheng; +Cc: qemu-devel, stefanha
On 25/02/2015 06:45, Fam Zheng wrote:
> On Fri, 02/20 17:26, Paolo Bonzini wrote:
>> This is the first step in pushing down acquire/release, and will let
>> rfifolock drop the contention callback feature.
>>
>> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
>> ---
>> aio-posix.c | 9 +++++++++
>> aio-win32.c | 8 ++++++++
>> include/block/aio.h | 15 ++++++++-------
>> 3 files changed, 25 insertions(+), 7 deletions(-)
>>
>> diff --git a/aio-posix.c b/aio-posix.c
>> index 4a30b77..292ae84 100644
>> --- a/aio-posix.c
>> +++ b/aio-posix.c
>> @@ -238,6 +238,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
>> bool progress;
>> int64_t timeout;
>>
>> + aio_context_acquire(ctx);
>> was_dispatching = ctx->dispatching;
>> progress = false;
>>
>> @@ -267,7 +268,13 @@ bool aio_poll(AioContext *ctx, bool blocking)
>> timeout = blocking ? aio_compute_timeout(ctx) : 0;
>>
>> /* wait until next event */
>> + if (timeout) {
>> + aio_context_release(ctx);
>> + }
>> ret = qemu_poll_ns((GPollFD *)pollfds, npfd, timeout);
>> + if (timeout) {
>> + aio_context_acquire(ctx);
>> + }
>>
>> /* if we have any readable fds, dispatch event */
>> if (ret > 0) {
>> @@ -285,5 +292,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
>> }
>>
>> aio_set_dispatching(ctx, was_dispatching);
>> + aio_context_release(ctx);
>> +
>> return progress;
>> }
>> diff --git a/aio-win32.c b/aio-win32.c
>> index e6f4ced..233d8f5 100644
>> --- a/aio-win32.c
>> +++ b/aio-win32.c
>> @@ -283,6 +283,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
>> int count;
>> int timeout;
>>
>> + aio_context_acquire(ctx);
>> have_select_revents = aio_prepare(ctx);
>> if (have_select_revents) {
>> blocking = false;
>> @@ -323,7 +324,13 @@ bool aio_poll(AioContext *ctx, bool blocking)
>>
>> timeout = blocking
>> ? qemu_timeout_ns_to_ms(aio_compute_timeout(ctx)) : 0;
>> + if (timeout) {
>> + aio_context_release(ctx);
>
> Why are the unlock/lock pairs around poll conditional?
Both iothread.c and os_host_main_loop_wait are doing it; IIRC it was
measurably faster.
In particular, iothread.c completely avoided acquire/release around
non-blocking aio_poll; this patch does not have exactly the same behavior,
but it tries to stay close:
- aio_context_acquire(iothread->ctx);
- blocking = true;
- while (!iothread->stopping && aio_poll(iothread->ctx, blocking)) {
- /* Progress was made, keep going */
- blocking = false;
- }
- aio_context_release(iothread->ctx);
The exact same behavior could easily be implemented directly in aio_poll;
for this RFC I'm keeping the code a little simpler.
Paolo
>
> Fam
>
>> + }
>> ret = WaitForMultipleObjects(count, events, FALSE, timeout);
>> + if (timeout) {
>> + aio_context_acquire(ctx);
>> + }
>> aio_set_dispatching(ctx, true);
>>
>> if (first && aio_bh_poll(ctx)) {
* Re: [Qemu-devel] [PATCH 1/3] aio-posix: move pollfds to thread-local storage
2015-02-20 16:26 ` [Qemu-devel] [PATCH 1/3] aio-posix: move pollfds to thread-local storage Paolo Bonzini
@ 2015-03-06 16:50 ` Stefan Hajnoczi
0 siblings, 0 replies; 17+ messages in thread
From: Stefan Hajnoczi @ 2015-03-06 16:50 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: famz, qemu-devel, stefanha
On Fri, Feb 20, 2015 at 05:26:50PM +0100, Paolo Bonzini wrote:
> By using thread-local storage, aio_poll can stop using global data during
> qemu_poll_ns. This will make it possible to drop callbacks from rfifolock.
>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
> aio-posix.c | 77 ++++++++++++++++++++++++++++++++++++++---------------
> async.c | 2 --
> include/block/aio.h | 3 ---
> 3 files changed, 56 insertions(+), 26 deletions(-)
pollfds was always an ugly part of AioContext. I'm happy to move it.
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
* Re: [Qemu-devel] [PATCH 2/3] AioContext: acquire/release AioContext during aio_poll
2015-02-20 16:26 ` [Qemu-devel] [PATCH 2/3] AioContext: acquire/release AioContext during aio_poll Paolo Bonzini
2015-02-25 5:45 ` Fam Zheng
@ 2015-03-06 17:15 ` Stefan Hajnoczi
2015-07-08 2:18 ` Fam Zheng
2 siblings, 0 replies; 17+ messages in thread
From: Stefan Hajnoczi @ 2015-03-06 17:15 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: famz, qemu-devel, stefanha
On Fri, Feb 20, 2015 at 05:26:51PM +0100, Paolo Bonzini wrote:
> This is the first step in pushing down acquire/release, and will let
> rfifolock drop the contention callback feature.
>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
> aio-posix.c | 9 +++++++++
> aio-win32.c | 8 ++++++++
> include/block/aio.h | 15 ++++++++-------
> 3 files changed, 25 insertions(+), 7 deletions(-)
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
* Re: [Qemu-devel] [PATCH 3/3] iothread: release iothread around aio_poll
2015-02-20 16:26 ` [Qemu-devel] [PATCH 3/3] iothread: release iothread around aio_poll Paolo Bonzini
@ 2015-03-06 17:16 ` Stefan Hajnoczi
0 siblings, 0 replies; 17+ messages in thread
From: Stefan Hajnoczi @ 2015-03-06 17:16 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: famz, qemu-devel, stefanha
On Fri, Feb 20, 2015 at 05:26:52PM +0100, Paolo Bonzini wrote:
> This is the first step towards having fine-grained critical sections in
> dataplane threads, which resolves lock ordering problems between
> address_space_* functions (which need the BQL when doing MMIO, even
> after we complete RCU-based dispatch) and the AioContext.
>
> Because AioContext does not use contention callbacks anymore, the
> unit test has to be changed.
>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
> async.c | 8 +-------
> iothread.c | 11 ++---------
> tests/test-aio.c | 21 ++++++++++++---------
> 3 files changed, 15 insertions(+), 25 deletions(-)
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
* Re: [Qemu-devel] [PATCH RFC 0/3] iothread: release iothread around aio_poll
2015-02-20 16:26 [Qemu-devel] [PATCH RFC 0/3] iothread: release iothread around aio_poll Paolo Bonzini
` (2 preceding siblings ...)
2015-02-20 16:26 ` [Qemu-devel] [PATCH 3/3] iothread: release iothread around aio_poll Paolo Bonzini
@ 2015-03-31 10:35 ` Paolo Bonzini
2015-03-31 14:33 ` Stefan Hajnoczi
2015-04-21 15:40 ` Stefan Hajnoczi
2015-03-31 14:03 ` Stefan Hajnoczi
4 siblings, 2 replies; 17+ messages in thread
From: Paolo Bonzini @ 2015-03-31 10:35 UTC (permalink / raw)
To: qemu-devel; +Cc: famz, stefanha
On 20/02/2015 17:26, Paolo Bonzini wrote:
> Right now, iothreads are relying on a "contention callback" to release
> the AioContext (e.g. for block device operations or to do bdrv_drain_all).
> This is necessary because aio_poll needs to be called within an
> aio_context_acquire.
>
> This series drops this requirement for aio_poll, with two effects:
>
> 1) it makes it possible to remove the "contention callback" in RFifoLock
> (and possibly to convert it to a normal GRecMutex, which is why I am not
> including a patch to remove callbacks from RFifoLock).
>
> 2) it makes it possible to start working on making critical sections
> for the block layer fine-grained.
>
> In order to do this, some data is moved from AioContext to local storage.
> Stack allocation has size limitations, so thread-local storage is used
> instead. There are no reentrancy problems because the data is only live
> throughout a small part of aio_poll, and in particular not during the
> invocation of callbacks.
>
> Comments?
Stefan, can you put this on track for 2.4 or do you need a repost?
Paolo
* Re: [Qemu-devel] [PATCH RFC 0/3] iothread: release iothread around aio_poll
2015-02-20 16:26 [Qemu-devel] [PATCH RFC 0/3] iothread: release iothread around aio_poll Paolo Bonzini
` (3 preceding siblings ...)
2015-03-31 10:35 ` [Qemu-devel] [PATCH RFC 0/3] " Paolo Bonzini
@ 2015-03-31 14:03 ` Stefan Hajnoczi
4 siblings, 0 replies; 17+ messages in thread
From: Stefan Hajnoczi @ 2015-03-31 14:03 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: famz, qemu-devel, stefanha
On Fri, Feb 20, 2015 at 05:26:49PM +0100, Paolo Bonzini wrote:
> Right now, iothreads are relying on a "contention callback" to release
> the AioContext (e.g. for block device operations or to do bdrv_drain_all).
> This is necessary because aio_poll needs to be called within an
> aio_context_acquire.
>
> This series drops this requirement for aio_poll, with two effects:
>
> 1) it makes it possible to remove the "contention callback" in RFifoLock
> (and possibly to convert it to a normal GRecMutex, which is why I am not
> including a patch to remove callbacks from RFifoLock).
>
> 2) it makes it possible to start working on making critical sections
> for the block layer fine-grained.
>
> In order to do this, some data is moved from AioContext to local storage.
> Stack allocation has size limitations, so thread-local storage is used
> instead. There are no reentrancy problems because the data is only live
> throughout a small part of aio_poll, and in particular not during the
> invocation of callbacks.
>
> Comments?
>
> Paolo
>
> Paolo Bonzini (3):
> aio-posix: move pollfds to thread-local storage
> AioContext: acquire/release AioContext during aio_poll
> iothread: release iothread around aio_poll
>
> aio-posix.c | 86 ++++++++++++++++++++++++++++++++++++++++-------------
> aio-win32.c | 8 +++++
> async.c | 10 +------
> include/block/aio.h | 18 +++++------
> iothread.c | 11 ++-----
> tests/test-aio.c | 21 +++++++------
> 6 files changed, 96 insertions(+), 58 deletions(-)
Thanks, applied to my block-next tree:
https://github.com/stefanha/qemu/commits/block-next
Stefan
* Re: [Qemu-devel] [PATCH RFC 0/3] iothread: release iothread around aio_poll
2015-03-31 10:35 ` [Qemu-devel] [PATCH RFC 0/3] " Paolo Bonzini
@ 2015-03-31 14:33 ` Stefan Hajnoczi
2015-04-21 15:40 ` Stefan Hajnoczi
1 sibling, 0 replies; 17+ messages in thread
From: Stefan Hajnoczi @ 2015-03-31 14:33 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: famz, qemu-devel
On Tue, Mar 31, 2015 at 12:35:16PM +0200, Paolo Bonzini wrote:
>
>
> On 20/02/2015 17:26, Paolo Bonzini wrote:
> > Right now, iothreads are relying on a "contention callback" to release
> > the AioContext (e.g. for block device operations or to do bdrv_drain_all).
> > This is necessary because aio_poll needs to be called within an
> > aio_context_acquire.
> >
> > This series drops this requirement for aio_poll, with two effects:
> >
> > 1) it makes it possible to remove the "contention callback" in RFifoLock
> > (and possibly to convert it to a normal GRecMutex, which is why I am not
> > including a patch to remove callbacks from RFifoLock).
> >
> > 2) it makes it possible to start working on making critical sections
> > for the block layer fine-grained.
> >
> > In order to do this, some data is moved from AioContext to local storage.
> > Stack allocation has size limitations, so thread-local storage is used
> > instead. There are no reentrancy problems because the data is only live
> > throughout a small part of aio_poll, and in particular not during the
> > invocation of callbacks.
> >
> > Comments?
>
> Stefan, can you put this on track for 2.4 or do you need a repost?
Done
* Re: [Qemu-devel] [PATCH RFC 0/3] iothread: release iothread around aio_poll
2015-03-31 10:35 ` [Qemu-devel] [PATCH RFC 0/3] " Paolo Bonzini
2015-03-31 14:33 ` Stefan Hajnoczi
@ 2015-04-21 15:40 ` Stefan Hajnoczi
2015-04-21 15:54 ` Paolo Bonzini
1 sibling, 1 reply; 17+ messages in thread
From: Stefan Hajnoczi @ 2015-04-21 15:40 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: Fam Zheng, qemu-devel, Stefan Hajnoczi
On Tue, Mar 31, 2015 at 11:35 AM, Paolo Bonzini <pbonzini@redhat.com> wrote:
> On 20/02/2015 17:26, Paolo Bonzini wrote:
>> Right now, iothreads are relying on a "contention callback" to release
>> the AioContext (e.g. for block device operations or to do bdrv_drain_all).
>> This is necessary because aio_poll needs to be called within an
>> aio_context_acquire.
>>
>> This series drops this requirement for aio_poll, with two effects:
>>
>> 1) it makes it possible to remove the "contention callback" in RFifoLock
>> (and possibly to convert it to a normal GRecMutex, which is why I am not
>> including a patch to remove callbacks from RFifoLock).
>>
>> 2) it makes it possible to start working on making critical sections
>> for the block layer fine-grained.
>>
>> In order to do this, some data is moved from AioContext to local storage.
>> Stack allocation has size limitations, so thread-local storage is used
>> instead. There are no reentrancy problems because the data is only live
>> throughout a small part of aio_poll, and in particular not during the
>> invocation of callbacks.
>>
>> Comments?
>
> Stefan, can you put this on track for 2.4 or do you need a repost?
This series causes qemu-iotests -qcow2 091 to fail:
9f83aea22314d928bb272153ff37d2d7f5adbf06 is the first bad commit
commit 9f83aea22314d928bb272153ff37d2d7f5adbf06
Author: Paolo Bonzini <pbonzini@redhat.com>
Date: Fri Feb 20 17:26:50 2015 +0100
aio-posix: move pollfds to thread-local storage
I think the following assertion failure is hit in pollfds_cleanup():
g_assert(npfd == 0);
$ (make -j4 && cd tests/qemu-iotests && ./check -qcow2 091)
QEMU -- ./qemu
QEMU_IMG -- ./qemu-img
QEMU_IO -- ./qemu-io
QEMU_NBD -- ./qemu-nbd
IMGFMT -- qcow2 (compat=1.1)
IMGPROTO -- file
PLATFORM -- Linux/x86_64 stefanha-thinkpad 4.0.0-rc5.bz1006536+
TEST_DIR -- /home/stefanha/qemu/tests/qemu-iotests/scratch
SOCKET_SCM_HELPER -- /home/stefanha/qemu/tests/qemu-iotests/socket_scm_helper
091 1s ... [failed, exit status 141] - output mismatch (see 091.out.bad)
--- /home/stefanha/qemu/tests/qemu-iotests/091.out 2015-03-05 20:42:23.227070978 +0000
+++ 091.out.bad 2015-04-21 16:33:02.769945594 +0100
@@ -11,18 +11,4 @@
vm1: qemu-io disk write complete
vm1: live migration started
-vm1: live migration completed
-
-=== VM 2: Post-migration, write to disk, verify running ===
-
-vm2: qemu-io disk write complete
-vm2: qemu process running successfully
-vm2: flush io, and quit
-Check image pattern
-read 4194304/4194304 bytes at offset 0
-4 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
-Running 'qemu-img check -r all $TEST_IMG'
-No errors were found on the image.
-80/16384 = 0.49% allocated, 0.00% fragmented, 0.00% compressed clusters
-Image end offset: 5570560
-*** done
+./common.qemu: line 1: 26535 Aborted (core dumped)
More info:
Command Line: ./qemu -nographic -serial none -monitor stdio -machine accel=qtest -drive file=/home/stefanha/qemu/tests/qemu-iotests/scratch/t.qcow2,cache=none,id=disk
Stack trace of thread 26556:
#0 0x00007f1ed8dd98d7 __GI_raise (libc.so.6)
#1 0x00007f1ed8ddb53a __GI_abort (libc.so.6)
#2 0x00007f1ee13075d5 g_assertion_message (libglib-2.0.so.0)
#3 0x00007f1ee130766a g_assertion_message_expr (libglib-2.0.so.0)
#4 0x00007f1ee3100001 pollfds_cleanup (qemu-system-x86_64)
#5 0x00007f1ee3177374 notifier_list_notify (qemu-system-x86_64)
#6 0x00007f1ee316c812 qemu_thread_atexit_run (qemu-system-x86_64)
#7 0x00007f1ee19dd1d9 __nptl_deallocate_tsd (libpthread.so.0)
#8 0x00007f1ee19de5e5 __nptl_deallocate_tsd (libpthread.so.0)
#9 0x00007f1ed8ea522d __clone (libc.so.6)
Stack trace of thread 26535:
#0 0x00007f1ee19df5e5 pthread_join (libpthread.so.0)
#1 0x00007f1ee316ce9f qemu_thread_join (qemu-system-x86_64)
#2 0x00007f1ee30adbfa migrate_fd_cleanup (qemu-system-x86_64)
#3 0x00007f1ee30f1614 aio_bh_poll (qemu-system-x86_64)
#4 0x00007f1ee3100260 aio_dispatch (qemu-system-x86_64)
#5 0x00007f1ee30f149e aio_ctx_dispatch (qemu-system-x86_64)
#6 0x00007f1ee12e17fb g_main_dispatch (libglib-2.0.so.0)
#7 0x00007f1ee30fee38 glib_pollfds_poll (qemu-system-x86_64)
#8 0x00007f1ee2ec024e main_loop (qemu-system-x86_64)
#9 0x00007f1ed8dc4fe0 __libc_start_main (libc.so.6)
#10 0x00007f1ee2ec566c _start (qemu-system-x86_64)
Stack trace of thread 26540:
#0 0x00007f1ed8e9f939 syscall (libc.so.6)
#1 0x00007f1ee316cc71 futex_wait (qemu-system-x86_64)
#2 0x00007f1ee317af96 call_rcu_thread (qemu-system-x86_64)
#3 0x00007f1ee19de52a start_thread (libpthread.so.0)
#4 0x00007f1ed8ea522d __clone (libc.so.6)
Stack trace of thread 26541:
#0 0x00007f1ee19e57f0 sem_timedwait (libpthread.so.0)
#1 0x00007f1ee316cac7 qemu_sem_timedwait (qemu-system-x86_64)
#2 0x00007f1ee30f1b1c worker_thread (qemu-system-x86_64)
#3 0x00007f1ee19de52a start_thread (libpthread.so.0)
#4 0x00007f1ed8ea522d __clone (libc.so.6)
Stack trace of thread 26544:
#0 0x00007f1ee19e6e50 do_sigwait (libpthread.so.0)
#1 0x00007f1ee2eec543 qemu_dummy_cpu_thread_fn (qemu-system-x86_64)
#2 0x00007f1ee19de52a start_thread (libpthread.so.0)
#3 0x00007f1ed8ea522d __clone (libc.so.6)
Stefan
* Re: [Qemu-devel] [PATCH RFC 0/3] iothread: release iothread around aio_poll
2015-04-21 15:40 ` Stefan Hajnoczi
@ 2015-04-21 15:54 ` Paolo Bonzini
2015-04-22 10:26 ` Stefan Hajnoczi
0 siblings, 1 reply; 17+ messages in thread
From: Paolo Bonzini @ 2015-04-21 15:54 UTC (permalink / raw)
To: Stefan Hajnoczi; +Cc: Fam Zheng, qemu-devel, Stefan Hajnoczi
On 21/04/2015 17:40, Stefan Hajnoczi wrote:
>> >
>> > Stefan, can you put this on track for 2.4 or do you need a repost?
> This series causes qemu-iotests -qcow2 091 to fail:
>
> 9f83aea22314d928bb272153ff37d2d7f5adbf06 is the first bad commit
> commit 9f83aea22314d928bb272153ff37d2d7f5adbf06
> Author: Paolo Bonzini <pbonzini@redhat.com>
> Date: Fri Feb 20 17:26:50 2015 +0100
>
> aio-posix: move pollfds to thread-local storage
Oops... what I intended is this:
diff --git a/aio-posix.c b/aio-posix.c
index 4a30b77..e411591 100644
--- a/aio-posix.c
+++ b/aio-posix.c
@@ -254,8 +254,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
aio_set_dispatching(ctx, !blocking);
ctx->walking_handlers++;
-
- npfd = 0;
+ assert(npfd == 0);
/* fill pollfds */
QLIST_FOREACH(node, &ctx->aio_handlers, node) {
@@ -276,6 +275,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
}
}
+ npfd = 0;
ctx->walking_handlers--;
/* Run dispatch even if there were no readable fds to run timers */
but the above is totally untested, so feel free to just remove
the assertion or also to drop the series.
Paolo
* Re: [Qemu-devel] [PATCH RFC 0/3] iothread: release iothread around aio_poll
2015-04-21 15:54 ` Paolo Bonzini
@ 2015-04-22 10:26 ` Stefan Hajnoczi
0 siblings, 0 replies; 17+ messages in thread
From: Stefan Hajnoczi @ 2015-04-22 10:26 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: Stefan Hajnoczi, Fam Zheng, qemu-devel
On Tue, Apr 21, 2015 at 05:54:59PM +0200, Paolo Bonzini wrote:
>
>
> On 21/04/2015 17:40, Stefan Hajnoczi wrote:
> >> >
> >> > Stefan, can you put this on track for 2.4 or do you need a repost?
> > This series causes qemu-iotests -qcow2 091 to fail:
> >
> > 9f83aea22314d928bb272153ff37d2d7f5adbf06 is the first bad commit
> > commit 9f83aea22314d928bb272153ff37d2d7f5adbf06
> > Author: Paolo Bonzini <pbonzini@redhat.com>
> > Date: Fri Feb 20 17:26:50 2015 +0100
> >
> > aio-posix: move pollfds to thread-local storage
>
> Oops... what I intended is this:
>
> diff --git a/aio-posix.c b/aio-posix.c
> index 4a30b77..e411591 100644
> --- a/aio-posix.c
> +++ b/aio-posix.c
> @@ -254,8 +254,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
> aio_set_dispatching(ctx, !blocking);
>
> ctx->walking_handlers++;
> -
> - npfd = 0;
> + assert(npfd == 0);
>
> /* fill pollfds */
> QLIST_FOREACH(node, &ctx->aio_handlers, node) {
> @@ -276,6 +275,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
> }
> }
>
> + npfd = 0;
> ctx->walking_handlers--;
>
> /* Run dispatch even if there were no readable fds to run timers */
>
> but the above is totally untested, so feel free to just remove
> the assertion or also to drop the series.
Thanks, I squashed this change in and now qemu-iotests passes.
Stefan
* Re: [Qemu-devel] [PATCH 2/3] AioContext: acquire/release AioContext during aio_poll
2015-02-20 16:26 ` [Qemu-devel] [PATCH 2/3] AioContext: acquire/release AioContext during aio_poll Paolo Bonzini
2015-02-25 5:45 ` Fam Zheng
2015-03-06 17:15 ` Stefan Hajnoczi
@ 2015-07-08 2:18 ` Fam Zheng
2015-07-08 7:52 ` Paolo Bonzini
2 siblings, 1 reply; 17+ messages in thread
From: Fam Zheng @ 2015-07-08 2:18 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: qemu-devel
On Fri, 02/20 17:26, Paolo Bonzini wrote:
> This is the first step in pushing down acquire/release, and will let
> rfifolock drop the contention callback feature.
>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
> aio-posix.c | 9 +++++++++
> aio-win32.c | 8 ++++++++
> include/block/aio.h | 15 ++++++++-------
> 3 files changed, 25 insertions(+), 7 deletions(-)
>
> diff --git a/aio-posix.c b/aio-posix.c
> index 4a30b77..292ae84 100644
> --- a/aio-posix.c
> +++ b/aio-posix.c
> @@ -238,6 +238,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
> bool progress;
> int64_t timeout;
>
> + aio_context_acquire(ctx);
> was_dispatching = ctx->dispatching;
> progress = false;
>
> @@ -267,7 +268,13 @@ bool aio_poll(AioContext *ctx, bool blocking)
> timeout = blocking ? aio_compute_timeout(ctx) : 0;
>
> /* wait until next event */
> + if (timeout) {
> + aio_context_release(ctx);
> + }
> ret = qemu_poll_ns((GPollFD *)pollfds, npfd, timeout);
If two threads poll concurrently on this ctx, they will get the same set of
events; is that safe? Doesn't that lead to double dispatch?
Fam
> + if (timeout) {
> + aio_context_acquire(ctx);
> + }
>
> /* if we have any readable fds, dispatch event */
> if (ret > 0) {
> @@ -285,5 +292,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
> }
>
> aio_set_dispatching(ctx, was_dispatching);
> + aio_context_release(ctx);
> +
> return progress;
> }
> diff --git a/aio-win32.c b/aio-win32.c
> index e6f4ced..233d8f5 100644
> --- a/aio-win32.c
> +++ b/aio-win32.c
> @@ -283,6 +283,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
> int count;
> int timeout;
>
> + aio_context_acquire(ctx);
> have_select_revents = aio_prepare(ctx);
> if (have_select_revents) {
> blocking = false;
> @@ -323,7 +324,13 @@ bool aio_poll(AioContext *ctx, bool blocking)
>
> timeout = blocking
> ? qemu_timeout_ns_to_ms(aio_compute_timeout(ctx)) : 0;
> + if (timeout) {
> + aio_context_release(ctx);
> + }
> ret = WaitForMultipleObjects(count, events, FALSE, timeout);
> + if (timeout) {
> + aio_context_acquire(ctx);
> + }
> aio_set_dispatching(ctx, true);
>
> if (first && aio_bh_poll(ctx)) {
> @@ -349,5 +356,6 @@ bool aio_poll(AioContext *ctx, bool blocking)
> progress |= timerlistgroup_run_timers(&ctx->tlg);
>
> aio_set_dispatching(ctx, was_dispatching);
> + aio_context_release(ctx);
> return progress;
> }
> diff --git a/include/block/aio.h b/include/block/aio.h
> index 499efd0..e77409d 100644
> --- a/include/block/aio.h
> +++ b/include/block/aio.h
> @@ -118,13 +118,14 @@ void aio_context_ref(AioContext *ctx);
> void aio_context_unref(AioContext *ctx);
>
> /* Take ownership of the AioContext. If the AioContext will be shared between
> - * threads, a thread must have ownership when calling aio_poll().
> - *
> - * Note that multiple threads calling aio_poll() means timers, BHs, and
> - * callbacks may be invoked from a different thread than they were registered
> - * from. Therefore, code must use AioContext acquire/release or use
> - * fine-grained synchronization to protect shared state if other threads will
> - * be accessing it simultaneously.
> + * threads, and a thread does not want to be interrupted, it will have to
> + * take ownership around calls to aio_poll(). Otherwise, aio_poll()
> + * automatically takes care of calling aio_context_acquire and
> + * aio_context_release.
> + *
> + * Access to timers and BHs from a thread that has not acquired AioContext
> + * is possible. Access to callbacks for now must be done while the AioContext
> + * is owned by the thread (FIXME).
> */
> void aio_context_acquire(AioContext *ctx);
>
> --
> 2.3.0
>
>
* Re: [Qemu-devel] [PATCH 2/3] AioContext: acquire/release AioContext during aio_poll
2015-07-08 2:18 ` Fam Zheng
@ 2015-07-08 7:52 ` Paolo Bonzini
0 siblings, 0 replies; 17+ messages in thread
From: Paolo Bonzini @ 2015-07-08 7:52 UTC (permalink / raw)
To: Fam Zheng; +Cc: qemu-devel
On 08/07/2015 04:18, Fam Zheng wrote:
>> > @@ -267,7 +268,13 @@ bool aio_poll(AioContext *ctx, bool blocking)
>> > timeout = blocking ? aio_compute_timeout(ctx) : 0;
>> >
>> > /* wait until next event */
>> > + if (timeout) {
>> > + aio_context_release(ctx);
>> > + }
>> > ret = qemu_poll_ns((GPollFD *)pollfds, npfd, timeout);
> If two threads poll concurrently on this ctx, they will get the same set of
> events; is that safe? Doesn't that lead to double dispatch?
Yes, but handlers should be okay with spurious wakeup. They will just
get EAGAIN.
Paolo
Thread overview: 17+ messages
2015-02-20 16:26 [Qemu-devel] [PATCH RFC 0/3] iothread: release iothread around aio_poll Paolo Bonzini
2015-02-20 16:26 ` [Qemu-devel] [PATCH 1/3] aio-posix: move pollfds to thread-local storage Paolo Bonzini
2015-03-06 16:50 ` Stefan Hajnoczi
2015-02-20 16:26 ` [Qemu-devel] [PATCH 2/3] AioContext: acquire/release AioContext during aio_poll Paolo Bonzini
2015-02-25 5:45 ` Fam Zheng
2015-02-26 13:21 ` Paolo Bonzini
2015-03-06 17:15 ` Stefan Hajnoczi
2015-07-08 2:18 ` Fam Zheng
2015-07-08 7:52 ` Paolo Bonzini
2015-02-20 16:26 ` [Qemu-devel] [PATCH 3/3] iothread: release iothread around aio_poll Paolo Bonzini
2015-03-06 17:16 ` Stefan Hajnoczi
2015-03-31 10:35 ` [Qemu-devel] [PATCH RFC 0/3] " Paolo Bonzini
2015-03-31 14:33 ` Stefan Hajnoczi
2015-04-21 15:40 ` Stefan Hajnoczi
2015-04-21 15:54 ` Paolo Bonzini
2015-04-22 10:26 ` Stefan Hajnoczi
2015-03-31 14:03 ` Stefan Hajnoczi