From mboxrd@z Thu Jan 1 00:00:00 1970
From: Paolo Bonzini <pbonzini@redhat.com>
Date: Wed, 9 Jul 2014 11:53:05 +0200
Message-Id: <1404899590-24973-6-git-send-email-pbonzini@redhat.com>
In-Reply-To: <1404899590-24973-1-git-send-email-pbonzini@redhat.com>
References: <1404899590-24973-1-git-send-email-pbonzini@redhat.com>
Subject: [Qemu-devel] [PATCH 05/10] AioContext: export and use aio_dispatch
To: qemu-devel@nongnu.org

So far, aio_poll's scheme was dispatch/poll/dispatch, where the first
dispatch phase was used only in the GSource case in order to avoid a
blocking poll. Earlier patches changed it to dispatch/prepare/poll/dispatch,
where prepare is aio_compute_timeout.

By making aio_dispatch public, we can remove the first dispatch phase
altogether, so that both aio_poll and the GSource use the same
prepare/poll/dispatch scheme.

This patch breaks the invariant that aio_poll(..., true) will not block
the first time it returns false. This used to be fundamental for
qemu_aio_flush's implementation as "while (qemu_aio_wait()) {}" but no
code in QEMU relies on this invariant anymore. The return value of
aio_poll() is now comparable with that of g_main_context_iteration.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 aio-posix.c         | 55 +++++++++++++------------------------------------------
 aio-win32.c         | 31 ++++---------------------------
 async.c             |  2 +-
 include/block/aio.h |  6 ++++++
 4 files changed, 24 insertions(+), 70 deletions(-)

diff --git a/aio-posix.c b/aio-posix.c
index 798a3ff..0936b4f 100644
--- a/aio-posix.c
+++ b/aio-posix.c
@@ -119,12 +119,21 @@ bool aio_pending(AioContext *ctx)
     return false;
 }
 
-static bool aio_dispatch(AioContext *ctx)
+bool aio_dispatch(AioContext *ctx)
 {
     AioHandler *node;
     bool progress = false;
 
     /*
+     * If there are callbacks left that have been queued, we need to call them.
+     * Do not call select in this case, because it is possible that the caller
+     * does not need a complete flush (as is the case for aio_poll loops).
+     */
+    if (aio_bh_poll(ctx)) {
+        progress = true;
+    }
+
+    /*
      * We have to walk very carefully in case aio_set_fd_handler is
      * called while we're walking.
      */
@@ -184,22 +193,9 @@ bool aio_poll(AioContext *ctx, bool blocking)
 
     /* aio_notify can avoid the expensive event_notifier_set if
      * everything (file descriptors, bottom halves, timers) will
-     * be re-evaluated before the next blocking poll(). This happens
-     * in two cases:
-     *
-     * 1) when aio_poll is called with blocking == false
-     *
-     * 2) when we are called after poll(). If we are called before
-     *    poll(), bottom halves will not be re-evaluated and we need
-     *    aio_notify() if blocking == true.
-     *
-     * The first aio_dispatch() only does something when AioContext is
-     * running as a GSource, and in that case aio_poll is used only
-     * with blocking == false, so this optimization is already quite
-     * effective. However, the code is ugly and should be restructured
-     * to have a single aio_dispatch() call. To do this, we need to
-     * reorganize aio_poll into a prepare/poll/dispatch model like
-     * glib's.
+     * be re-evaluated before the next blocking poll(). This is
+     * already true when aio_poll is called with blocking == false;
+     * if blocking == true, it is only true after poll() returns.
      *
      * If we're in a nested event loop, ctx->dispatching might be true.
      * In that case we can restore it just before returning, but we
@@ -207,26 +203,6 @@ bool aio_poll(AioContext *ctx, bool blocking)
      */
     aio_set_dispatching(ctx, !blocking);
 
-    /*
-     * If there are callbacks left that have been queued, we need to call them.
-     * Do not call select in this case, because it is possible that the caller
-     * does not need a complete flush (as is the case for aio_poll loops).
-     */
-    if (aio_bh_poll(ctx)) {
-        blocking = false;
-        progress = true;
-    }
-
-    /* Re-evaluate condition (1) above. */
-    aio_set_dispatching(ctx, !blocking);
-    if (aio_dispatch(ctx)) {
-        progress = true;
-    }
-
-    if (progress && !blocking) {
-        goto out;
-    }
-
     ctx->walking_handlers++;
 
     g_array_set_size(ctx->pollfds, 0);
@@ -264,15 +240,10 @@ bool aio_poll(AioContext *ctx, bool blocking)
 
     /* Run dispatch even if there were no readable fds to run timers */
     aio_set_dispatching(ctx, true);
-    if (aio_bh_poll(ctx)) {
-        progress = true;
-    }
-
     if (aio_dispatch(ctx)) {
         progress = true;
     }
 
-out:
     aio_set_dispatching(ctx, was_dispatching);
     return progress;
 }
diff --git a/aio-win32.c b/aio-win32.c
index 2ac38a8..1ec434a 100644
--- a/aio-win32.c
+++ b/aio-win32.c
@@ -130,11 +130,12 @@ static bool aio_dispatch_handlers(AioContext *ctx, HANDLE event)
     return progress;
 }
 
-static bool aio_dispatch(AioContext *ctx)
+bool aio_dispatch(AioContext *ctx)
 {
     bool progress;
 
-    progress = aio_dispatch_handlers(ctx, INVALID_HANDLE_VALUE);
+    progress = aio_bh_poll(ctx);
+    progress |= aio_dispatch_handlers(ctx, INVALID_HANDLE_VALUE);
     progress |= timerlistgroup_run_timers(&ctx->tlg);
     return progress;
 }
@@ -149,23 +150,6 @@ bool aio_poll(AioContext *ctx, bool blocking)
 
     progress = false;
 
-    /*
-     * If there are callbacks left that have been queued, we need to call then.
-     * Do not call select in this case, because it is possible that the caller
-     * does not need a complete flush (as is the case for aio_poll loops).
-     */
-    if (aio_bh_poll(ctx)) {
-        blocking = false;
-        progress = true;
-    }
-
-    /* Dispatch any pending callbacks from the GSource. */
-    progress |= aio_dispatch(ctx);
-
-    if (progress && !blocking) {
-        return true;
-    }
-
     ctx->walking_handlers++;
 
     /* fill fd sets */
@@ -205,14 +189,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
         events[ret - WAIT_OBJECT_0] = events[--count];
     }
 
-    if (blocking) {
-        /* Run the timers a second time. We do this because otherwise aio_wait
-         * will not note progress - and will stop a drain early - if we have
-         * a timer that was not ready to run entering g_poll but is ready
-         * after g_poll. This will only do anything if a timer has expired.
-         */
-        progress |= timerlistgroup_run_timers(&ctx->tlg);
-    }
+    progress |= timerlistgroup_run_timers(&ctx->tlg);
 
     return progress;
 }
diff --git a/async.c b/async.c
index ac40eab..a5126ff 100644
--- a/async.c
+++ b/async.c
@@ -213,7 +213,7 @@ aio_ctx_dispatch(GSource *source,
     AioContext *ctx = (AioContext *) source;
 
     assert(callback == NULL);
-    aio_poll(ctx, false);
+    aio_dispatch(ctx);
     return true;
 }
 
diff --git a/include/block/aio.h b/include/block/aio.h
index 7eeb961..45408f7 100644
--- a/include/block/aio.h
+++ b/include/block/aio.h
@@ -211,6 +211,12 @@ void qemu_bh_delete(QEMUBH *bh);
  */
 bool aio_pending(AioContext *ctx);
 
+/* Dispatch any pending callbacks from the GSource attached to the AioContext.
+ *
+ * This is used internally in the implementation of the GSource.
+ */
+bool aio_dispatch(AioContext *ctx);
+
 /* Progress in completing AIO work to occur. This can issue new pending
  * aio as a result of executing I/O completion or bh callbacks.
  *
-- 
1.9.3
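
For readers following the series, here is a minimal sketch of the prepare/poll/dispatch
scheme that aio_poll and the GSource now share. The function and the
example_wait_for_events() helper are hypothetical and exist only for illustration:
the real aio_poll() in aio-posix.c uses g_poll() (WaitForMultipleObjects() in
aio-win32.c) as its poll step, and also manages ctx->walking_handlers,
ctx->dispatching and the pollfds array.

#include "block/aio.h"

/* Hypothetical poll step, standing in for g_poll()/WaitForMultipleObjects(). */
static void example_wait_for_events(AioContext *ctx, int64_t timeout)
{
    (void)ctx;
    (void)timeout;
}

static bool example_poll_iteration(AioContext *ctx, bool blocking)
{
    bool progress = false;
    int64_t timeout;

    /* prepare: aio_compute_timeout() decides how long the poll step may block;
     * a non-blocking iteration simply polls with a zero timeout.
     */
    timeout = blocking ? aio_compute_timeout(ctx) : 0;

    /* poll: wait for file descriptors or timers to become ready */
    example_wait_for_events(ctx, timeout);

    /* dispatch: run queued bottom halves, fd handlers and expired timers */
    if (aio_dispatch(ctx)) {
        progress = true;
    }

    /* Like g_main_context_iteration(), report whether any callback ran. */
    return progress;
}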