From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from eggs.gnu.org ([2001:4830:134:3::10]:57120)
	by lists.gnu.org with esmtp (Exim 4.71) (envelope-from )
	id 1aSoT5-0005sI-7x
	for qemu-devel@nongnu.org; Mon, 08 Feb 2016 11:15:16 -0500
Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71)
	(envelope-from ) id 1aSoT4-0002Nf-7l
	for qemu-devel@nongnu.org; Mon, 08 Feb 2016 11:15:15 -0500
Received: from mail-wm0-x241.google.com ([2a00:1450:400c:c09::241]:36117)
	by eggs.gnu.org with esmtp (Exim 4.71) (envelope-from )
	id 1aSoT4-0002NU-0l
	for qemu-devel@nongnu.org; Mon, 08 Feb 2016 11:15:14 -0500
Received: by mail-wm0-x241.google.com with SMTP id 128so16170856wmz.3
	for ; Mon, 08 Feb 2016 08:15:13 -0800 (PST)
Sender: Paolo Bonzini
From: Paolo Bonzini
Date: Mon, 8 Feb 2016 17:14:54 +0100
Message-Id: <1454948107-11844-4-git-send-email-pbonzini@redhat.com>
In-Reply-To: <1454948107-11844-1-git-send-email-pbonzini@redhat.com>
References: <1454948107-11844-1-git-send-email-pbonzini@redhat.com>
Subject: [Qemu-devel] [PATCH 03/16] aio: introduce aio_poll_internal
To: qemu-devel@nongnu.org
Cc: stefanha@redhat.com

Move the implementation of aio_poll to aio_poll_internal, and introduce
a wrapper for public use.  For now the wrapper just asserts that aio_poll
is being used correctly---either from the thread that manages the context,
or with the QEMU global mutex held.  The next patch, however, will
completely separate the two cases.

Signed-off-by: Paolo Bonzini
---
 aio-posix.c         | 2 +-
 aio-win32.c         | 2 +-
 async.c             | 8 ++++++++
 include/block/aio.h | 6 ++++++
 4 files changed, 16 insertions(+), 2 deletions(-)

diff --git a/aio-posix.c b/aio-posix.c
index fa7f8ab..4dc075c 100644
--- a/aio-posix.c
+++ b/aio-posix.c
@@ -401,7 +401,7 @@ static void add_pollfd(AioHandler *node)
     npfd++;
 }
 
-bool aio_poll(AioContext *ctx, bool blocking)
+bool aio_poll_internal(AioContext *ctx, bool blocking)
 {
     AioHandler *node;
     int i, ret;
diff --git a/aio-win32.c b/aio-win32.c
index 6aaa32a..86ad822 100644
--- a/aio-win32.c
+++ b/aio-win32.c
@@ -281,7 +281,7 @@ bool aio_dispatch(AioContext *ctx)
     return progress;
 }
 
-bool aio_poll(AioContext *ctx, bool blocking)
+bool aio_poll_internal(AioContext *ctx, bool blocking)
 {
     AioHandler *node;
     HANDLE events[MAXIMUM_WAIT_OBJECTS + 1];
diff --git a/async.c b/async.c
index d083564..01c4891 100644
--- a/async.c
+++ b/async.c
@@ -300,6 +300,14 @@ void aio_notify_accept(AioContext *ctx)
     }
 }
 
+bool aio_poll(AioContext *ctx, bool blocking)
+{
+    assert(qemu_mutex_iothread_locked() ||
+           aio_context_in_iothread(ctx));
+
+    return aio_poll_internal(ctx, blocking);
+}
+
 static void aio_timerlist_notify(void *opaque)
 {
     aio_notify(opaque);
diff --git a/include/block/aio.h b/include/block/aio.h
index 9434665..986be97 100644
--- a/include/block/aio.h
+++ b/include/block/aio.h
@@ -287,6 +287,12 @@ bool aio_pending(AioContext *ctx);
  */
bool aio_dispatch(AioContext *ctx);
 
+/* Same as aio_poll, but only meant for use in the I/O thread.
+ *
+ * This is used internally in the implementation of aio_poll.
+ */
+bool aio_poll_internal(AioContext *ctx, bool blocking);
+
 /* Progress in completing AIO work to occur. This can issue new pending
 * aio as a result of executing I/O completion or bh callbacks.
 *
-- 
2.5.0
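
For context, a minimal sketch (not part of the patch) of the two calling
patterns that the new assertion in aio_poll() permits.  The function names
iothread_run_once and main_thread_drain, and the exact headers included,
are invented here for illustration; qemu_mutex_lock_iothread(),
qemu_mutex_unlock_iothread(), qemu_mutex_iothread_locked() and
aio_context_in_iothread() are the existing QEMU helpers referenced by the
patch itself.

#include "qemu/osdep.h"
#include "qemu/main-loop.h"
#include "block/aio.h"

/* Called from the thread that runs the AioContext (a dedicated iothread):
 * aio_context_in_iothread(ctx) is true, so no extra locking is needed
 * and the assertion in aio_poll() is satisfied. */
static void iothread_run_once(AioContext *ctx)
{
    aio_poll(ctx, true);    /* block until at least one event is handled */
}

/* Called from any other thread (e.g. the main loop): the caller must hold
 * the QEMU global mutex so that qemu_mutex_iothread_locked() is true when
 * the assertion in aio_poll() runs. */
static void main_thread_drain(AioContext *ctx)
{
    qemu_mutex_lock_iothread();
    while (aio_poll(ctx, false)) {
        /* keep dispatching ready handlers without blocking */
    }
    qemu_mutex_unlock_iothread();
}

The follow-up patch mentioned in the commit message is what makes these two
cases behave differently; with this patch alone, both paths still end up in
the same aio_poll_internal().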