From: Eric Blake <eblake@redhat.com>
To: Paolo Bonzini <pbonzini@redhat.com>, qemu-devel@nongnu.org
Cc: stefanha@redhat.com
Subject: Re: [Qemu-devel] [PATCH 04/16] aio: only call aio_poll_internal from iothread
Date: Mon, 8 Feb 2016 15:22:50 -0700
Message-ID: <56B9153A.8080700@redhat.com>
In-Reply-To: <1454948107-11844-5-git-send-email-pbonzini@redhat.com>
On 02/08/2016 09:14 AM, Paolo Bonzini wrote:
> aio_poll is not thread safe; it can report progress incorrectly when
> called from the main thread. The bug remains latent as long as
> all of it is called within aio_context_acquire/aio_context_release,
> but this will change soon.
>
> The details of the bug are pretty simple, but fixing it in an
> efficient way is thorny. There are plenty of comments and formal
> models in the patch, so I will refer to it.
>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
> +++ b/async.c
> @@ -300,12 +300,224 @@ void aio_notify_accept(AioContext *ctx)
> }
> }
>
> +/* aio_poll_internal is not thread-safe; it only reports progress
> + * correctly when called from one thread, because it has no
> + * history of what happened in different threads. When called
> + * from two threads, there is a race:
> + *
> + * main thread I/O thread
> + * ----------------------- --------------------------
> + * blk_drain
> + * bdrv_requests_pending -> true
> + * aio_poll_internal
> + * process last request
> + * aio_poll_internal
> + *
> + * Now aio_poll_internal will never exit, because there is no pending
> + * I/O on the AioContext.
> + *
> + * Therefore, aio_poll is a wrapper around aio_poll_internal that allows
> + * usage from _two_ threads: the I/O thread of course, and the main thread.
> + * When called from the main thread, aio_poll just asks the I/O thread
> + * for a nudge as soon as the next call to aio_poll is complete.
> + * Because we use QemuEvent, and QemuEvent supports a single consumer
> + * only, this only works when the calling thread holds the big QEMU lock.
> + *
> + * Because aio_poll is used in a loop, spurious wakeups are okay.
> + * Therefore, the I/O thread calls qemu_event_set very liberally
> + * (it helps that qemu_event_set is cheap on an already-set event).
> + * generally used in a loop, it's okay to have spurious wakeups.
Incomplete sentence due to bad rebase leftovers?
> + * Similarly it is okay to return true when no progress was made
> + * (as long as this doesn't happen forever, or you get livelock).
> + *
> +
--
Eric Blake eblake redhat com +1-919-301-3266
Libvirt virtualization library http://libvirt.org