* [Qemu-devel] [RFC PATCH] async: Fix aio_notify_accept
From: Fam Zheng @ 2018-08-03 15:49 UTC (permalink / raw)
To: qemu-devel; +Cc: pbonzini, Fam Zheng, Stefan Hajnoczi, qemu-block

From the main loop, bdrv_set_aio_context() can call the IOThread's
aio_poll().  That breaks aio_notify(), because the ctx->notifier event
can get cleared too early by the main loop, which leaves the IOThread
hanging.  See https://bugzilla.redhat.com/show_bug.cgi?id=1562750 for
details.

Signed-off-by: Fam Zheng <famz@redhat.com>
---
 util/async.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/util/async.c b/util/async.c
index 05979f8014..c6e8aebc3a 100644
--- a/util/async.c
+++ b/util/async.c
@@ -355,7 +355,11 @@ void aio_notify(AioContext *ctx)
 
 void aio_notify_accept(AioContext *ctx)
 {
-    if (atomic_xchg(&ctx->notified, false)) {
+    /* If ctx->notify_me >= 2, another aio_poll() is waiting and may need
+     * the ctx->notifier event to wake up, so don't clear it just because
+     * "we" are done iterating. */
+    if (atomic_read(&ctx->notify_me) < 2
+        && atomic_xchg(&ctx->notified, false)) {
         event_notifier_test_and_clear(&ctx->notifier);
     }
 }
-- 
2.17.1
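[Editor's note: the race the commit message describes can be sketched as a
single-threaded toy model.  This is plain illustrative C, not QEMU code; the
`model_*` names, the `struct model_ctx` fields, and the `bool event` stand-in
for the EventNotifier are all made up for this sketch.  A blocking poller
announces itself via notify_me, a notification arrives, and a non-blocking
aio_poll() on another thread "accepts" (clears) the event that the blocking
poller is sleeping on.]

```c
#include <stdbool.h>

/* Toy model of the AioContext notification handshake. */
struct model_ctx {
    int  notify_me;  /* incremented by 2 while a poller is about to block */
    bool notified;   /* aio_notify() ran since the last accept */
    bool event;      /* stands in for the ctx->notifier EventNotifier */
};

static void model_notify(struct model_ctx *ctx)
{
    if (ctx->notify_me) {          /* only wake pollers that may block */
        ctx->event = true;
        ctx->notified = true;
    }
}

/* Patched accept: skip clearing while another poller may still be waiting. */
static void model_notify_accept(struct model_ctx *ctx)
{
    if (ctx->notify_me < 2 && ctx->notified) {
        ctx->notified = false;
        ctx->event = false;        /* event_notifier_test_and_clear() */
    }
}

/* Returns true if the blocking poller's wakeup event survives the race. */
static bool steal_scenario(bool patched)
{
    struct model_ctx ctx = {0};

    ctx.notify_me += 2;            /* IOThread is about to block in ppoll() */
    model_notify(&ctx);            /* work is scheduled, the event is set */

    /* Main thread runs a non-blocking aio_poll() and accepts the notify. */
    if (patched) {
        model_notify_accept(&ctx); /* notify_me >= 2, so nothing is cleared */
    } else {
        ctx.notified = false;      /* unpatched accept always clears */
        ctx.event = false;
    }
    return ctx.event;              /* does the IOThread's wakeup survive? */
}
```

With the patched accept the event survives the main thread's non-blocking
poll; without it, the IOThread's wakeup is stolen and the thread hangs.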
* Re: [Qemu-devel] [RFC PATCH] async: Fix aio_notify_accept
From: Paolo Bonzini @ 2018-08-03 16:51 UTC (permalink / raw)
To: Fam Zheng, qemu-devel; +Cc: Stefan Hajnoczi, qemu-block

On 03/08/2018 17:49, Fam Zheng wrote:
>  void aio_notify_accept(AioContext *ctx)
>  {
> -    if (atomic_xchg(&ctx->notified, false)) {
> +    /* If ctx->notify_me >= 2, another aio_poll() is waiting and may need
> +     * the ctx->notifier event to wake up, so don't clear it just because
> +     * "we" are done iterating. */
> +    if (atomic_read(&ctx->notify_me) < 2
> +        && atomic_xchg(&ctx->notified, false)) {
>          event_notifier_test_and_clear(&ctx->notifier);
>      }
>  }

I'm worried that this could cause a busy wait, and I don't understand
the issue.  When aio_poll()s are nested, the outer calls are in the
"dispatch" phase and therefore do not need notification.  In your
situation, is notify_me actually ever > 2?

Paolo
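[Editor's note: Paolo's "> 2" question refers to the notify_me accounting
that the patch's `< 2` test implies.  The sketch below is a hypothetical
reconstruction of that accounting, not QEMU code: it assumes each aio_poll()
that is about to block adds 2 to notify_me, with the low bit reserved for
the main loop's prepare/check path, so notify_me can only exceed 2 if two
pollers block concurrently.  A nested aio_poll() in the dispatch phase does
not increment it, because dispatch never waits for a notification.]

```c
#include <stdbool.h>

/* Hypothetical notify_me accounting implied by the patch's `< 2` test. */
static int notify_me;

static void about_to_block(void) { notify_me += 2; }
static void done_blocking(void)  { notify_me -= 2; }

/* notify_me >= 2 means at least one aio_poll() may be sleeping on
 * ctx->notifier, so its wakeup event must not be cleared. */
static bool someone_may_be_blocked(void)
{
    return notify_me >= 2;
}
```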
* Re: [Qemu-devel] [RFC PATCH] async: Fix aio_notify_accept
From: Paolo Bonzini @ 2018-08-03 17:08 UTC (permalink / raw)
To: Fam Zheng, qemu-devel; +Cc: Stefan Hajnoczi, qemu-block

On 03/08/2018 17:49, Fam Zheng wrote:
>  void aio_notify_accept(AioContext *ctx)
>  {
> -    if (atomic_xchg(&ctx->notified, false)) {
> +    /* If ctx->notify_me >= 2, another aio_poll() is waiting and may need
> +     * the ctx->notifier event to wake up, so don't clear it just because
> +     * "we" are done iterating. */
> +    if (atomic_read(&ctx->notify_me) < 2
> +        && atomic_xchg(&ctx->notified, false)) {
>          event_notifier_test_and_clear(&ctx->notifier);
>      }
>  }

Ok, it's somewhat reassuring to see from the BZ that the aio_poll() in
the main thread (in bdrv_set_aio_context) is non-blocking, and that it
isn't about nested aio_poll.

Then a busy wait is not possible there, because sooner or later the
bottom halves will be exhausted and aio_wait will return false (no
progress).

I'm convinced that the idea in your patch---skipping
aio_notify_accept---is correct; it's the ctx->notify_me test that I
cannot understand.  I'm not saying it's wrong, but it's tricky.  So we
need to improve the comments, the commit message, the way we achieve
the fix, or all three.

As to the comments and commit message: the BZ is a very good source of
information.  The comment on the main thread stealing the aio_notify
was very clear.

As to how to fix it, first of all, we should be clear on the
invariants.  It would be nice to assert that, if not
in_aio_context_home_thread(ctx), blocking must be false.  Two
concurrent blocking aio_polls will steal aio_notify from one another,
so intuitively that assertion should be true, and using AIO_WAIT_WHILE
takes care of it.

Second, if blocking is false, do we need to call aio_notify_accept at
all?  If not, and if we combine this with the assertion above, only the
I/O thread will call aio_notify_accept, and the main loop will never
steal the notification.  So that should fix the bug.

Thanks,

Paolo
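[Editor's note: Paolo's alternative, calling aio_notify_accept() only on the
blocking path, can be sketched with another toy model.  Again this is
illustrative C, not QEMU code; `toy_*` names and the boolean event stand-in
are invented here, and the toy_poll() body elides the actual waiting and
dispatching.]

```c
#include <stdbool.h>

struct toy_ctx {
    bool notified;   /* aio_notify() ran since the last accept */
    bool event;      /* stands in for the ctx->notifier EventNotifier */
};

static void toy_notify(struct toy_ctx *ctx)
{
    ctx->event = true;
    ctx->notified = true;
}

static void toy_notify_accept(struct toy_ctx *ctx)
{
    if (ctx->notified) {
        ctx->notified = false;
        ctx->event = false;   /* event_notifier_test_and_clear() */
    }
}

/* Per the suggestion, only a blocking poll (which, given the proposed
 * assertion, runs only in the context's home thread) consumes the
 * notification; a non-blocking poll leaves ctx->notifier alone. */
static void toy_poll(struct toy_ctx *ctx, bool blocking)
{
    /* ... wait for and collect ready handlers here ... */
    if (blocking) {
        toy_notify_accept(ctx);
    }
    /* ... dispatch handlers ... */
}

/* The interleaving from the BZ: a wakeup destined for the IOThread
 * arrives, then the main loop runs a non-blocking aio_poll(). */
static bool event_survives_nonblocking_poll(void)
{
    struct toy_ctx ctx = {0};
    toy_notify(&ctx);
    toy_poll(&ctx, false);    /* main loop: blocking == false */
    return ctx.event;         /* still set, so the IOThread will wake up */
}
```

Here the non-blocking poll never touches the event at all, so no notify_me
test is needed, which is what makes this variant easier to reason about.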
* Re: [Qemu-devel] [RFC PATCH] async: Fix aio_notify_accept
From: Fam Zheng @ 2018-08-07 1:01 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: qemu-devel, Stefan Hajnoczi, qemu-block

On Fri, 08/03 19:08, Paolo Bonzini wrote:
> [...]
>
> As to the comments and commit message: the BZ is a very good source of
> information.  The comment on the main thread stealing the aio_notify
> was very clear.

Yes, it was late Friday night and I wanted to send the patch before the
long weekend :)

> As to how to fix it, first of all, we should be clear on the
> invariants.  It would be nice to assert that, if not
> in_aio_context_home_thread(ctx), blocking must be false.  Two
> concurrent blocking aio_polls will steal aio_notify from one another,
> so intuitively that assertion should be true, and using AIO_WAIT_WHILE
> takes care of it.
>
> Second, if blocking is false, do we need to call aio_notify_accept at
> all?  If not, and if we combine this with the assertion above, only
> the I/O thread will call aio_notify_accept, and the main loop will
> never steal the notification.  So that should fix the bug.

Yes, I think this is a better idea.  I'll try it.

Fam
Thread overview: 4+ messages (download: mbox.gz / follow: Atom feed)

2018-08-03 15:49 [Qemu-devel] [RFC PATCH] async: Fix aio_notify_accept Fam Zheng
2018-08-03 16:51 ` Paolo Bonzini
2018-08-03 17:08 ` Paolo Bonzini
2018-08-07  1:01   ` Fam Zheng