* [Qemu-devel] [PATCH for-2.9-rc5 v3] block: Drain BH in bdrv_drained_begin
@ 2017-04-18 10:39 Fam Zheng
2017-04-18 10:53 ` Kevin Wolf
` (2 more replies)
From: Fam Zheng @ 2017-04-18 10:39 UTC (permalink / raw)
To: qemu-devel
Cc: qemu-block, pbonzini, jcody, Stefan Hajnoczi, Kevin Wolf,
Fam Zheng, Max Reitz
During block job completion, nothing prevents
block_job_defer_to_main_loop_bh from being called in a nested
aio_poll(), which causes trouble, as in this code path:
qmp_block_commit
commit_active_start
bdrv_reopen
bdrv_reopen_multiple
bdrv_reopen_prepare
bdrv_flush
aio_poll
aio_bh_poll
aio_bh_call
block_job_defer_to_main_loop_bh
stream_complete
bdrv_reopen
block_job_defer_to_main_loop_bh is the last step of the stream job,
which should have been "paused" by the bdrv_drained_begin/end in
bdrv_reopen_multiple, but that does not happen because it runs as a
main loop BH.
Similar to why block jobs should be paused between drained_begin and
drained_end, BHs they schedule must be excluded as well. To achieve
this, this patch forces draining the BH in BDRV_POLL_WHILE.
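As a toy illustration (self-contained C, not QEMU code; every name below is
invented for the example), compare a drain loop that only waits for in-flight
requests with one that also keeps polling while progress is being made:

  #include <stdbool.h>
  #include <stdio.h>

  /* Toy stand-ins for the block layer's state: one in-flight counter and at
   * most one pending "bottom half" (the deferred job-completion callback). */
  static int in_flight;
  static void (*pending_bh)(void);

  static void job_completion_bh(void)
  {
      printf("job completion BH ran\n");
  }

  /* One non-blocking poll iteration: run a pending BH if there is one and
   * report whether any progress was made (like aio_poll()'s return value). */
  static bool poll_once(void)
  {
      if (pending_bh) {
          void (*bh)(void) = pending_bh;
          pending_bh = NULL;
          bh();
          return true;
      }
      return false;
  }

  /* Old behaviour: only wait for in-flight requests.  With in_flight == 0 it
   * returns at once, and the BH survives the "drained" section, free to fire
   * later inside an unrelated nested poll. */
  static void drain_old(void)
  {
      while (in_flight > 0) {
          poll_once();
      }
  }

  /* New behaviour (the idea of this patch): keep polling while either the
   * condition holds or polling still makes progress, so pending BHs run
   * before the drained section proceeds. */
  static void drain_new(void)
  {
      bool busy = true;

      while (in_flight > 0 || busy) {
          busy = poll_once();
      }
  }

  int main(void)
  {
      pending_bh = job_completion_bh;
      drain_old();
      printf("after drain_old: BH pending? %s\n", pending_bh ? "yes" : "no");
      drain_new();
      printf("after drain_new: BH pending? %s\n", pending_bh ? "yes" : "no");
      return 0;
  }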
Also, because the BH in question can call bdrv_unref and replace children,
protect @bs carefully to avoid use-after-free.
As a side effect this fixes a hang in block_job_detach_aio_context
during system_reset when a block job is ready:
#0 0x0000555555aa79f3 in bdrv_drain_recurse
#1 0x0000555555aa825d in bdrv_drained_begin
#2 0x0000555555aa8449 in bdrv_drain
#3 0x0000555555a9c356 in blk_drain
#4 0x0000555555aa3cfd in mirror_drain
#5 0x0000555555a66e11 in block_job_detach_aio_context
#6 0x0000555555a62f4d in bdrv_detach_aio_context
#7 0x0000555555a63116 in bdrv_set_aio_context
#8 0x0000555555a9d326 in blk_set_aio_context
#9 0x00005555557e38da in virtio_blk_data_plane_stop
#10 0x00005555559f9d5f in virtio_bus_stop_ioeventfd
#11 0x00005555559fa49b in virtio_bus_stop_ioeventfd
#12 0x00005555559f6a18 in virtio_pci_stop_ioeventfd
#13 0x00005555559f6a18 in virtio_pci_reset
#14 0x00005555559139a9 in qdev_reset_one
#15 0x0000555555916738 in qbus_walk_children
#16 0x0000555555913318 in qdev_walk_children
#17 0x0000555555916738 in qbus_walk_children
#18 0x00005555559168ca in qemu_devices_reset
#19 0x000055555581fcbb in pc_machine_reset
#20 0x00005555558a4d96 in qemu_system_reset
#21 0x000055555577157a in main_loop_should_exit
#22 0x000055555577157a in main_loop
#23 0x000055555577157a in main
The rationale is that the loop in block_job_detach_aio_context cannot
make any progress in pausing/completing the job, because bs->in_flight
is 0, so bdrv_drain doesn't process the block_job_defer_to_main_loop
BH. With this patch, it does.
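Paraphrased, the loop in question looks roughly like this (a sketch based on
blockjob.c of this period; field and helper names are approximate):

  /* Approximate paraphrase of block_job_detach_aio_context(); not the
   * literal source. */
  block_job_pause(job);
  while (!job->paused && !job->completed) {
      /* Ends up in bdrv_drain().  With bs->in_flight == 0 the pre-patch
       * BDRV_POLL_WHILE returns immediately, the deferred-completion BH is
       * never executed, and this loop spins forever. */
      blk_drain(job->blk);
      if (job->driver->drain) {
          job->driver->drain(job);
      }
  }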
Reported-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Fam Zheng <famz@redhat.com>
---
v3: Report all aio_poll() progress as waited_. [Kevin]
v2: Do the BH poll in BDRV_POLL_WHILE to cover bdrv_drain_all_begin as
well. [Kevin]
---
block/io.c | 10 +++++++---
include/block/block.h | 22 ++++++++++++++--------
2 files changed, 21 insertions(+), 11 deletions(-)
diff --git a/block/io.c b/block/io.c
index 8706bfa..a472157 100644
--- a/block/io.c
+++ b/block/io.c
@@ -158,7 +158,7 @@ bool bdrv_requests_pending(BlockDriverState *bs)
static bool bdrv_drain_recurse(BlockDriverState *bs)
{
- BdrvChild *child;
+ BdrvChild *child, *tmp;
bool waited;
waited = BDRV_POLL_WHILE(bs, atomic_read(&bs->in_flight) > 0);
@@ -167,8 +167,12 @@ static bool bdrv_drain_recurse(BlockDriverState *bs)
bs->drv->bdrv_drain(bs);
}
- QLIST_FOREACH(child, &bs->children, next) {
- waited |= bdrv_drain_recurse(child->bs);
+ QLIST_FOREACH_SAFE(child, &bs->children, next, tmp) {
+ BlockDriverState *bs = child->bs;
+ assert(bs->refcnt > 0);
+ bdrv_ref(bs);
+ waited |= bdrv_drain_recurse(bs);
+ bdrv_unref(bs);
}
return waited;
diff --git a/include/block/block.h b/include/block/block.h
index 97d4330..5ddc0cf 100644
--- a/include/block/block.h
+++ b/include/block/block.h
@@ -381,12 +381,13 @@ void bdrv_drain_all(void);
#define BDRV_POLL_WHILE(bs, cond) ({ \
bool waited_ = false; \
+ bool busy_ = true; \
BlockDriverState *bs_ = (bs); \
AioContext *ctx_ = bdrv_get_aio_context(bs_); \
if (aio_context_in_iothread(ctx_)) { \
- while ((cond)) { \
- aio_poll(ctx_, true); \
- waited_ = true; \
+ while ((cond) || busy_) { \
+ busy_ = aio_poll(ctx_, (cond)); \
+ waited_ |= !!(cond) | busy_; \
} \
} else { \
assert(qemu_get_current_aio_context() == \
@@ -398,11 +399,16 @@ void bdrv_drain_all(void);
*/ \
assert(!bs_->wakeup); \
bs_->wakeup = true; \
- while ((cond)) { \
- aio_context_release(ctx_); \
- aio_poll(qemu_get_aio_context(), true); \
- aio_context_acquire(ctx_); \
- waited_ = true; \
+ while (busy_) { \
+ if ((cond)) { \
+ waited_ = busy_ = true; \
+ aio_context_release(ctx_); \
+ aio_poll(qemu_get_aio_context(), true); \
+ aio_context_acquire(ctx_); \
+ } else { \
+ busy_ = aio_poll(ctx_, false); \
+ waited_ |= busy_; \
+ } \
} \
bs_->wakeup = false; \
} \
--
2.9.3
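Reporting every aio_poll() that made progress through waited_ (the v3 change)
matters because drain callers iterate to a fixpoint: bdrv_drain_all_begin()
in this era, roughly, keeps re-draining every node until no node reports
progress. A self-contained toy of that fixpoint pattern (all names invented
for illustration):

  #include <stdbool.h>
  #include <stdio.h>

  #define NUM_NODES 3

  /* Each node "needs" a few rounds of work before it is quiescent. */
  static int work_left[NUM_NODES] = { 2, 0, 1 };

  /* Stand-in for one BDRV_POLL_WHILE-style drain: returns true if any
   * progress was made on this node. */
  static bool drain_one(int node)
  {
      if (work_left[node] > 0) {
          work_left[node]--;
          return true;
      }
      return false;
  }

  int main(void)
  {
      bool waited = true;
      int rounds = 0;

      /* Re-drain everything until a full pass makes no progress anywhere.
       * If BH execution were not counted as progress, draining one node
       * could schedule work on an already-visited node and the loop would
       * stop too early. */
      while (waited) {
          waited = false;
          for (int i = 0; i < NUM_NODES; i++) {
              waited |= drain_one(i);
          }
          rounds++;
      }
      printf("quiescent after %d rounds\n", rounds);
      return 0;
  }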
* Re: [Qemu-devel] [PATCH for-2.9-rc5 v3] block: Drain BH in bdrv_drained_begin
2017-04-18 10:39 [Qemu-devel] [PATCH for-2.9-rc5 v3] block: Drain BH in bdrv_drained_begin Fam Zheng
@ 2017-04-18 10:53 ` Kevin Wolf
2017-04-18 11:21 ` Jeff Cody
2017-04-18 12:36 ` Paolo Bonzini
From: Kevin Wolf @ 2017-04-18 10:53 UTC (permalink / raw)
To: Fam Zheng
Cc: qemu-devel, qemu-block, pbonzini, jcody, Stefan Hajnoczi, Max Reitz
On 18.04.2017 at 12:39, Fam Zheng wrote:
> During block job completion, nothing prevents
> block_job_defer_to_main_loop_bh from being called in a nested
> aio_poll(), which causes trouble, as in this code path:
>
> qmp_block_commit
> commit_active_start
> bdrv_reopen
> bdrv_reopen_multiple
> bdrv_reopen_prepare
> bdrv_flush
> aio_poll
> aio_bh_poll
> aio_bh_call
> block_job_defer_to_main_loop_bh
> stream_complete
> bdrv_reopen
>
> block_job_defer_to_main_loop_bh is the last step of the stream job,
> which should have been "paused" by the bdrv_drained_begin/end in
> bdrv_reopen_multiple, but that does not happen because it runs as a
> main loop BH.
>
> Similar to why block jobs should be paused between drained_begin and
> drained_end, BHs they schedule must be excluded as well. To achieve
> this, this patch forces draining the BH in BDRV_POLL_WHILE.
>
> Also, because the BH in question can call bdrv_unref and replace children,
> protect @bs carefully to avoid use-after-free.
>
> As a side effect this fixes a hang in block_job_detach_aio_context
> during system_reset when a block job is ready:
>
> #0 0x0000555555aa79f3 in bdrv_drain_recurse
> #1 0x0000555555aa825d in bdrv_drained_begin
> #2 0x0000555555aa8449 in bdrv_drain
> #3 0x0000555555a9c356 in blk_drain
> #4 0x0000555555aa3cfd in mirror_drain
> #5 0x0000555555a66e11 in block_job_detach_aio_context
> #6 0x0000555555a62f4d in bdrv_detach_aio_context
> #7 0x0000555555a63116 in bdrv_set_aio_context
> #8 0x0000555555a9d326 in blk_set_aio_context
> #9 0x00005555557e38da in virtio_blk_data_plane_stop
> #10 0x00005555559f9d5f in virtio_bus_stop_ioeventfd
> #11 0x00005555559fa49b in virtio_bus_stop_ioeventfd
> #12 0x00005555559f6a18 in virtio_pci_stop_ioeventfd
> #13 0x00005555559f6a18 in virtio_pci_reset
> #14 0x00005555559139a9 in qdev_reset_one
> #15 0x0000555555916738 in qbus_walk_children
> #16 0x0000555555913318 in qdev_walk_children
> #17 0x0000555555916738 in qbus_walk_children
> #18 0x00005555559168ca in qemu_devices_reset
> #19 0x000055555581fcbb in pc_machine_reset
> #20 0x00005555558a4d96 in qemu_system_reset
> #21 0x000055555577157a in main_loop_should_exit
> #22 0x000055555577157a in main_loop
> #23 0x000055555577157a in main
>
> The rationale is that the loop in block_job_detach_aio_context cannot
> make any progress in pausing/completing the job, because bs->in_flight
> is 0, so bdrv_drain doesn't process the block_job_defer_to_main_loop
> BH. With this patch, it does.
>
> Reported-by: Jeff Cody <jcody@redhat.com>
> Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
* Re: [Qemu-devel] [PATCH for-2.9-rc5 v3] block: Drain BH in bdrv_drained_begin
2017-04-18 10:39 [Qemu-devel] [PATCH for-2.9-rc5 v3] block: Drain BH in bdrv_drained_begin Fam Zheng
2017-04-18 10:53 ` Kevin Wolf
@ 2017-04-18 11:21 ` Jeff Cody
2017-04-18 12:36 ` Paolo Bonzini
From: Jeff Cody @ 2017-04-18 11:21 UTC (permalink / raw)
To: Fam Zheng
Cc: qemu-devel, qemu-block, pbonzini, Stefan Hajnoczi, Kevin Wolf, Max Reitz
On Tue, Apr 18, 2017 at 06:39:48PM +0800, Fam Zheng wrote:
> During block job completion, nothing prevents
> block_job_defer_to_main_loop_bh from being called in a nested
> aio_poll(), which causes trouble, as in this code path:
>
> qmp_block_commit
> commit_active_start
> bdrv_reopen
> bdrv_reopen_multiple
> bdrv_reopen_prepare
> bdrv_flush
> aio_poll
> aio_bh_poll
> aio_bh_call
> block_job_defer_to_main_loop_bh
> stream_complete
> bdrv_reopen
>
> block_job_defer_to_main_loop_bh is the last step of the stream job,
> which should have been "paused" by the bdrv_drained_begin/end in
> bdrv_reopen_multiple, but that does not happen because it runs as a
> main loop BH.
>
> Similar to why block jobs should be paused between drained_begin and
> drained_end, BHs they schedule must be excluded as well. To achieve
> this, this patch forces draining the BH in BDRV_POLL_WHILE.
>
> Also, because the BH in question can call bdrv_unref and replace children,
> protect @bs carefully to avoid use-after-free.
>
> As a side effect this fixes a hang in block_job_detach_aio_context
> during system_reset when a block job is ready:
>
> #0 0x0000555555aa79f3 in bdrv_drain_recurse
> #1 0x0000555555aa825d in bdrv_drained_begin
> #2 0x0000555555aa8449 in bdrv_drain
> #3 0x0000555555a9c356 in blk_drain
> #4 0x0000555555aa3cfd in mirror_drain
> #5 0x0000555555a66e11 in block_job_detach_aio_context
> #6 0x0000555555a62f4d in bdrv_detach_aio_context
> #7 0x0000555555a63116 in bdrv_set_aio_context
> #8 0x0000555555a9d326 in blk_set_aio_context
> #9 0x00005555557e38da in virtio_blk_data_plane_stop
> #10 0x00005555559f9d5f in virtio_bus_stop_ioeventfd
> #11 0x00005555559fa49b in virtio_bus_stop_ioeventfd
> #12 0x00005555559f6a18 in virtio_pci_stop_ioeventfd
> #13 0x00005555559f6a18 in virtio_pci_reset
> #14 0x00005555559139a9 in qdev_reset_one
> #15 0x0000555555916738 in qbus_walk_children
> #16 0x0000555555913318 in qdev_walk_children
> #17 0x0000555555916738 in qbus_walk_children
> #18 0x00005555559168ca in qemu_devices_reset
> #19 0x000055555581fcbb in pc_machine_reset
> #20 0x00005555558a4d96 in qemu_system_reset
> #21 0x000055555577157a in main_loop_should_exit
> #22 0x000055555577157a in main_loop
> #23 0x000055555577157a in main
>
> The rationale is that the loop in block_job_detach_aio_context cannot
> make any progress in pausing/completing the job, because bs->in_flight
> is 0, so bdrv_drain doesn't process the block_job_defer_to_main_loop
> BH. With this patch, it does.
>
> Reported-by: Jeff Cody <jcody@redhat.com>
> Signed-off-by: Fam Zheng <famz@redhat.com>
>
Verified this fixes the deadlock condition from my report:
Tested-by: Jeff Cody <jcody@redhat.com>
> ---
>
> v3: Report all aio_poll() progress as waited_. [Kevin]
>
> v2: Do the BH poll in BDRV_POLL_WHILE to cover bdrv_drain_all_begin as
> well. [Kevin]
> ---
> block/io.c | 10 +++++++---
> include/block/block.h | 22 ++++++++++++++--------
> 2 files changed, 21 insertions(+), 11 deletions(-)
>
> diff --git a/block/io.c b/block/io.c
> index 8706bfa..a472157 100644
> --- a/block/io.c
> +++ b/block/io.c
> @@ -158,7 +158,7 @@ bool bdrv_requests_pending(BlockDriverState *bs)
>
> static bool bdrv_drain_recurse(BlockDriverState *bs)
> {
> - BdrvChild *child;
> + BdrvChild *child, *tmp;
> bool waited;
>
> waited = BDRV_POLL_WHILE(bs, atomic_read(&bs->in_flight) > 0);
> @@ -167,8 +167,12 @@ static bool bdrv_drain_recurse(BlockDriverState *bs)
> bs->drv->bdrv_drain(bs);
> }
>
> - QLIST_FOREACH(child, &bs->children, next) {
> - waited |= bdrv_drain_recurse(child->bs);
> + QLIST_FOREACH_SAFE(child, &bs->children, next, tmp) {
> + BlockDriverState *bs = child->bs;
> + assert(bs->refcnt > 0);
> + bdrv_ref(bs);
> + waited |= bdrv_drain_recurse(bs);
> + bdrv_unref(bs);
> }
>
> return waited;
> diff --git a/include/block/block.h b/include/block/block.h
> index 97d4330..5ddc0cf 100644
> --- a/include/block/block.h
> +++ b/include/block/block.h
> @@ -381,12 +381,13 @@ void bdrv_drain_all(void);
>
> #define BDRV_POLL_WHILE(bs, cond) ({ \
> bool waited_ = false; \
> + bool busy_ = true; \
> BlockDriverState *bs_ = (bs); \
> AioContext *ctx_ = bdrv_get_aio_context(bs_); \
> if (aio_context_in_iothread(ctx_)) { \
> - while ((cond)) { \
> - aio_poll(ctx_, true); \
> - waited_ = true; \
> + while ((cond) || busy_) { \
> + busy_ = aio_poll(ctx_, (cond)); \
> + waited_ |= !!(cond) | busy_; \
> } \
> } else { \
> assert(qemu_get_current_aio_context() == \
> @@ -398,11 +399,16 @@ void bdrv_drain_all(void);
> */ \
> assert(!bs_->wakeup); \
> bs_->wakeup = true; \
> - while ((cond)) { \
> - aio_context_release(ctx_); \
> - aio_poll(qemu_get_aio_context(), true); \
> - aio_context_acquire(ctx_); \
> - waited_ = true; \
> + while (busy_) { \
> + if ((cond)) { \
> + waited_ = busy_ = true; \
> + aio_context_release(ctx_); \
> + aio_poll(qemu_get_aio_context(), true); \
> + aio_context_acquire(ctx_); \
> + } else { \
> + busy_ = aio_poll(ctx_, false); \
> + waited_ |= busy_; \
> + } \
> } \
> bs_->wakeup = false; \
> } \
> --
> 2.9.3
>
* Re: [Qemu-devel] [PATCH for-2.9-rc5 v3] block: Drain BH in bdrv_drained_begin
2017-04-18 10:39 [Qemu-devel] [PATCH for-2.9-rc5 v3] block: Drain BH in bdrv_drained_begin Fam Zheng
2017-04-18 10:53 ` Kevin Wolf
2017-04-18 11:21 ` Jeff Cody
@ 2017-04-18 12:36 ` Paolo Bonzini
2017-04-18 13:46 ` Fam Zheng
From: Paolo Bonzini @ 2017-04-18 12:36 UTC (permalink / raw)
To: Fam Zheng, qemu-devel
Cc: qemu-block, jcody, Stefan Hajnoczi, Kevin Wolf, Max Reitz
On 18/04/2017 12:39, Fam Zheng wrote:
> + QLIST_FOREACH_SAFE(child, &bs->children, next, tmp) {
> + BlockDriverState *bs = child->bs;
> + assert(bs->refcnt > 0);
> + bdrv_ref(bs);
> + waited |= bdrv_drain_recurse(bs);
> + bdrv_unref(bs);
> }
I think this accesses global state that is not protected by the
AioContext lock?
Paolo
* Re: [Qemu-devel] [PATCH for-2.9-rc5 v3] block: Drain BH in bdrv_drained_begin
2017-04-18 12:36 ` Paolo Bonzini
@ 2017-04-18 13:46 ` Fam Zheng
From: Fam Zheng @ 2017-04-18 13:46 UTC (permalink / raw)
To: Paolo Bonzini
Cc: qemu-devel, Kevin Wolf, jcody, Stefan Hajnoczi, qemu-block, Max Reitz
On Tue, 04/18 14:36, Paolo Bonzini wrote:
>
>
> On 18/04/2017 12:39, Fam Zheng wrote:
> > + QLIST_FOREACH_SAFE(child, &bs->children, next, tmp) {
> > + BlockDriverState *bs = child->bs;
> > + assert(bs->refcnt > 0);
> > + bdrv_ref(bs);
> > + waited |= bdrv_drain_recurse(bs);
> > + bdrv_unref(bs);
> > }
>
> I think this accesses global state that is not protected by the
> AioContext lock?
Good catch! If called from an IOThread, this bdrv_unref is simply wrong,
although in practice it cannot delete bs because of the reference held by the
owning device.
It may be better to wrap the bdrv_ref/bdrv_unref calls with
if (qemu_get_current_aio_context() == qemu_get_aio_context())
because only the main loop needs it.
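Concretely, such a guard could look roughly like this (a sketch only, not the
actual v4 hunk, which may well differ):

  QLIST_FOREACH_SAFE(child, &bs->children, next, tmp) {
      BlockDriverState *bs = child->bs;
      /* Sketch: only the main loop needs (and may safely take) the extra
       * reference; an IOThread must not touch the refcount. */
      bool in_main_loop =
          qemu_get_current_aio_context() == qemu_get_aio_context();

      assert(bs->refcnt > 0);
      if (in_main_loop) {
          bdrv_ref(bs);
      }
      waited |= bdrv_drain_recurse(bs);
      if (in_main_loop) {
          bdrv_unref(bs);
      }
  }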
Will make this hunk a separate patch in v4.
Fam