* [PATCH] block: fix QEMU crash with scsi-hd and drive_del
@ 2018-05-16 11:21 ` Greg Kurz
0 siblings, 0 replies; 10+ messages in thread
From: Greg Kurz @ 2018-05-16 11:21 UTC (permalink / raw)
To: qemu-devel; +Cc: Kevin Wolf, Max Reitz, Stefan Hajnoczi, stable
Removing a drive with drive_del while it is being used to run an I/O
intensive workload can cause QEMU to crash.
An AIO flush can yield at some point:
blk_aio_flush_entry()
blk_co_flush(blk)
bdrv_co_flush(blk->root->bs)
...
qemu_coroutine_yield()
and let the HMP command run, free blk->root and give control
back to the AIO flush:
hmp_drive_del()
blk_remove_bs()
bdrv_root_unref_child(blk->root)
child_bs = blk->root->bs
bdrv_detach_child(blk->root)
bdrv_replace_child(blk->root, NULL)
blk->root->bs = NULL
g_free(blk->root) <============== blk->root becomes stale
bdrv_unref(child_bs)
bdrv_delete(child_bs)
bdrv_close()
bdrv_drained_begin()
bdrv_do_drained_begin()
bdrv_drain_recurse()
aio_poll()
...
qemu_coroutine_switch()
and the AIO flush completion ends up dereferencing blk->root:
blk_aio_complete()
scsi_aio_complete()
blk_get_aio_context(blk)
bs = blk_bs(blk)
ie, bs = blk->root ? blk->root->bs : NULL
^^^^^
stale
The solution to this use-after-free situation is to clear
blk->root before calling bdrv_unref() in bdrv_detach_child(),
and let blk_get_aio_context() fall back to the main loop context
since the BDS has been removed.
Signed-off-by: Greg Kurz <groug@kaod.org>
---
The use-after-free condition is easy to reproduce with a stress-ng
run in the guest:
-device virtio-scsi-pci,id=scsi1 \
-drive file=/home/greg/images/scratch.qcow2,format=qcow2,if=none,id=drive1 \
-device scsi-hd,bus=scsi1.0,drive=drive1,id=scsi-hd1
# stress-ng --hdd 0 --aggressive
and doing drive_del from the QEMU monitor while stress-ng is still running:
(qemu) drive_del drive1
The crash is less easy to hit though, as it depends on the bs field
of the stale blk->root having a non-NULL value that eventually breaks
something when it gets dereferenced. The following patch simulates
that, and makes it possible to validate the fix:
--- a/block.c
+++ b/block.c
@@ -2127,6 +2127,8 @@ BdrvChild *bdrv_attach_child(BlockDriverState *parent_bs,
static void bdrv_detach_child(BdrvChild *child)
{
+ BlockDriverState *bs = child->bs;
+
if (child->next.le_prev) {
QLIST_REMOVE(child, next);
child->next.le_prev = NULL;
@@ -2135,7 +2137,15 @@ static void bdrv_detach_child(BdrvChild *child)
bdrv_replace_child(child, NULL);
g_free(child->name);
- g_free(child);
+ /* Poison the BdrvChild instead of freeing it, in order to break blk_bs()
+ * if the blk still has a pointer to this BdrvChild in blk->root.
+ */
+ if (atomic_read(&bs->in_flight)) {
+ child->bs = (BlockDriverState *) -1;
+ fprintf(stderr, "\nPoisoned BdrvChild %p\n", child);
+ } else {
+ g_free(child);
+ }
}
void bdrv_root_unref_child(BdrvChild *child)
---
block/block-backend.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/block/block-backend.c b/block/block-backend.c
index 681b240b1268..ed9434e236b9 100644
--- a/block/block-backend.c
+++ b/block/block-backend.c
@@ -756,6 +756,7 @@ void blk_remove_bs(BlockBackend *blk)
{
ThrottleGroupMember *tgm = &blk->public.throttle_group_member;
BlockDriverState *bs;
+ BdrvChild *root;
notifier_list_notify(&blk->remove_bs_notifiers, blk);
if (tgm->throttle_state) {
@@ -768,8 +769,9 @@ void blk_remove_bs(BlockBackend *blk)
blk_update_root_state(blk);
- bdrv_root_unref_child(blk->root);
+ root = blk->root;
blk->root = NULL;
+ bdrv_root_unref_child(root);
}
/*
* Re: [PATCH] block: fix QEMU crash with scsi-hd and drive_del
2018-05-16 11:21 ` [Qemu-devel] " Greg Kurz
@ 2018-05-16 15:41 ` Greg KH
-1 siblings, 0 replies; 10+ messages in thread
From: Greg KH @ 2018-05-16 15:41 UTC (permalink / raw)
To: Greg Kurz; +Cc: qemu-devel, Kevin Wolf, Max Reitz, Stefan Hajnoczi, stable
On Wed, May 16, 2018 at 01:21:54PM +0200, Greg Kurz wrote:
> Removing a drive with drive_del while it is being used to run an I/O
> intensive workload can cause QEMU to crash.
>
> An AIO flush can yield at some point:
>
> blk_aio_flush_entry()
> blk_co_flush(blk)
> bdrv_co_flush(blk->root->bs)
> ...
> qemu_coroutine_yield()
>
> and let the HMP command run, free blk->root and give control
> back to the AIO flush:
>
> hmp_drive_del()
> blk_remove_bs()
> bdrv_root_unref_child(blk->root)
> child_bs = blk->root->bs
> bdrv_detach_child(blk->root)
> bdrv_replace_child(blk->root, NULL)
> blk->root->bs = NULL
> g_free(blk->root) <============== blk->root becomes stale
> bdrv_unref(child_bs)
> bdrv_delete(child_bs)
> bdrv_close()
> bdrv_drained_begin()
> bdrv_do_drained_begin()
> bdrv_drain_recurse()
> aio_poll()
> ...
> qemu_coroutine_switch()
>
> and the AIO flush completion ends up dereferencing blk->root:
>
> blk_aio_complete()
> scsi_aio_complete()
> blk_get_aio_context(blk)
> bs = blk_bs(blk)
> ie, bs = blk->root ? blk->root->bs : NULL
> ^^^^^
> stale
>
> The solution to this use-after-free situation is to clear
> blk->root before calling bdrv_unref() in bdrv_detach_child(),
> and let blk_get_aio_context() fall back to the main loop context
> since the BDS has been removed.
>
> Signed-off-by: Greg Kurz <groug@kaod.org>
> ---
>
<formletter>
This is not the correct way to submit patches for inclusion in the
stable kernel tree. Please read:
https://www.kernel.org/doc/html/latest/process/stable-kernel-rules.html
for how to do this properly.
</formletter>
* Re: [Qemu-devel] [PATCH] block: fix QEMU crash with scsi-hd and drive_del
2018-05-16 11:21 ` [Qemu-devel] " Greg Kurz
@ 2018-05-16 16:53 ` Greg Kurz
-1 siblings, 0 replies; 10+ messages in thread
From: Greg Kurz @ 2018-05-16 16:53 UTC (permalink / raw)
To: qemu-devel; +Cc: Kevin Wolf, qemu-stable, Stefan Hajnoczi, Max Reitz
Heh, of course I meant qemu-stable@nongnu.org ;)
On Wed, 16 May 2018 13:21:54 +0200
Greg Kurz <groug@kaod.org> wrote:
> Removing a drive with drive_del while it is being used to run an I/O
> intensive workload can cause QEMU to crash.
>
> An AIO flush can yield at some point:
>
> blk_aio_flush_entry()
> blk_co_flush(blk)
> bdrv_co_flush(blk->root->bs)
> ...
> qemu_coroutine_yield()
>
> and let the HMP command run, free blk->root and give control
> back to the AIO flush:
>
> hmp_drive_del()
> blk_remove_bs()
> bdrv_root_unref_child(blk->root)
> child_bs = blk->root->bs
> bdrv_detach_child(blk->root)
> bdrv_replace_child(blk->root, NULL)
> blk->root->bs = NULL
> g_free(blk->root) <============== blk->root becomes stale
> bdrv_unref(child_bs)
> bdrv_delete(child_bs)
> bdrv_close()
> bdrv_drained_begin()
> bdrv_do_drained_begin()
> bdrv_drain_recurse()
> aio_poll()
> ...
> qemu_coroutine_switch()
>
> and the AIO flush completion ends up dereferencing blk->root:
>
> blk_aio_complete()
> scsi_aio_complete()
> blk_get_aio_context(blk)
> bs = blk_bs(blk)
> ie, bs = blk->root ? blk->root->bs : NULL
> ^^^^^
> stale
>
> The solution to this use-after-free situation is to clear
> blk->root before calling bdrv_unref() in bdrv_detach_child(),
> and let blk_get_aio_context() fall back to the main loop context
> since the BDS has been removed.
>
> Signed-off-by: Greg Kurz <groug@kaod.org>
> ---
>
> The use-after-free condition is easy to reproduce with a stress-ng
> run in the guest:
>
> -device virtio-scsi-pci,id=scsi1 \
> -drive file=/home/greg/images/scratch.qcow2,format=qcow2,if=none,id=drive1 \
> -device scsi-hd,bus=scsi1.0,drive=drive1,id=scsi-hd1
>
> # stress-ng --hdd 0 --aggressive
>
> and doing drive_del from the QEMU monitor while stress-ng is still running:
>
> (qemu) drive_del drive1
>
> The crash is less easy to hit though, as it depends on the bs field
> of the stale blk->root to have a non-NULL value that eventually breaks
> something when it gets dereferenced. The following patch simulates
> that, and allows to validate the fix:
>
> --- a/block.c
> +++ b/block.c
> @@ -2127,6 +2127,8 @@ BdrvChild *bdrv_attach_child(BlockDriverState *parent_bs,
>
> static void bdrv_detach_child(BdrvChild *child)
> {
> + BlockDriverState *bs = child->bs;
> +
> if (child->next.le_prev) {
> QLIST_REMOVE(child, next);
> child->next.le_prev = NULL;
> @@ -2135,7 +2137,15 @@ static void bdrv_detach_child(BdrvChild *child)
> bdrv_replace_child(child, NULL);
>
> g_free(child->name);
> - g_free(child);
> + /* Poison the BdrvChild instead of freeing it, in order to break blk_bs()
> + * if the blk still has a pointer to this BdrvChild in blk->root.
> + */
> + if (atomic_read(&bs->in_flight)) {
> + child->bs = (BlockDriverState *) -1;
> + fprintf(stderr, "\nPoisoned BdrvChild %p\n", child);
> + } else {
> + g_free(child);
> + }
> }
>
> void bdrv_root_unref_child(BdrvChild *child)
> ---
> block/block-backend.c | 4 +++-
> 1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/block/block-backend.c b/block/block-backend.c
> index 681b240b1268..ed9434e236b9 100644
> --- a/block/block-backend.c
> +++ b/block/block-backend.c
> @@ -756,6 +756,7 @@ void blk_remove_bs(BlockBackend *blk)
> {
> ThrottleGroupMember *tgm = &blk->public.throttle_group_member;
> BlockDriverState *bs;
> + BdrvChild *root;
>
> notifier_list_notify(&blk->remove_bs_notifiers, blk);
> if (tgm->throttle_state) {
> @@ -768,8 +769,9 @@ void blk_remove_bs(BlockBackend *blk)
>
> blk_update_root_state(blk);
>
> - bdrv_root_unref_child(blk->root);
> + root = blk->root;
> blk->root = NULL;
> + bdrv_root_unref_child(root);
> }
>
> /*
>
>
* Re: [Qemu-devel] [PATCH] block: fix QEMU crash with scsi-hd and drive_del
2018-05-16 11:21 ` [Qemu-devel] " Greg Kurz
@ 2018-05-18 15:32 ` Stefan Hajnoczi
2018-05-23 14:46 ` Greg Kurz
-1 siblings, 1 reply; 10+ messages in thread
From: Stefan Hajnoczi @ 2018-05-18 15:32 UTC (permalink / raw)
To: Greg Kurz; +Cc: qemu-devel, Kevin Wolf, Max Reitz
On Wed, May 16, 2018 at 01:21:54PM +0200, Greg Kurz wrote:
> Removing a drive with drive_del while it is being used to run an I/O
> intensive workload can cause QEMU to crash.
>
> An AIO flush can yield at some point:
>
> blk_aio_flush_entry()
> blk_co_flush(blk)
> bdrv_co_flush(blk->root->bs)
> ...
> qemu_coroutine_yield()
I'm surprised you didn't hit another crash later on with this patch
applied. What happens to this completion after you've set blk->root =
NULL?
> and let the HMP command run, free blk->root and give control
> back to the AIO flush:
>
> hmp_drive_del()
> blk_remove_bs()
> bdrv_root_unref_child(blk->root)
> child_bs = blk->root->bs
> bdrv_detach_child(blk->root)
> bdrv_replace_child(blk->root, NULL)
> blk->root->bs = NULL
> g_free(blk->root) <============== blk->root becomes stale
> bdrv_unref(child_bs)
> bdrv_delete(child_bs)
> bdrv_close()
> bdrv_drained_begin()
> bdrv_do_drained_begin()
> bdrv_drain_recurse()
> aio_poll()
> ...
> qemu_coroutine_switch()
>
> and the AIO flush completion ends up dereferencing blk->root:
>
> blk_aio_complete()
> scsi_aio_complete()
> blk_get_aio_context(blk)
> bs = blk_bs(blk)
> ie, bs = blk->root ? blk->root->bs : NULL
> ^^^^^
> stale
>
> > The solution to this use-after-free situation is to clear
> blk->root before calling bdrv_unref() in bdrv_detach_child(),
> and let blk_get_aio_context() fall back to the main loop context
> since the BDS has been removed.
QEMU should drain I/O requests before making block driver graph changes.
I think the drained region in blk_remove_bs() needs to begin earlier so
that requests are completed before we begin to change things.
Stefan
* Re: [Qemu-devel] [PATCH] block: fix QEMU crash with scsi-hd and drive_del
2018-05-18 15:32 ` Stefan Hajnoczi
@ 2018-05-23 14:46 ` Greg Kurz
2018-05-24 6:04 ` Paolo Bonzini
0 siblings, 1 reply; 10+ messages in thread
From: Greg Kurz @ 2018-05-23 14:46 UTC (permalink / raw)
To: Stefan Hajnoczi; +Cc: qemu-devel, Kevin Wolf, Max Reitz
On Fri, 18 May 2018 16:32:46 +0100
Stefan Hajnoczi <stefanha@redhat.com> wrote:
> On Wed, May 16, 2018 at 01:21:54PM +0200, Greg Kurz wrote:
> > Removing a drive with drive_del while it is being used to run an I/O
> > intensive workload can cause QEMU to crash.
> >
> > An AIO flush can yield at some point:
> >
> > blk_aio_flush_entry()
> > blk_co_flush(blk)
> > bdrv_co_flush(blk->root->bs)
> > ...
> > qemu_coroutine_yield()
>
> I'm surprised you didn't hit another crash later on with this patch
> applied. What happens to this completion after you've set blk->root =
> NULL?
>
bdrv_co_flush() takes a BDS argument, so I don't see how it would be
affected by blk->root being set to NULL. Then blk_co_flush() returns
to blk_aio_flush_entry(), and the next user of blk->root is....
> > and let the HMP command run, free blk->root and give control
> > back to the AIO flush:
> >
> > hmp_drive_del()
> > blk_remove_bs()
> > bdrv_root_unref_child(blk->root)
> > child_bs = blk->root->bs
> > bdrv_detach_child(blk->root)
> > bdrv_replace_child(blk->root, NULL)
> > blk->root->bs = NULL
> > g_free(blk->root) <============== blk->root becomes stale
> > bdrv_unref(child_bs)
> > bdrv_delete(child_bs)
> > bdrv_close()
> > bdrv_drained_begin()
> > bdrv_do_drained_begin()
> > bdrv_drain_recurse()
> > aio_poll()
> > ...
> > qemu_coroutine_switch()
> >
> > and the AIO flush completion ends up dereferencing blk->root:
> >
> > blk_aio_complete()
> > scsi_aio_complete()
> > blk_get_aio_context(blk)
> > bs = blk_bs(blk)
> > ie, bs = blk->root ? blk->root->bs : NULL
... here and the completion ends in the main loop context.
> > ^^^^^
> > stale
> >
> > The solution to this use-after-free situation is to clear
> > blk->root before calling bdrv_unref() in bdrv_detach_child(),
> > and let blk_get_aio_context() fall back to the main loop context
> > since the BDS has been removed.
>
> QEMU should drain I/O requests before making block driver graph changes.
> I think the drained region in blk_remove_bs() needs to begin earlier so
> that requests are completed before we begin to change things.
>
This looks better indeed. The drained section currently begins in
bdrv_close(), which happens much later than bdrv_detach_child(),
which actually changes things.
Maybe change bdrv_root_unref_child() to ensure we don't call
bdrv_close() with pending I/O requests?
void bdrv_root_unref_child(BdrvChild *child)
{
BlockDriverState *child_bs;
child_bs = child->bs;
+ bdrv_drained_begin(child_bs);
bdrv_detach_child(child);
+ bdrv_drained_end(child_bs);
bdrv_unref(child_bs);
}
> Stefan
* Re: [Qemu-devel] [PATCH] block: fix QEMU crash with scsi-hd and drive_del
2018-05-23 14:46 ` Greg Kurz
@ 2018-05-24 6:04 ` Paolo Bonzini
2018-05-24 8:05 ` Stefan Hajnoczi
0 siblings, 1 reply; 10+ messages in thread
From: Paolo Bonzini @ 2018-05-24 6:04 UTC (permalink / raw)
To: Greg Kurz, Stefan Hajnoczi; +Cc: Kevin Wolf, qemu-devel, Max Reitz
On 23/05/2018 16:46, Greg Kurz wrote:
> Maybe change bdrv_root_unref_child() to ensure we don't call
> bdrv_close() with pending I/O requests ?
>
> void bdrv_root_unref_child(BdrvChild *child)
> {
> BlockDriverState *child_bs;
>
> child_bs = child->bs;
> + bdrv_drained_begin(child_bs);
> bdrv_detach_child(child);
> + bdrv_drained_end(child_bs);
> bdrv_unref(child_bs);
> }
Maybe bdrv_detach_child should do it.
Paolo
* Re: [Qemu-devel] [PATCH] block: fix QEMU crash with scsi-hd and drive_del
2018-05-24 6:04 ` Paolo Bonzini
@ 2018-05-24 8:05 ` Stefan Hajnoczi
2018-05-24 8:32 ` Greg Kurz
0 siblings, 1 reply; 10+ messages in thread
From: Stefan Hajnoczi @ 2018-05-24 8:05 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: Greg Kurz, Kevin Wolf, qemu-devel, Max Reitz
On Thu, May 24, 2018 at 08:04:59AM +0200, Paolo Bonzini wrote:
> On 23/05/2018 16:46, Greg Kurz wrote:
> > Maybe change bdrv_root_unref_child() to ensure we don't call
> > bdrv_close() with pending I/O requests ?
> >
> > void bdrv_root_unref_child(BdrvChild *child)
> > {
> > BlockDriverState *child_bs;
> >
> > child_bs = child->bs;
> > + bdrv_drained_begin(child_bs);
> > bdrv_detach_child(child);
> > + bdrv_drained_end(child_bs);
> > bdrv_unref(child_bs);
> > }
>
> Maybe bdrv_detach_child should do it.
Sounds good.
Stefan
* Re: [Qemu-devel] [PATCH] block: fix QEMU crash with scsi-hd and drive_del
2018-05-24 8:05 ` Stefan Hajnoczi
@ 2018-05-24 8:32 ` Greg Kurz
0 siblings, 0 replies; 10+ messages in thread
From: Greg Kurz @ 2018-05-24 8:32 UTC (permalink / raw)
To: Stefan Hajnoczi; +Cc: Paolo Bonzini, Kevin Wolf, qemu-devel, Max Reitz
On Thu, 24 May 2018 09:05:53 +0100
Stefan Hajnoczi <stefanha@redhat.com> wrote:
> On Thu, May 24, 2018 at 08:04:59AM +0200, Paolo Bonzini wrote:
> > On 23/05/2018 16:46, Greg Kurz wrote:
> > > Maybe change bdrv_root_unref_child() to ensure we don't call
> > > bdrv_close() with pending I/O requests ?
> > >
> > > void bdrv_root_unref_child(BdrvChild *child)
> > > {
> > > BlockDriverState *child_bs;
> > >
> > > child_bs = child->bs;
> > > + bdrv_drained_begin(child_bs);
> > > bdrv_detach_child(child);
> > > + bdrv_drained_end(child_bs);
> > > bdrv_unref(child_bs);
> > > }
> >
> > Maybe bdrv_detach_child should do it.
>
> Sounds good.
>
> Stefan
I guess it makes sense for bdrv_detach_child() to *break* blk->root without
leaving I/O requests behind. I'll just do that then.
Cheers,
--
Greg