* [Qemu-devel] [PATCH] block: Clean up after deleting BHs
@ 2009-06-23 13:20 Avi Kivity
2009-06-23 13:37 ` Filip Navara
` (2 more replies)
0 siblings, 3 replies; 12+ messages in thread
From: Avi Kivity @ 2009-06-23 13:20 UTC (permalink / raw)
To: qemu-devel
Commit 6a7ad299 ("Call qemu_bh_delete at bdrv_aio_bh_cb") deletes emulated
aio bottom halves to prevent endless accumulation. However, it leaves a
stale ->bh pointer, which is then waited on when the aio is reused.
Zeroing the pointer fixes the issue, allowing vmdk format images to be used.
Signed-off-by: Avi Kivity <avi@redhat.com>
---
block.c | 2 ++
1 files changed, 2 insertions(+), 0 deletions(-)
diff --git a/block.c b/block.c
index aca5a6d..cefbe77 100644
--- a/block.c
+++ b/block.c
@@ -1374,6 +1374,7 @@ static void bdrv_aio_cancel_em(BlockDriverAIOCB *blockacb)
{
BlockDriverAIOCBSync *acb = (BlockDriverAIOCBSync *)blockacb;
qemu_bh_delete(acb->bh);
+ acb->bh = NULL;
qemu_aio_release(acb);
}
@@ -1391,6 +1392,7 @@ static void bdrv_aio_bh_cb(void *opaque)
qemu_vfree(acb->bounce);
acb->common.cb(acb->common.opaque, acb->ret);
qemu_bh_delete(acb->bh);
+ acb->bh = NULL;
qemu_aio_release(acb);
}
--
1.6.2.2
^ permalink raw reply related [flat|nested] 12+ messages in thread
* Re: [Qemu-devel] [PATCH] block: Clean up after deleting BHs
2009-06-23 13:20 [Qemu-devel] [PATCH] block: Clean up after deleting BHs Avi Kivity
@ 2009-06-23 13:37 ` Filip Navara
2009-06-23 16:50 ` Christoph Hellwig
2009-06-24 18:31 ` Stefan Weil
2 siblings, 0 replies; 12+ messages in thread
From: Filip Navara @ 2009-06-23 13:37 UTC (permalink / raw)
To: Avi Kivity; +Cc: qemu-devel
On Tue, Jun 23, 2009 at 3:20 PM, Avi Kivity<avi@redhat.com> wrote:
> Commit 6a7ad299 ("Call qemu_bh_delete at bdrv_aio_bh_cb") deletes emulated
> aio bottom halves to prevent endless accumulation. However, it leaves a
> stale ->bh pointer, which is then waited on when the aio is reused.
>
> Zeroing the pointer fixes the issue, allowing vmdk format images to be used.
>
> Signed-off-by: Avi Kivity <avi@redhat.com>
It also fixes qcow2 on Win32; at least "savevm" was broken there.
Acked-by: Filip Navara <filip.navara@gmail.com>
Best regards,
Filip Navara
* Re: [Qemu-devel] [PATCH] block: Clean up after deleting BHs
2009-06-23 13:20 [Qemu-devel] [PATCH] block: Clean up after deleting BHs Avi Kivity
2009-06-23 13:37 ` Filip Navara
@ 2009-06-23 16:50 ` Christoph Hellwig
2009-06-23 16:57 ` Avi Kivity
2009-06-24 18:31 ` Stefan Weil
2 siblings, 1 reply; 12+ messages in thread
From: Christoph Hellwig @ 2009-06-23 16:50 UTC (permalink / raw)
To: Avi Kivity; +Cc: qemu-devel
On Tue, Jun 23, 2009 at 04:20:36PM +0300, Avi Kivity wrote:
> Commit 6a7ad299 ("Call qemu_bh_delete at bdrv_aio_bh_cb") deletes emulated
> aio bottom halves to prevent endless accumulation. However, it leaves a
> stale ->bh pointer, which is then waited on when the aio is reused.
>
> Zeroing the pointer fixes the issue, allowing vmdk format images to be used.
What operations on vmdk images does this cause to fail? qemu-iotests
seems to do fine on vmdk so it's nothing yet exercised by it.
> Signed-off-by: Avi Kivity <avi@redhat.com>
> ---
> block.c | 2 ++
> 1 files changed, 2 insertions(+), 0 deletions(-)
>
> diff --git a/block.c b/block.c
> index aca5a6d..cefbe77 100644
> --- a/block.c
> +++ b/block.c
> @@ -1374,6 +1374,7 @@ static void bdrv_aio_cancel_em(BlockDriverAIOCB *blockacb)
> {
> BlockDriverAIOCBSync *acb = (BlockDriverAIOCBSync *)blockacb;
> qemu_bh_delete(acb->bh);
> + acb->bh = NULL;
> qemu_aio_release(acb);
> }
>
> @@ -1391,6 +1392,7 @@ static void bdrv_aio_bh_cb(void *opaque)
> qemu_vfree(acb->bounce);
> acb->common.cb(acb->common.opaque, acb->ret);
> qemu_bh_delete(acb->bh);
> + acb->bh = NULL;
> qemu_aio_release(acb);
> }
I think not having the state of the private acb area cleared over a
free/realloc cycle is pretty dangerous. Wouldn't it be better to always
clear that space in qemu_aio_get?
* Re: [Qemu-devel] [PATCH] block: Clean up after deleting BHs
2009-06-23 16:50 ` Christoph Hellwig
@ 2009-06-23 16:57 ` Avi Kivity
2009-06-23 18:08 ` Filip Navara
2009-06-23 18:26 ` Christoph Hellwig
0 siblings, 2 replies; 12+ messages in thread
From: Avi Kivity @ 2009-06-23 16:57 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: qemu-devel
On 06/23/2009 07:50 PM, Christoph Hellwig wrote:
> On Tue, Jun 23, 2009 at 04:20:36PM +0300, Avi Kivity wrote:
>
>> Commit 6a7ad299 ("Call qemu_bh_delete at bdrv_aio_bh_cb") deletes emulated
>> aio bottom halves to prevent endless accumulation. However, it leaves a
>> stale ->bh pointer, which is then waited on when the aio is reused.
>>
>> Zeroing the pointer fixes the issue, allowing vmdk format images to be used.
>>
>
> What operations on vmdk images does this cause to fail? qemu-iotests
> seems to do fine on vmdk so it's nothing yet exercised by it.
>
Just starting qemu with a vmdk image hangs. I think the very first read
triggers it.
>> --- a/block.c
>> +++ b/block.c
>> @@ -1374,6 +1374,7 @@ static void bdrv_aio_cancel_em(BlockDriverAIOCB *blockacb)
>> {
>> BlockDriverAIOCBSync *acb = (BlockDriverAIOCBSync *)blockacb;
>> qemu_bh_delete(acb->bh);
>> + acb->bh = NULL;
>> qemu_aio_release(acb);
>> }
>>
>> @@ -1391,6 +1392,7 @@ static void bdrv_aio_bh_cb(void *opaque)
>> qemu_vfree(acb->bounce);
>> acb->common.cb(acb->common.opaque, acb->ret);
>> qemu_bh_delete(acb->bh);
>> + acb->bh = NULL;
>> qemu_aio_release(acb);
>> }
>>
>
> I think not having the state of the private acb area cleared over a
> free/realloc cycle is pretty dangerous. Wouldn't it be better to always
> clear that space in qemu_aio_get?
>
Maybe, but that's a bigger change. Let's start with this (in stable-
too) and rework aio later.
--
error compiling committee.c: too many arguments to function
* Re: [Qemu-devel] [PATCH] block: Clean up after deleting BHs
2009-06-23 16:57 ` Avi Kivity
@ 2009-06-23 18:08 ` Filip Navara
2009-06-23 18:11 ` Avi Kivity
2009-06-23 18:26 ` Christoph Hellwig
1 sibling, 1 reply; 12+ messages in thread
From: Filip Navara @ 2009-06-23 18:08 UTC (permalink / raw)
To: Avi Kivity; +Cc: Christoph Hellwig, qemu-devel
On Tue, Jun 23, 2009 at 6:57 PM, Avi Kivity<avi@redhat.com> wrote:
> On 06/23/2009 07:50 PM, Christoph Hellwig wrote:
>>
>> On Tue, Jun 23, 2009 at 04:20:36PM +0300, Avi Kivity wrote:
>>
>>>
>>> Commit 6a7ad299 ("Call qemu_bh_delete at bdrv_aio_bh_cb") deletes
>>> emulated
>>> aio bottom halves to prevent endless accumulation. However, it leaves a
>>> stale ->bh pointer, which is then waited on when the aio is reused.
>>>
>>> Zeroing the pointer fixes the issue, allowing vmdk format images to be
>>> used.
>>>
>>
>> What operations on vmdk images does this cause to fail? qemu-iotests
>> seems to do fine on vmdk so it's nothing yet exercised by it.
>>
>
> Just starting qemu with a vmdk image hangs. I think the very first read
> triggers it.
Actually I think it's the second read ;-)
>
>>> --- a/block.c
>>> +++ b/block.c
>>> @@ -1374,6 +1374,7 @@ static void bdrv_aio_cancel_em(BlockDriverAIOCB
>>> *blockacb)
>>> {
>>> BlockDriverAIOCBSync *acb = (BlockDriverAIOCBSync *)blockacb;
>>> qemu_bh_delete(acb->bh);
>>> + acb->bh = NULL;
>>> qemu_aio_release(acb);
>>> }
>>>
>>> @@ -1391,6 +1392,7 @@ static void bdrv_aio_bh_cb(void *opaque)
>>> qemu_vfree(acb->bounce);
>>> acb->common.cb(acb->common.opaque, acb->ret);
>>> qemu_bh_delete(acb->bh);
>>> + acb->bh = NULL;
>>> qemu_aio_release(acb);
>>> }
>>>
>>
>> I think not having the state of the private acb area cleared over a
>> free/realloc cycle is pretty dangerous. Wouldn't it be better to always
>> clear that space in qemu_aio_get?
>>
>
> Maybe, but that's a bigger change. Let's start with this (in stable- too)
> and rework aio later.
>
Agreed, let's get this in, the win32 builds are seriously affected by
the bug due to the absence of AIO on the platform.
Best regards,
Filip Navara
* Re: [Qemu-devel] [PATCH] block: Clean up after deleting BHs
2009-06-23 18:08 ` Filip Navara
@ 2009-06-23 18:11 ` Avi Kivity
0 siblings, 0 replies; 12+ messages in thread
From: Avi Kivity @ 2009-06-23 18:11 UTC (permalink / raw)
To: Filip Navara; +Cc: Christoph Hellwig, qemu-devel
On 06/23/2009 09:08 PM, Filip Navara wrote:
>> Just starting qemu with a vmdk image hangs. I think the very first read
>> triggers it.
>>
>
> Actually I think it's the second read ;-)
>
>
Ugh, off-by-one error.
> Agreed, let's get this in, the win32 builds are seriously affected by
> the bug due to the absence of AIO on the platform.
>
>
Given that it has both threads and true AIO, this could easily be changed.
--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.
* Re: [Qemu-devel] [PATCH] block: Clean up after deleting BHs
2009-06-23 16:57 ` Avi Kivity
2009-06-23 18:08 ` Filip Navara
@ 2009-06-23 18:26 ` Christoph Hellwig
2009-06-23 18:31 ` Avi Kivity
2009-06-23 20:21 ` Filip Navara
1 sibling, 2 replies; 12+ messages in thread
From: Christoph Hellwig @ 2009-06-23 18:26 UTC (permalink / raw)
To: Avi Kivity; +Cc: Christoph Hellwig, qemu-devel
On Tue, Jun 23, 2009 at 07:57:01PM +0300, Avi Kivity wrote:
> >I think not having the state of the private acb area cleared over a
> >free/realloc cycle is pretty dangerous. Wouldn't it be better to always
> >clear that space in qemu_aio_get?
> >
>
> Maybe, but that's a bigger change. Let's start with this (in stable-
> too) and rework aio later.
It's actually smaller - half the size to be exact :)
Signed-off-by: Christoph Hellwig <hch@lst.de>
Index: qemu/block.c
===================================================================
--- qemu.orig/block.c 2009-06-23 20:16:48.283930485 +0200
+++ qemu/block.c 2009-06-23 20:16:57.158834441 +0200
@@ -1515,6 +1515,7 @@ void *qemu_aio_get(AIOPool *pool, BlockD
acb->bs = bs;
acb->cb = cb;
acb->opaque = opaque;
+ memset(acb + 1, 0, pool->aiocb_size - sizeof(BlockDriverAIOCB));
return acb;
}
* Re: [Qemu-devel] [PATCH] block: Clean up after deleting BHs
2009-06-23 18:26 ` Christoph Hellwig
@ 2009-06-23 18:31 ` Avi Kivity
2009-06-23 19:41 ` Christoph Hellwig
2009-06-23 20:21 ` Filip Navara
1 sibling, 1 reply; 12+ messages in thread
From: Avi Kivity @ 2009-06-23 18:31 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: qemu-devel
On 06/23/2009 09:26 PM, Christoph Hellwig wrote:
> On Tue, Jun 23, 2009 at 07:57:01PM +0300, Avi Kivity wrote:
>
>>> I think not having the state of the private acb area cleared over a
>>> free/realloc cycle is pretty dangerous. Wouldn't it be better to always
>>> clear that space in qemu_aio_get?
>>>
>>>
>> Maybe, but that's a bigger change. Let's start with this (in stable-
>> too) and rework aio later.
>>
>
> It's actually smaller - half the size to be exact :)
>
> Signed-off-by: Christoph Hellwig<hch@lst.de>
>
> Index: qemu/block.c
> ===================================================================
> --- qemu.orig/block.c 2009-06-23 20:16:48.283930485 +0200
> +++ qemu/block.c 2009-06-23 20:16:57.158834441 +0200
> @@ -1515,6 +1515,7 @@ void *qemu_aio_get(AIOPool *pool, BlockD
> acb->bs = bs;
> acb->cb = cb;
> acb->opaque = opaque;
> + memset(acb + 1, 0, pool->aiocb_size - sizeof(BlockDriverAIOCB));
> return acb;
> }
>
I meant in an omg we have to look at all the consequences way rather
than byte count. This patch is somewhat less local.
--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.
* Re: [Qemu-devel] [PATCH] block: Clean up after deleting BHs
2009-06-23 18:31 ` Avi Kivity
@ 2009-06-23 19:41 ` Christoph Hellwig
2009-06-23 19:50 ` Avi Kivity
0 siblings, 1 reply; 12+ messages in thread
From: Christoph Hellwig @ 2009-06-23 19:41 UTC (permalink / raw)
To: Avi Kivity; +Cc: Christoph Hellwig, qemu-devel
On Tue, Jun 23, 2009 at 09:31:55PM +0300, Avi Kivity wrote:
> I meant in an omg we have to look at all the consequences way rather
> than byte count. This patch is somewhat less local.
My patch makes sure a re-used acb is in the same state as a newly
allocated one. I think that's much more sensible than messing around
with some state during freeing some acbs.
* Re: [Qemu-devel] [PATCH] block: Clean up after deleting BHs
2009-06-23 19:41 ` Christoph Hellwig
@ 2009-06-23 19:50 ` Avi Kivity
0 siblings, 0 replies; 12+ messages in thread
From: Avi Kivity @ 2009-06-23 19:50 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: qemu-devel
On 06/23/2009 10:41 PM, Christoph Hellwig wrote:
> On Tue, Jun 23, 2009 at 09:31:55PM +0300, Avi Kivity wrote:
>
>> I meant in an omg we have to look at all the consequences way rather
>> than byte count. This patch is somewhat less local.
>>
>
> My patch makes sure a re-used acb is in the same state as a newly
> allocated one. I think that's much more sensible than messing around
> with some state during freeing some acbs.
>
>
What if it has pointers to allocated stuff it was planning to reuse?
You're leaking memory.
--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.
* Re: [Qemu-devel] [PATCH] block: Clean up after deleting BHs
2009-06-23 18:26 ` Christoph Hellwig
2009-06-23 18:31 ` Avi Kivity
@ 2009-06-23 20:21 ` Filip Navara
1 sibling, 0 replies; 12+ messages in thread
From: Filip Navara @ 2009-06-23 20:21 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: Avi Kivity, qemu-devel
On Tue, Jun 23, 2009 at 8:26 PM, Christoph Hellwig<hch@lst.de> wrote:
> On Tue, Jun 23, 2009 at 07:57:01PM +0300, Avi Kivity wrote:
>> >I think not having the state of the private acb area cleared over a
>> >free/realloc cycle is pretty dangerous. Wouldn't it be better to always
>> >clear that space in qemu_aio_get?
>> >
>>
>> Maybe, but that's a bigger change. Let's start with this (in stable-
>> too) and rework aio later.
>
> It's actually smaller - half the size to be exact :)
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
This one solves the issues on Win32 as well. I have no preference for
either patch as long as one of them is applied.
Best regards,
Filip Navara
* Re: [Qemu-devel] [PATCH] block: Clean up after deleting BHs
2009-06-23 13:20 [Qemu-devel] [PATCH] block: Clean up after deleting BHs Avi Kivity
2009-06-23 13:37 ` Filip Navara
2009-06-23 16:50 ` Christoph Hellwig
@ 2009-06-24 18:31 ` Stefan Weil
2 siblings, 0 replies; 12+ messages in thread
From: Stefan Weil @ 2009-06-24 18:31 UTC (permalink / raw)
To: Avi Kivity; +Cc: qemu-devel
Avi Kivity schrieb:
> Commit 6a7ad299 ("Call qemu_bh_delete at bdrv_aio_bh_cb") deletes emulated
> aio bottom halves to prevent endless accumulation. However, it leaves a
> stale ->bh pointer, which is then waited on when the aio is reused.
>
> Zeroing the pointer fixes the issue, allowing vmdk format images to be
> used.
>
> Signed-off-by: Avi Kivity <avi@redhat.com>
> ---
> block.c | 2 ++
> 1 files changed, 2 insertions(+), 0 deletions(-)
>
> diff --git a/block.c b/block.c
> index aca5a6d..cefbe77 100644
> --- a/block.c
> +++ b/block.c
> @@ -1374,6 +1374,7 @@ static void bdrv_aio_cancel_em(BlockDriverAIOCB
> *blockacb)
> {
> BlockDriverAIOCBSync *acb = (BlockDriverAIOCBSync *)blockacb;
> qemu_bh_delete(acb->bh);
> + acb->bh = NULL;
> qemu_aio_release(acb);
> }
>
> @@ -1391,6 +1392,7 @@ static void bdrv_aio_bh_cb(void *opaque)
> qemu_vfree(acb->bounce);
> acb->common.cb(acb->common.opaque, acb->ret);
> qemu_bh_delete(acb->bh);
> + acb->bh = NULL;
> qemu_aio_release(acb);
> }
>
Thanks, your patch fixes the DMA issues in Linux guests when
booting with a block device without AIO support (vmdk, vvfat,
vpc, nbd, ...):
without patch (many DMA errors in the guest, long boot time):
[ 5.374675] hda: QEMU HARDDISK, ATA DISK drive
[ 6.044267] hda: host max PIO4 wanted PIO255(auto-tune) selected PIO2
[ 6.045096] hda: MWDMA2 mode selected
[ 7.555726] hda: max request size: 512KiB
[ 7.556193] hda: 65536 sectors (33 MB) w/256KiB Cache, CHS=65/255/63
[ 7.569253] hda: cache flushes supported
[ 7.569909] hda:<6>ide-cd driver 5.00
[ 27.568145] hda: dma_timer_expiry: DMA status (0x21)
[ 37.568221] hda: DMA timeout error
[ 37.568838] hda: dma timeout error: status=0xd8 { Busy }
[ 37.572195] hda: DMA disabled
[ 57.624211] hda: dma_timer_expiry: DMA status (0x21)
[ 67.624221] hda: DMA timeout error
[ 67.624842] hda: dma timeout error: status=0xd8 { Busy }
[ 67.628192] hda: DMA disabled
[ 87.872210] hda: dma_timer_expiry: DMA status (0x21)
[ 97.872209] hda: DMA timeout error
[ 97.872826] hda: dma timeout error: status=0xd8 { Busy }
[ 97.876184] hda: DMA disabled
[ 117.928209] hda: dma_timer_expiry: DMA status (0x21)
[ 127.928149] hda: DMA timeout error
[ 127.928815] hda: dma timeout error: status=0xd8 { Busy }
[ 127.932120] hda: DMA disabled
with patch (no DMA errors in the guest, fast boot):
[ 5.313278] hda: QEMU HARDDISK, ATA DISK drive
[ 5.984435] hda: host max PIO4 wanted PIO255(auto-tune) selected PIO2
[ 5.985305] hda: MWDMA2 mode selected
[ 7.447866] hda: max request size: 512KiB
[ 7.448103] hda: 65536 sectors (33 MB) w/256KiB Cache, CHS=65/255/63
[ 7.461297] hda: cache flushes supported
[ 7.461948] hda: unknown partition table
I hope your patch will be applied to QEMU master soon.
Regards
Stefan
Acked-by: Stefan Weil <weil@mail.berlios.de>
end of thread, other threads:[~2009-06-24 18:33 UTC | newest]
Thread overview: 12+ messages
2009-06-23 13:20 [Qemu-devel] [PATCH] block: Clean up after deleting BHs Avi Kivity
2009-06-23 13:37 ` Filip Navara
2009-06-23 16:50 ` Christoph Hellwig
2009-06-23 16:57 ` Avi Kivity
2009-06-23 18:08 ` Filip Navara
2009-06-23 18:11 ` Avi Kivity
2009-06-23 18:26 ` Christoph Hellwig
2009-06-23 18:31 ` Avi Kivity
2009-06-23 19:41 ` Christoph Hellwig
2009-06-23 19:50 ` Avi Kivity
2009-06-23 20:21 ` Filip Navara
2009-06-24 18:31 ` Stefan Weil