* [Qemu-devel] [PATCH 2.5 v2 0/3] misc missed aio_context_acquire for bdrv_drain
@ 2015-11-04 17:27 Denis V. Lunev
  2015-11-04 17:27 ` [Qemu-devel] [PATCH 1/3] block: add missed aio_context_acquire around bdrv_set_aio_context Denis V. Lunev
                   ` (2 more replies)
  0 siblings, 3 replies; 13+ messages in thread
From: Denis V. Lunev @ 2015-11-04 17:27 UTC (permalink / raw)
  Cc: Denis V. Lunev, qemu-devel, Stefan Hajnoczi

These bits were present in the following submission:
 [PATCH QEMU 2.5 v4 0/10] dataplane snapshot fixes + aio_poll fixes
They are unrelated to migration and safe enough to reach QEMU 2.5,
so they are sent separately to make sure they are not missed.

The more dangerous bits will be sent separately once we come to an
agreement on the safe ones :)

Signed-off-by: Denis V. Lunev <den@openvz.org>
CC: Stefan Hajnoczi <stefanha@redhat.com>

^ permalink raw reply	[flat|nested] 13+ messages in thread

* [Qemu-devel] [PATCH 1/3] block: add missed aio_context_acquire around bdrv_set_aio_context
  2015-11-04 17:27 [Qemu-devel] [PATCH 2.5 v2 0/3] misc missed aio_context_acquire for bdrv_drain Denis V. Lunev
@ 2015-11-04 17:27 ` Denis V. Lunev
  2015-11-06 13:35   ` Stefan Hajnoczi
  2015-11-04 17:27 ` [Qemu-devel] [PATCH 2/3] blockdev: acquire AioContext in hmp_commit() Denis V. Lunev
  2015-11-04 17:27 ` [Qemu-devel] [PATCH 3/3] block: guard bdrv_drain in bdrv_close with aio_context_acquire Denis V. Lunev
  2 siblings, 1 reply; 13+ messages in thread
From: Denis V. Lunev @ 2015-11-04 17:27 UTC (permalink / raw)
  Cc: Denis V. Lunev, qemu-devel, Stefan Hajnoczi

It is required for bdrv_drain.

Signed-off-by: Denis V. Lunev <den@openvz.org>
CC: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/block-backend.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/block/block-backend.c b/block/block-backend.c
index 19fdaae..07fcfc7 100644
--- a/block/block-backend.c
+++ b/block/block-backend.c
@@ -1065,7 +1065,10 @@ static AioContext *blk_aiocb_get_aio_context(BlockAIOCB *acb)
 void blk_set_aio_context(BlockBackend *blk, AioContext *new_context)
 {
     if (blk->bs) {
+        AioContext *ctx = blk_get_aio_context(blk);
+        aio_context_acquire(ctx);
         bdrv_set_aio_context(blk->bs, new_context);
+        aio_context_release(ctx);
     }
 }
 
-- 
2.5.0

* [Qemu-devel] [PATCH 2/3] blockdev: acquire AioContext in hmp_commit()
  2015-11-04 17:27 [Qemu-devel] [PATCH 2.5 v2 0/3] misc missed aio_context_acquire for bdrv_drain Denis V. Lunev
  2015-11-04 17:27 ` [Qemu-devel] [PATCH 1/3] block: add missed aio_context_acquire around bdrv_set_aio_context Denis V. Lunev
@ 2015-11-04 17:27 ` Denis V. Lunev
  2015-11-06 15:41   ` Stefan Hajnoczi
  2015-11-04 17:27 ` [Qemu-devel] [PATCH 3/3] block: guard bdrv_drain in bdrv_close with aio_context_acquire Denis V. Lunev
  2 siblings, 1 reply; 13+ messages in thread
From: Denis V. Lunev @ 2015-11-04 17:27 UTC (permalink / raw)
  Cc: Denis V. Lunev, qemu-devel, Stefan Hajnoczi

From: Stefan Hajnoczi <stefanha@redhat.com>

This one slipped through.  Although we acquire the AioContext when
committing all devices, we don't when committing just a single device.

AioContext must be acquired before calling bdrv_*() functions to
synchronize access with other threads that may be using the AioContext.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Denis V. Lunev <den@openvz.org>
Reviewed-by: Jeff Cody <jcody@redhat.com>
---
 blockdev.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/blockdev.c b/blockdev.c
index 8b8bfa9..97be42f 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -1120,6 +1120,9 @@ void hmp_commit(Monitor *mon, const QDict *qdict)
     if (!strcmp(device, "all")) {
         ret = bdrv_commit_all();
     } else {
+        BlockDriverState *bs;
+        AioContext *aio_context;
+
         blk = blk_by_name(device);
         if (!blk) {
             monitor_printf(mon, "Device '%s' not found\n", device);
@@ -1129,7 +1132,14 @@ void hmp_commit(Monitor *mon, const QDict *qdict)
             monitor_printf(mon, "Device '%s' has no medium\n", device);
             return;
         }
-        ret = bdrv_commit(blk_bs(blk));
+
+        bs = blk_bs(blk);
+        aio_context = bdrv_get_aio_context(bs);
+        aio_context_acquire(aio_context);
+
+        ret = bdrv_commit(bs);
+
+        aio_context_release(aio_context);
     }
     if (ret < 0) {
         monitor_printf(mon, "'commit' error for '%s': %s\n", device,
-- 
2.5.0

* [Qemu-devel] [PATCH 3/3] block: guard bdrv_drain in bdrv_close with aio_context_acquire
  2015-11-04 17:27 [Qemu-devel] [PATCH 2.5 v2 0/3] misc missed aio_context_acquire for bdrv_drain Denis V. Lunev
  2015-11-04 17:27 ` [Qemu-devel] [PATCH 1/3] block: add missed aio_context_acquire around bdrv_set_aio_context Denis V. Lunev
  2015-11-04 17:27 ` [Qemu-devel] [PATCH 2/3] blockdev: acquire AioContext in hmp_commit() Denis V. Lunev
@ 2015-11-04 17:27 ` Denis V. Lunev
  2015-11-06 13:44   ` Stefan Hajnoczi
  2 siblings, 1 reply; 13+ messages in thread
From: Denis V. Lunev @ 2015-11-04 17:27 UTC (permalink / raw)
  Cc: Denis V. Lunev, qemu-devel, Stefan Hajnoczi

bdrv_close() is called from too many places to track them all properly at the moment.

Signed-off-by: Denis V. Lunev <den@openvz.org>
CC: Stefan Hajnoczi <stefanha@redhat.com>
---
 block.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/block.c b/block.c
index e9f40dc..98b0b66 100644
--- a/block.c
+++ b/block.c
@@ -1895,6 +1895,7 @@ void bdrv_reopen_abort(BDRVReopenState *reopen_state)
 void bdrv_close(BlockDriverState *bs)
 {
     BdrvAioNotifier *ban, *ban_next;
+    AioContext *ctx;
 
     if (bs->job) {
         block_job_cancel_sync(bs->job);
@@ -1905,9 +1906,13 @@ void bdrv_close(BlockDriverState *bs)
         bdrv_io_limits_disable(bs);
     }
 
+    ctx = bdrv_get_aio_context(bs);
+    aio_context_acquire(ctx);
     bdrv_drain(bs); /* complete I/O */
     bdrv_flush(bs);
     bdrv_drain(bs); /* in case flush left pending I/O */
+    aio_context_release(ctx);
+
     notifier_list_notify(&bs->close_notifiers, bs);
 
     if (bs->blk) {
-- 
2.5.0

* Re: [Qemu-devel] [PATCH 1/3] block: add missed aio_context_acquire around bdrv_set_aio_context
  2015-11-04 17:27 ` [Qemu-devel] [PATCH 1/3] block: add missed aio_context_acquire around bdrv_set_aio_context Denis V. Lunev
@ 2015-11-06 13:35   ` Stefan Hajnoczi
  2015-11-06 14:09     ` Denis V. Lunev
  0 siblings, 1 reply; 13+ messages in thread
From: Stefan Hajnoczi @ 2015-11-06 13:35 UTC (permalink / raw)
  To: Denis V. Lunev; +Cc: qemu-devel, Stefan Hajnoczi

On Wed, Nov 04, 2015 at 08:27:22PM +0300, Denis V. Lunev wrote:
> It is required for bdrv_drain.

What bug does this patch fix?

Existing blk_set_aio_context() callers acquire the AioContext or are
sure it's already acquired by their caller, so I don't see where the bug
is.

No function in block/block-backend.c acquires AioContext internally.
The same reasoning I mentioned in another email thread about consistent
locking strategy applies here.  Either all blk_*() functions should take
the AioContext internally (which I disagree with) or none of them should
take it.

> Signed-off-by: Denis V. Lunev <den@openvz.org>
> CC: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>  block/block-backend.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/block/block-backend.c b/block/block-backend.c
> index 19fdaae..07fcfc7 100644
> --- a/block/block-backend.c
> +++ b/block/block-backend.c
> @@ -1065,7 +1065,10 @@ static AioContext *blk_aiocb_get_aio_context(BlockAIOCB *acb)
>  void blk_set_aio_context(BlockBackend *blk, AioContext *new_context)
>  {
>      if (blk->bs) {
> +        AioContext *ctx = blk_get_aio_context(blk);
> +        aio_context_acquire(ctx);
>          bdrv_set_aio_context(blk->bs, new_context);
> +        aio_context_release(ctx);
>      }
>  }
>  
> -- 
> 2.5.0
> 
> 

* Re: [Qemu-devel] [PATCH 3/3] block: guard bdrv_drain in bdrv_close with aio_context_acquire
  2015-11-04 17:27 ` [Qemu-devel] [PATCH 3/3] block: guard bdrv_drain in bdrv_close with aio_context_acquire Denis V. Lunev
@ 2015-11-06 13:44   ` Stefan Hajnoczi
  0 siblings, 0 replies; 13+ messages in thread
From: Stefan Hajnoczi @ 2015-11-06 13:44 UTC (permalink / raw)
  To: Denis V. Lunev; +Cc: qemu-devel, Stefan Hajnoczi

On Wed, Nov 04, 2015 at 08:27:24PM +0300, Denis V. Lunev wrote:
> bdrv_close is called in tooooo much places to properly track at the moment.

bdrv_close() is called in 5 places.  Let's figure out what the callers
are doing wrong:

block.c:bdrv_open_inherit() - internal function, guaranteed to be safe
block.c:bdrv_close_all() - takes AioContext
block.c:bdrv_delete() - always called from main loop, after IOThreads
                        have stopped accessing BDS
blockdev.c:eject_device() - takes AioContext
blockdev.c:hmp_drive_del() - takes AioContext

I don't see a reason why adding this acquire/release to bdrv_close()
changes behavior or fixes a bug.

Can you explain why this is necessary?

> Signed-off-by: Denis V. Lunev <den@openvz.org>
> CC: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>  block.c | 5 +++++
>  1 file changed, 5 insertions(+)
> 
> diff --git a/block.c b/block.c
> index e9f40dc..98b0b66 100644
> --- a/block.c
> +++ b/block.c
> @@ -1895,6 +1895,7 @@ void bdrv_reopen_abort(BDRVReopenState *reopen_state)
>  void bdrv_close(BlockDriverState *bs)
>  {
>      BdrvAioNotifier *ban, *ban_next;
> +    AioContext *ctx;
>  
>      if (bs->job) {
>          block_job_cancel_sync(bs->job);
> @@ -1905,9 +1906,13 @@ void bdrv_close(BlockDriverState *bs)
>          bdrv_io_limits_disable(bs);
>      }
>  
> +    ctx = bdrv_get_aio_context(bs);
> +    aio_context_acquire(ctx);
>      bdrv_drain(bs); /* complete I/O */
>      bdrv_flush(bs);
>      bdrv_drain(bs); /* in case flush left pending I/O */
> +    aio_context_release(ctx);
> +
>      notifier_list_notify(&bs->close_notifiers, bs);
>  
>      if (bs->blk) {
> -- 
> 2.5.0
> 
> 

* Re: [Qemu-devel] [PATCH 1/3] block: add missed aio_context_acquire around bdrv_set_aio_context
  2015-11-06 13:35   ` Stefan Hajnoczi
@ 2015-11-06 14:09     ` Denis V. Lunev
  2015-11-06 14:19       ` Denis V. Lunev
  2015-11-06 14:44       ` Stefan Hajnoczi
  0 siblings, 2 replies; 13+ messages in thread
From: Denis V. Lunev @ 2015-11-06 14:09 UTC (permalink / raw)
  To: Stefan Hajnoczi; +Cc: qemu-devel, Stefan Hajnoczi

On 11/06/2015 04:35 PM, Stefan Hajnoczi wrote:
> On Wed, Nov 04, 2015 at 08:27:22PM +0300, Denis V. Lunev wrote:
>> It is required for bdrv_drain.
> What bug does this patch fix?
>
> Existing blk_set_aio_context() callers acquire the AioContext or are
> sure it's already acquired by their caller, so I don't see where the bug
> is.
>
> No function in block/block-backend.c acquires AioContext internally.
> The same reasoning I mentioned in another email thread about consistent
> locking strategy applies here.  Either all blk_*() functions should take
> the AioContext internally (which I disagree with) or none of them should
> take it.

#5  0x00005555559b31bf in bdrv_drain (bs=0x5555564274a0) at block/io.c:258
#6  0x0000555555963321 in bdrv_set_aio_context (bs=0x5555564274a0,
     new_context=0x55555640c100) at block.c:3764
#7  0x00005555559a8b67 in blk_set_aio_context (blk=0x555556425d20,
     new_context=0x55555640c100) at block/block-backend.c:1068
#8  0x00005555556a4c52 in virtio_scsi_hotplug (hotplug_dev=0x5555579f22b0,
     dev=0x555557bc8470, errp=0x7fffffffd9e0)
     at /home/den/src/qemu/hw/scsi/virtio-scsi.c:773
#9  0x00005555557eb3e2 in hotplug_handler_plug (plug_handler=0x5555579f22b0,
     plugged_dev=0x555557bc8470, errp=0x7fffffffd9e0) at hw/core/hotplug.c:22
#10 0x00005555557e7794 in device_set_realized (obj=0x555557bc8470, value=true,
     errp=0x7fffffffdb90) at hw/core/qdev.c:1066
#11 0x000055555595580b in property_set_bool (obj=0x555557bc8470,
     v=0x555557b51bc0, opaque=0x555557bc8740, name=0x555555a43841 "realized",
     errp=0x7fffffffdb90) at qom/object.c:1708
#12 0x0000555555953e2a in object_property_set (obj=0x555557bc8470,
     v=0x555557b51bc0, name=0x555555a43841 "realized", errp=0x7fffffffdb90)
     at qom/object.c:965
#13 0x00005555559566c7 in object_property_set_qobject (obj=0x555557bc8470,
     value=0x555557bc8370, name=0x555555a43841 "realized", errp=0x7fffffffdb90)
     at qom/qom-qobject.c:24
#14 0x00005555559540cd in object_property_set_bool (obj=0x555557bc8470,
     value=true, name=0x555555a43841 "realized", errp=0x7fffffffdb90)
     at qom/object.c:1034
#15 0x0000555555760e47 in qdev_device_add (opts=0x5555563f90f0,
     errp=0x7fffffffdc18) at qdev-monitor.c:589
#16 0x0000555555772501 in device_init_func (opaque=0x0, opts=0x5555563f90f0,
     errp=0x0) at vl.c:2305
#17 0x0000555555a198cb in qemu_opts_foreach (
     list=0x555555e96a60 <qemu_device_opts>,
     func=0x5555557724c3 <device_init_func>, opaque=0x0, errp=0x0)
     at util/qemu-option.c:1114
#18 0x0000555555777b20 in main (argc=83, argv=0x7fffffffe058,
     envp=0x7fffffffe2f8) at vl.c:4523

You want the lock to be taken by the caller. In that case it
would be quite natural to add assert(aio_context_is_owner(ctx))
inside the callee.

That assert will fail whenever the lock is not taken, even if the
iothread has not been started yet. With such an assert, QEMU will
crash here and in bdrv_close called through bdrv_delete. In those
cases the caller can re-acquire the lock. We could take the lock
in the main thread immediately once the iothread has stopped.

Den

* Re: [Qemu-devel] [PATCH 1/3] block: add missed aio_context_acquire around bdrv_set_aio_context
  2015-11-06 14:09     ` Denis V. Lunev
@ 2015-11-06 14:19       ` Denis V. Lunev
  2015-11-06 14:49         ` Stefan Hajnoczi
  2015-11-06 14:44       ` Stefan Hajnoczi
  1 sibling, 1 reply; 13+ messages in thread
From: Denis V. Lunev @ 2015-11-06 14:19 UTC (permalink / raw)
  To: Stefan Hajnoczi; +Cc: qemu-devel, Stefan Hajnoczi

On 11/06/2015 05:09 PM, Denis V. Lunev wrote:
> On 11/06/2015 04:35 PM, Stefan Hajnoczi wrote:
>> On Wed, Nov 04, 2015 at 08:27:22PM +0300, Denis V. Lunev wrote:
>>> It is required for bdrv_drain.
>> What bug does this patch fix?
>>
>> Existing blk_set_aio_context() callers acquire the AioContext or are
>> sure it's already acquired by their caller, so I don't see where the bug
>> is.
>>
>> No function in block/block-backend.c acquires AioContext internally.
>> The same reasoning I mentioned in another email thread about consistent
>> locking strategy applies here.  Either all blk_*() functions should take
>> the AioContext internally (which I disagree with) or none of them should
>> take it.
>
> #5  0x00005555559b31bf in bdrv_drain (bs=0x5555564274a0) at block/io.c:258
> #6  0x0000555555963321 in bdrv_set_aio_context (bs=0x5555564274a0,
>     new_context=0x55555640c100) at block.c:3764
> #7  0x00005555559a8b67 in blk_set_aio_context (blk=0x555556425d20,
>     new_context=0x55555640c100) at block/block-backend.c:1068
> #8  0x00005555556a4c52 in virtio_scsi_hotplug (hotplug_dev=0x5555579f22b0,
>     dev=0x555557bc8470, errp=0x7fffffffd9e0)
>     at /home/den/src/qemu/hw/scsi/virtio-scsi.c:773
> #9  0x00005555557eb3e2 in hotplug_handler_plug (plug_handler=0x5555579f22b0,
>     plugged_dev=0x555557bc8470, errp=0x7fffffffd9e0) at hw/core/hotplug.c:22
> #10 0x00005555557e7794 in device_set_realized (obj=0x555557bc8470, value=true,
>     errp=0x7fffffffdb90) at hw/core/qdev.c:1066
> #11 0x000055555595580b in property_set_bool (obj=0x555557bc8470,
>     v=0x555557b51bc0, opaque=0x555557bc8740, name=0x555555a43841 "realized",
>     errp=0x7fffffffdb90) at qom/object.c:1708
> #12 0x0000555555953e2a in object_property_set (obj=0x555557bc8470,
>     v=0x555557b51bc0, name=0x555555a43841 "realized", errp=0x7fffffffdb90)
>     at qom/object.c:965
> #13 0x00005555559566c7 in object_property_set_qobject (obj=0x555557bc8470,
>     value=0x555557bc8370, name=0x555555a43841 "realized", errp=0x7fffffffdb90)
>     at qom/qom-qobject.c:24
> #14 0x00005555559540cd in object_property_set_bool (obj=0x555557bc8470,
>     value=true, name=0x555555a43841 "realized", errp=0x7fffffffdb90)
>     at qom/object.c:1034
> #15 0x0000555555760e47 in qdev_device_add (opts=0x5555563f90f0,
>     errp=0x7fffffffdc18) at qdev-monitor.c:589
> #16 0x0000555555772501 in device_init_func (opaque=0x0, opts=0x5555563f90f0,
>     errp=0x0) at vl.c:2305
> #17 0x0000555555a198cb in qemu_opts_foreach (
>     list=0x555555e96a60 <qemu_device_opts>,
>     func=0x5555557724c3 <device_init_func>, opaque=0x0, errp=0x0)
>     at util/qemu-option.c:1114
> #18 0x0000555555777b20 in main (argc=83, argv=0x7fffffffe058,
>     envp=0x7fffffffe2f8) at vl.c:4523
>
> You want the lock to be taken by the function. It would be
> quite natural to add assert(aio_context_is_owner(ctx)) in that
> case.
>
> This assert will fail if the lock is not taken even if the thread
> is not started yet. With assert that lock is taken qemu
> will crash here and in bdrv_close called through
> bdrv_delete. In that case caller can re-acquire the lock.
> We can take the lock in the main thread immediately when
> iothread was stopped.
>
> Den
it seems that virtio-blk and virtio-scsi take locks in
the wrong order. Something like this should be applied.

There is something similar in the virtio-blk code:

     blk_set_aio_context(s->conf->conf.blk, s->ctx);

     /* Kick right away to begin processing requests already in vring */
     event_notifier_set(virtio_queue_get_host_notifier(vq));

     /* Get this show started by hooking up our callbacks */
     aio_context_acquire(s->ctx);
     aio_set_event_notifier(s->ctx, &s->host_notifier, true,
                            handle_notify);
     aio_context_release(s->ctx);
     return;

Den

* Re: [Qemu-devel] [PATCH 1/3] block: add missed aio_context_acquire around bdrv_set_aio_context
  2015-11-06 14:09     ` Denis V. Lunev
  2015-11-06 14:19       ` Denis V. Lunev
@ 2015-11-06 14:44       ` Stefan Hajnoczi
  2015-11-06 14:51         ` Denis V. Lunev
  1 sibling, 1 reply; 13+ messages in thread
From: Stefan Hajnoczi @ 2015-11-06 14:44 UTC (permalink / raw)
  To: Denis V. Lunev; +Cc: Fam Zheng, qemu-devel, Stefan Hajnoczi

On Fri, Nov 6, 2015 at 2:09 PM, Denis V. Lunev <den@openvz.org> wrote:
> On 11/06/2015 04:35 PM, Stefan Hajnoczi wrote:
>>
>> On Wed, Nov 04, 2015 at 08:27:22PM +0300, Denis V. Lunev wrote:
>>>
>>> It is required for bdrv_drain.
>>
>> What bug does this patch fix?
>>
>> Existing blk_set_aio_context() callers acquire the AioContext or are
>> sure it's already acquired by their caller, so I don't see where the bug
>> is.
>>
>> No function in block/block-backend.c acquires AioContext internally.
>> The same reasoning I mentioned in another email thread about consistent
>> locking strategy applies here.  Either all blk_*() functions should take
>> the AioContext internally (which I disagree with) or none of them should
>> take it.
>
>
> #5  0x00005555559b31bf in bdrv_drain (bs=0x5555564274a0) at block/io.c:258
> #6  0x0000555555963321 in bdrv_set_aio_context (bs=0x5555564274a0,
>     new_context=0x55555640c100) at block.c:3764
> #7  0x00005555559a8b67 in blk_set_aio_context (blk=0x555556425d20,
>     new_context=0x55555640c100) at block/block-backend.c:1068
> #8  0x00005555556a4c52 in virtio_scsi_hotplug (hotplug_dev=0x5555579f22b0,
>     dev=0x555557bc8470, errp=0x7fffffffd9e0)
>     at /home/den/src/qemu/hw/scsi/virtio-scsi.c:773
> #9  0x00005555557eb3e2 in hotplug_handler_plug (plug_handler=0x5555579f22b0,
>     plugged_dev=0x555557bc8470, errp=0x7fffffffd9e0) at hw/core/hotplug.c:22
> #10 0x00005555557e7794 in device_set_realized (obj=0x555557bc8470, value=true,
>     errp=0x7fffffffdb90) at hw/core/qdev.c:1066
> #11 0x000055555595580b in property_set_bool (obj=0x555557bc8470,
>     v=0x555557b51bc0, opaque=0x555557bc8740, name=0x555555a43841 "realized",
>     errp=0x7fffffffdb90) at qom/object.c:1708
> #12 0x0000555555953e2a in object_property_set (obj=0x555557bc8470,
>     v=0x555557b51bc0, name=0x555555a43841 "realized", errp=0x7fffffffdb90)
>     at qom/object.c:965
> #13 0x00005555559566c7 in object_property_set_qobject (obj=0x555557bc8470,
>     value=0x555557bc8370, name=0x555555a43841 "realized", errp=0x7fffffffdb90)
>     at qom/qom-qobject.c:24
> #14 0x00005555559540cd in object_property_set_bool (obj=0x555557bc8470,
>     value=true, name=0x555555a43841 "realized", errp=0x7fffffffdb90)
>     at qom/object.c:1034
> #15 0x0000555555760e47 in qdev_device_add (opts=0x5555563f90f0,
>     errp=0x7fffffffdc18) at qdev-monitor.c:589
> #16 0x0000555555772501 in device_init_func (opaque=0x0, opts=0x5555563f90f0,
>     errp=0x0) at vl.c:2305
> #17 0x0000555555a198cb in qemu_opts_foreach (
>     list=0x555555e96a60 <qemu_device_opts>,
>     func=0x5555557724c3 <device_init_func>, opaque=0x0, errp=0x0)
>     at util/qemu-option.c:1114
> #18 0x0000555555777b20 in main (argc=83, argv=0x7fffffffe058,
>     envp=0x7fffffffe2f8) at vl.c:4523
>
> You want the lock to be taken by the function. It would be
> quite natural to add assert(aio_context_is_owner(ctx)) in that
> case.

This stack trace is fine since hw/scsi/virtio-scsi.c:virtio_scsi_hotplug():
  aio_context_acquire(s->ctx);
  blk_set_aio_context(sd->conf.blk, s->ctx);
  aio_context_release(s->ctx);

What bug does this patch fix?  I still don't see the problem.

* Re: [Qemu-devel] [PATCH 1/3] block: add missed aio_context_acquire around bdrv_set_aio_context
  2015-11-06 14:19       ` Denis V. Lunev
@ 2015-11-06 14:49         ` Stefan Hajnoczi
  0 siblings, 0 replies; 13+ messages in thread
From: Stefan Hajnoczi @ 2015-11-06 14:49 UTC (permalink / raw)
  To: Denis V. Lunev; +Cc: qemu-devel, Stefan Hajnoczi

On Fri, Nov 6, 2015 at 2:19 PM, Denis V. Lunev <den@openvz.org> wrote:
> On 11/06/2015 05:09 PM, Denis V. Lunev wrote:
>>
>> On 11/06/2015 04:35 PM, Stefan Hajnoczi wrote:
>>>
>>> On Wed, Nov 04, 2015 at 08:27:22PM +0300, Denis V. Lunev wrote:
>>>>
>>>> It is required for bdrv_drain.
>>>
>>> What bug does this patch fix?
>>>
>>> Existing blk_set_aio_context() callers acquire the AioContext or are
>>> sure it's already acquired by their caller, so I don't see where the bug
>>> is.
>>>
>>> No function in block/block-backend.c acquires AioContext internally.
>>> The same reasoning I mentioned in another email thread about consistent
>>> locking strategy applies here.  Either all blk_*() functions should take
>>> the AioContext internally (which I disagree with) or none of them should
>>> take it.
>>
>>
>> #5  0x00005555559b31bf in bdrv_drain (bs=0x5555564274a0) at block/io.c:258
>> #6  0x0000555555963321 in bdrv_set_aio_context (bs=0x5555564274a0,
>>     new_context=0x55555640c100) at block.c:3764
>> #7  0x00005555559a8b67 in blk_set_aio_context (blk=0x555556425d20,
>>     new_context=0x55555640c100) at block/block-backend.c:1068
>> #8  0x00005555556a4c52 in virtio_scsi_hotplug (hotplug_dev=0x5555579f22b0,
>>     dev=0x555557bc8470, errp=0x7fffffffd9e0)
>>     at /home/den/src/qemu/hw/scsi/virtio-scsi.c:773
>> #9  0x00005555557eb3e2 in hotplug_handler_plug (plug_handler=0x5555579f22b0,
>>     plugged_dev=0x555557bc8470, errp=0x7fffffffd9e0) at hw/core/hotplug.c:22
>> #10 0x00005555557e7794 in device_set_realized (obj=0x555557bc8470, value=true,
>>     errp=0x7fffffffdb90) at hw/core/qdev.c:1066
>> #11 0x000055555595580b in property_set_bool (obj=0x555557bc8470,
>>     v=0x555557b51bc0, opaque=0x555557bc8740, name=0x555555a43841 "realized",
>>     errp=0x7fffffffdb90) at qom/object.c:1708
>> #12 0x0000555555953e2a in object_property_set (obj=0x555557bc8470,
>>     v=0x555557b51bc0, name=0x555555a43841 "realized", errp=0x7fffffffdb90)
>>     at qom/object.c:965
>> #13 0x00005555559566c7 in object_property_set_qobject (obj=0x555557bc8470,
>>     value=0x555557bc8370, name=0x555555a43841 "realized", errp=0x7fffffffdb90)
>>     at qom/qom-qobject.c:24
>> #14 0x00005555559540cd in object_property_set_bool (obj=0x555557bc8470,
>>     value=true, name=0x555555a43841 "realized", errp=0x7fffffffdb90)
>>     at qom/object.c:1034
>> #15 0x0000555555760e47 in qdev_device_add (opts=0x5555563f90f0,
>>     errp=0x7fffffffdc18) at qdev-monitor.c:589
>> #16 0x0000555555772501 in device_init_func (opaque=0x0, opts=0x5555563f90f0,
>>     errp=0x0) at vl.c:2305
>> #17 0x0000555555a198cb in qemu_opts_foreach (
>>     list=0x555555e96a60 <qemu_device_opts>,
>>     func=0x5555557724c3 <device_init_func>, opaque=0x0, errp=0x0)
>>     at util/qemu-option.c:1114
>> #18 0x0000555555777b20 in main (argc=83, argv=0x7fffffffe058,
>>     envp=0x7fffffffe2f8) at vl.c:4523
>>
>> You want the lock to be taken by the function. It would be
>> quite natural to add assert(aio_context_is_owner(ctx)) in that
>> case.
>>
>> This assert will fail if the lock is not taken even if the thread
>> is not started yet. With assert that lock is taken qemu
>> will crash here and in bdrv_close called through
>> bdrv_delete. In that case caller can re-acquire the lock.
>> We can take the lock in the main thread immediately when
>> iothread was stopped.
>>
>> Den
>
> it seems that virt_block and virt-scsi take locks in
> a wrong order.. Something like this should be
> applied.
>
> There is something similar in virt-block code.

/* Context: QEMU global mutex held */
void virtio_blk_data_plane_start(VirtIOBlockDataPlane *s)
{

>     blk_set_aio_context(s->conf->conf.blk, s->ctx);

The requirement is that the blk_set_aio_context() caller holds the
current AioContext for blk.  In this case the function is documented
as holding the global mutex and we're moving the BDS from the main
loop AioContext to an IOThread AioContext.  We currently hold the
global mutex so it's safe to call blk_set_aio_context().

This seems ok to me.

* Re: [Qemu-devel] [PATCH 1/3] block: add missed aio_context_acquire around bdrv_set_aio_context
  2015-11-06 14:44       ` Stefan Hajnoczi
@ 2015-11-06 14:51         ` Denis V. Lunev
  2015-11-06 14:55           ` Stefan Hajnoczi
  0 siblings, 1 reply; 13+ messages in thread
From: Denis V. Lunev @ 2015-11-06 14:51 UTC (permalink / raw)
  To: Stefan Hajnoczi; +Cc: Fam Zheng, qemu-devel, Stefan Hajnoczi

On 11/06/2015 05:44 PM, Stefan Hajnoczi wrote:
> On Fri, Nov 6, 2015 at 2:09 PM, Denis V. Lunev <den@openvz.org> wrote:
>> On 11/06/2015 04:35 PM, Stefan Hajnoczi wrote:
>>> On Wed, Nov 04, 2015 at 08:27:22PM +0300, Denis V. Lunev wrote:
>>>> It is required for bdrv_drain.
>>> What bug does this patch fix?
>>>
>>> Existing blk_set_aio_context() callers acquire the AioContext or are
>>> sure it's already acquired by their caller, so I don't see where the bug
>>> is.
>>>
>>> No function in block/block-backend.c acquires AioContext internally.
>>> The same reasoning I mentioned in another email thread about consistent
>>> locking strategy applies here.  Either all blk_*() functions should take
>>> the AioContext internally (which I disagree with) or none of them should
>>> take it.
>>
>> #5  0x00005555559b31bf in bdrv_drain (bs=0x5555564274a0) at block/io.c:258
>> #6  0x0000555555963321 in bdrv_set_aio_context (bs=0x5555564274a0,
>>     new_context=0x55555640c100) at block.c:3764
>> #7  0x00005555559a8b67 in blk_set_aio_context (blk=0x555556425d20,
>>     new_context=0x55555640c100) at block/block-backend.c:1068
>> #8  0x00005555556a4c52 in virtio_scsi_hotplug (hotplug_dev=0x5555579f22b0,
>>     dev=0x555557bc8470, errp=0x7fffffffd9e0)
>>     at /home/den/src/qemu/hw/scsi/virtio-scsi.c:773
>> #9  0x00005555557eb3e2 in hotplug_handler_plug (plug_handler=0x5555579f22b0,
>>     plugged_dev=0x555557bc8470, errp=0x7fffffffd9e0) at hw/core/hotplug.c:22
>> #10 0x00005555557e7794 in device_set_realized (obj=0x555557bc8470, value=true,
>>     errp=0x7fffffffdb90) at hw/core/qdev.c:1066
>> #11 0x000055555595580b in property_set_bool (obj=0x555557bc8470,
>>     v=0x555557b51bc0, opaque=0x555557bc8740, name=0x555555a43841 "realized",
>>     errp=0x7fffffffdb90) at qom/object.c:1708
>> #12 0x0000555555953e2a in object_property_set (obj=0x555557bc8470,
>>     v=0x555557b51bc0, name=0x555555a43841 "realized", errp=0x7fffffffdb90)
>>     at qom/object.c:965
>> #13 0x00005555559566c7 in object_property_set_qobject (obj=0x555557bc8470,
>>     value=0x555557bc8370, name=0x555555a43841 "realized", errp=0x7fffffffdb90)
>>     at qom/qom-qobject.c:24
>> #14 0x00005555559540cd in object_property_set_bool (obj=0x555557bc8470,
>>     value=true, name=0x555555a43841 "realized", errp=0x7fffffffdb90)
>>     at qom/object.c:1034
>> #15 0x0000555555760e47 in qdev_device_add (opts=0x5555563f90f0,
>>     errp=0x7fffffffdc18) at qdev-monitor.c:589
>> #16 0x0000555555772501 in device_init_func (opaque=0x0, opts=0x5555563f90f0,
>>     errp=0x0) at vl.c:2305
>> #17 0x0000555555a198cb in qemu_opts_foreach (
>>     list=0x555555e96a60 <qemu_device_opts>,
>>     func=0x5555557724c3 <device_init_func>, opaque=0x0, errp=0x0)
>>     at util/qemu-option.c:1114
>> #18 0x0000555555777b20 in main (argc=83, argv=0x7fffffffe058,
>>     envp=0x7fffffffe2f8) at vl.c:4523
>>
>> You want the lock to be taken by the function. It would be
>> quite natural to add assert(aio_context_is_owner(ctx)) in that
>> case.
> This stack trace is fine since hw/scsi/virtio-scsi.c:virtio_scsi_hotplug():
>    aio_context_acquire(s->ctx);
>    blk_set_aio_context(sd->conf.blk, s->ctx);
>    aio_context_release(s->ctx);
>
> What bug does this patch fix?  I still don't see the problem.

Program received signal SIGABRT, Aborted.
0x00007ffff3e8c267 in __GI_raise (sig=sig@entry=6)
     at ../sysdeps/unix/sysv/linux/raise.c:55
55    ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) bt
#0  0x00007ffff3e8c267 in __GI_raise (sig=sig@entry=6)
     at ../sysdeps/unix/sysv/linux/raise.c:55
#1  0x00007ffff3e8deca in __GI_abort () at abort.c:89
#2  0x00007ffff3e8503d in __assert_fail_base (
     fmt=0x7ffff3fe7028 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n",
     assertion=assertion@entry=0x555555aab968 "aio_context_is_owner(ctx)",
     file=file@entry=0x555555aab94a "aio-posix.c", line=line@entry=244,
     function=function@entry=0x555555aab9b0 <__PRETTY_FUNCTION__.21740> 
"aio_poll") at assert.c:92
#3  0x00007ffff3e850f2 in __GI___assert_fail (
     assertion=0x555555aab968 "aio_context_is_owner(ctx)",
     file=0x555555aab94a "aio-posix.c", line=244,
     function=0x555555aab9b0 <__PRETTY_FUNCTION__.21740> "aio_poll")
     at assert.c:101
#4  0x0000555555966a53 in aio_poll (ctx=0x5555563fc470, blocking=false)
     at aio-posix.c:244
#5  0x00005555559b31bf in bdrv_drain (bs=0x5555564274a0) at block/io.c:258
#6  0x0000555555963321 in bdrv_set_aio_context (bs=0x5555564274a0,
     new_context=0x55555640c100) at block.c:3764
#7  0x00005555559a8b67 in blk_set_aio_context (blk=0x555556425d20,
     new_context=0x55555640c100) at block/block-backend.c:1068
#8  0x00005555556a4c52 in virtio_scsi_hotplug (hotplug_dev=0x5555579f22b0,
     dev=0x555557bc8470, errp=0x7fffffffd9e0)
     at /home/den/src/qemu/hw/scsi/virtio-scsi.c:773
#9  0x00005555557eb3e2 in hotplug_handler_plug (plug_handler=0x5555579f22b0,
     plugged_dev=0x555557bc8470, errp=0x7fffffffd9e0) at hw/core/hotplug.c:22
#10 0x00005555557e7794 in device_set_realized (obj=0x555557bc8470, value=true,
     errp=0x7fffffffdb90) at hw/core/qdev.c:1066
#11 0x000055555595580b in property_set_bool (obj=0x555557bc8470,
     v=0x555557b51bc0, opaque=0x555557bc8740, name=0x555555a43841 "realized",
     errp=0x7fffffffdb90) at qom/object.c:1708
#12 0x0000555555953e2a in object_property_set (obj=0x555557bc8470,
     v=0x555557b51bc0, name=0x555555a43841 "realized", errp=0x7fffffffdb90)
     at qom/object.c:965
#13 0x00005555559566c7 in object_property_set_qobject (obj=0x555557bc8470,
     value=0x555557bc8370, name=0x555555a43841 "realized", errp=0x7fffffffdb90)
     at qom/qom-qobject.c:24
#14 0x00005555559540cd in object_property_set_bool (obj=0x555557bc8470,
     value=true, name=0x555555a43841 "realized", errp=0x7fffffffdb90)
     at qom/object.c:1034
#15 0x0000555555760e47 in qdev_device_add (opts=0x5555563f90f0,
     errp=0x7fffffffdc18) at qdev-monitor.c:589
#16 0x0000555555772501 in device_init_func (opaque=0x0, opts=0x5555563f90f0,
     errp=0x0) at vl.c:2305
#17 0x0000555555a198cb in qemu_opts_foreach (
     list=0x555555e96a60 <qemu_device_opts>,
     func=0x5555557724c3 <device_init_func>, opaque=0x0, errp=0x0)
     at util/qemu-option.c:1114
#18 0x0000555555777b20 in main (argc=83, argv=0x7fffffffe058,
     envp=0x7fffffffe2f8) at vl.c:4523
(gdb) frame 8
#8  0x00005555556a4c52 in virtio_scsi_hotplug (hotplug_dev=0x5555579f22b0,
     dev=0x555557bc8470, errp=0x7fffffffd9e0)
     at /home/den/src/qemu/hw/scsi/virtio-scsi.c:773
warning: Source file is more recent than executable.
773            blk_set_aio_context(sd->conf.blk, s->ctx);
(gdb) list
768            if (blk_op_is_blocked(sd->conf.blk, BLOCK_OP_TYPE_DATAPLANE, errp)) {
769                return;
770            }
771            blk_op_block_all(sd->conf.blk, s->blocker);
772            aio_context_acquire(s->ctx);
773            blk_set_aio_context(sd->conf.blk, s->ctx);
774            aio_context_release(s->ctx);
775        }
776
777        if (virtio_vdev_has_feature(vdev, VIRTIO_SCSI_F_HOTPLUG)) {
(gdb) p s->ctx
$1 = (AioContext *) 0x55555640c100
(gdb) p blk_get_aio_context(sd->conf.blk)
[Thread 0x7fffecc10700 (LWP 480) exited]
$2 = (AioContext *) 0x5555563fc470
(gdb)


* Re: [Qemu-devel] [PATCH 1/3] block: add missed aio_context_acquire around bdrv_set_aio_context
  2015-11-06 14:51         ` Denis V. Lunev
@ 2015-11-06 14:55           ` Stefan Hajnoczi
  0 siblings, 0 replies; 13+ messages in thread
From: Stefan Hajnoczi @ 2015-11-06 14:55 UTC (permalink / raw)
  To: Denis V. Lunev; +Cc: Fam Zheng, qemu-devel, Stefan Hajnoczi

On Fri, Nov 6, 2015 at 2:51 PM, Denis V. Lunev <den@openvz.org> wrote:
> On 11/06/2015 05:44 PM, Stefan Hajnoczi wrote:
>>
>> On Fri, Nov 6, 2015 at 2:09 PM, Denis V. Lunev <den@openvz.org> wrote:
>>>
>>> On 11/06/2015 04:35 PM, Stefan Hajnoczi wrote:
>>>>
>>>> On Wed, Nov 04, 2015 at 08:27:22PM +0300, Denis V. Lunev wrote:
>>>>>
>>>>> It is required for bdrv_drain.
>>>>
>>>> What bug does this patch fix?
>>>>
>>>> Existing blk_set_aio_context() callers acquire the AioContext or are
>>>> sure it's already acquired by their caller, so I don't see where the bug
>>>> is.
>>>>
>>>> No function in block/block-backend.c acquires AioContext internally.
>>>> The same reasoning I mentioned in another email thread about consistent
>>>> locking strategy applies here.  Either all blk_*() functions should take
>>>> the AioContext internally (which I disagree with) or none of them should
>>>> take it.
>>>
>>>
>>> #5  0x00005555559b31bf in bdrv_drain (bs=0x5555564274a0) at block/io.c:258
>>> #6  0x0000555555963321 in bdrv_set_aio_context (bs=0x5555564274a0,
>>>      new_context=0x55555640c100) at block.c:3764
>>> #7  0x00005555559a8b67 in blk_set_aio_context (blk=0x555556425d20,
>>>      new_context=0x55555640c100) at block/block-backend.c:1068
>>> #8  0x00005555556a4c52 in virtio_scsi_hotplug (hotplug_dev=0x5555579f22b0,
>>>      dev=0x555557bc8470, errp=0x7fffffffd9e0)
>>>      at /home/den/src/qemu/hw/scsi/virtio-scsi.c:773
>>> #9  0x00005555557eb3e2 in hotplug_handler_plug (plug_handler=0x5555579f22b0,
>>>      plugged_dev=0x555557bc8470, errp=0x7fffffffd9e0) at hw/core/hotplug.c:22
>>> #10 0x00005555557e7794 in device_set_realized (obj=0x555557bc8470, value=true,
>>>      errp=0x7fffffffdb90) at hw/core/qdev.c:1066
>>> #11 0x000055555595580b in property_set_bool (obj=0x555557bc8470,
>>>      v=0x555557b51bc0, opaque=0x555557bc8740, name=0x555555a43841 "realized",
>>>      errp=0x7fffffffdb90) at qom/object.c:1708
>>> #12 0x0000555555953e2a in object_property_set (obj=0x555557bc8470,
>>>      v=0x555557b51bc0, name=0x555555a43841 "realized", errp=0x7fffffffdb90)
>>>      at qom/object.c:965
>>> #13 0x00005555559566c7 in object_property_set_qobject (obj=0x555557bc8470,
>>>      value=0x555557bc8370, name=0x555555a43841 "realized", errp=0x7fffffffdb90)
>>>      at qom/qom-qobject.c:24
>>> #14 0x00005555559540cd in object_property_set_bool (obj=0x555557bc8470,
>>>      value=true, name=0x555555a43841 "realized", errp=0x7fffffffdb90)
>>>      at qom/object.c:1034
>>> #15 0x0000555555760e47 in qdev_device_add (opts=0x5555563f90f0,
>>>      errp=0x7fffffffdc18) at qdev-monitor.c:589
>>> #16 0x0000555555772501 in device_init_func (opaque=0x0, opts=0x5555563f90f0,
>>>      errp=0x0) at vl.c:2305
>>> #17 0x0000555555a198cb in qemu_opts_foreach (
>>>      list=0x555555e96a60 <qemu_device_opts>,
>>>      func=0x5555557724c3 <device_init_func>, opaque=0x0, errp=0x0)
>>>      at util/qemu-option.c:1114
>>> #18 0x0000555555777b20 in main (argc=83, argv=0x7fffffffe058,
>>>      envp=0x7fffffffe2f8) at vl.c:4523
>>>
>>> You want the lock to be taken by the function. It would be
>>> quite natural to add assert(aio_context_is_owner(ctx)) in that
>>> case.
>>
>> This stack trace is fine since
>> hw/scsi/virtio-scsi.c:virtio_scsi_hotplug():
>>    aio_context_acquire(s->ctx);
>>    blk_set_aio_context(sd->conf.blk, s->ctx);
>>    aio_context_release(s->ctx);
>>
>> What bug does this patch fix?  I still don't see the problem.
>
>
> Program received signal SIGABRT, Aborted.
> 0x00007ffff3e8c267 in __GI_raise (sig=sig@entry=6)
>     at ../sysdeps/unix/sysv/linux/raise.c:55
> 55    ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
> (gdb) bt
> #0  0x00007ffff3e8c267 in __GI_raise (sig=sig@entry=6)
>     at ../sysdeps/unix/sysv/linux/raise.c:55
> #1  0x00007ffff3e8deca in __GI_abort () at abort.c:89
> #2  0x00007ffff3e8503d in __assert_fail_base (
>     fmt=0x7ffff3fe7028 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n",
>     assertion=assertion@entry=0x555555aab968 "aio_context_is_owner(ctx)",
>     file=file@entry=0x555555aab94a "aio-posix.c", line=line@entry=244,
>     function=function@entry=0x555555aab9b0 <__PRETTY_FUNCTION__.21740>
>     "aio_poll") at assert.c:92
> #3  0x00007ffff3e850f2 in __GI___assert_fail (
>     assertion=0x555555aab968 "aio_context_is_owner(ctx)",
>     file=0x555555aab94a "aio-posix.c", line=244,
>     function=0x555555aab9b0 <__PRETTY_FUNCTION__.21740> "aio_poll")
>     at assert.c:101

IIRC the main loop does not acquire/release its AioContext.  It relies
on qemu_global_mutex, which means that the AioContext rfifolock is not
used for the main loop AioContext.

The assertion is failing but the code is safe.

If you want to change this you need to figure out the
AioContext<->GSource integration used by the main loop and how to add
rfifolock to that.

Stefan


* Re: [Qemu-devel] [PATCH 2/3] blockdev: acquire AioContext in hmp_commit()
  2015-11-04 17:27 ` [Qemu-devel] [PATCH 2/3] blockdev: acquire AioContext in hmp_commit() Denis V. Lunev
@ 2015-11-06 15:41   ` Stefan Hajnoczi
  0 siblings, 0 replies; 13+ messages in thread
From: Stefan Hajnoczi @ 2015-11-06 15:41 UTC (permalink / raw)
  To: Denis V. Lunev; +Cc: qemu-devel, Stefan Hajnoczi


On Wed, Nov 04, 2015 at 08:27:23PM +0300, Denis V. Lunev wrote:
> From: Stefan Hajnoczi <stefanha@redhat.com>
> 
> This one slipped through.  Although we acquire AioContext when
> committing all devices we don't for just a single device.
> 
> AioContext must be acquired before calling bdrv_*() functions to
> synchronize access with other threads that may be using the AioContext.
> 
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> Signed-off-by: Denis V. Lunev <den@openvz.org>
> Reviewed-by: Jeff Cody <jcody@redhat.com>
> ---
>  blockdev.c | 12 +++++++++++-
>  1 file changed, 11 insertions(+), 1 deletion(-)

Thanks, applied this patch to my block tree:
https://github.com/stefanha/qemu/commits/block

Stefan



end of thread, other threads:[~2015-11-06 15:41 UTC | newest]

Thread overview: 13+ messages
-- links below jump to the message on this page --
2015-11-04 17:27 [Qemu-devel] [PATCH 2.5 v2 0/3] misc missed aio_context_acquire for bdrv_drain Denis V. Lunev
2015-11-04 17:27 ` [Qemu-devel] [PATCH 1/3] block: add missed aio_context_acquire around bdrv_set_aio_context Denis V. Lunev
2015-11-06 13:35   ` Stefan Hajnoczi
2015-11-06 14:09     ` Denis V. Lunev
2015-11-06 14:19       ` Denis V. Lunev
2015-11-06 14:49         ` Stefan Hajnoczi
2015-11-06 14:44       ` Stefan Hajnoczi
2015-11-06 14:51         ` Denis V. Lunev
2015-11-06 14:55           ` Stefan Hajnoczi
2015-11-04 17:27 ` [Qemu-devel] [PATCH 2/3] blockdev: acquire AioContext in hmp_commit() Denis V. Lunev
2015-11-06 15:41   ` Stefan Hajnoczi
2015-11-04 17:27 ` [Qemu-devel] [PATCH 3/3] block: guard bdrv_drain in bdrv_close with aio_context_acquire Denis V. Lunev
2015-11-06 13:44   ` Stefan Hajnoczi
