* [Qemu-devel] [PATCH v3] migration/block:limit the time used for block migration
@ 2017-04-05  9:27 jemmy858585
  2017-04-05  9:34 ` Daniel P. Berrange
  2017-04-06 14:02 ` Stefan Hajnoczi
  0 siblings, 2 replies; 19+ messages in thread
From: jemmy858585 @ 2017-04-05  9:27 UTC (permalink / raw)
  To: qemu-devel; +Cc: stefanha, famz, quintela, dgilbert, qemu-block, Lidong Chen

From: Lidong Chen <lidongchen@tencent.com>

When migrating at high speed, mig_save_device_bulk invokes
bdrv_is_allocated too frequently, which makes the VNC session respond
slowly. This patch limits the time spent in bdrv_is_allocated.

Signed-off-by: Lidong Chen <lidongchen@tencent.com>
---
 migration/block.c | 35 ++++++++++++++++++++++++++++++-----
 1 file changed, 30 insertions(+), 5 deletions(-)

diff --git a/migration/block.c b/migration/block.c
index 7734ff7..dbce931 100644
--- a/migration/block.c
+++ b/migration/block.c
@@ -39,6 +39,7 @@
 #define MAX_IS_ALLOCATED_SEARCH 65536
 
 #define MAX_INFLIGHT_IO 512
+#define BIG_DELAY 500000
 
 //#define DEBUG_BLK_MIGRATION
 
@@ -110,6 +111,7 @@ typedef struct BlkMigState {
     int transferred;
     int prev_progress;
     int bulk_completed;
+    int64_t time_ns_used;
 
     /* Lock must be taken _inside_ the iothread lock and any AioContexts.  */
     QemuMutex lock;
@@ -272,16 +274,32 @@ static int mig_save_device_bulk(QEMUFile *f, BlkMigDevState *bmds)
     BlockBackend *bb = bmds->blk;
     BlkMigBlock *blk;
     int nr_sectors;
+    uint64_t ts1, ts2;
+    int ret = 0;
+    bool timeout_flag = false;
 
     if (bmds->shared_base) {
         qemu_mutex_lock_iothread();
         aio_context_acquire(blk_get_aio_context(bb));
         /* Skip unallocated sectors; intentionally treats failure as
          * an allocated sector */
-        while (cur_sector < total_sectors &&
-               !bdrv_is_allocated(blk_bs(bb), cur_sector,
-                                  MAX_IS_ALLOCATED_SEARCH, &nr_sectors)) {
-            cur_sector += nr_sectors;
+        while (cur_sector < total_sectors) {
+            ts1 = qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
+            ret = bdrv_is_allocated(blk_bs(bb), cur_sector,
+                                    MAX_IS_ALLOCATED_SEARCH, &nr_sectors);
+            ts2 = qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
+
+            block_mig_state.time_ns_used += ts2 - ts1;
+
+            if (!ret) {
+                cur_sector += nr_sectors;
+                if (block_mig_state.time_ns_used > BIG_DELAY) {
+                    timeout_flag = true;
+                    break;
+                }
+            } else {
+                break;
+            }
         }
         aio_context_release(blk_get_aio_context(bb));
         qemu_mutex_unlock_iothread();
@@ -292,6 +310,11 @@ static int mig_save_device_bulk(QEMUFile *f, BlkMigDevState *bmds)
         return 1;
     }
 
+    if (timeout_flag) {
+        bmds->cur_sector = bmds->completed_sectors = cur_sector;
+        return 0;
+    }
+
     bmds->completed_sectors = cur_sector;
 
     cur_sector &= ~((int64_t)BDRV_SECTORS_PER_DIRTY_CHUNK - 1);
@@ -756,6 +779,7 @@ static int block_save_iterate(QEMUFile *f, void *opaque)
     }
 
     blk_mig_reset_dirty_cursor();
+    block_mig_state.time_ns_used = 0;
 
     /* control the rate of transfer */
     blk_mig_lock();
@@ -764,7 +788,8 @@ static int block_save_iterate(QEMUFile *f, void *opaque)
            qemu_file_get_rate_limit(f) &&
            (block_mig_state.submitted +
             block_mig_state.read_done) <
-           MAX_INFLIGHT_IO) {
+           MAX_INFLIGHT_IO &&
+           block_mig_state.time_ns_used <= BIG_DELAY) {
         blk_mig_unlock();
         if (block_mig_state.bulk_completed == 0) {
             /* first finish the bulk phase */
-- 
1.8.3.1

* Re: [Qemu-devel] [PATCH v3] migration/block:limit the time used for block migration
  2017-04-05  9:27 [Qemu-devel] [PATCH v3] migration/block:limit the time used for block migration jemmy858585
@ 2017-04-05  9:34 ` Daniel P. Berrange
  2017-04-05 10:44   ` 858585 jemmy
  2017-04-06 14:02 ` Stefan Hajnoczi
  1 sibling, 1 reply; 19+ messages in thread
From: Daniel P. Berrange @ 2017-04-05  9:34 UTC (permalink / raw)
  To: jemmy858585
  Cc: qemu-devel, famz, qemu-block, quintela, dgilbert, stefanha, Lidong Chen

On Wed, Apr 05, 2017 at 05:27:58PM +0800, jemmy858585@gmail.com wrote:
> From: Lidong Chen <lidongchen@tencent.com>
> 
> when migration with high speed, mig_save_device_bulk invoke
> bdrv_is_allocated too frequently, and cause vnc reponse slowly.
> this patch limit the time used for bdrv_is_allocated.

Can you explain why calling bdrv_is_allocated is impacting VNC performance ?

Migration is running in a background thread, so shouldn't be impacting the
main thread which handles VNC, unless the block layer is perhaps acquiring
the global qemu lock ? I wouldn't expect such a lock to be held for just
the bdrv_is_allocated call though.

Regards,
Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://entangle-photo.org       -o-    http://search.cpan.org/~danberr/ :|

* Re: [Qemu-devel] [PATCH v3] migration/block:limit the time used for block migration
  2017-04-05  9:34 ` Daniel P. Berrange
@ 2017-04-05 10:44   ` 858585 jemmy
  2017-04-06  3:18     ` 858585 jemmy
  0 siblings, 1 reply; 19+ messages in thread
From: 858585 jemmy @ 2017-04-05 10:44 UTC (permalink / raw)
  To: Daniel P. Berrange
  Cc: qemu-devel, Fam Zheng, qemu-block, quintela, dgilbert, stefanha,
	Lidong Chen

On Wed, Apr 5, 2017 at 5:34 PM, Daniel P. Berrange <berrange@redhat.com> wrote:
> On Wed, Apr 05, 2017 at 05:27:58PM +0800, jemmy858585@gmail.com wrote:
>> From: Lidong Chen <lidongchen@tencent.com>
>>
>> when migration with high speed, mig_save_device_bulk invoke
>> bdrv_is_allocated too frequently, and cause vnc reponse slowly.
>> this patch limit the time used for bdrv_is_allocated.
>
> Can you explain why calling bdrv_is_allocated is impacting VNC performance ?
>

bdrv_is_allocated is called with qemu_mutex_lock_iothread held:

    if (bmds->shared_base) {
        qemu_mutex_lock_iothread();
        aio_context_acquire(blk_get_aio_context(bb));
        /* Skip unallocated sectors; intentionally treats failure as
         * an allocated sector */
        while (cur_sector < total_sectors &&
               !bdrv_is_allocated(blk_bs(bb), cur_sector,
                                  MAX_IS_ALLOCATED_SEARCH, &nr_sectors)) {
            cur_sector += nr_sectors;
        }
        aio_context_release(blk_get_aio_context(bb));
        qemu_mutex_unlock_iothread();
    }

and the main thread also calls qemu_mutex_lock_iothread:

#0  0x00007f107322f264 in __lll_lock_wait () from /lib64/libpthread.so.0
#1  0x00007f107322a508 in _L_lock_854 () from /lib64/libpthread.so.0
#2  0x00007f107322a3d7 in pthread_mutex_lock () from /lib64/libpthread.so.0
#3  0x0000000000949ecb in qemu_mutex_lock (mutex=0xfc51a0) at
util/qemu-thread-posix.c:60
#4  0x0000000000459e58 in qemu_mutex_lock_iothread () at /root/qemu/cpus.c:1516
#5  0x0000000000945322 in os_host_main_loop_wait (timeout=28911939) at
util/main-loop.c:258
#6  0x00000000009453f2 in main_loop_wait (nonblocking=0) at util/main-loop.c:517
#7  0x00000000005c76b4 in main_loop () at vl.c:1898
#8  0x00000000005ceb77 in main (argc=49, argv=0x7fff921182b8,
envp=0x7fff92118448) at vl.c:4709

> Migration is running in a background thread, so shouldn't be impacting the
> main thread which handles VNC, unless the block layer is perhaps acquiring
> the global qemu lock ? I wouldn't expect such a lock to be held for just
> the bdrv_is_allocated call though.
>
> Regards,
> Daniel
> --
> |: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
> |: http://libvirt.org              -o-             http://virt-manager.org :|
> |: http://entangle-photo.org       -o-    http://search.cpan.org/~danberr/ :|

* Re: [Qemu-devel] [PATCH v3] migration/block:limit the time used for block migration
  2017-04-05 10:44   ` 858585 jemmy
@ 2017-04-06  3:18     ` 858585 jemmy
  0 siblings, 0 replies; 19+ messages in thread
From: 858585 jemmy @ 2017-04-06  3:18 UTC (permalink / raw)
  To: Daniel P. Berrange
  Cc: qemu-devel, Fam Zheng, qemu-block, quintela, dgilbert, stefanha,
	Lidong Chen

On Wed, Apr 5, 2017 at 6:44 PM, 858585 jemmy <jemmy858585@gmail.com> wrote:
> On Wed, Apr 5, 2017 at 5:34 PM, Daniel P. Berrange <berrange@redhat.com> wrote:
>> On Wed, Apr 05, 2017 at 05:27:58PM +0800, jemmy858585@gmail.com wrote:
>>> From: Lidong Chen <lidongchen@tencent.com>
>>>
>>> when migration with high speed, mig_save_device_bulk invoke
>>> bdrv_is_allocated too frequently, and cause vnc reponse slowly.
>>> this patch limit the time used for bdrv_is_allocated.
>>
>> Can you explain why calling bdrv_is_allocated is impacting VNC performance ?
>>
>
> bdrv_is_allocated is called after qemu_mutex_lock_iothread.
>
>     if (bmds->shared_base) {
>         qemu_mutex_lock_iothread();
>         aio_context_acquire(blk_get_aio_context(bb));
>         /* Skip unallocated sectors; intentionally treats failure as
>          * an allocated sector */
>         while (cur_sector < total_sectors &&
>                !bdrv_is_allocated(blk_bs(bb), cur_sector,
>                                   MAX_IS_ALLOCATED_SEARCH, &nr_sectors)) {
>             cur_sector += nr_sectors;
>         }
>         aio_context_release(blk_get_aio_context(bb));
>         qemu_mutex_unlock_iothread();
>     }
>
> and the main thread is also call qemu_mutex_lock_iothread.
>
> #0  0x00007f107322f264 in __lll_lock_wait () from /lib64/libpthread.so.0
> #1  0x00007f107322a508 in _L_lock_854 () from /lib64/libpthread.so.0
> #2  0x00007f107322a3d7 in pthread_mutex_lock () from /lib64/libpthread.so.0
> #3  0x0000000000949ecb in qemu_mutex_lock (mutex=0xfc51a0) at
> util/qemu-thread-posix.c:60
> #4  0x0000000000459e58 in qemu_mutex_lock_iothread () at /root/qemu/cpus.c:1516
> #5  0x0000000000945322 in os_host_main_loop_wait (timeout=28911939) at
> util/main-loop.c:258
> #6  0x00000000009453f2 in main_loop_wait (nonblocking=0) at util/main-loop.c:517
> #7  0x00000000005c76b4 in main_loop () at vl.c:1898
> #8  0x00000000005ceb77 in main (argc=49, argv=0x7fff921182b8,
> envp=0x7fff92118448) at vl.c:4709
>
>> Migration is running in a background thread, so shouldn't be impacting the
>> main thread which handles VNC, unless the block layer is perhaps acquiring
>> the global qemu lock ? I wouldn't expect such a lock to be held for just
>> the bdrv_is_allocated call though.
>>

I'm not sure it's safe to remove qemu_mutex_lock_iothread; I will
analyze and test that later.
This patch is simple and can solve the problem for now.

>> Regards,
>> Daniel
>> --
>> |: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
>> |: http://libvirt.org              -o-             http://virt-manager.org :|
>> |: http://entangle-photo.org       -o-    http://search.cpan.org/~danberr/ :|

* Re: [Qemu-devel] [PATCH v3] migration/block:limit the time used for block migration
  2017-04-05  9:27 [Qemu-devel] [PATCH v3] migration/block:limit the time used for block migration jemmy858585
  2017-04-05  9:34 ` Daniel P. Berrange
@ 2017-04-06 14:02 ` Stefan Hajnoczi
  2017-04-07  1:30   ` 858585 jemmy
  1 sibling, 1 reply; 19+ messages in thread
From: Stefan Hajnoczi @ 2017-04-06 14:02 UTC (permalink / raw)
  To: jemmy858585; +Cc: qemu-devel, famz, quintela, dgilbert, qemu-block, Lidong Chen

On Wed, Apr 05, 2017 at 05:27:58PM +0800, jemmy858585@gmail.com wrote:
> From: Lidong Chen <lidongchen@tencent.com>
> 
> when migration with high speed, mig_save_device_bulk invoke
> bdrv_is_allocated too frequently, and cause vnc reponse slowly.
> this patch limit the time used for bdrv_is_allocated.

bdrv_is_allocated() is supposed to yield back to the event loop if it
needs to block.  If your VNC session is experiencing jitter then it's
probably because a system call in the bdrv_is_allocated() code path is
synchronous when it should be asynchronous.

You could try to identify the system call using strace -f -T.  In the
output you'll see the duration of each system call.  I guess there is a
file I/O system call that is taking noticeable amounts of time.

A proper solution is to refactor the synchronous code to make it
asynchronous.  This might require invoking the system call from a
thread pool worker.
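
For illustration only (not QEMU code): a minimal standalone sketch of that
pattern in plain pthreads, with a hypothetical blocking_syscall() standing in
for the slow file I/O. The blocking call runs in a worker thread, and the
requester waits for completion without holding the lock that the
latency-sensitive (VNC/main loop) thread also needs:

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

/* Hypothetical stand-in for a synchronous file I/O call (e.g. the read
 * behind bdrv_is_allocated()); it may block for a long time. */
static int blocking_syscall(void)
{
    usleep(100 * 1000);
    return 1;
}

/* Stand-in for the global mutex that the main loop also needs. */
static pthread_mutex_t global_lock = PTHREAD_MUTEX_INITIALIZER;

typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  done;
    bool            completed;
    int             result;
} BlockingRequest;

/* Worker: performs the blocking call while holding no shared lock. */
static void *worker_thread(void *opaque)
{
    BlockingRequest *req = opaque;
    int ret = blocking_syscall();

    pthread_mutex_lock(&req->lock);
    req->result = ret;
    req->completed = true;
    pthread_cond_signal(&req->done);
    pthread_mutex_unlock(&req->lock);
    return NULL;
}

int main(void)
{
    BlockingRequest req = { .completed = false };
    pthread_t tid;

    pthread_mutex_init(&req.lock, NULL);
    pthread_cond_init(&req.done, NULL);

    /* Hold the shared lock only for the short, non-blocking setup... */
    pthread_mutex_lock(&global_lock);
    pthread_create(&tid, NULL, worker_thread, &req);
    pthread_mutex_unlock(&global_lock);

    /* ...and wait for completion without holding it, so other threads
     * can still take global_lock in the meantime. */
    pthread_mutex_lock(&req.lock);
    while (!req.completed) {
        pthread_cond_wait(&req.done, &req.lock);
    }
    pthread_mutex_unlock(&req.lock);

    printf("blocking call returned %d\n", req.result);
    pthread_join(tid, NULL);
    return 0;
}

In QEMU itself the completion would be delivered back through the event loop
(a bottom half or coroutine wake-up) rather than a bare condition variable;
the sketch only illustrates the locking discipline.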

Stefan

* Re: [Qemu-devel] [PATCH v3] migration/block:limit the time used for block migration
  2017-04-06 14:02 ` Stefan Hajnoczi
@ 2017-04-07  1:30   ` 858585 jemmy
  2017-04-07  8:26     ` 858585 jemmy
                       ` (2 more replies)
  0 siblings, 3 replies; 19+ messages in thread
From: 858585 jemmy @ 2017-04-07  1:30 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: qemu-devel, Fam Zheng, quintela, dgilbert, qemu-block, Lidong Chen

On Thu, Apr 6, 2017 at 10:02 PM, Stefan Hajnoczi <stefanha@redhat.com> wrote:
> On Wed, Apr 05, 2017 at 05:27:58PM +0800, jemmy858585@gmail.com wrote:
>> From: Lidong Chen <lidongchen@tencent.com>
>>
>> when migration with high speed, mig_save_device_bulk invoke
>> bdrv_is_allocated too frequently, and cause vnc reponse slowly.
>> this patch limit the time used for bdrv_is_allocated.
>
> bdrv_is_allocated() is supposed to yield back to the event loop if it
> needs to block.  If your VNC session is experiencing jitter then it's
> probably because a system call in the bdrv_is_allocated() code path is
> synchronous when it should be asynchronous.
>
> You could try to identify the system call using strace -f -T.  In the
> output you'll see the duration of each system call.  I guess there is a
> file I/O system call that is taking noticable amounts of time.

Yes, I found out where bdrv_is_allocated needs to block.

The main cause is the qemu_co_mutex_lock invoked by
qcow2_co_get_block_status:
    qemu_co_mutex_lock(&s->lock);
    ret = qcow2_get_cluster_offset(bs, sector_num << 9, &bytes,
                                   &cluster_offset);
    qemu_co_mutex_unlock(&s->lock);

The other cause is l2_load, invoked by
qcow2_get_cluster_offset:

    /* load the l2 table in memory */

    ret = l2_load(bs, l2_offset, &l2_table);
    if (ret < 0) {
        return ret;
    }

>
> A proper solution is to refactor the synchronous code to make it
> asynchronous.  This might require invoking the system call from a
> thread pool worker.
>

Yes, I agree with you, but that is a big change.
I will try to work out how to optimize this code, which may take a long time.

This patch is not a perfect solution, but it can alleviate the problem.

> Stefan

* Re: [Qemu-devel] [PATCH v3] migration/block:limit the time used for block migration
  2017-04-07  1:30   ` 858585 jemmy
@ 2017-04-07  8:26     ` 858585 jemmy
  2017-04-07 11:33     ` Stefan Hajnoczi
  2017-04-07 11:34     ` Stefan Hajnoczi
  2 siblings, 0 replies; 19+ messages in thread
From: 858585 jemmy @ 2017-04-07  8:26 UTC (permalink / raw)
  To: Stefan Hajnoczi, Fam Zheng, quintela, dgilbert, Daniel P. Berrange
  Cc: qemu-devel, qemu-block, Lidong Chen

On Fri, Apr 7, 2017 at 9:30 AM, 858585 jemmy <jemmy858585@gmail.com> wrote:
> On Thu, Apr 6, 2017 at 10:02 PM, Stefan Hajnoczi <stefanha@redhat.com> wrote:
>> On Wed, Apr 05, 2017 at 05:27:58PM +0800, jemmy858585@gmail.com wrote:
>>> From: Lidong Chen <lidongchen@tencent.com>
>>>
>>> when migration with high speed, mig_save_device_bulk invoke
>>> bdrv_is_allocated too frequently, and cause vnc reponse slowly.
>>> this patch limit the time used for bdrv_is_allocated.
>>
>> bdrv_is_allocated() is supposed to yield back to the event loop if it
>> needs to block.  If your VNC session is experiencing jitter then it's
>> probably because a system call in the bdrv_is_allocated() code path is
>> synchronous when it should be asynchronous.
>>
>> You could try to identify the system call using strace -f -T.  In the
>> output you'll see the duration of each system call.  I guess there is a
>> file I/O system call that is taking noticable amounts of time.
>
> yes, i find the reason where bdrv_is_allocated needs to block.
>
> the mainly reason is caused by qemu_co_mutex_lock invoked by
> qcow2_co_get_block_status.
>     qemu_co_mutex_lock(&s->lock);
>     ret = qcow2_get_cluster_offset(bs, sector_num << 9, &bytes,
>                                    &cluster_offset);
>     qemu_co_mutex_unlock(&s->lock);
>
> other reason is caused by l2_load invoked by
> qcow2_get_cluster_offset.
>
>     /* load the l2 table in memory */
>
>     ret = l2_load(bs, l2_offset, &l2_table);
>     if (ret < 0) {
>         return ret;
>     }
>
>>
>> A proper solution is to refactor the synchronous code to make it
>> asynchronous.  This might require invoking the system call from a
>> thread pool worker.
>>
>
> yes, i agree with you, but this is a big change.
> I will try to find how to optimize this code, maybe need a long time.
>
> this patch is not a perfect solution, but can alleviate the problem.

Hi everyone:
    Do you think we should use this patch for now and optimize this
code later?
    Thanks.

>
>> Stefan

* Re: [Qemu-devel] [PATCH v3] migration/block:limit the time used for block migration
  2017-04-07  1:30   ` 858585 jemmy
  2017-04-07  8:26     ` 858585 jemmy
@ 2017-04-07 11:33     ` Stefan Hajnoczi
  2017-04-08 10:09       ` Paolo Bonzini
  2017-04-09 13:06       ` [Qemu-devel] " 858585 jemmy
  2017-04-07 11:34     ` Stefan Hajnoczi
  2 siblings, 2 replies; 19+ messages in thread
From: Stefan Hajnoczi @ 2017-04-07 11:33 UTC (permalink / raw)
  To: 858585 jemmy
  Cc: qemu-devel, Fam Zheng, quintela, dgilbert, qemu-block,
	Lidong Chen, kwolf, Paolo Bonzini

On Fri, Apr 07, 2017 at 09:30:33AM +0800, 858585 jemmy wrote:
> On Thu, Apr 6, 2017 at 10:02 PM, Stefan Hajnoczi <stefanha@redhat.com> wrote:
> > On Wed, Apr 05, 2017 at 05:27:58PM +0800, jemmy858585@gmail.com wrote:
> >> From: Lidong Chen <lidongchen@tencent.com>
> >>
> >> when migration with high speed, mig_save_device_bulk invoke
> >> bdrv_is_allocated too frequently, and cause vnc reponse slowly.
> >> this patch limit the time used for bdrv_is_allocated.
> >
> > bdrv_is_allocated() is supposed to yield back to the event loop if it
> > needs to block.  If your VNC session is experiencing jitter then it's
> > probably because a system call in the bdrv_is_allocated() code path is
> > synchronous when it should be asynchronous.
> >
> > You could try to identify the system call using strace -f -T.  In the
> > output you'll see the duration of each system call.  I guess there is a
> > file I/O system call that is taking noticable amounts of time.
> 
> yes, i find the reason where bdrv_is_allocated needs to block.
> 
> the mainly reason is caused by qemu_co_mutex_lock invoked by
> qcow2_co_get_block_status.
>     qemu_co_mutex_lock(&s->lock);
>     ret = qcow2_get_cluster_offset(bs, sector_num << 9, &bytes,
>                                    &cluster_offset);
>     qemu_co_mutex_unlock(&s->lock);
> 
> other reason is caused by l2_load invoked by
> qcow2_get_cluster_offset.
> 
>     /* load the l2 table in memory */
> 
>     ret = l2_load(bs, l2_offset, &l2_table);
>     if (ret < 0) {
>         return ret;
>     }

The migration thread is holding the QEMU global mutex, the AioContext,
and the qcow2 s->lock while the L2 table is read from disk.

The QEMU global mutex is needed for block layer operations that touch
the global drives list.  bdrv_is_allocated() can be called without the
global mutex.

The VNC server's file descriptor is not in the BDS AioContext.
Therefore it can be processed while the migration thread holds the
AioContext and qcow2 s->lock.

Does the following patch solve the problem?

diff --git a/migration/block.c b/migration/block.c
index 7734ff7..072fc20 100644
--- a/migration/block.c
+++ b/migration/block.c
@@ -276,6 +276,7 @@ static int mig_save_device_bulk(QEMUFile *f, BlkMigDevState *bmds)
     if (bmds->shared_base) {
         qemu_mutex_lock_iothread();
         aio_context_acquire(blk_get_aio_context(bb));
+        qemu_mutex_unlock_iothread();
         /* Skip unallocated sectors; intentionally treats failure as
          * an allocated sector */
         while (cur_sector < total_sectors &&
@@ -283,6 +284,7 @@ static int mig_save_device_bulk(QEMUFile *f, BlkMigDevState *bmds)
                                   MAX_IS_ALLOCATED_SEARCH, &nr_sectors)) {
             cur_sector += nr_sectors;
         }
+        qemu_mutex_lock_iothread();
         aio_context_release(blk_get_aio_context(bb));
         qemu_mutex_unlock_iothread();
     }


* Re: [Qemu-devel] [PATCH v3] migration/block:limit the time used for block migration
  2017-04-07  1:30   ` 858585 jemmy
  2017-04-07  8:26     ` 858585 jemmy
  2017-04-07 11:33     ` Stefan Hajnoczi
@ 2017-04-07 11:34     ` Stefan Hajnoczi
  2017-04-08 13:17       ` 858585 jemmy
  2 siblings, 1 reply; 19+ messages in thread
From: Stefan Hajnoczi @ 2017-04-07 11:34 UTC (permalink / raw)
  To: 858585 jemmy
  Cc: qemu-devel, Fam Zheng, quintela, dgilbert, qemu-block, Lidong Chen

On Fri, Apr 07, 2017 at 09:30:33AM +0800, 858585 jemmy wrote:
> On Thu, Apr 6, 2017 at 10:02 PM, Stefan Hajnoczi <stefanha@redhat.com> wrote:
> > On Wed, Apr 05, 2017 at 05:27:58PM +0800, jemmy858585@gmail.com wrote:
> >
> > A proper solution is to refactor the synchronous code to make it
> > asynchronous.  This might require invoking the system call from a
> > thread pool worker.
> >
> 
> yes, i agree with you, but this is a big change.
> I will try to find how to optimize this code, maybe need a long time.
> 
> this patch is not a perfect solution, but can alleviate the problem.

Let's try to understand the problem fully first.

Stefan

* Re: [Qemu-devel] [PATCH v3] migration/block:limit the time used for block migration
  2017-04-07 11:33     ` Stefan Hajnoczi
@ 2017-04-08 10:09       ` Paolo Bonzini
  2017-04-10 10:01         ` [Qemu-devel] [Qemu-block] " Stefan Hajnoczi
  2017-04-09 13:06       ` [Qemu-devel] " 858585 jemmy
  1 sibling, 1 reply; 19+ messages in thread
From: Paolo Bonzini @ 2017-04-08 10:09 UTC (permalink / raw)
  To: Stefan Hajnoczi, 858585 jemmy
  Cc: qemu-devel, Fam Zheng, quintela, dgilbert, qemu-block,
	Lidong Chen, kwolf



On 07/04/2017 19:33, Stefan Hajnoczi wrote:
> The migration thread is holding the QEMU global mutex, the AioContext,
> and the qcow2 s->lock while the L2 table is read from disk.
> 
> The QEMU global mutex is needed for block layer operations that touch
> the global drives list.  bdrv_is_allocated() can be called without the
> global mutex.

Hi Stefan,

only virtio-blk and virtio-scsi take the AioContext lock (because they
support dataplane).  For block migration to work with devices such as
IDE, it needs to take the iothread lock too.  I think there's a comment
about this in migration/block.c.

However, this will hopefully be fixed in 2.10 by making the block layer
thread safe.

Paolo

* Re: [Qemu-devel] [PATCH v3] migration/block:limit the time used for block migration
  2017-04-07 11:34     ` Stefan Hajnoczi
@ 2017-04-08 13:17       ` 858585 jemmy
  2017-04-10 13:52         ` [Qemu-devel] [Qemu-block] " Stefan Hajnoczi
  0 siblings, 1 reply; 19+ messages in thread
From: 858585 jemmy @ 2017-04-08 13:17 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: qemu-devel, Fam Zheng, quintela, dgilbert, qemu-block, Lidong Chen

On Fri, Apr 7, 2017 at 7:34 PM, Stefan Hajnoczi <stefanha@redhat.com> wrote:
> On Fri, Apr 07, 2017 at 09:30:33AM +0800, 858585 jemmy wrote:
>> On Thu, Apr 6, 2017 at 10:02 PM, Stefan Hajnoczi <stefanha@redhat.com> wrote:
>> > On Wed, Apr 05, 2017 at 05:27:58PM +0800, jemmy858585@gmail.com wrote:
>> >
>> > A proper solution is to refactor the synchronous code to make it
>> > asynchronous.  This might require invoking the system call from a
>> > thread pool worker.
>> >
>>
>> yes, i agree with you, but this is a big change.
>> I will try to find how to optimize this code, maybe need a long time.
>>
>> this patch is not a perfect solution, but can alleviate the problem.
>
> Let's try to understand the problem fully first.
>

When migrating the VM at high speed, I find that VNC sometimes responds
slowly. Not only VNC: the virsh console also responds slowly sometimes,
and guest OS block I/O performance is reduced as well.

The bug can be reproduced with these commands:
virsh migrate-setspeed 165cf436-312f-47e7-90f2-f8aa63f34893 900
virsh migrate --live 165cf436-312f-47e7-90f2-f8aa63f34893
--copy-storage-inc qemu+ssh://10.59.163.38/system

With --copy-storage-all there is no problem:
virsh migrate --live 165cf436-312f-47e7-90f2-f8aa63f34893
--copy-storage-all qemu+ssh://10.59.163.38/system

Comparing --copy-storage-inc with --copy-storage-all, I found that the
reason is that mig_save_device_bulk invokes bdrv_is_allocated, but
bdrv_is_allocated is synchronous and may wait for a long time.

I wrote this code to measure the time used by bdrv_is_allocated():

    static int max_time = 0;
    int tmp;

            clock_gettime(CLOCK_MONOTONIC_RAW, &ts1);
            ret = bdrv_is_allocated(blk_bs(bb), cur_sector,
                                    MAX_IS_ALLOCATED_SEARCH, &nr_sectors);
            clock_gettime(CLOCK_MONOTONIC_RAW, &ts2);

            tmp = (ts2.tv_sec - ts1.tv_sec) * 1000000000L
                  + (ts2.tv_nsec - ts1.tv_nsec);
            if (tmp > max_time) {
                max_time = tmp;
                fprintf(stderr, "max_time is %d\n", max_time);
            }

The test results are below:

 max_time is 37014
 max_time is 1075534
 max_time is 17180913
 max_time is 28586762
 max_time is 49563584
 max_time is 103085447
 max_time is 110836833
 max_time is 120331438

bdrv_is_allocated is called with qemu_mutex_lock_iothread held,
and the main thread also calls qemu_mutex_lock_iothread,
so the main thread may have to wait for a long time.

   if (bmds->shared_base) {
        qemu_mutex_lock_iothread();
        aio_context_acquire(blk_get_aio_context(bb));
        /* Skip unallocated sectors; intentionally treats failure as
         * an allocated sector */
        while (cur_sector < total_sectors &&
               !bdrv_is_allocated(blk_bs(bb), cur_sector,
                                  MAX_IS_ALLOCATED_SEARCH, &nr_sectors)) {
            cur_sector += nr_sectors;
        }
        aio_context_release(blk_get_aio_context(bb));
        qemu_mutex_unlock_iothread();
    }

#0  0x00007f107322f264 in __lll_lock_wait () from /lib64/libpthread.so.0
#1  0x00007f107322a508 in _L_lock_854 () from /lib64/libpthread.so.0
#2  0x00007f107322a3d7 in pthread_mutex_lock () from /lib64/libpthread.so.0
#3  0x0000000000949ecb in qemu_mutex_lock (mutex=0xfc51a0) at
util/qemu-thread-posix.c:60
#4  0x0000000000459e58 in qemu_mutex_lock_iothread () at /root/qemu/cpus.c:1516
#5  0x0000000000945322 in os_host_main_loop_wait (timeout=28911939) at
util/main-loop.c:258
#6  0x00000000009453f2 in main_loop_wait (nonblocking=0) at util/main-loop.c:517
#7  0x00000000005c76b4 in main_loop () at vl.c:1898
#8  0x00000000005ceb77 in main (argc=49, argv=0x7fff921182b8,
envp=0x7fff92118448) at vl.c:4709



> Stefan

* Re: [Qemu-devel] [PATCH v3] migration/block:limit the time used for block migration
  2017-04-07 11:33     ` Stefan Hajnoczi
  2017-04-08 10:09       ` Paolo Bonzini
@ 2017-04-09 13:06       ` 858585 jemmy
  1 sibling, 0 replies; 19+ messages in thread
From: 858585 jemmy @ 2017-04-09 13:06 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: qemu-devel, Fam Zheng, quintela, dgilbert, qemu-block,
	Lidong Chen, kwolf, Paolo Bonzini

On Fri, Apr 7, 2017 at 7:33 PM, Stefan Hajnoczi <stefanha@redhat.com> wrote:
> On Fri, Apr 07, 2017 at 09:30:33AM +0800, 858585 jemmy wrote:
>> On Thu, Apr 6, 2017 at 10:02 PM, Stefan Hajnoczi <stefanha@redhat.com> wrote:
>> > On Wed, Apr 05, 2017 at 05:27:58PM +0800, jemmy858585@gmail.com wrote:
>> >> From: Lidong Chen <lidongchen@tencent.com>
>> >>
>> >> when migration with high speed, mig_save_device_bulk invoke
>> >> bdrv_is_allocated too frequently, and cause vnc reponse slowly.
>> >> this patch limit the time used for bdrv_is_allocated.
>> >
>> > bdrv_is_allocated() is supposed to yield back to the event loop if it
>> > needs to block.  If your VNC session is experiencing jitter then it's
>> > probably because a system call in the bdrv_is_allocated() code path is
>> > synchronous when it should be asynchronous.
>> >
>> > You could try to identify the system call using strace -f -T.  In the
>> > output you'll see the duration of each system call.  I guess there is a
>> > file I/O system call that is taking noticable amounts of time.
>>
>> yes, i find the reason where bdrv_is_allocated needs to block.
>>
>> the mainly reason is caused by qemu_co_mutex_lock invoked by
>> qcow2_co_get_block_status.
>>     qemu_co_mutex_lock(&s->lock);
>>     ret = qcow2_get_cluster_offset(bs, sector_num << 9, &bytes,
>>                                    &cluster_offset);
>>     qemu_co_mutex_unlock(&s->lock);
>>
>> other reason is caused by l2_load invoked by
>> qcow2_get_cluster_offset.
>>
>>     /* load the l2 table in memory */
>>
>>     ret = l2_load(bs, l2_offset, &l2_table);
>>     if (ret < 0) {
>>         return ret;
>>     }
>
> The migration thread is holding the QEMU global mutex, the AioContext,
> and the qcow2 s->lock while the L2 table is read from disk.
>
> The QEMU global mutex is needed for block layer operations that touch
> the global drives list.  bdrv_is_allocated() can be called without the
> global mutex.
>
> The VNC server's file descriptor is not in the BDS AioContext.
> Therefore it can be processed while the migration thread holds the
> AioContext and qcow2 s->lock.
>
> Does the following patch solve the problem?
>
> diff --git a/migration/block.c b/migration/block.c
> index 7734ff7..072fc20 100644
> --- a/migration/block.c
> +++ b/migration/block.c
> @@ -276,6 +276,7 @@ static int mig_save_device_bulk(QEMUFile *f, BlkMigDevState *bmds)
>      if (bmds->shared_base) {
>          qemu_mutex_lock_iothread();
>          aio_context_acquire(blk_get_aio_context(bb));
> +        qemu_mutex_unlock_iothread();
>          /* Skip unallocated sectors; intentionally treats failure as
>           * an allocated sector */
>          while (cur_sector < total_sectors &&
> @@ -283,6 +284,7 @@ static int mig_save_device_bulk(QEMUFile *f, BlkMigDevState *bmds)
>                                    MAX_IS_ALLOCATED_SEARCH, &nr_sectors)) {
>              cur_sector += nr_sectors;
>          }
> +        qemu_mutex_lock_iothread();
>          aio_context_release(blk_get_aio_context(bb));
>          qemu_mutex_unlock_iothread();
>      }
>

This patch doesn't work; QEMU locks up.
The stack of the main thread:
(gdb) bt
#0  0x00007f4256c89264 in __lll_lock_wait () from /lib64/libpthread.so.0
#1  0x00007f4256c84523 in _L_lock_892 () from /lib64/libpthread.so.0
#2  0x00007f4256c84407 in pthread_mutex_lock () from /lib64/libpthread.so.0
#3  0x0000000000949f47 in qemu_mutex_lock (mutex=0x1b04a60) at
util/qemu-thread-posix.c:60
#4  0x00000000009424cf in aio_context_acquire (ctx=0x1b04a00) at
util/async.c:484
#5  0x0000000000942b86 in thread_pool_completion_bh (opaque=0x1b25a10)
at util/thread-pool.c:168
#6  0x0000000000941610 in aio_bh_call (bh=0x1b1d570) at util/async.c:90
#7  0x00000000009416bb in aio_bh_poll (ctx=0x1b04a00) at util/async.c:118
#8  0x0000000000946baa in aio_dispatch (ctx=0x1b04a00) at util/aio-posix.c:429
#9  0x0000000000941b30 in aio_ctx_dispatch (source=0x1b04a00,
callback=0, user_data=0x0)
    at util/async.c:261
#10 0x00007f4257670f0e in g_main_context_dispatch () from
/lib64/libglib-2.0.so.0
#11 0x0000000000945282 in glib_pollfds_poll () at util/main-loop.c:213
#12 0x00000000009453a3 in os_host_main_loop_wait (timeout=754229747)
at util/main-loop.c:261
#13 0x000000000094546e in main_loop_wait (nonblocking=0) at util/main-loop.c:517
#14 0x00000000005c7664 in main_loop () at vl.c:1898
#15 0x00000000005ceb27 in main (argc=49, argv=0x7fff7907ab28,
envp=0x7fff7907acb8) at vl.c:4709

The stack of the migration thread:
(gdb) bt
#0  0x00007f4256c89264 in __lll_lock_wait () from /lib64/libpthread.so.0
#1  0x00007f4256c84508 in _L_lock_854 () from /lib64/libpthread.so.0
#2  0x00007f4256c843d7 in pthread_mutex_lock () from /lib64/libpthread.so.0
#3  0x0000000000949f47 in qemu_mutex_lock (mutex=0xfc5200) at
util/qemu-thread-posix.c:60
#4  0x0000000000459e08 in qemu_mutex_lock_iothread () at /root/qemu/cpus.c:1516
#5  0x00000000007d2e04 in mig_save_device_bulk (f=0x2489720,
bmds=0x7f42500008f0)
    at migration/block.c:287
#6  0x00000000007d3579 in blk_mig_save_bulked_block (f=0x2489720) at
migration/block.c:484
#7  0x00000000007d3ebf in block_save_iterate (f=0x2489720,
opaque=0xfd3e20) at migration/block.c:773
#8  0x000000000049e840 in qemu_savevm_state_iterate (f=0x2489720,
postcopy=false)
    at /root/qemu/migration/savevm.c:1044
#9  0x00000000007c635d in migration_thread (opaque=0xf7d160) at
migration/migration.c:1976
#10 0x00007f4256c829d1 in start_thread () from /lib64/libpthread.so.0
#11 0x00007f42569cf8fd in clone () from /lib64/libc.so.6

The stack of a vCPU thread:
(gdb) bt
#0  0x00007f4256c89264 in __lll_lock_wait () from /lib64/libpthread.so.0
#1  0x00007f4256c84508 in _L_lock_854 () from /lib64/libpthread.so.0
#2  0x00007f4256c843d7 in pthread_mutex_lock () from /lib64/libpthread.so.0
#3  0x0000000000949f47 in qemu_mutex_lock (mutex=0xfc5200) at
util/qemu-thread-posix.c:60
#4  0x0000000000459e08 in qemu_mutex_lock_iothread () at /root/qemu/cpus.c:1516
#5  0x00000000004146bb in prepare_mmio_access (mr=0x39010f0) at
/root/qemu/exec.c:2703
#6  0x0000000000414ad3 in address_space_read_continue (as=0xf9c520,
addr=1018, attrs=..., buf=
    0x7f4259464000 "%\001", len=1, addr1=2, l=1, mr=0x39010f0) at
/root/qemu/exec.c:2827
#7  0x0000000000414d81 in address_space_read_full (as=0xf9c520,
addr=1018, attrs=..., buf=
    0x7f4259464000 "%\001", len=1) at /root/qemu/exec.c:2895
#8  0x0000000000414e4b in address_space_read (as=0xf9c520, addr=1018,
attrs=..., buf=
    0x7f4259464000 "%\001", len=1, is_write=false) at
/root/qemu/include/exec/memory.h:1671
#9  address_space_rw (as=0xf9c520, addr=1018, attrs=...,
buf=0x7f4259464000 "%\001", len=1, is_write=
    false) at /root/qemu/exec.c:2909
#10 0x00000000004753c9 in kvm_handle_io (port=1018, attrs=...,
data=0x7f4259464000, direction=0, size=
    1, count=1) at /root/qemu/kvm-all.c:1803
#11 0x0000000000475c15 in kvm_cpu_exec (cpu=0x1b827b0) at
/root/qemu/kvm-all.c:2032
#12 0x00000000004591c8 in qemu_kvm_cpu_thread_fn (arg=0x1b827b0) at
/root/qemu/cpus.c:1087
#13 0x00007f4256c829d1 in start_thread () from /lib64/libpthread.so.0
#14 0x00007f42569cf8fd in clone () from /lib64/libc.so.6

The main thread takes qemu_mutex_lock_iothread first and then
aio_context_acquire, while the migration thread takes aio_context_acquire
first and then qemu_mutex_lock_iothread, so the two threads deadlock on
the inverted lock order.
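
For illustration only (not QEMU code): a minimal pthread sketch, with
hypothetical lock names, of that inverted acquisition order. It reproduces
the same AB-BA deadlock the backtraces above show:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t iothread_lock = PTHREAD_MUTEX_INITIALIZER; /* like the global mutex */
static pthread_mutex_t aio_ctx_lock  = PTHREAD_MUTEX_INITIALIZER; /* like the AioContext lock */

/* Mirrors the main thread: global mutex first, then AioContext. */
static void *main_thread_fn(void *opaque)
{
    pthread_mutex_lock(&iothread_lock);
    usleep(1000);                        /* widen the race window */
    pthread_mutex_lock(&aio_ctx_lock);   /* blocks: the other thread holds it */
    puts("main thread got both locks");
    pthread_mutex_unlock(&aio_ctx_lock);
    pthread_mutex_unlock(&iothread_lock);
    return NULL;
}

/* Mirrors the migration thread with the experimental patch applied:
 * AioContext first, then the global mutex -- the opposite order. */
static void *migration_thread_fn(void *opaque)
{
    pthread_mutex_lock(&aio_ctx_lock);
    usleep(1000);
    pthread_mutex_lock(&iothread_lock);  /* blocks: the other thread holds it */
    puts("migration thread got both locks");
    pthread_mutex_unlock(&iothread_lock);
    pthread_mutex_unlock(&aio_ctx_lock);
    return NULL;
}

int main(void)
{
    pthread_t a, b;

    pthread_create(&a, NULL, main_thread_fn, NULL);
    pthread_create(&b, NULL, migration_thread_fn, NULL);
    pthread_join(a, NULL);               /* with both threads blocked, this never returns */
    pthread_join(b, NULL);
    return 0;
}

Taking the two locks in the same order in every thread (or never holding one
while waiting for the other) removes the deadlock.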

* Re: [Qemu-devel] [Qemu-block] [PATCH v3] migration/block:limit the time used for block migration
  2017-04-08 10:09       ` Paolo Bonzini
@ 2017-04-10 10:01         ` Stefan Hajnoczi
  0 siblings, 0 replies; 19+ messages in thread
From: Stefan Hajnoczi @ 2017-04-10 10:01 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Stefan Hajnoczi, 858585 jemmy, kwolf, Fam Zheng, qemu-block,
	quintela, qemu-devel, dgilbert, Lidong Chen

On Sat, Apr 08, 2017 at 06:09:16PM +0800, Paolo Bonzini wrote:
> On 07/04/2017 19:33, Stefan Hajnoczi wrote:
> > The migration thread is holding the QEMU global mutex, the AioContext,
> > and the qcow2 s->lock while the L2 table is read from disk.
> > 
> > The QEMU global mutex is needed for block layer operations that touch
> > the global drives list.  bdrv_is_allocated() can be called without the
> > global mutex.
> 
> Hi Stefan,
> 
> only virtio-blk and virtio-scsi take the AioContext lock (because they
> support dataplane).  For block migration to work with devices such as
> IDE, it needs to take the iothread lock too.  I think there's a comment
> about this in migration/block.c.

Good point.  :(

Stefan

* Re: [Qemu-devel] [Qemu-block] [PATCH v3] migration/block:limit the time used for block migration
  2017-04-08 13:17       ` 858585 jemmy
@ 2017-04-10 13:52         ` Stefan Hajnoczi
  2017-04-11 12:19           ` 858585 jemmy
  2017-05-03  3:44           ` 858585 jemmy
  0 siblings, 2 replies; 19+ messages in thread
From: Stefan Hajnoczi @ 2017-04-10 13:52 UTC (permalink / raw)
  To: 858585 jemmy
  Cc: Stefan Hajnoczi, Fam Zheng, qemu-block, quintela, qemu-devel,
	dgilbert, Lidong Chen

On Sat, Apr 08, 2017 at 09:17:58PM +0800, 858585 jemmy wrote:
> On Fri, Apr 7, 2017 at 7:34 PM, Stefan Hajnoczi <stefanha@redhat.com> wrote:
> > On Fri, Apr 07, 2017 at 09:30:33AM +0800, 858585 jemmy wrote:
> >> On Thu, Apr 6, 2017 at 10:02 PM, Stefan Hajnoczi <stefanha@redhat.com> wrote:
> >> > On Wed, Apr 05, 2017 at 05:27:58PM +0800, jemmy858585@gmail.com wrote:
> >> >
> >> > A proper solution is to refactor the synchronous code to make it
> >> > asynchronous.  This might require invoking the system call from a
> >> > thread pool worker.
> >> >
> >>
> >> yes, i agree with you, but this is a big change.
> >> I will try to find how to optimize this code, maybe need a long time.
> >>
> >> this patch is not a perfect solution, but can alleviate the problem.
> >
> > Let's try to understand the problem fully first.
> >
> 
> when migrate the vm with high speed, i find vnc response slowly sometime.
> not only vnc response slowly, virsh console aslo response slowly sometime.
> and the guest os block io performance is also reduce.
> 
> the bug can be reproduce by this command:
> virsh migrate-setspeed 165cf436-312f-47e7-90f2-f8aa63f34893 900
> virsh migrate --live 165cf436-312f-47e7-90f2-f8aa63f34893
> --copy-storage-inc qemu+ssh://10.59.163.38/system
> 
> and --copy-storage-all have no problem.
> virsh migrate --live 165cf436-312f-47e7-90f2-f8aa63f34893
> --copy-storage-all qemu+ssh://10.59.163.38/system
> 
> compare the difference between --copy-storage-inc and
> --copy-storage-all. i find out the reason is
> mig_save_device_bulk invoke bdrv_is_allocated, but bdrv_is_allocated
> is synchronous and maybe wait
> for a long time.
> 
> i write this code to measure the time used by  brdrv_is_allocated()
> 
>  279     static int max_time = 0;
>  280     int tmp;
> 
>  288             clock_gettime(CLOCK_MONOTONIC_RAW, &ts1);
>  289             ret = bdrv_is_allocated(blk_bs(bb), cur_sector,
>  290                                     MAX_IS_ALLOCATED_SEARCH, &nr_sectors);
>  291             clock_gettime(CLOCK_MONOTONIC_RAW, &ts2);
>  292
>  293
>  294             tmp =  (ts2.tv_sec - ts1.tv_sec)*1000000000L
>  295                            + (ts2.tv_nsec - ts1.tv_nsec);
>  296             if (tmp > max_time) {
>  297                max_time=tmp;
>  298                fprintf(stderr, "max_time is %d\n", max_time);
>  299             }
> 
> the test result is below:
> 
>  max_time is 37014
>  max_time is 1075534
>  max_time is 17180913
>  max_time is 28586762
>  max_time is 49563584
>  max_time is 103085447
>  max_time is 110836833
>  max_time is 120331438
> 
> bdrv_is_allocated is called after qemu_mutex_lock_iothread.
> and the main thread is also call qemu_mutex_lock_iothread.
> so cause the main thread maybe wait for a long time.
> 
>    if (bmds->shared_base) {
>         qemu_mutex_lock_iothread();
>         aio_context_acquire(blk_get_aio_context(bb));
>         /* Skip unallocated sectors; intentionally treats failure as
>          * an allocated sector */
>         while (cur_sector < total_sectors &&
>                !bdrv_is_allocated(blk_bs(bb), cur_sector,
>                                   MAX_IS_ALLOCATED_SEARCH, &nr_sectors)) {
>             cur_sector += nr_sectors;
>         }
>         aio_context_release(blk_get_aio_context(bb));
>         qemu_mutex_unlock_iothread();
>     }
> 
> #0  0x00007f107322f264 in __lll_lock_wait () from /lib64/libpthread.so.0
> #1  0x00007f107322a508 in _L_lock_854 () from /lib64/libpthread.so.0
> #2  0x00007f107322a3d7 in pthread_mutex_lock () from /lib64/libpthread.so.0
> #3  0x0000000000949ecb in qemu_mutex_lock (mutex=0xfc51a0) at
> util/qemu-thread-posix.c:60
> #4  0x0000000000459e58 in qemu_mutex_lock_iothread () at /root/qemu/cpus.c:1516
> #5  0x0000000000945322 in os_host_main_loop_wait (timeout=28911939) at
> util/main-loop.c:258
> #6  0x00000000009453f2 in main_loop_wait (nonblocking=0) at util/main-loop.c:517
> #7  0x00000000005c76b4 in main_loop () at vl.c:1898
> #8  0x00000000005ceb77 in main (argc=49, argv=0x7fff921182b8,
> envp=0x7fff92118448) at vl.c:4709

The following patch moves bdrv_is_allocated() into bb's AioContext.  It
will execute without blocking other I/O activity.

Compile-tested only.

diff --git a/migration/block.c b/migration/block.c
index 7734ff7..a5572a4 100644
--- a/migration/block.c
+++ b/migration/block.c
@@ -263,6 +263,29 @@ static void blk_mig_read_cb(void *opaque, int ret)
     blk_mig_unlock();
 }

+typedef struct {
+    int64_t *total_sectors;
+    int64_t *cur_sector;
+    BlockBackend *bb;
+    QemuEvent event;
+} MigNextAllocatedClusterData;
+
+static void coroutine_fn mig_next_allocated_cluster(void *opaque)
+{
+    MigNextAllocatedClusterData *data = opaque;
+    int nr_sectors;
+
+    /* Skip unallocated sectors; intentionally treats failure as
+     * an allocated sector */
+    while (*data->cur_sector < *data->total_sectors &&
+           !bdrv_is_allocated(blk_bs(data->bb), *data->cur_sector,
+                              MAX_IS_ALLOCATED_SEARCH, &nr_sectors)) {
+        *data->cur_sector += nr_sectors;
+    }
+
+    qemu_event_set(&data->event);
+}
+
 /* Called with no lock taken.  */

 static int mig_save_device_bulk(QEMUFile *f, BlkMigDevState *bmds)
@@ -274,17 +297,27 @@ static int mig_save_device_bulk(QEMUFile *f, BlkMigDevState *bmds)
     int nr_sectors;

     if (bmds->shared_base) {
+        /* Searching for the next allocated cluster can block.  Do it in a
+         * coroutine inside bb's AioContext.  That way we don't need to hold
+         * the global mutex while blocked.
+         */
+        AioContext *bb_ctx;
+        Coroutine *co;
+        MigNextAllocatedClusterData data = {
+            .cur_sector = &cur_sector,
+            .total_sectors = &total_sectors,
+            .bb = bb,
+        };
+
+        qemu_event_init(&data.event, false);
+
         qemu_mutex_lock_iothread();
-        aio_context_acquire(blk_get_aio_context(bb));
-        /* Skip unallocated sectors; intentionally treats failure as
-         * an allocated sector */
-        while (cur_sector < total_sectors &&
-               !bdrv_is_allocated(blk_bs(bb), cur_sector,
-                                  MAX_IS_ALLOCATED_SEARCH, &nr_sectors)) {
-            cur_sector += nr_sectors;
-        }
-        aio_context_release(blk_get_aio_context(bb));
+        bb_ctx = blk_get_aio_context(bb);
         qemu_mutex_unlock_iothread();
+
+        co = qemu_coroutine_create(mig_next_allocated_cluster, &data);
+        aio_co_schedule(bb_ctx, co);
+        qemu_event_wait(&data.event);
     }
 
     if (cur_sector >= total_sectors) {

* Re: [Qemu-devel] [Qemu-block] [PATCH v3] migration/block:limit the time used for block migration
  2017-04-10 13:52         ` [Qemu-devel] [Qemu-block] " Stefan Hajnoczi
@ 2017-04-11 12:19           ` 858585 jemmy
  2017-04-11 13:06             ` 858585 jemmy
  2017-05-03  3:44           ` 858585 jemmy
  1 sibling, 1 reply; 19+ messages in thread
From: 858585 jemmy @ 2017-04-11 12:19 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Stefan Hajnoczi, Fam Zheng, qemu-block, quintela, qemu-devel,
	dgilbert, Lidong Chen

On Mon, Apr 10, 2017 at 9:52 PM, Stefan Hajnoczi <stefanha@gmail.com> wrote:
> On Sat, Apr 08, 2017 at 09:17:58PM +0800, 858585 jemmy wrote:
>> On Fri, Apr 7, 2017 at 7:34 PM, Stefan Hajnoczi <stefanha@redhat.com> wrote:
>> > On Fri, Apr 07, 2017 at 09:30:33AM +0800, 858585 jemmy wrote:
>> >> On Thu, Apr 6, 2017 at 10:02 PM, Stefan Hajnoczi <stefanha@redhat.com> wrote:
>> >> > On Wed, Apr 05, 2017 at 05:27:58PM +0800, jemmy858585@gmail.com wrote:
>> >> >
>> >> > A proper solution is to refactor the synchronous code to make it
>> >> > asynchronous.  This might require invoking the system call from a
>> >> > thread pool worker.
>> >> >
>> >>
>> >> yes, i agree with you, but this is a big change.
>> >> I will try to find how to optimize this code, maybe need a long time.
>> >>
>> >> this patch is not a perfect solution, but can alleviate the problem.
>> >
>> > Let's try to understand the problem fully first.
>> >
>>
>> when migrate the vm with high speed, i find vnc response slowly sometime.
>> not only vnc response slowly, virsh console aslo response slowly sometime.
>> and the guest os block io performance is also reduce.
>>
>> the bug can be reproduce by this command:
>> virsh migrate-setspeed 165cf436-312f-47e7-90f2-f8aa63f34893 900
>> virsh migrate --live 165cf436-312f-47e7-90f2-f8aa63f34893
>> --copy-storage-inc qemu+ssh://10.59.163.38/system
>>
>> and --copy-storage-all have no problem.
>> virsh migrate --live 165cf436-312f-47e7-90f2-f8aa63f34893
>> --copy-storage-all qemu+ssh://10.59.163.38/system
>>
>> compare the difference between --copy-storage-inc and
>> --copy-storage-all. i find out the reason is
>> mig_save_device_bulk invoke bdrv_is_allocated, but bdrv_is_allocated
>> is synchronous and maybe wait
>> for a long time.
>>
>> i write this code to measure the time used by  brdrv_is_allocated()
>>
>>  279     static int max_time = 0;
>>  280     int tmp;
>>
>>  288             clock_gettime(CLOCK_MONOTONIC_RAW, &ts1);
>>  289             ret = bdrv_is_allocated(blk_bs(bb), cur_sector,
>>  290                                     MAX_IS_ALLOCATED_SEARCH, &nr_sectors);
>>  291             clock_gettime(CLOCK_MONOTONIC_RAW, &ts2);
>>  292
>>  293
>>  294             tmp =  (ts2.tv_sec - ts1.tv_sec)*1000000000L
>>  295                            + (ts2.tv_nsec - ts1.tv_nsec);
>>  296             if (tmp > max_time) {
>>  297                max_time=tmp;
>>  298                fprintf(stderr, "max_time is %d\n", max_time);
>>  299             }
>>
>> the test result is below:
>>
>>  max_time is 37014
>>  max_time is 1075534
>>  max_time is 17180913
>>  max_time is 28586762
>>  max_time is 49563584
>>  max_time is 103085447
>>  max_time is 110836833
>>  max_time is 120331438
>>
>> bdrv_is_allocated is called after qemu_mutex_lock_iothread.
>> and the main thread is also call qemu_mutex_lock_iothread.
>> so cause the main thread maybe wait for a long time.
>>
>>    if (bmds->shared_base) {
>>         qemu_mutex_lock_iothread();
>>         aio_context_acquire(blk_get_aio_context(bb));
>>         /* Skip unallocated sectors; intentionally treats failure as
>>          * an allocated sector */
>>         while (cur_sector < total_sectors &&
>>                !bdrv_is_allocated(blk_bs(bb), cur_sector,
>>                                   MAX_IS_ALLOCATED_SEARCH, &nr_sectors)) {
>>             cur_sector += nr_sectors;
>>         }
>>         aio_context_release(blk_get_aio_context(bb));
>>         qemu_mutex_unlock_iothread();
>>     }
>>
>> #0  0x00007f107322f264 in __lll_lock_wait () from /lib64/libpthread.so.0
>> #1  0x00007f107322a508 in _L_lock_854 () from /lib64/libpthread.so.0
>> #2  0x00007f107322a3d7 in pthread_mutex_lock () from /lib64/libpthread.so.0
>> #3  0x0000000000949ecb in qemu_mutex_lock (mutex=0xfc51a0) at
>> util/qemu-thread-posix.c:60
>> #4  0x0000000000459e58 in qemu_mutex_lock_iothread () at /root/qemu/cpus.c:1516
>> #5  0x0000000000945322 in os_host_main_loop_wait (timeout=28911939) at
>> util/main-loop.c:258
>> #6  0x00000000009453f2 in main_loop_wait (nonblocking=0) at util/main-loop.c:517
>> #7  0x00000000005c76b4 in main_loop () at vl.c:1898
>> #8  0x00000000005ceb77 in main (argc=49, argv=0x7fff921182b8,
>> envp=0x7fff92118448) at vl.c:4709
>
> The following patch moves bdrv_is_allocated() into bb's AioContext.  It
> will execute without blocking other I/O activity.
>
> Compile-tested only.
I will try this patch.

>
> diff --git a/migration/block.c b/migration/block.c
> index 7734ff7..a5572a4 100644
> --- a/migration/block.c
> +++ b/migration/block.c
> @@ -263,6 +263,29 @@ static void blk_mig_read_cb(void *opaque, int ret)
>      blk_mig_unlock();
>  }
>
> +typedef struct {
> +    int64_t *total_sectors;
> +    int64_t *cur_sector;
> +    BlockBackend *bb;
> +    QemuEvent event;
> +} MigNextAllocatedClusterData;
> +
> +static void coroutine_fn mig_next_allocated_cluster(void *opaque)
> +{
> +    MigNextAllocatedClusterData *data = opaque;
> +    int nr_sectors;
> +
> +    /* Skip unallocated sectors; intentionally treats failure as
> +     * an allocated sector */
> +    while (*data->cur_sector < *data->total_sectors &&
> +           !bdrv_is_allocated(blk_bs(data->bb), *data->cur_sector,
> +                              MAX_IS_ALLOCATED_SEARCH, &nr_sectors)) {
> +        *data->cur_sector += nr_sectors;
> +    }
> +
> +    qemu_event_set(&data->event);
> +}
> +
>  /* Called with no lock taken.  */
>
>  static int mig_save_device_bulk(QEMUFile *f, BlkMigDevState *bmds)
> @@ -274,17 +297,27 @@ static int mig_save_device_bulk(QEMUFile *f, BlkMigDevState *bmds)
>      int nr_sectors;
>
>      if (bmds->shared_base) {
> +        /* Searching for the next allocated cluster can block.  Do it in a
> +         * coroutine inside bb's AioContext.  That way we don't need to hold
> +         * the global mutex while blocked.
> +         */
> +        AioContext *bb_ctx;
> +        Coroutine *co;
> +        MigNextAllocatedClusterData data = {
> +            .cur_sector = &cur_sector,
> +            .total_sectors = &total_sectors,
> +            .bb = bb,
> +        };
> +
> +        qemu_event_init(&data.event, false);
> +
>          qemu_mutex_lock_iothread();
> -        aio_context_acquire(blk_get_aio_context(bb));
> -        /* Skip unallocated sectors; intentionally treats failure as
> -         * an allocated sector */
> -        while (cur_sector < total_sectors &&
> -               !bdrv_is_allocated(blk_bs(bb), cur_sector,
> -                                  MAX_IS_ALLOCATED_SEARCH, &nr_sectors)) {
> -            cur_sector += nr_sectors;
> -        }
> -        aio_context_release(blk_get_aio_context(bb));
> +        bb_ctx = blk_get_aio_context(bb);
>          qemu_mutex_unlock_iothread();
> +
> +        co = qemu_coroutine_create(mig_next_allocated_cluster, &data);
> +        aio_co_schedule(bb_ctx, co);
> +        qemu_event_wait(&data.event);
>      }
>
>      if (cur_sector >= total_sectors) {

* Re: [Qemu-devel] [Qemu-block] [PATCH v3] migration/block:limit the time used for block migration
  2017-04-11 12:19           ` 858585 jemmy
@ 2017-04-11 13:06             ` 858585 jemmy
  2017-04-11 15:32               ` Stefan Hajnoczi
  0 siblings, 1 reply; 19+ messages in thread
From: 858585 jemmy @ 2017-04-11 13:06 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Stefan Hajnoczi, Fam Zheng, qemu-block, quintela, qemu-devel,
	dgilbert, Lidong Chen

On Tue, Apr 11, 2017 at 8:19 PM, 858585 jemmy <jemmy858585@gmail.com> wrote:
> On Mon, Apr 10, 2017 at 9:52 PM, Stefan Hajnoczi <stefanha@gmail.com> wrote:
>> On Sat, Apr 08, 2017 at 09:17:58PM +0800, 858585 jemmy wrote:
>>> On Fri, Apr 7, 2017 at 7:34 PM, Stefan Hajnoczi <stefanha@redhat.com> wrote:
>>> > On Fri, Apr 07, 2017 at 09:30:33AM +0800, 858585 jemmy wrote:
>>> >> On Thu, Apr 6, 2017 at 10:02 PM, Stefan Hajnoczi <stefanha@redhat.com> wrote:
>>> >> > On Wed, Apr 05, 2017 at 05:27:58PM +0800, jemmy858585@gmail.com wrote:
>>> >> >
>>> >> > A proper solution is to refactor the synchronous code to make it
>>> >> > asynchronous.  This might require invoking the system call from a
>>> >> > thread pool worker.
>>> >> >
>>> >>
>>> >> yes, i agree with you, but this is a big change.
>>> >> I will try to find how to optimize this code, maybe need a long time.
>>> >>
>>> >> this patch is not a perfect solution, but can alleviate the problem.
>>> >
>>> > Let's try to understand the problem fully first.
>>> >
>>>
>>> when migrate the vm with high speed, i find vnc response slowly sometime.
>>> not only vnc response slowly, virsh console aslo response slowly sometime.
>>> and the guest os block io performance is also reduce.
>>>
>>> the bug can be reproduce by this command:
>>> virsh migrate-setspeed 165cf436-312f-47e7-90f2-f8aa63f34893 900
>>> virsh migrate --live 165cf436-312f-47e7-90f2-f8aa63f34893
>>> --copy-storage-inc qemu+ssh://10.59.163.38/system
>>>
>>> and --copy-storage-all have no problem.
>>> virsh migrate --live 165cf436-312f-47e7-90f2-f8aa63f34893
>>> --copy-storage-all qemu+ssh://10.59.163.38/system
>>>
>>> compare the difference between --copy-storage-inc and
>>> --copy-storage-all. i find out the reason is
>>> mig_save_device_bulk invoke bdrv_is_allocated, but bdrv_is_allocated
>>> is synchronous and maybe wait
>>> for a long time.
>>>
>>> i write this code to measure the time used by  brdrv_is_allocated()
>>>
>>>  279     static int max_time = 0;
>>>  280     int tmp;
>>>
>>>  288             clock_gettime(CLOCK_MONOTONIC_RAW, &ts1);
>>>  289             ret = bdrv_is_allocated(blk_bs(bb), cur_sector,
>>>  290                                     MAX_IS_ALLOCATED_SEARCH, &nr_sectors);
>>>  291             clock_gettime(CLOCK_MONOTONIC_RAW, &ts2);
>>>  292
>>>  293
>>>  294             tmp =  (ts2.tv_sec - ts1.tv_sec)*1000000000L
>>>  295                            + (ts2.tv_nsec - ts1.tv_nsec);
>>>  296             if (tmp > max_time) {
>>>  297                max_time=tmp;
>>>  298                fprintf(stderr, "max_time is %d\n", max_time);
>>>  299             }
>>>
>>> the test result is below:
>>>
>>>  max_time is 37014
>>>  max_time is 1075534
>>>  max_time is 17180913
>>>  max_time is 28586762
>>>  max_time is 49563584
>>>  max_time is 103085447
>>>  max_time is 110836833
>>>  max_time is 120331438
>>>
>>> bdrv_is_allocated is called after qemu_mutex_lock_iothread.
>>> and the main thread is also call qemu_mutex_lock_iothread.
>>> so cause the main thread maybe wait for a long time.
>>>
>>>    if (bmds->shared_base) {
>>>         qemu_mutex_lock_iothread();
>>>         aio_context_acquire(blk_get_aio_context(bb));
>>>         /* Skip unallocated sectors; intentionally treats failure as
>>>          * an allocated sector */
>>>         while (cur_sector < total_sectors &&
>>>                !bdrv_is_allocated(blk_bs(bb), cur_sector,
>>>                                   MAX_IS_ALLOCATED_SEARCH, &nr_sectors)) {
>>>             cur_sector += nr_sectors;
>>>         }
>>>         aio_context_release(blk_get_aio_context(bb));
>>>         qemu_mutex_unlock_iothread();
>>>     }
>>>
>>> #0  0x00007f107322f264 in __lll_lock_wait () from /lib64/libpthread.so.0
>>> #1  0x00007f107322a508 in _L_lock_854 () from /lib64/libpthread.so.0
>>> #2  0x00007f107322a3d7 in pthread_mutex_lock () from /lib64/libpthread.so.0
>>> #3  0x0000000000949ecb in qemu_mutex_lock (mutex=0xfc51a0) at
>>> util/qemu-thread-posix.c:60
>>> #4  0x0000000000459e58 in qemu_mutex_lock_iothread () at /root/qemu/cpus.c:1516
>>> #5  0x0000000000945322 in os_host_main_loop_wait (timeout=28911939) at
>>> util/main-loop.c:258
>>> #6  0x00000000009453f2 in main_loop_wait (nonblocking=0) at util/main-loop.c:517
>>> #7  0x00000000005c76b4 in main_loop () at vl.c:1898
>>> #8  0x00000000005ceb77 in main (argc=49, argv=0x7fff921182b8,
>>> envp=0x7fff92118448) at vl.c:4709
>>
>> The following patch moves bdrv_is_allocated() into bb's AioContext.  It
>> will execute without blocking other I/O activity.
>>
>> Compile-tested only.
> I will try this patch.

Hi Stefan:
It works for virtio. I will test IDE later.
Do you have any suggestions about the test cases?
Thanks.

>
>>
>> diff --git a/migration/block.c b/migration/block.c
>> index 7734ff7..a5572a4 100644
>> --- a/migration/block.c
>> +++ b/migration/block.c
>> @@ -263,6 +263,29 @@ static void blk_mig_read_cb(void *opaque, int ret)
>>      blk_mig_unlock();
>>  }
>>
>> +typedef struct {
>> +    int64_t *total_sectors;
>> +    int64_t *cur_sector;
>> +    BlockBackend *bb;
>> +    QemuEvent event;
>> +} MigNextAllocatedClusterData;
>> +
>> +static void coroutine_fn mig_next_allocated_cluster(void *opaque)
>> +{
>> +    MigNextAllocatedClusterData *data = opaque;
>> +    int nr_sectors;
>> +
>> +    /* Skip unallocated sectors; intentionally treats failure as
>> +     * an allocated sector */
>> +    while (*data->cur_sector < *data->total_sectors &&
>> +           !bdrv_is_allocated(blk_bs(data->bb), *data->cur_sector,
>> +                              MAX_IS_ALLOCATED_SEARCH, &nr_sectors)) {
>> +        *data->cur_sector += nr_sectors;
>> +    }
>> +
>> +    qemu_event_set(&data->event);
>> +}
>> +
>>  /* Called with no lock taken.  */
>>
>>  static int mig_save_device_bulk(QEMUFile *f, BlkMigDevState *bmds)
>> @@ -274,17 +297,27 @@ static int mig_save_device_bulk(QEMUFile *f, BlkMigDevState *bmds)
>>      int nr_sectors;
>>
>>      if (bmds->shared_base) {
>> +        /* Searching for the next allocated cluster can block.  Do it in a
>> +         * coroutine inside bb's AioContext.  That way we don't need to hold
>> +         * the global mutex while blocked.
>> +         */
>> +        AioContext *bb_ctx;
>> +        Coroutine *co;
>> +        MigNextAllocatedClusterData data = {
>> +            .cur_sector = &cur_sector,
>> +            .total_sectors = &total_sectors,
>> +            .bb = bb,
>> +        };
>> +
>> +        qemu_event_init(&data.event, false);
>> +
>>          qemu_mutex_lock_iothread();
>> -        aio_context_acquire(blk_get_aio_context(bb));
>> -        /* Skip unallocated sectors; intentionally treats failure as
>> -         * an allocated sector */
>> -        while (cur_sector < total_sectors &&
>> -               !bdrv_is_allocated(blk_bs(bb), cur_sector,
>> -                                  MAX_IS_ALLOCATED_SEARCH, &nr_sectors)) {
>> -            cur_sector += nr_sectors;
>> -        }
>> -        aio_context_release(blk_get_aio_context(bb));
>> +        bb_ctx = blk_get_aio_context(bb);
>>          qemu_mutex_unlock_iothread();
>> +
>> +        co = qemu_coroutine_create(mig_next_allocated_cluster, &data);
>> +        aio_co_schedule(bb_ctx, co);
>> +        qemu_event_wait(&data.event);
>>      }
>>
>>      if (cur_sector >= total_sectors) {


* Re: [Qemu-devel] [Qemu-block] [PATCH v3] migration/block:limit the time used for block migration
  2017-04-11 13:06             ` 858585 jemmy
@ 2017-04-11 15:32               ` Stefan Hajnoczi
  0 siblings, 0 replies; 19+ messages in thread
From: Stefan Hajnoczi @ 2017-04-11 15:32 UTC (permalink / raw)
  To: 858585 jemmy
  Cc: Stefan Hajnoczi, Fam Zheng, qemu-block, quintela, qemu-devel,
	dgilbert, Lidong Chen


On Tue, Apr 11, 2017 at 09:06:37PM +0800, 858585 jemmy wrote:
> On Tue, Apr 11, 2017 at 8:19 PM, 858585 jemmy <jemmy858585@gmail.com> wrote:
> > On Mon, Apr 10, 2017 at 9:52 PM, Stefan Hajnoczi <stefanha@gmail.com> wrote:
> >> On Sat, Apr 08, 2017 at 09:17:58PM +0800, 858585 jemmy wrote:
> >>> On Fri, Apr 7, 2017 at 7:34 PM, Stefan Hajnoczi <stefanha@redhat.com> wrote:
> >>> > On Fri, Apr 07, 2017 at 09:30:33AM +0800, 858585 jemmy wrote:
> >>> >> On Thu, Apr 6, 2017 at 10:02 PM, Stefan Hajnoczi <stefanha@redhat.com> wrote:
> >>> >> > On Wed, Apr 05, 2017 at 05:27:58PM +0800, jemmy858585@gmail.com wrote:
> >>> >> >
> >>> >> > A proper solution is to refactor the synchronous code to make it
> >>> >> > asynchronous.  This might require invoking the system call from a
> >>> >> > thread pool worker.
> >>> >> >
> >>> >>
> >>> >> Yes, I agree with you, but this is a big change.
> >>> >> I will try to find out how to optimize this code; it may take a long time.
> >>> >>
> >>> >> This patch is not a perfect solution, but it can alleviate the problem.
> >>> >
> >>> > Let's try to understand the problem fully first.
> >>> >
> >>>
> >>> When migrating the VM at high speed, I find that VNC sometimes responds
> >>> slowly. Not only VNC: the virsh console also responds slowly at times,
> >>> and the guest OS block I/O performance is reduced as well.
> >>>
> >>> The bug can be reproduced with these commands:
> >>> virsh migrate-setspeed 165cf436-312f-47e7-90f2-f8aa63f34893 900
> >>> virsh migrate --live 165cf436-312f-47e7-90f2-f8aa63f34893
> >>> --copy-storage-inc qemu+ssh://10.59.163.38/system
> >>>
> >>> With --copy-storage-all there is no problem:
> >>> virsh migrate --live 165cf436-312f-47e7-90f2-f8aa63f34893
> >>> --copy-storage-all qemu+ssh://10.59.163.38/system
> >>>
> >>> Comparing --copy-storage-inc and --copy-storage-all, I found that the
> >>> reason is that mig_save_device_bulk invokes bdrv_is_allocated, but
> >>> bdrv_is_allocated is synchronous and may wait for a long time.
> >>>
> >>> I wrote this code to measure the time used by bdrv_is_allocated():
> >>>
> >>>     static int max_time = 0;
> >>>     int tmp;
> >>>     ...
> >>>     clock_gettime(CLOCK_MONOTONIC_RAW, &ts1);
> >>>     ret = bdrv_is_allocated(blk_bs(bb), cur_sector,
> >>>                             MAX_IS_ALLOCATED_SEARCH, &nr_sectors);
> >>>     clock_gettime(CLOCK_MONOTONIC_RAW, &ts2);
> >>>
> >>>     tmp = (ts2.tv_sec - ts1.tv_sec) * 1000000000L
> >>>           + (ts2.tv_nsec - ts1.tv_nsec);
> >>>     if (tmp > max_time) {
> >>>         max_time = tmp;
> >>>         fprintf(stderr, "max_time is %d\n", max_time);
> >>>     }
> >>>
> >>> the test result is below:
> >>>
> >>>  max_time is 37014
> >>>  max_time is 1075534
> >>>  max_time is 17180913
> >>>  max_time is 28586762
> >>>  max_time is 49563584
> >>>  max_time is 103085447
> >>>  max_time is 110836833
> >>>  max_time is 120331438
> >>>
> >>> bdrv_is_allocated is called after qemu_mutex_lock_iothread, and the main
> >>> thread also calls qemu_mutex_lock_iothread, so the main thread may have
> >>> to wait for a long time.
> >>>
> >>>    if (bmds->shared_base) {
> >>>         qemu_mutex_lock_iothread();
> >>>         aio_context_acquire(blk_get_aio_context(bb));
> >>>         /* Skip unallocated sectors; intentionally treats failure as
> >>>          * an allocated sector */
> >>>         while (cur_sector < total_sectors &&
> >>>                !bdrv_is_allocated(blk_bs(bb), cur_sector,
> >>>                                   MAX_IS_ALLOCATED_SEARCH, &nr_sectors)) {
> >>>             cur_sector += nr_sectors;
> >>>         }
> >>>         aio_context_release(blk_get_aio_context(bb));
> >>>         qemu_mutex_unlock_iothread();
> >>>     }
> >>>
> >>> #0  0x00007f107322f264 in __lll_lock_wait () from /lib64/libpthread.so.0
> >>> #1  0x00007f107322a508 in _L_lock_854 () from /lib64/libpthread.so.0
> >>> #2  0x00007f107322a3d7 in pthread_mutex_lock () from /lib64/libpthread.so.0
> >>> #3  0x0000000000949ecb in qemu_mutex_lock (mutex=0xfc51a0) at
> >>> util/qemu-thread-posix.c:60
> >>> #4  0x0000000000459e58 in qemu_mutex_lock_iothread () at /root/qemu/cpus.c:1516
> >>> #5  0x0000000000945322 in os_host_main_loop_wait (timeout=28911939) at
> >>> util/main-loop.c:258
> >>> #6  0x00000000009453f2 in main_loop_wait (nonblocking=0) at util/main-loop.c:517
> >>> #7  0x00000000005c76b4 in main_loop () at vl.c:1898
> >>> #8  0x00000000005ceb77 in main (argc=49, argv=0x7fff921182b8,
> >>> envp=0x7fff92118448) at vl.c:4709
> >>
> >> The following patch moves bdrv_is_allocated() into bb's AioContext.  It
> >> will execute without blocking other I/O activity.
> >>
> >> Compile-tested only.
> > I will try this patch.
> 
> Hi Stefan:
> It works for virtio. I will test IDE later.
> Do you have any suggestions about the test cases?

1. When testing virtio-blk it's interesting to try both -object
   iothread,id=iothread0 -device virtio-blk-pci,iothread=iothread0,...
   and without iothread.  The code paths are different so there may be
   bugs that only occur with iothread or without iothread.

2. The guest should be submitting I/O requests to increase the chance of
   race conditions.  You could run "dd if=/dev/vda of=/dev/null
   iflag=direct bs=4k &" 8 times inside the guest to generate I/O.

Thanks,
Stefan



* Re: [Qemu-devel] [Qemu-block] [PATCH v3] migration/block:limit the time used for block migration
  2017-04-10 13:52         ` [Qemu-devel] [Qemu-block] " Stefan Hajnoczi
  2017-04-11 12:19           ` 858585 jemmy
@ 2017-05-03  3:44           ` 858585 jemmy
  2017-05-03 13:31             ` 858585 jemmy
  1 sibling, 1 reply; 19+ messages in thread
From: 858585 jemmy @ 2017-05-03  3:44 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Stefan Hajnoczi, Fam Zheng, qemu block, Juan Quintela,
	qemu-devel, Dave Gilbert, Lidong Chen

On Mon, Apr 10, 2017 at 9:52 PM, Stefan Hajnoczi <stefanha@gmail.com> wrote:
> On Sat, Apr 08, 2017 at 09:17:58PM +0800, 858585 jemmy wrote:
>> On Fri, Apr 7, 2017 at 7:34 PM, Stefan Hajnoczi <stefanha@redhat.com> wrote:
>> > On Fri, Apr 07, 2017 at 09:30:33AM +0800, 858585 jemmy wrote:
>> >> On Thu, Apr 6, 2017 at 10:02 PM, Stefan Hajnoczi <stefanha@redhat.com> wrote:
>> >> > On Wed, Apr 05, 2017 at 05:27:58PM +0800, jemmy858585@gmail.com wrote:
>> >> >
>> >> > A proper solution is to refactor the synchronous code to make it
>> >> > asynchronous.  This might require invoking the system call from a
>> >> > thread pool worker.
>> >> >
>> >>
>> >> Yes, I agree with you, but this is a big change.
>> >> I will try to find out how to optimize this code; it may take a long time.
>> >>
>> >> This patch is not a perfect solution, but it can alleviate the problem.
>> >
>> > Let's try to understand the problem fully first.
>> >
>>
>> When migrating the VM at high speed, I find that VNC sometimes responds
>> slowly. Not only VNC: the virsh console also responds slowly at times,
>> and the guest OS block I/O performance is reduced as well.
>>
>> The bug can be reproduced with these commands:
>> virsh migrate-setspeed 165cf436-312f-47e7-90f2-f8aa63f34893 900
>> virsh migrate --live 165cf436-312f-47e7-90f2-f8aa63f34893
>> --copy-storage-inc qemu+ssh://10.59.163.38/system
>>
>> With --copy-storage-all there is no problem:
>> virsh migrate --live 165cf436-312f-47e7-90f2-f8aa63f34893
>> --copy-storage-all qemu+ssh://10.59.163.38/system
>>
>> Comparing --copy-storage-inc and --copy-storage-all, I found that the
>> reason is that mig_save_device_bulk invokes bdrv_is_allocated, but
>> bdrv_is_allocated is synchronous and may wait for a long time.
>>
>> I wrote this code to measure the time used by bdrv_is_allocated():
>>
>>     static int max_time = 0;
>>     int tmp;
>>     ...
>>     clock_gettime(CLOCK_MONOTONIC_RAW, &ts1);
>>     ret = bdrv_is_allocated(blk_bs(bb), cur_sector,
>>                             MAX_IS_ALLOCATED_SEARCH, &nr_sectors);
>>     clock_gettime(CLOCK_MONOTONIC_RAW, &ts2);
>>
>>     tmp = (ts2.tv_sec - ts1.tv_sec) * 1000000000L
>>           + (ts2.tv_nsec - ts1.tv_nsec);
>>     if (tmp > max_time) {
>>         max_time = tmp;
>>         fprintf(stderr, "max_time is %d\n", max_time);
>>     }
>>
>> the test result is below:
>>
>>  max_time is 37014
>>  max_time is 1075534
>>  max_time is 17180913
>>  max_time is 28586762
>>  max_time is 49563584
>>  max_time is 103085447
>>  max_time is 110836833
>>  max_time is 120331438
>>
>> bdrv_is_allocated is called after qemu_mutex_lock_iothread, and the main
>> thread also calls qemu_mutex_lock_iothread, so the main thread may have
>> to wait for a long time.
>>
>>    if (bmds->shared_base) {
>>         qemu_mutex_lock_iothread();
>>         aio_context_acquire(blk_get_aio_context(bb));
>>         /* Skip unallocated sectors; intentionally treats failure as
>>          * an allocated sector */
>>         while (cur_sector < total_sectors &&
>>                !bdrv_is_allocated(blk_bs(bb), cur_sector,
>>                                   MAX_IS_ALLOCATED_SEARCH, &nr_sectors)) {
>>             cur_sector += nr_sectors;
>>         }
>>         aio_context_release(blk_get_aio_context(bb));
>>         qemu_mutex_unlock_iothread();
>>     }
>>
>> #0  0x00007f107322f264 in __lll_lock_wait () from /lib64/libpthread.so.0
>> #1  0x00007f107322a508 in _L_lock_854 () from /lib64/libpthread.so.0
>> #2  0x00007f107322a3d7 in pthread_mutex_lock () from /lib64/libpthread.so.0
>> #3  0x0000000000949ecb in qemu_mutex_lock (mutex=0xfc51a0) at
>> util/qemu-thread-posix.c:60
>> #4  0x0000000000459e58 in qemu_mutex_lock_iothread () at /root/qemu/cpus.c:1516
>> #5  0x0000000000945322 in os_host_main_loop_wait (timeout=28911939) at
>> util/main-loop.c:258
>> #6  0x00000000009453f2 in main_loop_wait (nonblocking=0) at util/main-loop.c:517
>> #7  0x00000000005c76b4 in main_loop () at vl.c:1898
>> #8  0x00000000005ceb77 in main (argc=49, argv=0x7fff921182b8,
>> envp=0x7fff92118448) at vl.c:4709
>
> The following patch moves bdrv_is_allocated() into bb's AioContext.  It
> will execute without blocking other I/O activity.
>
> Compile-tested only.
>
> diff --git a/migration/block.c b/migration/block.c
> index 7734ff7..a5572a4 100644
> --- a/migration/block.c
> +++ b/migration/block.c
> @@ -263,6 +263,29 @@ static void blk_mig_read_cb(void *opaque, int ret)
>      blk_mig_unlock();
>  }
>
> +typedef struct {
> +    int64_t *total_sectors;
> +    int64_t *cur_sector;
> +    BlockBackend *bb;
> +    QemuEvent event;
> +} MigNextAllocatedClusterData;
> +
> +static void coroutine_fn mig_next_allocated_cluster(void *opaque)
> +{
> +    MigNextAllocatedClusterData *data = opaque;
> +    int nr_sectors;
> +
> +    /* Skip unallocated sectors; intentionally treats failure as
> +     * an allocated sector */
> +    while (*data->cur_sector < *data->total_sectors &&
> +           !bdrv_is_allocated(blk_bs(data->bb), *data->cur_sector,
> +                              MAX_IS_ALLOCATED_SEARCH, &nr_sectors)) {
> +        *data->cur_sector += nr_sectors;
> +    }
> +
> +    qemu_event_set(&data->event);
> +}
> +
>  /* Called with no lock taken.  */
>
>  static int mig_save_device_bulk(QEMUFile *f, BlkMigDevState *bmds)
> @@ -274,17 +297,27 @@ static int mig_save_device_bulk(QEMUFile *f, BlkMigDevState *bmds)
>      int nr_sectors;
>
>      if (bmds->shared_base) {
> +        /* Searching for the next allocated cluster can block.  Do it in a
> +         * coroutine inside bb's AioContext.  That way we don't need to hold
> +         * the global mutex while blocked.
> +         */
> +        AioContext *bb_ctx;
> +        Coroutine *co;
> +        MigNextAllocatedClusterData data = {
> +            .cur_sector = &cur_sector,
> +            .total_sectors = &total_sectors,
> +            .bb = bb,
> +        };
> +
> +        qemu_event_init(&data.event, false);
> +
>          qemu_mutex_lock_iothread();
> -        aio_context_acquire(blk_get_aio_context(bb));
> -        /* Skip unallocated sectors; intentionally treats failure as
> -         * an allocated sector */
> -        while (cur_sector < total_sectors &&
> -               !bdrv_is_allocated(blk_bs(bb), cur_sector,
> -                                  MAX_IS_ALLOCATED_SEARCH, &nr_sectors)) {
> -            cur_sector += nr_sectors;
> -        }
> -        aio_context_release(blk_get_aio_context(bb));
> +        bb_ctx = blk_get_aio_context(bb);
>          qemu_mutex_unlock_iothread();
> +

Hi Stefan:
    bb_ctx may change after qemu_mutex_unlock_iothread().
    blk_set_aio_context may be invoked by a vcpu thread, like this:
              blk_set_aio_context
                   virtio_blk_data_plane_stop
                            virtio_pci_stop_ioeventfd
                                  virtio_pci_common_write

    Run this command in the guest OS:
          while [ 1 ]; do rmmod virtio_blk; modprobe virtio_blk; done

    I wrote this code to test whether bb_ctx changes:

    qemu_mutex_lock_iothread();
    bb_ctx = blk_get_aio_context(bb);
    qemu_mutex_unlock_iothread();

    g_usleep(100000);    /* 100 ms; sleep(0.1) would truncate to sleep(0) */

    qemu_mutex_lock_iothread();
    bb_ctx1 = blk_get_aio_context(bb);
    qemu_mutex_unlock_iothread();

    if (bb_ctx != bb_ctx1) {
         fprintf(stderr, "bb_ctx is not bb_ctx1\n");
    }

    I found that bb_ctx is indeed not equal to bb_ctx1, so I changed the code
    to move aio_co_schedule into the qemu_mutex_lock_iothread block:

    if (bmds->shared_base) {
        AioContext *bb_ctx;
        Coroutine *co;
        MigNextAllocatedClusterData data = {
            .cur_sector = &cur_sector,
            .total_sectors = &total_sectors,
            .bb = bb,
        };
        qemu_event_init(&data.event, false);

        qemu_mutex_lock_iothread();
        bb_ctx = blk_get_aio_context(bb);
        co = qemu_coroutine_create(mig_next_allocated_cluster, &data);
        aio_co_schedule(bb_ctx, co);
        qemu_mutex_unlock_iothread();

        qemu_event_wait(&data.event);
    }

    I tested four cases with this patch:

    1. qemu virtio_blk with iothreads: run dd 8 times inside the guest,
       then migrate.
    2. qemu virtio_blk without iothreads: run dd 8 times inside the guest,
       then migrate.
    3. qemu ide: run dd 8 times inside the guest, then migrate.
    4. qemu virtio_blk with iothreads: run "rmmod virtio_blk; modprobe
       virtio_blk" inside the guest, then migrate.

    All the test cases passed. I will send the patch later.
    Thanks.

> +        co = qemu_coroutine_create(mig_next_allocated_cluster, &data);
> +        aio_co_schedule(bb_ctx, co);
> +        qemu_event_wait(&data.event);
>      }
>
>      if (cur_sector >= total_sectors) {


* Re: [Qemu-devel] [Qemu-block] [PATCH v3] migration/block:limit the time used for block migration
  2017-05-03  3:44           ` 858585 jemmy
@ 2017-05-03 13:31             ` 858585 jemmy
  0 siblings, 0 replies; 19+ messages in thread
From: 858585 jemmy @ 2017-05-03 13:31 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Stefan Hajnoczi, Fam Zheng, qemu block, Juan Quintela,
	qemu-devel, Dave Gilbert, Lidong Chen

On Wed, May 3, 2017 at 11:44 AM, 858585 jemmy <jemmy858585@gmail.com> wrote:
> On Mon, Apr 10, 2017 at 9:52 PM, Stefan Hajnoczi <stefanha@gmail.com> wrote:
>> On Sat, Apr 08, 2017 at 09:17:58PM +0800, 858585 jemmy wrote:
>>> On Fri, Apr 7, 2017 at 7:34 PM, Stefan Hajnoczi <stefanha@redhat.com> wrote:
>>> > On Fri, Apr 07, 2017 at 09:30:33AM +0800, 858585 jemmy wrote:
>>> >> On Thu, Apr 6, 2017 at 10:02 PM, Stefan Hajnoczi <stefanha@redhat.com> wrote:
>>> >> > On Wed, Apr 05, 2017 at 05:27:58PM +0800, jemmy858585@gmail.com wrote:
>>> >> >
>>> >> > A proper solution is to refactor the synchronous code to make it
>>> >> > asynchronous.  This might require invoking the system call from a
>>> >> > thread pool worker.
>>> >> >
>>> >>
>>> >> Yes, I agree with you, but this is a big change.
>>> >> I will try to find out how to optimize this code; it may take a long time.
>>> >>
>>> >> This patch is not a perfect solution, but it can alleviate the problem.
>>> >
>>> > Let's try to understand the problem fully first.
>>> >
>>>
>>> When migrating the VM at high speed, I find that VNC sometimes responds
>>> slowly. Not only VNC: the virsh console also responds slowly at times,
>>> and the guest OS block I/O performance is reduced as well.
>>>
>>> The bug can be reproduced with these commands:
>>> virsh migrate-setspeed 165cf436-312f-47e7-90f2-f8aa63f34893 900
>>> virsh migrate --live 165cf436-312f-47e7-90f2-f8aa63f34893
>>> --copy-storage-inc qemu+ssh://10.59.163.38/system
>>>
>>> With --copy-storage-all there is no problem:
>>> virsh migrate --live 165cf436-312f-47e7-90f2-f8aa63f34893
>>> --copy-storage-all qemu+ssh://10.59.163.38/system
>>>
>>> Comparing --copy-storage-inc and --copy-storage-all, I found that the
>>> reason is that mig_save_device_bulk invokes bdrv_is_allocated, but
>>> bdrv_is_allocated is synchronous and may wait for a long time.
>>>
>>> I wrote this code to measure the time used by bdrv_is_allocated():
>>>
>>>     static int max_time = 0;
>>>     int tmp;
>>>     ...
>>>     clock_gettime(CLOCK_MONOTONIC_RAW, &ts1);
>>>     ret = bdrv_is_allocated(blk_bs(bb), cur_sector,
>>>                             MAX_IS_ALLOCATED_SEARCH, &nr_sectors);
>>>     clock_gettime(CLOCK_MONOTONIC_RAW, &ts2);
>>>
>>>     tmp = (ts2.tv_sec - ts1.tv_sec) * 1000000000L
>>>           + (ts2.tv_nsec - ts1.tv_nsec);
>>>     if (tmp > max_time) {
>>>         max_time = tmp;
>>>         fprintf(stderr, "max_time is %d\n", max_time);
>>>     }
>>>
>>> the test result is below:
>>>
>>>  max_time is 37014
>>>  max_time is 1075534
>>>  max_time is 17180913
>>>  max_time is 28586762
>>>  max_time is 49563584
>>>  max_time is 103085447
>>>  max_time is 110836833
>>>  max_time is 120331438
>>>
>>> bdrv_is_allocated is called after qemu_mutex_lock_iothread, and the main
>>> thread also calls qemu_mutex_lock_iothread, so the main thread may have
>>> to wait for a long time.
>>>
>>>    if (bmds->shared_base) {
>>>         qemu_mutex_lock_iothread();
>>>         aio_context_acquire(blk_get_aio_context(bb));
>>>         /* Skip unallocated sectors; intentionally treats failure as
>>>          * an allocated sector */
>>>         while (cur_sector < total_sectors &&
>>>                !bdrv_is_allocated(blk_bs(bb), cur_sector,
>>>                                   MAX_IS_ALLOCATED_SEARCH, &nr_sectors)) {
>>>             cur_sector += nr_sectors;
>>>         }
>>>         aio_context_release(blk_get_aio_context(bb));
>>>         qemu_mutex_unlock_iothread();
>>>     }
>>>
>>> #0  0x00007f107322f264 in __lll_lock_wait () from /lib64/libpthread.so.0
>>> #1  0x00007f107322a508 in _L_lock_854 () from /lib64/libpthread.so.0
>>> #2  0x00007f107322a3d7 in pthread_mutex_lock () from /lib64/libpthread.so.0
>>> #3  0x0000000000949ecb in qemu_mutex_lock (mutex=0xfc51a0) at
>>> util/qemu-thread-posix.c:60
>>> #4  0x0000000000459e58 in qemu_mutex_lock_iothread () at /root/qemu/cpus.c:1516
>>> #5  0x0000000000945322 in os_host_main_loop_wait (timeout=28911939) at
>>> util/main-loop.c:258
>>> #6  0x00000000009453f2 in main_loop_wait (nonblocking=0) at util/main-loop.c:517
>>> #7  0x00000000005c76b4 in main_loop () at vl.c:1898
>>> #8  0x00000000005ceb77 in main (argc=49, argv=0x7fff921182b8,
>>> envp=0x7fff92118448) at vl.c:4709
>>
>> The following patch moves bdrv_is_allocated() into bb's AioContext.  It
>> will execute without blocking other I/O activity.
>>
>> Compile-tested only.
>>
>> diff --git a/migration/block.c b/migration/block.c
>> index 7734ff7..a5572a4 100644
>> --- a/migration/block.c
>> +++ b/migration/block.c
>> @@ -263,6 +263,29 @@ static void blk_mig_read_cb(void *opaque, int ret)
>>      blk_mig_unlock();
>>  }
>>
>> +typedef struct {
>> +    int64_t *total_sectors;
>> +    int64_t *cur_sector;
>> +    BlockBackend *bb;
>> +    QemuEvent event;
>> +} MigNextAllocatedClusterData;
>> +
>> +static void coroutine_fn mig_next_allocated_cluster(void *opaque)
>> +{
>> +    MigNextAllocatedClusterData *data = opaque;
>> +    int nr_sectors;
>> +
>> +    /* Skip unallocated sectors; intentionally treats failure as
>> +     * an allocated sector */
>> +    while (*data->cur_sector < *data->total_sectors &&
>> +           !bdrv_is_allocated(blk_bs(data->bb), *data->cur_sector,
>> +                              MAX_IS_ALLOCATED_SEARCH, &nr_sectors)) {
>> +        *data->cur_sector += nr_sectors;
>> +    }
>> +
>> +    qemu_event_set(&data->event);
>> +}
>> +
>>  /* Called with no lock taken.  */
>>
>>  static int mig_save_device_bulk(QEMUFile *f, BlkMigDevState *bmds)
>> @@ -274,17 +297,27 @@ static int mig_save_device_bulk(QEMUFile *f, BlkMigDevState *bmds)
>>      int nr_sectors;
>>
>>      if (bmds->shared_base) {
>> +        /* Searching for the next allocated cluster can block.  Do it in a
>> +         * coroutine inside bb's AioContext.  That way we don't need to hold
>> +         * the global mutex while blocked.
>> +         */
>> +        AioContext *bb_ctx;
>> +        Coroutine *co;
>> +        MigNextAllocatedClusterData data = {
>> +            .cur_sector = &cur_sector,
>> +            .total_sectors = &total_sectors,
>> +            .bb = bb,
>> +        };
>> +
>> +        qemu_event_init(&data.event, false);
>> +
>>          qemu_mutex_lock_iothread();
>> -        aio_context_acquire(blk_get_aio_context(bb));
>> -        /* Skip unallocated sectors; intentionally treats failure as
>> -         * an allocated sector */
>> -        while (cur_sector < total_sectors &&
>> -               !bdrv_is_allocated(blk_bs(bb), cur_sector,
>> -                                  MAX_IS_ALLOCATED_SEARCH, &nr_sectors)) {
>> -            cur_sector += nr_sectors;
>> -        }
>> -        aio_context_release(blk_get_aio_context(bb));
>> +        bb_ctx = blk_get_aio_context(bb);
>>          qemu_mutex_unlock_iothread();
>> +
>
> Hi Stefan:
>     bb_ctx may change after qemu_mutex_unlock_iothread().
>     blk_set_aio_context may be invoked by a vcpu thread, like this:
>               blk_set_aio_context
>                    virtio_blk_data_plane_stop
>                             virtio_pci_stop_ioeventfd
>                                   virtio_pci_common_write
>
>     Run this command in the guest OS:
>           while [ 1 ]; do rmmod virtio_blk; modprobe virtio_blk; done
>
>     I wrote this code to test whether bb_ctx changes:
>
>     qemu_mutex_lock_iothread();
>     bb_ctx = blk_get_aio_context(bb);
>     qemu_mutex_unlock_iothread();
>
>     g_usleep(100000);    /* 100 ms; sleep(0.1) would truncate to sleep(0) */
>
>     qemu_mutex_lock_iothread();
>     bb_ctx1 = blk_get_aio_context(bb);
>     qemu_mutex_unlock_iothread();
>
>     if (bb_ctx != bb_ctx1) {
>          fprintf(stderr, "bb_ctx is not bb_ctx1\n");
>     }
>
>     I found that bb_ctx is indeed not equal to bb_ctx1, so I changed the code
>     to move aio_co_schedule into the qemu_mutex_lock_iothread block:
>
>     if (bmds->shared_base) {
>         AioContext *bb_ctx;
>         Coroutine *co;
>         MigNextAllocatedClusterData data = {
>             .cur_sector = &cur_sector,
>             .total_sectors = &total_sectors,
>             .bb = bb,
>         };
>         qemu_event_init(&data.event, false);
>
>         qemu_mutex_lock_iothread();
>         bb_ctx = blk_get_aio_context(bb);
>         co = qemu_coroutine_create(mig_next_allocated_cluster, &data);
>         aio_co_schedule(bb_ctx, co);
>         qemu_mutex_unlock_iothread();

I find that the AioContext of bs may still change after aio_co_schedule but
before mig_next_allocated_cluster runs.

I use bdrv_inc_in_flight(blk_bs(bb)) and bdrv_dec_in_flight(blk_bs(bb))
to avoid it.
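
A minimal sketch of how those two calls could be placed (the placement here
is my assumption, not necessarily the exact code that will be sent):

    if (bmds->shared_base) {
        AioContext *bb_ctx;
        Coroutine *co;
        MigNextAllocatedClusterData data = {
            .cur_sector = &cur_sector,
            .total_sectors = &total_sectors,
            .bb = bb,
        };

        qemu_event_init(&data.event, false);

        qemu_mutex_lock_iothread();
        /* Pin bs: while bs->in_flight is non-zero, bdrv_set_aio_context()
         * has to drain first, so bb cannot move to another AioContext. */
        bdrv_inc_in_flight(blk_bs(bb));
        bb_ctx = blk_get_aio_context(bb);
        co = qemu_coroutine_create(mig_next_allocated_cluster, &data);
        aio_co_schedule(bb_ctx, co);
        qemu_mutex_unlock_iothread();

        qemu_event_wait(&data.event);
        bdrv_dec_in_flight(blk_bs(bb));
    }

The in-flight counter stays elevated from before blk_get_aio_context() until
after the coroutine has signalled the event, so the AioContext read under the
iothread lock is the one the coroutine actually runs in.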


>
>         qemu_event_wait(&data.event);
>     }
>
>     I tested four cases with this patch:
>
>     1. qemu virtio_blk with iothreads: run dd 8 times inside the guest,
>        then migrate.
>     2. qemu virtio_blk without iothreads: run dd 8 times inside the guest,
>        then migrate.
>     3. qemu ide: run dd 8 times inside the guest, then migrate.
>     4. qemu virtio_blk with iothreads: run "rmmod virtio_blk; modprobe
>        virtio_blk" inside the guest, then migrate.
>
>     All the test cases passed. I will send the patch later.
>     Thanks.
>
>> +        co = qemu_coroutine_create(mig_next_allocated_cluster, &data);
>> +        aio_co_schedule(bb_ctx, co);
>> +        qemu_event_wait(&data.event);
>>      }
>>
>>      if (cur_sector >= total_sectors) {


end of thread, other threads:[~2017-05-03 13:31 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-04-05  9:27 [Qemu-devel] [PATCH v3] migration/block:limit the time used for block migration jemmy858585
2017-04-05  9:34 ` Daniel P. Berrange
2017-04-05 10:44   ` 858585 jemmy
2017-04-06  3:18     ` 858585 jemmy
2017-04-06 14:02 ` Stefan Hajnoczi
2017-04-07  1:30   ` 858585 jemmy
2017-04-07  8:26     ` 858585 jemmy
2017-04-07 11:33     ` Stefan Hajnoczi
2017-04-08 10:09       ` Paolo Bonzini
2017-04-10 10:01         ` [Qemu-devel] [Qemu-block] " Stefan Hajnoczi
2017-04-09 13:06       ` [Qemu-devel] " 858585 jemmy
2017-04-07 11:34     ` Stefan Hajnoczi
2017-04-08 13:17       ` 858585 jemmy
2017-04-10 13:52         ` [Qemu-devel] [Qemu-block] " Stefan Hajnoczi
2017-04-11 12:19           ` 858585 jemmy
2017-04-11 13:06             ` 858585 jemmy
2017-04-11 15:32               ` Stefan Hajnoczi
2017-05-03  3:44           ` 858585 jemmy
2017-05-03 13:31             ` 858585 jemmy
