* [PATCH v2 00/10] migration: improve and cleanup compression
From: guangrong.xiao @ 2018-03-27  9:10 UTC
  To: pbonzini, mst, mtosatti
  Cc: kvm, Xiao Guangrong, qemu-devel, peterx, dgilbert, wei.w.wang,
	jiang.biao2

From: Xiao Guangrong <xiaoguangrong@tencent.com>

Changelog in v2:
Thanks to the reviews from Dave, Peter, Wei and Jiang Biao, the changes
in this version are:
1) include the performance numbers in the cover letter
2) add some comments to explain how z_stream->opaque is used in the
   patchset
3) allocate an internal buffer per thread to store the data to
   be compressed
4) add a new patch that moves some code to ram_save_host_page() so
   that 'goto' can be avoided gracefully
5) split the optimization of compression and decompression into two
   separate patches
6) refine and correct code style


This is the first part of our work to improve compression and make it
more useful in production.

The first patch resolves the problem that the migration thread spends
too much CPU time compressing memory when it jumps to a new block,
which leaves the network severely underutilized.

The second patch fixes the performance issue that too many VM-exits
happen during live migration when compression is used. It is caused by
memory frequently being returned to the kernel, as a large buffer is
allocated and freed for every single call to compress2().
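
As a minimal sketch of the idea (illustrative names, not the actual
patch code): keep one z_stream per compression thread, initialize it
once, and reuse it via deflateReset() for every page, so zlib's
internal buffers stay allocated for the whole migration:

#include <stdint.h>
#include <string.h>
#include <zlib.h>

/* One stream per compression thread, initialized once at setup. */
static int stream_setup(z_stream *stream, int level)
{
    memset(stream, 0, sizeof(*stream));
    return deflateInit(stream, level) == Z_OK ? 0 : -1;
}

/*
 * Reuse the stream for each page: deflateReset() rewinds the state
 * but keeps the internal buffers, so no memory is returned to the
 * kernel between calls (unlike compress2(), which allocates and
 * frees its state on every invocation).
 */
static int compress_page(z_stream *stream, uint8_t *dest, size_t dest_len,
                         const uint8_t *source, size_t source_len)
{
    if (deflateReset(stream) != Z_OK) {
        return -1;
    }

    stream->avail_in = source_len;
    stream->next_in = (uint8_t *)source;
    stream->avail_out = dest_len;
    stream->next_out = dest;

    if (deflate(stream, Z_FINISH) != Z_STREAM_END) {
        return -1;
    }
    return stream->next_out - dest;
}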

The remaining patches clean the code up dramatically.

Performance numbers:
We tested it on my desktop, an i7-4790 with 16G memory, by live
migrating locally a VM with 8 vCPUs and 6G memory, with max-bandwidth
limited to 350. During the migration, a workload with 8 threads
repeatedly wrote to the whole 6G of memory in the VM.
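
The workload was roughly of the following shape (a hypothetical sketch
for illustration only, not the actual test program): 8 threads, each
looping over its slice of a 6G buffer and dirtying every page.

#include <pthread.h>
#include <stdlib.h>

#define THREADS 8
#define TOTAL   (6ULL << 30)   /* 6G, split evenly across threads */

/* Each thread keeps rewriting one byte per 4K page of its slice. */
static void *dirty_mem(void *arg)
{
    char *chunk = arg;

    for (;;) {
        for (size_t off = 0; off < TOTAL / THREADS; off += 4096) {
            chunk[off]++;
        }
    }
    return NULL;
}

int main(void)
{
    char *buf = malloc(TOTAL);
    pthread_t tid[THREADS];

    for (int i = 0; i < THREADS; i++) {
        pthread_create(&tid[i], NULL, dirty_mem,
                       buf + i * (TOTAL / THREADS));
    }
    pthread_join(tid[0], NULL);   /* runs until the test is stopped */
    return 0;
}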

Before this patchset, the bandwidth is ~25 mbps; after applying it,
the bandwidth is ~50 mbps.

We also collected perf data for patches 2 and 3 on our production
environment. Before the patchset:
+  57.88%  kqemu  [kernel.kallsyms]        [k] queued_spin_lock_slowpath
+  10.55%  kqemu  [kernel.kallsyms]        [k] __lock_acquire
+   4.83%  kqemu  [kernel.kallsyms]        [k] flush_tlb_func_common

-   1.16%  kqemu  [kernel.kallsyms]        [k] lock_acquire
   - lock_acquire
      - 15.68% _raw_spin_lock
         + 29.42% __schedule
         + 29.14% perf_event_context_sched_out
         + 23.60% tdp_page_fault
         + 10.54% do_anonymous_page
         + 2.07% kvm_mmu_notifier_invalidate_range_start
         + 1.83% zap_pte_range
         + 1.44% kvm_mmu_notifier_invalidate_range_end


After applying our work:
+  51.92%  kqemu  [kernel.kallsyms]        [k] queued_spin_lock_slowpath
+  14.82%  kqemu  [kernel.kallsyms]        [k] __lock_acquire
+   1.47%  kqemu  [kernel.kallsyms]        [k] mark_lock.clone.0
+   1.46%  kqemu  [kernel.kallsyms]        [k] native_sched_clock
+   1.31%  kqemu  [kernel.kallsyms]        [k] lock_acquire
+   1.24%  kqemu  libc-2.12.so             [.] __memset_sse2

-  14.82%  kqemu  [kernel.kallsyms]        [k] __lock_acquire
   - __lock_acquire
      - 99.75% lock_acquire
         - 18.38% _raw_spin_lock
            + 39.62% tdp_page_fault
            + 31.32% __schedule
            + 27.53% perf_event_context_sched_out
            + 0.58% hrtimer_interrupt


We can see that the TLB flush and mmu-lock contention are gone.

Xiao Guangrong (10):
  migration: stop compressing page in migration thread
  migration: stop compression to allocate and free memory frequently
  migration: stop decompression to allocate and free memory frequently
  migration: detect compression and decompression errors
  migration: introduce control_save_page()
  migration: move some code ram_save_host_page
  migration: move calling control_save_page to the common place
  migration: move calling save_zero_page to the common place
  migration: introduce save_normal_page()
  migration: remove ram_save_compressed_page()

 migration/qemu-file.c |  43 ++++-
 migration/qemu-file.h |   6 +-
 migration/ram.c       | 479 ++++++++++++++++++++++++++++++--------------------
 3 files changed, 322 insertions(+), 206 deletions(-)

-- 
2.14.3

* [PATCH v2 01/10] migration: stop compressing page in migration thread
From: guangrong.xiao @ 2018-03-27  9:10 UTC
  To: pbonzini, mst, mtosatti
  Cc: kvm, Xiao Guangrong, qemu-devel, peterx, dgilbert, wei.w.wang,
	jiang.biao2

From: Xiao Guangrong <xiaoguangrong@tencent.com>

As compression is heavy work, do not do it in the migration thread;
instead, we post the page out as a normal page.

Reviewed-by: Wei Wang <wei.w.wang@intel.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Xiao Guangrong <xiaoguangrong@tencent.com>
---
 migration/ram.c | 32 ++++++++++++++++----------------
 1 file changed, 16 insertions(+), 16 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 0e90efa092..409c847a76 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1137,7 +1137,7 @@ static int ram_save_compressed_page(RAMState *rs, PageSearchStatus *pss,
     int pages = -1;
     uint64_t bytes_xmit = 0;
     uint8_t *p;
-    int ret, blen;
+    int ret;
     RAMBlock *block = pss->block;
     ram_addr_t offset = pss->page << TARGET_PAGE_BITS;
 
@@ -1167,23 +1167,23 @@ static int ram_save_compressed_page(RAMState *rs, PageSearchStatus *pss,
         if (block != rs->last_sent_block) {
             flush_compressed_data(rs);
             pages = save_zero_page(rs, block, offset);
-            if (pages == -1) {
-                /* Make sure the first page is sent out before other pages */
-                bytes_xmit = save_page_header(rs, rs->f, block, offset |
-                                              RAM_SAVE_FLAG_COMPRESS_PAGE);
-                blen = qemu_put_compression_data(rs->f, p, TARGET_PAGE_SIZE,
-                                                 migrate_compress_level());
-                if (blen > 0) {
-                    ram_counters.transferred += bytes_xmit + blen;
-                    ram_counters.normal++;
-                    pages = 1;
-                } else {
-                    qemu_file_set_error(rs->f, blen);
-                    error_report("compressed data failed!");
-                }
-            }
             if (pages > 0) {
                 ram_release_pages(block->idstr, offset, pages);
+            } else {
+                /*
+                 * Make sure the first page is sent out before other pages.
+                 *
+                 * we post it as normal page as compression will take much
+                 * CPU resource.
+                 */
+                ram_counters.transferred += save_page_header(rs, rs->f, block,
+                                                offset | RAM_SAVE_FLAG_PAGE);
+                qemu_put_buffer_async(rs->f, p, TARGET_PAGE_SIZE,
+                                      migrate_release_ram() &
+                                      migration_in_postcopy());
+                ram_counters.transferred += TARGET_PAGE_SIZE;
+                ram_counters.normal++;
+                pages = 1;
             }
         } else {
             pages = save_zero_page(rs, block, offset);
-- 
2.14.3

* [PATCH v2 02/10] migration: stop compression to allocate and free memory frequently
From: guangrong.xiao @ 2018-03-27  9:10 UTC
  To: pbonzini, mst, mtosatti
  Cc: kvm, Xiao Guangrong, qemu-devel, peterx, dgilbert, wei.w.wang,
	jiang.biao2

From: Xiao Guangrong <xiaoguangrong@tencent.com>

The current code uses compress2() to compress memory. compress2()
manages its memory internally, which causes a large buffer to be
allocated and freed very frequently.

Worse, frequently returning memory to the kernel flushes TLBs and
triggers invalidation callbacks on the MMU notifier, which interact
with the KVM MMU and dramatically reduce the performance of the VM.

So we maintain the memory ourselves and reuse it for each
compression.

Reviewed-by: Jiang Biao <jiang.biao2@zte.com.cn>
Signed-off-by: Xiao Guangrong <xiaoguangrong@tencent.com>
---
 migration/qemu-file.c | 39 ++++++++++++++++++++++++++++++++-------
 migration/qemu-file.h |  6 ++++--
 migration/ram.c       | 41 +++++++++++++++++++++++++++++++++--------
 3 files changed, 69 insertions(+), 17 deletions(-)

diff --git a/migration/qemu-file.c b/migration/qemu-file.c
index e85f501f86..e924cc23c5 100644
--- a/migration/qemu-file.c
+++ b/migration/qemu-file.c
@@ -658,8 +658,32 @@ uint64_t qemu_get_be64(QEMUFile *f)
     return v;
 }
 
-/* Compress size bytes of data start at p with specific compression
- * level and store the compressed data to the buffer of f.
+/* return the size after compression, or negative value on error */
+static int qemu_compress_data(z_stream *stream, uint8_t *dest, size_t dest_len,
+                              const uint8_t *source, size_t source_len)
+{
+    int err;
+
+    err = deflateReset(stream);
+    if (err != Z_OK) {
+        return -1;
+    }
+
+    stream->avail_in = source_len;
+    stream->next_in = (uint8_t *)source;
+    stream->avail_out = dest_len;
+    stream->next_out = dest;
+
+    err = deflate(stream, Z_FINISH);
+    if (err != Z_STREAM_END) {
+        return -1;
+    }
+
+    return stream->next_out - dest;
+}
+
+/* Compress size bytes of data start at p and store the compressed
+ * data to the buffer of f.
  *
  * When f is not writable, return -1 if f has no space to save the
  * compressed data.
@@ -667,9 +691,8 @@ uint64_t qemu_get_be64(QEMUFile *f)
  * do fflush first, if f still has no space to save the compressed
  * data, return -1.
  */
-
-ssize_t qemu_put_compression_data(QEMUFile *f, const uint8_t *p, size_t size,
-                                  int level)
+ssize_t qemu_put_compression_data(QEMUFile *f, z_stream *stream,
+                                  const uint8_t *p, size_t size)
 {
     ssize_t blen = IO_BUF_SIZE - f->buf_index - sizeof(int32_t);
 
@@ -683,8 +706,10 @@ ssize_t qemu_put_compression_data(QEMUFile *f, const uint8_t *p, size_t size,
             return -1;
         }
     }
-    if (compress2(f->buf + f->buf_index + sizeof(int32_t), (uLongf *)&blen,
-                  (Bytef *)p, size, level) != Z_OK) {
+
+    blen = qemu_compress_data(stream, f->buf + f->buf_index + sizeof(int32_t),
+                              blen, p, size);
+    if (blen < 0) {
         error_report("Compress Failed!");
         return 0;
     }
diff --git a/migration/qemu-file.h b/migration/qemu-file.h
index f4f356ab12..2ccfcfb2a8 100644
--- a/migration/qemu-file.h
+++ b/migration/qemu-file.h
@@ -25,6 +25,8 @@
 #ifndef MIGRATION_QEMU_FILE_H
 #define MIGRATION_QEMU_FILE_H
 
+#include <zlib.h>
+
 /* Read a chunk of data from a file at the given position.  The pos argument
  * can be ignored if the file is only be used for streaming.  The number of
  * bytes actually read should be returned.
@@ -132,8 +134,8 @@ bool qemu_file_is_writable(QEMUFile *f);
 
 size_t qemu_peek_buffer(QEMUFile *f, uint8_t **buf, size_t size, size_t offset);
 size_t qemu_get_buffer_in_place(QEMUFile *f, uint8_t **buf, size_t size);
-ssize_t qemu_put_compression_data(QEMUFile *f, const uint8_t *p, size_t size,
-                                  int level);
+ssize_t qemu_put_compression_data(QEMUFile *f, z_stream *stream,
+                                  const uint8_t *p, size_t size);
 int qemu_put_qemu_file(QEMUFile *f_des, QEMUFile *f_src);
 
 /*
diff --git a/migration/ram.c b/migration/ram.c
index 409c847a76..e043a192e1 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -269,6 +269,7 @@ struct CompressParam {
     QemuCond cond;
     RAMBlock *block;
     ram_addr_t offset;
+    z_stream stream;
 };
 typedef struct CompressParam CompressParam;
 
@@ -299,7 +300,7 @@ static QemuThread *decompress_threads;
 static QemuMutex decomp_done_lock;
 static QemuCond decomp_done_cond;
 
-static int do_compress_ram_page(QEMUFile *f, RAMBlock *block,
+static int do_compress_ram_page(QEMUFile *f, z_stream *stream, RAMBlock *block,
                                 ram_addr_t offset);
 
 static void *do_data_compress(void *opaque)
@@ -316,7 +317,7 @@ static void *do_data_compress(void *opaque)
             param->block = NULL;
             qemu_mutex_unlock(&param->mutex);
 
-            do_compress_ram_page(param->file, block, offset);
+            do_compress_ram_page(param->file, &param->stream, block, offset);
 
             qemu_mutex_lock(&comp_done_lock);
             param->done = true;
@@ -357,10 +358,20 @@ static void compress_threads_save_cleanup(void)
     terminate_compression_threads();
     thread_count = migrate_compress_threads();
     for (i = 0; i < thread_count; i++) {
+        /*
+         * stream.opaque can be used to store private data, we use it
+         * as a indicator which shows if the thread is properly init'd
+         * or not
+         */
+        if (!comp_param[i].stream.opaque) {
+            break;
+        }
         qemu_thread_join(compress_threads + i);
         qemu_fclose(comp_param[i].file);
         qemu_mutex_destroy(&comp_param[i].mutex);
         qemu_cond_destroy(&comp_param[i].cond);
+        deflateEnd(&comp_param[i].stream);
+        comp_param[i].stream.opaque = NULL;
     }
     qemu_mutex_destroy(&comp_done_lock);
     qemu_cond_destroy(&comp_done_cond);
@@ -370,12 +381,12 @@ static void compress_threads_save_cleanup(void)
     comp_param = NULL;
 }
 
-static void compress_threads_save_setup(void)
+static int compress_threads_save_setup(void)
 {
     int i, thread_count;
 
     if (!migrate_use_compression()) {
-        return;
+        return 0;
     }
     thread_count = migrate_compress_threads();
     compress_threads = g_new0(QemuThread, thread_count);
@@ -383,6 +394,12 @@ static void compress_threads_save_setup(void)
     qemu_cond_init(&comp_done_cond);
     qemu_mutex_init(&comp_done_lock);
     for (i = 0; i < thread_count; i++) {
+        if (deflateInit(&comp_param[i].stream,
+                           migrate_compress_level()) != Z_OK) {
+            goto exit;
+        }
+        comp_param[i].stream.opaque = &comp_param[i];
+
         /* comp_param[i].file is just used as a dummy buffer to save data,
          * set its ops to empty.
          */
@@ -395,6 +412,11 @@ static void compress_threads_save_setup(void)
                            do_data_compress, comp_param + i,
                            QEMU_THREAD_JOINABLE);
     }
+    return 0;
+
+exit:
+    compress_threads_save_cleanup();
+    return -1;
 }
 
 /* Multiple fd's */
@@ -1031,7 +1053,7 @@ static int ram_save_page(RAMState *rs, PageSearchStatus *pss, bool last_stage)
     return pages;
 }
 
-static int do_compress_ram_page(QEMUFile *f, RAMBlock *block,
+static int do_compress_ram_page(QEMUFile *f, z_stream *stream, RAMBlock *block,
                                 ram_addr_t offset)
 {
     RAMState *rs = ram_state;
@@ -1040,8 +1062,7 @@ static int do_compress_ram_page(QEMUFile *f, RAMBlock *block,
 
     bytes_sent = save_page_header(rs, f, block, offset |
                                   RAM_SAVE_FLAG_COMPRESS_PAGE);
-    blen = qemu_put_compression_data(f, p, TARGET_PAGE_SIZE,
-                                     migrate_compress_level());
+    blen = qemu_put_compression_data(f, stream, p, TARGET_PAGE_SIZE);
     if (blen < 0) {
         bytes_sent = 0;
         qemu_file_set_error(migrate_get_current()->to_dst_file, blen);
@@ -2214,9 +2235,14 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
     RAMState **rsp = opaque;
     RAMBlock *block;
 
+    if (compress_threads_save_setup()) {
+        return -1;
+    }
+
     /* migration has already setup the bitmap, reuse it. */
     if (!migration_in_colo_state()) {
         if (ram_init_all(rsp) != 0) {
+            compress_threads_save_cleanup();
             return -1;
         }
     }
@@ -2236,7 +2262,6 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
     }
 
     rcu_read_unlock();
-    compress_threads_save_setup();
 
     ram_control_before_iterate(f, RAM_CONTROL_SETUP);
     ram_control_after_iterate(f, RAM_CONTROL_SETUP);
-- 
2.14.3

* [PATCH v2 03/10] migration: stop decompression to allocate and free memory frequently
From: guangrong.xiao @ 2018-03-27  9:10 UTC
  To: pbonzini, mst, mtosatti
  Cc: kvm, Xiao Guangrong, qemu-devel, peterx, dgilbert, wei.w.wang,
	jiang.biao2

From: Xiao Guangrong <xiaoguangrong@tencent.com>

The current code uses uncompress() to decompress memory. uncompress()
manages its memory internally, which causes a large buffer to be
allocated and freed very frequently; worse, frequently returning
memory to the kernel flushes TLBs.

So we maintain the memory ourselves and reuse it for each
decompression.

Reviewed-by: Jiang Biao <jiang.biao2@zte.com.cn>
Signed-off-by: Xiao Guangrong <xiaoguangrong@tencent.com>
---
 migration/ram.c | 110 ++++++++++++++++++++++++++++++++++++++++----------------
 1 file changed, 80 insertions(+), 30 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index e043a192e1..6b699650ca 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -281,6 +281,7 @@ struct DecompressParam {
     void *des;
     uint8_t *compbuf;
     int len;
+    z_stream stream;
 };
 typedef struct DecompressParam DecompressParam;
 
@@ -2526,6 +2527,31 @@ void ram_handle_compressed(void *host, uint8_t ch, uint64_t size)
     }
 }
 
+/* return the size after decompression, or negative value on error */
+static int
+qemu_uncompress_data(z_stream *stream, uint8_t *dest, size_t dest_len,
+                     const uint8_t *source, size_t source_len)
+{
+    int err;
+
+    err = inflateReset(stream);
+    if (err != Z_OK) {
+        return -1;
+    }
+
+    stream->avail_in = source_len;
+    stream->next_in = (uint8_t *)source;
+    stream->avail_out = dest_len;
+    stream->next_out = dest;
+
+    err = inflate(stream, Z_NO_FLUSH);
+    if (err != Z_STREAM_END) {
+        return -1;
+    }
+
+    return stream->total_out;
+}
+
 static void *do_data_decompress(void *opaque)
 {
     DecompressParam *param = opaque;
@@ -2542,13 +2568,13 @@ static void *do_data_decompress(void *opaque)
             qemu_mutex_unlock(&param->mutex);
 
             pagesize = TARGET_PAGE_SIZE;
-            /* uncompress() will return failed in some case, especially
-             * when the page is dirted when doing the compression, it's
-             * not a problem because the dirty page will be retransferred
+            /* qemu_uncompress_data() will return failed in some case,
+             * especially when the page is dirtied when doing the compression,
+             * it's not a problem because the dirty page will be retransferred
              * and uncompress() won't break the data in other pages.
              */
-            uncompress((Bytef *)des, &pagesize,
-                       (const Bytef *)param->compbuf, len);
+            qemu_uncompress_data(&param->stream, des, pagesize, param->compbuf,
+                                 len);
 
             qemu_mutex_lock(&decomp_done_lock);
             param->done = true;
@@ -2583,30 +2609,6 @@ static void wait_for_decompress_done(void)
     qemu_mutex_unlock(&decomp_done_lock);
 }
 
-static void compress_threads_load_setup(void)
-{
-    int i, thread_count;
-
-    if (!migrate_use_compression()) {
-        return;
-    }
-    thread_count = migrate_decompress_threads();
-    decompress_threads = g_new0(QemuThread, thread_count);
-    decomp_param = g_new0(DecompressParam, thread_count);
-    qemu_mutex_init(&decomp_done_lock);
-    qemu_cond_init(&decomp_done_cond);
-    for (i = 0; i < thread_count; i++) {
-        qemu_mutex_init(&decomp_param[i].mutex);
-        qemu_cond_init(&decomp_param[i].cond);
-        decomp_param[i].compbuf = g_malloc0(compressBound(TARGET_PAGE_SIZE));
-        decomp_param[i].done = true;
-        decomp_param[i].quit = false;
-        qemu_thread_create(decompress_threads + i, "decompress",
-                           do_data_decompress, decomp_param + i,
-                           QEMU_THREAD_JOINABLE);
-    }
-}
-
 static void compress_threads_load_cleanup(void)
 {
     int i, thread_count;
@@ -2616,16 +2618,27 @@ static void compress_threads_load_cleanup(void)
     }
     thread_count = migrate_decompress_threads();
     for (i = 0; i < thread_count; i++) {
+        /* see the comments in compress_threads_save_cleanup() */
+        if (!decomp_param[i].stream.opaque) {
+            break;
+        }
+
         qemu_mutex_lock(&decomp_param[i].mutex);
         decomp_param[i].quit = true;
         qemu_cond_signal(&decomp_param[i].cond);
         qemu_mutex_unlock(&decomp_param[i].mutex);
     }
     for (i = 0; i < thread_count; i++) {
+        if (!decomp_param[i].stream.opaque) {
+            break;
+        }
+
         qemu_thread_join(decompress_threads + i);
         qemu_mutex_destroy(&decomp_param[i].mutex);
         qemu_cond_destroy(&decomp_param[i].cond);
         g_free(decomp_param[i].compbuf);
+        inflateEnd(&decomp_param[i].stream);
+        decomp_param[i].stream.opaque = NULL;
     }
     g_free(decompress_threads);
     g_free(decomp_param);
@@ -2633,6 +2646,40 @@ static void compress_threads_load_cleanup(void)
     decomp_param = NULL;
 }
 
+static int compress_threads_load_setup(void)
+{
+    int i, thread_count;
+
+    if (!migrate_use_compression()) {
+        return 0;
+    }
+
+    thread_count = migrate_decompress_threads();
+    decompress_threads = g_new0(QemuThread, thread_count);
+    decomp_param = g_new0(DecompressParam, thread_count);
+    qemu_mutex_init(&decomp_done_lock);
+    qemu_cond_init(&decomp_done_cond);
+    for (i = 0; i < thread_count; i++) {
+        if (inflateInit(&decomp_param[i].stream) != Z_OK) {
+            goto exit;
+        }
+        decomp_param[i].stream.opaque = &decomp_param[i];
+
+        qemu_mutex_init(&decomp_param[i].mutex);
+        qemu_cond_init(&decomp_param[i].cond);
+        decomp_param[i].compbuf = g_malloc0(compressBound(TARGET_PAGE_SIZE));
+        decomp_param[i].done = true;
+        decomp_param[i].quit = false;
+        qemu_thread_create(decompress_threads + i, "decompress",
+                           do_data_decompress, decomp_param + i,
+                           QEMU_THREAD_JOINABLE);
+    }
+    return 0;
+exit:
+    compress_threads_load_cleanup();
+    return -1;
+}
+
 static void decompress_data_with_multi_threads(QEMUFile *f,
                                                void *host, int len)
 {
@@ -2672,8 +2719,11 @@ static void decompress_data_with_multi_threads(QEMUFile *f,
  */
 static int ram_load_setup(QEMUFile *f, void *opaque)
 {
+    if (compress_threads_load_setup()) {
+        return -1;
+    }
+
     xbzrle_load_setup();
-    compress_threads_load_setup();
     ramblock_recv_map_init();
     return 0;
 }
-- 
2.14.3

* [PATCH v2 04/10] migration: detect compression and decompression errors
From: guangrong.xiao @ 2018-03-27  9:10 UTC
  To: pbonzini, mst, mtosatti
  Cc: kvm, Xiao Guangrong, qemu-devel, peterx, dgilbert, wei.w.wang,
	jiang.biao2

From: Xiao Guangrong <xiaoguangrong@tencent.com>

Currently the page being compressed is allowed to be updated by the
VM on the source QEMU, and correspondingly the destination QEMU just
ignores decompression errors. However, that completely misses the
chance to catch real errors, so the VM can be corrupted silently.

To make the migration more robust, we copy the page to a buffer first
to avoid it being written by the VM, then detect and handle both
compression and decompression errors properly.

Signed-off-by: Xiao Guangrong <xiaoguangrong@tencent.com>
---
 migration/qemu-file.c |  4 ++--
 migration/ram.c       | 55 +++++++++++++++++++++++++++++++++++----------------
 2 files changed, 40 insertions(+), 19 deletions(-)

diff --git a/migration/qemu-file.c b/migration/qemu-file.c
index e924cc23c5..a7614e8c28 100644
--- a/migration/qemu-file.c
+++ b/migration/qemu-file.c
@@ -710,9 +710,9 @@ ssize_t qemu_put_compression_data(QEMUFile *f, z_stream *stream,
     blen = qemu_compress_data(stream, f->buf + f->buf_index + sizeof(int32_t),
                               blen, p, size);
     if (blen < 0) {
-        error_report("Compress Failed!");
-        return 0;
+        return -1;
     }
+
     qemu_put_be32(f, blen);
     if (f->ops->writev_buffer) {
         add_to_iovec(f, f->buf + f->buf_index, blen, false);
diff --git a/migration/ram.c b/migration/ram.c
index 6b699650ca..e85191c1cb 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -269,7 +269,10 @@ struct CompressParam {
     QemuCond cond;
     RAMBlock *block;
     ram_addr_t offset;
+
+    /* internally used fields */
     z_stream stream;
+    uint8_t *originbuf;
 };
 typedef struct CompressParam CompressParam;
 
@@ -278,6 +281,7 @@ struct DecompressParam {
     bool quit;
     QemuMutex mutex;
     QemuCond cond;
+    QEMUFile *file;
     void *des;
     uint8_t *compbuf;
     int len;
@@ -302,7 +306,7 @@ static QemuMutex decomp_done_lock;
 static QemuCond decomp_done_cond;
 
 static int do_compress_ram_page(QEMUFile *f, z_stream *stream, RAMBlock *block,
-                                ram_addr_t offset);
+                                ram_addr_t offset, uint8_t *source_buf);
 
 static void *do_data_compress(void *opaque)
 {
@@ -318,7 +322,8 @@ static void *do_data_compress(void *opaque)
             param->block = NULL;
             qemu_mutex_unlock(&param->mutex);
 
-            do_compress_ram_page(param->file, &param->stream, block, offset);
+            do_compress_ram_page(param->file, &param->stream, block, offset,
+                                 param->originbuf);
 
             qemu_mutex_lock(&comp_done_lock);
             param->done = true;
@@ -372,6 +377,7 @@ static void compress_threads_save_cleanup(void)
         qemu_mutex_destroy(&comp_param[i].mutex);
         qemu_cond_destroy(&comp_param[i].cond);
         deflateEnd(&comp_param[i].stream);
+        g_free(comp_param[i].originbuf);
         comp_param[i].stream.opaque = NULL;
     }
     qemu_mutex_destroy(&comp_done_lock);
@@ -395,8 +401,14 @@ static int compress_threads_save_setup(void)
     qemu_cond_init(&comp_done_cond);
     qemu_mutex_init(&comp_done_lock);
     for (i = 0; i < thread_count; i++) {
+        comp_param[i].originbuf = g_try_malloc(TARGET_PAGE_SIZE);
+        if (!comp_param[i].originbuf) {
+            goto exit;
+        }
+
         if (deflateInit(&comp_param[i].stream,
                            migrate_compress_level()) != Z_OK) {
+            g_free(comp_param[i].originbuf);
             goto exit;
         }
         comp_param[i].stream.opaque = &comp_param[i];
@@ -1055,7 +1067,7 @@ static int ram_save_page(RAMState *rs, PageSearchStatus *pss, bool last_stage)
 }
 
 static int do_compress_ram_page(QEMUFile *f, z_stream *stream, RAMBlock *block,
-                                ram_addr_t offset)
+                                ram_addr_t offset, uint8_t *source_buf)
 {
     RAMState *rs = ram_state;
     int bytes_sent, blen;
@@ -1063,7 +1075,14 @@ static int do_compress_ram_page(QEMUFile *f, z_stream *stream, RAMBlock *block,
 
     bytes_sent = save_page_header(rs, f, block, offset |
                                   RAM_SAVE_FLAG_COMPRESS_PAGE);
-    blen = qemu_put_compression_data(f, stream, p, TARGET_PAGE_SIZE);
+
+    /*
+     * copy it to a internal buffer to avoid it being modified by VM
+     * so that we can catch up the error during compression and
+     * decompression
+     */
+    memcpy(source_buf, p, TARGET_PAGE_SIZE);
+    blen = qemu_put_compression_data(f, stream, source_buf, TARGET_PAGE_SIZE);
     if (blen < 0) {
         bytes_sent = 0;
         qemu_file_set_error(migrate_get_current()->to_dst_file, blen);
@@ -2557,7 +2576,7 @@ static void *do_data_decompress(void *opaque)
     DecompressParam *param = opaque;
     unsigned long pagesize;
     uint8_t *des;
-    int len;
+    int len, ret;
 
     qemu_mutex_lock(&param->mutex);
     while (!param->quit) {
@@ -2568,13 +2587,13 @@ static void *do_data_decompress(void *opaque)
             qemu_mutex_unlock(&param->mutex);
 
             pagesize = TARGET_PAGE_SIZE;
-            /* qemu_uncompress_data() will return failed in some case,
-             * especially when the page is dirtied when doing the compression,
-             * it's not a problem because the dirty page will be retransferred
-             * and uncompress() won't break the data in other pages.
-             */
-            qemu_uncompress_data(&param->stream, des, pagesize, param->compbuf,
-                                 len);
+
+            ret = qemu_uncompress_data(&param->stream, des, pagesize,
+                                       param->compbuf, len);
+            if (ret < 0) {
+                error_report("decompress data failed");
+                qemu_file_set_error(param->file, ret);
+            }
 
             qemu_mutex_lock(&decomp_done_lock);
             param->done = true;
@@ -2591,12 +2610,12 @@ static void *do_data_decompress(void *opaque)
     return NULL;
 }
 
-static void wait_for_decompress_done(void)
+static int wait_for_decompress_done(QEMUFile *f)
 {
     int idx, thread_count;
 
     if (!migrate_use_compression()) {
-        return;
+        return 0;
     }
 
     thread_count = migrate_decompress_threads();
@@ -2607,6 +2626,7 @@ static void wait_for_decompress_done(void)
         }
     }
     qemu_mutex_unlock(&decomp_done_lock);
+    return qemu_file_get_error(f);
 }
 
 static void compress_threads_load_cleanup(void)
@@ -2646,7 +2666,7 @@ static void compress_threads_load_cleanup(void)
     decomp_param = NULL;
 }
 
-static int compress_threads_load_setup(void)
+static int compress_threads_load_setup(QEMUFile *f)
 {
     int i, thread_count;
 
@@ -2665,6 +2685,7 @@ static int compress_threads_load_setup(void)
         }
         decomp_param[i].stream.opaque = &decomp_param[i];
 
+        decomp_param[i].file = f;
         qemu_mutex_init(&decomp_param[i].mutex);
         qemu_cond_init(&decomp_param[i].cond);
         decomp_param[i].compbuf = g_malloc0(compressBound(TARGET_PAGE_SIZE));
@@ -2719,7 +2740,7 @@ static void decompress_data_with_multi_threads(QEMUFile *f,
  */
 static int ram_load_setup(QEMUFile *f, void *opaque)
 {
-    if (compress_threads_load_setup()) {
+    if (compress_threads_load_setup(f)) {
         return -1;
     }
 
@@ -3074,7 +3095,7 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
         }
     }
 
-    wait_for_decompress_done();
+    ret |= wait_for_decompress_done(f);
     rcu_read_unlock();
     trace_ram_load_complete(ret, seq_iter);
     return ret;
-- 
2.14.3

^ permalink raw reply related	[flat|nested] 44+ messages in thread
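
The approach in this patch generalizes: once the compressor's input is a
private snapshot the guest cannot dirty, any failure the compressor
reports is a genuine error and should be propagated. Below is a minimal,
self-contained sketch of the copy-then-compress pattern in plain C with
zlib; the names and the 4K page size are illustrative, not QEMU's
(build with -lz):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <zlib.h>

#define PAGE_SIZE 4096

/* Compress a stable snapshot of a (possibly live) page; with the input
 * frozen, a non-Z_OK return is a real error, not a racing guest write. */
static int compress_page_snapshot(const unsigned char *live_page,
                                  unsigned char *out, uLongf *out_len)
{
    unsigned char snapshot[PAGE_SIZE];

    /* Copy first: the guest may dirty live_page while deflate runs. */
    memcpy(snapshot, live_page, PAGE_SIZE);

    if (compress2(out, out_len, snapshot, PAGE_SIZE, Z_BEST_SPEED) != Z_OK) {
        return -1;              /* propagate instead of silently ignoring */
    }
    return 0;
}

int main(void)
{
    unsigned char page[PAGE_SIZE] = { 0 };
    uLongf blen = compressBound(PAGE_SIZE);
    unsigned char *buf = malloc(blen);

    if (!buf || compress_page_snapshot(page, buf, &blen) < 0) {
        fprintf(stderr, "compression failed\n");
        free(buf);
        return 1;
    }
    printf("compressed %d bytes into %lu\n", PAGE_SIZE, (unsigned long)blen);
    free(buf);
    return 0;
}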

* [PATCH v2 05/10] migration: introduce control_save_page()
  2018-03-27  9:10 ` [Qemu-devel] " guangrong.xiao
@ 2018-03-27  9:10   ` guangrong.xiao
  -1 siblings, 0 replies; 44+ messages in thread
From: guangrong.xiao @ 2018-03-27  9:10 UTC (permalink / raw)
  To: pbonzini, mst, mtosatti
  Cc: kvm, Xiao Guangrong, qemu-devel, peterx, dgilbert, wei.w.wang,
	jiang.biao2

From: Xiao Guangrong <xiaoguangrong@tencent.com>

Abstract the common logic into a new function, control_save_page(),
to clean up the code; no logic is changed

Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Xiao Guangrong <xiaoguangrong@tencent.com>
---
 migration/ram.c | 174 +++++++++++++++++++++++++++++---------------------------
 1 file changed, 89 insertions(+), 85 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index e85191c1cb..fc1fde7bb7 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -976,6 +976,44 @@ static void ram_release_pages(const char *rbname, uint64_t offset, int pages)
     ram_discard_range(rbname, offset, pages << TARGET_PAGE_BITS);
 }
 
+/*
+ * @pages: the number of pages written by the control path,
+ *        < 0 - error
+ *        > 0 - number of pages written
+ *
+ * Return true if the page has been saved, otherwise false is returned.
+ */
+static bool control_save_page(RAMState *rs, RAMBlock *block, ram_addr_t offset,
+                              int *pages)
+{
+    uint64_t bytes_xmit = 0;
+    int ret;
+
+    *pages = -1;
+    ret = ram_control_save_page(rs->f, block->offset, offset, TARGET_PAGE_SIZE,
+                                &bytes_xmit);
+    if (ret == RAM_SAVE_CONTROL_NOT_SUPP) {
+        return false;
+    }
+
+    if (bytes_xmit) {
+        ram_counters.transferred += bytes_xmit;
+        *pages = 1;
+    }
+
+    if (ret == RAM_SAVE_CONTROL_DELAYED) {
+        return true;
+    }
+
+    if (bytes_xmit > 0) {
+        ram_counters.normal++;
+    } else if (bytes_xmit == 0) {
+        ram_counters.duplicate++;
+    }
+
+    return true;
+}
+
 /**
  * ram_save_page: send the given page to the stream
  *
@@ -992,56 +1030,36 @@ static void ram_release_pages(const char *rbname, uint64_t offset, int pages)
 static int ram_save_page(RAMState *rs, PageSearchStatus *pss, bool last_stage)
 {
     int pages = -1;
-    uint64_t bytes_xmit;
-    ram_addr_t current_addr;
     uint8_t *p;
-    int ret;
     bool send_async = true;
     RAMBlock *block = pss->block;
     ram_addr_t offset = pss->page << TARGET_PAGE_BITS;
+    ram_addr_t current_addr = block->offset + offset;
 
     p = block->host + offset;
     trace_ram_save_page(block->idstr, (uint64_t)offset, p);
 
-    /* In doubt sent page as normal */
-    bytes_xmit = 0;
-    ret = ram_control_save_page(rs->f, block->offset,
-                           offset, TARGET_PAGE_SIZE, &bytes_xmit);
-    if (bytes_xmit) {
-        ram_counters.transferred += bytes_xmit;
-        pages = 1;
+    if (control_save_page(rs, block, offset, &pages)) {
+        return pages;
     }
 
     XBZRLE_cache_lock();
-
-    current_addr = block->offset + offset;
-
-    if (ret != RAM_SAVE_CONTROL_NOT_SUPP) {
-        if (ret != RAM_SAVE_CONTROL_DELAYED) {
-            if (bytes_xmit > 0) {
-                ram_counters.normal++;
-            } else if (bytes_xmit == 0) {
-                ram_counters.duplicate++;
-            }
-        }
-    } else {
-        pages = save_zero_page(rs, block, offset);
-        if (pages > 0) {
-            /* Must let xbzrle know, otherwise a previous (now 0'd) cached
-             * page would be stale
+    pages = save_zero_page(rs, block, offset);
+    if (pages > 0) {
+        /* Must let xbzrle know, otherwise a previous (now 0'd) cached
+         * page would be stale
+         */
+        xbzrle_cache_zero_page(rs, current_addr);
+        ram_release_pages(block->idstr, offset, pages);
+    } else if (!rs->ram_bulk_stage &&
+               !migration_in_postcopy() && migrate_use_xbzrle()) {
+        pages = save_xbzrle_page(rs, &p, current_addr, block,
+                                 offset, last_stage);
+        if (!last_stage) {
+            /* Can't send this cached data async, since the cache page
+             * might get updated before it gets to the wire
              */
-            xbzrle_cache_zero_page(rs, current_addr);
-            ram_release_pages(block->idstr, offset, pages);
-        } else if (!rs->ram_bulk_stage &&
-                   !migration_in_postcopy() && migrate_use_xbzrle()) {
-            pages = save_xbzrle_page(rs, &p, current_addr, block,
-                                     offset, last_stage);
-            if (!last_stage) {
-                /* Can't send this cached data async, since the cache page
-                 * might get updated before it gets to the wire
-                 */
-                send_async = false;
-            }
+            send_async = false;
         }
     }
 
@@ -1176,63 +1194,49 @@ static int ram_save_compressed_page(RAMState *rs, PageSearchStatus *pss,
                                     bool last_stage)
 {
     int pages = -1;
-    uint64_t bytes_xmit = 0;
     uint8_t *p;
-    int ret;
     RAMBlock *block = pss->block;
     ram_addr_t offset = pss->page << TARGET_PAGE_BITS;
 
     p = block->host + offset;
 
-    ret = ram_control_save_page(rs->f, block->offset,
-                                offset, TARGET_PAGE_SIZE, &bytes_xmit);
-    if (bytes_xmit) {
-        ram_counters.transferred += bytes_xmit;
-        pages = 1;
+    if (control_save_page(rs, block, offset, &pages)) {
+        return pages;
     }
-    if (ret != RAM_SAVE_CONTROL_NOT_SUPP) {
-        if (ret != RAM_SAVE_CONTROL_DELAYED) {
-            if (bytes_xmit > 0) {
-                ram_counters.normal++;
-            } else if (bytes_xmit == 0) {
-                ram_counters.duplicate++;
-            }
+
+    /* When starting the process of a new block, the first page of
+     * the block should be sent out before other pages in the same
+     * block, and all the pages in the last block should have been sent
+     * out; keeping this order is important because the 'cont' flag
+     * is used to avoid resending the block name.
+     */
+    if (block != rs->last_sent_block) {
+        flush_compressed_data(rs);
+        pages = save_zero_page(rs, block, offset);
+        if (pages > 0) {
+            ram_release_pages(block->idstr, offset, pages);
+        } else {
+            /*
+             * Make sure the first page is sent out before other pages.
+             *
+             * we post it as a normal page, as compression would
+             * consume too much CPU.
+             */
+            ram_counters.transferred += save_page_header(rs, rs->f, block,
+                                            offset | RAM_SAVE_FLAG_PAGE);
+            qemu_put_buffer_async(rs->f, p, TARGET_PAGE_SIZE,
+                                  migrate_release_ram() &
+                                  migration_in_postcopy());
+            ram_counters.transferred += TARGET_PAGE_SIZE;
+            ram_counters.normal++;
+            pages = 1;
         }
     } else {
-        /* When starting the process of a new block, the first page of
-         * the block should be sent out before other pages in the same
-         * block, and all the pages in last block should have been sent
-         * out, keeping this order is important, because the 'cont' flag
-         * is used to avoid resending the block name.
-         */
-        if (block != rs->last_sent_block) {
-            flush_compressed_data(rs);
-            pages = save_zero_page(rs, block, offset);
-            if (pages > 0) {
-                ram_release_pages(block->idstr, offset, pages);
-            } else {
-                /*
-                 * Make sure the first page is sent out before other pages.
-                 *
-                 * we post it as normal page as compression will take much
-                 * CPU resource.
-                 */
-                ram_counters.transferred += save_page_header(rs, rs->f, block,
-                                                offset | RAM_SAVE_FLAG_PAGE);
-                qemu_put_buffer_async(rs->f, p, TARGET_PAGE_SIZE,
-                                      migrate_release_ram() &
-                                      migration_in_postcopy());
-                ram_counters.transferred += TARGET_PAGE_SIZE;
-                ram_counters.normal++;
-                pages = 1;
-            }
+        pages = save_zero_page(rs, block, offset);
+        if (pages == -1) {
+            pages = compress_page_with_multi_thread(rs, block, offset);
         } else {
-            pages = save_zero_page(rs, block, offset);
-            if (pages == -1) {
-                pages = compress_page_with_multi_thread(rs, block, offset);
-            } else {
-                ram_release_pages(block->idstr, offset, pages);
-            }
+            ram_release_pages(block->idstr, offset, pages);
         }
     }
 
-- 
2.14.3

^ permalink raw reply related	[flat|nested] 44+ messages in thread
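
The refactoring follows a common C pattern: a boolean "was this
handled?" return plus an out-parameter carrying the page count for the
caller to return. A toy sketch of the contract, with stand-in return
codes rather than QEMU's RAM_SAVE_CONTROL_* values:

#include <stdbool.h>
#include <stdio.h>

/* Stand-ins for the control-path results; not QEMU's definitions. */
enum { CTRL_NOT_SUPP = -1, CTRL_DELAYED = -2, CTRL_SENT = 0 };

/* Returns true when the control path handled the page (sent or
 * delayed), so the caller can simply return *pages; false means the
 * caller must fall back to the ordinary save paths. */
static bool control_save(int ctrl_ret, unsigned long bytes_xmit, int *pages)
{
    *pages = -1;
    if (ctrl_ret == CTRL_NOT_SUPP) {
        return false;
    }
    if (bytes_xmit) {
        *pages = 1;
    }
    return true;
}

int main(void)
{
    int pages;

    if (control_save(CTRL_NOT_SUPP, 0, &pages)) {
        printf("control path handled it: %d page(s)\n", pages);
    } else {
        printf("fall back to zero/xbzrle/compress/normal save\n");
    }
    return 0;
}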

* [PATCH v2 06/10] migration: move some code to ram_save_host_page
  2018-03-27  9:10 ` [Qemu-devel] " guangrong.xiao
@ 2018-03-27  9:10   ` guangrong.xiao
  -1 siblings, 0 replies; 44+ messages in thread
From: guangrong.xiao @ 2018-03-27  9:10 UTC (permalink / raw)
  To: pbonzini, mst, mtosatti
  Cc: kvm, Xiao Guangrong, qemu-devel, peterx, dgilbert, wei.w.wang,
	jiang.biao2

From: Xiao Guangrong <xiaoguangrong@tencent.com>

Move some code from ram_save_target_page() to ram_save_host_page()
to make it more readable for later patches that dramatically
clean up ram_save_target_page()

Signed-off-by: Xiao Guangrong <xiaoguangrong@tencent.com>
---
 migration/ram.c | 43 +++++++++++++++++++------------------------
 1 file changed, 19 insertions(+), 24 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index fc1fde7bb7..104d3d3e9f 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1485,38 +1485,23 @@ err:
  * Returns the number of pages written
  *
  * @rs: current RAM state
- * @ms: current migration state
  * @pss: data about the page we want to send
  * @last_stage: if we are at the completion stage
  */
 static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss,
                                 bool last_stage)
 {
-    int res = 0;
-
-    /* Check the pages is dirty and if it is send it */
-    if (migration_bitmap_clear_dirty(rs, pss->block, pss->page)) {
-        /*
-         * If xbzrle is on, stop using the data compression after first
-         * round of migration even if compression is enabled. In theory,
-         * xbzrle can do better than compression.
-         */
-        if (migrate_use_compression() &&
-            (rs->ram_bulk_stage || !migrate_use_xbzrle())) {
-            res = ram_save_compressed_page(rs, pss, last_stage);
-        } else {
-            res = ram_save_page(rs, pss, last_stage);
-        }
-
-        if (res < 0) {
-            return res;
-        }
-        if (pss->block->unsentmap) {
-            clear_bit(pss->page, pss->block->unsentmap);
-        }
+    /*
+     * If xbzrle is on, stop using the data compression after first
+     * round of migration even if compression is enabled. In theory,
+     * xbzrle can do better than compression.
+     */
+    if (migrate_use_compression() &&
+        (rs->ram_bulk_stage || !migrate_use_xbzrle())) {
+        return ram_save_compressed_page(rs, pss, last_stage);
     }
 
-    return res;
+    return ram_save_page(rs, pss, last_stage);
 }
 
 /**
@@ -1545,12 +1530,22 @@ static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss,
         qemu_ram_pagesize(pss->block) >> TARGET_PAGE_BITS;
 
     do {
+        /* Check whether the page is dirty and, if it is, send it */
+        if (!migration_bitmap_clear_dirty(rs, pss->block, pss->page)) {
+            pss->page++;
+            continue;
+        }
+
         tmppages = ram_save_target_page(rs, pss, last_stage);
         if (tmppages < 0) {
             return tmppages;
         }
 
         pages += tmppages;
+        if (pss->block->unsentmap) {
+            clear_bit(pss->page, pss->block->unsentmap);
+        }
+
         pss->page++;
     } while ((pss->page & (pagesize_bits - 1)) &&
              offset_in_ramblock(pss->block, pss->page << TARGET_PAGE_BITS));
-- 
2.14.3

^ permalink raw reply related	[flat|nested] 44+ messages in thread
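
Hoisting the dirty-bitmap test into the host-page loop is what lets the
'goto' disappear later: the per-page saver can now assume every page it
sees is dirty, while the loop skips clean target pages with a plain
'continue'. A stand-alone sketch of the resulting loop shape, with
hypothetical helpers in place of QEMU's:

#include <stdbool.h>
#include <stdio.h>

#define PAGES_PER_HOST_PAGE 512   /* e.g. a 2M huge page of 4K targets */

/* Toy dirty bitmap: pretend every third target page is dirty. */
static bool test_and_clear_dirty(unsigned long page)
{
    return page % 3 == 0;
}

static int save_one_page(unsigned long page)
{
    return 1;                     /* one page written */
}

static int save_host_page(unsigned long first)
{
    int pages = 0;

    for (unsigned long p = first; p < first + PAGES_PER_HOST_PAGE; p++) {
        if (!test_and_clear_dirty(p)) {
            continue;             /* clean target page: nothing to send */
        }

        int n = save_one_page(p);
        if (n < 0) {
            return n;             /* propagate errors immediately */
        }
        pages += n;
    }
    return pages;
}

int main(void)
{
    printf("sent %d page(s)\n", save_host_page(0));
    return 0;
}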

* [PATCH v2 07/10] migration: move calling control_save_page to the common place
  2018-03-27  9:10 ` [Qemu-devel] " guangrong.xiao
@ 2018-03-27  9:10   ` guangrong.xiao
  -1 siblings, 0 replies; 44+ messages in thread
From: guangrong.xiao @ 2018-03-27  9:10 UTC (permalink / raw)
  To: pbonzini, mst, mtosatti
  Cc: kvm, Xiao Guangrong, qemu-devel, peterx, dgilbert, wei.w.wang,
	jiang.biao2

From: Xiao Guangrong <xiaoguangrong@tencent.com>

The function is called by both ram_save_page and ram_save_compressed_page,
so move the call to their common caller, ram_save_target_page(), to
clean up the code

Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Xiao Guangrong <xiaoguangrong@tencent.com>
---
 migration/ram.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 104d3d3e9f..ce3ef4382d 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1039,10 +1039,6 @@ static int ram_save_page(RAMState *rs, PageSearchStatus *pss, bool last_stage)
     p = block->host + offset;
     trace_ram_save_page(block->idstr, (uint64_t)offset, p);
 
-    if (control_save_page(rs, block, offset, &pages)) {
-        return pages;
-    }
-
     XBZRLE_cache_lock();
     pages = save_zero_page(rs, block, offset);
     if (pages > 0) {
@@ -1200,10 +1196,6 @@ static int ram_save_compressed_page(RAMState *rs, PageSearchStatus *pss,
 
     p = block->host + offset;
 
-    if (control_save_page(rs, block, offset, &pages)) {
-        return pages;
-    }
-
     /* When starting the process of a new block, the first page of
      * the block should be sent out before other pages in the same
      * block, and all the pages in the last block should have been sent
@@ -1491,6 +1483,14 @@ err:
 static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss,
                                 bool last_stage)
 {
+    RAMBlock *block = pss->block;
+    ram_addr_t offset = pss->page << TARGET_PAGE_BITS;
+    int res;
+
+    if (control_save_page(rs, block, offset, &res)) {
+        return res;
+    }
+
     /*
      * If xbzrle is on, stop using the data compression after first
      * round of migration even if compression is enabled. In theory,
-- 
2.14.3

^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH v2 08/10] migration: move calling save_zero_page to the common place
  2018-03-27  9:10 ` [Qemu-devel] " guangrong.xiao
@ 2018-03-27  9:10   ` guangrong.xiao
  -1 siblings, 0 replies; 44+ messages in thread
From: guangrong.xiao @ 2018-03-27  9:10 UTC (permalink / raw)
  To: pbonzini, mst, mtosatti
  Cc: kvm, Xiao Guangrong, qemu-devel, peterx, dgilbert, wei.w.wang,
	jiang.biao2

From: Xiao Guangrong <xiaoguangrong@tencent.com>

save_zero_page() is always the first approach we try, so move it to
the common place, before ram_save_compressed_page and
ram_save_page are called

Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Xiao Guangrong <xiaoguangrong@tencent.com>
---
 migration/ram.c | 105 +++++++++++++++++++++++++++++++-------------------------
 1 file changed, 59 insertions(+), 46 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index ce3ef4382d..771763985d 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1040,15 +1040,8 @@ static int ram_save_page(RAMState *rs, PageSearchStatus *pss, bool last_stage)
     trace_ram_save_page(block->idstr, (uint64_t)offset, p);
 
     XBZRLE_cache_lock();
-    pages = save_zero_page(rs, block, offset);
-    if (pages > 0) {
-        /* Must let xbzrle know, otherwise a previous (now 0'd) cached
-         * page would be stale
-         */
-        xbzrle_cache_zero_page(rs, current_addr);
-        ram_release_pages(block->idstr, offset, pages);
-    } else if (!rs->ram_bulk_stage &&
-               !migration_in_postcopy() && migrate_use_xbzrle()) {
+    if (!rs->ram_bulk_stage && !migration_in_postcopy() &&
+        migrate_use_xbzrle()) {
         pages = save_xbzrle_page(rs, &p, current_addr, block,
                                  offset, last_stage);
         if (!last_stage) {
@@ -1196,40 +1189,23 @@ static int ram_save_compressed_page(RAMState *rs, PageSearchStatus *pss,
 
     p = block->host + offset;
 
-    /* When starting the process of a new block, the first page of
-     * the block should be sent out before other pages in the same
-     * block, and all the pages in the last block should have been sent
-     * out; keeping this order is important because the 'cont' flag
-     * is used to avoid resending the block name.
-     */
     if (block != rs->last_sent_block) {
-        flush_compressed_data(rs);
-        pages = save_zero_page(rs, block, offset);
-        if (pages > 0) {
-            ram_release_pages(block->idstr, offset, pages);
-        } else {
-            /*
-             * Make sure the first page is sent out before other pages.
-             *
-             * we post it as normal page as compression will take much
-             * CPU resource.
-             */
-            ram_counters.transferred += save_page_header(rs, rs->f, block,
-                                            offset | RAM_SAVE_FLAG_PAGE);
-            qemu_put_buffer_async(rs->f, p, TARGET_PAGE_SIZE,
-                                  migrate_release_ram() &
-                                  migration_in_postcopy());
-            ram_counters.transferred += TARGET_PAGE_SIZE;
-            ram_counters.normal++;
-            pages = 1;
-        }
+        /*
+         * Make sure the first page is sent out before other pages.
+         *
+         * we post it as a normal page, as compression would
+         * consume too much CPU.
+         */
+        ram_counters.transferred += save_page_header(rs, rs->f, block,
+                                        offset | RAM_SAVE_FLAG_PAGE);
+        qemu_put_buffer_async(rs->f, p, TARGET_PAGE_SIZE,
+                              migrate_release_ram() &
+                              migration_in_postcopy());
+        ram_counters.transferred += TARGET_PAGE_SIZE;
+        ram_counters.normal++;
+        pages = 1;
     } else {
-        pages = save_zero_page(rs, block, offset);
-        if (pages == -1) {
-            pages = compress_page_with_multi_thread(rs, block, offset);
-        } else {
-            ram_release_pages(block->idstr, offset, pages);
-        }
+        pages = compress_page_with_multi_thread(rs, block, offset);
     }
 
     return pages;
@@ -1471,6 +1447,24 @@ err:
     return -1;
 }
 
+static bool save_page_use_compression(RAMState *rs)
+{
+    if (!migrate_use_compression()) {
+        return false;
+    }
+
+    /*
+     * If xbzrle is on, stop using the data compression after first
+     * round of migration even if compression is enabled. In theory,
+     * xbzrle can do better than compression.
+     */
+    if (rs->ram_bulk_stage || !migrate_use_xbzrle()) {
+        return true;
+    }
+
+    return false;
+}
+
 /**
  * ram_save_target_page: save one target page
  *
@@ -1492,12 +1486,31 @@ static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss,
     }
 
     /*
-     * If xbzrle is on, stop using the data compression after first
-     * round of migration even if compression is enabled. In theory,
-     * xbzrle can do better than compression.
+     * When starting the process of a new block, the first page of
+     * the block should be sent out before other pages in the same
+     * block, and all the pages in the last block should have been sent
+     * out; keeping this order is important because the 'cont' flag
+     * is used to avoid resending the block name.
      */
-    if (migrate_use_compression() &&
-        (rs->ram_bulk_stage || !migrate_use_xbzrle())) {
+    if (block != rs->last_sent_block && save_page_use_compression(rs)) {
+        flush_compressed_data(rs);
+    }
+
+    res = save_zero_page(rs, block, offset);
+    if (res > 0) {
+        /* Must let xbzrle know, otherwise a previous (now 0'd) cached
+         * page would be stale
+         */
+        if (!save_page_use_compression(rs)) {
+            XBZRLE_cache_lock();
+            xbzrle_cache_zero_page(rs, block->offset + offset);
+            XBZRLE_cache_unlock();
+        }
+        ram_release_pages(block->idstr, offset, res);
+        return res;
+    }
+
+    if (save_page_use_compression(rs)) {
         return ram_save_compressed_page(rs, pss, last_stage);
     }
 
-- 
2.14.3

^ permalink raw reply related	[flat|nested] 44+ messages in thread
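
The new save_page_use_compression() helper is effectively a three-input
truth table. A small sketch of the same decision with stand-in flags
(not QEMU's accessors): compression is used only while it can still
beat xbzrle, i.e. when xbzrle is disabled or the migration is still in
its first, bulk round.

#include <stdbool.h>
#include <stdio.h>

/* Compression is worthwhile only while it can beat xbzrle: either
 * xbzrle is off, or we are still in the first (bulk) round. */
static bool use_compression(bool compress_on, bool xbzrle_on, bool bulk_stage)
{
    if (!compress_on) {
        return false;
    }
    return bulk_stage || !xbzrle_on;
}

int main(void)
{
    printf("%d\n", use_compression(true, true, true));   /* 1: bulk round */
    printf("%d\n", use_compression(true, true, false));  /* 0: xbzrle wins */
    printf("%d\n", use_compression(true, false, false)); /* 1: no xbzrle */
    return 0;
}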

* [PATCH v2 09/10] migration: introduce save_normal_page()
  2018-03-27  9:10 ` [Qemu-devel] " guangrong.xiao
@ 2018-03-27  9:10   ` guangrong.xiao
  -1 siblings, 0 replies; 44+ messages in thread
From: guangrong.xiao @ 2018-03-27  9:10 UTC (permalink / raw)
  To: pbonzini, mst, mtosatti
  Cc: kvm, Xiao Guangrong, qemu-devel, peterx, dgilbert, wei.w.wang,
	jiang.biao2

From: Xiao Guangrong <xiaoguangrong@tencent.com>

It sends the page to the stream directly, without checking for zero
pages and without using xbzrle or compression

Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Xiao Guangrong <xiaoguangrong@tencent.com>
---
 migration/ram.c | 50 ++++++++++++++++++++++++++++++--------------------
 1 file changed, 30 insertions(+), 20 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 771763985d..e71a9aee11 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1014,6 +1014,34 @@ static bool control_save_page(RAMState *rs, RAMBlock *block, ram_addr_t offset,
     return true;
 }
 
+/*
+ * directly send the page to the stream
+ *
+ * Returns the number of pages written.
+ *
+ * @rs: current RAM state
+ * @block: block that contains the page we want to send
+ * @offset: offset inside the block for the page
+ * @buf: the page to be sent
+ * @async: send the page asynchronously
+ */
+static int save_normal_page(RAMState *rs, RAMBlock *block, ram_addr_t offset,
+                            uint8_t *buf, bool async)
+{
+    ram_counters.transferred += save_page_header(rs, rs->f, block,
+                                                 offset | RAM_SAVE_FLAG_PAGE);
+    if (async) {
+        qemu_put_buffer_async(rs->f, buf, TARGET_PAGE_SIZE,
+                              migrate_release_ram() &
+                              migration_in_postcopy());
+    } else {
+        qemu_put_buffer(rs->f, buf, TARGET_PAGE_SIZE);
+    }
+    ram_counters.transferred += TARGET_PAGE_SIZE;
+    ram_counters.normal++;
+    return 1;
+}
+
 /**
  * ram_save_page: send the given page to the stream
  *
@@ -1054,18 +1082,7 @@ static int ram_save_page(RAMState *rs, PageSearchStatus *pss, bool last_stage)
 
     /* XBZRLE overflow or normal page */
     if (pages == -1) {
-        ram_counters.transferred +=
-            save_page_header(rs, rs->f, block, offset | RAM_SAVE_FLAG_PAGE);
-        if (send_async) {
-            qemu_put_buffer_async(rs->f, p, TARGET_PAGE_SIZE,
-                                  migrate_release_ram() &
-                                  migration_in_postcopy());
-        } else {
-            qemu_put_buffer(rs->f, p, TARGET_PAGE_SIZE);
-        }
-        ram_counters.transferred += TARGET_PAGE_SIZE;
-        pages = 1;
-        ram_counters.normal++;
+        pages = save_normal_page(rs, block, offset, p, send_async);
     }
 
     XBZRLE_cache_unlock();
@@ -1196,14 +1213,7 @@ static int ram_save_compressed_page(RAMState *rs, PageSearchStatus *pss,
          * we post it as a normal page, as compression would
          * consume too much CPU.
          */
-        ram_counters.transferred += save_page_header(rs, rs->f, block,
-                                        offset | RAM_SAVE_FLAG_PAGE);
-        qemu_put_buffer_async(rs->f, p, TARGET_PAGE_SIZE,
-                              migrate_release_ram() &
-                              migration_in_postcopy());
-        ram_counters.transferred += TARGET_PAGE_SIZE;
-        ram_counters.normal++;
-        pages = 1;
+        pages = save_normal_page(rs, block, offset, p, true);
     } else {
         pages = compress_page_with_multi_thread(rs, block, offset);
     }
-- 
2.14.3

^ permalink raw reply related	[flat|nested] 44+ messages in thread
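
The async flag matters because an asynchronous put only queues a
pointer: the buffer must stay stable until the stream is flushed, which
is exactly why xbzrle-cached data is sent synchronously. A toy sketch
of the distinction, with stand-in I/O routines rather than QEMU's
QEMUFile API:

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 4096

/* Stand-in sync path: the data is copied now, so the caller's buffer
 * may change afterwards. */
static void put_buffer(const unsigned char *buf, size_t len)
{
    unsigned char staged[PAGE_SIZE];

    memcpy(staged, buf, len < PAGE_SIZE ? len : PAGE_SIZE);
}

/* Stand-in async path: only the pointer is queued; the buffer must stay
 * stable until the stream is flushed. */
static void put_buffer_async(const unsigned char *buf, size_t len)
{
    (void)buf;
    (void)len;
}

static int send_normal_page(const unsigned char *buf, bool async)
{
    if (async) {
        put_buffer_async(buf, PAGE_SIZE);
    } else {
        put_buffer(buf, PAGE_SIZE);
    }
    return 1;                   /* one page accounted for */
}

int main(void)
{
    unsigned char page[PAGE_SIZE] = { 0 };

    printf("%d page(s)\n", send_normal_page(page, true));
    return 0;
}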

* [PATCH v2 10/10] migration: remove ram_save_compressed_page()
  2018-03-27  9:10 ` [Qemu-devel] " guangrong.xiao
@ 2018-03-27  9:10   ` guangrong.xiao
  -1 siblings, 0 replies; 44+ messages in thread
From: guangrong.xiao @ 2018-03-27  9:10 UTC (permalink / raw)
  To: pbonzini, mst, mtosatti
  Cc: kvm, Xiao Guangrong, qemu-devel, peterx, dgilbert, wei.w.wang,
	jiang.biao2

From: Xiao Guangrong <xiaoguangrong@tencent.com>

Now that we can reuse the path in ram_save_page() to post the page
out as a normal page, the only thing remaining in
ram_save_compressed_page() is the compression itself, which we can
move out to the caller

Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Xiao Guangrong <xiaoguangrong@tencent.com>
---
 migration/ram.c | 45 ++++++++-------------------------------------
 1 file changed, 8 insertions(+), 37 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index e71a9aee11..5187455a93 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1186,41 +1186,6 @@ static int compress_page_with_multi_thread(RAMState *rs, RAMBlock *block,
     return pages;
 }
 
-/**
- * ram_save_compressed_page: compress the given page and send it to the stream
- *
- * Returns the number of pages written.
- *
- * @rs: current RAM state
- * @block: block that contains the page we want to send
- * @offset: offset inside the block for the page
- * @last_stage: if we are at the completion stage
- */
-static int ram_save_compressed_page(RAMState *rs, PageSearchStatus *pss,
-                                    bool last_stage)
-{
-    int pages = -1;
-    uint8_t *p;
-    RAMBlock *block = pss->block;
-    ram_addr_t offset = pss->page << TARGET_PAGE_BITS;
-
-    p = block->host + offset;
-
-    if (block != rs->last_sent_block) {
-        /*
-         * Make sure the first page is sent out before other pages.
-         *
-         * we post it as normal page as compression will take much
-         * CPU resource.
-         */
-        pages = save_normal_page(rs, block, offset, p, true);
-    } else {
-        pages = compress_page_with_multi_thread(rs, block, offset);
-    }
-
-    return pages;
-}
-
 /**
  * find_dirty_block: find the next dirty page and update any state
  * associated with the search process.
@@ -1520,8 +1485,14 @@ static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss,
         return res;
     }
 
-    if (save_page_use_compression(rs)) {
-        return ram_save_compressed_page(rs, pss, last_stage);
+    /*
+     * Make sure the first page is sent out before other pages.
+     *
+     * We post it as a normal page, as compression would consume
+     * too much CPU resource.
+     */
+    if (block == rs->last_sent_block && save_page_use_compression(rs)) {
+        return compress_page_with_multi_thread(rs, block, offset);
     }
 
     return ram_save_page(rs, pss, last_stage);
-- 
2.14.3

^ permalink raw reply related	[flat|nested] 44+ messages in thread

* Re: [PATCH v2 02/10] migration: stop compression to allocate and free memory frequently
  2018-03-27  9:10   ` [Qemu-devel] " guangrong.xiao
@ 2018-03-28  9:25     ` Peter Xu
  -1 siblings, 0 replies; 44+ messages in thread
From: Peter Xu @ 2018-03-28  9:25 UTC (permalink / raw)
  To: guangrong.xiao
  Cc: kvm, mst, mtosatti, Xiao Guangrong, dgilbert, qemu-devel,
	wei.w.wang, jiang.biao2, pbonzini

On Tue, Mar 27, 2018 at 05:10:35PM +0800, guangrong.xiao@gmail.com wrote:

[...]

> @@ -357,10 +358,20 @@ static void compress_threads_save_cleanup(void)
>      terminate_compression_threads();
>      thread_count = migrate_compress_threads();
>      for (i = 0; i < thread_count; i++) {
> +        /*
> +         * stream.opaque can be used to store private data, we use it
> +         * as a indicator which shows if the thread is properly init'd
> +         * or not
> +         */
> +        if (!comp_param[i].stream.opaque) {
> +            break;
> +        }

How about using comp_param[i].file?  The opaque seems to be hiding
deeper, and...

>          qemu_thread_join(compress_threads + i);
>          qemu_fclose(comp_param[i].file);
>          qemu_mutex_destroy(&comp_param[i].mutex);
>          qemu_cond_destroy(&comp_param[i].cond);
> +        deflateEnd(&comp_param[i].stream);
> +        comp_param[i].stream.opaque = NULL;
>      }
>      qemu_mutex_destroy(&comp_done_lock);
>      qemu_cond_destroy(&comp_done_cond);
> @@ -370,12 +381,12 @@ static void compress_threads_save_cleanup(void)
>      comp_param = NULL;
>  }
>  
> -static void compress_threads_save_setup(void)
> +static int compress_threads_save_setup(void)
>  {
>      int i, thread_count;
>  
>      if (!migrate_use_compression()) {
> -        return;
> +        return 0;
>      }
>      thread_count = migrate_compress_threads();
>      compress_threads = g_new0(QemuThread, thread_count);
> @@ -383,6 +394,12 @@ static void compress_threads_save_setup(void)
>      qemu_cond_init(&comp_done_cond);
>      qemu_mutex_init(&comp_done_lock);
>      for (i = 0; i < thread_count; i++) {
> +        if (deflateInit(&comp_param[i].stream,
> +                           migrate_compress_level()) != Z_OK) {

(indent issue)

> +            goto exit;
> +        }
> +        comp_param[i].stream.opaque = &comp_param[i];

...here, from the zlib documentation:

        ZEXTERN int ZEXPORT deflateInit OF((z_streamp strm, int level));

        Initializes the internal stream state for compression. The
        fields zalloc, zfree and opaque must be initialized before by
        the caller. If zalloc and zfree are set to Z_NULL, deflateInit
        updates them to use default allocation functions.

So shall we init opaque first?  Otherwise looks good to me.

> +
>          /* comp_param[i].file is just used as a dummy buffer to save data,
>           * set its ops to empty.
>           */

Thanks,

-- 
Peter Xu

^ permalink raw reply	[flat|nested] 44+ messages in thread
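
(To make the zlib point above concrete: a minimal standalone sketch
of the initialization order the manual describes, in plain C and
independent of the patch. Zeroing the stream sets zalloc, zfree and
opaque to Z_NULL, which is the baseline deflateInit() expects; the
function names below are illustrative.)

    #include <string.h>
    #include <zlib.h>

    /* Init zalloc/zfree/opaque (via memset) before deflateInit(), then
     * reuse opaque as an "init succeeded" marker afterwards, as the
     * patch under review does. */
    static int stream_setup(z_stream *stream, int level)
    {
        memset(stream, 0, sizeof(*stream)); /* zalloc = zfree = opaque = Z_NULL */
        if (deflateInit(stream, level) != Z_OK) {
            return -1;
        }
        stream->opaque = stream;            /* marker: stream was init'd */
        return 0;
    }

    /* Cleanup only touches streams that carry the marker. */
    static void stream_cleanup(z_stream *stream)
    {
        if (stream->opaque) {
            deflateEnd(stream);
            stream->opaque = NULL;
        }
    }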

* Re: [PATCH v2 03/10] migration: stop decompression to allocate and free memory frequently
  2018-03-27  9:10   ` [Qemu-devel] " guangrong.xiao
@ 2018-03-28  9:42     ` Peter Xu
  -1 siblings, 0 replies; 44+ messages in thread
From: Peter Xu @ 2018-03-28  9:42 UTC (permalink / raw)
  To: guangrong.xiao
  Cc: kvm, mst, mtosatti, Xiao Guangrong, dgilbert, qemu-devel,
	wei.w.wang, jiang.biao2, pbonzini

On Tue, Mar 27, 2018 at 05:10:36PM +0800, guangrong.xiao@gmail.com wrote:

[...]

> +static int compress_threads_load_setup(void)
> +{
> +    int i, thread_count;
> +
> +    if (!migrate_use_compression()) {
> +        return 0;
> +    }
> +
> +    thread_count = migrate_decompress_threads();
> +    decompress_threads = g_new0(QemuThread, thread_count);
> +    decomp_param = g_new0(DecompressParam, thread_count);
> +    qemu_mutex_init(&decomp_done_lock);
> +    qemu_cond_init(&decomp_done_cond);
> +    for (i = 0; i < thread_count; i++) {
> +        if (inflateInit(&decomp_param[i].stream) != Z_OK) {
> +            goto exit;
> +        }
> +        decomp_param[i].stream.opaque = &decomp_param[i];

Same question as the encoding patch here, otherwise looks good to me.

Thanks,

-- 
Peter Xu

^ permalink raw reply	[flat|nested] 44+ messages in thread
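
(The goto-exit pattern quoted above depends on the cleanup path
skipping slots that never finished inflateInit(); a compact
standalone sketch of that setup/teardown pairing follows. The names
and the fixed NSTREAMS count are assumptions made for the
illustration.)

    #include <string.h>
    #include <zlib.h>

    #define NSTREAMS 4

    static z_stream streams[NSTREAMS];
    static int stream_ready[NSTREAMS];

    static void streams_cleanup(void)
    {
        for (int i = 0; i < NSTREAMS; i++) {
            if (!stream_ready[i]) {
                break;                  /* later slots were never set up */
            }
            inflateEnd(&streams[i]);
            stream_ready[i] = 0;
        }
    }

    static int streams_setup(void)
    {
        for (int i = 0; i < NSTREAMS; i++) {
            memset(&streams[i], 0, sizeof(streams[i]));
            if (inflateInit(&streams[i]) != Z_OK) {
                goto exit;              /* unwind slots already set up */
            }
            stream_ready[i] = 1;
        }
        return 0;
    exit:
        streams_cleanup();
        return -1;
    }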

* Re: [PATCH v2 04/10] migration: detect compression and decompression errors
  2018-03-27  9:10   ` [Qemu-devel] " guangrong.xiao
@ 2018-03-28  9:59     ` Peter Xu
  -1 siblings, 0 replies; 44+ messages in thread
From: Peter Xu @ 2018-03-28  9:59 UTC (permalink / raw)
  To: guangrong.xiao
  Cc: kvm, mst, mtosatti, Xiao Guangrong, dgilbert, qemu-devel,
	wei.w.wang, jiang.biao2, pbonzini

On Tue, Mar 27, 2018 at 05:10:37PM +0800, guangrong.xiao@gmail.com wrote:

[...]

> -static int compress_threads_load_setup(void)
> +static int compress_threads_load_setup(QEMUFile *f)
>  {
>      int i, thread_count;
>  
> @@ -2665,6 +2685,7 @@ static int compress_threads_load_setup(void)
>          }
>          decomp_param[i].stream.opaque = &decomp_param[i];
>  
> +        decomp_param[i].file = f;

On the source side the error will be set via:

        qemu_file_set_error(migrate_get_current()->to_dst_file, blen);

Maybe we can do similar things using migrate_incoming_get_current() to
avoid caching the QEMUFile multiple times?

I think neither option is great, since qemu_file_set_error() can be
called by multiple threads; but it only sets an error value, so maybe
it's fine.  Other than that it looks good to me.

Thanks,

-- 
Peter Xu

^ permalink raw reply	[flat|nested] 44+ messages in thread
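
(A standalone sketch of the concurrency point above, using a
hypothetical stand-in type rather than the real QEMUFile: when many
worker threads can report an error against one shared handle, keeping
only the first error with a compare-and-swap makes the unlocked calls
benign.)

    #include <stdatomic.h>

    /* Hypothetical stand-in for QEMUFile; only the error field matters. */
    typedef struct {
        _Atomic int last_error;         /* 0 means no error yet */
    } Stream;

    /* Any number of threads may call this; only the first error sticks. */
    static void stream_set_error(Stream *s, int err)
    {
        int expected = 0;
        atomic_compare_exchange_strong(&s->last_error, &expected, err);
    }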

* Re: [PATCH v2 06/10] migration: move some code ram_save_host_page
  2018-03-27  9:10   ` [Qemu-devel] " guangrong.xiao
@ 2018-03-28 10:05     ` Peter Xu
  -1 siblings, 0 replies; 44+ messages in thread
From: Peter Xu @ 2018-03-28 10:05 UTC (permalink / raw)
  To: guangrong.xiao
  Cc: kvm, mst, mtosatti, Xiao Guangrong, dgilbert, qemu-devel,
	wei.w.wang, jiang.biao2, pbonzini

On Tue, Mar 27, 2018 at 05:10:39PM +0800, guangrong.xiao@gmail.com wrote:
> From: Xiao Guangrong <xiaoguangrong@tencent.com>
> 
> Move some code from ram_save_target_page() to ram_save_host_page()
> to make it be more readable for latter patches that dramatically
> clean ram_save_target_page() up
> 
> Signed-off-by: Xiao Guangrong <xiaoguangrong@tencent.com>

Reviewed-by: Peter Xu <peterx@redhat.com>

-- 
Peter Xu

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH v2 02/10] migration: stop compression to allocate and free memory frequently
  2018-03-28  9:25     ` [Qemu-devel] " Peter Xu
@ 2018-03-29  3:41       ` Xiao Guangrong
  -1 siblings, 0 replies; 44+ messages in thread
From: Xiao Guangrong @ 2018-03-29  3:41 UTC (permalink / raw)
  To: Peter Xu
  Cc: kvm, mst, mtosatti, Xiao Guangrong, dgilbert, qemu-devel,
	wei.w.wang, jiang.biao2, pbonzini



On 03/28/2018 05:25 PM, Peter Xu wrote:
> On Tue, Mar 27, 2018 at 05:10:35PM +0800, guangrong.xiao@gmail.com wrote:
> 
> [...]
> 
>> @@ -357,10 +358,20 @@ static void compress_threads_save_cleanup(void)
>>       terminate_compression_threads();
>>       thread_count = migrate_compress_threads();
>>       for (i = 0; i < thread_count; i++) {
>> +        /*
>> +         * stream.opaque can be used to store private data, we use it
>> +         * as a indicator which shows if the thread is properly init'd
>> +         * or not
>> +         */
>> +        if (!comp_param[i].stream.opaque) {
>> +            break;
>> +        }
> 
> How about using comp_param[i].file?  The opaque seems to be hiding
> deeper, and...
> 

Yes, indeed, good suggestion.

>>           qemu_thread_join(compress_threads + i);
>>           qemu_fclose(comp_param[i].file);
>>           qemu_mutex_destroy(&comp_param[i].mutex);
>>           qemu_cond_destroy(&comp_param[i].cond);
>> +        deflateEnd(&comp_param[i].stream);
>> +        comp_param[i].stream.opaque = NULL;
>>       }
>>       qemu_mutex_destroy(&comp_done_lock);
>>       qemu_cond_destroy(&comp_done_cond);
>> @@ -370,12 +381,12 @@ static void compress_threads_save_cleanup(void)
>>       comp_param = NULL;
>>   }
>>   
>> -static void compress_threads_save_setup(void)
>> +static int compress_threads_save_setup(void)
>>   {
>>       int i, thread_count;
>>   
>>       if (!migrate_use_compression()) {
>> -        return;
>> +        return 0;
>>       }
>>       thread_count = migrate_compress_threads();
>>       compress_threads = g_new0(QemuThread, thread_count);
>> @@ -383,6 +394,12 @@ static void compress_threads_save_setup(void)
>>       qemu_cond_init(&comp_done_cond);
>>       qemu_mutex_init(&comp_done_lock);
>>       for (i = 0; i < thread_count; i++) {
>> +        if (deflateInit(&comp_param[i].stream,
>> +                           migrate_compress_level()) != Z_OK) {
> 
> (indent issue)
> 

Will fix.

>> +            goto exit;
>> +        }
>> +        comp_param[i].stream.opaque = &comp_param[i];
> 
> ...here, from the zlib documentation:
> 
>          ZEXTERN int ZEXPORT deflateInit OF((z_streamp strm, int level));
> 
>          Initializes the internal stream state for compression. The
>          fields zalloc, zfree and opaque must be initialized before by
>          the caller. If zalloc and zfree are set to Z_NULL, deflateInit
>          updates them to use default allocation functions.
> 
> So shall we init opaque first?  Otherwise looks good to me.

No, opaque needs to be initialized only if zalloc and zfree are
specified, which is not the case in this patch.

^ permalink raw reply	[flat|nested] 44+ messages in thread
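
(To illustrate the distinction: zlib only consumes opaque when custom
allocators are installed, passing it back to every zalloc/zfree call.
A minimal sketch, not taken from the patch:)

    #include <stdlib.h>
    #include <string.h>
    #include <zlib.h>

    /* Custom allocators receive the opaque pointer on every call. */
    static voidpf my_alloc(voidpf opaque, uInt items, uInt size)
    {
        (void)opaque;                   /* could carry a per-thread arena */
        return calloc(items, size);
    }

    static void my_free(voidpf opaque, voidpf address)
    {
        (void)opaque;
        free(address);
    }

    static int stream_setup_custom(z_stream *stream, int level, void *arena)
    {
        memset(stream, 0, sizeof(*stream));
        stream->zalloc = my_alloc;
        stream->zfree  = my_free;
        stream->opaque = arena;         /* must be set before deflateInit() */
        return deflateInit(stream, level) == Z_OK ? 0 : -1;
    }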

* Re: [PATCH v2 03/10] migration: stop decompression to allocate and free memory frequently
  2018-03-28  9:42     ` [Qemu-devel] " Peter Xu
@ 2018-03-29  3:43       ` Xiao Guangrong
  -1 siblings, 0 replies; 44+ messages in thread
From: Xiao Guangrong @ 2018-03-29  3:43 UTC (permalink / raw)
  To: Peter Xu
  Cc: kvm, mst, mtosatti, Xiao Guangrong, dgilbert, qemu-devel,
	wei.w.wang, jiang.biao2, pbonzini



On 03/28/2018 05:42 PM, Peter Xu wrote:
> On Tue, Mar 27, 2018 at 05:10:36PM +0800, guangrong.xiao@gmail.com wrote:
> 
> [...]
> 
>> +static int compress_threads_load_setup(void)
>> +{
>> +    int i, thread_count;
>> +
>> +    if (!migrate_use_compression()) {
>> +        return 0;
>> +    }
>> +
>> +    thread_count = migrate_decompress_threads();
>> +    decompress_threads = g_new0(QemuThread, thread_count);
>> +    decomp_param = g_new0(DecompressParam, thread_count);
>> +    qemu_mutex_init(&decomp_done_lock);
>> +    qemu_cond_init(&decomp_done_cond);
>> +    for (i = 0; i < thread_count; i++) {
>> +        if (inflateInit(&decomp_param[i].stream) != Z_OK) {
>> +            goto exit;
>> +        }
>> +        decomp_param[i].stream.opaque = &decomp_param[i];
> 
> Same question as the encoding patch here, otherwise looks good to me.

Thanks for pointing that out, will fix.

Hmm, can I treat it as your Reviewed-by for the next version?

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH v2 04/10] migration: detect compression and decompression errors
  2018-03-28  9:59     ` [Qemu-devel] " Peter Xu
@ 2018-03-29  3:51       ` Xiao Guangrong
  -1 siblings, 0 replies; 44+ messages in thread
From: Xiao Guangrong @ 2018-03-29  3:51 UTC (permalink / raw)
  To: Peter Xu
  Cc: kvm, mst, mtosatti, Xiao Guangrong, dgilbert, qemu-devel,
	wei.w.wang, jiang.biao2, pbonzini



On 03/28/2018 05:59 PM, Peter Xu wrote:
> On Tue, Mar 27, 2018 at 05:10:37PM +0800, guangrong.xiao@gmail.com wrote:
> 
> [...]
> 
>> -static int compress_threads_load_setup(void)
>> +static int compress_threads_load_setup(QEMUFile *f)
>>   {
>>       int i, thread_count;
>>   
>> @@ -2665,6 +2685,7 @@ static int compress_threads_load_setup(void)
>>           }
>>           decomp_param[i].stream.opaque = &decomp_param[i];
>>   
>> +        decomp_param[i].file = f;
> 
> On the source side the error will be set via:
> 
>          qemu_file_set_error(migrate_get_current()->to_dst_file, blen);
> 
> Maybe we can do similar things using migrate_incoming_get_current() to
> avoid caching the QEMUFile multiple times?
> 

I have considered it; however, it cannot work, as the @file used by the
ram loader is not the file obtained from migrate_incoming_get_current()
in some cases.

For example, in colo_process_incoming_thread(), the file passed to
qemu_loadvm_state() is an internal buffer, and it is not easy to switch
it to the incoming file.

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH v2 03/10] migration: stop decompression to allocate and free memory frequently
  2018-03-29  3:43       ` [Qemu-devel] " Xiao Guangrong
@ 2018-03-29  4:14         ` Peter Xu
  -1 siblings, 0 replies; 44+ messages in thread
From: Peter Xu @ 2018-03-29  4:14 UTC (permalink / raw)
  To: Xiao Guangrong
  Cc: kvm, mst, mtosatti, Xiao Guangrong, dgilbert, qemu-devel,
	wei.w.wang, jiang.biao2, pbonzini

On Thu, Mar 29, 2018 at 11:43:07AM +0800, Xiao Guangrong wrote:
> 
> 
> On 03/28/2018 05:42 PM, Peter Xu wrote:
> > On Tue, Mar 27, 2018 at 05:10:36PM +0800, guangrong.xiao@gmail.com wrote:
> > 
> > [...]
> > 
> > > +static int compress_threads_load_setup(void)
> > > +{
> > > +    int i, thread_count;
> > > +
> > > +    if (!migrate_use_compression()) {
> > > +        return 0;
> > > +    }
> > > +
> > > +    thread_count = migrate_decompress_threads();
> > > +    decompress_threads = g_new0(QemuThread, thread_count);
> > > +    decomp_param = g_new0(DecompressParam, thread_count);
> > > +    qemu_mutex_init(&decomp_done_lock);
> > > +    qemu_cond_init(&decomp_done_cond);
> > > +    for (i = 0; i < thread_count; i++) {
> > > +        if (inflateInit(&decomp_param[i].stream) != Z_OK) {
> > > +            goto exit;
> > > +        }
> > > +        decomp_param[i].stream.opaque = &decomp_param[i];
> > 
> > Same question as the encoding patch here, otherwise looks good to me.
> 
> Thanks for pointing that out, will fix.
> 
> Hmm, can I treat it as your Reviewed-by for the next version?

Yes :), as long as we drop the usage of z_stream.opaque and use an
existing field instead.

And for the previous patch too, since they are mostly the same.

Thanks,

-- 
Peter Xu

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH v2 04/10] migration: detect compression and decompression errors
  2018-03-29  3:51       ` [Qemu-devel] " Xiao Guangrong
@ 2018-03-29  4:25         ` Peter Xu
  -1 siblings, 0 replies; 44+ messages in thread
From: Peter Xu @ 2018-03-29  4:25 UTC (permalink / raw)
  To: Xiao Guangrong
  Cc: kvm, mst, mtosatti, Xiao Guangrong, dgilbert, qemu-devel,
	wei.w.wang, jiang.biao2, pbonzini

On Thu, Mar 29, 2018 at 11:51:03AM +0800, Xiao Guangrong wrote:
> 
> 
> On 03/28/2018 05:59 PM, Peter Xu wrote:
> > On Tue, Mar 27, 2018 at 05:10:37PM +0800, guangrong.xiao@gmail.com wrote:
> > 
> > [...]
> > 
> > > -static int compress_threads_load_setup(void)
> > > +static int compress_threads_load_setup(QEMUFile *f)
> > >   {
> > >       int i, thread_count;
> > > @@ -2665,6 +2685,7 @@ static int compress_threads_load_setup(void)
> > >           }
> > >           decomp_param[i].stream.opaque = &decomp_param[i];
> > > +        decomp_param[i].file = f;
> > 
> > On the source side the error will be set via:
> > 
> >          qemu_file_set_error(migrate_get_current()->to_dst_file, blen);
> > 
> > Maybe we can do similar things using migrate_incoming_get_current() to
> > avoid caching the QEMUFile multiple times?
> > 
> 
> I have considered it; however, it cannot work, as the @file used by
> the ram loader is not the file obtained from
> migrate_incoming_get_current() in some cases.
> 
> For example, in colo_process_incoming_thread(), the file passed to
> qemu_loadvm_state() is an internal buffer, and it is not easy to
> switch it to the incoming file.

I see. How about caching it in a global variable?  We have these
already:

    thread_count = migrate_decompress_threads();
    decompress_threads = g_new0(QemuThread, thread_count);
    decomp_param = g_new0(DecompressParam, thread_count);
    ...

IMHO we can add a new one too; at least we wouldn't cache it multiple
times (after all, the decomp_param[i]s are global variables too).

-- 
Peter Xu

^ permalink raw reply	[flat|nested] 44+ messages in thread
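
(A sketch of this suggestion, with stand-in declarations for the QEMU
symbols mentioned in the thread; the helper names are assumptions and
this is illustrative, not the merged code.)

    /* Stand-ins for the real QEMU declarations discussed above. */
    typedef struct QEMUFile QEMUFile;
    void qemu_file_set_error(QEMUFile *f, int ret);

    /* One module-level cache instead of a copy in every DecompressParam. */
    static QEMUFile *decomp_file;

    static void compress_threads_load_set_file(QEMUFile *f)
    {
        decomp_file = f;
    }

    /* Decompression workers report failures against the cached file. */
    static void decompress_report_error(int err)
    {
        qemu_file_set_error(decomp_file, err);
    }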

* Re: [PATCH v2 04/10] migration: detect compression and decompression errors
  2018-03-29  4:25         ` [Qemu-devel] " Peter Xu
@ 2018-03-30  3:11           ` Xiao Guangrong
  -1 siblings, 0 replies; 44+ messages in thread
From: Xiao Guangrong @ 2018-03-30  3:11 UTC (permalink / raw)
  To: Peter Xu
  Cc: kvm, mst, mtosatti, Xiao Guangrong, dgilbert, qemu-devel,
	wei.w.wang, jiang.biao2, pbonzini



On 03/29/2018 12:25 PM, Peter Xu wrote:
> On Thu, Mar 29, 2018 at 11:51:03AM +0800, Xiao Guangrong wrote:
>>
>>
>> On 03/28/2018 05:59 PM, Peter Xu wrote:
>>> On Tue, Mar 27, 2018 at 05:10:37PM +0800, guangrong.xiao@gmail.com wrote:
>>>
>>> [...]
>>>
>>>> -static int compress_threads_load_setup(void)
>>>> +static int compress_threads_load_setup(QEMUFile *f)
>>>>    {
>>>>        int i, thread_count;
>>>> @@ -2665,6 +2685,7 @@ static int compress_threads_load_setup(void)
>>>>            }
>>>>            decomp_param[i].stream.opaque = &decomp_param[i];
>>>> +        decomp_param[i].file = f;
>>>
>>> On the source side the error will be set via:
>>>
>>>           qemu_file_set_error(migrate_get_current()->to_dst_file, blen);
>>>
>>> Maybe we can do similar things using migrate_incoming_get_current() to
>>> avoid caching the QEMUFile multiple times?
>>>
>>
>> I have considered it; however, it cannot work, as the @file used by
>> the ram loader is not the file obtained from
>> migrate_incoming_get_current() in some cases.
>>
>> For example, in colo_process_incoming_thread(), the file passed to
>> qemu_loadvm_state() is an internal buffer, and it is not easy to
>> switch it to the incoming file.
> 
> I see. How about caching it in a global variable?  We have these
> already:
> 
>      thread_count = migrate_decompress_threads();
>      decompress_threads = g_new0(QemuThread, thread_count);
>      decomp_param = g_new0(DecompressParam, thread_count);
>      ...
> 
> IMHO we can add a new one too; at least we wouldn't cache it multiple
> times (after all, the decomp_param[i]s are global variables too).
> 

Nice, that works for me. I will add your Reviewed-by to this patch
as well, if you do not mind. :)

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH v2 04/10] migration: detect compression and decompression errors
  2018-03-30  3:11           ` [Qemu-devel] " Xiao Guangrong
@ 2018-04-02  4:26             ` Peter Xu
  -1 siblings, 0 replies; 44+ messages in thread
From: Peter Xu @ 2018-04-02  4:26 UTC (permalink / raw)
  To: Xiao Guangrong
  Cc: kvm, mst, mtosatti, Xiao Guangrong, dgilbert, qemu-devel,
	wei.w.wang, jiang.biao2, pbonzini

On Fri, Mar 30, 2018 at 11:11:27AM +0800, Xiao Guangrong wrote:
> 
> 
> On 03/29/2018 12:25 PM, Peter Xu wrote:
> > On Thu, Mar 29, 2018 at 11:51:03AM +0800, Xiao Guangrong wrote:
> > > 
> > > 
> > > On 03/28/2018 05:59 PM, Peter Xu wrote:
> > > > On Tue, Mar 27, 2018 at 05:10:37PM +0800, guangrong.xiao@gmail.com wrote:
> > > > 
> > > > [...]
> > > > 
> > > > > -static int compress_threads_load_setup(void)
> > > > > +static int compress_threads_load_setup(QEMUFile *f)
> > > > >    {
> > > > >        int i, thread_count;
> > > > > @@ -2665,6 +2685,7 @@ static int compress_threads_load_setup(void)
> > > > >            }
> > > > >            decomp_param[i].stream.opaque = &decomp_param[i];
> > > > > +        decomp_param[i].file = f;
> > > > 
> > > > On the source side the error will be set via:
> > > > 
> > > >           qemu_file_set_error(migrate_get_current()->to_dst_file, blen);
> > > > 
> > > > Maybe we can do similar things using migrate_incoming_get_current() to
> > > > avoid caching the QEMUFile multiple times?
> > > > 
> > > 
> > > I have considered it; however, it cannot work, as the @file used
> > > by the ram loader is not the file obtained from
> > > migrate_incoming_get_current() in some cases.
> > > 
> > > For example, in colo_process_incoming_thread(), the file passed to
> > > qemu_loadvm_state() is an internal buffer, and it is not easy to
> > > switch it to the incoming file.
> > 
> > I see. How about caching it in a global variable?  We have these
> > already:
> > 
> >      thread_count = migrate_decompress_threads();
> >      decompress_threads = g_new0(QemuThread, thread_count);
> >      decomp_param = g_new0(DecompressParam, thread_count);
> >      ...
> > 
> > IMHO we can add a new one too; at least we wouldn't cache it multiple
> > times (after all, the decomp_param[i]s are global variables too).
> > 
> 
> Nice, that works for me. I will add your Reviewed-by to this patch
> as well, if you do not mind. :)

Yes, please. :)

Thanks,

-- 
Peter Xu

^ permalink raw reply	[flat|nested] 44+ messages in thread

end of thread

Thread overview: 22 messages
2018-03-27  9:10 [PATCH v2 00/10] migration: improve and cleanup compression guangrong.xiao
2018-03-27  9:10 ` [PATCH v2 01/10] migration: stop compressing page in migration thread guangrong.xiao
2018-03-27  9:10 ` [PATCH v2 02/10] migration: stop compression to allocate and free memory frequently guangrong.xiao
2018-03-28  9:25   ` Peter Xu
2018-03-29  3:41     ` Xiao Guangrong
2018-03-27  9:10 ` [PATCH v2 03/10] migration: stop decompression to allocate and free memory frequently guangrong.xiao
2018-03-28  9:42   ` Peter Xu
2018-03-29  3:43     ` Xiao Guangrong
2018-03-29  4:14       ` Peter Xu
2018-03-27  9:10 ` [PATCH v2 04/10] migration: detect compression and decompression errors guangrong.xiao
2018-03-28  9:59   ` Peter Xu
2018-03-29  3:51     ` Xiao Guangrong
2018-03-29  4:25       ` Peter Xu
2018-03-30  3:11         ` Xiao Guangrong
2018-04-02  4:26           ` Peter Xu
2018-03-27  9:10 ` [PATCH v2 05/10] migration: introduce control_save_page() guangrong.xiao
2018-03-27  9:10 ` [PATCH v2 06/10] migration: move some code ram_save_host_page guangrong.xiao
2018-03-28 10:05   ` Peter Xu
2018-03-27  9:10 ` [PATCH v2 07/10] migration: move calling control_save_page to the common place guangrong.xiao
2018-03-27  9:10 ` [PATCH v2 08/10] migration: move calling save_zero_page to the common place guangrong.xiao
2018-03-27  9:10 ` [PATCH v2 09/10] migration: introduce save_normal_page() guangrong.xiao
2018-03-27  9:10 ` [PATCH v2 10/10] migration: remove ram_save_compressed_page() guangrong.xiao
