* [PATCH v6 0/3] migration: compression optimization
@ 2018-09-06  7:00 ` guangrong.xiao
  0 siblings, 0 replies; 18+ messages in thread
From: guangrong.xiao @ 2018-09-06  7:00 UTC (permalink / raw)
  To: pbonzini, mst, mtosatti
  Cc: kvm, quintela, Xiao Guangrong, qemu-devel, peterx, dgilbert,
	wei.w.wang, jiang.biao2

From: Xiao Guangrong <xiaoguangrong@tencent.com>

Changelog in v6:

Thanks to Juan's review, in this version we
1) move the flush of compressed data to find_dirty_block(), where it hits
   the end of a memblock
2) use save_page_use_compression instead of migrate_use_compression in
   flush_compressed_data

Xiao Guangrong (3):
  migration: do not flush_compressed_data at the end of iteration
  migration: show the statistics of compression
  migration: use save_page_use_compression in flush_compressed_data

 hmp.c                 | 13 +++++++++++
 migration/migration.c | 12 ++++++++++
 migration/ram.c       | 63 +++++++++++++++++++++++++++++++++++++++++++--------
 migration/ram.h       |  1 +
 qapi/migration.json   | 26 ++++++++++++++++++++-
 5 files changed, 105 insertions(+), 10 deletions(-)

-- 
2.14.4

* [PATCH v6 1/3] migration: do not flush_compressed_data at the end of iteration
  2018-09-06  7:00 ` [Qemu-devel] " guangrong.xiao
@ 2018-09-06  7:00   ` guangrong.xiao
  -1 siblings, 0 replies; 18+ messages in thread
From: guangrong.xiao @ 2018-09-06  7:00 UTC (permalink / raw)
  To: pbonzini, mst, mtosatti
  Cc: kvm, quintela, Xiao Guangrong, qemu-devel, peterx, dgilbert,
	wei.w.wang, jiang.biao2

From: Xiao Guangrong <xiaoguangrong@tencent.com>

flush_compressed_data() needs to wait for all compression threads to
finish their work; after that, all threads are idle until the
migration feeds new requests to them, so reducing the number of
calls can improve throughput and use CPU resources more effectively.

We do not need to flush all threads at the end of each iteration;
the data can be kept locally until the memory block changes or
memory migration starts over, in which case we will meet a dirtied
page that may still exist in the compression threads' ring.

Signed-off-by: Xiao Guangrong <xiaoguangrong@tencent.com>
---
 migration/ram.c | 18 +++++++++++-------
 1 file changed, 11 insertions(+), 7 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 2add09174d..e152831254 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1996,17 +1996,22 @@ static bool find_dirty_block(RAMState *rs, PageSearchStatus *pss, bool *again)
         pss->page = 0;
         pss->block = QLIST_NEXT_RCU(pss->block, next);
         if (!pss->block) {
+            /*
+             * If memory migration starts over, we will meet a dirtied page
+             * which may still exist in the compression threads' ring, so we
+             * should flush the compressed data to make sure the new page
+             * is not overwritten by the old one on the destination.
+             *
+             * Also, if xbzrle is on, stop using data compression at this
+             * point. In theory, xbzrle can do better than compression.
+             */
+            flush_compressed_data(rs);
+
             /* Hit the end of the list */
             pss->block = QLIST_FIRST_RCU(&ram_list.blocks);
             /* Flag that we've looped */
             pss->complete_round = true;
             rs->ram_bulk_stage = false;
-            if (migrate_use_xbzrle()) {
-                /* If xbzrle is on, stop using the data compression at this
-                 * point. In theory, xbzrle can do better than compression.
-                 */
-                flush_compressed_data(rs);
-            }
         }
         /* Didn't find anything this time, but try again on the new block */
         *again = true;
@@ -3219,7 +3224,6 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
         }
         i++;
     }
-    flush_compressed_data(rs);
     rcu_read_unlock();
 
     /*
-- 
2.14.4
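
For context, a rough sketch of what each flush costs, simplified from the
flush_compressed_data() of this period (the names comp_param, comp_done_lock
and comp_done_cond follow migration/ram.c, but error handling is abbreviated
and details may differ): the caller blocks until every worker is done, then
drains each worker's locally buffered output in turn, so no new page can be
queued for compression while a flush is in progress.

static void flush_compressed_data_sketch(RAMState *rs)
{
    int idx, len, thread_count = migrate_compress_threads();

    /* Block until every compression worker has finished its request. */
    qemu_mutex_lock(&comp_done_lock);
    for (idx = 0; idx < thread_count; idx++) {
        while (!comp_param[idx].done) {
            qemu_cond_wait(&comp_done_cond, &comp_done_lock);
        }
    }
    qemu_mutex_unlock(&comp_done_lock);

    /* Drain each worker's locally buffered output into the stream. */
    for (idx = 0; idx < thread_count; idx++) {
        qemu_mutex_lock(&comp_param[idx].mutex);
        if (!comp_param[idx].quit) {
            len = qemu_put_qemu_file(rs->f, comp_param[idx].file);
            ram_counters.transferred += len;
        }
        qemu_mutex_unlock(&comp_param[idx].mutex);
    }
}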

* [PATCH v6 2/3] migration: show the statistics of compression
  2018-09-06  7:00 ` [Qemu-devel] " guangrong.xiao
@ 2018-09-06  7:01   ` guangrong.xiao
  -1 siblings, 0 replies; 18+ messages in thread
From: guangrong.xiao @ 2018-09-06  7:01 UTC (permalink / raw)
  To: pbonzini, mst, mtosatti
  Cc: kvm, quintela, Xiao Guangrong, qemu-devel, peterx, dgilbert,
	wei.w.wang, jiang.biao2

From: Xiao Guangrong <xiaoguangrong@tencent.com>

Currently, the reported statistics include:
pages: number of pages compressed and transferred to the target VM
busy: count of times that no free thread was available to compress data
busy-rate: ratio of the busy count to the total number of pages handled
compressed-size: number of bytes after compression
compression-rate: ratio of uncompressed size to compressed size
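
With this applied, the new fields show up in `info migrate` output roughly
as below (the labels come from the hmp.c hunk in this patch; the numbers are
purely illustrative):

  compression pages: 386708 pages
  compression busy: 1023
  compression busy rate: 0.02
  compressed size: 367392108
  compression rate: 4.28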

Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Xiao Guangrong <xiaoguangrong@tencent.com>
---
 hmp.c                 | 13 +++++++++++++
 migration/migration.c | 12 ++++++++++++
 migration/ram.c       | 41 ++++++++++++++++++++++++++++++++++++++++-
 migration/ram.h       |  1 +
 qapi/migration.json   | 26 +++++++++++++++++++++++++-
 5 files changed, 91 insertions(+), 2 deletions(-)

diff --git a/hmp.c b/hmp.c
index 4975fa56b0..f57b23d889 100644
--- a/hmp.c
+++ b/hmp.c
@@ -271,6 +271,19 @@ void hmp_info_migrate(Monitor *mon, const QDict *qdict)
                        info->xbzrle_cache->overflow);
     }
 
+    if (info->has_compression) {
+        monitor_printf(mon, "compression pages: %" PRIu64 " pages\n",
+                       info->compression->pages);
+        monitor_printf(mon, "compression busy: %" PRIu64 "\n",
+                       info->compression->busy);
+        monitor_printf(mon, "compression busy rate: %0.2f\n",
+                       info->compression->busy_rate);
+        monitor_printf(mon, "compressed size: %" PRIu64 "\n",
+                       info->compression->compressed_size);
+        monitor_printf(mon, "compression rate: %0.2f\n",
+                       info->compression->compression_rate);
+    }
+
     if (info->has_cpu_throttle_percentage) {
         monitor_printf(mon, "cpu throttle percentage: %" PRIu64 "\n",
                        info->cpu_throttle_percentage);
diff --git a/migration/migration.c b/migration/migration.c
index 4b316ec343..f1d662f928 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -758,6 +758,18 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
         info->xbzrle_cache->overflow = xbzrle_counters.overflow;
     }
 
+    if (migrate_use_compression()) {
+        info->has_compression = true;
+        info->compression = g_malloc0(sizeof(*info->compression));
+        info->compression->pages = compression_counters.pages;
+        info->compression->busy = compression_counters.busy;
+        info->compression->busy_rate = compression_counters.busy_rate;
+        info->compression->compressed_size =
+                                    compression_counters.compressed_size;
+        info->compression->compression_rate =
+                                    compression_counters.compression_rate;
+    }
+
     if (cpu_throttle_active()) {
         info->has_cpu_throttle_percentage = true;
         info->cpu_throttle_percentage = cpu_throttle_get_percentage();
diff --git a/migration/ram.c b/migration/ram.c
index e152831254..65a563993d 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -301,6 +301,15 @@ struct RAMState {
     uint64_t num_dirty_pages_period;
     /* xbzrle misses since the beginning of the period */
     uint64_t xbzrle_cache_miss_prev;
+
+    /* compression statistics since the beginning of the period */
+    /* count of times that no free thread was available to compress data */
+    uint64_t compress_thread_busy_prev;
+    /* number of bytes after compression */
+    uint64_t compressed_size_prev;
+    /* number of compressed pages */
+    uint64_t compress_pages_prev;
+
     /* total handled target pages at the beginning of period */
     uint64_t target_page_count_prev;
     /* total handled target pages since start */
@@ -338,6 +347,8 @@ struct PageSearchStatus {
 };
 typedef struct PageSearchStatus PageSearchStatus;
 
+CompressionStats compression_counters;
+
 struct CompressParam {
     bool done;
     bool quit;
@@ -1593,6 +1604,7 @@ uint64_t ram_pagesize_summary(void)
 static void migration_update_rates(RAMState *rs, int64_t end_time)
 {
     uint64_t page_count = rs->target_page_count - rs->target_page_count_prev;
+    double compressed_size;
 
     /* calculate period counters */
     ram_counters.dirty_pages_rate = rs->num_dirty_pages_period * 1000
@@ -1607,6 +1619,26 @@ static void migration_update_rates(RAMState *rs, int64_t end_time)
             rs->xbzrle_cache_miss_prev) / page_count;
         rs->xbzrle_cache_miss_prev = xbzrle_counters.cache_miss;
     }
+
+    if (migrate_use_compression()) {
+        compression_counters.busy_rate = (double)(compression_counters.busy -
+            rs->compress_thread_busy_prev) / page_count;
+        rs->compress_thread_busy_prev = compression_counters.busy;
+
+        compressed_size = compression_counters.compressed_size -
+                          rs->compressed_size_prev;
+        if (compressed_size) {
+            double uncompressed_size = (compression_counters.pages -
+                                    rs->compress_pages_prev) * TARGET_PAGE_SIZE;
+
+            /* Compression-Ratio = Uncompressed-size / Compressed-size */
+            compression_counters.compression_rate =
+                                        uncompressed_size / compressed_size;
+
+            rs->compress_pages_prev = compression_counters.pages;
+            rs->compressed_size_prev = compression_counters.compressed_size;
+        }
+    }
 }
 
 static void migration_bitmap_sync(RAMState *rs)
@@ -1888,10 +1920,16 @@ exit:
 static void
 update_compress_thread_counts(const CompressParam *param, int bytes_xmit)
 {
+    ram_counters.transferred += bytes_xmit;
+
     if (param->zero_page) {
         ram_counters.duplicate++;
+        return;
     }
-    ram_counters.transferred += bytes_xmit;
+
+    /* 8 is the size of the page header with RAM_SAVE_FLAG_CONTINUE. */
+    compression_counters.compressed_size += bytes_xmit - 8;
+    compression_counters.pages++;
 }
 
 static void flush_compressed_data(RAMState *rs)
@@ -2264,6 +2302,7 @@ static bool save_compress_page(RAMState *rs, RAMBlock *block, ram_addr_t offset)
         return true;
     }
 
+    compression_counters.busy++;
     return false;
 }
 
diff --git a/migration/ram.h b/migration/ram.h
index 457bf54b8c..a139066846 100644
--- a/migration/ram.h
+++ b/migration/ram.h
@@ -36,6 +36,7 @@
 
 extern MigrationStats ram_counters;
 extern XBZRLECacheStats xbzrle_counters;
+extern CompressionStats compression_counters;
 
 int xbzrle_cache_resize(int64_t new_size, Error **errp);
 uint64_t ram_bytes_remaining(void);
diff --git a/qapi/migration.json b/qapi/migration.json
index f62d3f9a4b..6e8c21258a 100644
--- a/qapi/migration.json
+++ b/qapi/migration.json
@@ -75,6 +75,27 @@
            'cache-miss': 'int', 'cache-miss-rate': 'number',
            'overflow': 'int' } }
 
+##
+# @CompressionStats:
+#
+# Detailed migration compression statistics
+#
+# @pages: number of pages compressed and transferred to the target VM
+#
+# @busy: count of times that no free thread was available to compress data
+#
+# @busy-rate: ratio of the busy count to the total number of pages handled
+#
+# @compressed-size: number of bytes after compression
+#
+# @compression-rate: ratio of uncompressed size to compressed size
+#
+# Since: 3.1
+##
+{ 'struct': 'CompressionStats',
+  'data': {'pages': 'int', 'busy': 'int', 'busy-rate': 'number',
+	   'compressed-size': 'int', 'compression-rate': 'number' } }
+
 ##
 # @MigrationStatus:
 #
@@ -172,6 +193,8 @@
 #           only present when the postcopy-blocktime migration capability
 #           is enabled. (Since 3.0)
 #
+# @compression: migration compression statistics; only returned if the
+#           compression feature is on and status is 'active' or 'completed' (Since 3.1)
 #
 # Since: 0.14.0
 ##
@@ -186,7 +209,8 @@
            '*cpu-throttle-percentage': 'int',
            '*error-desc': 'str',
            '*postcopy-blocktime' : 'uint32',
-           '*postcopy-vcpu-blocktime': ['uint32']} }
+           '*postcopy-vcpu-blocktime': ['uint32'],
+           '*compression': 'CompressionStats'} }
 
 ##
 # @query-migrate:
-- 
2.14.4
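
To make the period accounting concrete, a worked example with made-up
numbers, following the formulas in migration_update_rates() above: suppose
the period handled 255000 target pages (TARGET_PAGE_SIZE = 4096), 5000
compression attempts found no free thread, and 250000 pages were compressed
into 240000000 bytes. Then:

  busy_rate         = 5000 / 255000          ~= 0.02
  uncompressed_size = 250000 * 4096           = 1024000000 bytes
  compression_rate  = 1024000000 / 240000000 ~= 4.27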

* [PATCH v6 3/3] migration: use save_page_use_compression in flush_compressed_data
  2018-09-06  7:00 ` [Qemu-devel] " guangrong.xiao
@ 2018-09-06  7:01   ` guangrong.xiao
  -1 siblings, 0 replies; 18+ messages in thread
From: guangrong.xiao @ 2018-09-06  7:01 UTC (permalink / raw)
  To: pbonzini, mst, mtosatti
  Cc: kvm, quintela, Xiao Guangrong, qemu-devel, peterx, dgilbert,
	wei.w.wang, jiang.biao2

From: Xiao Guangrong <xiaoguangrong@tencent.com>

This avoids touching the compression locks when xbzrle and
compression are both enabled.

Signed-off-by: Xiao Guangrong <xiaoguangrong@tencent.com>
---
 migration/ram.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/migration/ram.c b/migration/ram.c
index 65a563993d..747dd9208b 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1932,11 +1932,13 @@ update_compress_thread_counts(const CompressParam *param, int bytes_xmit)
     compression_counters.pages++;
 }
 
+static bool save_page_use_compression(RAMState *rs);
+
 static void flush_compressed_data(RAMState *rs)
 {
     int idx, len, thread_count;
 
-    if (!migrate_use_compression()) {
+    if (!save_page_use_compression(rs)) {
         return;
     }
     thread_count = migrate_compress_threads();
-- 
2.14.4
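
For reference, save_page_use_compression() is defined later in
migration/ram.c (hence the forward declaration above); a sketch of its logic
in this period (reconstructed here, so details may differ) shows why it is
the right guard: once the bulk stage is over and xbzrle is enabled, it
returns false, so the flush can skip the compression locks entirely.

static bool save_page_use_compression(RAMState *rs)
{
    if (!migrate_use_compression()) {
        return false;
    }

    /*
     * If xbzrle is on, stop using data compression after the first
     * round of migration. In theory, xbzrle can do better than
     * compression.
     */
    if (rs->ram_bulk_stage || !migrate_use_xbzrle()) {
        return true;
    }

    return false;
}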

* Re: [PATCH v6 1/3] migration: do not flush_compressed_data at the end of iteration
  2018-09-06  7:00   ` [Qemu-devel] " guangrong.xiao
@ 2018-09-06  9:38     ` Juan Quintela
  -1 siblings, 0 replies; 18+ messages in thread
From: Juan Quintela @ 2018-09-06  9:38 UTC (permalink / raw)
  To: guangrong.xiao
  Cc: kvm, mst, mtosatti, Xiao Guangrong, dgilbert, peterx, qemu-devel,
	wei.w.wang, jiang.biao2, pbonzini

guangrong.xiao@gmail.com wrote:
> From: Xiao Guangrong <xiaoguangrong@tencent.com>
>
> flush_compressed_data() needs to wait for all compression threads to
> finish their work; after that, all threads are idle until the
> migration feeds new requests to them, so reducing the number of
> calls can improve throughput and use CPU resources more effectively.
>
> We do not need to flush all threads at the end of each iteration;
> the data can be kept locally until the memory block changes or
> memory migration starts over, in which case we will meet a dirtied
> page that may still exist in the compression threads' ring.
>
> Signed-off-by: Xiao Guangrong <xiaoguangrong@tencent.com>


Reviewed-by: Juan Quintela <quintela@redhat.com>

* Re: [PATCH v6 3/3] migration: use save_page_use_compression in flush_compressed_data
  2018-09-06  7:01   ` [Qemu-devel] " guangrong.xiao
@ 2018-09-06  9:49     ` Juan Quintela
  -1 siblings, 0 replies; 18+ messages in thread
From: Juan Quintela @ 2018-09-06  9:49 UTC (permalink / raw)
  To: guangrong.xiao
  Cc: kvm, mst, mtosatti, Xiao Guangrong, dgilbert, peterx, qemu-devel,
	wei.w.wang, jiang.biao2, pbonzini

guangrong.xiao@gmail.com wrote:
> From: Xiao Guangrong <xiaoguangrong@tencent.com>
>
> This avoids touching the compression locks when xbzrle and
> compression are both enabled.
>
> Signed-off-by: Xiao Guangrong <xiaoguangrong@tencent.com>

Reviewed-by: Juan Quintela <quintela@redhat.com>

* Re: [PATCH v6 0/3] migration: compression optimization
  2018-09-06  7:00 ` [Qemu-devel] " guangrong.xiao
@ 2018-09-06 11:03   ` Juan Quintela
  -1 siblings, 0 replies; 18+ messages in thread
From: Juan Quintela @ 2018-09-06 11:03 UTC (permalink / raw)
  To: guangrong.xiao
  Cc: kvm, mst, mtosatti, Xiao Guangrong, dgilbert, peterx, qemu-devel,
	wei.w.wang, jiang.biao2, pbonzini

guangrong.xiao@gmail.com wrote:
> From: Xiao Guangrong <xiaoguangrong@tencent.com>
>
> Changelog in v6:
>
> Thanks to Juan's review, in this version we
> 1) move the flush of compressed data to find_dirty_block(), where it hits
>    the end of a memblock
> 2) use save_page_use_compression instead of migrate_use_compression in
>    flush_compressed_data
>
> Xiao Guangrong (3):
>   migration: do not flush_compressed_data at the end of iteration
>   migration: show the statistics of compression
>   migration: use save_page_use_compression in flush_compressed_data
>
>  hmp.c                 | 13 +++++++++++
>  migration/migration.c | 12 ++++++++++
>  migration/ram.c       | 63 +++++++++++++++++++++++++++++++++++++++++++--------
>  migration/ram.h       |  1 +
>  qapi/migration.json   | 26 ++++++++++++++++++++-
>  5 files changed, 105 insertions(+), 10 deletions(-)

queued

* Re: [PATCH v6 0/3] migration: compression optimization
  2018-09-06 11:03   ` [Qemu-devel] " Juan Quintela
@ 2018-09-13  7:45     ` Xiao Guangrong
  -1 siblings, 0 replies; 18+ messages in thread
From: Xiao Guangrong @ 2018-09-13  7:45 UTC (permalink / raw)
  To: quintela
  Cc: kvm, mst, mtosatti, Xiao Guangrong, dgilbert, peterx, qemu-devel,
	wei.w.wang, jiang.biao2, pbonzini



On 09/06/2018 07:03 PM, Juan Quintela wrote:
> guangrong.xiao@gmail.com wrote:
>> From: Xiao Guangrong <xiaoguangrong@tencent.com>
>>
>> Changelog in v6:
>>
>> Thanks to Juan's review, in this version we
>> 1) move the flush of compressed data to find_dirty_block(), where it hits
>>    the end of a memblock
>> 2) use save_page_use_compression instead of migrate_use_compression in
>>     flush_compressed_data
>>
>> Xiao Guangrong (3):
>>    migration: do not flush_compressed_data at the end of iteration
>>    migration: show the statistics of compression
>>    migration: use save_page_use_compression in flush_compressed_data
>>
>>   hmp.c                 | 13 +++++++++++
>>   migration/migration.c | 12 ++++++++++
>>   migration/ram.c       | 63 +++++++++++++++++++++++++++++++++++++++++++--------
>>   migration/ram.h       |  1 +
>>   qapi/migration.json   | 26 ++++++++++++++++++++-
>>   5 files changed, 105 insertions(+), 10 deletions(-)
> 
> queued
> 

Hi Juan,

Could I ask where you have queued these patches? I did not find
them in your git tree at
    https://github.com/juanquintela/qemu (migration/next or migration.next)

I am working on the next part of migration, and it would be more
convenient to base it on your tree. :)

Thanks!

* Re: [PATCH v6 0/3] migration: compression optimization
  2018-09-13  7:45     ` [Qemu-devel] " Xiao Guangrong
@ 2018-09-13 13:26       ` Juan Quintela
  -1 siblings, 0 replies; 18+ messages in thread
From: Juan Quintela @ 2018-09-13 13:26 UTC (permalink / raw)
  To: Xiao Guangrong
  Cc: kvm, mst, mtosatti, Xiao Guangrong, dgilbert, peterx, qemu-devel,
	wei.w.wang, jiang.biao2, pbonzini

Xiao Guangrong <guangrong.xiao@gmail.com> wrote:
> On 09/06/2018 07:03 PM, Juan Quintela wrote:
>> guangrong.xiao@gmail.com wrote:
>>> From: Xiao Guangrong <xiaoguangrong@tencent.com>
>>>
>>> Changelog in v6:
>>>
>>> Thanks to Juan's review, in this version we
>>> 1) move the flush of compressed data to find_dirty_block(), where it hits
>>>    the end of a memblock
>>> 2) use save_page_use_compression instead of migrate_use_compression in
>>>     flush_compressed_data
>>>
>>> Xiao Guangrong (3):
>>>    migration: do not flush_compressed_data at the end of iteration
>>>    migration: show the statistics of compression
>>>    migration: use save_page_use_compression in flush_compressed_data
>>>
>>>   hmp.c                 | 13 +++++++++++
>>>   migration/migration.c | 12 ++++++++++
>>>   migration/ram.c       | 63 +++++++++++++++++++++++++++++++++++++++++++--------
>>>   migration/ram.h       |  1 +
>>>   qapi/migration.json   | 26 ++++++++++++++++++++-
>>>   5 files changed, 105 insertions(+), 10 deletions(-)
>>
>> queued
>>
>
> Hi Juan,
>
> Could I ask where you have queued these patches? I did not find
> them in your git tree at
>    https://github.com/juanquintela/qemu (migration/next or migration.next)
>
> I am working on the next part of migration, and it would be more
> convenient to base it on your tree. :)

They are there now; I already sent the pull request.  I am going to be
on vacation for the following four weeks.  Migration issues will be
handled by David.

Thanks, Juan.
