* [Qemu-devel] [PATCH 0/5] Make RAMState dynamic
@ 2017-06-01 22:08 Juan Quintela
  2017-06-01 22:08 ` [Qemu-devel] [PATCH 1/5] ram: Call migration_page_queue_free() at ram_migration_cleanup() Juan Quintela
                   ` (4 more replies)
  0 siblings, 5 replies; 20+ messages in thread
From: Juan Quintela @ 2017-06-01 22:08 UTC (permalink / raw)
  To: qemu-devel; +Cc: dgilbert, lvivier, peterx

Hi

This series makes RAMState a dynamically allocated variable.  We create
it at the beginning of migration and free it when migration ends.

- Move ZERO_TARGET_PAGE to XBZRLE.
- Print the number of remaining pages, not only the number of bytes;
  this makes the statistics easier to use.
- How do we export statistics for "info migrate"?  Right now we use
  accessor functions into RAMState, but we already have a struct with
  the fields that we need: MigrationStats.  Use it instead of having to
  create a new accessor function for each new field that we print.
  There is another reason for this: we want to make RAMState dynamic,
  and the statistics are accessed after migration has finished.
- Finally, make RAMState dynamic.
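The overall shape of the change can be sketched as follows (a minimal,
illustrative sketch; the names here are hypothetical, not the actual
QEMU symbols): per-migration working state is heap-allocated for the
duration of the migration, while counters that "info migrate" needs
afterwards live in a struct with static storage duration.

```c
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical sketch of the pattern in this series: state that only
 * exists while a migration is running vs. counters that survive it. */
typedef struct {
    uint64_t iterations;        /* per-migration working state */
} SketchRAMState;

typedef struct {
    uint64_t transferred;       /* statistics read after migration */
} SketchStats;

static SketchRAMState *sketch_state;  /* NULL outside of migration */
SketchStats sketch_counters;          /* static storage: survives cleanup */

/* Allocate the state at migration start; returns 0 on success. */
int sketch_migration_start(void)
{
    sketch_state = calloc(1, sizeof(*sketch_state));
    return sketch_state ? 0 : -1;
}

/* Free the state at migration end; counters remain readable. */
void sketch_migration_cleanup(void)
{
    free(sketch_state);
    sketch_state = NULL;
}
```

After cleanup, `sketch_counters` is still valid, which is the property
the series relies on for "info migrate" once RAMState is gone.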

This series is on top of the "previous consistent output" series.

Please, review.

Thanks, Juan.

Juan Quintela (5):
  ram: Call migration_page_queue_free() at ram_migration_cleanup()
  ram: Move ZERO_TARGET_PAGE inside XBZRLE
  migration: Print statistics about the number of remaining target pages
  ram: Use MigrationStats for statistics
  ram: Make RAMState dynamic

 migration/migration.c |  35 ++++----
 migration/ram.c       | 243 ++++++++++++++++++--------------------------------
 migration/ram.h       |  16 +---
 qapi-schema.json      |   6 +-
 4 files changed, 113 insertions(+), 187 deletions(-)

-- 
2.9.4

^ permalink raw reply	[flat|nested] 20+ messages in thread

* [Qemu-devel] [PATCH 1/5] ram: Call migration_page_queue_free() at ram_migration_cleanup()
  2017-06-01 22:08 [Qemu-devel] [PATCH 0/5] Make RAMState dynamic Juan Quintela
@ 2017-06-01 22:08 ` Juan Quintela
  2017-06-05 11:24   ` Dr. David Alan Gilbert
  2017-06-06  7:58   ` Peter Xu
  2017-06-01 22:08 ` [Qemu-devel] [PATCH 2/5] ram: Move ZERO_TARGET_PAGE inside XBZRLE Juan Quintela
                   ` (3 subsequent siblings)
  4 siblings, 2 replies; 20+ messages in thread
From: Juan Quintela @ 2017-06-01 22:08 UTC (permalink / raw)
  To: qemu-devel; +Cc: dgilbert, lvivier, peterx

We shouldn't be using that memory any later than that point.

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/migration.c | 2 --
 migration/ram.c       | 5 +++--
 migration/ram.h       | 1 -
 3 files changed, 3 insertions(+), 5 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index af4c2cc..ea3d41c 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -814,8 +814,6 @@ static void migrate_fd_cleanup(void *opaque)
     qemu_bh_delete(s->cleanup_bh);
     s->cleanup_bh = NULL;
 
-    migration_page_queue_free();
-
     if (s->to_dst_file) {
         trace_migrate_fd_cleanup();
         qemu_mutex_unlock_iothread();
diff --git a/migration/ram.c b/migration/ram.c
index db7f4b0..e503277 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1181,10 +1181,9 @@ static bool get_queued_page(RAMState *rs, PageSearchStatus *pss)
  * be some left.  in case that there is any page left, we drop it.
  *
  */
-void migration_page_queue_free(void)
+static void migration_page_queue_free(RAMState *rs)
 {
     struct RAMSrcPageRequest *mspr, *next_mspr;
-    RAMState *rs = &ram_state;
     /* This queue generally should be empty - but in the case of a failed
      * migration might have some droppings in.
      */
@@ -1434,6 +1433,7 @@ void free_xbzrle_decoded_buf(void)
 
 static void ram_migration_cleanup(void *opaque)
 {
+    RAMState *rs = opaque;
     RAMBlock *block;
 
     /* caller have hold iothread lock or is in a bh, so there is
@@ -1459,6 +1459,7 @@ static void ram_migration_cleanup(void *opaque)
         XBZRLE.current_buf = NULL;
     }
     XBZRLE_cache_unlock();
+    migration_page_queue_free(rs);
 }
 
 static void ram_state_reset(RAMState *rs)
diff --git a/migration/ram.h b/migration/ram.h
index c9563d1..d4da419 100644
--- a/migration/ram.h
+++ b/migration/ram.h
@@ -53,7 +53,6 @@ void migrate_decompress_threads_create(void);
 void migrate_decompress_threads_join(void);
 
 uint64_t ram_pagesize_summary(void);
-void migration_page_queue_free(void);
 int ram_save_queue_pages(const char *rbname, ram_addr_t start, ram_addr_t len);
 void acct_update_position(QEMUFile *f, size_t size, bool zero);
 void free_xbzrle_decoded_buf(void);
-- 
2.9.4

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [Qemu-devel] [PATCH 2/5] ram: Move ZERO_TARGET_PAGE inside XBZRLE
  2017-06-01 22:08 [Qemu-devel] [PATCH 0/5] Make RAMState dynamic Juan Quintela
  2017-06-01 22:08 ` [Qemu-devel] [PATCH 1/5] ram: Call migration_page_queue_free() at ram_migration_cleanup() Juan Quintela
@ 2017-06-01 22:08 ` Juan Quintela
  2017-06-05 11:27   ` Dr. David Alan Gilbert
  2017-06-06  7:59   ` Peter Xu
  2017-06-01 22:08 ` [Qemu-devel] [PATCH 3/5] migration: Print statistics about the number of remaining target pages Juan Quintela
                   ` (2 subsequent siblings)
  4 siblings, 2 replies; 20+ messages in thread
From: Juan Quintela @ 2017-06-01 22:08 UTC (permalink / raw)
  To: qemu-devel; +Cc: dgilbert, lvivier, peterx

It was only used by XBZRLE anyway.

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index e503277..04b55a7 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -66,8 +66,6 @@
 /* 0x80 is reserved in migration.h start with 0x100 next */
 #define RAM_SAVE_FLAG_COMPRESS_PAGE    0x100
 
-static uint8_t *ZERO_TARGET_PAGE;
-
 static inline bool is_zero_range(uint8_t *p, uint64_t size)
 {
     return buffer_is_zero(p, size);
@@ -83,6 +81,8 @@ static struct {
     /* Cache for XBZRLE, Protected by lock. */
     PageCache *cache;
     QemuMutex lock;
+    /* it will store a page full of zeros */
+    uint8_t *zero_target_page;
 } XBZRLE;
 
 /* buffer used for XBZRLE decoding */
@@ -509,7 +509,7 @@ static void xbzrle_cache_zero_page(RAMState *rs, ram_addr_t current_addr)
 
     /* We don't care if this fails to allocate a new cache page
      * as long as it updated an old one */
-    cache_insert(XBZRLE.cache, current_addr, ZERO_TARGET_PAGE,
+    cache_insert(XBZRLE.cache, current_addr, XBZRLE.zero_target_page,
                  rs->bitmap_sync_count);
 }
 
@@ -1453,10 +1453,11 @@ static void ram_migration_cleanup(void *opaque)
         cache_fini(XBZRLE.cache);
         g_free(XBZRLE.encoded_buf);
         g_free(XBZRLE.current_buf);
-        g_free(ZERO_TARGET_PAGE);
+        g_free(XBZRLE.zero_target_page);
         XBZRLE.cache = NULL;
         XBZRLE.encoded_buf = NULL;
         XBZRLE.current_buf = NULL;
+        XBZRLE.zero_target_page = NULL;
     }
     XBZRLE_cache_unlock();
     migration_page_queue_free(rs);
@@ -1877,7 +1878,7 @@ static int ram_state_init(RAMState *rs)
 
     if (migrate_use_xbzrle()) {
         XBZRLE_cache_lock();
-        ZERO_TARGET_PAGE = g_malloc0(TARGET_PAGE_SIZE);
+        XBZRLE.zero_target_page = g_malloc0(TARGET_PAGE_SIZE);
         XBZRLE.cache = cache_init(migrate_xbzrle_cache_size() /
                                   TARGET_PAGE_SIZE,
                                   TARGET_PAGE_SIZE);
-- 
2.9.4

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [Qemu-devel] [PATCH 3/5] migration: Print statistics about the number of remaining target pages
  2017-06-01 22:08 [Qemu-devel] [PATCH 0/5] Make RAMState dynamic Juan Quintela
  2017-06-01 22:08 ` [Qemu-devel] [PATCH 1/5] ram: Call migration_page_queue_free() at ram_migration_cleanup() Juan Quintela
  2017-06-01 22:08 ` [Qemu-devel] [PATCH 2/5] ram: Move ZERO_TARGET_PAGE inside XBZRLE Juan Quintela
@ 2017-06-01 22:08 ` Juan Quintela
  2017-06-02 15:15   ` Eric Blake
  2017-06-01 22:08 ` [Qemu-devel] [PATCH 4/5] ram: Use MigrationStats for statistics Juan Quintela
  2017-06-01 22:08 ` [Qemu-devel] [PATCH 5/5] ram: Make RAMState dynamic Juan Quintela
  4 siblings, 1 reply; 20+ messages in thread
From: Juan Quintela @ 2017-06-01 22:08 UTC (permalink / raw)
  To: qemu-devel; +Cc: dgilbert, lvivier, peterx

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/migration.c | 4 +++-
 migration/ram.c       | 4 ++--
 migration/ram.h       | 2 +-
 qapi-schema.json      | 6 +++++-
 4 files changed, 11 insertions(+), 5 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index ea3d41c..2c13217 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -518,7 +518,9 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
     }
 
     if (s->state != MIGRATION_STATUS_COMPLETED) {
-        info->ram->remaining = ram_bytes_remaining();
+        info->ram->remaining_pages = ram_pages_remaining();
+        info->ram->remaining = ram_pages_remaining() *
+            qemu_target_page_size();
         info->ram->dirty_pages_rate = ram_dirty_pages_rate();
     }
 }
diff --git a/migration/ram.c b/migration/ram.c
index 04b55a7..30519e1 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -265,9 +265,9 @@ uint64_t ram_bytes_transferred(void)
     return ram_state.bytes_transferred;
 }
 
-uint64_t ram_bytes_remaining(void)
+uint64_t ram_pages_remaining(void)
 {
-    return ram_state.migration_dirty_pages * TARGET_PAGE_SIZE;
+    return ram_state.migration_dirty_pages;
 }
 
 uint64_t ram_dirty_sync_count(void)
diff --git a/migration/ram.h b/migration/ram.h
index d4da419..5864470 100644
--- a/migration/ram.h
+++ b/migration/ram.h
@@ -41,7 +41,7 @@ uint64_t xbzrle_mig_pages_cache_miss(void);
 double xbzrle_mig_cache_miss_rate(void);
 uint64_t xbzrle_mig_pages_overflow(void);
 uint64_t ram_bytes_transferred(void);
-uint64_t ram_bytes_remaining(void);
+uint64_t ram_pages_remaining(void);
 uint64_t ram_dirty_sync_count(void);
 uint64_t ram_dirty_pages_rate(void);
 uint64_t ram_postcopy_requests(void);
diff --git a/qapi-schema.json b/qapi-schema.json
index 4b50b65..ff1c048 100644
--- a/qapi-schema.json
+++ b/qapi-schema.json
@@ -601,6 +601,9 @@
 # @page-size: The number of bytes per page for the various page-based
 #        statistics (since 2.10)
 #
+# @remaining-pages: amount of pages remaining to be transferred to the target VM
+#        (since 2.10)
+#
 # Since: 0.14.0
 ##
 { 'struct': 'MigrationStats',
@@ -608,7 +611,8 @@
            'duplicate': 'int', 'skipped': 'int', 'normal': 'int',
            'normal-bytes': 'int', 'dirty-pages-rate' : 'int',
            'mbps' : 'number', 'dirty-sync-count' : 'int',
-           'postcopy-requests' : 'int', 'page-size' : 'int' } }
+           'postcopy-requests' : 'int', 'page-size' : 'int',
+           'remaining-pages' : 'int' } }
 
 ##
 # @XBZRLECacheStats:
-- 
2.9.4

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [Qemu-devel] [PATCH 4/5] ram: Use MigrationStats for statistics
  2017-06-01 22:08 [Qemu-devel] [PATCH 0/5] Make RAMState dynamic Juan Quintela
                   ` (2 preceding siblings ...)
  2017-06-01 22:08 ` [Qemu-devel] [PATCH 3/5] migration: Print statistics about the number of remaining target pages Juan Quintela
@ 2017-06-01 22:08 ` Juan Quintela
  2017-06-05 12:34   ` Dr. David Alan Gilbert
  2017-06-01 22:08 ` [Qemu-devel] [PATCH 5/5] ram: Make RAMState dynamic Juan Quintela
  4 siblings, 1 reply; 20+ messages in thread
From: Juan Quintela @ 2017-06-01 22:08 UTC (permalink / raw)
  To: qemu-devel; +Cc: dgilbert, lvivier, peterx

RAM statistics need to survive migration for "info migrate" to work, so
we need to store them outside of RAMState.  As we already have structs
with those fields (MigrationStats and XBZRLECacheStats), just use them.
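A minimal sketch of the accounting pattern this patch moves to
(hypothetical names, not the real QEMU types): counters live in a
file-scope struct with external linkage, and the save path updates its
fields directly instead of going through per-field accessor functions.

```c
#include <stdint.h>

/* Hypothetical sketch: counters with static storage duration, so
 * callers can read them even after per-migration state is freed. */
typedef struct {
    int64_t transferred;   /* total bytes sent */
    int64_t normal;        /* normal (non-zero) pages sent */
} SketchMigrationStats;

SketchMigrationStats sketch_ram_counters;  /* extern-visible */

/* The save path bumps the counters directly, no accessors needed. */
void sketch_account_normal_page(int64_t page_size)
{
    sketch_ram_counters.normal++;
    sketch_ram_counters.transferred += page_size;
}
```

With this shape, each new statistic is just a new struct field rather
than a new accessor function plus a header declaration.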

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/migration.c |  33 +++++-----
 migration/ram.c       | 179 ++++++++++++++------------------------------------
 migration/ram.h       |  15 +----
 3 files changed, 68 insertions(+), 159 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index 2c13217..331cab7 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -480,28 +480,28 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
 {
     info->has_ram = true;
     info->ram = g_malloc0(sizeof(*info->ram));
-    info->ram->transferred = ram_bytes_transferred();
+    info->ram->transferred = ram_counters.transferred;
     info->ram->total = ram_bytes_total();
-    info->ram->duplicate = dup_mig_pages_transferred();
+    info->ram->duplicate = ram_counters.duplicate;
     /* legacy value.  It is not used anymore */
     info->ram->skipped = 0;
-    info->ram->normal = norm_mig_pages_transferred();
-    info->ram->normal_bytes = norm_mig_pages_transferred() *
+    info->ram->normal = ram_counters.normal;
+    info->ram->normal_bytes = ram_counters.normal *
         qemu_target_page_size();
     info->ram->mbps = s->mbps;
-    info->ram->dirty_sync_count = ram_dirty_sync_count();
-    info->ram->postcopy_requests = ram_postcopy_requests();
+    info->ram->dirty_sync_count = ram_counters.dirty_sync_count;
+    info->ram->postcopy_requests = ram_counters.postcopy_requests;
     info->ram->page_size = qemu_target_page_size();
 
     if (migrate_use_xbzrle()) {
         info->has_xbzrle_cache = true;
         info->xbzrle_cache = g_malloc0(sizeof(*info->xbzrle_cache));
         info->xbzrle_cache->cache_size = migrate_xbzrle_cache_size();
-        info->xbzrle_cache->bytes = xbzrle_mig_bytes_transferred();
-        info->xbzrle_cache->pages = xbzrle_mig_pages_transferred();
-        info->xbzrle_cache->cache_miss = xbzrle_mig_pages_cache_miss();
-        info->xbzrle_cache->cache_miss_rate = xbzrle_mig_cache_miss_rate();
-        info->xbzrle_cache->overflow = xbzrle_mig_pages_overflow();
+        info->xbzrle_cache->bytes = xbzrle_counters.bytes;
+        info->xbzrle_cache->pages = xbzrle_counters.pages;
+        info->xbzrle_cache->cache_miss = xbzrle_counters.cache_miss;
+        info->xbzrle_cache->cache_miss_rate = xbzrle_counters.cache_miss_rate;
+        info->xbzrle_cache->overflow = xbzrle_counters.overflow;
     }
 
     if (cpu_throttle_active()) {
@@ -518,10 +518,11 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
     }
 
     if (s->state != MIGRATION_STATUS_COMPLETED) {
-        info->ram->remaining_pages = ram_pages_remaining();
-        info->ram->remaining = ram_pages_remaining() *
+
+        info->ram->remaining_pages = ram_counters.remaining_pages;
+        info->ram->remaining = ram_counters.remaining_pages *
             qemu_target_page_size();
-        info->ram->dirty_pages_rate = ram_dirty_pages_rate();
+        info->ram->dirty_pages_rate = ram_counters.dirty_pages_rate;
     }
 }
 
@@ -1886,8 +1887,8 @@ static void *migration_thread(void *opaque)
                                       bandwidth, threshold_size);
             /* if we haven't sent anything, we don't want to recalculate
                10000 is a small enough number for our purposes */
-            if (ram_dirty_pages_rate() && transferred_bytes > 10000) {
-                s->expected_downtime = ram_dirty_pages_rate() *
+            if (ram_counters.dirty_pages_rate && transferred_bytes > 10000) {
+                s->expected_downtime = ram_counters.dirty_pages_rate *
                     qemu_target_page_size() / bandwidth;
             }
 
diff --git a/migration/ram.c b/migration/ram.c
index 30519e1..6c48219 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -71,6 +71,8 @@ static inline bool is_zero_range(uint8_t *p, uint64_t size)
     return buffer_is_zero(p, size);
 }
 
+XBZRLECacheStats xbzrle_counters;
+
 /* struct contains XBZRLE cache and a static page
    used by the compression */
 static struct {
@@ -174,8 +176,6 @@ struct RAMState {
     bool ram_bulk_stage;
     /* How many times we have dirty too many pages */
     int dirty_rate_high_cnt;
-    /* How many times we have synchronized the bitmap */
-    uint64_t bitmap_sync_count;
     /* these variables are used for bitmap sync */
     /* last time we did a full bitmap_sync */
     int64_t time_last_bitmap_sync;
@@ -187,32 +187,8 @@ struct RAMState {
     uint64_t xbzrle_cache_miss_prev;
     /* number of iterations at the beginning of period */
     uint64_t iterations_prev;
-    /* Accounting fields */
-    /* number of zero pages.  It used to be pages filled by the same char. */
-    uint64_t zero_pages;
-    /* number of normal transferred pages */
-    uint64_t norm_pages;
     /* Iterations since start */
     uint64_t iterations;
-    /* xbzrle transmitted bytes.  Notice that this is with
-     * compression, they can't be calculated from the pages */
-    uint64_t xbzrle_bytes;
-    /* xbzrle transmmited pages */
-    uint64_t xbzrle_pages;
-    /* xbzrle number of cache miss */
-    uint64_t xbzrle_cache_miss;
-    /* xbzrle miss rate */
-    double xbzrle_cache_miss_rate;
-    /* xbzrle number of overflows */
-    uint64_t xbzrle_overflows;
-    /* number of dirty bits in the bitmap */
-    uint64_t migration_dirty_pages;
-    /* total number of bytes transferred */
-    uint64_t bytes_transferred;
-    /* number of dirtied pages in the last second */
-    uint64_t dirty_pages_rate;
-    /* Count of requests incoming from destination */
-    uint64_t postcopy_requests;
     /* protects modification of the bitmap */
     QemuMutex bitmap_mutex;
     /* The RAMBlock used in the last src_page_requests */
@@ -225,65 +201,7 @@ typedef struct RAMState RAMState;
 
 static RAMState ram_state;
 
-uint64_t dup_mig_pages_transferred(void)
-{
-    return ram_state.zero_pages;
-}
-
-uint64_t norm_mig_pages_transferred(void)
-{
-    return ram_state.norm_pages;
-}
-
-uint64_t xbzrle_mig_bytes_transferred(void)
-{
-    return ram_state.xbzrle_bytes;
-}
-
-uint64_t xbzrle_mig_pages_transferred(void)
-{
-    return ram_state.xbzrle_pages;
-}
-
-uint64_t xbzrle_mig_pages_cache_miss(void)
-{
-    return ram_state.xbzrle_cache_miss;
-}
-
-double xbzrle_mig_cache_miss_rate(void)
-{
-    return ram_state.xbzrle_cache_miss_rate;
-}
-
-uint64_t xbzrle_mig_pages_overflow(void)
-{
-    return ram_state.xbzrle_overflows;
-}
-
-uint64_t ram_bytes_transferred(void)
-{
-    return ram_state.bytes_transferred;
-}
-
-uint64_t ram_pages_remaining(void)
-{
-    return ram_state.migration_dirty_pages;
-}
-
-uint64_t ram_dirty_sync_count(void)
-{
-    return ram_state.bitmap_sync_count;
-}
-
-uint64_t ram_dirty_pages_rate(void)
-{
-    return ram_state.dirty_pages_rate;
-}
-
-uint64_t ram_postcopy_requests(void)
-{
-    return ram_state.postcopy_requests;
-}
+MigrationStats ram_counters;
 
 /* used by the search for pages to send */
 struct PageSearchStatus {
@@ -510,7 +428,7 @@ static void xbzrle_cache_zero_page(RAMState *rs, ram_addr_t current_addr)
     /* We don't care if this fails to allocate a new cache page
      * as long as it updated an old one */
     cache_insert(XBZRLE.cache, current_addr, XBZRLE.zero_target_page,
-                 rs->bitmap_sync_count);
+                 ram_counters.dirty_sync_count);
 }
 
 #define ENCODING_FLAG_XBZRLE 0x1
@@ -536,11 +454,12 @@ static int save_xbzrle_page(RAMState *rs, uint8_t **current_data,
     int encoded_len = 0, bytes_xbzrle;
     uint8_t *prev_cached_page;
 
-    if (!cache_is_cached(XBZRLE.cache, current_addr, rs->bitmap_sync_count)) {
-        rs->xbzrle_cache_miss++;
+    if (!cache_is_cached(XBZRLE.cache, current_addr,
+                         ram_counters.dirty_sync_count)) {
+        xbzrle_counters.cache_miss++;
         if (!last_stage) {
             if (cache_insert(XBZRLE.cache, current_addr, *current_data,
-                             rs->bitmap_sync_count) == -1) {
+                             ram_counters.dirty_sync_count) == -1) {
                 return -1;
             } else {
                 /* update *current_data when the page has been
@@ -565,7 +484,7 @@ static int save_xbzrle_page(RAMState *rs, uint8_t **current_data,
         return 0;
     } else if (encoded_len == -1) {
         trace_save_xbzrle_page_overflow();
-        rs->xbzrle_overflows++;
+        xbzrle_counters.overflow++;
         /* update data in the cache */
         if (!last_stage) {
             memcpy(prev_cached_page, *current_data, TARGET_PAGE_SIZE);
@@ -586,9 +505,9 @@ static int save_xbzrle_page(RAMState *rs, uint8_t **current_data,
     qemu_put_be16(rs->f, encoded_len);
     qemu_put_buffer(rs->f, XBZRLE.encoded_buf, encoded_len);
     bytes_xbzrle += encoded_len + 1 + 2;
-    rs->xbzrle_pages++;
-    rs->xbzrle_bytes += bytes_xbzrle;
-    rs->bytes_transferred += bytes_xbzrle;
+    xbzrle_counters.pages++;
+    xbzrle_counters.bytes += bytes_xbzrle;
+    ram_counters.transferred += bytes_xbzrle;
 
     return 1;
 }
@@ -630,7 +549,7 @@ static inline bool migration_bitmap_clear_dirty(RAMState *rs,
     ret = test_and_clear_bit(page, rb->bmap);
 
     if (ret) {
-        rs->migration_dirty_pages--;
+        ram_counters.remaining_pages--;
     }
     return ret;
 }
@@ -638,7 +557,7 @@ static inline bool migration_bitmap_clear_dirty(RAMState *rs,
 static void migration_bitmap_sync_range(RAMState *rs, RAMBlock *rb,
                                         ram_addr_t start, ram_addr_t length)
 {
-    rs->migration_dirty_pages +=
+    ram_counters.remaining_pages +=
         cpu_physical_memory_sync_dirty_bitmap(rb, start, length,
                                               &rs->num_dirty_pages_period);
 }
@@ -670,7 +589,7 @@ static void migration_bitmap_sync(RAMState *rs)
     int64_t end_time;
     uint64_t bytes_xfer_now;
 
-    rs->bitmap_sync_count++;
+    ram_counters.dirty_sync_count++;
 
     if (!rs->time_last_bitmap_sync) {
         rs->time_last_bitmap_sync = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
@@ -694,9 +613,9 @@ static void migration_bitmap_sync(RAMState *rs)
     /* more than 1 second = 1000 millisecons */
     if (end_time > rs->time_last_bitmap_sync + 1000) {
         /* calculate period counters */
-        rs->dirty_pages_rate = rs->num_dirty_pages_period * 1000
+        ram_counters.dirty_pages_rate = rs->num_dirty_pages_period * 1000
             / (end_time - rs->time_last_bitmap_sync);
-        bytes_xfer_now = ram_bytes_transferred();
+        bytes_xfer_now = ram_counters.transferred;
 
         if (migrate_auto_converge()) {
             /* The following detection logic can be refined later. For now:
@@ -716,13 +635,13 @@ static void migration_bitmap_sync(RAMState *rs)
 
         if (migrate_use_xbzrle()) {
             if (rs->iterations_prev != rs->iterations) {
-                rs->xbzrle_cache_miss_rate =
-                   (double)(rs->xbzrle_cache_miss -
+                xbzrle_counters.cache_miss_rate =
+                   (double)(xbzrle_counters.cache_miss -
                             rs->xbzrle_cache_miss_prev) /
                    (rs->iterations - rs->iterations_prev);
             }
             rs->iterations_prev = rs->iterations;
-            rs->xbzrle_cache_miss_prev = rs->xbzrle_cache_miss;
+            rs->xbzrle_cache_miss_prev = xbzrle_counters.cache_miss;
         }
 
         /* reset period counters */
@@ -731,7 +650,7 @@ static void migration_bitmap_sync(RAMState *rs)
         rs->bytes_xfer_prev = bytes_xfer_now;
     }
     if (migrate_use_events()) {
-        qapi_event_send_migration_pass(rs->bitmap_sync_count, NULL);
+        qapi_event_send_migration_pass(ram_counters.dirty_sync_count, NULL);
     }
 }
 
@@ -751,11 +670,11 @@ static int save_zero_page(RAMState *rs, RAMBlock *block, ram_addr_t offset,
     int pages = -1;
 
     if (is_zero_range(p, TARGET_PAGE_SIZE)) {
-        rs->zero_pages++;
-        rs->bytes_transferred +=
+        ram_counters.duplicate++;
+        ram_counters.transferred +=
             save_page_header(rs, rs->f, block, offset | RAM_SAVE_FLAG_ZERO);
         qemu_put_byte(rs->f, 0);
-        rs->bytes_transferred += 1;
+        ram_counters.transferred += 1;
         pages = 1;
     }
 
@@ -803,7 +722,7 @@ static int ram_save_page(RAMState *rs, PageSearchStatus *pss, bool last_stage)
     ret = ram_control_save_page(rs->f, block->offset,
                            offset, TARGET_PAGE_SIZE, &bytes_xmit);
     if (bytes_xmit) {
-        rs->bytes_transferred += bytes_xmit;
+        ram_counters.transferred += bytes_xmit;
         pages = 1;
     }
 
@@ -814,9 +733,9 @@ static int ram_save_page(RAMState *rs, PageSearchStatus *pss, bool last_stage)
     if (ret != RAM_SAVE_CONTROL_NOT_SUPP) {
         if (ret != RAM_SAVE_CONTROL_DELAYED) {
             if (bytes_xmit > 0) {
-                rs->norm_pages++;
+                ram_counters.normal++;
             } else if (bytes_xmit == 0) {
-                rs->zero_pages++;
+                ram_counters.duplicate++;
             }
         }
     } else {
@@ -842,8 +761,8 @@ static int ram_save_page(RAMState *rs, PageSearchStatus *pss, bool last_stage)
 
     /* XBZRLE overflow or normal page */
     if (pages == -1) {
-        rs->bytes_transferred += save_page_header(rs, rs->f, block,
-                                                  offset | RAM_SAVE_FLAG_PAGE);
+        ram_counters.transferred +=
+            save_page_header(rs, rs->f, block, offset | RAM_SAVE_FLAG_PAGE);
         if (send_async) {
             qemu_put_buffer_async(rs->f, p, TARGET_PAGE_SIZE,
                                   migrate_release_ram() &
@@ -851,9 +770,9 @@ static int ram_save_page(RAMState *rs, PageSearchStatus *pss, bool last_stage)
         } else {
             qemu_put_buffer(rs->f, p, TARGET_PAGE_SIZE);
         }
-        rs->bytes_transferred += TARGET_PAGE_SIZE;
+        ram_counters.transferred += TARGET_PAGE_SIZE;
         pages = 1;
-        rs->norm_pages++;
+        ram_counters.normal++;
     }
 
     XBZRLE_cache_unlock();
@@ -905,7 +824,7 @@ static void flush_compressed_data(RAMState *rs)
         qemu_mutex_lock(&comp_param[idx].mutex);
         if (!comp_param[idx].quit) {
             len = qemu_put_qemu_file(rs->f, comp_param[idx].file);
-            rs->bytes_transferred += len;
+            ram_counters.transferred += len;
         }
         qemu_mutex_unlock(&comp_param[idx].mutex);
     }
@@ -935,8 +854,8 @@ static int compress_page_with_multi_thread(RAMState *rs, RAMBlock *block,
                 qemu_cond_signal(&comp_param[idx].cond);
                 qemu_mutex_unlock(&comp_param[idx].mutex);
                 pages = 1;
-                rs->norm_pages++;
-                rs->bytes_transferred += bytes_xmit;
+                ram_counters.normal++;
+                ram_counters.transferred += bytes_xmit;
                 break;
             }
         }
@@ -976,15 +895,15 @@ static int ram_save_compressed_page(RAMState *rs, PageSearchStatus *pss,
     ret = ram_control_save_page(rs->f, block->offset,
                                 offset, TARGET_PAGE_SIZE, &bytes_xmit);
     if (bytes_xmit) {
-        rs->bytes_transferred += bytes_xmit;
+        ram_counters.transferred += bytes_xmit;
         pages = 1;
     }
     if (ret != RAM_SAVE_CONTROL_NOT_SUPP) {
         if (ret != RAM_SAVE_CONTROL_DELAYED) {
             if (bytes_xmit > 0) {
-                rs->norm_pages++;
+                ram_counters.normal++;
             } else if (bytes_xmit == 0) {
-                rs->zero_pages++;
+                ram_counters.duplicate++;
             }
         }
     } else {
@@ -1004,8 +923,8 @@ static int ram_save_compressed_page(RAMState *rs, PageSearchStatus *pss,
                 blen = qemu_put_compression_data(rs->f, p, TARGET_PAGE_SIZE,
                                                  migrate_compress_level());
                 if (blen > 0) {
-                    rs->bytes_transferred += bytes_xmit + blen;
-                    rs->norm_pages++;
+                    ram_counters.transferred += bytes_xmit + blen;
+                    ram_counters.normal++;
                     pages = 1;
                 } else {
                     qemu_file_set_error(rs->f, blen);
@@ -1213,7 +1132,7 @@ int ram_save_queue_pages(const char *rbname, ram_addr_t start, ram_addr_t len)
     RAMBlock *ramblock;
     RAMState *rs = &ram_state;
 
-    rs->postcopy_requests++;
+    ram_counters.postcopy_requests++;
     rcu_read_lock();
     if (!rbname) {
         /* Reuse last RAMBlock */
@@ -1401,13 +1320,12 @@ static int ram_find_and_save_block(RAMState *rs, bool last_stage)
 void acct_update_position(QEMUFile *f, size_t size, bool zero)
 {
     uint64_t pages = size / TARGET_PAGE_SIZE;
-    RAMState *rs = &ram_state;
 
     if (zero) {
-        rs->zero_pages += pages;
+        ram_counters.duplicate += pages;
     } else {
-        rs->norm_pages += pages;
-        rs->bytes_transferred += size;
+        ram_counters.normal += pages;
+        ram_counters.transferred += size;
         qemu_update_position(f, size);
     }
 }
@@ -1631,7 +1549,6 @@ static void postcopy_chunk_hostpages_pass(MigrationState *ms, bool unsent_pass,
                                           RAMBlock *block,
                                           PostcopyDiscardState *pds)
 {
-    RAMState *rs = &ram_state;
     unsigned long *bitmap = block->bmap;
     unsigned long *unsentmap = block->unsentmap;
     unsigned int host_ratio = block->page_size / TARGET_PAGE_SIZE;
@@ -1724,7 +1641,7 @@ static void postcopy_chunk_hostpages_pass(MigrationState *ms, bool unsent_pass,
                  * Remark them as dirty, updating the count for any pages
                  * that weren't previously dirty.
                  */
-                rs->migration_dirty_pages += !test_and_set_bit(page, bitmap);
+                ram_counters.remaining_pages += !test_and_set_bit(page, bitmap);
             }
         }
 
@@ -1932,7 +1849,7 @@ static int ram_state_init(RAMState *rs)
      * Count the total number of pages used by ram blocks not including any
      * gaps due to alignment or unplugs.
      */
-    rs->migration_dirty_pages = ram_bytes_total() >> TARGET_PAGE_BITS;
+    ram_counters.remaining_pages = ram_bytes_total() >> TARGET_PAGE_BITS;
 
     memory_global_dirty_log_start();
     migration_bitmap_sync(rs);
@@ -2057,7 +1974,7 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
     ram_control_after_iterate(f, RAM_CONTROL_ROUND);
 
     qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
-    rs->bytes_transferred += 8;
+    ram_counters.transferred += 8;
 
     ret = qemu_file_get_error(f);
     if (ret < 0) {
@@ -2119,7 +2036,7 @@ static void ram_save_pending(QEMUFile *f, void *opaque, uint64_t max_size,
     RAMState *rs = opaque;
     uint64_t remaining_size;
 
-    remaining_size = rs->migration_dirty_pages * TARGET_PAGE_SIZE;
+    remaining_size = ram_counters.remaining_pages * TARGET_PAGE_SIZE;
 
     if (!migration_in_postcopy() &&
         remaining_size < max_size) {
@@ -2128,7 +2045,7 @@ static void ram_save_pending(QEMUFile *f, void *opaque, uint64_t max_size,
         migration_bitmap_sync(rs);
         rcu_read_unlock();
         qemu_mutex_unlock_iothread();
-        remaining_size = rs->migration_dirty_pages * TARGET_PAGE_SIZE;
+        remaining_size = ram_counters.remaining_pages * TARGET_PAGE_SIZE;
     }
 
     /* We can do postcopy, and all the data is postcopiable */
diff --git a/migration/ram.h b/migration/ram.h
index 5864470..9eadc8c 100644
--- a/migration/ram.h
+++ b/migration/ram.h
@@ -32,19 +32,10 @@
 #include "qemu-common.h"
 #include "exec/cpu-common.h"
 
+extern MigrationStats ram_counters;
+extern XBZRLECacheStats xbzrle_counters;
+
 int64_t xbzrle_cache_resize(int64_t new_size);
-uint64_t dup_mig_pages_transferred(void);
-uint64_t norm_mig_pages_transferred(void);
-uint64_t xbzrle_mig_bytes_transferred(void);
-uint64_t xbzrle_mig_pages_transferred(void);
-uint64_t xbzrle_mig_pages_cache_miss(void);
-double xbzrle_mig_cache_miss_rate(void);
-uint64_t xbzrle_mig_pages_overflow(void);
-uint64_t ram_bytes_transferred(void);
-uint64_t ram_pages_remaining(void);
-uint64_t ram_dirty_sync_count(void);
-uint64_t ram_dirty_pages_rate(void);
-uint64_t ram_postcopy_requests(void);
 uint64_t ram_bytes_total(void);
 
 void migrate_compress_threads_create(void);
-- 
2.9.4

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [Qemu-devel] [PATCH 5/5] ram: Make RAMState dynamic
  2017-06-01 22:08 [Qemu-devel] [PATCH 0/5] Make RAMState dynamic Juan Quintela
                   ` (3 preceding siblings ...)
  2017-06-01 22:08 ` [Qemu-devel] [PATCH 4/5] ram: Use MigrationStats for statistics Juan Quintela
@ 2017-06-01 22:08 ` Juan Quintela
  2017-06-05 15:00   ` Dr. David Alan Gilbert
  2017-06-06  8:16   ` Peter Xu
  4 siblings, 2 replies; 20+ messages in thread
From: Juan Quintela @ 2017-06-01 22:08 UTC (permalink / raw)
  To: qemu-devel; +Cc: dgilbert, lvivier, peterx

We create the variable while migration is running and remove it once
migration has finished.

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 52 ++++++++++++++++++++++++++++++++--------------------
 1 file changed, 32 insertions(+), 20 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 6c48219..1164f14 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -199,7 +199,7 @@ struct RAMState {
 };
 typedef struct RAMState RAMState;
 
-static RAMState ram_state;
+static RAMState *ram_state;
 
 MigrationStats ram_counters;
 
@@ -783,7 +783,7 @@ static int ram_save_page(RAMState *rs, PageSearchStatus *pss, bool last_stage)
 static int do_compress_ram_page(QEMUFile *f, RAMBlock *block,
                                 ram_addr_t offset)
 {
-    RAMState *rs = &ram_state;
+    RAMState *rs = ram_state;
     int bytes_sent, blen;
     uint8_t *p = block->host + (offset & TARGET_PAGE_MASK);
 
@@ -1130,7 +1130,7 @@ static void migration_page_queue_free(RAMState *rs)
 int ram_save_queue_pages(const char *rbname, ram_addr_t start, ram_addr_t len)
 {
     RAMBlock *ramblock;
-    RAMState *rs = &ram_state;
+    RAMState *rs = ram_state;
 
     ram_counters.postcopy_requests++;
     rcu_read_lock();
@@ -1351,7 +1351,7 @@ void free_xbzrle_decoded_buf(void)
 
 static void ram_migration_cleanup(void *opaque)
 {
-    RAMState *rs = opaque;
+    RAMState **rsp = opaque;
     RAMBlock *block;
 
     /* caller have hold iothread lock or is in a bh, so there is
@@ -1378,7 +1378,9 @@ static void ram_migration_cleanup(void *opaque)
         XBZRLE.zero_target_page = NULL;
     }
     XBZRLE_cache_unlock();
-    migration_page_queue_free(rs);
+    migration_page_queue_free(*rsp);
+    g_free(*rsp);
+    *rsp = NULL;
 }
 
 static void ram_state_reset(RAMState *rs)
@@ -1703,7 +1705,7 @@ static int postcopy_chunk_hostpages(MigrationState *ms, RAMBlock *block)
  */
 int ram_postcopy_send_discard_bitmap(MigrationState *ms)
 {
-    RAMState *rs = &ram_state;
+    RAMState *rs = ram_state;
     RAMBlock *block;
     int ret;
 
@@ -1786,12 +1788,13 @@ err:
     return ret;
 }
 
-static int ram_state_init(RAMState *rs)
+static int ram_state_init(RAMState **rsp)
 {
-    memset(rs, 0, sizeof(*rs));
-    qemu_mutex_init(&rs->bitmap_mutex);
-    qemu_mutex_init(&rs->src_page_req_mutex);
-    QSIMPLEQ_INIT(&rs->src_page_requests);
+    *rsp = g_new0(RAMState, 1);
+
+    qemu_mutex_init(&(*rsp)->bitmap_mutex);
+    qemu_mutex_init(&(*rsp)->src_page_req_mutex);
+    QSIMPLEQ_INIT(&(*rsp)->src_page_requests);
 
     if (migrate_use_xbzrle()) {
         XBZRLE_cache_lock();
@@ -1802,6 +1805,8 @@ static int ram_state_init(RAMState *rs)
         if (!XBZRLE.cache) {
             XBZRLE_cache_unlock();
             error_report("Error creating cache");
+            g_free(*rsp);
+            *rsp = NULL;
             return -1;
         }
         XBZRLE_cache_unlock();
@@ -1810,6 +1815,8 @@ static int ram_state_init(RAMState *rs)
         XBZRLE.encoded_buf = g_try_malloc0(TARGET_PAGE_SIZE);
         if (!XBZRLE.encoded_buf) {
             error_report("Error allocating encoded_buf");
+            g_free(*rsp);
+            *rsp = NULL;
             return -1;
         }
 
@@ -1818,6 +1825,8 @@ static int ram_state_init(RAMState *rs)
             error_report("Error allocating current_buf");
             g_free(XBZRLE.encoded_buf);
             XBZRLE.encoded_buf = NULL;
+            g_free(*rsp);
+            *rsp = NULL;
             return -1;
         }
     }
@@ -1827,7 +1836,7 @@ static int ram_state_init(RAMState *rs)
 
     qemu_mutex_lock_ramlist();
     rcu_read_lock();
-    ram_state_reset(rs);
+    ram_state_reset(*rsp);
 
     /* Skip setting bitmap if there is no RAM */
     if (ram_bytes_total()) {
@@ -1852,7 +1861,7 @@ static int ram_state_init(RAMState *rs)
     ram_counters.remaining_pages = ram_bytes_total() >> TARGET_PAGE_BITS;
 
     memory_global_dirty_log_start();
-    migration_bitmap_sync(rs);
+    migration_bitmap_sync(*rsp);
     qemu_mutex_unlock_ramlist();
     qemu_mutex_unlock_iothread();
     rcu_read_unlock();
@@ -1877,16 +1886,16 @@ static int ram_state_init(RAMState *rs)
  */
 static int ram_save_setup(QEMUFile *f, void *opaque)
 {
-    RAMState *rs = opaque;
+    RAMState **rsp = opaque;
     RAMBlock *block;
 
     /* migration has already setup the bitmap, reuse it. */
     if (!migration_in_colo_state()) {
-        if (ram_state_init(rs) < 0) {
+        if (ram_state_init(rsp) != 0) {
             return -1;
-         }
+        }
     }
-    rs->f = f;
+    (*rsp)->f = f;
 
     rcu_read_lock();
 
@@ -1921,7 +1930,8 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
  */
 static int ram_save_iterate(QEMUFile *f, void *opaque)
 {
-    RAMState *rs = opaque;
+    RAMState **temp = opaque;
+    RAMState *rs = *temp;
     int ret;
     int i;
     int64_t t0;
@@ -1996,7 +2006,8 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
  */
 static int ram_save_complete(QEMUFile *f, void *opaque)
 {
-    RAMState *rs = opaque;
+    RAMState **temp = opaque;
+    RAMState *rs = *temp;
 
     rcu_read_lock();
 
@@ -2033,7 +2044,8 @@ static void ram_save_pending(QEMUFile *f, void *opaque, uint64_t max_size,
                              uint64_t *non_postcopiable_pending,
                              uint64_t *postcopiable_pending)
 {
-    RAMState *rs = opaque;
+    RAMState **temp = opaque;
+    RAMState *rs = *temp;
     uint64_t remaining_size;
 
     remaining_size = ram_counters.remaining_pages * TARGET_PAGE_SIZE;
-- 
2.9.4

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* Re: [Qemu-devel] [PATCH 3/5] migration: Print statistics about the number of remaining target pages
  2017-06-01 22:08 ` [Qemu-devel] [PATCH 3/5] migration: Print statistics about the number of remaining target pages Juan Quintela
@ 2017-06-02 15:15   ` Eric Blake
  2017-06-02 16:36     ` Juan Quintela
  2017-06-06 17:48     ` Juan Quintela
  0 siblings, 2 replies; 20+ messages in thread
From: Eric Blake @ 2017-06-02 15:15 UTC (permalink / raw)
  To: Juan Quintela, qemu-devel; +Cc: lvivier, dgilbert, peterx


On 06/01/2017 05:08 PM, Juan Quintela wrote:
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> ---
>  migration/migration.c | 4 +++-
>  migration/ram.c       | 4 ++--
>  migration/ram.h       | 2 +-
>  qapi-schema.json      | 6 +++++-
>  4 files changed, 11 insertions(+), 5 deletions(-)
> 
> diff --git a/migration/migration.c b/migration/migration.c
> index ea3d41c..2c13217 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -518,7 +518,9 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
>      }
>  
>      if (s->state != MIGRATION_STATUS_COMPLETED) {
> -        info->ram->remaining = ram_bytes_remaining();
> +        info->ram->remaining_pages = ram_pages_remaining();
> +        info->ram->remaining = ram_pages_remaining() *
> +            qemu_target_page_size();

Why not the opposite direction, of:

info->ram->remaining_pages = ram_bytes_remaining() /
qemu_target_page_size();
info->ram->remaining = ram_bytes_remaining();

?

> +++ b/migration/ram.h
> @@ -41,7 +41,7 @@ uint64_t xbzrle_mig_pages_cache_miss(void);
>  double xbzrle_mig_cache_miss_rate(void);
>  uint64_t xbzrle_mig_pages_overflow(void);
>  uint64_t ram_bytes_transferred(void);
> -uint64_t ram_bytes_remaining(void);
> +uint64_t ram_pages_remaining(void);
>  uint64_t ram_dirty_sync_count(void);
>  uint64_t ram_dirty_pages_rate(void);
>  uint64_t ram_postcopy_requests(void);

I know we already have a mishmash of which interfaces are byte-based vs.
page-based, but using byte-based everywhere seems like a better goal,
and this feels like we are going backwards from that goal.

> diff --git a/qapi-schema.json b/qapi-schema.json
> index 4b50b65..ff1c048 100644
> --- a/qapi-schema.json
> +++ b/qapi-schema.json
> @@ -601,6 +601,9 @@
>  # @page-size: The number of bytes per page for the various page-based
>  #        statistics (since 2.10)
>  #
> +# @remaining-pages: amount of pages remaining to be transferred to the target VM
> +#        (since 2.10)
> +#
>  # Since: 0.14.0
>  ##
>  { 'struct': 'MigrationStats',
> @@ -608,7 +611,8 @@
>             'duplicate': 'int', 'skipped': 'int', 'normal': 'int',
>             'normal-bytes': 'int', 'dirty-pages-rate' : 'int',
>             'mbps' : 'number', 'dirty-sync-count' : 'int',
> -           'postcopy-requests' : 'int', 'page-size' : 'int' } }
> +           'postcopy-requests' : 'int', 'page-size' : 'int',
> +           'remaining-pages' : 'int' } }

We already have 'remaining'; why do we need a redundant stat
'remaining-pages'?

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org



^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Qemu-devel] [PATCH 3/5] migration: Print statistics about the number of remaining target pages
  2017-06-02 15:15   ` Eric Blake
@ 2017-06-02 16:36     ` Juan Quintela
  2017-06-06 17:48     ` Juan Quintela
  1 sibling, 0 replies; 20+ messages in thread
From: Juan Quintela @ 2017-06-02 16:36 UTC (permalink / raw)
  To: Eric Blake; +Cc: qemu-devel, lvivier, dgilbert, peterx

Eric Blake <eblake@redhat.com> wrote:
> On 06/01/2017 05:08 PM, Juan Quintela wrote:
>> Signed-off-by: Juan Quintela <quintela@redhat.com>
>> ---
>>  migration/migration.c | 4 +++-
>>  migration/ram.c       | 4 ++--
>>  migration/ram.h       | 2 +-
>>  qapi-schema.json      | 6 +++++-
>>  4 files changed, 11 insertions(+), 5 deletions(-)
>> 
>> diff --git a/migration/migration.c b/migration/migration.c
>> index ea3d41c..2c13217 100644
>> --- a/migration/migration.c
>> +++ b/migration/migration.c
>> @@ -518,7 +518,9 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
>>      }
>>  
>>      if (s->state != MIGRATION_STATUS_COMPLETED) {
>> -        info->ram->remaining = ram_bytes_remaining();
>> +        info->ram->remaining_pages = ram_pages_remaining();
>> +        info->ram->remaining = ram_pages_remaining() *
>> +            qemu_target_page_size();
>
> Why not the opposite direction, of:
>
> info->ram->remaining_pages = ram_bytes_remaining() /
> qemu_target_page_size();
> info->ram->remaining = ram_bytes_remaining();
>
> ?

In migration, everything is in pages (TARGET_PAGE_SIZE).  It is the real
unit and the easiest one to calculate with.  So, we can:
- keep the number of dirty pages independent of the statistics;
  notice that the migration code really uses that number and uses
  almost none of the others.
- store the number of dirty bytes and divide/multiply all around.
- change the statistics and also print the number of bytes.

I know that you are changing the interfaces for block devices to be byte
based, but (I guess) you have more options for sector size.  In
migration, everything is in TARGET_PAGE_SIZE, or multiples of it, and it
is not clear that the code for multiples works in all cases :-(

I.e. I am not sure that everything works when you have a 4 KB guest on a
64 KB host (or the other way around).

>> +++ b/migration/ram.h
>> @@ -41,7 +41,7 @@ uint64_t xbzrle_mig_pages_cache_miss(void);
>>  double xbzrle_mig_cache_miss_rate(void);
>>  uint64_t xbzrle_mig_pages_overflow(void);
>>  uint64_t ram_bytes_transferred(void);
>> -uint64_t ram_bytes_remaining(void);
>> +uint64_t ram_pages_remaining(void);
>>  uint64_t ram_dirty_sync_count(void);
>>  uint64_t ram_dirty_pages_rate(void);
>>  uint64_t ram_postcopy_requests(void);
>
> I know we already have a mishmash of which interfaces are byte-based vs.
> page-based, but using byte-based everywhere seems like a better goal,
> and this feels like we are going backwards from that goal.

In migration, the only counters that are really byte-based are the
compression and XBZRLE ones, because they don't use multiples of the
target page size.  The others are indifferent.


>
>> diff --git a/qapi-schema.json b/qapi-schema.json
>> index 4b50b65..ff1c048 100644
>> --- a/qapi-schema.json
>> +++ b/qapi-schema.json
>> @@ -601,6 +601,9 @@
>>  # @page-size: The number of bytes per page for the various page-based
>>  #        statistics (since 2.10)
>>  #
>> +# @remaining-pages: amount of pages remaining to be transferred to the target VM
>> +#        (since 2.10)
>> +#
>>  # Since: 0.14.0
>>  ##
>>  { 'struct': 'MigrationStats',
>> @@ -608,7 +611,8 @@
>>             'duplicate': 'int', 'skipped': 'int', 'normal': 'int',
>>             'normal-bytes': 'int', 'dirty-pages-rate' : 'int',
>>             'mbps' : 'number', 'dirty-sync-count' : 'int',
>> -           'postcopy-requests' : 'int', 'page-size' : 'int' } }
>> +           'postcopy-requests' : 'int', 'page-size' : 'int',
>> +           'remaining-pages' : 'int' } }
>
> We already have 'remaining'; why do we need a redundant stat
> 'remaining-pages'?

See the next patch.  Either I keep that other stat, or I have to find a
different place for the number of dirty pages.  Notice that it kind of
makes sense, because that is the only stat actually used by the
migration code.

So far:
- I can't be fully consistent: there are already things that are in
  bytes, and things that are in target_page_size units.
- All stats fit in MigrationStats *except* the number of dirty pages.

As far as I know:
- I can export the number of remaining pages, like this patch does
  (the information is redundant, but that already happens; notice
  'normal': number of pages, and 'normal-bytes': number of bytes).
- I can keep the number of dirty pages as a separate variable in
  ram_stats, so it doesn't look like the other counters.

As far as I can see, one way is bad and the other is worse; neither is
good.  For me this change makes things a bit more consistent, but the
difference is not big.

Suggestions?

Independently of this patch, which is about statistics, in the
migration code it is more natural to use pages; otherwise we need to
keep dividing the "offset" by the page size, since the basic unit is
the dirty bitmap, which is in pages.

Later, Juan.

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Qemu-devel] [PATCH 1/5] ram: Call migration_page_queue_free() at ram_migration_cleanup()
  2017-06-01 22:08 ` [Qemu-devel] [PATCH 1/5] ram: Call migration_page_queue_free() at ram_migration_cleanup() Juan Quintela
@ 2017-06-05 11:24   ` Dr. David Alan Gilbert
  2017-06-06  7:58   ` Peter Xu
  1 sibling, 0 replies; 20+ messages in thread
From: Dr. David Alan Gilbert @ 2017-06-05 11:24 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel, lvivier, peterx

* Juan Quintela (quintela@redhat.com) wrote:
> We shouldn't be using memory later than that.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Yes, I think I agree:

   migration_completion calls await_return_path_close_on_source, which
makes sure there are no more incoming requests.

   migration_completion is called prior to the call to
qemu_savevm_state_cleanup, and qemu_savevm_state_cleanup calls
ram_migration_cleanup.

So you're moving it later, which should be safe:


Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  migration/migration.c | 2 --
>  migration/ram.c       | 5 +++--
>  migration/ram.h       | 1 -
>  3 files changed, 3 insertions(+), 5 deletions(-)
> 
> diff --git a/migration/migration.c b/migration/migration.c
> index af4c2cc..ea3d41c 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -814,8 +814,6 @@ static void migrate_fd_cleanup(void *opaque)
>      qemu_bh_delete(s->cleanup_bh);
>      s->cleanup_bh = NULL;
>  
> -    migration_page_queue_free();
> -
>      if (s->to_dst_file) {
>          trace_migrate_fd_cleanup();
>          qemu_mutex_unlock_iothread();
> diff --git a/migration/ram.c b/migration/ram.c
> index db7f4b0..e503277 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -1181,10 +1181,9 @@ static bool get_queued_page(RAMState *rs, PageSearchStatus *pss)
>   * be some left.  in case that there is any page left, we drop it.
>   *
>   */
> -void migration_page_queue_free(void)
> +static void migration_page_queue_free(RAMState *rs)
>  {
>      struct RAMSrcPageRequest *mspr, *next_mspr;
> -    RAMState *rs = &ram_state;
>      /* This queue generally should be empty - but in the case of a failed
>       * migration might have some droppings in.
>       */
> @@ -1434,6 +1433,7 @@ void free_xbzrle_decoded_buf(void)
>  
>  static void ram_migration_cleanup(void *opaque)
>  {
> +    RAMState *rs = opaque;
>      RAMBlock *block;
>  
>      /* caller have hold iothread lock or is in a bh, so there is
> @@ -1459,6 +1459,7 @@ static void ram_migration_cleanup(void *opaque)
>          XBZRLE.current_buf = NULL;
>      }
>      XBZRLE_cache_unlock();
> +    migration_page_queue_free(rs);
>  }
>  
>  static void ram_state_reset(RAMState *rs)
> diff --git a/migration/ram.h b/migration/ram.h
> index c9563d1..d4da419 100644
> --- a/migration/ram.h
> +++ b/migration/ram.h
> @@ -53,7 +53,6 @@ void migrate_decompress_threads_create(void);
>  void migrate_decompress_threads_join(void);
>  
>  uint64_t ram_pagesize_summary(void);
> -void migration_page_queue_free(void);
>  int ram_save_queue_pages(const char *rbname, ram_addr_t start, ram_addr_t len);
>  void acct_update_position(QEMUFile *f, size_t size, bool zero);
>  void free_xbzrle_decoded_buf(void);
> -- 
> 2.9.4
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Qemu-devel] [PATCH 2/5] ram: Move ZERO_TARGET_PAGE inside XBZRLE
  2017-06-01 22:08 ` [Qemu-devel] [PATCH 2/5] ram: Move ZERO_TARGET_PAGE inside XBZRLE Juan Quintela
@ 2017-06-05 11:27   ` Dr. David Alan Gilbert
  2017-06-06  7:59   ` Peter Xu
  1 sibling, 0 replies; 20+ messages in thread
From: Dr. David Alan Gilbert @ 2017-06-05 11:27 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel, lvivier, peterx

* Juan Quintela (quintela@redhat.com) wrote:
> It was only used by XBZRLE anyways.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  migration/ram.c | 11 ++++++-----
>  1 file changed, 6 insertions(+), 5 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index e503277..04b55a7 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -66,8 +66,6 @@
>  /* 0x80 is reserved in migration.h start with 0x100 next */
>  #define RAM_SAVE_FLAG_COMPRESS_PAGE    0x100
>  
> -static uint8_t *ZERO_TARGET_PAGE;
> -
>  static inline bool is_zero_range(uint8_t *p, uint64_t size)
>  {
>      return buffer_is_zero(p, size);
> @@ -83,6 +81,8 @@ static struct {
>      /* Cache for XBZRLE, Protected by lock. */
>      PageCache *cache;
>      QemuMutex lock;
> +    /* it will store a page full of zeros */
> +    uint8_t *zero_target_page;
>  } XBZRLE;
>  
>  /* buffer used for XBZRLE decoding */
> @@ -509,7 +509,7 @@ static void xbzrle_cache_zero_page(RAMState *rs, ram_addr_t current_addr)
>  
>      /* We don't care if this fails to allocate a new cache page
>       * as long as it updated an old one */
> -    cache_insert(XBZRLE.cache, current_addr, ZERO_TARGET_PAGE,
> +    cache_insert(XBZRLE.cache, current_addr, XBZRLE.zero_target_page,
>                   rs->bitmap_sync_count);
>  }
>  
> @@ -1453,10 +1453,11 @@ static void ram_migration_cleanup(void *opaque)
>          cache_fini(XBZRLE.cache);
>          g_free(XBZRLE.encoded_buf);
>          g_free(XBZRLE.current_buf);
> -        g_free(ZERO_TARGET_PAGE);
> +        g_free(XBZRLE.zero_target_page);
>          XBZRLE.cache = NULL;
>          XBZRLE.encoded_buf = NULL;
>          XBZRLE.current_buf = NULL;
> +        XBZRLE.zero_target_page = NULL;
>      }
>      XBZRLE_cache_unlock();
>      migration_page_queue_free(rs);
> @@ -1877,7 +1878,7 @@ static int ram_state_init(RAMState *rs)
>  
>      if (migrate_use_xbzrle()) {
>          XBZRLE_cache_lock();
> -        ZERO_TARGET_PAGE = g_malloc0(TARGET_PAGE_SIZE);
> +        XBZRLE.zero_target_page = g_malloc0(TARGET_PAGE_SIZE);
>          XBZRLE.cache = cache_init(migrate_xbzrle_cache_size() /
>                                    TARGET_PAGE_SIZE,
>                                    TARGET_PAGE_SIZE);
> -- 
> 2.9.4
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Qemu-devel] [PATCH 4/5] ram: Use MigrationStats for statistics
  2017-06-01 22:08 ` [Qemu-devel] [PATCH 4/5] ram: Use MigrationStats for statistics Juan Quintela
@ 2017-06-05 12:34   ` Dr. David Alan Gilbert
  2017-06-06  8:05     ` Peter Xu
  0 siblings, 1 reply; 20+ messages in thread
From: Dr. David Alan Gilbert @ 2017-06-05 12:34 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel, lvivier, peterx

* Juan Quintela (quintela@redhat.com) wrote:
> RAM statistics need to survive migration for info migrate to work, so
> we need to store them outside of RAMState.  As we already have structs
> with those fields, just use them (MigrationStats and XBZRLECacheStats).
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Hmm, OK; this feels very much like the opposite of 180f61f from
March; these variables have kept moving around over the last couple of
months - are they going to stay still now?


Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  migration/migration.c |  33 +++++-----
>  migration/ram.c       | 179 ++++++++++++++------------------------------------
>  migration/ram.h       |  15 +----
>  3 files changed, 68 insertions(+), 159 deletions(-)
> 
> diff --git a/migration/migration.c b/migration/migration.c
> index 2c13217..331cab7 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -480,28 +480,28 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
>  {
>      info->has_ram = true;
>      info->ram = g_malloc0(sizeof(*info->ram));
> -    info->ram->transferred = ram_bytes_transferred();
> +    info->ram->transferred = ram_counters.transferred;
>      info->ram->total = ram_bytes_total();
> -    info->ram->duplicate = dup_mig_pages_transferred();
> +    info->ram->duplicate = ram_counters.duplicate;
>      /* legacy value.  It is not used anymore */
>      info->ram->skipped = 0;
> -    info->ram->normal = norm_mig_pages_transferred();
> -    info->ram->normal_bytes = norm_mig_pages_transferred() *
> +    info->ram->normal = ram_counters.normal;
> +    info->ram->normal_bytes = ram_counters.normal *
>          qemu_target_page_size();
>      info->ram->mbps = s->mbps;
> -    info->ram->dirty_sync_count = ram_dirty_sync_count();
> -    info->ram->postcopy_requests = ram_postcopy_requests();
> +    info->ram->dirty_sync_count = ram_counters.dirty_sync_count;
> +    info->ram->postcopy_requests = ram_counters.postcopy_requests;
>      info->ram->page_size = qemu_target_page_size();
>  
>      if (migrate_use_xbzrle()) {
>          info->has_xbzrle_cache = true;
>          info->xbzrle_cache = g_malloc0(sizeof(*info->xbzrle_cache));
>          info->xbzrle_cache->cache_size = migrate_xbzrle_cache_size();
> -        info->xbzrle_cache->bytes = xbzrle_mig_bytes_transferred();
> -        info->xbzrle_cache->pages = xbzrle_mig_pages_transferred();
> -        info->xbzrle_cache->cache_miss = xbzrle_mig_pages_cache_miss();
> -        info->xbzrle_cache->cache_miss_rate = xbzrle_mig_cache_miss_rate();
> -        info->xbzrle_cache->overflow = xbzrle_mig_pages_overflow();
> +        info->xbzrle_cache->bytes = xbzrle_counters.bytes;
> +        info->xbzrle_cache->pages = xbzrle_counters.pages;
> +        info->xbzrle_cache->cache_miss = xbzrle_counters.cache_miss;
> +        info->xbzrle_cache->cache_miss_rate = xbzrle_counters.cache_miss_rate;
> +        info->xbzrle_cache->overflow = xbzrle_counters.overflow;
>      }
>  
>      if (cpu_throttle_active()) {
> @@ -518,10 +518,11 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
>      }
>  
>      if (s->state != MIGRATION_STATUS_COMPLETED) {
> -        info->ram->remaining_pages = ram_pages_remaining();
> -        info->ram->remaining = ram_pages_remaining() *
> +
> +        info->ram->remaining_pages = ram_counters.remaining_pages;
> +        info->ram->remaining = ram_counters.remaining_pages *
>              qemu_target_page_size();
> -        info->ram->dirty_pages_rate = ram_dirty_pages_rate();
> +        info->ram->dirty_pages_rate = ram_counters.dirty_pages_rate;
>      }
>  }
>  
> @@ -1886,8 +1887,8 @@ static void *migration_thread(void *opaque)
>                                        bandwidth, threshold_size);
>              /* if we haven't sent anything, we don't want to recalculate
>                 10000 is a small enough number for our purposes */
> -            if (ram_dirty_pages_rate() && transferred_bytes > 10000) {
> -                s->expected_downtime = ram_dirty_pages_rate() *
> +            if (ram_counters.dirty_pages_rate && transferred_bytes > 10000) {
> +                s->expected_downtime = ram_counters.dirty_pages_rate *
>                      qemu_target_page_size() / bandwidth;
>              }
>  
> diff --git a/migration/ram.c b/migration/ram.c
> index 30519e1..6c48219 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -71,6 +71,8 @@ static inline bool is_zero_range(uint8_t *p, uint64_t size)
>      return buffer_is_zero(p, size);
>  }
>  
> +XBZRLECacheStats xbzrle_counters;
> +
>  /* struct contains XBZRLE cache and a static page
>     used by the compression */
>  static struct {
> @@ -174,8 +176,6 @@ struct RAMState {
>      bool ram_bulk_stage;
>      /* How many times we have dirty too many pages */
>      int dirty_rate_high_cnt;
> -    /* How many times we have synchronized the bitmap */
> -    uint64_t bitmap_sync_count;
>      /* these variables are used for bitmap sync */
>      /* last time we did a full bitmap_sync */
>      int64_t time_last_bitmap_sync;
> @@ -187,32 +187,8 @@ struct RAMState {
>      uint64_t xbzrle_cache_miss_prev;
>      /* number of iterations at the beginning of period */
>      uint64_t iterations_prev;
> -    /* Accounting fields */
> -    /* number of zero pages.  It used to be pages filled by the same char. */
> -    uint64_t zero_pages;
> -    /* number of normal transferred pages */
> -    uint64_t norm_pages;
>      /* Iterations since start */
>      uint64_t iterations;
> -    /* xbzrle transmitted bytes.  Notice that this is with
> -     * compression, they can't be calculated from the pages */
> -    uint64_t xbzrle_bytes;
> -    /* xbzrle transmmited pages */
> -    uint64_t xbzrle_pages;
> -    /* xbzrle number of cache miss */
> -    uint64_t xbzrle_cache_miss;
> -    /* xbzrle miss rate */
> -    double xbzrle_cache_miss_rate;
> -    /* xbzrle number of overflows */
> -    uint64_t xbzrle_overflows;
> -    /* number of dirty bits in the bitmap */
> -    uint64_t migration_dirty_pages;
> -    /* total number of bytes transferred */
> -    uint64_t bytes_transferred;
> -    /* number of dirtied pages in the last second */
> -    uint64_t dirty_pages_rate;
> -    /* Count of requests incoming from destination */
> -    uint64_t postcopy_requests;
>      /* protects modification of the bitmap */
>      QemuMutex bitmap_mutex;
>      /* The RAMBlock used in the last src_page_requests */
> @@ -225,65 +201,7 @@ typedef struct RAMState RAMState;
>  
>  static RAMState ram_state;
>  
> -uint64_t dup_mig_pages_transferred(void)
> -{
> -    return ram_state.zero_pages;
> -}
> -
> -uint64_t norm_mig_pages_transferred(void)
> -{
> -    return ram_state.norm_pages;
> -}
> -
> -uint64_t xbzrle_mig_bytes_transferred(void)
> -{
> -    return ram_state.xbzrle_bytes;
> -}
> -
> -uint64_t xbzrle_mig_pages_transferred(void)
> -{
> -    return ram_state.xbzrle_pages;
> -}
> -
> -uint64_t xbzrle_mig_pages_cache_miss(void)
> -{
> -    return ram_state.xbzrle_cache_miss;
> -}
> -
> -double xbzrle_mig_cache_miss_rate(void)
> -{
> -    return ram_state.xbzrle_cache_miss_rate;
> -}
> -
> -uint64_t xbzrle_mig_pages_overflow(void)
> -{
> -    return ram_state.xbzrle_overflows;
> -}
> -
> -uint64_t ram_bytes_transferred(void)
> -{
> -    return ram_state.bytes_transferred;
> -}
> -
> -uint64_t ram_pages_remaining(void)
> -{
> -    return ram_state.migration_dirty_pages;
> -}
> -
> -uint64_t ram_dirty_sync_count(void)
> -{
> -    return ram_state.bitmap_sync_count;
> -}
> -
> -uint64_t ram_dirty_pages_rate(void)
> -{
> -    return ram_state.dirty_pages_rate;
> -}
> -
> -uint64_t ram_postcopy_requests(void)
> -{
> -    return ram_state.postcopy_requests;
> -}
> +MigrationStats ram_counters;
>  
>  /* used by the search for pages to send */
>  struct PageSearchStatus {
> @@ -510,7 +428,7 @@ static void xbzrle_cache_zero_page(RAMState *rs, ram_addr_t current_addr)
>      /* We don't care if this fails to allocate a new cache page
>       * as long as it updated an old one */
>      cache_insert(XBZRLE.cache, current_addr, XBZRLE.zero_target_page,
> -                 rs->bitmap_sync_count);
> +                 ram_counters.dirty_sync_count);
>  }
>  
>  #define ENCODING_FLAG_XBZRLE 0x1
> @@ -536,11 +454,12 @@ static int save_xbzrle_page(RAMState *rs, uint8_t **current_data,
>      int encoded_len = 0, bytes_xbzrle;
>      uint8_t *prev_cached_page;
>  
> -    if (!cache_is_cached(XBZRLE.cache, current_addr, rs->bitmap_sync_count)) {
> -        rs->xbzrle_cache_miss++;
> +    if (!cache_is_cached(XBZRLE.cache, current_addr,
> +                         ram_counters.dirty_sync_count)) {
> +        xbzrle_counters.cache_miss++;
>          if (!last_stage) {
>              if (cache_insert(XBZRLE.cache, current_addr, *current_data,
> -                             rs->bitmap_sync_count) == -1) {
> +                             ram_counters.dirty_sync_count) == -1) {
>                  return -1;
>              } else {
>                  /* update *current_data when the page has been
> @@ -565,7 +484,7 @@ static int save_xbzrle_page(RAMState *rs, uint8_t **current_data,
>          return 0;
>      } else if (encoded_len == -1) {
>          trace_save_xbzrle_page_overflow();
> -        rs->xbzrle_overflows++;
> +        xbzrle_counters.overflow++;
>          /* update data in the cache */
>          if (!last_stage) {
>              memcpy(prev_cached_page, *current_data, TARGET_PAGE_SIZE);
> @@ -586,9 +505,9 @@ static int save_xbzrle_page(RAMState *rs, uint8_t **current_data,
>      qemu_put_be16(rs->f, encoded_len);
>      qemu_put_buffer(rs->f, XBZRLE.encoded_buf, encoded_len);
>      bytes_xbzrle += encoded_len + 1 + 2;
> -    rs->xbzrle_pages++;
> -    rs->xbzrle_bytes += bytes_xbzrle;
> -    rs->bytes_transferred += bytes_xbzrle;
> +    xbzrle_counters.pages++;
> +    xbzrle_counters.bytes += bytes_xbzrle;
> +    ram_counters.transferred += bytes_xbzrle;
>  
>      return 1;
>  }
> @@ -630,7 +549,7 @@ static inline bool migration_bitmap_clear_dirty(RAMState *rs,
>      ret = test_and_clear_bit(page, rb->bmap);
>  
>      if (ret) {
> -        rs->migration_dirty_pages--;
> +        ram_counters.remaining_pages--;
>      }
>      return ret;
>  }
> @@ -638,7 +557,7 @@ static inline bool migration_bitmap_clear_dirty(RAMState *rs,
>  static void migration_bitmap_sync_range(RAMState *rs, RAMBlock *rb,
>                                          ram_addr_t start, ram_addr_t length)
>  {
> -    rs->migration_dirty_pages +=
> +    ram_counters.remaining_pages +=
>          cpu_physical_memory_sync_dirty_bitmap(rb, start, length,
>                                                &rs->num_dirty_pages_period);
>  }
> @@ -670,7 +589,7 @@ static void migration_bitmap_sync(RAMState *rs)
>      int64_t end_time;
>      uint64_t bytes_xfer_now;
>  
> -    rs->bitmap_sync_count++;
> +    ram_counters.dirty_sync_count++;
>  
>      if (!rs->time_last_bitmap_sync) {
>          rs->time_last_bitmap_sync = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
> @@ -694,9 +613,9 @@ static void migration_bitmap_sync(RAMState *rs)
>      /* more than 1 second = 1000 millisecons */
>      if (end_time > rs->time_last_bitmap_sync + 1000) {
>          /* calculate period counters */
> -        rs->dirty_pages_rate = rs->num_dirty_pages_period * 1000
> +        ram_counters.dirty_pages_rate = rs->num_dirty_pages_period * 1000
>              / (end_time - rs->time_last_bitmap_sync);
> -        bytes_xfer_now = ram_bytes_transferred();
> +        bytes_xfer_now = ram_counters.transferred;
>  
>          if (migrate_auto_converge()) {
>              /* The following detection logic can be refined later. For now:
> @@ -716,13 +635,13 @@ static void migration_bitmap_sync(RAMState *rs)
>  
>          if (migrate_use_xbzrle()) {
>              if (rs->iterations_prev != rs->iterations) {
> -                rs->xbzrle_cache_miss_rate =
> -                   (double)(rs->xbzrle_cache_miss -
> +                xbzrle_counters.cache_miss_rate =
> +                   (double)(xbzrle_counters.cache_miss -
>                              rs->xbzrle_cache_miss_prev) /
>                     (rs->iterations - rs->iterations_prev);
>              }
>              rs->iterations_prev = rs->iterations;
> -            rs->xbzrle_cache_miss_prev = rs->xbzrle_cache_miss;
> +            rs->xbzrle_cache_miss_prev = xbzrle_counters.cache_miss;
>          }
>  
>          /* reset period counters */
> @@ -731,7 +650,7 @@ static void migration_bitmap_sync(RAMState *rs)
>          rs->bytes_xfer_prev = bytes_xfer_now;
>      }
>      if (migrate_use_events()) {
> -        qapi_event_send_migration_pass(rs->bitmap_sync_count, NULL);
> +        qapi_event_send_migration_pass(ram_counters.dirty_sync_count, NULL);
>      }
>  }
>  
> @@ -751,11 +670,11 @@ static int save_zero_page(RAMState *rs, RAMBlock *block, ram_addr_t offset,
>      int pages = -1;
>  
>      if (is_zero_range(p, TARGET_PAGE_SIZE)) {
> -        rs->zero_pages++;
> -        rs->bytes_transferred +=
> +        ram_counters.duplicate++;
> +        ram_counters.transferred +=
>              save_page_header(rs, rs->f, block, offset | RAM_SAVE_FLAG_ZERO);
>          qemu_put_byte(rs->f, 0);
> -        rs->bytes_transferred += 1;
> +        ram_counters.transferred += 1;
>          pages = 1;
>      }
>  
> @@ -803,7 +722,7 @@ static int ram_save_page(RAMState *rs, PageSearchStatus *pss, bool last_stage)
>      ret = ram_control_save_page(rs->f, block->offset,
>                             offset, TARGET_PAGE_SIZE, &bytes_xmit);
>      if (bytes_xmit) {
> -        rs->bytes_transferred += bytes_xmit;
> +        ram_counters.transferred += bytes_xmit;
>          pages = 1;
>      }
>  
> @@ -814,9 +733,9 @@ static int ram_save_page(RAMState *rs, PageSearchStatus *pss, bool last_stage)
>      if (ret != RAM_SAVE_CONTROL_NOT_SUPP) {
>          if (ret != RAM_SAVE_CONTROL_DELAYED) {
>              if (bytes_xmit > 0) {
> -                rs->norm_pages++;
> +                ram_counters.normal++;
>              } else if (bytes_xmit == 0) {
> -                rs->zero_pages++;
> +                ram_counters.duplicate++;
>              }
>          }
>      } else {
> @@ -842,8 +761,8 @@ static int ram_save_page(RAMState *rs, PageSearchStatus *pss, bool last_stage)
>  
>      /* XBZRLE overflow or normal page */
>      if (pages == -1) {
> -        rs->bytes_transferred += save_page_header(rs, rs->f, block,
> -                                                  offset | RAM_SAVE_FLAG_PAGE);
> +        ram_counters.transferred +=
> +            save_page_header(rs, rs->f, block, offset | RAM_SAVE_FLAG_PAGE);
>          if (send_async) {
>              qemu_put_buffer_async(rs->f, p, TARGET_PAGE_SIZE,
>                                    migrate_release_ram() &
> @@ -851,9 +770,9 @@ static int ram_save_page(RAMState *rs, PageSearchStatus *pss, bool last_stage)
>          } else {
>              qemu_put_buffer(rs->f, p, TARGET_PAGE_SIZE);
>          }
> -        rs->bytes_transferred += TARGET_PAGE_SIZE;
> +        ram_counters.transferred += TARGET_PAGE_SIZE;
>          pages = 1;
> -        rs->norm_pages++;
> +        ram_counters.normal++;
>      }
>  
>      XBZRLE_cache_unlock();
> @@ -905,7 +824,7 @@ static void flush_compressed_data(RAMState *rs)
>          qemu_mutex_lock(&comp_param[idx].mutex);
>          if (!comp_param[idx].quit) {
>              len = qemu_put_qemu_file(rs->f, comp_param[idx].file);
> -            rs->bytes_transferred += len;
> +            ram_counters.transferred += len;
>          }
>          qemu_mutex_unlock(&comp_param[idx].mutex);
>      }
> @@ -935,8 +854,8 @@ static int compress_page_with_multi_thread(RAMState *rs, RAMBlock *block,
>                  qemu_cond_signal(&comp_param[idx].cond);
>                  qemu_mutex_unlock(&comp_param[idx].mutex);
>                  pages = 1;
> -                rs->norm_pages++;
> -                rs->bytes_transferred += bytes_xmit;
> +                ram_counters.normal++;
> +                ram_counters.transferred += bytes_xmit;
>                  break;
>              }
>          }
> @@ -976,15 +895,15 @@ static int ram_save_compressed_page(RAMState *rs, PageSearchStatus *pss,
>      ret = ram_control_save_page(rs->f, block->offset,
>                                  offset, TARGET_PAGE_SIZE, &bytes_xmit);
>      if (bytes_xmit) {
> -        rs->bytes_transferred += bytes_xmit;
> +        ram_counters.transferred += bytes_xmit;
>          pages = 1;
>      }
>      if (ret != RAM_SAVE_CONTROL_NOT_SUPP) {
>          if (ret != RAM_SAVE_CONTROL_DELAYED) {
>              if (bytes_xmit > 0) {
> -                rs->norm_pages++;
> +                ram_counters.normal++;
>              } else if (bytes_xmit == 0) {
> -                rs->zero_pages++;
> +                ram_counters.duplicate++;
>              }
>          }
>      } else {
> @@ -1004,8 +923,8 @@ static int ram_save_compressed_page(RAMState *rs, PageSearchStatus *pss,
>                  blen = qemu_put_compression_data(rs->f, p, TARGET_PAGE_SIZE,
>                                                   migrate_compress_level());
>                  if (blen > 0) {
> -                    rs->bytes_transferred += bytes_xmit + blen;
> -                    rs->norm_pages++;
> +                    ram_counters.transferred += bytes_xmit + blen;
> +                    ram_counters.normal++;
>                      pages = 1;
>                  } else {
>                      qemu_file_set_error(rs->f, blen);
> @@ -1213,7 +1132,7 @@ int ram_save_queue_pages(const char *rbname, ram_addr_t start, ram_addr_t len)
>      RAMBlock *ramblock;
>      RAMState *rs = &ram_state;
>  
> -    rs->postcopy_requests++;
> +    ram_counters.postcopy_requests++;
>      rcu_read_lock();
>      if (!rbname) {
>          /* Reuse last RAMBlock */
> @@ -1401,13 +1320,12 @@ static int ram_find_and_save_block(RAMState *rs, bool last_stage)
>  void acct_update_position(QEMUFile *f, size_t size, bool zero)
>  {
>      uint64_t pages = size / TARGET_PAGE_SIZE;
> -    RAMState *rs = &ram_state;
>  
>      if (zero) {
> -        rs->zero_pages += pages;
> +        ram_counters.duplicate += pages;
>      } else {
> -        rs->norm_pages += pages;
> -        rs->bytes_transferred += size;
> +        ram_counters.normal += pages;
> +        ram_counters.transferred += size;
>          qemu_update_position(f, size);
>      }
>  }
> @@ -1631,7 +1549,6 @@ static void postcopy_chunk_hostpages_pass(MigrationState *ms, bool unsent_pass,
>                                            RAMBlock *block,
>                                            PostcopyDiscardState *pds)
>  {
> -    RAMState *rs = &ram_state;
>      unsigned long *bitmap = block->bmap;
>      unsigned long *unsentmap = block->unsentmap;
>      unsigned int host_ratio = block->page_size / TARGET_PAGE_SIZE;
> @@ -1724,7 +1641,7 @@ static void postcopy_chunk_hostpages_pass(MigrationState *ms, bool unsent_pass,
>                   * Remark them as dirty, updating the count for any pages
>                   * that weren't previously dirty.
>                   */
> -                rs->migration_dirty_pages += !test_and_set_bit(page, bitmap);
> +                ram_counters.remaining_pages += !test_and_set_bit(page, bitmap);
>              }
>          }
>  
> @@ -1932,7 +1849,7 @@ static int ram_state_init(RAMState *rs)
>       * Count the total number of pages used by ram blocks not including any
>       * gaps due to alignment or unplugs.
>       */
> -    rs->migration_dirty_pages = ram_bytes_total() >> TARGET_PAGE_BITS;
> +    ram_counters.remaining_pages = ram_bytes_total() >> TARGET_PAGE_BITS;
>  
>      memory_global_dirty_log_start();
>      migration_bitmap_sync(rs);
> @@ -2057,7 +1974,7 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
>      ram_control_after_iterate(f, RAM_CONTROL_ROUND);
>  
>      qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
> -    rs->bytes_transferred += 8;
> +    ram_counters.transferred += 8;
>  
>      ret = qemu_file_get_error(f);
>      if (ret < 0) {
> @@ -2119,7 +2036,7 @@ static void ram_save_pending(QEMUFile *f, void *opaque, uint64_t max_size,
>      RAMState *rs = opaque;
>      uint64_t remaining_size;
>  
> -    remaining_size = rs->migration_dirty_pages * TARGET_PAGE_SIZE;
> +    remaining_size = ram_counters.remaining_pages * TARGET_PAGE_SIZE;
>  
>      if (!migration_in_postcopy() &&
>          remaining_size < max_size) {
> @@ -2128,7 +2045,7 @@ static void ram_save_pending(QEMUFile *f, void *opaque, uint64_t max_size,
>          migration_bitmap_sync(rs);
>          rcu_read_unlock();
>          qemu_mutex_unlock_iothread();
> -        remaining_size = rs->migration_dirty_pages * TARGET_PAGE_SIZE;
> +        remaining_size = ram_counters.remaining_pages * TARGET_PAGE_SIZE;
>      }
>  
>      /* We can do postcopy, and all the data is postcopiable */
> diff --git a/migration/ram.h b/migration/ram.h
> index 5864470..9eadc8c 100644
> --- a/migration/ram.h
> +++ b/migration/ram.h
> @@ -32,19 +32,10 @@
>  #include "qemu-common.h"
>  #include "exec/cpu-common.h"
>  
> +extern MigrationStats ram_counters;
> +extern XBZRLECacheStats xbzrle_counters;
> +
>  int64_t xbzrle_cache_resize(int64_t new_size);
> -uint64_t dup_mig_pages_transferred(void);
> -uint64_t norm_mig_pages_transferred(void);
> -uint64_t xbzrle_mig_bytes_transferred(void);
> -uint64_t xbzrle_mig_pages_transferred(void);
> -uint64_t xbzrle_mig_pages_cache_miss(void);
> -double xbzrle_mig_cache_miss_rate(void);
> -uint64_t xbzrle_mig_pages_overflow(void);
> -uint64_t ram_bytes_transferred(void);
> -uint64_t ram_pages_remaining(void);
> -uint64_t ram_dirty_sync_count(void);
> -uint64_t ram_dirty_pages_rate(void);
> -uint64_t ram_postcopy_requests(void);
>  uint64_t ram_bytes_total(void);
>  
>  void migrate_compress_threads_create(void);
> -- 
> 2.9.4
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Qemu-devel] [PATCH 5/5] ram: Make RAMState dynamic
  2017-06-01 22:08 ` [Qemu-devel] [PATCH 5/5] ram: Make RAMState dynamic Juan Quintela
@ 2017-06-05 15:00   ` Dr. David Alan Gilbert
  2017-06-06  8:16   ` Peter Xu
  1 sibling, 0 replies; 20+ messages in thread
From: Dr. David Alan Gilbert @ 2017-06-05 15:00 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel, lvivier, peterx

* Juan Quintela (quintela@redhat.com) wrote:
> We create the variable while we are at migration and we remove it
> after migration.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> ---
>  migration/ram.c | 52 ++++++++++++++++++++++++++++++++--------------------
>  1 file changed, 32 insertions(+), 20 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index 6c48219..1164f14 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -199,7 +199,7 @@ struct RAMState {
>  };
>  typedef struct RAMState RAMState;
>  
> -static RAMState ram_state;
> +static RAMState *ram_state;
>  
>  MigrationStats ram_counters;
>  
> @@ -783,7 +783,7 @@ static int ram_save_page(RAMState *rs, PageSearchStatus *pss, bool last_stage)
>  static int do_compress_ram_page(QEMUFile *f, RAMBlock *block,
>                                  ram_addr_t offset)
>  {
> -    RAMState *rs = &ram_state;
> +    RAMState *rs = ram_state;
>      int bytes_sent, blen;
>      uint8_t *p = block->host + (offset & TARGET_PAGE_MASK);
>  
> @@ -1130,7 +1130,7 @@ static void migration_page_queue_free(RAMState *rs)
>  int ram_save_queue_pages(const char *rbname, ram_addr_t start, ram_addr_t len)
>  {
>      RAMBlock *ramblock;
> -    RAMState *rs = &ram_state;
> +    RAMState *rs = ram_state;
>  
>      ram_counters.postcopy_requests++;
>      rcu_read_lock();
> @@ -1351,7 +1351,7 @@ void free_xbzrle_decoded_buf(void)
>  
>  static void ram_migration_cleanup(void *opaque)
>  {
> -    RAMState *rs = opaque;
> +    RAMState **rsp = opaque;
>      RAMBlock *block;
>  
>      /* caller have hold iothread lock or is in a bh, so there is
> @@ -1378,7 +1378,9 @@ static void ram_migration_cleanup(void *opaque)
>          XBZRLE.zero_target_page = NULL;
>      }
>      XBZRLE_cache_unlock();
> -    migration_page_queue_free(rs);
> +    migration_page_queue_free(*rsp);
> +    g_free(*rsp);
> +    *rsp = NULL;

Yes, I think that's a safe place to free it.

>  }
>  
>  static void ram_state_reset(RAMState *rs)
> @@ -1703,7 +1705,7 @@ static int postcopy_chunk_hostpages(MigrationState *ms, RAMBlock *block)
>   */
>  int ram_postcopy_send_discard_bitmap(MigrationState *ms)
>  {
> -    RAMState *rs = &ram_state;
> +    RAMState *rs = ram_state;
>      RAMBlock *block;
>      int ret;
>  
> @@ -1786,12 +1788,13 @@ err:
>      return ret;
>  }
>  
> -static int ram_state_init(RAMState *rs)
> +static int ram_state_init(RAMState **rsp)
>  {
> -    memset(rs, 0, sizeof(*rs));
> -    qemu_mutex_init(&rs->bitmap_mutex);
> -    qemu_mutex_init(&rs->src_page_req_mutex);
> -    QSIMPLEQ_INIT(&rs->src_page_requests);
> +    *rsp = g_new0(RAMState, 1);
> +
> +    qemu_mutex_init(&(*rsp)->bitmap_mutex);
> +    qemu_mutex_init(&(*rsp)->src_page_req_mutex);
> +    QSIMPLEQ_INIT(&(*rsp)->src_page_requests);
>  
>      if (migrate_use_xbzrle()) {
>          XBZRLE_cache_lock();
> @@ -1802,6 +1805,8 @@ static int ram_state_init(RAMState *rs)
>          if (!XBZRLE.cache) {
>              XBZRLE_cache_unlock();
>              error_report("Error creating cache");
> +            g_free(*rsp);
> +            *rsp = NULL;
>              return -1;
>          }
>          XBZRLE_cache_unlock();
> @@ -1810,6 +1815,8 @@ static int ram_state_init(RAMState *rs)
>          XBZRLE.encoded_buf = g_try_malloc0(TARGET_PAGE_SIZE);
>          if (!XBZRLE.encoded_buf) {
>              error_report("Error allocating encoded_buf");
> +            g_free(*rsp);
> +            *rsp = NULL;
>              return -1;
>          }
>  
> @@ -1818,6 +1825,8 @@ static int ram_state_init(RAMState *rs)
>              error_report("Error allocating current_buf");
>              g_free(XBZRLE.encoded_buf);
>              XBZRLE.encoded_buf = NULL;
> +            g_free(*rsp);
> +            *rsp = NULL;
>              return -1;
>          }
>      }
> @@ -1827,7 +1836,7 @@ static int ram_state_init(RAMState *rs)
>  
>      qemu_mutex_lock_ramlist();
>      rcu_read_lock();
> -    ram_state_reset(rs);
> +    ram_state_reset(*rsp);
>  
>      /* Skip setting bitmap if there is no RAM */
>      if (ram_bytes_total()) {
> @@ -1852,7 +1861,7 @@ static int ram_state_init(RAMState *rs)
>      ram_counters.remaining_pages = ram_bytes_total() >> TARGET_PAGE_BITS;
>  
>      memory_global_dirty_log_start();
> -    migration_bitmap_sync(rs);
> +    migration_bitmap_sync(*rsp);
>      qemu_mutex_unlock_ramlist();
>      qemu_mutex_unlock_iothread();
>      rcu_read_unlock();
> @@ -1877,16 +1886,16 @@ static int ram_state_init(RAMState *rs)
>   */
>  static int ram_save_setup(QEMUFile *f, void *opaque)
>  {
> -    RAMState *rs = opaque;
> +    RAMState **rsp = opaque;
>      RAMBlock *block;
>  
>      /* migration has already setup the bitmap, reuse it. */
>      if (!migration_in_colo_state()) {
> -        if (ram_state_init(rs) < 0) {
> +        if (ram_state_init(rsp) != 0) {
>              return -1;
> -         }
> +        }
>      }
> -    rs->f = f;
> +    (*rsp)->f = f;
>  
>      rcu_read_lock();
>  
> @@ -1921,7 +1930,8 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
>   */
>  static int ram_save_iterate(QEMUFile *f, void *opaque)
>  {
> -    RAMState *rs = opaque;
> +    RAMState **temp = opaque;
> +    RAMState *rs = *temp;

OK, to be honest my preference would be
   RAMState *rs = *(RAMState **)opaque;
that I think works; but that's just taste if you had to redo it.

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

>      int ret;
>      int i;
>      int64_t t0;
> @@ -1996,7 +2006,8 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
>   */
>  static int ram_save_complete(QEMUFile *f, void *opaque)
>  {
> -    RAMState *rs = opaque;
> +    RAMState **temp = opaque;
> +    RAMState *rs = *temp;
>  
>      rcu_read_lock();
>  
> @@ -2033,7 +2044,8 @@ static void ram_save_pending(QEMUFile *f, void *opaque, uint64_t max_size,
>                               uint64_t *non_postcopiable_pending,
>                               uint64_t *postcopiable_pending)
>  {
> -    RAMState *rs = opaque;
> +    RAMState **temp = opaque;
> +    RAMState *rs = *temp;
>      uint64_t remaining_size;
>  
>      remaining_size = ram_counters.remaining_pages * TARGET_PAGE_SIZE;
> -- 
> 2.9.4
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


* Re: [Qemu-devel] [PATCH 1/5] ram: Call migration_page_queue_free() at ram_migration_cleanup()
  2017-06-01 22:08 ` [Qemu-devel] [PATCH 1/5] ram: Call migration_page_queue_free() at ram_migration_cleanup() Juan Quintela
  2017-06-05 11:24   ` Dr. David Alan Gilbert
@ 2017-06-06  7:58   ` Peter Xu
  1 sibling, 0 replies; 20+ messages in thread
From: Peter Xu @ 2017-06-06  7:58 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel, dgilbert, lvivier

On Fri, Jun 02, 2017 at 12:08:09AM +0200, Juan Quintela wrote:
> We shouldn't be using memory later than that.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: Peter Xu <peterx@redhat.com>

> ---
>  migration/migration.c | 2 --
>  migration/ram.c       | 5 +++--
>  migration/ram.h       | 1 -
>  3 files changed, 3 insertions(+), 5 deletions(-)
> 
> diff --git a/migration/migration.c b/migration/migration.c
> index af4c2cc..ea3d41c 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -814,8 +814,6 @@ static void migrate_fd_cleanup(void *opaque)
>      qemu_bh_delete(s->cleanup_bh);
>      s->cleanup_bh = NULL;
>  
> -    migration_page_queue_free();
> -
>      if (s->to_dst_file) {
>          trace_migrate_fd_cleanup();
>          qemu_mutex_unlock_iothread();
> diff --git a/migration/ram.c b/migration/ram.c
> index db7f4b0..e503277 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -1181,10 +1181,9 @@ static bool get_queued_page(RAMState *rs, PageSearchStatus *pss)
>   * be some left.  in case that there is any page left, we drop it.
>   *
>   */
> -void migration_page_queue_free(void)
> +static void migration_page_queue_free(RAMState *rs)
>  {
>      struct RAMSrcPageRequest *mspr, *next_mspr;
> -    RAMState *rs = &ram_state;
>      /* This queue generally should be empty - but in the case of a failed
>       * migration might have some droppings in.
>       */
> @@ -1434,6 +1433,7 @@ void free_xbzrle_decoded_buf(void)
>  
>  static void ram_migration_cleanup(void *opaque)
>  {
> +    RAMState *rs = opaque;
>      RAMBlock *block;
>  
>      /* caller have hold iothread lock or is in a bh, so there is
> @@ -1459,6 +1459,7 @@ static void ram_migration_cleanup(void *opaque)
>          XBZRLE.current_buf = NULL;
>      }
>      XBZRLE_cache_unlock();
> +    migration_page_queue_free(rs);
>  }
>  
>  static void ram_state_reset(RAMState *rs)
> diff --git a/migration/ram.h b/migration/ram.h
> index c9563d1..d4da419 100644
> --- a/migration/ram.h
> +++ b/migration/ram.h
> @@ -53,7 +53,6 @@ void migrate_decompress_threads_create(void);
>  void migrate_decompress_threads_join(void);
>  
>  uint64_t ram_pagesize_summary(void);
> -void migration_page_queue_free(void);
>  int ram_save_queue_pages(const char *rbname, ram_addr_t start, ram_addr_t len);
>  void acct_update_position(QEMUFile *f, size_t size, bool zero);
>  void free_xbzrle_decoded_buf(void);
> -- 
> 2.9.4
> 

-- 
Peter Xu


* Re: [Qemu-devel] [PATCH 2/5] ram: Move ZERO_TARGET_PAGE inside XBZRLE
  2017-06-01 22:08 ` [Qemu-devel] [PATCH 2/5] ram: Move ZERO_TARGET_PAGE inside XBZRLE Juan Quintela
  2017-06-05 11:27   ` Dr. David Alan Gilbert
@ 2017-06-06  7:59   ` Peter Xu
  1 sibling, 0 replies; 20+ messages in thread
From: Peter Xu @ 2017-06-06  7:59 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel, dgilbert, lvivier

On Fri, Jun 02, 2017 at 12:08:10AM +0200, Juan Quintela wrote:
> It was only used by XBZRLE anyways.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: Peter Xu <peterx@redhat.com>

> ---
>  migration/ram.c | 11 ++++++-----
>  1 file changed, 6 insertions(+), 5 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index e503277..04b55a7 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -66,8 +66,6 @@
>  /* 0x80 is reserved in migration.h start with 0x100 next */
>  #define RAM_SAVE_FLAG_COMPRESS_PAGE    0x100
>  
> -static uint8_t *ZERO_TARGET_PAGE;
> -
>  static inline bool is_zero_range(uint8_t *p, uint64_t size)
>  {
>      return buffer_is_zero(p, size);
> @@ -83,6 +81,8 @@ static struct {
>      /* Cache for XBZRLE, Protected by lock. */
>      PageCache *cache;
>      QemuMutex lock;
> +    /* it will store a page full of zeros */
> +    uint8_t *zero_target_page;
>  } XBZRLE;
>  
>  /* buffer used for XBZRLE decoding */
> @@ -509,7 +509,7 @@ static void xbzrle_cache_zero_page(RAMState *rs, ram_addr_t current_addr)
>  
>      /* We don't care if this fails to allocate a new cache page
>       * as long as it updated an old one */
> -    cache_insert(XBZRLE.cache, current_addr, ZERO_TARGET_PAGE,
> +    cache_insert(XBZRLE.cache, current_addr, XBZRLE.zero_target_page,
>                   rs->bitmap_sync_count);
>  }
>  
> @@ -1453,10 +1453,11 @@ static void ram_migration_cleanup(void *opaque)
>          cache_fini(XBZRLE.cache);
>          g_free(XBZRLE.encoded_buf);
>          g_free(XBZRLE.current_buf);
> -        g_free(ZERO_TARGET_PAGE);
> +        g_free(XBZRLE.zero_target_page);
>          XBZRLE.cache = NULL;
>          XBZRLE.encoded_buf = NULL;
>          XBZRLE.current_buf = NULL;
> +        XBZRLE.zero_target_page = NULL;
>      }
>      XBZRLE_cache_unlock();
>      migration_page_queue_free(rs);
> @@ -1877,7 +1878,7 @@ static int ram_state_init(RAMState *rs)
>  
>      if (migrate_use_xbzrle()) {
>          XBZRLE_cache_lock();
> -        ZERO_TARGET_PAGE = g_malloc0(TARGET_PAGE_SIZE);
> +        XBZRLE.zero_target_page = g_malloc0(TARGET_PAGE_SIZE);
>          XBZRLE.cache = cache_init(migrate_xbzrle_cache_size() /
>                                    TARGET_PAGE_SIZE,
>                                    TARGET_PAGE_SIZE);
> -- 
> 2.9.4
> 

-- 
Peter Xu


* Re: [Qemu-devel] [PATCH 4/5] ram: Use MigrationStats for statistics
  2017-06-05 12:34   ` Dr. David Alan Gilbert
@ 2017-06-06  8:05     ` Peter Xu
  2017-06-06 17:33       ` Juan Quintela
  0 siblings, 1 reply; 20+ messages in thread
From: Peter Xu @ 2017-06-06  8:05 UTC (permalink / raw)
  To: Dr. David Alan Gilbert; +Cc: Juan Quintela, qemu-devel, lvivier

On Mon, Jun 05, 2017 at 01:34:45PM +0100, Dr. David Alan Gilbert wrote:
> * Juan Quintela (quintela@redhat.com) wrote:
> > RAM Statistics need to survive migration to make info migrate work, so we
> > need to store them outside of RAMState.  As we already have an struct
> > with those fields, just used them. (MigrationStats and XBZRLECacheStats).
> > 
> > Signed-off-by: Juan Quintela <quintela@redhat.com>
> 
> Hmm OK; this feels very much like it's the opposite of 180f61f from
> March; these variables keep moving around over the last couple of months
> - are they going to stay still now?

O:-)

Meanwhile, I don't know whether it'll be necessary to remove all the
functions like ram_bytes_transferred(), e.g., it would be just:

uint64_t ram_bytes_transferred(void)
{
-    return ram_state.bytes_transferred;
+    return ram_counters.transferred;
}

But I'm okay with either.

> 
> 
> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

Reviewed-by: Peter Xu <peterx@redhat.com>

> 
> > ---
> >  migration/migration.c |  33 +++++-----
> >  migration/ram.c       | 179 ++++++++++++++------------------------------------
> >  migration/ram.h       |  15 +----
> >  3 files changed, 68 insertions(+), 159 deletions(-)
> > 
> > diff --git a/migration/migration.c b/migration/migration.c
> > index 2c13217..331cab7 100644
> > --- a/migration/migration.c
> > +++ b/migration/migration.c
> > @@ -480,28 +480,28 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
> >  {
> >      info->has_ram = true;
> >      info->ram = g_malloc0(sizeof(*info->ram));
> > -    info->ram->transferred = ram_bytes_transferred();
> > +    info->ram->transferred = ram_counters.transferred;
> >      info->ram->total = ram_bytes_total();
> > -    info->ram->duplicate = dup_mig_pages_transferred();
> > +    info->ram->duplicate = ram_counters.duplicate;
> >      /* legacy value.  It is not used anymore */
> >      info->ram->skipped = 0;
> > -    info->ram->normal = norm_mig_pages_transferred();
> > -    info->ram->normal_bytes = norm_mig_pages_transferred() *
> > +    info->ram->normal = ram_counters.normal;
> > +    info->ram->normal_bytes = ram_counters.normal *
> >          qemu_target_page_size();
> >      info->ram->mbps = s->mbps;
> > -    info->ram->dirty_sync_count = ram_dirty_sync_count();
> > -    info->ram->postcopy_requests = ram_postcopy_requests();
> > +    info->ram->dirty_sync_count = ram_counters.dirty_sync_count;
> > +    info->ram->postcopy_requests = ram_counters.postcopy_requests;
> >      info->ram->page_size = qemu_target_page_size();
> >  
> >      if (migrate_use_xbzrle()) {
> >          info->has_xbzrle_cache = true;
> >          info->xbzrle_cache = g_malloc0(sizeof(*info->xbzrle_cache));
> >          info->xbzrle_cache->cache_size = migrate_xbzrle_cache_size();
> > -        info->xbzrle_cache->bytes = xbzrle_mig_bytes_transferred();
> > -        info->xbzrle_cache->pages = xbzrle_mig_pages_transferred();
> > -        info->xbzrle_cache->cache_miss = xbzrle_mig_pages_cache_miss();
> > -        info->xbzrle_cache->cache_miss_rate = xbzrle_mig_cache_miss_rate();
> > -        info->xbzrle_cache->overflow = xbzrle_mig_pages_overflow();
> > +        info->xbzrle_cache->bytes = xbzrle_counters.bytes;
> > +        info->xbzrle_cache->pages = xbzrle_counters.pages;
> > +        info->xbzrle_cache->cache_miss = xbzrle_counters.cache_miss;
> > +        info->xbzrle_cache->cache_miss_rate = xbzrle_counters.cache_miss_rate;
> > +        info->xbzrle_cache->overflow = xbzrle_counters.overflow;
> >      }
> >  
> >      if (cpu_throttle_active()) {
> > @@ -518,10 +518,11 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
> >      }
> >  
> >      if (s->state != MIGRATION_STATUS_COMPLETED) {
> > -        info->ram->remaining_pages = ram_pages_remaining();
> > -        info->ram->remaining = ram_pages_remaining() *
> > +
> > +        info->ram->remaining_pages = ram_counters.remaining_pages;
> > +        info->ram->remaining = ram_counters.remaining_pages *
> >              qemu_target_page_size();
> > -        info->ram->dirty_pages_rate = ram_dirty_pages_rate();
> > +        info->ram->dirty_pages_rate = ram_counters.dirty_pages_rate;
> >      }
> >  }
> >  
> > @@ -1886,8 +1887,8 @@ static void *migration_thread(void *opaque)
> >                                        bandwidth, threshold_size);
> >              /* if we haven't sent anything, we don't want to recalculate
> >                 10000 is a small enough number for our purposes */
> > -            if (ram_dirty_pages_rate() && transferred_bytes > 10000) {
> > -                s->expected_downtime = ram_dirty_pages_rate() *
> > +            if (ram_counters.dirty_pages_rate && transferred_bytes > 10000) {
> > +                s->expected_downtime = ram_counters.dirty_pages_rate *
> >                      qemu_target_page_size() / bandwidth;
> >              }
> >  
> > diff --git a/migration/ram.c b/migration/ram.c
> > index 30519e1..6c48219 100644
> > --- a/migration/ram.c
> > +++ b/migration/ram.c
> > @@ -71,6 +71,8 @@ static inline bool is_zero_range(uint8_t *p, uint64_t size)
> >      return buffer_is_zero(p, size);
> >  }
> >  
> > +XBZRLECacheStats xbzrle_counters;
> > +
> >  /* struct contains XBZRLE cache and a static page
> >     used by the compression */
> >  static struct {
> > @@ -174,8 +176,6 @@ struct RAMState {
> >      bool ram_bulk_stage;
> >      /* How many times we have dirty too many pages */
> >      int dirty_rate_high_cnt;
> > -    /* How many times we have synchronized the bitmap */
> > -    uint64_t bitmap_sync_count;
> >      /* these variables are used for bitmap sync */
> >      /* last time we did a full bitmap_sync */
> >      int64_t time_last_bitmap_sync;
> > @@ -187,32 +187,8 @@ struct RAMState {
> >      uint64_t xbzrle_cache_miss_prev;
> >      /* number of iterations at the beginning of period */
> >      uint64_t iterations_prev;
> > -    /* Accounting fields */
> > -    /* number of zero pages.  It used to be pages filled by the same char. */
> > -    uint64_t zero_pages;
> > -    /* number of normal transferred pages */
> > -    uint64_t norm_pages;
> >      /* Iterations since start */
> >      uint64_t iterations;
> > -    /* xbzrle transmitted bytes.  Notice that this is with
> > -     * compression, they can't be calculated from the pages */
> > -    uint64_t xbzrle_bytes;
> > -    /* xbzrle transmmited pages */
> > -    uint64_t xbzrle_pages;
> > -    /* xbzrle number of cache miss */
> > -    uint64_t xbzrle_cache_miss;
> > -    /* xbzrle miss rate */
> > -    double xbzrle_cache_miss_rate;
> > -    /* xbzrle number of overflows */
> > -    uint64_t xbzrle_overflows;
> > -    /* number of dirty bits in the bitmap */
> > -    uint64_t migration_dirty_pages;
> > -    /* total number of bytes transferred */
> > -    uint64_t bytes_transferred;
> > -    /* number of dirtied pages in the last second */
> > -    uint64_t dirty_pages_rate;
> > -    /* Count of requests incoming from destination */
> > -    uint64_t postcopy_requests;
> >      /* protects modification of the bitmap */
> >      QemuMutex bitmap_mutex;
> >      /* The RAMBlock used in the last src_page_requests */
> > @@ -225,65 +201,7 @@ typedef struct RAMState RAMState;
> >  
> >  static RAMState ram_state;
> >  
> > -uint64_t dup_mig_pages_transferred(void)
> > -{
> > -    return ram_state.zero_pages;
> > -}
> > -
> > -uint64_t norm_mig_pages_transferred(void)
> > -{
> > -    return ram_state.norm_pages;
> > -}
> > -
> > -uint64_t xbzrle_mig_bytes_transferred(void)
> > -{
> > -    return ram_state.xbzrle_bytes;
> > -}
> > -
> > -uint64_t xbzrle_mig_pages_transferred(void)
> > -{
> > -    return ram_state.xbzrle_pages;
> > -}
> > -
> > -uint64_t xbzrle_mig_pages_cache_miss(void)
> > -{
> > -    return ram_state.xbzrle_cache_miss;
> > -}
> > -
> > -double xbzrle_mig_cache_miss_rate(void)
> > -{
> > -    return ram_state.xbzrle_cache_miss_rate;
> > -}
> > -
> > -uint64_t xbzrle_mig_pages_overflow(void)
> > -{
> > -    return ram_state.xbzrle_overflows;
> > -}
> > -
> > -uint64_t ram_bytes_transferred(void)
> > -{
> > -    return ram_state.bytes_transferred;
> > -}
> > -
> > -uint64_t ram_pages_remaining(void)
> > -{
> > -    return ram_state.migration_dirty_pages;
> > -}
> > -
> > -uint64_t ram_dirty_sync_count(void)
> > -{
> > -    return ram_state.bitmap_sync_count;
> > -}
> > -
> > -uint64_t ram_dirty_pages_rate(void)
> > -{
> > -    return ram_state.dirty_pages_rate;
> > -}
> > -
> > -uint64_t ram_postcopy_requests(void)
> > -{
> > -    return ram_state.postcopy_requests;
> > -}
> > +MigrationStats ram_counters;
> >  
> >  /* used by the search for pages to send */
> >  struct PageSearchStatus {
> > @@ -510,7 +428,7 @@ static void xbzrle_cache_zero_page(RAMState *rs, ram_addr_t current_addr)
> >      /* We don't care if this fails to allocate a new cache page
> >       * as long as it updated an old one */
> >      cache_insert(XBZRLE.cache, current_addr, XBZRLE.zero_target_page,
> > -                 rs->bitmap_sync_count);
> > +                 ram_counters.dirty_sync_count);
> >  }
> >  
> >  #define ENCODING_FLAG_XBZRLE 0x1
> > @@ -536,11 +454,12 @@ static int save_xbzrle_page(RAMState *rs, uint8_t **current_data,
> >      int encoded_len = 0, bytes_xbzrle;
> >      uint8_t *prev_cached_page;
> >  
> > -    if (!cache_is_cached(XBZRLE.cache, current_addr, rs->bitmap_sync_count)) {
> > -        rs->xbzrle_cache_miss++;
> > +    if (!cache_is_cached(XBZRLE.cache, current_addr,
> > +                         ram_counters.dirty_sync_count)) {
> > +        xbzrle_counters.cache_miss++;
> >          if (!last_stage) {
> >              if (cache_insert(XBZRLE.cache, current_addr, *current_data,
> > -                             rs->bitmap_sync_count) == -1) {
> > +                             ram_counters.dirty_sync_count) == -1) {
> >                  return -1;
> >              } else {
> >                  /* update *current_data when the page has been
> > @@ -565,7 +484,7 @@ static int save_xbzrle_page(RAMState *rs, uint8_t **current_data,
> >          return 0;
> >      } else if (encoded_len == -1) {
> >          trace_save_xbzrle_page_overflow();
> > -        rs->xbzrle_overflows++;
> > +        xbzrle_counters.overflow++;
> >          /* update data in the cache */
> >          if (!last_stage) {
> >              memcpy(prev_cached_page, *current_data, TARGET_PAGE_SIZE);
> > @@ -586,9 +505,9 @@ static int save_xbzrle_page(RAMState *rs, uint8_t **current_data,
> >      qemu_put_be16(rs->f, encoded_len);
> >      qemu_put_buffer(rs->f, XBZRLE.encoded_buf, encoded_len);
> >      bytes_xbzrle += encoded_len + 1 + 2;
> > -    rs->xbzrle_pages++;
> > -    rs->xbzrle_bytes += bytes_xbzrle;
> > -    rs->bytes_transferred += bytes_xbzrle;
> > +    xbzrle_counters.pages++;
> > +    xbzrle_counters.bytes += bytes_xbzrle;
> > +    ram_counters.transferred += bytes_xbzrle;
> >  
> >      return 1;
> >  }
> > @@ -630,7 +549,7 @@ static inline bool migration_bitmap_clear_dirty(RAMState *rs,
> >      ret = test_and_clear_bit(page, rb->bmap);
> >  
> >      if (ret) {
> > -        rs->migration_dirty_pages--;
> > +        ram_counters.remaining_pages--;
> >      }
> >      return ret;
> >  }
> > @@ -638,7 +557,7 @@ static inline bool migration_bitmap_clear_dirty(RAMState *rs,
> >  static void migration_bitmap_sync_range(RAMState *rs, RAMBlock *rb,
> >                                          ram_addr_t start, ram_addr_t length)
> >  {
> > -    rs->migration_dirty_pages +=
> > +    ram_counters.remaining_pages +=
> >          cpu_physical_memory_sync_dirty_bitmap(rb, start, length,
> >                                                &rs->num_dirty_pages_period);
> >  }
> > @@ -670,7 +589,7 @@ static void migration_bitmap_sync(RAMState *rs)
> >      int64_t end_time;
> >      uint64_t bytes_xfer_now;
> >  
> > -    rs->bitmap_sync_count++;
> > +    ram_counters.dirty_sync_count++;
> >  
> >      if (!rs->time_last_bitmap_sync) {
> >          rs->time_last_bitmap_sync = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
> > @@ -694,9 +613,9 @@ static void migration_bitmap_sync(RAMState *rs)
> >      /* more than 1 second = 1000 millisecons */
> >      if (end_time > rs->time_last_bitmap_sync + 1000) {
> >          /* calculate period counters */
> > -        rs->dirty_pages_rate = rs->num_dirty_pages_period * 1000
> > +        ram_counters.dirty_pages_rate = rs->num_dirty_pages_period * 1000
> >              / (end_time - rs->time_last_bitmap_sync);
> > -        bytes_xfer_now = ram_bytes_transferred();
> > +        bytes_xfer_now = ram_counters.transferred;
> >  
> >          if (migrate_auto_converge()) {
> >              /* The following detection logic can be refined later. For now:
> > @@ -716,13 +635,13 @@ static void migration_bitmap_sync(RAMState *rs)
> >  
> >          if (migrate_use_xbzrle()) {
> >              if (rs->iterations_prev != rs->iterations) {
> > -                rs->xbzrle_cache_miss_rate =
> > -                   (double)(rs->xbzrle_cache_miss -
> > +                xbzrle_counters.cache_miss_rate =
> > +                   (double)(xbzrle_counters.cache_miss -
> >                              rs->xbzrle_cache_miss_prev) /
> >                     (rs->iterations - rs->iterations_prev);
> >              }
> >              rs->iterations_prev = rs->iterations;
> > -            rs->xbzrle_cache_miss_prev = rs->xbzrle_cache_miss;
> > +            rs->xbzrle_cache_miss_prev = xbzrle_counters.cache_miss;
> >          }
> >  
> >          /* reset period counters */
> > @@ -731,7 +650,7 @@ static void migration_bitmap_sync(RAMState *rs)
> >          rs->bytes_xfer_prev = bytes_xfer_now;
> >      }
> >      if (migrate_use_events()) {
> > -        qapi_event_send_migration_pass(rs->bitmap_sync_count, NULL);
> > +        qapi_event_send_migration_pass(ram_counters.dirty_sync_count, NULL);
> >      }
> >  }
> >  
> > @@ -751,11 +670,11 @@ static int save_zero_page(RAMState *rs, RAMBlock *block, ram_addr_t offset,
> >      int pages = -1;
> >  
> >      if (is_zero_range(p, TARGET_PAGE_SIZE)) {
> > -        rs->zero_pages++;
> > -        rs->bytes_transferred +=
> > +        ram_counters.duplicate++;
> > +        ram_counters.transferred +=
> >              save_page_header(rs, rs->f, block, offset | RAM_SAVE_FLAG_ZERO);
> >          qemu_put_byte(rs->f, 0);
> > -        rs->bytes_transferred += 1;
> > +        ram_counters.transferred += 1;
> >          pages = 1;
> >      }
> >  
> > @@ -803,7 +722,7 @@ static int ram_save_page(RAMState *rs, PageSearchStatus *pss, bool last_stage)
> >      ret = ram_control_save_page(rs->f, block->offset,
> >                             offset, TARGET_PAGE_SIZE, &bytes_xmit);
> >      if (bytes_xmit) {
> > -        rs->bytes_transferred += bytes_xmit;
> > +        ram_counters.transferred += bytes_xmit;
> >          pages = 1;
> >      }
> >  
> > @@ -814,9 +733,9 @@ static int ram_save_page(RAMState *rs, PageSearchStatus *pss, bool last_stage)
> >      if (ret != RAM_SAVE_CONTROL_NOT_SUPP) {
> >          if (ret != RAM_SAVE_CONTROL_DELAYED) {
> >              if (bytes_xmit > 0) {
> > -                rs->norm_pages++;
> > +                ram_counters.normal++;
> >              } else if (bytes_xmit == 0) {
> > -                rs->zero_pages++;
> > +                ram_counters.duplicate++;
> >              }
> >          }
> >      } else {
> > @@ -842,8 +761,8 @@ static int ram_save_page(RAMState *rs, PageSearchStatus *pss, bool last_stage)
> >  
> >      /* XBZRLE overflow or normal page */
> >      if (pages == -1) {
> > -        rs->bytes_transferred += save_page_header(rs, rs->f, block,
> > -                                                  offset | RAM_SAVE_FLAG_PAGE);
> > +        ram_counters.transferred +=
> > +            save_page_header(rs, rs->f, block, offset | RAM_SAVE_FLAG_PAGE);
> >          if (send_async) {
> >              qemu_put_buffer_async(rs->f, p, TARGET_PAGE_SIZE,
> >                                    migrate_release_ram() &
> > @@ -851,9 +770,9 @@ static int ram_save_page(RAMState *rs, PageSearchStatus *pss, bool last_stage)
> >          } else {
> >              qemu_put_buffer(rs->f, p, TARGET_PAGE_SIZE);
> >          }
> > -        rs->bytes_transferred += TARGET_PAGE_SIZE;
> > +        ram_counters.transferred += TARGET_PAGE_SIZE;
> >          pages = 1;
> > -        rs->norm_pages++;
> > +        ram_counters.normal++;
> >      }
> >  
> >      XBZRLE_cache_unlock();
> > @@ -905,7 +824,7 @@ static void flush_compressed_data(RAMState *rs)
> >          qemu_mutex_lock(&comp_param[idx].mutex);
> >          if (!comp_param[idx].quit) {
> >              len = qemu_put_qemu_file(rs->f, comp_param[idx].file);
> > -            rs->bytes_transferred += len;
> > +            ram_counters.transferred += len;
> >          }
> >          qemu_mutex_unlock(&comp_param[idx].mutex);
> >      }
> > @@ -935,8 +854,8 @@ static int compress_page_with_multi_thread(RAMState *rs, RAMBlock *block,
> >                  qemu_cond_signal(&comp_param[idx].cond);
> >                  qemu_mutex_unlock(&comp_param[idx].mutex);
> >                  pages = 1;
> > -                rs->norm_pages++;
> > -                rs->bytes_transferred += bytes_xmit;
> > +                ram_counters.normal++;
> > +                ram_counters.transferred += bytes_xmit;
> >                  break;
> >              }
> >          }
> > @@ -976,15 +895,15 @@ static int ram_save_compressed_page(RAMState *rs, PageSearchStatus *pss,
> >      ret = ram_control_save_page(rs->f, block->offset,
> >                                  offset, TARGET_PAGE_SIZE, &bytes_xmit);
> >      if (bytes_xmit) {
> > -        rs->bytes_transferred += bytes_xmit;
> > +        ram_counters.transferred += bytes_xmit;
> >          pages = 1;
> >      }
> >      if (ret != RAM_SAVE_CONTROL_NOT_SUPP) {
> >          if (ret != RAM_SAVE_CONTROL_DELAYED) {
> >              if (bytes_xmit > 0) {
> > -                rs->norm_pages++;
> > +                ram_counters.normal++;
> >              } else if (bytes_xmit == 0) {
> > -                rs->zero_pages++;
> > +                ram_counters.duplicate++;
> >              }
> >          }
> >      } else {
> > @@ -1004,8 +923,8 @@ static int ram_save_compressed_page(RAMState *rs, PageSearchStatus *pss,
> >                  blen = qemu_put_compression_data(rs->f, p, TARGET_PAGE_SIZE,
> >                                                   migrate_compress_level());
> >                  if (blen > 0) {
> > -                    rs->bytes_transferred += bytes_xmit + blen;
> > -                    rs->norm_pages++;
> > +                    ram_counters.transferred += bytes_xmit + blen;
> > +                    ram_counters.normal++;
> >                      pages = 1;
> >                  } else {
> >                      qemu_file_set_error(rs->f, blen);
> > @@ -1213,7 +1132,7 @@ int ram_save_queue_pages(const char *rbname, ram_addr_t start, ram_addr_t len)
> >      RAMBlock *ramblock;
> >      RAMState *rs = &ram_state;
> >  
> > -    rs->postcopy_requests++;
> > +    ram_counters.postcopy_requests++;
> >      rcu_read_lock();
> >      if (!rbname) {
> >          /* Reuse last RAMBlock */
> > @@ -1401,13 +1320,12 @@ static int ram_find_and_save_block(RAMState *rs, bool last_stage)
> >  void acct_update_position(QEMUFile *f, size_t size, bool zero)
> >  {
> >      uint64_t pages = size / TARGET_PAGE_SIZE;
> > -    RAMState *rs = &ram_state;
> >  
> >      if (zero) {
> > -        rs->zero_pages += pages;
> > +        ram_counters.duplicate += pages;
> >      } else {
> > -        rs->norm_pages += pages;
> > -        rs->bytes_transferred += size;
> > +        ram_counters.normal += pages;
> > +        ram_counters.transferred += size;
> >          qemu_update_position(f, size);
> >      }
> >  }
> > @@ -1631,7 +1549,6 @@ static void postcopy_chunk_hostpages_pass(MigrationState *ms, bool unsent_pass,
> >                                            RAMBlock *block,
> >                                            PostcopyDiscardState *pds)
> >  {
> > -    RAMState *rs = &ram_state;
> >      unsigned long *bitmap = block->bmap;
> >      unsigned long *unsentmap = block->unsentmap;
> >      unsigned int host_ratio = block->page_size / TARGET_PAGE_SIZE;
> > @@ -1724,7 +1641,7 @@ static void postcopy_chunk_hostpages_pass(MigrationState *ms, bool unsent_pass,
> >                   * Remark them as dirty, updating the count for any pages
> >                   * that weren't previously dirty.
> >                   */
> > -                rs->migration_dirty_pages += !test_and_set_bit(page, bitmap);
> > +                ram_counters.remaining_pages += !test_and_set_bit(page, bitmap);
> >              }
> >          }
> >  
> > @@ -1932,7 +1849,7 @@ static int ram_state_init(RAMState *rs)
> >       * Count the total number of pages used by ram blocks not including any
> >       * gaps due to alignment or unplugs.
> >       */
> > -    rs->migration_dirty_pages = ram_bytes_total() >> TARGET_PAGE_BITS;
> > +    ram_counters.remaining_pages = ram_bytes_total() >> TARGET_PAGE_BITS;
> >  
> >      memory_global_dirty_log_start();
> >      migration_bitmap_sync(rs);
> > @@ -2057,7 +1974,7 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
> >      ram_control_after_iterate(f, RAM_CONTROL_ROUND);
> >  
> >      qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
> > -    rs->bytes_transferred += 8;
> > +    ram_counters.transferred += 8;
> >  
> >      ret = qemu_file_get_error(f);
> >      if (ret < 0) {
> > @@ -2119,7 +2036,7 @@ static void ram_save_pending(QEMUFile *f, void *opaque, uint64_t max_size,
> >      RAMState *rs = opaque;
> >      uint64_t remaining_size;
> >  
> > -    remaining_size = rs->migration_dirty_pages * TARGET_PAGE_SIZE;
> > +    remaining_size = ram_counters.remaining_pages * TARGET_PAGE_SIZE;
> >  
> >      if (!migration_in_postcopy() &&
> >          remaining_size < max_size) {
> > @@ -2128,7 +2045,7 @@ static void ram_save_pending(QEMUFile *f, void *opaque, uint64_t max_size,
> >          migration_bitmap_sync(rs);
> >          rcu_read_unlock();
> >          qemu_mutex_unlock_iothread();
> > -        remaining_size = rs->migration_dirty_pages * TARGET_PAGE_SIZE;
> > +        remaining_size = ram_counters.remaining_pages * TARGET_PAGE_SIZE;
> >      }
> >  
> >      /* We can do postcopy, and all the data is postcopiable */
> > diff --git a/migration/ram.h b/migration/ram.h
> > index 5864470..9eadc8c 100644
> > --- a/migration/ram.h
> > +++ b/migration/ram.h
> > @@ -32,19 +32,10 @@
> >  #include "qemu-common.h"
> >  #include "exec/cpu-common.h"
> >  
> > +extern MigrationStats ram_counters;
> > +extern XBZRLECacheStats xbzrle_counters;
> > +
> >  int64_t xbzrle_cache_resize(int64_t new_size);
> > -uint64_t dup_mig_pages_transferred(void);
> > -uint64_t norm_mig_pages_transferred(void);
> > -uint64_t xbzrle_mig_bytes_transferred(void);
> > -uint64_t xbzrle_mig_pages_transferred(void);
> > -uint64_t xbzrle_mig_pages_cache_miss(void);
> > -double xbzrle_mig_cache_miss_rate(void);
> > -uint64_t xbzrle_mig_pages_overflow(void);
> > -uint64_t ram_bytes_transferred(void);
> > -uint64_t ram_pages_remaining(void);
> > -uint64_t ram_dirty_sync_count(void);
> > -uint64_t ram_dirty_pages_rate(void);
> > -uint64_t ram_postcopy_requests(void);
> >  uint64_t ram_bytes_total(void);
> >  
> >  void migrate_compress_threads_create(void);
> > -- 
> > 2.9.4
> > 
> --
> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

-- 
Peter Xu


* Re: [Qemu-devel] [PATCH 5/5] ram: Make RAMState dynamic
  2017-06-01 22:08 ` [Qemu-devel] [PATCH 5/5] ram: Make RAMState dynamic Juan Quintela
  2017-06-05 15:00   ` Dr. David Alan Gilbert
@ 2017-06-06  8:16   ` Peter Xu
  2017-06-06 17:39     ` Juan Quintela
  1 sibling, 1 reply; 20+ messages in thread
From: Peter Xu @ 2017-06-06  8:16 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel, dgilbert, lvivier

On Fri, Jun 02, 2017 at 12:08:13AM +0200, Juan Quintela wrote:
> We create the variable while we are migrating and we remove it
> after migration.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> ---
>  migration/ram.c | 52 ++++++++++++++++++++++++++++++++--------------------
>  1 file changed, 32 insertions(+), 20 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index 6c48219..1164f14 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -199,7 +199,7 @@ struct RAMState {
>  };
>  typedef struct RAMState RAMState;
>  
> -static RAMState ram_state;
> +static RAMState *ram_state;
>  
>  MigrationStats ram_counters;
>  
> @@ -783,7 +783,7 @@ static int ram_save_page(RAMState *rs, PageSearchStatus *pss, bool last_stage)
>  static int do_compress_ram_page(QEMUFile *f, RAMBlock *block,
>                                  ram_addr_t offset)
>  {
> -    RAMState *rs = &ram_state;
> +    RAMState *rs = ram_state;
>      int bytes_sent, blen;
>      uint8_t *p = block->host + (offset & TARGET_PAGE_MASK);
>  
> @@ -1130,7 +1130,7 @@ static void migration_page_queue_free(RAMState *rs)
>  int ram_save_queue_pages(const char *rbname, ram_addr_t start, ram_addr_t len)
>  {
>      RAMBlock *ramblock;
> -    RAMState *rs = &ram_state;
> +    RAMState *rs = ram_state;
>  
>      ram_counters.postcopy_requests++;
>      rcu_read_lock();
> @@ -1351,7 +1351,7 @@ void free_xbzrle_decoded_buf(void)
>  
>  static void ram_migration_cleanup(void *opaque)
>  {
> -    RAMState *rs = opaque;
> +    RAMState **rsp = opaque;
>      RAMBlock *block;
>  
>      /* caller have hold iothread lock or is in a bh, so there is
> @@ -1378,7 +1378,9 @@ static void ram_migration_cleanup(void *opaque)
>          XBZRLE.zero_target_page = NULL;
>      }
>      XBZRLE_cache_unlock();
> -    migration_page_queue_free(rs);
> +    migration_page_queue_free(*rsp);
> +    g_free(*rsp);
> +    *rsp = NULL;
>  }
>  
>  static void ram_state_reset(RAMState *rs)
> @@ -1703,7 +1705,7 @@ static int postcopy_chunk_hostpages(MigrationState *ms, RAMBlock *block)
>   */
>  int ram_postcopy_send_discard_bitmap(MigrationState *ms)
>  {
> -    RAMState *rs = &ram_state;
> +    RAMState *rs = ram_state;
>      RAMBlock *block;
>      int ret;
>  
> @@ -1786,12 +1788,13 @@ err:
>      return ret;
>  }
>  
> -static int ram_state_init(RAMState *rs)
> +static int ram_state_init(RAMState **rsp)
>  {
> -    memset(rs, 0, sizeof(*rs));
> -    qemu_mutex_init(&rs->bitmap_mutex);
> -    qemu_mutex_init(&rs->src_page_req_mutex);
> -    QSIMPLEQ_INIT(&rs->src_page_requests);
> +    *rsp = g_new0(RAMState, 1);
> +
> +    qemu_mutex_init(&(*rsp)->bitmap_mutex);
> +    qemu_mutex_init(&(*rsp)->src_page_req_mutex);
> +    QSIMPLEQ_INIT(&(*rsp)->src_page_requests);
>  
>      if (migrate_use_xbzrle()) {
>          XBZRLE_cache_lock();
> @@ -1802,6 +1805,8 @@ static int ram_state_init(RAMState *rs)
>          if (!XBZRLE.cache) {
>              XBZRLE_cache_unlock();
>              error_report("Error creating cache");
> +            g_free(*rsp);
> +            *rsp = NULL;
>              return -1;
>          }
>          XBZRLE_cache_unlock();
> @@ -1810,6 +1815,8 @@ static int ram_state_init(RAMState *rs)
>          XBZRLE.encoded_buf = g_try_malloc0(TARGET_PAGE_SIZE);
>          if (!XBZRLE.encoded_buf) {
>              error_report("Error allocating encoded_buf");
> +            g_free(*rsp);
> +            *rsp = NULL;
>              return -1;
>          }
>  
> @@ -1818,6 +1825,8 @@ static int ram_state_init(RAMState *rs)
>              error_report("Error allocating current_buf");
>              g_free(XBZRLE.encoded_buf);
>              XBZRLE.encoded_buf = NULL;
> +            g_free(*rsp);
> +            *rsp = NULL;

Though it may not be directly related to this patch... do we need to
destroy the two mutexes as well?

        (*rsp)->bitmap_mutex
        (*rsp)->src_page_req_mutex

Also a nit: in these several places I would slightly prefer a "goto
xbzrle_fail", then:

xbzrle_fail:
    if (XBZRLE.encoded_buf) {
        g_free(XBZRLE.encoded_buf);
        XBZRLE.encoded_buf = NULL;
    }
    qemu_mutex_destroy(&(*rsp)->bitmap_mutex);
    qemu_mutex_destroy(&(*rsp)->src_page_req_mutex);
    g_free(*rsp);
    *rsp = NULL;
    return -1;

The rest looks okay to me. Thanks,

>              return -1;
>          }
>      }
> @@ -1827,7 +1836,7 @@ static int ram_state_init(RAMState *rs)
>  
>      qemu_mutex_lock_ramlist();
>      rcu_read_lock();
> -    ram_state_reset(rs);
> +    ram_state_reset(*rsp);
>  
>      /* Skip setting bitmap if there is no RAM */
>      if (ram_bytes_total()) {
> @@ -1852,7 +1861,7 @@ static int ram_state_init(RAMState *rs)
>      ram_counters.remaining_pages = ram_bytes_total() >> TARGET_PAGE_BITS;
>  
>      memory_global_dirty_log_start();
> -    migration_bitmap_sync(rs);
> +    migration_bitmap_sync(*rsp);
>      qemu_mutex_unlock_ramlist();
>      qemu_mutex_unlock_iothread();
>      rcu_read_unlock();
> @@ -1877,16 +1886,16 @@ static int ram_state_init(RAMState *rs)
>   */
>  static int ram_save_setup(QEMUFile *f, void *opaque)
>  {
> -    RAMState *rs = opaque;
> +    RAMState **rsp = opaque;
>      RAMBlock *block;
>  
>      /* migration has already setup the bitmap, reuse it. */
>      if (!migration_in_colo_state()) {
> -        if (ram_state_init(rs) < 0) {
> +        if (ram_state_init(rsp) != 0) {
>              return -1;
> -         }
> +        }
>      }
> -    rs->f = f;
> +    (*rsp)->f = f;
>  
>      rcu_read_lock();
>  
> @@ -1921,7 +1930,8 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
>   */
>  static int ram_save_iterate(QEMUFile *f, void *opaque)
>  {
> -    RAMState *rs = opaque;
> +    RAMState **temp = opaque;
> +    RAMState *rs = *temp;
>      int ret;
>      int i;
>      int64_t t0;
> @@ -1996,7 +2006,8 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
>   */
>  static int ram_save_complete(QEMUFile *f, void *opaque)
>  {
> -    RAMState *rs = opaque;
> +    RAMState **temp = opaque;
> +    RAMState *rs = *temp;
>  
>      rcu_read_lock();
>  
> @@ -2033,7 +2044,8 @@ static void ram_save_pending(QEMUFile *f, void *opaque, uint64_t max_size,
>                               uint64_t *non_postcopiable_pending,
>                               uint64_t *postcopiable_pending)
>  {
> -    RAMState *rs = opaque;
> +    RAMState **temp = opaque;
> +    RAMState *rs = *temp;
>      uint64_t remaining_size;
>  
>      remaining_size = ram_counters.remaining_pages * TARGET_PAGE_SIZE;
> -- 
> 2.9.4
> 

-- 
Peter Xu


* Re: [Qemu-devel] [PATCH 4/5] ram: Use MigrationStats for statistics
  2017-06-06  8:05     ` Peter Xu
@ 2017-06-06 17:33       ` Juan Quintela
  2017-06-07  3:08         ` Peter Xu
  0 siblings, 1 reply; 20+ messages in thread
From: Juan Quintela @ 2017-06-06 17:33 UTC (permalink / raw)
  To: Peter Xu; +Cc: Dr. David Alan Gilbert, qemu-devel, lvivier

Peter Xu <peterx@redhat.com> wrote:
> On Mon, Jun 05, 2017 at 01:34:45PM +0100, Dr. David Alan Gilbert wrote:
>> * Juan Quintela (quintela@redhat.com) wrote:
>> > RAM Statistics need to survive migration to make info migrate work, so we
>> > need to store them outside of RAMState.  As we already have a struct
>> > with those fields, just use them. (MigrationStats and XBZRLECacheStats).
>> > 
>> > Signed-off-by: Juan Quintela <quintela@redhat.com>
>> 
>> Hmm OK; this feels very much like it's the opposite of 180f61f from
>> March; these variables keep moving around over the last couple of months
>> - are they going to stay still now?
>
> O:-)
>
> Meanwhile, I don't know whether it'll be necessary to remove all the
> functions like ram_bytes_transferred(), e.g., it would be just:
>
> uint64_t ram_bytes_transferred(void)
> {
> -    return ram_state.bytes_transferred;
> +    return ram_counters.transferred;
> }
>
> But I'm okay with either.

That value was only used for filling in the statistics, and there we are
copying one struct from another struct of the exact same type.  Going
through an exported accessor function looks stranger.

And as said in $commit, the idea is to make creating a new counter
easy.  Right now you have to:

- add it to MigrationParam (still have to do this)
- add it to MigrationParams (still have to do this)
- create the field in MigrationStats or RAMState
- create a function that exports it
- add that function in ram.h to export it
- add it on qmp_query (still have to do this)

So, we are moving from 6 steps to 3 steps.  I think we are much better
off now, no? O:-)

Later, Juan.


* Re: [Qemu-devel] [PATCH 5/5] ram: Make RAMState dynamic
  2017-06-06  8:16   ` Peter Xu
@ 2017-06-06 17:39     ` Juan Quintela
  0 siblings, 0 replies; 20+ messages in thread
From: Juan Quintela @ 2017-06-06 17:39 UTC (permalink / raw)
  To: Peter Xu; +Cc: qemu-devel, dgilbert, lvivier

Peter Xu <peterx@redhat.com> wrote:
> On Fri, Jun 02, 2017 at 12:08:13AM +0200, Juan Quintela wrote:
>> We create the variable while we are migrating and we remove it
>> after migration.
>> 
>> Signed-off-by: Juan Quintela <quintela@redhat.com>

>> @@ -1818,6 +1825,8 @@ static int ram_state_init(RAMState *rs)
>>              error_report("Error allocating current_buf");
>>              g_free(XBZRLE.encoded_buf);
>>              XBZRLE.encoded_buf = NULL;
>> +            g_free(*rsp);
>> +            *rsp = NULL;
>
> Though it may not be directly related to this patch... do we need to
> destroy the two mutexes as well?
>
>         (*rsp)->bitmap_mutex
>         (*rsp)->src_page_req_mutex
>

I guess so.  Will do on a next series.

> Also a nit: in these several places I would slightly prefer a "goto
> xbzrle_fail", then:
>
> xbzrle_fail:
>     if (XBZRLE.encoded_buf) {
>         g_free(XBZRLE.encoded_buf);
>         XBZRLE.encoded_buf = NULL;
>     }
>     qemu_mutex_destroy(&(*rsp)->bitmap_mutex);
>     qemu_mutex_destroy(&(*rsp)->src_page_req_mutex);
>     g_free(*rsp);
>     *rsp = NULL;
>     return -1;
>

My idea is to split out the XBZRLE bits instead of having them all over
the code.  But that would be later O:-)

Later, Juan.


* Re: [Qemu-devel] [PATCH 3/5] migration: Print statistics about the number of remaining target pages
  2017-06-02 15:15   ` Eric Blake
  2017-06-02 16:36     ` Juan Quintela
@ 2017-06-06 17:48     ` Juan Quintela
  1 sibling, 0 replies; 20+ messages in thread
From: Juan Quintela @ 2017-06-06 17:48 UTC (permalink / raw)
  To: Eric Blake; +Cc: qemu-devel, lvivier, dgilbert, peterx

Eric Blake <eblake@redhat.com> wrote:
> On 06/01/2017 05:08 PM, Juan Quintela wrote:
>> Signed-off-by: Juan Quintela <quintela@redhat.com>
>> ---
>>  migration/migration.c | 4 +++-
>>  migration/ram.c       | 4 ++--
>>  migration/ram.h       | 2 +-
>>  qapi-schema.json      | 6 +++++-
>>  4 files changed, 11 insertions(+), 5 deletions(-)
>> 
>> diff --git a/migration/migration.c b/migration/migration.c
>> index ea3d41c..2c13217 100644
>> --- a/migration/migration.c
>> +++ b/migration/migration.c
>> @@ -518,7 +518,9 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
>>      }
>>  
>>      if (s->state != MIGRATION_STATUS_COMPLETED) {
>> -        info->ram->remaining = ram_bytes_remaining();
>> +        info->ram->remaining_pages = ram_pages_remaining();
>> +        info->ram->remaining = ram_pages_remaining() *
>> +            qemu_target_page_size();
>
> Why not the opposite direction, of:
>
> info->ram->remaining_pages = ram_bytes_remaining() /
> qemu_target_page_size();
> info->ram->remaining = ram_bytes_remaining();
>
> ?
>
>> +++ b/migration/ram.h
>> @@ -41,7 +41,7 @@ uint64_t xbzrle_mig_pages_cache_miss(void);
>>  double xbzrle_mig_cache_miss_rate(void);
>>  uint64_t xbzrle_mig_pages_overflow(void);
>>  uint64_t ram_bytes_transferred(void);
>> -uint64_t ram_bytes_remaining(void);
>> +uint64_t ram_pages_remaining(void);
>>  uint64_t ram_dirty_sync_count(void);
>>  uint64_t ram_dirty_pages_rate(void);
>>  uint64_t ram_postcopy_requests(void);
>
> I know we already have a mishmash of which interfaces are byte-based vs.
> page-based, but using byte-based everywhere seems like a better goal,
> and this feels like we are going backwards from that goal.

OK, I just dropped this bit.

And kept the rest of the series the same.

Later, Juan.


* Re: [Qemu-devel] [PATCH 4/5] ram: Use MigrationStats for statistics
  2017-06-06 17:33       ` Juan Quintela
@ 2017-06-07  3:08         ` Peter Xu
  0 siblings, 0 replies; 20+ messages in thread
From: Peter Xu @ 2017-06-07  3:08 UTC (permalink / raw)
  To: Juan Quintela; +Cc: Dr. David Alan Gilbert, qemu-devel, lvivier

On Tue, Jun 06, 2017 at 07:33:45PM +0200, Juan Quintela wrote:
> Peter Xu <peterx@redhat.com> wrote:
> > On Mon, Jun 05, 2017 at 01:34:45PM +0100, Dr. David Alan Gilbert wrote:
> >> * Juan Quintela (quintela@redhat.com) wrote:
> >> > RAM Statistics need to survive migration to make info migrate work, so we
> >> > need to store them outside of RAMState.  As we already have a struct
> >> > with those fields, just use them. (MigrationStats and XBZRLECacheStats).
> >> > 
> >> > Signed-off-by: Juan Quintela <quintela@redhat.com>
> >> 
> >> Hmm OK; this feels very much like it's the opposite of 180f61f from
> >> March; these variables keep moving around over the last couple of months
> >> - are they going to stay still now?
> >
> > O:-)
> >
> > Meanwhile, I don't know whether it'll be necessary to remove all the
> > functions like ram_bytes_transferred(), e.g., it would be just:
> >
> > uint64_t ram_bytes_transferred(void)
> > {
> > -    return ram_state.bytes_transferred;
> > +    return ram_counters.transferred;
> > }
> >
> > But I'm okay with either.
> 
> That value was only used for filling the statistics.  And we are filling
> a struct from another struct of the exact same type.  Going through an
> exported function looks stranger.
> 
> And as said in $commit, the idea was that creating a new counter was
> easy, right now you have to:
> 
> - add it to MigrationParam (still have to do this)
> - add it to MigrationParams (still have to do this)
> - create the field in MigrationStats or RAMState
> - create a function that exports it
> - add that function in ram.h to export it
> - add it on qmp_query (still have to do this)
> 
> So, we are moving from 6 steps to 3 steps.  I think we are much better
> now, no? O:-)

Hmm, okay!

(as long as we don't move these functions back one day :-)

-- 
Peter Xu


end of thread, other threads:[~2017-06-07  3:09 UTC | newest]

Thread overview: 20+ messages
-- links below jump to the message on this page --
2017-06-01 22:08 [Qemu-devel] [PATCH 0/5] Make RAMState dynamic Juan Quintela
2017-06-01 22:08 ` [Qemu-devel] [PATCH 1/5] ram: Call migration_page_queue_free() at ram_migration_cleanup() Juan Quintela
2017-06-05 11:24   ` Dr. David Alan Gilbert
2017-06-06  7:58   ` Peter Xu
2017-06-01 22:08 ` [Qemu-devel] [PATCH 2/5] ram: Move ZERO_TARGET_PAGE inside XBZRLE Juan Quintela
2017-06-05 11:27   ` Dr. David Alan Gilbert
2017-06-06  7:59   ` Peter Xu
2017-06-01 22:08 ` [Qemu-devel] [PATCH 3/5] migration: Print statistics about the number of remaining target pages Juan Quintela
2017-06-02 15:15   ` Eric Blake
2017-06-02 16:36     ` Juan Quintela
2017-06-06 17:48     ` Juan Quintela
2017-06-01 22:08 ` [Qemu-devel] [PATCH 4/5] ram: Use MigrationStats for statistics Juan Quintela
2017-06-05 12:34   ` Dr. David Alan Gilbert
2017-06-06  8:05     ` Peter Xu
2017-06-06 17:33       ` Juan Quintela
2017-06-07  3:08         ` Peter Xu
2017-06-01 22:08 ` [Qemu-devel] [PATCH 5/5] ram: Make RAMState dynamic Juan Quintela
2017-06-05 15:00   ` Dr. David Alan Gilbert
2017-06-06  8:16   ` Peter Xu
2017-06-06 17:39     ` Juan Quintela
