* [Qemu-devel] [PATCH 00/31] Creating RAMState for migration
@ 2017-03-15 13:49 Juan Quintela
  2017-03-15 13:49 ` [Qemu-devel] [PATCH 01/31] ram: move more fields into RAMState Juan Quintela
                   ` (31 more replies)
  0 siblings, 32 replies; 68+ messages in thread
From: Juan Quintela @ 2017-03-15 13:49 UTC (permalink / raw)
  To: qemu-devel; +Cc: amit.shah, dgilbert

Hi

Currently, we have several places where we store information about
RAM for migration purposes:
- global variables in migration/ram.c
- inside the AccountingInfo struct in migration/ram.c
  (notice that not all of the accounting vars are inside there)
- some fields in MigrationState, although they belong in migration/ram.c

So, this series does the following:
- move everything related to ram.c into a RAMState struct
- make all the statistics consistent, exporting them through accessor
  functions (see the sketch below)
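
To make the intent concrete, here is a minimal sketch of the pattern the
series moves towards.  The field and function names are taken from later
patches, but the snippet itself is only a trimmed-down illustration, not
the final QEMU code:

    #include <stdint.h>

    /* RAM-migration state lives in one struct instead of file-scope globals */
    typedef struct RAMState {
        /* How many times we have synchronized the bitmap */
        uint64_t bitmap_sync_count;
        /* Number of zero pages (what used to be called dup_pages) */
        uint64_t zero_pages;
    } RAMState;

    /* Single instance, private to ram.c */
    static RAMState ram_state;

    /* Statistics are exported through accessors instead of raw globals */
    uint64_t dup_mig_pages_transferred(void)
    {
        return ram_state.zero_pages;
    }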

Why now?

Because I am trying to do further optimizations in how we send data
around, and that is basically impossible with the current code; we would
also still need to add even more variables.  Notice inconsistencies like
these:
- accounting info was only reset if we had xbzrle enabled
- how/where variables are initialized is completely inconsistent



To Do:

- There are still places that access the global struct directly,
  mainly postcopy.  We could find a way to get a pointer to the
  current migration's RAMState (see the sketch below).  If people like
  the approach, I will search for the right place to put it.
- I haven't posted any real change of behaviour here, this is just the
  move of variables into the struct, passing the struct around.
  Optimizations will come afterwards.

- Consolidate XBZRLE, compression params, etc. into their own structs
  (inside RAMState or not, to be able to allocate some of them, others,
  or ...)
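
As a strawman for the "pointer to the current migration" idea above,
building on the RAMState sketch earlier in this letter (ram_state_get()
is hypothetical and not part of this series):

    /* Hypothetical helper: lets e.g. postcopy code reach the current
     * migration's RAMState without touching the global directly. */
    static RAMState *ram_state_get(void)
    {
        return &ram_state;
    }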

Comments, please.


Juan Quintela (31):
  ram: move more fields into RAMState
  ram: Add dirty_rate_high_cnt to RAMState
  ram: move bitmap_sync_count into RAMState
  ram: Move start time into RAMState
  ram: Move bytes_xfer_prev into RAMState
  ram: Move num_dirty_pages_period into RAMState
  ram: Move xbzrle_cache_miss_prev into RAMState
  ram: Move iterations_prev into RAMState
  ram: Move dup_pages into RAMState
  ram: Remove unused dup_mig_bytes_transferred()
  ram: Remove unused pages_skipped variable
  ram: Move norm_pages to RAMState
  ram: Remove norm_mig_bytes_transferred
  ram: Move iterations into RAMState
  ram: Move xbzrle_bytes into RAMState
  ram: Move xbzrle_pages into RAMState
  ram: Move xbzrle_cache_miss into RAMState
  ram: move xbzrle_cache_miss_rate into RAMState
  ram: move xbzrle_overflows into RAMState
  ram: move migration_dirty_pages to RAMState
  ram: Everything was init to zero, so use memset
  ram: move migration_bitmap_mutex into RAMState
  ram: Move migration_bitmap_rcu into RAMState
  ram: Move bytes_transferred into RAMState
  ram: Use the RAMState bytes_transferred parameter
  ram: Remove ram_save_remaining
  ram: Move last_req_rb to RAMState
  ram: Create ram_dirty_sync_count()
  ram: Remove dirty_bytes_rate
  ram: move dirty_pages_rate to RAMState
  ram: move postcopy_requests into RAMState

 include/migration/migration.h |  14 +-
 migration/migration.c         |  21 +-
 migration/ram.c               | 594 +++++++++++++++++++++---------------------
 3 files changed, 303 insertions(+), 326 deletions(-)

-- 
2.9.3

* [Qemu-devel] [PATCH 01/31] ram: move more fields into RAMState
  2017-03-15 13:49 [Qemu-devel] [PATCH 00/31] Creating RAMState for migration Juan Quintela
@ 2017-03-15 13:49 ` Juan Quintela
  2017-03-16 12:09   ` Dr. David Alan Gilbert
  2017-03-15 13:49 ` [Qemu-devel] [PATCH 02/31] ram: Add dirty_rate_high_cnt to RAMState Juan Quintela
                   ` (30 subsequent siblings)
  31 siblings, 1 reply; 68+ messages in thread
From: Juan Quintela @ 2017-03-15 13:49 UTC (permalink / raw)
  To: qemu-devel; +Cc: amit.shah, dgilbert

last_seen_block, last_sent_block, last_offset, last_version and
ram_bulk_stage are globals that are closely related, so move them
together into a new RAMState struct.

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 136 ++++++++++++++++++++++++++++++++------------------------
 1 file changed, 79 insertions(+), 57 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 719425b..c20a539 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -136,6 +136,23 @@ out:
     return ret;
 }
 
+/* State of RAM for migration */
+struct RAMState {
+    /* Last block that we have visited searching for dirty pages */
+    RAMBlock    *last_seen_block;
+    /* Last block from where we have sent data */
+    RAMBlock *last_sent_block;
+    /* Last offset we have sent data from */
+    ram_addr_t last_offset;
+    /* last ram version we have seen */
+    uint32_t last_version;
+    /* We are in the first round */
+    bool ram_bulk_stage;
+};
+typedef struct RAMState RAMState;
+
+static RAMState ram_state;
+
 /* accounting for migration statistics */
 typedef struct AccountingInfo {
     uint64_t dup_pages;
@@ -211,16 +228,8 @@ uint64_t xbzrle_mig_pages_overflow(void)
     return acct_info.xbzrle_overflows;
 }
 
-/* This is the last block that we have visited serching for dirty pages
- */
-static RAMBlock *last_seen_block;
-/* This is the last block from where we have sent data */
-static RAMBlock *last_sent_block;
-static ram_addr_t last_offset;
 static QemuMutex migration_bitmap_mutex;
 static uint64_t migration_dirty_pages;
-static uint32_t last_version;
-static bool ram_bulk_stage;
 
 /* used by the search for pages to send */
 struct PageSearchStatus {
@@ -437,9 +446,9 @@ static void mig_throttle_guest_down(void)
  * As a bonus, if the page wasn't in the cache it gets added so that
  * when a small write is made into the 0'd page it gets XBZRLE sent
  */
-static void xbzrle_cache_zero_page(ram_addr_t current_addr)
+static void xbzrle_cache_zero_page(RAMState *rs, ram_addr_t current_addr)
 {
-    if (ram_bulk_stage || !migrate_use_xbzrle()) {
+    if (rs->ram_bulk_stage || !migrate_use_xbzrle()) {
         return;
     }
 
@@ -539,7 +548,7 @@ static int save_xbzrle_page(QEMUFile *f, uint8_t **current_data,
  * Returns: byte offset within memory region of the start of a dirty page
  */
 static inline
-ram_addr_t migration_bitmap_find_dirty(RAMBlock *rb,
+ram_addr_t migration_bitmap_find_dirty(RAMState *rs, RAMBlock *rb,
                                        ram_addr_t start,
                                        ram_addr_t *ram_addr_abs)
 {
@@ -552,7 +561,7 @@ ram_addr_t migration_bitmap_find_dirty(RAMBlock *rb,
     unsigned long next;
 
     bitmap = atomic_rcu_read(&migration_bitmap_rcu)->bmap;
-    if (ram_bulk_stage && nr > base) {
+    if (rs->ram_bulk_stage && nr > base) {
         next = nr + 1;
     } else {
         next = find_next_bit(bitmap, size, nr);
@@ -740,6 +749,7 @@ static void ram_release_pages(MigrationState *ms, const char *block_name,
  *          >=0 - Number of pages written - this might legally be 0
  *                if xbzrle noticed the page was the same.
  *
+ * @rs: The RAM state
  * @ms: The current migration state.
  * @f: QEMUFile where to send the data
  * @block: block that contains the page we want to send
@@ -747,8 +757,9 @@ static void ram_release_pages(MigrationState *ms, const char *block_name,
  * @last_stage: if we are at the completion stage
  * @bytes_transferred: increase it with the number of transferred bytes
  */
-static int ram_save_page(MigrationState *ms, QEMUFile *f, PageSearchStatus *pss,
-                         bool last_stage, uint64_t *bytes_transferred)
+static int ram_save_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
+                         PageSearchStatus *pss, bool last_stage,
+                         uint64_t *bytes_transferred)
 {
     int pages = -1;
     uint64_t bytes_xmit;
@@ -774,7 +785,7 @@ static int ram_save_page(MigrationState *ms, QEMUFile *f, PageSearchStatus *pss,
 
     current_addr = block->offset + offset;
 
-    if (block == last_sent_block) {
+    if (block == rs->last_sent_block) {
         offset |= RAM_SAVE_FLAG_CONTINUE;
     }
     if (ret != RAM_SAVE_CONTROL_NOT_SUPP) {
@@ -791,9 +802,9 @@ static int ram_save_page(MigrationState *ms, QEMUFile *f, PageSearchStatus *pss,
             /* Must let xbzrle know, otherwise a previous (now 0'd) cached
              * page would be stale
              */
-            xbzrle_cache_zero_page(current_addr);
+            xbzrle_cache_zero_page(rs, current_addr);
             ram_release_pages(ms, block->idstr, pss->offset, pages);
-        } else if (!ram_bulk_stage &&
+        } else if (!rs->ram_bulk_stage &&
                    !migration_in_postcopy(ms) && migrate_use_xbzrle()) {
             pages = save_xbzrle_page(f, &p, current_addr, block,
                                      offset, last_stage, bytes_transferred);
@@ -925,6 +936,7 @@ static int compress_page_with_multi_thread(QEMUFile *f, RAMBlock *block,
  *
  * Returns: Number of pages written.
  *
+ * @rs: The RAM state
  * @ms: The current migration state.
  * @f: QEMUFile where to send the data
  * @block: block that contains the page we want to send
@@ -932,7 +944,8 @@ static int compress_page_with_multi_thread(QEMUFile *f, RAMBlock *block,
  * @last_stage: if we are at the completion stage
  * @bytes_transferred: increase it with the number of transferred bytes
  */
-static int ram_save_compressed_page(MigrationState *ms, QEMUFile *f,
+static int ram_save_compressed_page(RAMState *rs, MigrationState *ms,
+                                    QEMUFile *f,
                                     PageSearchStatus *pss, bool last_stage,
                                     uint64_t *bytes_transferred)
 {
@@ -966,7 +979,7 @@ static int ram_save_compressed_page(MigrationState *ms, QEMUFile *f,
          * out, keeping this order is important, because the 'cont' flag
          * is used to avoid resending the block name.
          */
-        if (block != last_sent_block) {
+        if (block != rs->last_sent_block) {
             flush_compressed_data(f);
             pages = save_zero_page(f, block, offset, p, bytes_transferred);
             if (pages == -1) {
@@ -1008,19 +1021,20 @@ static int ram_save_compressed_page(MigrationState *ms, QEMUFile *f,
  *
  * Returns: True if a page is found
  *
+ * @rs: The RAM state
  * @f: Current migration stream.
  * @pss: Data about the state of the current dirty page scan.
  * @*again: Set to false if the search has scanned the whole of RAM
  * *ram_addr_abs: Pointer into which to store the address of the dirty page
  *               within the global ram_addr space
  */
-static bool find_dirty_block(QEMUFile *f, PageSearchStatus *pss,
+static bool find_dirty_block(RAMState *rs, QEMUFile *f, PageSearchStatus *pss,
                              bool *again, ram_addr_t *ram_addr_abs)
 {
-    pss->offset = migration_bitmap_find_dirty(pss->block, pss->offset,
+    pss->offset = migration_bitmap_find_dirty(rs, pss->block, pss->offset,
                                               ram_addr_abs);
-    if (pss->complete_round && pss->block == last_seen_block &&
-        pss->offset >= last_offset) {
+    if (pss->complete_round && pss->block == rs->last_seen_block &&
+        pss->offset >= rs->last_offset) {
         /*
          * We've been once around the RAM and haven't found anything.
          * Give up.
@@ -1037,7 +1051,7 @@ static bool find_dirty_block(QEMUFile *f, PageSearchStatus *pss,
             pss->block = QLIST_FIRST_RCU(&ram_list.blocks);
             /* Flag that we've looped */
             pss->complete_round = true;
-            ram_bulk_stage = false;
+            rs->ram_bulk_stage = false;
             if (migrate_use_xbzrle()) {
                 /* If xbzrle is on, stop using the data compression at this
                  * point. In theory, xbzrle can do better than compression.
@@ -1097,13 +1111,14 @@ static RAMBlock *unqueue_page(MigrationState *ms, ram_addr_t *offset,
  * Unqueue a page from the queue fed by postcopy page requests; skips pages
  * that are already sent (!dirty)
  *
+ *      rs: The RAM state
  *      ms:      MigrationState in
  *     pss:      PageSearchStatus structure updated with found block/offset
  * ram_addr_abs: global offset in the dirty/sent bitmaps
  *
  * Returns:      true if a queued page is found
  */
-static bool get_queued_page(MigrationState *ms, PageSearchStatus *pss,
+static bool get_queued_page(RAMState *rs, MigrationState *ms, PageSearchStatus *pss,
                             ram_addr_t *ram_addr_abs)
 {
     RAMBlock  *block;
@@ -1144,7 +1159,7 @@ static bool get_queued_page(MigrationState *ms, PageSearchStatus *pss,
          * in (migration_bitmap_find_and_reset_dirty) that every page is
          * dirty, that's no longer true.
          */
-        ram_bulk_stage = false;
+        rs->ram_bulk_stage = false;
 
         /*
          * We want the background search to continue from the queued page
@@ -1248,6 +1263,7 @@ err:
  * ram_save_target_page: Save one target page
  *
  *
+ * @rs: The RAM state
  * @f: QEMUFile where to send the data
  * @block: pointer to block that contains the page we want to send
  * @offset: offset inside the block for the page;
@@ -1257,7 +1273,7 @@ err:
  *
  * Returns: Number of pages written.
  */
-static int ram_save_target_page(MigrationState *ms, QEMUFile *f,
+static int ram_save_target_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
                                 PageSearchStatus *pss,
                                 bool last_stage,
                                 uint64_t *bytes_transferred,
@@ -1269,11 +1285,11 @@ static int ram_save_target_page(MigrationState *ms, QEMUFile *f,
     if (migration_bitmap_clear_dirty(dirty_ram_abs)) {
         unsigned long *unsentmap;
         if (compression_switch && migrate_use_compression()) {
-            res = ram_save_compressed_page(ms, f, pss,
+            res = ram_save_compressed_page(rs, ms, f, pss,
                                            last_stage,
                                            bytes_transferred);
         } else {
-            res = ram_save_page(ms, f, pss, last_stage,
+            res = ram_save_page(rs, ms, f, pss, last_stage,
                                 bytes_transferred);
         }
 
@@ -1289,7 +1305,7 @@ static int ram_save_target_page(MigrationState *ms, QEMUFile *f,
          * to the stream.
          */
         if (res > 0) {
-            last_sent_block = pss->block;
+            rs->last_sent_block = pss->block;
         }
     }
 
@@ -1307,6 +1323,7 @@ static int ram_save_target_page(MigrationState *ms, QEMUFile *f,
  *
  * Returns: Number of pages written.
  *
+ * @rs: The RAM state
  * @f: QEMUFile where to send the data
  * @block: pointer to block that contains the page we want to send
  * @offset: offset inside the block for the page; updated to last target page
@@ -1315,7 +1332,7 @@ static int ram_save_target_page(MigrationState *ms, QEMUFile *f,
  * @bytes_transferred: increase it with the number of transferred bytes
  * @dirty_ram_abs: Address of the start of the dirty page in ram_addr_t space
  */
-static int ram_save_host_page(MigrationState *ms, QEMUFile *f,
+static int ram_save_host_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
                               PageSearchStatus *pss,
                               bool last_stage,
                               uint64_t *bytes_transferred,
@@ -1325,7 +1342,7 @@ static int ram_save_host_page(MigrationState *ms, QEMUFile *f,
     size_t pagesize = qemu_ram_pagesize(pss->block);
 
     do {
-        tmppages = ram_save_target_page(ms, f, pss, last_stage,
+        tmppages = ram_save_target_page(rs, ms, f, pss, last_stage,
                                         bytes_transferred, dirty_ram_abs);
         if (tmppages < 0) {
             return tmppages;
@@ -1349,6 +1366,7 @@ static int ram_save_host_page(MigrationState *ms, QEMUFile *f,
  * Returns:  The number of pages written
  *           0 means no dirty pages
  *
+ * @rs: The RAM state
  * @f: QEMUFile where to send the data
  * @last_stage: if we are at the completion stage
  * @bytes_transferred: increase it with the number of transferred bytes
@@ -1357,7 +1375,7 @@ static int ram_save_host_page(MigrationState *ms, QEMUFile *f,
  * pages in a host page that are dirty.
  */
 
-static int ram_find_and_save_block(QEMUFile *f, bool last_stage,
+static int ram_find_and_save_block(RAMState *rs, QEMUFile *f, bool last_stage,
                                    uint64_t *bytes_transferred)
 {
     PageSearchStatus pss;
@@ -1372,8 +1390,8 @@ static int ram_find_and_save_block(QEMUFile *f, bool last_stage,
         return pages;
     }
 
-    pss.block = last_seen_block;
-    pss.offset = last_offset;
+    pss.block = rs->last_seen_block;
+    pss.offset = rs->last_offset;
     pss.complete_round = false;
 
     if (!pss.block) {
@@ -1382,22 +1400,22 @@ static int ram_find_and_save_block(QEMUFile *f, bool last_stage,
 
     do {
         again = true;
-        found = get_queued_page(ms, &pss, &dirty_ram_abs);
+        found = get_queued_page(rs, ms, &pss, &dirty_ram_abs);
 
         if (!found) {
             /* priority queue empty, so just search for something dirty */
-            found = find_dirty_block(f, &pss, &again, &dirty_ram_abs);
+            found = find_dirty_block(rs, f, &pss, &again, &dirty_ram_abs);
         }
 
         if (found) {
-            pages = ram_save_host_page(ms, f, &pss,
+            pages = ram_save_host_page(rs, ms, f, &pss,
                                        last_stage, bytes_transferred,
                                        dirty_ram_abs);
         }
     } while (!pages && again);
 
-    last_seen_block = pss.block;
-    last_offset = pss.offset;
+    rs->last_seen_block = pss.block;
+    rs->last_offset = pss.offset;
 
     return pages;
 }
@@ -1479,13 +1497,13 @@ static void ram_migration_cleanup(void *opaque)
     XBZRLE_cache_unlock();
 }
 
-static void reset_ram_globals(void)
+static void ram_state_reset(RAMState *rs)
 {
-    last_seen_block = NULL;
-    last_sent_block = NULL;
-    last_offset = 0;
-    last_version = ram_list.version;
-    ram_bulk_stage = true;
+    rs->last_seen_block = NULL;
+    rs->last_sent_block = NULL;
+    rs->last_offset = 0;
+    rs->last_version = ram_list.version;
+    rs->ram_bulk_stage = true;
 }
 
 #define MAX_WAIT 50 /* ms, half buffered_file limit */
@@ -1800,9 +1818,9 @@ static int postcopy_chunk_hostpages(MigrationState *ms)
     struct RAMBlock *block;
 
     /* Easiest way to make sure we don't resume in the middle of a host-page */
-    last_seen_block = NULL;
-    last_sent_block = NULL;
-    last_offset     = 0;
+    ram_state.last_seen_block = NULL;
+    ram_state.last_sent_block = NULL;
+    ram_state.last_offset     = 0;
 
     QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
         unsigned long first = block->offset >> TARGET_PAGE_BITS;
@@ -1913,7 +1931,7 @@ err:
     return ret;
 }
 
-static int ram_save_init_globals(void)
+static int ram_save_init_globals(RAMState *rs)
 {
     int64_t ram_bitmap_pages; /* Size of bitmap in pages, including gaps */
 
@@ -1959,7 +1977,7 @@ static int ram_save_init_globals(void)
     qemu_mutex_lock_ramlist();
     rcu_read_lock();
     bytes_transferred = 0;
-    reset_ram_globals();
+    ram_state_reset(rs);
 
     migration_bitmap_rcu = g_new0(struct BitmapRcu, 1);
     /* Skip setting bitmap if there is no RAM */
@@ -1997,11 +2015,12 @@ static int ram_save_init_globals(void)
 
 static int ram_save_setup(QEMUFile *f, void *opaque)
 {
+    RAMState *rs = opaque;
     RAMBlock *block;
 
     /* migration has already setup the bitmap, reuse it. */
     if (!migration_in_colo_state()) {
-        if (ram_save_init_globals() < 0) {
+        if (ram_save_init_globals(rs) < 0) {
             return -1;
          }
     }
@@ -2031,14 +2050,15 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
 
 static int ram_save_iterate(QEMUFile *f, void *opaque)
 {
+    RAMState *rs = opaque;
     int ret;
     int i;
     int64_t t0;
     int done = 0;
 
     rcu_read_lock();
-    if (ram_list.version != last_version) {
-        reset_ram_globals();
+    if (ram_list.version != rs->last_version) {
+        ram_state_reset(rs);
     }
 
     /* Read version before ram_list.blocks */
@@ -2051,7 +2071,7 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
     while ((ret = qemu_file_rate_limit(f)) == 0) {
         int pages;
 
-        pages = ram_find_and_save_block(f, false, &bytes_transferred);
+        pages = ram_find_and_save_block(rs, f, false, &bytes_transferred);
         /* no more pages to sent */
         if (pages == 0) {
             done = 1;
@@ -2096,6 +2116,8 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
 /* Called with iothread lock */
 static int ram_save_complete(QEMUFile *f, void *opaque)
 {
+    RAMState *rs = opaque;
+    
     rcu_read_lock();
 
     if (!migration_in_postcopy(migrate_get_current())) {
@@ -2110,7 +2132,7 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
     while (true) {
         int pages;
 
-        pages = ram_find_and_save_block(f, !migration_in_colo_state(),
+        pages = ram_find_and_save_block(rs, f, !migration_in_colo_state(),
                                         &bytes_transferred);
         /* no more blocks to sent */
         if (pages == 0) {
@@ -2675,5 +2697,5 @@ static SaveVMHandlers savevm_ram_handlers = {
 void ram_mig_init(void)
 {
     qemu_mutex_init(&XBZRLE.lock);
-    register_savevm_live(NULL, "ram", 0, 4, &savevm_ram_handlers, NULL);
+    register_savevm_live(NULL, "ram", 0, 4, &savevm_ram_handlers, &ram_state);
 }
-- 
2.9.3

* [Qemu-devel] [PATCH 02/31] ram: Add dirty_rate_high_cnt to RAMState
  2017-03-15 13:49 [Qemu-devel] [PATCH 00/31] Creating RAMState for migration Juan Quintela
  2017-03-15 13:49 ` [Qemu-devel] [PATCH 01/31] ram: move more fields into RAMState Juan Quintela
@ 2017-03-15 13:49 ` Juan Quintela
  2017-03-16 12:20   ` Dr. David Alan Gilbert
  2017-03-15 13:49 ` [Qemu-devel] [PATCH 03/31] ram: move bitmap_sync_count into RAMState Juan Quintela
                   ` (29 subsequent siblings)
  31 siblings, 1 reply; 68+ messages in thread
From: Juan Quintela @ 2017-03-15 13:49 UTC (permalink / raw)
  To: qemu-devel; +Cc: amit.shah, dgilbert

We need to add a RAMState parameter to several functions to make this work.

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 23 ++++++++++++-----------
 1 file changed, 12 insertions(+), 11 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index c20a539..9120755 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -45,8 +45,6 @@
 #include "qemu/rcu_queue.h"
 #include "migration/colo.h"
 
-static int dirty_rate_high_cnt;
-
 static uint64_t bitmap_sync_count;
 
 /***********************************************************/
@@ -148,6 +146,8 @@ struct RAMState {
     uint32_t last_version;
     /* We are in the first round */
     bool ram_bulk_stage;
+    /* How many times we have dirtied too many pages */
+    int dirty_rate_high_cnt;
 };
 typedef struct RAMState RAMState;
 
@@ -626,7 +626,7 @@ uint64_t ram_pagesize_summary(void)
     return summary;
 }
 
-static void migration_bitmap_sync(void)
+static void migration_bitmap_sync(RAMState *rs)
 {
     RAMBlock *block;
     uint64_t num_dirty_pages_init = migration_dirty_pages;
@@ -673,9 +673,9 @@ static void migration_bitmap_sync(void)
             if (s->dirty_pages_rate &&
                (num_dirty_pages_period * TARGET_PAGE_SIZE >
                    (bytes_xfer_now - bytes_xfer_prev)/2) &&
-               (dirty_rate_high_cnt++ >= 2)) {
+               (rs->dirty_rate_high_cnt++ >= 2)) {
                     trace_migration_throttle();
-                    dirty_rate_high_cnt = 0;
+                    rs->dirty_rate_high_cnt = 0;
                     mig_throttle_guest_down();
              }
              bytes_xfer_prev = bytes_xfer_now;
@@ -1859,7 +1859,7 @@ int ram_postcopy_send_discard_bitmap(MigrationState *ms)
     rcu_read_lock();
 
     /* This should be our last sync, the src is now paused */
-    migration_bitmap_sync();
+    migration_bitmap_sync(&ram_state);
 
     unsentmap = atomic_rcu_read(&migration_bitmap_rcu)->unsentmap;
     if (!unsentmap) {
@@ -1935,7 +1935,7 @@ static int ram_save_init_globals(RAMState *rs)
 {
     int64_t ram_bitmap_pages; /* Size of bitmap in pages, including gaps */
 
-    dirty_rate_high_cnt = 0;
+    rs->dirty_rate_high_cnt = 0;
     bitmap_sync_count = 0;
     migration_bitmap_sync_init();
     qemu_mutex_init(&migration_bitmap_mutex);
@@ -1999,7 +1999,7 @@ static int ram_save_init_globals(RAMState *rs)
     migration_dirty_pages = ram_bytes_total() >> TARGET_PAGE_BITS;
 
     memory_global_dirty_log_start();
-    migration_bitmap_sync();
+    migration_bitmap_sync(rs);
     qemu_mutex_unlock_ramlist();
     qemu_mutex_unlock_iothread();
     rcu_read_unlock();
@@ -2117,11 +2117,11 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
 static int ram_save_complete(QEMUFile *f, void *opaque)
 {
     RAMState *rs = opaque;
-    
+
     rcu_read_lock();
 
     if (!migration_in_postcopy(migrate_get_current())) {
-        migration_bitmap_sync();
+        migration_bitmap_sync(rs);
     }
 
     ram_control_before_iterate(f, RAM_CONTROL_FINISH);
@@ -2154,6 +2154,7 @@ static void ram_save_pending(QEMUFile *f, void *opaque, uint64_t max_size,
                              uint64_t *non_postcopiable_pending,
                              uint64_t *postcopiable_pending)
 {
+    RAMState *rs = opaque;
     uint64_t remaining_size;
 
     remaining_size = ram_save_remaining() * TARGET_PAGE_SIZE;
@@ -2162,7 +2163,7 @@ static void ram_save_pending(QEMUFile *f, void *opaque, uint64_t max_size,
         remaining_size < max_size) {
         qemu_mutex_lock_iothread();
         rcu_read_lock();
-        migration_bitmap_sync();
+        migration_bitmap_sync(rs);
         rcu_read_unlock();
         qemu_mutex_unlock_iothread();
         remaining_size = ram_save_remaining() * TARGET_PAGE_SIZE;
-- 
2.9.3

* [Qemu-devel] [PATCH 03/31] ram: move bitmap_sync_count into RAMState
  2017-03-15 13:49 [Qemu-devel] [PATCH 00/31] Creating RAMState for migration Juan Quintela
  2017-03-15 13:49 ` [Qemu-devel] [PATCH 01/31] ram: move more fields into RAMState Juan Quintela
  2017-03-15 13:49 ` [Qemu-devel] [PATCH 02/31] ram: Add dirty_rate_high_cnt to RAMState Juan Quintela
@ 2017-03-15 13:49 ` Juan Quintela
  2017-03-16 12:21   ` Dr. David Alan Gilbert
  2017-03-15 13:49 ` [Qemu-devel] [PATCH 04/31] ram: Move start time " Juan Quintela
                   ` (28 subsequent siblings)
  31 siblings, 1 reply; 68+ messages in thread
From: Juan Quintela @ 2017-03-15 13:49 UTC (permalink / raw)
  To: qemu-devel; +Cc: amit.shah, dgilbert

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 9120755..c0bee94 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -45,8 +45,6 @@
 #include "qemu/rcu_queue.h"
 #include "migration/colo.h"
 
-static uint64_t bitmap_sync_count;
-
 /***********************************************************/
 /* ram save/restore */
 
@@ -148,6 +146,8 @@ struct RAMState {
     bool ram_bulk_stage;
     /* How many times we have dirtied too many pages */
     int dirty_rate_high_cnt;
+    /* How many times we have synchronized the bitmap */
+    uint64_t bitmap_sync_count;
 };
 typedef struct RAMState RAMState;
 
@@ -455,7 +455,7 @@ static void xbzrle_cache_zero_page(RAMState *rs, ram_addr_t current_addr)
     /* We don't care if this fails to allocate a new cache page
      * as long as it updated an old one */
     cache_insert(XBZRLE.cache, current_addr, ZERO_TARGET_PAGE,
-                 bitmap_sync_count);
+                 rs->bitmap_sync_count);
 }
 
 #define ENCODING_FLAG_XBZRLE 0x1
@@ -475,7 +475,7 @@ static void xbzrle_cache_zero_page(RAMState *rs, ram_addr_t current_addr)
  * @last_stage: if we are at the completion stage
  * @bytes_transferred: increase it with the number of transferred bytes
  */
-static int save_xbzrle_page(QEMUFile *f, uint8_t **current_data,
+static int save_xbzrle_page(QEMUFile *f, RAMState *rs, uint8_t **current_data,
                             ram_addr_t current_addr, RAMBlock *block,
                             ram_addr_t offset, bool last_stage,
                             uint64_t *bytes_transferred)
@@ -483,11 +483,11 @@ static int save_xbzrle_page(QEMUFile *f, uint8_t **current_data,
     int encoded_len = 0, bytes_xbzrle;
     uint8_t *prev_cached_page;
 
-    if (!cache_is_cached(XBZRLE.cache, current_addr, bitmap_sync_count)) {
+    if (!cache_is_cached(XBZRLE.cache, current_addr, rs->bitmap_sync_count)) {
         acct_info.xbzrle_cache_miss++;
         if (!last_stage) {
             if (cache_insert(XBZRLE.cache, current_addr, *current_data,
-                             bitmap_sync_count) == -1) {
+                             rs->bitmap_sync_count) == -1) {
                 return -1;
             } else {
                 /* update *current_data when the page has been
@@ -634,7 +634,7 @@ static void migration_bitmap_sync(RAMState *rs)
     int64_t end_time;
     int64_t bytes_xfer_now;
 
-    bitmap_sync_count++;
+    rs->bitmap_sync_count++;
 
     if (!bytes_xfer_prev) {
         bytes_xfer_prev = ram_bytes_transferred();
@@ -697,9 +697,9 @@ static void migration_bitmap_sync(RAMState *rs)
         start_time = end_time;
         num_dirty_pages_period = 0;
     }
-    s->dirty_sync_count = bitmap_sync_count;
+    s->dirty_sync_count = rs->bitmap_sync_count;
     if (migrate_use_events()) {
-        qapi_event_send_migration_pass(bitmap_sync_count, NULL);
+        qapi_event_send_migration_pass(rs->bitmap_sync_count, NULL);
     }
 }
 
@@ -806,7 +806,7 @@ static int ram_save_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
             ram_release_pages(ms, block->idstr, pss->offset, pages);
         } else if (!rs->ram_bulk_stage &&
                    !migration_in_postcopy(ms) && migrate_use_xbzrle()) {
-            pages = save_xbzrle_page(f, &p, current_addr, block,
+            pages = save_xbzrle_page(f, rs, &p, current_addr, block,
                                      offset, last_stage, bytes_transferred);
             if (!last_stage) {
                 /* Can't send this cached data async, since the cache page
@@ -1936,7 +1936,7 @@ static int ram_save_init_globals(RAMState *rs)
     int64_t ram_bitmap_pages; /* Size of bitmap in pages, including gaps */
 
     rs->dirty_rate_high_cnt = 0;
-    bitmap_sync_count = 0;
+    rs->bitmap_sync_count = 0;
     migration_bitmap_sync_init();
     qemu_mutex_init(&migration_bitmap_mutex);
 
-- 
2.9.3

* [Qemu-devel] [PATCH 04/31] ram: Move start time into RAMState
  2017-03-15 13:49 [Qemu-devel] [PATCH 00/31] Creating RAMState for migration Juan Quintela
                   ` (2 preceding siblings ...)
  2017-03-15 13:49 ` [Qemu-devel] [PATCH 03/31] ram: move bitmap_sync_count into RAMState Juan Quintela
@ 2017-03-15 13:49 ` Juan Quintela
  2017-03-16 12:21   ` Dr. David Alan Gilbert
  2017-03-15 13:49 ` [Qemu-devel] [PATCH 05/31] ram: Move bytes_xfer_prev " Juan Quintela
                   ` (27 subsequent siblings)
  31 siblings, 1 reply; 68+ messages in thread
From: Juan Quintela @ 2017-03-15 13:49 UTC (permalink / raw)
  To: qemu-devel; +Cc: amit.shah, dgilbert

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 20 +++++++++++---------
 1 file changed, 11 insertions(+), 9 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index c0bee94..f6ac503 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -148,6 +148,9 @@ struct RAMState {
     int dirty_rate_high_cnt;
     /* How many times we have synchronized the bitmap */
     uint64_t bitmap_sync_count;
+    /* these variables are used for bitmap sync */
+    /* last time we did a full bitmap_sync */
+    int64_t start_time;
 };
 typedef struct RAMState RAMState;
 
@@ -594,15 +597,14 @@ static void migration_bitmap_sync_range(ram_addr_t start, ram_addr_t length)
 }
 
 /* Fix me: there are too many global variables used in migration process. */
-static int64_t start_time;
 static int64_t bytes_xfer_prev;
 static int64_t num_dirty_pages_period;
 static uint64_t xbzrle_cache_miss_prev;
 static uint64_t iterations_prev;
 
-static void migration_bitmap_sync_init(void)
+static void migration_bitmap_sync_init(RAMState *rs)
 {
-    start_time = 0;
+    rs->start_time = 0;
     bytes_xfer_prev = 0;
     num_dirty_pages_period = 0;
     xbzrle_cache_miss_prev = 0;
@@ -640,8 +642,8 @@ static void migration_bitmap_sync(RAMState *rs)
         bytes_xfer_prev = ram_bytes_transferred();
     }
 
-    if (!start_time) {
-        start_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
+    if (!rs->start_time) {
+        rs->start_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
     }
 
     trace_migration_bitmap_sync_start();
@@ -661,7 +663,7 @@ static void migration_bitmap_sync(RAMState *rs)
     end_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
 
     /* more than 1 second = 1000 millisecons */
-    if (end_time > start_time + 1000) {
+    if (end_time > rs->start_time + 1000) {
         if (migrate_auto_converge()) {
             /* The following detection logic can be refined later. For now:
                Check to see if the dirtied bytes is 50% more than the approx.
@@ -692,9 +694,9 @@ static void migration_bitmap_sync(RAMState *rs)
             xbzrle_cache_miss_prev = acct_info.xbzrle_cache_miss;
         }
         s->dirty_pages_rate = num_dirty_pages_period * 1000
-            / (end_time - start_time);
+            / (end_time - rs->start_time);
         s->dirty_bytes_rate = s->dirty_pages_rate * TARGET_PAGE_SIZE;
-        start_time = end_time;
+        rs->start_time = end_time;
         num_dirty_pages_period = 0;
     }
     s->dirty_sync_count = rs->bitmap_sync_count;
@@ -1937,7 +1939,7 @@ static int ram_save_init_globals(RAMState *rs)
 
     rs->dirty_rate_high_cnt = 0;
     rs->bitmap_sync_count = 0;
-    migration_bitmap_sync_init();
+    migration_bitmap_sync_init(rs);
     qemu_mutex_init(&migration_bitmap_mutex);
 
     if (migrate_use_xbzrle()) {
-- 
2.9.3

* [Qemu-devel] [PATCH 05/31] ram: Move bytes_xfer_prev into RAMState
  2017-03-15 13:49 [Qemu-devel] [PATCH 00/31] Creating RAMState for migration Juan Quintela
                   ` (3 preceding siblings ...)
  2017-03-15 13:49 ` [Qemu-devel] [PATCH 04/31] ram: Move start time " Juan Quintela
@ 2017-03-15 13:49 ` Juan Quintela
  2017-03-16 12:22   ` Dr. David Alan Gilbert
  2017-03-15 13:49 ` [Qemu-devel] [PATCH 06/31] ram: Move num_dirty_pages_period " Juan Quintela
                   ` (26 subsequent siblings)
  31 siblings, 1 reply; 68+ messages in thread
From: Juan Quintela @ 2017-03-15 13:49 UTC (permalink / raw)
  To: qemu-devel; +Cc: amit.shah, dgilbert

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index f6ac503..2d288cc 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -151,6 +151,8 @@ struct RAMState {
     /* these variables are used for bitmap sync */
     /* last time we did a full bitmap_sync */
     int64_t start_time;
+    /* bytes transferred at start_time */
+    int64_t bytes_xfer_prev;
 };
 typedef struct RAMState RAMState;
 
@@ -597,7 +599,6 @@ static void migration_bitmap_sync_range(ram_addr_t start, ram_addr_t length)
 }
 
 /* Fix me: there are too many global variables used in migration process. */
-static int64_t bytes_xfer_prev;
 static int64_t num_dirty_pages_period;
 static uint64_t xbzrle_cache_miss_prev;
 static uint64_t iterations_prev;
@@ -605,7 +606,7 @@ static uint64_t iterations_prev;
 static void migration_bitmap_sync_init(RAMState *rs)
 {
     rs->start_time = 0;
-    bytes_xfer_prev = 0;
+    rs->bytes_xfer_prev = 0;
     num_dirty_pages_period = 0;
     xbzrle_cache_miss_prev = 0;
     iterations_prev = 0;
@@ -638,8 +639,8 @@ static void migration_bitmap_sync(RAMState *rs)
 
     rs->bitmap_sync_count++;
 
-    if (!bytes_xfer_prev) {
-        bytes_xfer_prev = ram_bytes_transferred();
+    if (!rs->bytes_xfer_prev) {
+        rs->bytes_xfer_prev = ram_bytes_transferred();
     }
 
     if (!rs->start_time) {
@@ -674,13 +675,13 @@ static void migration_bitmap_sync(RAMState *rs)
 
             if (s->dirty_pages_rate &&
                (num_dirty_pages_period * TARGET_PAGE_SIZE >
-                   (bytes_xfer_now - bytes_xfer_prev)/2) &&
+                   (bytes_xfer_now - rs->bytes_xfer_prev)/2) &&
                (rs->dirty_rate_high_cnt++ >= 2)) {
                     trace_migration_throttle();
                     rs->dirty_rate_high_cnt = 0;
                     mig_throttle_guest_down();
              }
-             bytes_xfer_prev = bytes_xfer_now;
+             rs->bytes_xfer_prev = bytes_xfer_now;
         }
 
         if (migrate_use_xbzrle()) {
-- 
2.9.3

* [Qemu-devel] [PATCH 06/31] ram: Move num_dirty_pages_period into RAMState
  2017-03-15 13:49 [Qemu-devel] [PATCH 00/31] Creating RAMState for migration Juan Quintela
                   ` (4 preceding siblings ...)
  2017-03-15 13:49 ` [Qemu-devel] [PATCH 05/31] ram: Move bytes_xfer_prev " Juan Quintela
@ 2017-03-15 13:49 ` Juan Quintela
  2017-03-16 12:23   ` Dr. David Alan Gilbert
  2017-03-15 13:49 ` [Qemu-devel] [PATCH 07/31] ram: Move xbzrle_cache_miss_prev " Juan Quintela
                   ` (25 subsequent siblings)
  31 siblings, 1 reply; 68+ messages in thread
From: Juan Quintela @ 2017-03-15 13:49 UTC (permalink / raw)
  To: qemu-devel; +Cc: amit.shah, dgilbert

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 2d288cc..b13d2d5 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -153,6 +153,8 @@ struct RAMState {
     int64_t start_time;
     /* bytes transferred at start_time */
     int64_t bytes_xfer_prev;
+    /* number of dirty pages since start_time */
+    int64_t num_dirty_pages_period;
 };
 typedef struct RAMState RAMState;
 
@@ -599,7 +601,6 @@ static void migration_bitmap_sync_range(ram_addr_t start, ram_addr_t length)
 }
 
 /* Fix me: there are too many global variables used in migration process. */
-static int64_t num_dirty_pages_period;
 static uint64_t xbzrle_cache_miss_prev;
 static uint64_t iterations_prev;
 
@@ -607,7 +608,7 @@ static void migration_bitmap_sync_init(RAMState *rs)
 {
     rs->start_time = 0;
     rs->bytes_xfer_prev = 0;
-    num_dirty_pages_period = 0;
+    rs->num_dirty_pages_period = 0;
     xbzrle_cache_miss_prev = 0;
     iterations_prev = 0;
 }
@@ -660,7 +661,7 @@ static void migration_bitmap_sync(RAMState *rs)
 
     trace_migration_bitmap_sync_end(migration_dirty_pages
                                     - num_dirty_pages_init);
-    num_dirty_pages_period += migration_dirty_pages - num_dirty_pages_init;
+    rs->num_dirty_pages_period += migration_dirty_pages - num_dirty_pages_init;
     end_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
 
     /* more than 1 second = 1000 millisecons */
@@ -674,7 +675,7 @@ static void migration_bitmap_sync(RAMState *rs)
             bytes_xfer_now = ram_bytes_transferred();
 
             if (s->dirty_pages_rate &&
-               (num_dirty_pages_period * TARGET_PAGE_SIZE >
+               (rs->num_dirty_pages_period * TARGET_PAGE_SIZE >
                    (bytes_xfer_now - rs->bytes_xfer_prev)/2) &&
                (rs->dirty_rate_high_cnt++ >= 2)) {
                     trace_migration_throttle();
@@ -694,11 +695,11 @@ static void migration_bitmap_sync(RAMState *rs)
             iterations_prev = acct_info.iterations;
             xbzrle_cache_miss_prev = acct_info.xbzrle_cache_miss;
         }
-        s->dirty_pages_rate = num_dirty_pages_period * 1000
+        s->dirty_pages_rate = rs->num_dirty_pages_period * 1000
             / (end_time - rs->start_time);
         s->dirty_bytes_rate = s->dirty_pages_rate * TARGET_PAGE_SIZE;
         rs->start_time = end_time;
-        num_dirty_pages_period = 0;
+        rs->num_dirty_pages_period = 0;
     }
     s->dirty_sync_count = rs->bitmap_sync_count;
     if (migrate_use_events()) {
-- 
2.9.3

* [Qemu-devel] [PATCH 07/31] ram: Move xbzrle_cache_miss_prev into RAMState
  2017-03-15 13:49 [Qemu-devel] [PATCH 00/31] Creating RAMState for migration Juan Quintela
                   ` (5 preceding siblings ...)
  2017-03-15 13:49 ` [Qemu-devel] [PATCH 06/31] ram: Move num_dirty_pages_period " Juan Quintela
@ 2017-03-15 13:49 ` Juan Quintela
  2017-03-16 12:24   ` Dr. David Alan Gilbert
  2017-03-15 13:49 ` [Qemu-devel] [PATCH 08/31] ram: Move iterations_prev " Juan Quintela
                   ` (24 subsequent siblings)
  31 siblings, 1 reply; 68+ messages in thread
From: Juan Quintela @ 2017-03-15 13:49 UTC (permalink / raw)
  To: qemu-devel; +Cc: amit.shah, dgilbert

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index b13d2d5..ae077c5 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -155,6 +155,8 @@ struct RAMState {
     int64_t bytes_xfer_prev;
     /* number of dirty pages since start_time */
     int64_t num_dirty_pages_period;
+    /* xbzrle misses since the beginning of the period */
+    uint64_t xbzrle_cache_miss_prev;
 };
 typedef struct RAMState RAMState;
 
@@ -601,7 +603,6 @@ static void migration_bitmap_sync_range(ram_addr_t start, ram_addr_t length)
 }
 
 /* Fix me: there are too many global variables used in migration process. */
-static uint64_t xbzrle_cache_miss_prev;
 static uint64_t iterations_prev;
 
 static void migration_bitmap_sync_init(RAMState *rs)
@@ -609,7 +610,7 @@ static void migration_bitmap_sync_init(RAMState *rs)
     rs->start_time = 0;
     rs->bytes_xfer_prev = 0;
     rs->num_dirty_pages_period = 0;
-    xbzrle_cache_miss_prev = 0;
+    rs->xbzrle_cache_miss_prev = 0;
     iterations_prev = 0;
 }
 
@@ -689,11 +690,11 @@ static void migration_bitmap_sync(RAMState *rs)
             if (iterations_prev != acct_info.iterations) {
                 acct_info.xbzrle_cache_miss_rate =
                    (double)(acct_info.xbzrle_cache_miss -
-                            xbzrle_cache_miss_prev) /
+                            rs->xbzrle_cache_miss_prev) /
                    (acct_info.iterations - iterations_prev);
             }
             iterations_prev = acct_info.iterations;
-            xbzrle_cache_miss_prev = acct_info.xbzrle_cache_miss;
+            rs->xbzrle_cache_miss_prev = acct_info.xbzrle_cache_miss;
         }
         s->dirty_pages_rate = rs->num_dirty_pages_period * 1000
             / (end_time - rs->start_time);
-- 
2.9.3

* [Qemu-devel] [PATCH 08/31] ram: Move iterations_prev into RAMState
  2017-03-15 13:49 [Qemu-devel] [PATCH 00/31] Creating RAMState for migration Juan Quintela
                   ` (6 preceding siblings ...)
  2017-03-15 13:49 ` [Qemu-devel] [PATCH 07/31] ram: Move xbzrle_cache_miss_prev " Juan Quintela
@ 2017-03-15 13:49 ` Juan Quintela
  2017-03-16 12:26   ` Dr. David Alan Gilbert
  2017-03-15 13:49 ` [Qemu-devel] [PATCH 09/31] ram: Move dup_pages " Juan Quintela
                   ` (23 subsequent siblings)
  31 siblings, 1 reply; 68+ messages in thread
From: Juan Quintela @ 2017-03-15 13:49 UTC (permalink / raw)
  To: qemu-devel; +Cc: amit.shah, dgilbert

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index ae077c5..6cdad06 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -157,6 +157,8 @@ struct RAMState {
     int64_t num_dirty_pages_period;
     /* xbzrle misses since the beginning of the period */
     uint64_t xbzrle_cache_miss_prev;
+    /* number of iterations at the beginning of the period */
+    uint64_t iterations_prev;
 };
 typedef struct RAMState RAMState;
 
@@ -602,16 +604,13 @@ static void migration_bitmap_sync_range(ram_addr_t start, ram_addr_t length)
         cpu_physical_memory_sync_dirty_bitmap(bitmap, start, length);
 }
 
-/* Fix me: there are too many global variables used in migration process. */
-static uint64_t iterations_prev;
-
 static void migration_bitmap_sync_init(RAMState *rs)
 {
     rs->start_time = 0;
     rs->bytes_xfer_prev = 0;
     rs->num_dirty_pages_period = 0;
     rs->xbzrle_cache_miss_prev = 0;
-    iterations_prev = 0;
+    rs->iterations_prev = 0;
 }
 
 /* Returns a summary bitmap of the page sizes of all RAMBlocks;
@@ -687,13 +686,13 @@ static void migration_bitmap_sync(RAMState *rs)
         }
 
         if (migrate_use_xbzrle()) {
-            if (iterations_prev != acct_info.iterations) {
+            if (rs->iterations_prev != acct_info.iterations) {
                 acct_info.xbzrle_cache_miss_rate =
                    (double)(acct_info.xbzrle_cache_miss -
                             rs->xbzrle_cache_miss_prev) /
-                   (acct_info.iterations - iterations_prev);
+                   (acct_info.iterations - rs->iterations_prev);
             }
-            iterations_prev = acct_info.iterations;
+            rs->iterations_prev = acct_info.iterations;
             rs->xbzrle_cache_miss_prev = acct_info.xbzrle_cache_miss;
         }
         s->dirty_pages_rate = rs->num_dirty_pages_period * 1000
-- 
2.9.3

* [Qemu-devel] [PATCH 09/31] ram: Move dup_pages into RAMState
  2017-03-15 13:49 [Qemu-devel] [PATCH 00/31] Creating RAMState for migration Juan Quintela
                   ` (7 preceding siblings ...)
  2017-03-15 13:49 ` [Qemu-devel] [PATCH 08/31] ram: Move iterations_prev " Juan Quintela
@ 2017-03-15 13:49 ` Juan Quintela
  2017-03-16 12:27   ` Dr. David Alan Gilbert
  2017-03-15 13:50 ` [Qemu-devel] [PATCH 10/31] ram: Remove unused dup_mig_bytes_transferred() Juan Quintela
                   ` (22 subsequent siblings)
  31 siblings, 1 reply; 68+ messages in thread
From: Juan Quintela @ 2017-03-15 13:49 UTC (permalink / raw)
  To: qemu-devel; +Cc: amit.shah, dgilbert

Once there, rename it to reflect its actual meaning: zero_pages.

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 26 +++++++++++++++-----------
 1 file changed, 15 insertions(+), 11 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 6cdad06..059e9f1 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -159,6 +159,9 @@ struct RAMState {
     uint64_t xbzrle_cache_miss_prev;
     /* number of iterations at the beginning of the period */
     uint64_t iterations_prev;
+    /* Accounting fields */
+    /* number of zero pages.  It used to be pages filled by the same char. */
+    uint64_t zero_pages;
 };
 typedef struct RAMState RAMState;
 
@@ -166,7 +169,6 @@ static RAMState ram_state;
 
 /* accounting for migration statistics */
 typedef struct AccountingInfo {
-    uint64_t dup_pages;
     uint64_t skipped_pages;
     uint64_t norm_pages;
     uint64_t iterations;
@@ -186,12 +188,12 @@ static void acct_clear(void)
 
 uint64_t dup_mig_bytes_transferred(void)
 {
-    return acct_info.dup_pages * TARGET_PAGE_SIZE;
+    return ram_state.zero_pages * TARGET_PAGE_SIZE;
 }
 
 uint64_t dup_mig_pages_transferred(void)
 {
-    return acct_info.dup_pages;
+    return ram_state.zero_pages;
 }
 
 uint64_t skipped_mig_bytes_transferred(void)
@@ -718,13 +720,14 @@ static void migration_bitmap_sync(RAMState *rs)
  * @p: pointer to the page
  * @bytes_transferred: increase it with the number of transferred bytes
  */
-static int save_zero_page(QEMUFile *f, RAMBlock *block, ram_addr_t offset,
+static int save_zero_page(RAMState *rs, QEMUFile *f, RAMBlock *block,
+                          ram_addr_t offset,
                           uint8_t *p, uint64_t *bytes_transferred)
 {
     int pages = -1;
 
     if (is_zero_range(p, TARGET_PAGE_SIZE)) {
-        acct_info.dup_pages++;
+        rs->zero_pages++;
         *bytes_transferred += save_page_header(f, block,
                                                offset | RAM_SAVE_FLAG_COMPRESS);
         qemu_put_byte(f, 0);
@@ -797,11 +800,11 @@ static int ram_save_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
             if (bytes_xmit > 0) {
                 acct_info.norm_pages++;
             } else if (bytes_xmit == 0) {
-                acct_info.dup_pages++;
+                rs->zero_pages++;
             }
         }
     } else {
-        pages = save_zero_page(f, block, offset, p, bytes_transferred);
+        pages = save_zero_page(rs, f, block, offset, p, bytes_transferred);
         if (pages > 0) {
             /* Must let xbzrle know, otherwise a previous (now 0'd) cached
              * page would be stale
@@ -973,7 +976,7 @@ static int ram_save_compressed_page(RAMState *rs, MigrationState *ms,
             if (bytes_xmit > 0) {
                 acct_info.norm_pages++;
             } else if (bytes_xmit == 0) {
-                acct_info.dup_pages++;
+                rs->zero_pages++;
             }
         }
     } else {
@@ -985,7 +988,7 @@ static int ram_save_compressed_page(RAMState *rs, MigrationState *ms,
          */
         if (block != rs->last_sent_block) {
             flush_compressed_data(f);
-            pages = save_zero_page(f, block, offset, p, bytes_transferred);
+            pages = save_zero_page(rs, f, block, offset, p, bytes_transferred);
             if (pages == -1) {
                 /* Make sure the first page is sent out before other pages */
                 bytes_xmit = save_page_header(f, block, offset |
@@ -1006,7 +1009,7 @@ static int ram_save_compressed_page(RAMState *rs, MigrationState *ms,
             }
         } else {
             offset |= RAM_SAVE_FLAG_CONTINUE;
-            pages = save_zero_page(f, block, offset, p, bytes_transferred);
+            pages = save_zero_page(rs, f, block, offset, p, bytes_transferred);
             if (pages == -1) {
                 pages = compress_page_with_multi_thread(f, block, offset,
                                                         bytes_transferred);
@@ -1428,7 +1431,7 @@ void acct_update_position(QEMUFile *f, size_t size, bool zero)
 {
     uint64_t pages = size / TARGET_PAGE_SIZE;
     if (zero) {
-        acct_info.dup_pages += pages;
+        ram_state.zero_pages += pages;
     } else {
         acct_info.norm_pages += pages;
         bytes_transferred += size;
@@ -1941,6 +1944,7 @@ static int ram_save_init_globals(RAMState *rs)
 
     rs->dirty_rate_high_cnt = 0;
     rs->bitmap_sync_count = 0;
+    rs->zero_pages = 0;
     migration_bitmap_sync_init(rs);
     qemu_mutex_init(&migration_bitmap_mutex);
 
-- 
2.9.3

* [Qemu-devel] [PATCH 10/31] ram: Remove unused dup_mig_bytes_transferred()
  2017-03-15 13:49 [Qemu-devel] [PATCH 00/31] Creating RAMState for migration Juan Quintela
                   ` (8 preceding siblings ...)
  2017-03-15 13:49 ` [Qemu-devel] [PATCH 09/31] ram: Move dup_pages " Juan Quintela
@ 2017-03-15 13:50 ` Juan Quintela
  2017-03-16 15:48   ` Dr. David Alan Gilbert
  2017-03-15 13:50 ` [Qemu-devel] [PATCH 11/31] ram: Remove unused pages_skipped variable Juan Quintela
                   ` (21 subsequent siblings)
  31 siblings, 1 reply; 68+ messages in thread
From: Juan Quintela @ 2017-03-15 13:50 UTC (permalink / raw)
  To: qemu-devel; +Cc: amit.shah, dgilbert

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 include/migration/migration.h | 1 -
 migration/ram.c               | 5 -----
 2 files changed, 6 deletions(-)

diff --git a/include/migration/migration.h b/include/migration/migration.h
index 5720c88..3e6bb68 100644
--- a/include/migration/migration.h
+++ b/include/migration/migration.h
@@ -276,7 +276,6 @@ void free_xbzrle_decoded_buf(void);
 
 void acct_update_position(QEMUFile *f, size_t size, bool zero);
 
-uint64_t dup_mig_bytes_transferred(void);
 uint64_t dup_mig_pages_transferred(void);
 uint64_t skipped_mig_bytes_transferred(void);
 uint64_t skipped_mig_pages_transferred(void);
diff --git a/migration/ram.c b/migration/ram.c
index 059e9f1..83fe20a 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -186,11 +186,6 @@ static void acct_clear(void)
     memset(&acct_info, 0, sizeof(acct_info));
 }
 
-uint64_t dup_mig_bytes_transferred(void)
-{
-    return ram_state.zero_pages * TARGET_PAGE_SIZE;
-}
-
 uint64_t dup_mig_pages_transferred(void)
 {
     return ram_state.zero_pages;
-- 
2.9.3

* [Qemu-devel] [PATCH 11/31] ram: Remove unused pages_skipped variable
  2017-03-15 13:49 [Qemu-devel] [PATCH 00/31] Creating RAMState for migration Juan Quintela
                   ` (9 preceding siblings ...)
  2017-03-15 13:50 ` [Qemu-devel] [PATCH 10/31] ram: Remove unused dup_mig_bytes_transferred() Juan Quintela
@ 2017-03-15 13:50 ` Juan Quintela
  2017-03-16 15:52   ` Dr. David Alan Gilbert
  2017-03-15 13:50 ` [Qemu-devel] [PATCH 12/31] ram: Move norm_pages to RAMState Juan Quintela
                   ` (20 subsequent siblings)
  31 siblings, 1 reply; 68+ messages in thread
From: Juan Quintela @ 2017-03-15 13:50 UTC (permalink / raw)
  To: qemu-devel; +Cc: amit.shah, dgilbert

For compatibility, we still need to send a value on the wire, so just
hard-code it to zero and comment the fact.

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 include/migration/migration.h |  2 --
 migration/migration.c         |  3 ++-
 migration/ram.c               | 11 -----------
 3 files changed, 2 insertions(+), 14 deletions(-)

diff --git a/include/migration/migration.h b/include/migration/migration.h
index 3e6bb68..9c83951 100644
--- a/include/migration/migration.h
+++ b/include/migration/migration.h
@@ -277,8 +277,6 @@ void free_xbzrle_decoded_buf(void);
 void acct_update_position(QEMUFile *f, size_t size, bool zero);
 
 uint64_t dup_mig_pages_transferred(void);
-uint64_t skipped_mig_bytes_transferred(void);
-uint64_t skipped_mig_pages_transferred(void);
 uint64_t norm_mig_bytes_transferred(void);
 uint64_t norm_mig_pages_transferred(void);
 uint64_t xbzrle_mig_bytes_transferred(void);
diff --git a/migration/migration.c b/migration/migration.c
index 3dab684..c3e1b95 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -639,7 +639,8 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
     info->ram->transferred = ram_bytes_transferred();
     info->ram->total = ram_bytes_total();
     info->ram->duplicate = dup_mig_pages_transferred();
-    info->ram->skipped = skipped_mig_pages_transferred();
+    /* legacy value.  It is not used anymore */
+    info->ram->skipped = 0;
     info->ram->normal = norm_mig_pages_transferred();
     info->ram->normal_bytes = norm_mig_bytes_transferred();
     info->ram->mbps = s->mbps;
diff --git a/migration/ram.c b/migration/ram.c
index 83fe20a..468f042 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -169,7 +169,6 @@ static RAMState ram_state;
 
 /* accounting for migration statistics */
 typedef struct AccountingInfo {
-    uint64_t skipped_pages;
     uint64_t norm_pages;
     uint64_t iterations;
     uint64_t xbzrle_bytes;
@@ -191,16 +190,6 @@ uint64_t dup_mig_pages_transferred(void)
     return ram_state.zero_pages;
 }
 
-uint64_t skipped_mig_bytes_transferred(void)
-{
-    return acct_info.skipped_pages * TARGET_PAGE_SIZE;
-}
-
-uint64_t skipped_mig_pages_transferred(void)
-{
-    return acct_info.skipped_pages;
-}
-
 uint64_t norm_mig_bytes_transferred(void)
 {
     return acct_info.norm_pages * TARGET_PAGE_SIZE;
-- 
2.9.3

* [Qemu-devel] [PATCH 12/31] ram: Move norm_pages to RAMState
  2017-03-15 13:49 [Qemu-devel] [PATCH 00/31] Creating RAMState for migration Juan Quintela
                   ` (10 preceding siblings ...)
  2017-03-15 13:50 ` [Qemu-devel] [PATCH 11/31] ram: Remove unused pages_skiped variable Juan Quintela
@ 2017-03-15 13:50 ` Juan Quintela
  2017-03-16 16:09   ` Dr. David Alan Gilbert
  2017-03-15 13:50 ` [Qemu-devel] [PATCH 13/31] ram: Remove norm_mig_bytes_transferred Juan Quintela
                   ` (19 subsequent siblings)
  31 siblings, 1 reply; 68+ messages in thread
From: Juan Quintela @ 2017-03-15 13:50 UTC (permalink / raw)
  To: qemu-devel; +Cc: amit.shah, dgilbert

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 26 ++++++++++++++------------
 1 file changed, 14 insertions(+), 12 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 468f042..58c7dc7 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -162,6 +162,8 @@ struct RAMState {
     /* Accounting fields */
     /* number of zero pages.  It used to be pages filled by the same char. */
     uint64_t zero_pages;
+    /* number of normal transferred pages */
+    uint64_t norm_pages;
 };
 typedef struct RAMState RAMState;
 
@@ -169,7 +171,6 @@ static RAMState ram_state;
 
 /* accounting for migration statistics */
 typedef struct AccountingInfo {
-    uint64_t norm_pages;
     uint64_t iterations;
     uint64_t xbzrle_bytes;
     uint64_t xbzrle_pages;
@@ -192,12 +193,12 @@ uint64_t dup_mig_pages_transferred(void)
 
 uint64_t norm_mig_bytes_transferred(void)
 {
-    return acct_info.norm_pages * TARGET_PAGE_SIZE;
+    return ram_state.norm_pages * TARGET_PAGE_SIZE;
 }
 
 uint64_t norm_mig_pages_transferred(void)
 {
-    return acct_info.norm_pages;
+    return ram_state.norm_pages;
 }
 
 uint64_t xbzrle_mig_bytes_transferred(void)
@@ -782,7 +783,7 @@ static int ram_save_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
     if (ret != RAM_SAVE_CONTROL_NOT_SUPP) {
         if (ret != RAM_SAVE_CONTROL_DELAYED) {
             if (bytes_xmit > 0) {
-                acct_info.norm_pages++;
+                rs->norm_pages++;
             } else if (bytes_xmit == 0) {
                 rs->zero_pages++;
             }
@@ -821,7 +822,7 @@ static int ram_save_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
         }
         *bytes_transferred += TARGET_PAGE_SIZE;
         pages = 1;
-        acct_info.norm_pages++;
+        rs->norm_pages++;
     }
 
     XBZRLE_cache_unlock();
@@ -888,8 +889,8 @@ static inline void set_compress_params(CompressParam *param, RAMBlock *block,
     param->offset = offset;
 }
 
-static int compress_page_with_multi_thread(QEMUFile *f, RAMBlock *block,
-                                           ram_addr_t offset,
+static int compress_page_with_multi_thread(RAMState *rs, QEMUFile *f,
+                                           RAMBlock *block, ram_addr_t offset,
                                            uint64_t *bytes_transferred)
 {
     int idx, thread_count, bytes_xmit = -1, pages = -1;
@@ -906,7 +907,7 @@ static int compress_page_with_multi_thread(QEMUFile *f, RAMBlock *block,
                 qemu_cond_signal(&comp_param[idx].cond);
                 qemu_mutex_unlock(&comp_param[idx].mutex);
                 pages = 1;
-                acct_info.norm_pages++;
+                rs->norm_pages++;
                 *bytes_transferred += bytes_xmit;
                 break;
             }
@@ -958,7 +959,7 @@ static int ram_save_compressed_page(RAMState *rs, MigrationState *ms,
     if (ret != RAM_SAVE_CONTROL_NOT_SUPP) {
         if (ret != RAM_SAVE_CONTROL_DELAYED) {
             if (bytes_xmit > 0) {
-                acct_info.norm_pages++;
+                rs->norm_pages++;
             } else if (bytes_xmit == 0) {
                 rs->zero_pages++;
             }
@@ -981,7 +982,7 @@ static int ram_save_compressed_page(RAMState *rs, MigrationState *ms,
                                                  migrate_compress_level());
                 if (blen > 0) {
                     *bytes_transferred += bytes_xmit + blen;
-                    acct_info.norm_pages++;
+                    rs->norm_pages++;
                     pages = 1;
                 } else {
                     qemu_file_set_error(f, blen);
@@ -995,7 +996,7 @@ static int ram_save_compressed_page(RAMState *rs, MigrationState *ms,
             offset |= RAM_SAVE_FLAG_CONTINUE;
             pages = save_zero_page(rs, f, block, offset, p, bytes_transferred);
             if (pages == -1) {
-                pages = compress_page_with_multi_thread(f, block, offset,
+                pages = compress_page_with_multi_thread(rs, f, block, offset,
                                                         bytes_transferred);
             } else {
                 ram_release_pages(ms, block->idstr, pss->offset, pages);
@@ -1417,7 +1418,7 @@ void acct_update_position(QEMUFile *f, size_t size, bool zero)
     if (zero) {
         ram_state.zero_pages += pages;
     } else {
-        acct_info.norm_pages += pages;
+        ram_state.norm_pages += pages;
         bytes_transferred += size;
         qemu_update_position(f, size);
     }
@@ -1929,6 +1930,7 @@ static int ram_save_init_globals(RAMState *rs)
     rs->dirty_rate_high_cnt = 0;
     rs->bitmap_sync_count = 0;
     rs->zero_pages = 0;
+    rs->norm_pages = 0;
     migration_bitmap_sync_init(rs);
     qemu_mutex_init(&migration_bitmap_mutex);
 
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [Qemu-devel] [PATCH 13/31] ram: Remove norm_mig_bytes_transferred
  2017-03-15 13:49 [Qemu-devel] [PATCH 00/31] Creating RAMState for migration Juan Quintela
                   ` (11 preceding siblings ...)
  2017-03-15 13:50 ` [Qemu-devel] [PATCH 12/31] ram: Move norm_pages to RAMState Juan Quintela
@ 2017-03-15 13:50 ` Juan Quintela
  2017-03-16 16:14   ` Dr. David Alan Gilbert
  2017-03-15 13:50 ` [Qemu-devel] [PATCH 14/31] ram: Move iterations into RAMState Juan Quintela
                   ` (18 subsequent siblings)
  31 siblings, 1 reply; 68+ messages in thread
From: Juan Quintela @ 2017-03-15 13:50 UTC (permalink / raw)
  To: qemu-devel; +Cc: amit.shah, dgilbert

Its value can be calculated from the other exported values.
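
As a quick sketch of the arithmetic this relies on (assuming a target
with 4KiB pages, i.e. qemu_target_page_bits() == 12):

    normal_bytes = norm_mig_pages_transferred() * (1ul << qemu_target_page_bits());
                   /* e.g. 1000 normal pages -> 1000 * 4096 = 4096000 bytes */

so dropping norm_mig_bytes_transferred() loses no information.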

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 include/migration/migration.h | 1 -
 migration/migration.c         | 3 ++-
 migration/ram.c               | 5 -----
 3 files changed, 2 insertions(+), 7 deletions(-)

diff --git a/include/migration/migration.h b/include/migration/migration.h
index 9c83951..84cef4b 100644
--- a/include/migration/migration.h
+++ b/include/migration/migration.h
@@ -277,7 +277,6 @@ void free_xbzrle_decoded_buf(void);
 void acct_update_position(QEMUFile *f, size_t size, bool zero);
 
 uint64_t dup_mig_pages_transferred(void);
-uint64_t norm_mig_bytes_transferred(void);
 uint64_t norm_mig_pages_transferred(void);
 uint64_t xbzrle_mig_bytes_transferred(void);
 uint64_t xbzrle_mig_pages_transferred(void);
diff --git a/migration/migration.c b/migration/migration.c
index c3e1b95..46645b6 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -642,7 +642,8 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
     /* legacy value.  It is not used anymore */
     info->ram->skipped = 0;
     info->ram->normal = norm_mig_pages_transferred();
-    info->ram->normal_bytes = norm_mig_bytes_transferred();
+    info->ram->normal_bytes = norm_mig_pages_transferred() *
+        (1ul << qemu_target_page_bits());
     info->ram->mbps = s->mbps;
     info->ram->dirty_sync_count = s->dirty_sync_count;
     info->ram->postcopy_requests = s->postcopy_requests;
diff --git a/migration/ram.c b/migration/ram.c
index 58c7dc7..8caeb4f 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -191,11 +191,6 @@ uint64_t dup_mig_pages_transferred(void)
     return ram_state.zero_pages;
 }
 
-uint64_t norm_mig_bytes_transferred(void)
-{
-    return ram_state.norm_pages * TARGET_PAGE_SIZE;
-}
-
 uint64_t norm_mig_pages_transferred(void)
 {
     return ram_state.norm_pages;
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [Qemu-devel] [PATCH 14/31] ram: Move iterations into RAMState
  2017-03-15 13:49 [Qemu-devel] [PATCH 00/31] Creating RAMState for migration Juan Quintela
                   ` (12 preceding siblings ...)
  2017-03-15 13:50 ` [Qemu-devel] [PATCH 13/31] ram: Remove norm_mig_bytes_transferred Juan Quintela
@ 2017-03-15 13:50 ` Juan Quintela
  2017-03-16 20:04   ` Dr. David Alan Gilbert
  2017-03-15 13:50 ` [Qemu-devel] [PATCH 15/31] ram: Move xbzrle_bytes " Juan Quintela
                   ` (17 subsequent siblings)
  31 siblings, 1 reply; 68+ messages in thread
From: Juan Quintela @ 2017-03-15 13:50 UTC (permalink / raw)
  To: qemu-devel; +Cc: amit.shah, dgilbert

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 8caeb4f..234bdba 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -164,6 +164,8 @@ struct RAMState {
     uint64_t zero_pages;
     /* number of normal transferred pages */
     uint64_t norm_pages;
+    /* Iterations since start */
+    uint64_t iterations;
 };
 typedef struct RAMState RAMState;
 
@@ -171,7 +173,6 @@ static RAMState ram_state;
 
 /* accounting for migration statistics */
 typedef struct AccountingInfo {
-    uint64_t iterations;
     uint64_t xbzrle_bytes;
     uint64_t xbzrle_pages;
     uint64_t xbzrle_cache_miss;
@@ -668,13 +669,13 @@ static void migration_bitmap_sync(RAMState *rs)
         }
 
         if (migrate_use_xbzrle()) {
-            if (rs->iterations_prev != acct_info.iterations) {
+            if (rs->iterations_prev != rs->iterations) {
                 acct_info.xbzrle_cache_miss_rate =
                    (double)(acct_info.xbzrle_cache_miss -
                             rs->xbzrle_cache_miss_prev) /
-                   (acct_info.iterations - rs->iterations_prev);
+                   (rs->iterations - rs->iterations_prev);
             }
-            rs->iterations_prev = acct_info.iterations;
+            rs->iterations_prev = rs->iterations;
             rs->xbzrle_cache_miss_prev = acct_info.xbzrle_cache_miss;
         }
         s->dirty_pages_rate = rs->num_dirty_pages_period * 1000
@@ -1926,6 +1927,7 @@ static int ram_save_init_globals(RAMState *rs)
     rs->bitmap_sync_count = 0;
     rs->zero_pages = 0;
     rs->norm_pages = 0;
+    rs->iterations = 0;
     migration_bitmap_sync_init(rs);
     qemu_mutex_init(&migration_bitmap_mutex);
 
@@ -2066,7 +2068,7 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
             done = 1;
             break;
         }
-        acct_info.iterations++;
+        rs->iterations++;
 
         /* we want to check in the 1st loop, just in case it was the 1st time
            and we had to sync the dirty bitmap.
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [Qemu-devel] [PATCH 15/31] ram: Move xbzrle_bytes into RAMState
  2017-03-15 13:49 [Qemu-devel] [PATCH 00/31] Creating RAMState for migration Juan Quintela
                   ` (13 preceding siblings ...)
  2017-03-15 13:50 ` [Qemu-devel] [PATCH 14/31] ram: Move iterations into RAMState Juan Quintela
@ 2017-03-15 13:50 ` Juan Quintela
  2017-03-15 13:50 ` [Qemu-devel] [PATCH 16/31] ram: Move xbzrle_pages " Juan Quintela
                   ` (16 subsequent siblings)
  31 siblings, 0 replies; 68+ messages in thread
From: Juan Quintela @ 2017-03-15 13:50 UTC (permalink / raw)
  To: qemu-devel; +Cc: amit.shah, dgilbert

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 234bdba..02bbe53 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -166,6 +166,8 @@ struct RAMState {
     uint64_t norm_pages;
     /* Iterations since start */
     uint64_t iterations;
+    /* xbzrle transmitted bytes */
+    uint64_t xbzrle_bytes;
 };
 typedef struct RAMState RAMState;
 
@@ -173,7 +175,6 @@ static RAMState ram_state;
 
 /* accounting for migration statistics */
 typedef struct AccountingInfo {
-    uint64_t xbzrle_bytes;
     uint64_t xbzrle_pages;
     uint64_t xbzrle_cache_miss;
     double xbzrle_cache_miss_rate;
@@ -199,7 +200,7 @@ uint64_t norm_mig_pages_transferred(void)
 
 uint64_t xbzrle_mig_bytes_transferred(void)
 {
-    return acct_info.xbzrle_bytes;
+    return ram_state.xbzrle_bytes;
 }
 
 uint64_t xbzrle_mig_pages_transferred(void)
@@ -527,7 +528,7 @@ static int save_xbzrle_page(QEMUFile *f, RAMState *rs, uint8_t **current_data,
     qemu_put_buffer(f, XBZRLE.encoded_buf, encoded_len);
     bytes_xbzrle += encoded_len + 1 + 2;
     acct_info.xbzrle_pages++;
-    acct_info.xbzrle_bytes += bytes_xbzrle;
+    rs->xbzrle_bytes += bytes_xbzrle;
     *bytes_transferred += bytes_xbzrle;
 
     return 1;
@@ -1928,6 +1929,7 @@ static int ram_save_init_globals(RAMState *rs)
     rs->zero_pages = 0;
     rs->norm_pages = 0;
     rs->iterations = 0;
+    rs->xbzrle_bytes = 0;
     migration_bitmap_sync_init(rs);
     qemu_mutex_init(&migration_bitmap_mutex);
 
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [Qemu-devel] [PATCH 16/31] ram: Move xbzrle_pages into RAMState
  2017-03-15 13:49 [Qemu-devel] [PATCH 00/31] Creating RAMState for migration Juan Quintela
                   ` (14 preceding siblings ...)
  2017-03-15 13:50 ` [Qemu-devel] [PATCH 15/31] ram: Move xbzrle_bytes " Juan Quintela
@ 2017-03-15 13:50 ` Juan Quintela
  2017-03-15 13:50 ` [Qemu-devel] [PATCH 17/31] ram: Move xbzrle_cache_miss " Juan Quintela
                   ` (15 subsequent siblings)
  31 siblings, 0 replies; 68+ messages in thread
From: Juan Quintela @ 2017-03-15 13:50 UTC (permalink / raw)
  To: qemu-devel; +Cc: amit.shah, dgilbert

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 02bbe53..ce703e5 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -168,6 +168,8 @@ struct RAMState {
     uint64_t iterations;
     /* xbzrle transmitted bytes */
     uint64_t xbzrle_bytes;
+    /* xbzrle transmitted pages */
+    uint64_t xbzrle_pages;
 };
 typedef struct RAMState RAMState;
 
@@ -175,7 +177,6 @@ static RAMState ram_state;
 
 /* accounting for migration statistics */
 typedef struct AccountingInfo {
-    uint64_t xbzrle_pages;
     uint64_t xbzrle_cache_miss;
     double xbzrle_cache_miss_rate;
     uint64_t xbzrle_overflows;
@@ -205,7 +206,7 @@ uint64_t xbzrle_mig_bytes_transferred(void)
 
 uint64_t xbzrle_mig_pages_transferred(void)
 {
-    return acct_info.xbzrle_pages;
+    return ram_state.xbzrle_pages;
 }
 
 uint64_t xbzrle_mig_pages_cache_miss(void)
@@ -527,7 +528,7 @@ static int save_xbzrle_page(QEMUFile *f, RAMState *rs, uint8_t **current_data,
     qemu_put_be16(f, encoded_len);
     qemu_put_buffer(f, XBZRLE.encoded_buf, encoded_len);
     bytes_xbzrle += encoded_len + 1 + 2;
-    acct_info.xbzrle_pages++;
+    rs->xbzrle_pages++;
     rs->xbzrle_bytes += bytes_xbzrle;
     *bytes_transferred += bytes_xbzrle;
 
@@ -1930,6 +1931,7 @@ static int ram_save_init_globals(RAMState *rs)
     rs->norm_pages = 0;
     rs->iterations = 0;
     rs->xbzrle_bytes = 0;
+    rs->xbzrle_pages = 0;
     migration_bitmap_sync_init(rs);
     qemu_mutex_init(&migration_bitmap_mutex);
 
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [Qemu-devel] [PATCH 17/31] ram: Move xbzrle_cache_miss into RAMState
  2017-03-15 13:49 [Qemu-devel] [PATCH 00/31] Creating RAMState for migration Juan Quintela
                   ` (15 preceding siblings ...)
  2017-03-15 13:50 ` [Qemu-devel] [PATCH 16/31] ram: Move xbzrle_pages " Juan Quintela
@ 2017-03-15 13:50 ` Juan Quintela
  2017-03-15 13:50 ` [Qemu-devel] [PATCH 18/31] ram: move xbzrle_cache_miss_rate " Juan Quintela
                   ` (14 subsequent siblings)
  31 siblings, 0 replies; 68+ messages in thread
From: Juan Quintela @ 2017-03-15 13:50 UTC (permalink / raw)
  To: qemu-devel; +Cc: amit.shah, dgilbert

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index ce703e5..8470db0 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -170,6 +170,8 @@ struct RAMState {
     uint64_t xbzrle_bytes;
     /* xbzrle transmitted pages */
     uint64_t xbzrle_pages;
+    /* xbzrle number of cache misses */
+    uint64_t xbzrle_cache_miss;
 };
 typedef struct RAMState RAMState;
 
@@ -177,7 +179,6 @@ static RAMState ram_state;
 
 /* accounting for migration statistics */
 typedef struct AccountingInfo {
-    uint64_t xbzrle_cache_miss;
     double xbzrle_cache_miss_rate;
     uint64_t xbzrle_overflows;
 } AccountingInfo;
@@ -211,7 +212,7 @@ uint64_t xbzrle_mig_pages_transferred(void)
 
 uint64_t xbzrle_mig_pages_cache_miss(void)
 {
-    return acct_info.xbzrle_cache_miss;
+    return ram_state.xbzrle_cache_miss;
 }
 
 double xbzrle_mig_cache_miss_rate(void)
@@ -480,7 +481,7 @@ static int save_xbzrle_page(QEMUFile *f, RAMState *rs, uint8_t **current_data,
     uint8_t *prev_cached_page;
 
     if (!cache_is_cached(XBZRLE.cache, current_addr, rs->bitmap_sync_count)) {
-        acct_info.xbzrle_cache_miss++;
+        rs->xbzrle_cache_miss++;
         if (!last_stage) {
             if (cache_insert(XBZRLE.cache, current_addr, *current_data,
                              rs->bitmap_sync_count) == -1) {
@@ -673,12 +674,12 @@ static void migration_bitmap_sync(RAMState *rs)
         if (migrate_use_xbzrle()) {
             if (rs->iterations_prev != rs->iterations) {
                 acct_info.xbzrle_cache_miss_rate =
-                   (double)(acct_info.xbzrle_cache_miss -
+                   (double)(rs->xbzrle_cache_miss -
                             rs->xbzrle_cache_miss_prev) /
                    (rs->iterations - rs->iterations_prev);
             }
             rs->iterations_prev = rs->iterations;
-            rs->xbzrle_cache_miss_prev = acct_info.xbzrle_cache_miss;
+            rs->xbzrle_cache_miss_prev = rs->xbzrle_cache_miss;
         }
         s->dirty_pages_rate = rs->num_dirty_pages_period * 1000
             / (end_time - rs->start_time);
@@ -1932,6 +1933,7 @@ static int ram_save_init_globals(RAMState *rs)
     rs->iterations = 0;
     rs->xbzrle_bytes = 0;
     rs->xbzrle_pages = 0;
+    rs->xbzrle_cache_miss = 0;
     migration_bitmap_sync_init(rs);
     qemu_mutex_init(&migration_bitmap_mutex);
 
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [Qemu-devel] [PATCH 18/31] ram: move xbzrle_cache_miss_rate into RAMState
  2017-03-15 13:49 [Qemu-devel] [PATCH 00/31] Creating RAMState for migration Juan Quintela
                   ` (16 preceding siblings ...)
  2017-03-15 13:50 ` [Qemu-devel] [PATCH 17/31] ram: Move xbzrle_cache_miss " Juan Quintela
@ 2017-03-15 13:50 ` Juan Quintela
  2017-03-15 13:50 ` [Qemu-devel] [PATCH 19/31] ram: move xbzrle_overflows " Juan Quintela
                   ` (13 subsequent siblings)
  31 siblings, 0 replies; 68+ messages in thread
From: Juan Quintela @ 2017-03-15 13:50 UTC (permalink / raw)
  To: qemu-devel; +Cc: amit.shah, dgilbert

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 8470db0..23a7317 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -172,6 +172,8 @@ struct RAMState {
     uint64_t xbzrle_pages;
     /* xbzrle number of cache misses */
     uint64_t xbzrle_cache_miss;
+    /* xbzrle miss rate */
+    double xbzrle_cache_miss_rate;
 };
 typedef struct RAMState RAMState;
 
@@ -179,7 +181,6 @@ static RAMState ram_state;
 
 /* accounting for migration statistics */
 typedef struct AccountingInfo {
-    double xbzrle_cache_miss_rate;
     uint64_t xbzrle_overflows;
 } AccountingInfo;
 
@@ -217,7 +218,7 @@ uint64_t xbzrle_mig_pages_cache_miss(void)
 
 double xbzrle_mig_cache_miss_rate(void)
 {
-    return acct_info.xbzrle_cache_miss_rate;
+    return ram_state.xbzrle_cache_miss_rate;
 }
 
 uint64_t xbzrle_mig_pages_overflow(void)
@@ -673,7 +674,7 @@ static void migration_bitmap_sync(RAMState *rs)
 
         if (migrate_use_xbzrle()) {
             if (rs->iterations_prev != rs->iterations) {
-                acct_info.xbzrle_cache_miss_rate =
+                rs->xbzrle_cache_miss_rate =
                    (double)(rs->xbzrle_cache_miss -
                             rs->xbzrle_cache_miss_prev) /
                    (rs->iterations - rs->iterations_prev);
@@ -1934,6 +1935,7 @@ static int ram_save_init_globals(RAMState *rs)
     rs->xbzrle_bytes = 0;
     rs->xbzrle_pages = 0;
     rs->xbzrle_cache_miss = 0;
+    rs->xbzrle_cache_miss_rate = 0;
     migration_bitmap_sync_init(rs);
     qemu_mutex_init(&migration_bitmap_mutex);
 
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [Qemu-devel] [PATCH 19/31] ram: move xbzrle_overflows into RAMState
  2017-03-15 13:49 [Qemu-devel] [PATCH 00/31] Creating RAMState for migration Juan Quintela
                   ` (17 preceding siblings ...)
  2017-03-15 13:50 ` [Qemu-devel] [PATCH 18/31] ram: move xbzrle_cache_miss_rate " Juan Quintela
@ 2017-03-15 13:50 ` Juan Quintela
  2017-03-16 20:07   ` Dr. David Alan Gilbert
  2017-03-15 13:50 ` [Qemu-devel] [PATCH 20/31] ram: move migration_dirty_pages to RAMState Juan Quintela
                   ` (12 subsequent siblings)
  31 siblings, 1 reply; 68+ messages in thread
From: Juan Quintela @ 2017-03-15 13:50 UTC (permalink / raw)
  To: qemu-devel; +Cc: amit.shah, dgilbert

Once there, remove the now unused AccountingInfo struct and var.

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 21 +++++----------------
 1 file changed, 5 insertions(+), 16 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 23a7317..75ad17f 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -174,23 +174,13 @@ struct RAMState {
     uint64_t xbzrle_cache_miss;
     /* xbzrle miss rate */
     double xbzrle_cache_miss_rate;
+    /* xbzrle number of overflows */
+    uint64_t xbzrle_overflows;
 };
 typedef struct RAMState RAMState;
 
 static RAMState ram_state;
 
-/* accounting for migration statistics */
-typedef struct AccountingInfo {
-    uint64_t xbzrle_overflows;
-} AccountingInfo;
-
-static AccountingInfo acct_info;
-
-static void acct_clear(void)
-{
-    memset(&acct_info, 0, sizeof(acct_info));
-}
-
 uint64_t dup_mig_pages_transferred(void)
 {
     return ram_state.zero_pages;
@@ -223,7 +213,7 @@ double xbzrle_mig_cache_miss_rate(void)
 
 uint64_t xbzrle_mig_pages_overflow(void)
 {
-    return acct_info.xbzrle_overflows;
+    return ram_state.xbzrle_overflows;
 }
 
 static QemuMutex migration_bitmap_mutex;
@@ -510,7 +500,7 @@ static int save_xbzrle_page(QEMUFile *f, RAMState *rs, uint8_t **current_data,
         return 0;
     } else if (encoded_len == -1) {
         trace_save_xbzrle_page_overflow();
-        acct_info.xbzrle_overflows++;
+        rs->xbzrle_overflows++;
         /* update data in the cache */
         if (!last_stage) {
             memcpy(prev_cached_page, *current_data, TARGET_PAGE_SIZE);
@@ -1936,6 +1926,7 @@ static int ram_save_init_globals(RAMState *rs)
     rs->xbzrle_pages = 0;
     rs->xbzrle_cache_miss = 0;
     rs->xbzrle_cache_miss_rate = 0;
+    rs->xbzrle_overflows = 0;
     migration_bitmap_sync_init(rs);
     qemu_mutex_init(&migration_bitmap_mutex);
 
@@ -1966,8 +1957,6 @@ static int ram_save_init_globals(RAMState *rs)
             XBZRLE.encoded_buf = NULL;
             return -1;
         }
-
-        acct_clear();
     }
 
     /* For memory_global_dirty_log_start below.  */
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [Qemu-devel] [PATCH 20/31] ram: move migration_dirty_pages to RAMState
  2017-03-15 13:49 [Qemu-devel] [PATCH 00/31] Creating RAMState for migration Juan Quintela
                   ` (18 preceding siblings ...)
  2017-03-15 13:50 ` [Qemu-devel] [PATCH 19/31] ram: move xbzrle_overflows " Juan Quintela
@ 2017-03-15 13:50 ` Juan Quintela
  2017-03-15 13:50 ` [Qemu-devel] [PATCH 21/31] ram: Everything was init to zero, so use memset Juan Quintela
                   ` (11 subsequent siblings)
  31 siblings, 0 replies; 68+ messages in thread
From: Juan Quintela @ 2017-03-15 13:50 UTC (permalink / raw)
  To: qemu-devel; +Cc: amit.shah, dgilbert

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 38 ++++++++++++++++++++------------------
 1 file changed, 20 insertions(+), 18 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 75ad17f..606e836 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -176,6 +176,8 @@ struct RAMState {
     double xbzrle_cache_miss_rate;
     /* xbzrle number of overflows */
     uint64_t xbzrle_overflows;
+    /* number of dirty bits in the bitmap */
+    uint64_t migration_dirty_pages;
 };
 typedef struct RAMState RAMState;
 
@@ -216,8 +218,12 @@ uint64_t xbzrle_mig_pages_overflow(void)
     return ram_state.xbzrle_overflows;
 }
 
+static ram_addr_t ram_save_remaining(void)
+{
+    return ram_state.migration_dirty_pages;
+}
+
 static QemuMutex migration_bitmap_mutex;
-static uint64_t migration_dirty_pages;
 
 /* used by the search for pages to send */
 struct PageSearchStatus {
@@ -559,7 +565,7 @@ ram_addr_t migration_bitmap_find_dirty(RAMState *rs, RAMBlock *rb,
     return (next - base) << TARGET_PAGE_BITS;
 }
 
-static inline bool migration_bitmap_clear_dirty(ram_addr_t addr)
+static inline bool migration_bitmap_clear_dirty(RAMState *rs, ram_addr_t addr)
 {
     bool ret;
     int nr = addr >> TARGET_PAGE_BITS;
@@ -568,16 +574,17 @@ static inline bool migration_bitmap_clear_dirty(ram_addr_t addr)
     ret = test_and_clear_bit(nr, bitmap);
 
     if (ret) {
-        migration_dirty_pages--;
+        rs->migration_dirty_pages--;
     }
     return ret;
 }
 
-static void migration_bitmap_sync_range(ram_addr_t start, ram_addr_t length)
+static void migration_bitmap_sync_range(RAMState *rs, ram_addr_t start,
+                                        ram_addr_t length)
 {
     unsigned long *bitmap;
     bitmap = atomic_rcu_read(&migration_bitmap_rcu)->bmap;
-    migration_dirty_pages +=
+    rs->migration_dirty_pages +=
         cpu_physical_memory_sync_dirty_bitmap(bitmap, start, length);
 }
 
@@ -610,7 +617,7 @@ uint64_t ram_pagesize_summary(void)
 static void migration_bitmap_sync(RAMState *rs)
 {
     RAMBlock *block;
-    uint64_t num_dirty_pages_init = migration_dirty_pages;
+    uint64_t num_dirty_pages_init = rs->migration_dirty_pages;
     MigrationState *s = migrate_get_current();
     int64_t end_time;
     int64_t bytes_xfer_now;
@@ -631,14 +638,14 @@ static void migration_bitmap_sync(RAMState *rs)
     qemu_mutex_lock(&migration_bitmap_mutex);
     rcu_read_lock();
     QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
-        migration_bitmap_sync_range(block->offset, block->used_length);
+        migration_bitmap_sync_range(rs, block->offset, block->used_length);
     }
     rcu_read_unlock();
     qemu_mutex_unlock(&migration_bitmap_mutex);
 
-    trace_migration_bitmap_sync_end(migration_dirty_pages
+    trace_migration_bitmap_sync_end(rs->migration_dirty_pages
                                     - num_dirty_pages_init);
-    rs->num_dirty_pages_period += migration_dirty_pages - num_dirty_pages_init;
+    rs->num_dirty_pages_period += rs->migration_dirty_pages - num_dirty_pages_init;
     end_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
 
     /* more than 1 second = 1000 milliseconds */
@@ -1264,7 +1271,7 @@ static int ram_save_target_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
     int res = 0;
 
     /* Check the pages is dirty and if it is send it */
-    if (migration_bitmap_clear_dirty(dirty_ram_abs)) {
+    if (migration_bitmap_clear_dirty(rs, dirty_ram_abs)) {
         unsigned long *unsentmap;
         if (compression_switch && migrate_use_compression()) {
             res = ram_save_compressed_page(rs, ms, f, pss,
@@ -1414,11 +1421,6 @@ void acct_update_position(QEMUFile *f, size_t size, bool zero)
     }
 }
 
-static ram_addr_t ram_save_remaining(void)
-{
-    return migration_dirty_pages;
-}
-
 uint64_t ram_bytes_remaining(void)
 {
     return ram_save_remaining() * TARGET_PAGE_SIZE;
@@ -1517,7 +1519,7 @@ void migration_bitmap_extend(ram_addr_t old, ram_addr_t new)
 
         atomic_rcu_set(&migration_bitmap_rcu, bitmap);
         qemu_mutex_unlock(&migration_bitmap_mutex);
-        migration_dirty_pages += new - old;
+        ram_state.migration_dirty_pages += new - old;
         call_rcu(old_bitmap, migration_bitmap_free, rcu);
     }
 }
@@ -1771,7 +1773,7 @@ static void postcopy_chunk_hostpages_pass(MigrationState *ms, bool unsent_pass,
                  * Remark them as dirty, updating the count for any pages
                  * that weren't previously dirty.
                  */
-                migration_dirty_pages += !test_and_set_bit(page, bitmap);
+                ram_state.migration_dirty_pages += !test_and_set_bit(page, bitmap);
             }
         }
 
@@ -1984,7 +1986,7 @@ static int ram_save_init_globals(RAMState *rs)
      * Count the total number of pages used by ram blocks not including any
      * gaps due to alignment or unplugs.
      */
-    migration_dirty_pages = ram_bytes_total() >> TARGET_PAGE_BITS;
+    rs->migration_dirty_pages = ram_bytes_total() >> TARGET_PAGE_BITS;
 
     memory_global_dirty_log_start();
     migration_bitmap_sync(rs);
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [Qemu-devel] [PATCH 21/31] ram: Everything was init to zero, so use memset
  2017-03-15 13:49 [Qemu-devel] [PATCH 00/31] Creating RAMState for migration Juan Quintela
                   ` (19 preceding siblings ...)
  2017-03-15 13:50 ` [Qemu-devel] [PATCH 20/31] ram: move migration_dirty_pages to RAMState Juan Quintela
@ 2017-03-15 13:50 ` Juan Quintela
  2017-03-16 20:15   ` Dr. David Alan Gilbert
  2017-03-15 13:50 ` [Qemu-devel] [PATCH 22/31] ram: move migration_bitmap_mutex into RAMState Juan Quintela
                   ` (10 subsequent siblings)
  31 siblings, 1 reply; 68+ messages in thread
From: Juan Quintela @ 2017-03-15 13:50 UTC (permalink / raw)
  To: qemu-devel; +Cc: amit.shah, dgilbert

And then init only things that are not zero by default.
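
In a minimal sketch (names as in this series), the init path becomes:

    memset(rs, 0, sizeof(*rs));                /* every counter starts at 0 */
    qemu_mutex_init(&migration_bitmap_mutex);  /* a mutex still needs real init */

so only state whose default is not the all-zeroes pattern keeps an
explicit initialization.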

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 25 +++----------------------
 1 file changed, 3 insertions(+), 22 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 606e836..7f56b5f 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -588,15 +588,6 @@ static void migration_bitmap_sync_range(RAMState *rs, ram_addr_t start,
         cpu_physical_memory_sync_dirty_bitmap(bitmap, start, length);
 }
 
-static void migration_bitmap_sync_init(RAMState *rs)
-{
-    rs->start_time = 0;
-    rs->bytes_xfer_prev = 0;
-    rs->num_dirty_pages_period = 0;
-    rs->xbzrle_cache_miss_prev = 0;
-    rs->iterations_prev = 0;
-}
-
 /* Returns a summary bitmap of the page sizes of all RAMBlocks;
  * for VMs with just normal pages this is equivalent to the
  * host page size.  If it's got some huge pages then it's the OR
@@ -1915,21 +1906,11 @@ err:
     return ret;
 }
 
-static int ram_save_init_globals(RAMState *rs)
+static int ram_state_init(RAMState *rs)
 {
     int64_t ram_bitmap_pages; /* Size of bitmap in pages, including gaps */
 
-    rs->dirty_rate_high_cnt = 0;
-    rs->bitmap_sync_count = 0;
-    rs->zero_pages = 0;
-    rs->norm_pages = 0;
-    rs->iterations = 0;
-    rs->xbzrle_bytes = 0;
-    rs->xbzrle_pages = 0;
-    rs->xbzrle_cache_miss = 0;
-    rs->xbzrle_cache_miss_rate = 0;
-    rs->xbzrle_overflows = 0;
-    migration_bitmap_sync_init(rs);
+    memset(rs, 0, sizeof(*rs));
     qemu_mutex_init(&migration_bitmap_mutex);
 
     if (migrate_use_xbzrle()) {
@@ -2010,7 +1991,7 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
 
     /* migration has already setup the bitmap, reuse it. */
     if (!migration_in_colo_state()) {
-        if (ram_save_init_globals(rs) < 0) {
+        if (ram_state_init(rs) < 0) {
             return -1;
          }
     }
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [Qemu-devel] [PATCH 22/31] ram: move migration_bitmap_mutex into RAMState
  2017-03-15 13:49 [Qemu-devel] [PATCH 00/31] Creating RAMState for migration Juan Quintela
                   ` (20 preceding siblings ...)
  2017-03-15 13:50 ` [Qemu-devel] [PATCH 21/31] ram: Everything was init to zero, so use memset Juan Quintela
@ 2017-03-15 13:50 ` Juan Quintela
  2017-03-16 20:21   ` Dr. David Alan Gilbert
  2017-03-15 13:50 ` [Qemu-devel] [PATCH 23/31] ram: Move migration_bitmap_rcu " Juan Quintela
                   ` (9 subsequent siblings)
  31 siblings, 1 reply; 68+ messages in thread
From: Juan Quintela @ 2017-03-15 13:50 UTC (permalink / raw)
  To: qemu-devel; +Cc: amit.shah, dgilbert

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 7f56b5f..c14293c 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -178,6 +178,8 @@ struct RAMState {
     uint64_t xbzrle_overflows;
     /* number of dirty bits in the bitmap */
     uint64_t migration_dirty_pages;
+    /* protects modification of the bitmap */
+    QemuMutex bitmap_mutex;
 };
 typedef struct RAMState RAMState;
 
@@ -223,8 +225,6 @@ static ram_addr_t ram_save_remaining(void)
     return ram_state.migration_dirty_pages;
 }
 
-static QemuMutex migration_bitmap_mutex;
-
 /* used by the search for pages to send */
 struct PageSearchStatus {
     /* Current block being searched */
@@ -626,13 +626,13 @@ static void migration_bitmap_sync(RAMState *rs)
     trace_migration_bitmap_sync_start();
     memory_global_dirty_log_sync();
 
-    qemu_mutex_lock(&migration_bitmap_mutex);
+    qemu_mutex_lock(&rs->bitmap_mutex);
     rcu_read_lock();
     QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
         migration_bitmap_sync_range(rs, block->offset, block->used_length);
     }
     rcu_read_unlock();
-    qemu_mutex_unlock(&migration_bitmap_mutex);
+    qemu_mutex_unlock(&rs->bitmap_mutex);
 
     trace_migration_bitmap_sync_end(rs->migration_dirty_pages
                                     - num_dirty_pages_init);
@@ -1498,7 +1498,7 @@ void migration_bitmap_extend(ram_addr_t old, ram_addr_t new)
          * it is safe to migration if migration_bitmap is cleared bit
          * at the same time.
          */
-        qemu_mutex_lock(&migration_bitmap_mutex);
+        qemu_mutex_lock(&ram_state.bitmap_mutex);
         bitmap_copy(bitmap->bmap, old_bitmap->bmap, old);
         bitmap_set(bitmap->bmap, old, new - old);
 
@@ -1509,7 +1509,7 @@ void migration_bitmap_extend(ram_addr_t old, ram_addr_t new)
         bitmap->unsentmap = NULL;
 
         atomic_rcu_set(&migration_bitmap_rcu, bitmap);
-        qemu_mutex_unlock(&migration_bitmap_mutex);
+        qemu_mutex_unlock(&ram_state.bitmap_mutex);
         ram_state.migration_dirty_pages += new - old;
         call_rcu(old_bitmap, migration_bitmap_free, rcu);
     }
@@ -1911,7 +1911,7 @@ static int ram_state_init(RAMState *rs)
     int64_t ram_bitmap_pages; /* Size of bitmap in pages, including gaps */
 
     memset(rs, 0, sizeof(*rs));
-    qemu_mutex_init(&migration_bitmap_mutex);
+    qemu_mutex_init(&rs->bitmap_mutex);
 
     if (migrate_use_xbzrle()) {
         XBZRLE_cache_lock();
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [Qemu-devel] [PATCH 23/31] ram: Move migration_bitmap_rcu into RAMState
  2017-03-15 13:49 [Qemu-devel] [PATCH 00/31] Creating RAMState for migration Juan Quintela
                   ` (21 preceding siblings ...)
  2017-03-15 13:50 ` [Qemu-devel] [PATCH 22/31] ram: move migration_bitmap_mutex into RAMState Juan Quintela
@ 2017-03-15 13:50 ` Juan Quintela
  2017-03-17  9:51   ` Dr. David Alan Gilbert
  2017-03-15 13:50 ` [Qemu-devel] [PATCH 24/31] ram: Move bytes_transferred " Juan Quintela
                   ` (8 subsequent siblings)
  31 siblings, 1 reply; 68+ messages in thread
From: Juan Quintela @ 2017-03-15 13:50 UTC (permalink / raw)
  To: qemu-devel; +Cc: amit.shah, dgilbert

Once there, rename the type to be shorter.

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 79 ++++++++++++++++++++++++++++++---------------------------
 1 file changed, 42 insertions(+), 37 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index c14293c..d39d185 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -132,6 +132,19 @@ out:
     return ret;
 }
 
+struct RAMBitmap {
+    struct rcu_head rcu;
+    /* Main migration bitmap */
+    unsigned long *bmap;
+    /* bitmap of pages that haven't been sent even once
+     * only maintained and used in postcopy at the moment
+     * where it's used to send the dirtymap at the start
+     * of the postcopy phase
+     */
+    unsigned long *unsentmap;
+};
+typedef struct RAMBitmap RAMBitmap;
+
 /* State of RAM for migration */
 struct RAMState {
     /* Last block that we have visited searching for dirty pages */
@@ -180,6 +193,8 @@ struct RAMState {
     uint64_t migration_dirty_pages;
     /* protects modification of the bitmap */
     QemuMutex bitmap_mutex;
+    /* Ram Bitmap protected by RCU */
+    RAMBitmap *ram_bitmap;
 };
 typedef struct RAMState RAMState;
 
@@ -236,18 +251,6 @@ struct PageSearchStatus {
 };
 typedef struct PageSearchStatus PageSearchStatus;
 
-static struct BitmapRcu {
-    struct rcu_head rcu;
-    /* Main migration bitmap */
-    unsigned long *bmap;
-    /* bitmap of pages that haven't been sent even once
-     * only maintained and used in postcopy at the moment
-     * where it's used to send the dirtymap at the start
-     * of the postcopy phase
-     */
-    unsigned long *unsentmap;
-} *migration_bitmap_rcu;
-
 struct CompressParam {
     bool done;
     bool quit;
@@ -554,7 +557,7 @@ ram_addr_t migration_bitmap_find_dirty(RAMState *rs, RAMBlock *rb,
 
     unsigned long next;
 
-    bitmap = atomic_rcu_read(&migration_bitmap_rcu)->bmap;
+    bitmap = atomic_rcu_read(&rs->ram_bitmap)->bmap;
     if (rs->ram_bulk_stage && nr > base) {
         next = nr + 1;
     } else {
@@ -569,7 +572,7 @@ static inline bool migration_bitmap_clear_dirty(RAMState *rs, ram_addr_t addr)
 {
     bool ret;
     int nr = addr >> TARGET_PAGE_BITS;
-    unsigned long *bitmap = atomic_rcu_read(&migration_bitmap_rcu)->bmap;
+    unsigned long *bitmap = atomic_rcu_read(&rs->ram_bitmap)->bmap;
 
     ret = test_and_clear_bit(nr, bitmap);
 
@@ -583,7 +586,7 @@ static void migration_bitmap_sync_range(RAMState *rs, ram_addr_t start,
                                         ram_addr_t length)
 {
     unsigned long *bitmap;
-    bitmap = atomic_rcu_read(&migration_bitmap_rcu)->bmap;
+    bitmap = atomic_rcu_read(&rs->ram_bitmap)->bmap;
     rs->migration_dirty_pages +=
         cpu_physical_memory_sync_dirty_bitmap(bitmap, start, length);
 }
@@ -1115,14 +1118,14 @@ static bool get_queued_page(RAMState *rs, MigrationState *ms, PageSearchStatus *
          */
         if (block) {
             unsigned long *bitmap;
-            bitmap = atomic_rcu_read(&migration_bitmap_rcu)->bmap;
+            bitmap = atomic_rcu_read(&rs->ram_bitmap)->bmap;
             dirty = test_bit(*ram_addr_abs >> TARGET_PAGE_BITS, bitmap);
             if (!dirty) {
                 trace_get_queued_page_not_dirty(
                     block->idstr, (uint64_t)offset,
                     (uint64_t)*ram_addr_abs,
                     test_bit(*ram_addr_abs >> TARGET_PAGE_BITS,
-                         atomic_rcu_read(&migration_bitmap_rcu)->unsentmap));
+                         atomic_rcu_read(&rs->ram_bitmap)->unsentmap));
             } else {
                 trace_get_queued_page(block->idstr,
                                       (uint64_t)offset,
@@ -1276,7 +1279,7 @@ static int ram_save_target_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
         if (res < 0) {
             return res;
         }
-        unsentmap = atomic_rcu_read(&migration_bitmap_rcu)->unsentmap;
+        unsentmap = atomic_rcu_read(&rs->ram_bitmap)->unsentmap;
         if (unsentmap) {
             clear_bit(dirty_ram_abs >> TARGET_PAGE_BITS, unsentmap);
         }
@@ -1440,7 +1443,7 @@ void free_xbzrle_decoded_buf(void)
     xbzrle_decoded_buf = NULL;
 }
 
-static void migration_bitmap_free(struct BitmapRcu *bmap)
+static void migration_bitmap_free(struct RAMBitmap *bmap)
 {
     g_free(bmap->bmap);
     g_free(bmap->unsentmap);
@@ -1449,11 +1452,13 @@ static void migration_bitmap_free(struct BitmapRcu *bmap)
 
 static void ram_migration_cleanup(void *opaque)
 {
+    RAMState *rs = opaque;
+
     /* caller have hold iothread lock or is in a bh, so there is
      * no writing race against this migration_bitmap
      */
-    struct BitmapRcu *bitmap = migration_bitmap_rcu;
-    atomic_rcu_set(&migration_bitmap_rcu, NULL);
+    struct RAMBitmap *bitmap = rs->ram_bitmap;
+    atomic_rcu_set(&rs->ram_bitmap, NULL);
     if (bitmap) {
         memory_global_dirty_log_stop();
         call_rcu(bitmap, migration_bitmap_free, rcu);
@@ -1488,9 +1493,9 @@ void migration_bitmap_extend(ram_addr_t old, ram_addr_t new)
     /* called in qemu main thread, so there is
      * no writing race against this migration_bitmap
      */
-    if (migration_bitmap_rcu) {
-        struct BitmapRcu *old_bitmap = migration_bitmap_rcu, *bitmap;
-        bitmap = g_new(struct BitmapRcu, 1);
+    if (ram_state.ram_bitmap) {
+        struct RAMBitmap *old_bitmap = ram_state.ram_bitmap, *bitmap;
+        bitmap = g_new(struct RAMBitmap, 1);
         bitmap->bmap = bitmap_new(new);
 
         /* prevent migration_bitmap content from being set bit
@@ -1508,7 +1513,7 @@ void migration_bitmap_extend(ram_addr_t old, ram_addr_t new)
          */
         bitmap->unsentmap = NULL;
 
-        atomic_rcu_set(&migration_bitmap_rcu, bitmap);
+        atomic_rcu_set(&ram_state.ram_bitmap, bitmap);
         qemu_mutex_unlock(&ram_state.bitmap_mutex);
         ram_state.migration_dirty_pages += new - old;
         call_rcu(old_bitmap, migration_bitmap_free, rcu);
@@ -1529,7 +1534,7 @@ void ram_debug_dump_bitmap(unsigned long *todump, bool expected)
     char linebuf[129];
 
     if (!todump) {
-        todump = atomic_rcu_read(&migration_bitmap_rcu)->bmap;
+        todump = atomic_rcu_read(&ram_state.ram_bitmap)->bmap;
     }
 
     for (cur = 0; cur < ram_pages; cur += linelen) {
@@ -1559,7 +1564,7 @@ void ram_debug_dump_bitmap(unsigned long *todump, bool expected)
 void ram_postcopy_migrated_memory_release(MigrationState *ms)
 {
     struct RAMBlock *block;
-    unsigned long *bitmap = atomic_rcu_read(&migration_bitmap_rcu)->bmap;
+    unsigned long *bitmap = atomic_rcu_read(&ram_state.ram_bitmap)->bmap;
 
     QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
         unsigned long first = block->offset >> TARGET_PAGE_BITS;
@@ -1591,7 +1596,7 @@ static int postcopy_send_discard_bm_ram(MigrationState *ms,
     unsigned long current;
     unsigned long *unsentmap;
 
-    unsentmap = atomic_rcu_read(&migration_bitmap_rcu)->unsentmap;
+    unsentmap = atomic_rcu_read(&ram_state.ram_bitmap)->unsentmap;
     for (current = start; current < end; ) {
         unsigned long one = find_next_bit(unsentmap, end, current);
 
@@ -1680,8 +1685,8 @@ static void postcopy_chunk_hostpages_pass(MigrationState *ms, bool unsent_pass,
         return;
     }
 
-    bitmap = atomic_rcu_read(&migration_bitmap_rcu)->bmap;
-    unsentmap = atomic_rcu_read(&migration_bitmap_rcu)->unsentmap;
+    bitmap = atomic_rcu_read(&ram_state.ram_bitmap)->bmap;
+    unsentmap = atomic_rcu_read(&ram_state.ram_bitmap)->unsentmap;
 
     if (unsent_pass) {
         /* Find a sent page */
@@ -1836,7 +1841,7 @@ int ram_postcopy_send_discard_bitmap(MigrationState *ms)
     /* This should be our last sync, the src is now paused */
     migration_bitmap_sync(&ram_state);
 
-    unsentmap = atomic_rcu_read(&migration_bitmap_rcu)->unsentmap;
+    unsentmap = atomic_rcu_read(&ram_state.ram_bitmap)->unsentmap;
     if (!unsentmap) {
         /* We don't have a safe way to resize the sentmap, so
          * if the bitmap was resized it will be NULL at this
@@ -1857,7 +1862,7 @@ int ram_postcopy_send_discard_bitmap(MigrationState *ms)
     /*
      * Update the unsentmap to be unsentmap = unsentmap | dirty
      */
-    bitmap = atomic_rcu_read(&migration_bitmap_rcu)->bmap;
+    bitmap = atomic_rcu_read(&ram_state.ram_bitmap)->bmap;
     bitmap_or(unsentmap, unsentmap, bitmap,
                last_ram_offset() >> TARGET_PAGE_BITS);
 
@@ -1950,16 +1955,16 @@ static int ram_state_init(RAMState *rs)
     bytes_transferred = 0;
     ram_state_reset(rs);
 
-    migration_bitmap_rcu = g_new0(struct BitmapRcu, 1);
+    rs->ram_bitmap = g_new0(struct RAMBitmap, 1);
     /* Skip setting bitmap if there is no RAM */
     if (ram_bytes_total()) {
         ram_bitmap_pages = last_ram_offset() >> TARGET_PAGE_BITS;
-        migration_bitmap_rcu->bmap = bitmap_new(ram_bitmap_pages);
-        bitmap_set(migration_bitmap_rcu->bmap, 0, ram_bitmap_pages);
+        rs->ram_bitmap->bmap = bitmap_new(ram_bitmap_pages);
+        bitmap_set(rs->ram_bitmap->bmap, 0, ram_bitmap_pages);
 
         if (migrate_postcopy_ram()) {
-            migration_bitmap_rcu->unsentmap = bitmap_new(ram_bitmap_pages);
-            bitmap_set(migration_bitmap_rcu->unsentmap, 0, ram_bitmap_pages);
+            rs->ram_bitmap->unsentmap = bitmap_new(ram_bitmap_pages);
+            bitmap_set(rs->ram_bitmap->unsentmap, 0, ram_bitmap_pages);
         }
     }
 
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [Qemu-devel] [PATCH 24/31] ram: Move bytes_transferred into RAMState
  2017-03-15 13:49 [Qemu-devel] [PATCH 00/31] Creating RAMState for migration Juan Quintela
                   ` (22 preceding siblings ...)
  2017-03-15 13:50 ` [Qemu-devel] [PATCH 23/31] ram: Move migration_bitmap_rcu " Juan Quintela
@ 2017-03-15 13:50 ` Juan Quintela
  2017-03-15 13:50 ` [Qemu-devel] [PATCH 25/31] ram: Use the RAMState bytes_transferred parameter Juan Quintela
                   ` (7 subsequent siblings)
  31 siblings, 0 replies; 68+ messages in thread
From: Juan Quintela @ 2017-03-15 13:50 UTC (permalink / raw)
  To: qemu-devel; +Cc: amit.shah, dgilbert

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 35 +++++++++++++++++------------------
 1 file changed, 17 insertions(+), 18 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index d39d185..f9933b2 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -191,6 +191,8 @@ struct RAMState {
     uint64_t xbzrle_overflows;
     /* number of dirty bits in the bitmap */
     uint64_t migration_dirty_pages;
+    /* total number of bytes transferred */
+    uint64_t bytes_transferred;
     /* protects modification of the bitmap */
     QemuMutex bitmap_mutex;
     /* Ram Bitmap protected by RCU */
@@ -240,6 +242,11 @@ static ram_addr_t ram_save_remaining(void)
     return ram_state.migration_dirty_pages;
 }
 
+uint64_t ram_bytes_transferred(void)
+{
+    return ram_state.bytes_transferred;
+}
+
 /* used by the search for pages to send */
 struct PageSearchStatus {
     /* Current block being searched */
@@ -844,9 +851,7 @@ static int do_compress_ram_page(QEMUFile *f, RAMBlock *block,
     return bytes_sent;
 }
 
-static uint64_t bytes_transferred;
-
-static void flush_compressed_data(QEMUFile *f)
+static void flush_compressed_data(RAMState *rs, QEMUFile *f)
 {
     int idx, len, thread_count;
 
@@ -867,7 +872,7 @@ static void flush_compressed_data(QEMUFile *f)
         qemu_mutex_lock(&comp_param[idx].mutex);
         if (!comp_param[idx].quit) {
             len = qemu_put_qemu_file(f, comp_param[idx].file);
-            bytes_transferred += len;
+            rs->bytes_transferred += len;
         }
         qemu_mutex_unlock(&comp_param[idx].mutex);
     }
@@ -963,7 +968,7 @@ static int ram_save_compressed_page(RAMState *rs, MigrationState *ms,
          * is used to avoid resending the block name.
          */
         if (block != rs->last_sent_block) {
-            flush_compressed_data(f);
+            flush_compressed_data(rs, f);
             pages = save_zero_page(rs, f, block, offset, p, bytes_transferred);
             if (pages == -1) {
                 /* Make sure the first page is sent out before other pages */
@@ -1039,7 +1044,7 @@ static bool find_dirty_block(RAMState *rs, QEMUFile *f, PageSearchStatus *pss,
                 /* If xbzrle is on, stop using the data compression at this
                  * point. In theory, xbzrle can do better than compression.
                  */
-                flush_compressed_data(f);
+                flush_compressed_data(rs, f);
                 compression_switch = false;
             }
         }
@@ -1410,7 +1415,7 @@ void acct_update_position(QEMUFile *f, size_t size, bool zero)
         ram_state.zero_pages += pages;
     } else {
         ram_state.norm_pages += pages;
-        bytes_transferred += size;
+        ram_state.bytes_transferred += size;
         qemu_update_position(f, size);
     }
 }
@@ -1420,11 +1425,6 @@ uint64_t ram_bytes_remaining(void)
     return ram_save_remaining() * TARGET_PAGE_SIZE;
 }
 
-uint64_t ram_bytes_transferred(void)
-{
-    return bytes_transferred;
-}
-
 uint64_t ram_bytes_total(void)
 {
     RAMBlock *block;
@@ -1952,7 +1952,6 @@ static int ram_state_init(RAMState *rs)
 
     qemu_mutex_lock_ramlist();
     rcu_read_lock();
-    bytes_transferred = 0;
     ram_state_reset(rs);
 
     rs->ram_bitmap = g_new0(struct RAMBitmap, 1);
@@ -2047,7 +2046,7 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
     while ((ret = qemu_file_rate_limit(f)) == 0) {
         int pages;
 
-        pages = ram_find_and_save_block(rs, f, false, &bytes_transferred);
+        pages = ram_find_and_save_block(rs, f, false, &rs->bytes_transferred);
         /* no more pages to sent */
         if (pages == 0) {
             done = 1;
@@ -2069,7 +2068,7 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
         }
         i++;
     }
-    flush_compressed_data(f);
+    flush_compressed_data(rs, f);
     rcu_read_unlock();
 
     /*
@@ -2079,7 +2078,7 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
     ram_control_after_iterate(f, RAM_CONTROL_ROUND);
 
     qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
-    bytes_transferred += 8;
+    rs->bytes_transferred += 8;
 
     ret = qemu_file_get_error(f);
     if (ret < 0) {
@@ -2109,14 +2108,14 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
         int pages;
 
         pages = ram_find_and_save_block(rs, f, !migration_in_colo_state(),
-                                        &bytes_transferred);
+                                        &rs->bytes_transferred);
         /* no more blocks to sent */
         if (pages == 0) {
             break;
         }
     }
 
-    flush_compressed_data(f);
+    flush_compressed_data(rs, f);
     ram_control_after_iterate(f, RAM_CONTROL_FINISH);
 
     rcu_read_unlock();
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [Qemu-devel] [PATCH 25/31] ram: Use the RAMState bytes_transferred parameter
  2017-03-15 13:49 [Qemu-devel] [PATCH 00/31] Creating RAMState for migration Juan Quintela
                   ` (23 preceding siblings ...)
  2017-03-15 13:50 ` [Qemu-devel] [PATCH 24/31] ram: Move bytes_transferred " Juan Quintela
@ 2017-03-15 13:50 ` Juan Quintela
  2017-03-17  9:57   ` Dr. David Alan Gilbert
  2017-03-15 13:50 ` [Qemu-devel] [PATCH 26/31] ram: Remove ram_save_remaining Juan Quintela
                   ` (6 subsequent siblings)
  31 siblings, 1 reply; 68+ messages in thread
From: Juan Quintela @ 2017-03-15 13:50 UTC (permalink / raw)
  To: qemu-devel; +Cc: amit.shah, dgilbert

In some places it was passed by reference; just use it from RAMState instead.
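
A before/after sketch of the convention change (save_zero_page() as the
example, argument lists shortened; see the hunks below for the real
signatures):

    /* before: callers thread a pointer through */
    static int save_zero_page(..., uint8_t *p, uint64_t *bytes_transferred);
        ...
        *bytes_transferred += 1;

    /* after: the counter lives in RAMState */
    static int save_zero_page(RAMState *rs, ..., uint8_t *p);
        ...
        rs->bytes_transferred += 1;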

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 77 ++++++++++++++++++++-------------------------------------
 1 file changed, 27 insertions(+), 50 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index f9933b2..9c9533d 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -477,12 +477,10 @@ static void xbzrle_cache_zero_page(RAMState *rs, ram_addr_t current_addr)
  * @block: block that contains the page we want to send
  * @offset: offset inside the block for the page
  * @last_stage: if we are at the completion stage
- * @bytes_transferred: increase it with the number of transferred bytes
  */
 static int save_xbzrle_page(QEMUFile *f, RAMState *rs, uint8_t **current_data,
                             ram_addr_t current_addr, RAMBlock *block,
-                            ram_addr_t offset, bool last_stage,
-                            uint64_t *bytes_transferred)
+                            ram_addr_t offset, bool last_stage)
 {
     int encoded_len = 0, bytes_xbzrle;
     uint8_t *prev_cached_page;
@@ -538,7 +536,7 @@ static int save_xbzrle_page(QEMUFile *f, RAMState *rs, uint8_t **current_data,
     bytes_xbzrle += encoded_len + 1 + 2;
     rs->xbzrle_pages++;
     rs->xbzrle_bytes += bytes_xbzrle;
-    *bytes_transferred += bytes_xbzrle;
+    rs->bytes_transferred += bytes_xbzrle;
 
     return 1;
 }
@@ -701,20 +699,18 @@ static void migration_bitmap_sync(RAMState *rs)
  * @block: block that contains the page we want to send
  * @offset: offset inside the block for the page
  * @p: pointer to the page
- * @bytes_transferred: increase it with the number of transferred bytes
  */
 static int save_zero_page(RAMState *rs, QEMUFile *f, RAMBlock *block,
-                          ram_addr_t offset,
-                          uint8_t *p, uint64_t *bytes_transferred)
+                          ram_addr_t offset, uint8_t *p)
 {
     int pages = -1;
 
     if (is_zero_range(p, TARGET_PAGE_SIZE)) {
         rs->zero_pages++;
-        *bytes_transferred += save_page_header(f, block,
-                                               offset | RAM_SAVE_FLAG_COMPRESS);
+        rs->bytes_transferred += save_page_header(f, block,
+                                                  offset | RAM_SAVE_FLAG_COMPRESS);
         qemu_put_byte(f, 0);
-        *bytes_transferred += 1;
+        rs->bytes_transferred += 1;
         pages = 1;
     }
 
@@ -745,11 +741,9 @@ static void ram_release_pages(MigrationState *ms, const char *block_name,
  * @block: block that contains the page we want to send
  * @offset: offset inside the block for the page
  * @last_stage: if we are at the completion stage
- * @bytes_transferred: increase it with the number of transferred bytes
  */
 static int ram_save_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
-                         PageSearchStatus *pss, bool last_stage,
-                         uint64_t *bytes_transferred)
+                         PageSearchStatus *pss, bool last_stage)
 {
     int pages = -1;
     uint64_t bytes_xmit;
@@ -767,7 +761,7 @@ static int ram_save_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
     ret = ram_control_save_page(f, block->offset,
                            offset, TARGET_PAGE_SIZE, &bytes_xmit);
     if (bytes_xmit) {
-        *bytes_transferred += bytes_xmit;
+        rs->bytes_transferred += bytes_xmit;
         pages = 1;
     }
 
@@ -787,7 +781,7 @@ static int ram_save_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
             }
         }
     } else {
-        pages = save_zero_page(rs, f, block, offset, p, bytes_transferred);
+        pages = save_zero_page(rs, f, block, offset, p);
         if (pages > 0) {
             /* Must let xbzrle know, otherwise a previous (now 0'd) cached
              * page would be stale
@@ -797,7 +791,7 @@ static int ram_save_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
         } else if (!rs->ram_bulk_stage &&
                    !migration_in_postcopy(ms) && migrate_use_xbzrle()) {
             pages = save_xbzrle_page(f, rs, &p, current_addr, block,
-                                     offset, last_stage, bytes_transferred);
+                                     offset, last_stage);
             if (!last_stage) {
                 /* Can't send this cached data async, since the cache page
                  * might get updated before it gets to the wire
@@ -809,7 +803,7 @@ static int ram_save_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
 
     /* XBZRLE overflow or normal page */
     if (pages == -1) {
-        *bytes_transferred += save_page_header(f, block,
+        rs->bytes_transferred += save_page_header(f, block,
                                                offset | RAM_SAVE_FLAG_PAGE);
         if (send_async) {
             qemu_put_buffer_async(f, p, TARGET_PAGE_SIZE,
@@ -818,7 +812,7 @@ static int ram_save_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
         } else {
             qemu_put_buffer(f, p, TARGET_PAGE_SIZE);
         }
-        *bytes_transferred += TARGET_PAGE_SIZE;
+        rs->bytes_transferred += TARGET_PAGE_SIZE;
         pages = 1;
         rs->norm_pages++;
     }
@@ -886,8 +880,7 @@ static inline void set_compress_params(CompressParam *param, RAMBlock *block,
 }
 
 static int compress_page_with_multi_thread(RAMState *rs, QEMUFile *f,
-                                           RAMBlock *block, ram_addr_t offset,
-                                           uint64_t *bytes_transferred)
+                                           RAMBlock *block, ram_addr_t offset)
 {
     int idx, thread_count, bytes_xmit = -1, pages = -1;
 
@@ -904,7 +897,7 @@ static int compress_page_with_multi_thread(RAMState *rs, QEMUFile *f,
                 qemu_mutex_unlock(&comp_param[idx].mutex);
                 pages = 1;
                 rs->norm_pages++;
-                *bytes_transferred += bytes_xmit;
+                rs->bytes_transferred += bytes_xmit;
                 break;
             }
         }
@@ -930,12 +923,10 @@ static int compress_page_with_multi_thread(RAMState *rs, QEMUFile *f,
  * @block: block that contains the page we want to send
  * @offset: offset inside the block for the page
  * @last_stage: if we are at the completion stage
- * @bytes_transferred: increase it with the number of transferred bytes
  */
 static int ram_save_compressed_page(RAMState *rs, MigrationState *ms,
                                     QEMUFile *f,
-                                    PageSearchStatus *pss, bool last_stage,
-                                    uint64_t *bytes_transferred)
+                                    PageSearchStatus *pss, bool last_stage)
 {
     int pages = -1;
     uint64_t bytes_xmit = 0;
@@ -949,7 +940,7 @@ static int ram_save_compressed_page(RAMState *rs, MigrationState *ms,
     ret = ram_control_save_page(f, block->offset,
                                 offset, TARGET_PAGE_SIZE, &bytes_xmit);
     if (bytes_xmit) {
-        *bytes_transferred += bytes_xmit;
+        rs->bytes_transferred += bytes_xmit;
         pages = 1;
     }
     if (ret != RAM_SAVE_CONTROL_NOT_SUPP) {
@@ -969,7 +960,7 @@ static int ram_save_compressed_page(RAMState *rs, MigrationState *ms,
          */
         if (block != rs->last_sent_block) {
             flush_compressed_data(rs, f);
-            pages = save_zero_page(rs, f, block, offset, p, bytes_transferred);
+            pages = save_zero_page(rs, f, block, offset, p);
             if (pages == -1) {
                 /* Make sure the first page is sent out before other pages */
                 bytes_xmit = save_page_header(f, block, offset |
@@ -977,7 +968,7 @@ static int ram_save_compressed_page(RAMState *rs, MigrationState *ms,
                 blen = qemu_put_compression_data(f, p, TARGET_PAGE_SIZE,
                                                  migrate_compress_level());
                 if (blen > 0) {
-                    *bytes_transferred += bytes_xmit + blen;
+                    rs->bytes_transferred += bytes_xmit + blen;
                     rs->norm_pages++;
                     pages = 1;
                 } else {
@@ -990,10 +981,9 @@ static int ram_save_compressed_page(RAMState *rs, MigrationState *ms,
             }
         } else {
             offset |= RAM_SAVE_FLAG_CONTINUE;
-            pages = save_zero_page(rs, f, block, offset, p, bytes_transferred);
+            pages = save_zero_page(rs, f, block, offset, p);
             if (pages == -1) {
-                pages = compress_page_with_multi_thread(rs, f, block, offset,
-                                                        bytes_transferred);
+                pages = compress_page_with_multi_thread(rs, f, block, offset);
             } else {
                 ram_release_pages(ms, block->idstr, pss->offset, pages);
             }
@@ -1256,7 +1246,6 @@ err:
  * @block: pointer to block that contains the page we want to send
  * @offset: offset inside the block for the page;
  * @last_stage: if we are at the completion stage
- * @bytes_transferred: increase it with the number of transferred bytes
  * @dirty_ram_abs: Address of the start of the dirty page in ram_addr_t space
  *
  * Returns: Number of pages written.
@@ -1264,7 +1253,6 @@ err:
 static int ram_save_target_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
                                 PageSearchStatus *pss,
                                 bool last_stage,
-                                uint64_t *bytes_transferred,
                                 ram_addr_t dirty_ram_abs)
 {
     int res = 0;
@@ -1273,12 +1261,9 @@ static int ram_save_target_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
     if (migration_bitmap_clear_dirty(rs, dirty_ram_abs)) {
         unsigned long *unsentmap;
         if (compression_switch && migrate_use_compression()) {
-            res = ram_save_compressed_page(rs, ms, f, pss,
-                                           last_stage,
-                                           bytes_transferred);
+            res = ram_save_compressed_page(rs, ms, f, pss, last_stage);
         } else {
-            res = ram_save_page(rs, ms, f, pss, last_stage,
-                                bytes_transferred);
+            res = ram_save_page(rs, ms, f, pss, last_stage);
         }
 
         if (res < 0) {
@@ -1317,21 +1302,18 @@ static int ram_save_target_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
  * @offset: offset inside the block for the page; updated to last target page
  *          sent
  * @last_stage: if we are at the completion stage
- * @bytes_transferred: increase it with the number of transferred bytes
  * @dirty_ram_abs: Address of the start of the dirty page in ram_addr_t space
  */
 static int ram_save_host_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
                               PageSearchStatus *pss,
                               bool last_stage,
-                              uint64_t *bytes_transferred,
                               ram_addr_t dirty_ram_abs)
 {
     int tmppages, pages = 0;
     size_t pagesize = qemu_ram_pagesize(pss->block);
 
     do {
-        tmppages = ram_save_target_page(rs, ms, f, pss, last_stage,
-                                        bytes_transferred, dirty_ram_abs);
+        tmppages = ram_save_target_page(rs, ms, f, pss, last_stage, dirty_ram_abs);
         if (tmppages < 0) {
             return tmppages;
         }
@@ -1357,14 +1339,12 @@ static int ram_save_host_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
  * @rs: The RAM state
  * @f: QEMUFile where to send the data
  * @last_stage: if we are at the completion stage
- * @bytes_transferred: increase it with the number of transferred bytes
  *
  * On systems where host-page-size > target-page-size it will send all the
  * pages in a host page that are dirty.
  */
 
-static int ram_find_and_save_block(RAMState *rs, QEMUFile *f, bool last_stage,
-                                   uint64_t *bytes_transferred)
+static int ram_find_and_save_block(RAMState *rs, QEMUFile *f, bool last_stage)
 {
     PageSearchStatus pss;
     MigrationState *ms = migrate_get_current();
@@ -1396,9 +1376,7 @@ static int ram_find_and_save_block(RAMState *rs, QEMUFile *f, bool last_stage,
         }
 
         if (found) {
-            pages = ram_save_host_page(rs, ms, f, &pss,
-                                       last_stage, bytes_transferred,
-                                       dirty_ram_abs);
+            pages = ram_save_host_page(rs, ms, f, &pss, last_stage, dirty_ram_abs);
         }
     } while (!pages && again);
 
@@ -2046,7 +2024,7 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
     while ((ret = qemu_file_rate_limit(f)) == 0) {
         int pages;
 
-        pages = ram_find_and_save_block(rs, f, false, &rs->bytes_transferred);
+        pages = ram_find_and_save_block(rs, f, false);
         /* no more pages to sent */
         if (pages == 0) {
             done = 1;
@@ -2107,8 +2085,7 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
     while (true) {
         int pages;
 
-        pages = ram_find_and_save_block(rs, f, !migration_in_colo_state(),
-                                        &rs->bytes_transferred);
+        pages = ram_find_and_save_block(rs, f, !migration_in_colo_state());
         /* no more blocks to sent */
         if (pages == 0) {
             break;
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [Qemu-devel] [PATCH 26/31] ram: Remove ram_save_remaining
  2017-03-15 13:49 [Qemu-devel] [PATCH 00/31] Creating RAMState for migration Juan Quintela
                   ` (24 preceding siblings ...)
  2017-03-15 13:50 ` [Qemu-devel] [PATCH 25/31] ram: Use the RAMState bytes_transferred parameter Juan Quintela
@ 2017-03-15 13:50 ` Juan Quintela
  2017-03-15 13:50 ` [Qemu-devel] [PATCH 27/31] ram: Move last_req_rb to RAMState Juan Quintela
                   ` (5 subsequent siblings)
  31 siblings, 0 replies; 68+ messages in thread
From: Juan Quintela @ 2017-03-15 13:50 UTC (permalink / raw)
  To: qemu-devel; +Cc: amit.shah, dgilbert

Just unfold it.  Move ram_bytes_remaining() next to the rest of the
exported functions.
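
(Not part of the patch, just a sketch of the unfolding for reference;
both forms are equivalent:

    /* before: trivial wrapper around ram_save_remaining() */
    uint64_t ram_bytes_remaining(void)
    {
        return ram_save_remaining() * TARGET_PAGE_SIZE;
    }

    /* after: read the RAMState field directly */
    uint64_t ram_bytes_remaining(void)
    {
        return ram_state.migration_dirty_pages * TARGET_PAGE_SIZE;
    }
)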

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 19 +++++++------------
 1 file changed, 7 insertions(+), 12 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 9c9533d..e7db39c 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -237,16 +237,16 @@ uint64_t xbzrle_mig_pages_overflow(void)
     return ram_state.xbzrle_overflows;
 }
 
-static ram_addr_t ram_save_remaining(void)
-{
-    return ram_state.migration_dirty_pages;
-}
-
 uint64_t ram_bytes_transferred(void)
 {
     return ram_state.bytes_transferred;
 }
 
+uint64_t ram_bytes_remaining(void)
+{
+    return ram_state.migration_dirty_pages * TARGET_PAGE_SIZE;
+}
+
 /* used by the search for pages to send */
 struct PageSearchStatus {
     /* Current block being searched */
@@ -1398,11 +1398,6 @@ void acct_update_position(QEMUFile *f, size_t size, bool zero)
     }
 }
 
-uint64_t ram_bytes_remaining(void)
-{
-    return ram_save_remaining() * TARGET_PAGE_SIZE;
-}
-
 uint64_t ram_bytes_total(void)
 {
     RAMBlock *block;
@@ -2109,7 +2104,7 @@ static void ram_save_pending(QEMUFile *f, void *opaque, uint64_t max_size,
     RAMState *rs = opaque;
     uint64_t remaining_size;
 
-    remaining_size = ram_save_remaining() * TARGET_PAGE_SIZE;
+    remaining_size = rs->migration_dirty_pages * TARGET_PAGE_SIZE;
 
     if (!migration_in_postcopy(migrate_get_current()) &&
         remaining_size < max_size) {
@@ -2118,7 +2113,7 @@ static void ram_save_pending(QEMUFile *f, void *opaque, uint64_t max_size,
         migration_bitmap_sync(rs);
         rcu_read_unlock();
         qemu_mutex_unlock_iothread();
-        remaining_size = ram_save_remaining() * TARGET_PAGE_SIZE;
+        remaining_size = rs->migration_dirty_pages * TARGET_PAGE_SIZE;
     }
 
     /* We can do postcopy, and all the data is postcopiable */
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [Qemu-devel] [PATCH 27/31] ram: Move last_req_rb to RAMState
  2017-03-15 13:49 [Qemu-devel] [PATCH 00/31] Creating RAMState for migration Juan Quintela
                   ` (25 preceding siblings ...)
  2017-03-15 13:50 ` [Qemu-devel] [PATCH 26/31] ram: Remove ram_save_remaining Juan Quintela
@ 2017-03-15 13:50 ` Juan Quintela
  2017-03-17 10:14   ` Dr. David Alan Gilbert
  2017-03-15 13:50 ` [Qemu-devel] [PATCH 28/31] ram: Create ram_dirty_sync_count() Juan Quintela
                   ` (4 subsequent siblings)
  31 siblings, 1 reply; 68+ messages in thread
From: Juan Quintela @ 2017-03-15 13:50 UTC (permalink / raw)
  To: qemu-devel; +Cc: amit.shah, dgilbert

It was on MigrationState even though it is only used inside ram.c for
postcopy.  The problem is that we need to access it from code that
cannot be handed a RAMState directly.
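
(Not part of the patch; a sketch of the access pattern this forces.
ram_save_queue_pages() still only receives a MigrationState, so inside
ram.c the field is reached through the file-scope ram_state:

    if (!rbname) {
        /* reuse the RAMBlock of the previous request */
        ramblock = ram_state.last_req_rb;
    } else {
        /* ... look the block up by name ... */
        ram_state.last_req_rb = ramblock;
    }
)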

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 include/migration/migration.h | 2 --
 migration/migration.c         | 1 -
 migration/ram.c               | 6 ++++--
 3 files changed, 4 insertions(+), 5 deletions(-)

diff --git a/include/migration/migration.h b/include/migration/migration.h
index 84cef4b..e032fb0 100644
--- a/include/migration/migration.h
+++ b/include/migration/migration.h
@@ -189,8 +189,6 @@ struct MigrationState
     /* Queue of outstanding page requests from the destination */
     QemuMutex src_page_req_mutex;
     QSIMPLEQ_HEAD(src_page_requests, MigrationSrcPageRequest) src_page_requests;
-    /* The RAMBlock used in the last src_page_request */
-    RAMBlock *last_req_rb;
     /* The semaphore is used to notify COLO thread that failover is finished */
     QemuSemaphore colo_exit_sem;
 
diff --git a/migration/migration.c b/migration/migration.c
index 46645b6..4f19382 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1114,7 +1114,6 @@ MigrationState *migrate_init(const MigrationParams *params)
     s->postcopy_after_devices = false;
     s->postcopy_requests = 0;
     s->migration_thread_running = false;
-    s->last_req_rb = NULL;
     error_free(s->error);
     s->error = NULL;
 
diff --git a/migration/ram.c b/migration/ram.c
index e7db39c..50ca1da 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -197,6 +197,8 @@ struct RAMState {
     QemuMutex bitmap_mutex;
     /* Ram Bitmap protected by RCU */
     RAMBitmap *ram_bitmap;
+    /* The RAMBlock used in the last src_page_request */
+    RAMBlock *last_req_rb;
 };
 typedef struct RAMState RAMState;
 
@@ -1190,7 +1192,7 @@ int ram_save_queue_pages(MigrationState *ms, const char *rbname,
     rcu_read_lock();
     if (!rbname) {
         /* Reuse last RAMBlock */
-        ramblock = ms->last_req_rb;
+        ramblock = ram_state.last_req_rb;
 
         if (!ramblock) {
             /*
@@ -1208,7 +1210,7 @@ int ram_save_queue_pages(MigrationState *ms, const char *rbname,
             error_report("ram_save_queue_pages no block '%s'", rbname);
             goto err;
         }
-        ms->last_req_rb = ramblock;
+        ram_state.last_req_rb = ramblock;
     }
     trace_ram_save_queue_pages(ramblock->idstr, start, len);
     if (start+len > ramblock->used_length) {
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [Qemu-devel] [PATCH 28/31] ram: Create ram_dirty_sync_count()
  2017-03-15 13:49 [Qemu-devel] [PATCH 00/31] Creating RAMState for migration Juan Quintela
                   ` (26 preceding siblings ...)
  2017-03-15 13:50 ` [Qemu-devel] [PATCH 27/31] ram: Move last_req_rb to RAMState Juan Quintela
@ 2017-03-15 13:50 ` Juan Quintela
  2017-03-15 13:50 ` [Qemu-devel] [PATCH 29/31] ram: Remove dirty_bytes_rate Juan Quintela
                   ` (3 subsequent siblings)
  31 siblings, 0 replies; 68+ messages in thread
From: Juan Quintela @ 2017-03-15 13:50 UTC (permalink / raw)
  To: qemu-devel; +Cc: amit.shah, dgilbert

This is a ram field that was inside MigrationState.  Move it to
RAMState and make it the same as the other ram stats.

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 include/migration/migration.h | 2 +-
 migration/migration.c         | 3 +--
 migration/ram.c               | 6 +++++-
 3 files changed, 7 insertions(+), 4 deletions(-)

diff --git a/include/migration/migration.h b/include/migration/migration.h
index e032fb0..54a1a4f 100644
--- a/include/migration/migration.h
+++ b/include/migration/migration.h
@@ -171,7 +171,6 @@ struct MigrationState
     bool enabled_capabilities[MIGRATION_CAPABILITY__MAX];
     int64_t xbzrle_cache_size;
     int64_t setup_time;
-    int64_t dirty_sync_count;
     /* Count of requests incoming from destination */
     int64_t postcopy_requests;
 
@@ -270,6 +269,7 @@ void migrate_decompress_threads_join(void);
 uint64_t ram_bytes_remaining(void);
 uint64_t ram_bytes_transferred(void);
 uint64_t ram_bytes_total(void);
+uint64_t ram_dirty_sync_count(void);
 void free_xbzrle_decoded_buf(void);
 
 void acct_update_position(QEMUFile *f, size_t size, bool zero);
diff --git a/migration/migration.c b/migration/migration.c
index 4f19382..09d02be 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -645,7 +645,7 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
     info->ram->normal_bytes = norm_mig_pages_transferred() *
         (1ul << qemu_target_page_bits());
     info->ram->mbps = s->mbps;
-    info->ram->dirty_sync_count = s->dirty_sync_count;
+    info->ram->dirty_sync_count = ram_dirty_sync_count();
     info->ram->postcopy_requests = s->postcopy_requests;
 
     if (s->state != MIGRATION_STATUS_COMPLETED) {
@@ -1109,7 +1109,6 @@ MigrationState *migrate_init(const MigrationParams *params)
     s->dirty_pages_rate = 0;
     s->dirty_bytes_rate = 0;
     s->setup_time = 0;
-    s->dirty_sync_count = 0;
     s->start_postcopy = false;
     s->postcopy_after_devices = false;
     s->postcopy_requests = 0;
diff --git a/migration/ram.c b/migration/ram.c
index 50ca1da..4563e3d 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -249,6 +249,11 @@ uint64_t ram_bytes_remaining(void)
     return ram_state.migration_dirty_pages * TARGET_PAGE_SIZE;
 }
 
+uint64_t ram_dirty_sync_count(void)
+{
+    return ram_state.bitmap_sync_count;
+}
+
 /* used by the search for pages to send */
 struct PageSearchStatus {
     /* Current block being searched */
@@ -686,7 +691,6 @@ static void migration_bitmap_sync(RAMState *rs)
         rs->start_time = end_time;
         rs->num_dirty_pages_period = 0;
     }
-    s->dirty_sync_count = rs->bitmap_sync_count;
     if (migrate_use_events()) {
         qapi_event_send_migration_pass(rs->bitmap_sync_count, NULL);
     }
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [Qemu-devel] [PATCH 29/31] ram: Remove dirty_bytes_rate
  2017-03-15 13:49 [Qemu-devel] [PATCH 00/31] Creating RAMState for migration Juan Quintela
                   ` (27 preceding siblings ...)
  2017-03-15 13:50 ` [Qemu-devel] [PATCH 28/31] ram: Create ram_dirty_sync_count() Juan Quintela
@ 2017-03-15 13:50 ` Juan Quintela
  2017-03-17 10:21   ` Dr. David Alan Gilbert
  2017-03-15 13:50 ` [Qemu-devel] [PATCH 30/31] ram: move dirty_pages_rate to RAMState Juan Quintela
                   ` (2 subsequent siblings)
  31 siblings, 1 reply; 68+ messages in thread
From: Juan Quintela @ 2017-03-15 13:50 UTC (permalink / raw)
  To: qemu-devel; +Cc: amit.shah, dgilbert

It can be recalculated from dirty_pages_rate.
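
(Not part of the patch, just the identity being relied on:
dirty_bytes_rate was only ever set to dirty_pages_rate * TARGET_PAGE_SIZE,
so the expected-downtime estimate does not change:

    /* before */
    s->expected_downtime = s->dirty_bytes_rate / bandwidth;

    /* after: dirty_bytes_rate == dirty_pages_rate * target page size */
    s->expected_downtime = s->dirty_pages_rate *
                           (1ul << qemu_target_page_bits()) / bandwidth;
)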

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 include/migration/migration.h | 1 -
 migration/migration.c         | 5 ++---
 migration/ram.c               | 1 -
 3 files changed, 2 insertions(+), 5 deletions(-)

diff --git a/include/migration/migration.h b/include/migration/migration.h
index 54a1a4f..42b9edf 100644
--- a/include/migration/migration.h
+++ b/include/migration/migration.h
@@ -167,7 +167,6 @@ struct MigrationState
     int64_t downtime;
     int64_t expected_downtime;
     int64_t dirty_pages_rate;
-    int64_t dirty_bytes_rate;
     bool enabled_capabilities[MIGRATION_CAPABILITY__MAX];
     int64_t xbzrle_cache_size;
     int64_t setup_time;
diff --git a/migration/migration.c b/migration/migration.c
index 09d02be..2f8c440 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1107,7 +1107,6 @@ MigrationState *migrate_init(const MigrationParams *params)
     s->downtime = 0;
     s->expected_downtime = 0;
     s->dirty_pages_rate = 0;
-    s->dirty_bytes_rate = 0;
     s->setup_time = 0;
     s->start_postcopy = false;
     s->postcopy_after_devices = false;
@@ -1999,8 +1998,8 @@ static void *migration_thread(void *opaque)
                                       bandwidth, max_size);
             /* if we haven't sent anything, we don't want to recalculate
                10000 is a small enough number for our purposes */
-            if (s->dirty_bytes_rate && transferred_bytes > 10000) {
-                s->expected_downtime = s->dirty_bytes_rate / bandwidth;
+            if (s->dirty_pages_rate && transferred_bytes > 10000) {
+                s->expected_downtime = s->dirty_pages_rate * (1ul << qemu_target_page_bits())/ bandwidth;
             }
 
             qemu_file_reset_rate_limit(s->to_dst_file);
diff --git a/migration/ram.c b/migration/ram.c
index 4563e3d..1006e60 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -687,7 +687,6 @@ static void migration_bitmap_sync(RAMState *rs)
         }
         s->dirty_pages_rate = rs->num_dirty_pages_period * 1000
             / (end_time - rs->start_time);
-        s->dirty_bytes_rate = s->dirty_pages_rate * TARGET_PAGE_SIZE;
         rs->start_time = end_time;
         rs->num_dirty_pages_period = 0;
     }
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [Qemu-devel] [PATCH 30/31] ram: move dirty_pages_rate to RAMState
  2017-03-15 13:49 [Qemu-devel] [PATCH 00/31] Creating RAMState for migration Juan Quintela
                   ` (28 preceding siblings ...)
  2017-03-15 13:50 ` [Qemu-devel] [PATCH 29/31] ram: Remove dirty_bytes_rate Juan Quintela
@ 2017-03-15 13:50 ` Juan Quintela
  2017-03-17 10:45   ` Dr. David Alan Gilbert
  2017-03-15 13:50 ` [Qemu-devel] [PATCH 31/31] ram: move postcopy_requests into RAMState Juan Quintela
  2017-03-15 14:25 ` [Qemu-devel] [PATCH 00/31] Creating RAMState for migration no-reply
  31 siblings, 1 reply; 68+ messages in thread
From: Juan Quintela @ 2017-03-15 13:50 UTC (permalink / raw)
  To: qemu-devel; +Cc: amit.shah, dgilbert

Treat it like the rest of the ram stats counters.  Export its value
the same way.  As an added bonus, MigrationState is no longer used in
migration_bitmap_sync().

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 include/migration/migration.h |  2 +-
 migration/migration.c         |  7 +++----
 migration/ram.c               | 12 +++++++++---
 3 files changed, 13 insertions(+), 8 deletions(-)

diff --git a/include/migration/migration.h b/include/migration/migration.h
index 42b9edf..43bdf86 100644
--- a/include/migration/migration.h
+++ b/include/migration/migration.h
@@ -166,7 +166,6 @@ struct MigrationState
     int64_t total_time;
     int64_t downtime;
     int64_t expected_downtime;
-    int64_t dirty_pages_rate;
     bool enabled_capabilities[MIGRATION_CAPABILITY__MAX];
     int64_t xbzrle_cache_size;
     int64_t setup_time;
@@ -269,6 +268,7 @@ uint64_t ram_bytes_remaining(void);
 uint64_t ram_bytes_transferred(void);
 uint64_t ram_bytes_total(void);
 uint64_t ram_dirty_sync_count(void);
+uint64_t ram_dirty_pages_rate(void);
 void free_xbzrle_decoded_buf(void);
 
 void acct_update_position(QEMUFile *f, size_t size, bool zero);
diff --git a/migration/migration.c b/migration/migration.c
index 2f8c440..0a70d55 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -650,7 +650,7 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
 
     if (s->state != MIGRATION_STATUS_COMPLETED) {
         info->ram->remaining = ram_bytes_remaining();
-        info->ram->dirty_pages_rate = s->dirty_pages_rate;
+        info->ram->dirty_pages_rate = ram_dirty_pages_rate();
     }
 }
 
@@ -1106,7 +1106,6 @@ MigrationState *migrate_init(const MigrationParams *params)
     s->mbps = 0.0;
     s->downtime = 0;
     s->expected_downtime = 0;
-    s->dirty_pages_rate = 0;
     s->setup_time = 0;
     s->start_postcopy = false;
     s->postcopy_after_devices = false;
@@ -1998,8 +1997,8 @@ static void *migration_thread(void *opaque)
                                       bandwidth, max_size);
             /* if we haven't sent anything, we don't want to recalculate
                10000 is a small enough number for our purposes */
-            if (s->dirty_pages_rate && transferred_bytes > 10000) {
-                s->expected_downtime = s->dirty_pages_rate * (1ul << qemu_target_page_bits())/ bandwidth;
+            if (ram_dirty_pages_rate() && transferred_bytes > 10000) {
+                s->expected_downtime = ram_dirty_pages_rate() * (1ul << qemu_target_page_bits())/ bandwidth;
             }
 
             qemu_file_reset_rate_limit(s->to_dst_file);
diff --git a/migration/ram.c b/migration/ram.c
index 1006e60..b85f58f 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -193,6 +193,8 @@ struct RAMState {
     uint64_t migration_dirty_pages;
     /* total number of bytes transferred */
     uint64_t bytes_transferred;
+    /* number of dirtied pages in the last second */
+    uint64_t dirty_pages_rate;
     /* protects modification of the bitmap */
     QemuMutex bitmap_mutex;
     /* Ram Bitmap protected by RCU */
@@ -254,6 +256,11 @@ uint64_t ram_dirty_sync_count(void)
     return ram_state.bitmap_sync_count;
 }
 
+uint64_t ram_dirty_pages_rate(void)
+{
+    return ram_state.dirty_pages_rate;
+}
+
 /* used by the search for pages to send */
 struct PageSearchStatus {
     /* Current block being searched */
@@ -624,7 +631,6 @@ static void migration_bitmap_sync(RAMState *rs)
 {
     RAMBlock *block;
     uint64_t num_dirty_pages_init = rs->migration_dirty_pages;
-    MigrationState *s = migrate_get_current();
     int64_t end_time;
     int64_t bytes_xfer_now;
 
@@ -664,7 +670,7 @@ static void migration_bitmap_sync(RAMState *rs)
                throttling */
             bytes_xfer_now = ram_bytes_transferred();
 
-            if (s->dirty_pages_rate &&
+            if (rs->dirty_pages_rate &&
                (rs->num_dirty_pages_period * TARGET_PAGE_SIZE >
                    (bytes_xfer_now - rs->bytes_xfer_prev)/2) &&
                (rs->dirty_rate_high_cnt++ >= 2)) {
@@ -685,7 +691,7 @@ static void migration_bitmap_sync(RAMState *rs)
             rs->iterations_prev = rs->iterations;
             rs->xbzrle_cache_miss_prev = rs->xbzrle_cache_miss;
         }
-        s->dirty_pages_rate = rs->num_dirty_pages_period * 1000
+        rs->dirty_pages_rate = rs->num_dirty_pages_period * 1000
             / (end_time - rs->start_time);
         rs->start_time = end_time;
         rs->num_dirty_pages_period = 0;
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [Qemu-devel] [PATCH 31/31] ram: move postcopy_requests into RAMState
  2017-03-15 13:49 [Qemu-devel] [PATCH 00/31] Creating RAMState for migration Juan Quintela
                   ` (29 preceding siblings ...)
  2017-03-15 13:50 ` [Qemu-devel] [PATCH 30/31] ram: move dirty_pages_rate to RAMState Juan Quintela
@ 2017-03-15 13:50 ` Juan Quintela
  2017-03-15 14:25 ` [Qemu-devel] [PATCH 00/31] Creating RAMState for migration no-reply
  31 siblings, 0 replies; 68+ messages in thread
From: Juan Quintela @ 2017-03-15 13:50 UTC (permalink / raw)
  To: qemu-devel; +Cc: amit.shah, dgilbert

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 include/migration/migration.h | 3 +--
 migration/migration.c         | 3 +--
 migration/ram.c               | 9 ++++++++-
 3 files changed, 10 insertions(+), 5 deletions(-)

diff --git a/include/migration/migration.h b/include/migration/migration.h
index 43bdf86..bc48a8e 100644
--- a/include/migration/migration.h
+++ b/include/migration/migration.h
@@ -169,8 +169,6 @@ struct MigrationState
     bool enabled_capabilities[MIGRATION_CAPABILITY__MAX];
     int64_t xbzrle_cache_size;
     int64_t setup_time;
-    /* Count of requests incoming from destination */
-    int64_t postcopy_requests;
 
     /* Flag set once the migration has been asked to enter postcopy */
     bool start_postcopy;
@@ -269,6 +267,7 @@ uint64_t ram_bytes_transferred(void);
 uint64_t ram_bytes_total(void);
 uint64_t ram_dirty_sync_count(void);
 uint64_t ram_dirty_pages_rate(void);
+uint64_t ram_postcopy_requests(void);
 void free_xbzrle_decoded_buf(void);
 
 void acct_update_position(QEMUFile *f, size_t size, bool zero);
diff --git a/migration/migration.c b/migration/migration.c
index 0a70d55..0df6111 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -646,7 +646,7 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
         (1ul << qemu_target_page_bits());
     info->ram->mbps = s->mbps;
     info->ram->dirty_sync_count = ram_dirty_sync_count();
-    info->ram->postcopy_requests = s->postcopy_requests;
+    info->ram->postcopy_requests = ram_postcopy_requests();
 
     if (s->state != MIGRATION_STATUS_COMPLETED) {
         info->ram->remaining = ram_bytes_remaining();
@@ -1109,7 +1109,6 @@ MigrationState *migrate_init(const MigrationParams *params)
     s->setup_time = 0;
     s->start_postcopy = false;
     s->postcopy_after_devices = false;
-    s->postcopy_requests = 0;
     s->migration_thread_running = false;
     error_free(s->error);
     s->error = NULL;
diff --git a/migration/ram.c b/migration/ram.c
index b85f58f..91f9fb5 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -195,6 +195,8 @@ struct RAMState {
     uint64_t bytes_transferred;
     /* number of dirtied pages in the last second */
     uint64_t dirty_pages_rate;
+    /* Count of requests incoming from destination */
+    uint64_t postcopy_requests;
     /* protects modification of the bitmap */
     QemuMutex bitmap_mutex;
     /* Ram Bitmap protected by RCU */
@@ -261,6 +263,11 @@ uint64_t ram_dirty_pages_rate(void)
     return ram_state.dirty_pages_rate;
 }
 
+uint64_t ram_postcopy_requests(void)
+{
+    return ram_state.postcopy_requests;
+}
+
 /* used by the search for pages to send */
 struct PageSearchStatus {
     /* Current block being searched */
@@ -1197,7 +1204,7 @@ int ram_save_queue_pages(MigrationState *ms, const char *rbname,
 {
     RAMBlock *ramblock;
 
-    ms->postcopy_requests++;
+    ram_state.postcopy_requests++;
     rcu_read_lock();
     if (!rbname) {
         /* Reuse last RAMBlock */
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 68+ messages in thread

* Re: [Qemu-devel] [PATCH 00/31] Creating RAMState for migration
  2017-03-15 13:49 [Qemu-devel] [PATCH 00/31] Creating RAMState for migration Juan Quintela
                   ` (30 preceding siblings ...)
  2017-03-15 13:50 ` [Qemu-devel] [PATCH 31/31] ram: move postcopy_requests into RAMState Juan Quintela
@ 2017-03-15 14:25 ` no-reply
  31 siblings, 0 replies; 68+ messages in thread
From: no-reply @ 2017-03-15 14:25 UTC (permalink / raw)
  To: quintela; +Cc: famz, qemu-devel, amit.shah, dgilbert

Hi,

This series seems to have some coding style problems. See output below for
more information:

Type: series
Message-id: 20170315135021.6978-1-quintela@redhat.com
Subject: [Qemu-devel] [PATCH 00/31] Creating RAMState for migration

=== TEST SCRIPT BEGIN ===
#!/bin/bash

BASE=base
n=1
total=$(git log --oneline $BASE.. | wc -l)
failed=0

# Useful git options
git config --local diff.renamelimit 0
git config --local diff.renames True

commits="$(git log --format=%H --reverse $BASE..)"
for c in $commits; do
    echo "Checking PATCH $n/$total: $(git log -n 1 --format=%s $c)..."
    if ! git show $c --format=email | ./scripts/checkpatch.pl --mailback -; then
        failed=1
        echo
    fi
    n=$((n+1))
done

exit $failed
=== TEST SCRIPT END ===

Updating 3c8cf5a9c21ff8782164d1def7f44bd888713384
From https://github.com/patchew-project/qemu
 * [new tag]         patchew/20170315135021.6978-1-quintela@redhat.com -> patchew/20170315135021.6978-1-quintela@redhat.com
 * [new tag]         patchew/20170315142032.6788-1-shorne@gmail.com -> patchew/20170315142032.6788-1-shorne@gmail.com
Switched to a new branch 'test'
6fe577e ram: move postcopy_requests into RAMState
f53e389 ram: move dirty_pages_rate to RAMState
3484779 ram: Remove dirty_bytes_rate
1d9f0c2 ram: Create ram_dirty_sync_count()
afa30a7 ram: Move last_req_rb to RAMState
b6c274d ram: Remove ram_save_remaining
40aa7d1 ram: Use the RAMState bytes_transferred parameter
c160baf ram: Move bytes_transferred into RAMState
d4507b7 ram: Move migration_bitmap_rcu into RAMState
06e4d78 ram: move migration_bitmap_mutex into RAMState
a97081e ram: Everything was init to zero, so use memset
f0575a7 ram: move migration_dirty_pages to RAMState
2d35060 ram: move xbzrle_overflows into RAMState
8b9c459 ram: move xbzrle_cache_miss_rate into RAMState
a69643a ram: Move xbzrle_cache_miss into RAMState
b56f54b ram: Move xbzrle_pages into RAMState
6771534 ram: Move xbzrle_bytes into RAMState
d921876 ram: Move iterations into RAMState
174f925 ram: Remove norm_mig_bytes_transferred
5523509 ram: Move norm_pages to RAMState
abf8516 ram: Remove unused pages_skiped variable
fad4269 ram: Remove unused dump_mig_dbytes_transferred()
b1d44ad ram: Move dup_pages into RAMState
bbe7d75 ram: Move iterations_prev into RAMState
9d4c5c4 ram: Move xbzrle_cache_miss_prev into RAMState
de4f910 ram: Move num_dirty_pages_period into RAMState
c94e9c4 ram: Move bytes_xfer_prev into RAMState
84ef592 ram: Move start time into RAMState
e967ab5 ram: move bitmap_sync_count into RAMState
a9b59ac ram: Add dirty_rate_high_cnt to RAMState
620e9e6 ram: move more fields into RAMState

=== OUTPUT BEGIN ===
Checking PATCH 1/31: ram: move more fields into RAMState...
WARNING: line over 80 characters
#201: FILE: migration/ram.c:1121:
+static bool get_queued_page(RAMState *rs, MigrationState *ms, PageSearchStatus *pss,

ERROR: trailing whitespace
#433: FILE: migration/ram.c:2120:
+    $

total: 1 errors, 1 warnings, 400 lines checked

Your patch has style problems, please review.  If any of these errors
are false positives report them to the maintainer, see
CHECKPATCH in MAINTAINERS.

Checking PATCH 2/31: ram: Add dirty_rate_high_cnt to RAMState...
Checking PATCH 3/31: ram: move bitmap_sync_count into RAMState...
Checking PATCH 4/31: ram: Move start time into RAMState...
Checking PATCH 5/31: ram: Move bytes_xfer_prev into RAMState...
ERROR: spaces required around that '/' (ctx:VxV)
#55: FILE: migration/ram.c:678:
+                   (bytes_xfer_now - rs->bytes_xfer_prev)/2) &&
                                                          ^

total: 1 errors, 0 warnings, 48 lines checked

Your patch has style problems, please review.  If any of these errors
are false positives report them to the maintainer, see
CHECKPATCH in MAINTAINERS.

Checking PATCH 6/31: ram: Move num_dirty_pages_period into RAMState...
Checking PATCH 7/31: ram: Move xbzrle_cache_miss_prev into RAMState...
Checking PATCH 8/31: ram: Move iterations_prev into RAMState...
Checking PATCH 9/31: ram: Move dup_pages into RAMState...
Checking PATCH 10/31: ram: Remove unused dump_mig_dbytes_transferred()...
Checking PATCH 11/31: ram: Remove unused pages_skiped variable...
Checking PATCH 12/31: ram: Move norm_pages to RAMState...
Checking PATCH 13/31: ram: Remove norm_mig_bytes_transferred...
Checking PATCH 14/31: ram: Move iterations into RAMState...
Checking PATCH 15/31: ram: Move xbzrle_bytes into RAMState...
Checking PATCH 16/31: ram: Move xbzrle_pages into RAMState...
Checking PATCH 17/31: ram: Move xbzrle_cache_miss into RAMState...
Checking PATCH 18/31: ram: move xbzrle_cache_miss_rate into RAMState...
Checking PATCH 19/31: ram: move xbzrle_overflows into RAMState...
Checking PATCH 20/31: ram: move migration_dirty_pages to RAMState...
ERROR: spaces prohibited around that '->' (ctx:VxW)
#62: FILE: migration/ram.c:587:
+    rs-> migration_dirty_pages +=
       ^

WARNING: line over 80 characters
#89: FILE: migration/ram.c:648:
+    rs->num_dirty_pages_period += rs->migration_dirty_pages - num_dirty_pages_init;

WARNING: line over 80 characters
#128: FILE: migration/ram.c:1776:
+                ram_state.migration_dirty_pages += !test_and_set_bit(page, bitmap);

total: 1 errors, 2 warnings, 117 lines checked

Your patch has style problems, please review.  If any of these errors
are false positives report them to the maintainer, see
CHECKPATCH in MAINTAINERS.

Checking PATCH 21/31: ram: Everything was init to zero, so use memset...
Checking PATCH 22/31: ram: move migration_bitmap_mutex into RAMState...
Checking PATCH 23/31: ram: Move migration_bitmap_rcu into RAMState...
Checking PATCH 24/31: ram: Move bytes_transferred into RAMState...
Checking PATCH 25/31: ram: Use the RAMState bytes_transferred parameter...
WARNING: line over 80 characters
#56: FILE: migration/ram.c:711:
+                                                  offset | RAM_SAVE_FLAG_COMPRESS);

WARNING: line over 80 characters
#244: FILE: migration/ram.c:1316:
+        tmppages = ram_save_target_page(rs, ms, f, pss, last_stage, dirty_ram_abs);

WARNING: line over 80 characters
#271: FILE: migration/ram.c:1379:
+            pages = ram_save_host_page(rs, ms, f, &pss, last_stage, dirty_ram_abs);

total: 0 errors, 3 warnings, 255 lines checked

Your patch has style problems, please review.  If any of these errors
are false positives report them to the maintainer, see
CHECKPATCH in MAINTAINERS.
Checking PATCH 26/31: ram: Remove ram_save_remaining...
Checking PATCH 27/31: ram: Move last_req_rb to RAMState...
Checking PATCH 28/31: ram: Create ram_dirty_sync_count()...
Checking PATCH 29/31: ram: Remove dirty_bytes_rate...
ERROR: line over 90 characters
#42: FILE: migration/migration.c:2002:
+                s->expected_downtime = s->dirty_pages_rate * (1ul << qemu_target_page_bits())/ bandwidth;

ERROR: spaces required around that '/' (ctx:VxW)
#42: FILE: migration/migration.c:2002:
+                s->expected_downtime = s->dirty_pages_rate * (1ul << qemu_target_page_bits())/ bandwidth;
                                                                                              ^

total: 2 errors, 0 warnings, 31 lines checked

Your patch has style problems, please review.  If any of these errors
are false positives report them to the maintainer, see
CHECKPATCH in MAINTAINERS.

Checking PATCH 30/31: ram: move dirty_pages_rate to RAMState...
ERROR: line over 90 characters
#61: FILE: migration/migration.c:2001:
+                s->expected_downtime = ram_dirty_pages_rate() * (1ul << qemu_target_page_bits())/ bandwidth;

ERROR: spaces required around that '/' (ctx:VxW)
#61: FILE: migration/migration.c:2001:
+                s->expected_downtime = ram_dirty_pages_rate() * (1ul << qemu_target_page_bits())/ bandwidth;
                                                                                                 ^

total: 2 errors, 0 warnings, 81 lines checked

Your patch has style problems, please review.  If any of these errors
are false positives report them to the maintainer, see
CHECKPATCH in MAINTAINERS.

Checking PATCH 31/31: ram: move postcopy_requests into RAMState...
=== OUTPUT END ===
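
(For reference, not part of the bot output: the statements flagged in
patches 29 and 30 would pass checkpatch once wrapped under 80 columns
with spaces around the '/', e.g.

    s->expected_downtime = ram_dirty_pages_rate() *
                           (1ul << qemu_target_page_bits()) / bandwidth;
)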

Test command exited with code: 1


---
Email generated automatically by Patchew [http://patchew.org/].
Please send your feedback to patchew-devel@freelists.org

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [Qemu-devel] [PATCH 01/31] ram: move more fields into RAMState
  2017-03-15 13:49 ` [Qemu-devel] [PATCH 01/31] ram: move more fields into RAMState Juan Quintela
@ 2017-03-16 12:09   ` Dr. David Alan Gilbert
  2017-03-16 21:32     ` Philippe Mathieu-Daudé
  2017-03-20 19:36     ` Juan Quintela
  0 siblings, 2 replies; 68+ messages in thread
From: Dr. David Alan Gilbert @ 2017-03-16 12:09 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel

* Juan Quintela (quintela@redhat.com) wrote:
> last_seen_block, last_sent_block, last_offset, last_version and
> ram_bulk_stage are globals that are really related together.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> ---
>  migration/ram.c | 136 ++++++++++++++++++++++++++++++++------------------------
>  1 file changed, 79 insertions(+), 57 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index 719425b..c20a539 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -136,6 +136,23 @@ out:
>      return ret;
>  }
>  
> +/* State of RAM for migration */
> +struct RAMState {
> +    /* Last block that we have visited searching for dirty pages */
> +    RAMBlock    *last_seen_block;
> +    /* Last block from where we have sent data */
> +    RAMBlock *last_sent_block;
> +    /* Last offeset we have sent data from */
                  ^
                  One extra e

Other than that (and the minor formatting things the bot found)

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> +    ram_addr_t last_offset;
> +    /* last ram version we have seen */
> +    uint32_t last_version;
> +    /* We are in the first round */
> +    bool ram_bulk_stage;
> +};
> +typedef struct RAMState RAMState;
> +
> +static RAMState ram_state;
> +
>  /* accounting for migration statistics */
>  typedef struct AccountingInfo {
>      uint64_t dup_pages;
> @@ -211,16 +228,8 @@ uint64_t xbzrle_mig_pages_overflow(void)
>      return acct_info.xbzrle_overflows;
>  }
>  
> -/* This is the last block that we have visited serching for dirty pages
> - */
> -static RAMBlock *last_seen_block;
> -/* This is the last block from where we have sent data */
> -static RAMBlock *last_sent_block;
> -static ram_addr_t last_offset;
>  static QemuMutex migration_bitmap_mutex;
>  static uint64_t migration_dirty_pages;
> -static uint32_t last_version;
> -static bool ram_bulk_stage;
>  
>  /* used by the search for pages to send */
>  struct PageSearchStatus {
> @@ -437,9 +446,9 @@ static void mig_throttle_guest_down(void)
>   * As a bonus, if the page wasn't in the cache it gets added so that
>   * when a small write is made into the 0'd page it gets XBZRLE sent
>   */
> -static void xbzrle_cache_zero_page(ram_addr_t current_addr)
> +static void xbzrle_cache_zero_page(RAMState *rs, ram_addr_t current_addr)
>  {
> -    if (ram_bulk_stage || !migrate_use_xbzrle()) {
> +    if (rs->ram_bulk_stage || !migrate_use_xbzrle()) {
>          return;
>      }
>  
> @@ -539,7 +548,7 @@ static int save_xbzrle_page(QEMUFile *f, uint8_t **current_data,
>   * Returns: byte offset within memory region of the start of a dirty page
>   */
>  static inline
> -ram_addr_t migration_bitmap_find_dirty(RAMBlock *rb,
> +ram_addr_t migration_bitmap_find_dirty(RAMState *rs, RAMBlock *rb,
>                                         ram_addr_t start,
>                                         ram_addr_t *ram_addr_abs)
>  {
> @@ -552,7 +561,7 @@ ram_addr_t migration_bitmap_find_dirty(RAMBlock *rb,
>      unsigned long next;
>  
>      bitmap = atomic_rcu_read(&migration_bitmap_rcu)->bmap;
> -    if (ram_bulk_stage && nr > base) {
> +    if (rs->ram_bulk_stage && nr > base) {
>          next = nr + 1;
>      } else {
>          next = find_next_bit(bitmap, size, nr);
> @@ -740,6 +749,7 @@ static void ram_release_pages(MigrationState *ms, const char *block_name,
>   *          >=0 - Number of pages written - this might legally be 0
>   *                if xbzrle noticed the page was the same.
>   *
> + * @rs: The RAM state
>   * @ms: The current migration state.
>   * @f: QEMUFile where to send the data
>   * @block: block that contains the page we want to send
> @@ -747,8 +757,9 @@ static void ram_release_pages(MigrationState *ms, const char *block_name,
>   * @last_stage: if we are at the completion stage
>   * @bytes_transferred: increase it with the number of transferred bytes
>   */
> -static int ram_save_page(MigrationState *ms, QEMUFile *f, PageSearchStatus *pss,
> -                         bool last_stage, uint64_t *bytes_transferred)
> +static int ram_save_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
> +                         PageSearchStatus *pss, bool last_stage,
> +                         uint64_t *bytes_transferred)
>  {
>      int pages = -1;
>      uint64_t bytes_xmit;
> @@ -774,7 +785,7 @@ static int ram_save_page(MigrationState *ms, QEMUFile *f, PageSearchStatus *pss,
>  
>      current_addr = block->offset + offset;
>  
> -    if (block == last_sent_block) {
> +    if (block == rs->last_sent_block) {
>          offset |= RAM_SAVE_FLAG_CONTINUE;
>      }
>      if (ret != RAM_SAVE_CONTROL_NOT_SUPP) {
> @@ -791,9 +802,9 @@ static int ram_save_page(MigrationState *ms, QEMUFile *f, PageSearchStatus *pss,
>              /* Must let xbzrle know, otherwise a previous (now 0'd) cached
>               * page would be stale
>               */
> -            xbzrle_cache_zero_page(current_addr);
> +            xbzrle_cache_zero_page(rs, current_addr);
>              ram_release_pages(ms, block->idstr, pss->offset, pages);
> -        } else if (!ram_bulk_stage &&
> +        } else if (!rs->ram_bulk_stage &&
>                     !migration_in_postcopy(ms) && migrate_use_xbzrle()) {
>              pages = save_xbzrle_page(f, &p, current_addr, block,
>                                       offset, last_stage, bytes_transferred);
> @@ -925,6 +936,7 @@ static int compress_page_with_multi_thread(QEMUFile *f, RAMBlock *block,
>   *
>   * Returns: Number of pages written.
>   *
> + * @rs: The RAM state
>   * @ms: The current migration state.
>   * @f: QEMUFile where to send the data
>   * @block: block that contains the page we want to send
> @@ -932,7 +944,8 @@ static int compress_page_with_multi_thread(QEMUFile *f, RAMBlock *block,
>   * @last_stage: if we are at the completion stage
>   * @bytes_transferred: increase it with the number of transferred bytes
>   */
> -static int ram_save_compressed_page(MigrationState *ms, QEMUFile *f,
> +static int ram_save_compressed_page(RAMState *rs, MigrationState *ms,
> +                                    QEMUFile *f,
>                                      PageSearchStatus *pss, bool last_stage,
>                                      uint64_t *bytes_transferred)
>  {
> @@ -966,7 +979,7 @@ static int ram_save_compressed_page(MigrationState *ms, QEMUFile *f,
>           * out, keeping this order is important, because the 'cont' flag
>           * is used to avoid resending the block name.
>           */
> -        if (block != last_sent_block) {
> +        if (block != rs->last_sent_block) {
>              flush_compressed_data(f);
>              pages = save_zero_page(f, block, offset, p, bytes_transferred);
>              if (pages == -1) {
> @@ -1008,19 +1021,20 @@ static int ram_save_compressed_page(MigrationState *ms, QEMUFile *f,
>   *
>   * Returns: True if a page is found
>   *
> + * @rs: The RAM state
>   * @f: Current migration stream.
>   * @pss: Data about the state of the current dirty page scan.
>   * @*again: Set to false if the search has scanned the whole of RAM
>   * *ram_addr_abs: Pointer into which to store the address of the dirty page
>   *               within the global ram_addr space
>   */
> -static bool find_dirty_block(QEMUFile *f, PageSearchStatus *pss,
> +static bool find_dirty_block(RAMState *rs, QEMUFile *f, PageSearchStatus *pss,
>                               bool *again, ram_addr_t *ram_addr_abs)
>  {
> -    pss->offset = migration_bitmap_find_dirty(pss->block, pss->offset,
> +    pss->offset = migration_bitmap_find_dirty(rs, pss->block, pss->offset,
>                                                ram_addr_abs);
> -    if (pss->complete_round && pss->block == last_seen_block &&
> -        pss->offset >= last_offset) {
> +    if (pss->complete_round && pss->block == rs->last_seen_block &&
> +        pss->offset >= rs->last_offset) {
>          /*
>           * We've been once around the RAM and haven't found anything.
>           * Give up.
> @@ -1037,7 +1051,7 @@ static bool find_dirty_block(QEMUFile *f, PageSearchStatus *pss,
>              pss->block = QLIST_FIRST_RCU(&ram_list.blocks);
>              /* Flag that we've looped */
>              pss->complete_round = true;
> -            ram_bulk_stage = false;
> +            rs->ram_bulk_stage = false;
>              if (migrate_use_xbzrle()) {
>                  /* If xbzrle is on, stop using the data compression at this
>                   * point. In theory, xbzrle can do better than compression.
> @@ -1097,13 +1111,14 @@ static RAMBlock *unqueue_page(MigrationState *ms, ram_addr_t *offset,
>   * Unqueue a page from the queue fed by postcopy page requests; skips pages
>   * that are already sent (!dirty)
>   *
> + *      rs: The RAM state
>   *      ms:      MigrationState in
>   *     pss:      PageSearchStatus structure updated with found block/offset
>   * ram_addr_abs: global offset in the dirty/sent bitmaps
>   *
>   * Returns:      true if a queued page is found
>   */
> -static bool get_queued_page(MigrationState *ms, PageSearchStatus *pss,
> +static bool get_queued_page(RAMState *rs, MigrationState *ms, PageSearchStatus *pss,
>                              ram_addr_t *ram_addr_abs)
>  {
>      RAMBlock  *block;
> @@ -1144,7 +1159,7 @@ static bool get_queued_page(MigrationState *ms, PageSearchStatus *pss,
>           * in (migration_bitmap_find_and_reset_dirty) that every page is
>           * dirty, that's no longer true.
>           */
> -        ram_bulk_stage = false;
> +        rs->ram_bulk_stage = false;
>  
>          /*
>           * We want the background search to continue from the queued page
> @@ -1248,6 +1263,7 @@ err:
>   * ram_save_target_page: Save one target page
>   *
>   *
> + * @rs: The RAM state
>   * @f: QEMUFile where to send the data
>   * @block: pointer to block that contains the page we want to send
>   * @offset: offset inside the block for the page;
> @@ -1257,7 +1273,7 @@ err:
>   *
>   * Returns: Number of pages written.
>   */
> -static int ram_save_target_page(MigrationState *ms, QEMUFile *f,
> +static int ram_save_target_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
>                                  PageSearchStatus *pss,
>                                  bool last_stage,
>                                  uint64_t *bytes_transferred,
> @@ -1269,11 +1285,11 @@ static int ram_save_target_page(MigrationState *ms, QEMUFile *f,
>      if (migration_bitmap_clear_dirty(dirty_ram_abs)) {
>          unsigned long *unsentmap;
>          if (compression_switch && migrate_use_compression()) {
> -            res = ram_save_compressed_page(ms, f, pss,
> +            res = ram_save_compressed_page(rs, ms, f, pss,
>                                             last_stage,
>                                             bytes_transferred);
>          } else {
> -            res = ram_save_page(ms, f, pss, last_stage,
> +            res = ram_save_page(rs, ms, f, pss, last_stage,
>                                  bytes_transferred);
>          }
>  
> @@ -1289,7 +1305,7 @@ static int ram_save_target_page(MigrationState *ms, QEMUFile *f,
>           * to the stream.
>           */
>          if (res > 0) {
> -            last_sent_block = pss->block;
> +            rs->last_sent_block = pss->block;
>          }
>      }
>  
> @@ -1307,6 +1323,7 @@ static int ram_save_target_page(MigrationState *ms, QEMUFile *f,
>   *
>   * Returns: Number of pages written.
>   *
> + * @rs: The RAM state
>   * @f: QEMUFile where to send the data
>   * @block: pointer to block that contains the page we want to send
>   * @offset: offset inside the block for the page; updated to last target page
> @@ -1315,7 +1332,7 @@ static int ram_save_target_page(MigrationState *ms, QEMUFile *f,
>   * @bytes_transferred: increase it with the number of transferred bytes
>   * @dirty_ram_abs: Address of the start of the dirty page in ram_addr_t space
>   */
> -static int ram_save_host_page(MigrationState *ms, QEMUFile *f,
> +static int ram_save_host_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
>                                PageSearchStatus *pss,
>                                bool last_stage,
>                                uint64_t *bytes_transferred,
> @@ -1325,7 +1342,7 @@ static int ram_save_host_page(MigrationState *ms, QEMUFile *f,
>      size_t pagesize = qemu_ram_pagesize(pss->block);
>  
>      do {
> -        tmppages = ram_save_target_page(ms, f, pss, last_stage,
> +        tmppages = ram_save_target_page(rs, ms, f, pss, last_stage,
>                                          bytes_transferred, dirty_ram_abs);
>          if (tmppages < 0) {
>              return tmppages;
> @@ -1349,6 +1366,7 @@ static int ram_save_host_page(MigrationState *ms, QEMUFile *f,
>   * Returns:  The number of pages written
>   *           0 means no dirty pages
>   *
> + * @rs: The RAM state
>   * @f: QEMUFile where to send the data
>   * @last_stage: if we are at the completion stage
>   * @bytes_transferred: increase it with the number of transferred bytes
> @@ -1357,7 +1375,7 @@ static int ram_save_host_page(MigrationState *ms, QEMUFile *f,
>   * pages in a host page that are dirty.
>   */
>  
> -static int ram_find_and_save_block(QEMUFile *f, bool last_stage,
> +static int ram_find_and_save_block(RAMState *rs, QEMUFile *f, bool last_stage,
>                                     uint64_t *bytes_transferred)
>  {
>      PageSearchStatus pss;
> @@ -1372,8 +1390,8 @@ static int ram_find_and_save_block(QEMUFile *f, bool last_stage,
>          return pages;
>      }
>  
> -    pss.block = last_seen_block;
> -    pss.offset = last_offset;
> +    pss.block = rs->last_seen_block;
> +    pss.offset = rs->last_offset;
>      pss.complete_round = false;
>  
>      if (!pss.block) {
> @@ -1382,22 +1400,22 @@ static int ram_find_and_save_block(QEMUFile *f, bool last_stage,
>  
>      do {
>          again = true;
> -        found = get_queued_page(ms, &pss, &dirty_ram_abs);
> +        found = get_queued_page(rs, ms, &pss, &dirty_ram_abs);
>  
>          if (!found) {
>              /* priority queue empty, so just search for something dirty */
> -            found = find_dirty_block(f, &pss, &again, &dirty_ram_abs);
> +            found = find_dirty_block(rs, f, &pss, &again, &dirty_ram_abs);
>          }
>  
>          if (found) {
> -            pages = ram_save_host_page(ms, f, &pss,
> +            pages = ram_save_host_page(rs, ms, f, &pss,
>                                         last_stage, bytes_transferred,
>                                         dirty_ram_abs);
>          }
>      } while (!pages && again);
>  
> -    last_seen_block = pss.block;
> -    last_offset = pss.offset;
> +    rs->last_seen_block = pss.block;
> +    rs->last_offset = pss.offset;
>  
>      return pages;
>  }
> @@ -1479,13 +1497,13 @@ static void ram_migration_cleanup(void *opaque)
>      XBZRLE_cache_unlock();
>  }
>  
> -static void reset_ram_globals(void)
> +static void ram_state_reset(RAMState *rs)
>  {
> -    last_seen_block = NULL;
> -    last_sent_block = NULL;
> -    last_offset = 0;
> -    last_version = ram_list.version;
> -    ram_bulk_stage = true;
> +    rs->last_seen_block = NULL;
> +    rs->last_sent_block = NULL;
> +    rs->last_offset = 0;
> +    rs->last_version = ram_list.version;
> +    rs->ram_bulk_stage = true;
>  }
>  
>  #define MAX_WAIT 50 /* ms, half buffered_file limit */
> @@ -1800,9 +1818,9 @@ static int postcopy_chunk_hostpages(MigrationState *ms)
>      struct RAMBlock *block;
>  
>      /* Easiest way to make sure we don't resume in the middle of a host-page */
> -    last_seen_block = NULL;
> -    last_sent_block = NULL;
> -    last_offset     = 0;
> +    ram_state.last_seen_block = NULL;
> +    ram_state.last_sent_block = NULL;
> +    ram_state.last_offset     = 0;
>  
>      QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
>          unsigned long first = block->offset >> TARGET_PAGE_BITS;
> @@ -1913,7 +1931,7 @@ err:
>      return ret;
>  }
>  
> -static int ram_save_init_globals(void)
> +static int ram_save_init_globals(RAMState *rs)
>  {
>      int64_t ram_bitmap_pages; /* Size of bitmap in pages, including gaps */
>  
> @@ -1959,7 +1977,7 @@ static int ram_save_init_globals(void)
>      qemu_mutex_lock_ramlist();
>      rcu_read_lock();
>      bytes_transferred = 0;
> -    reset_ram_globals();
> +    ram_state_reset(rs);
>  
>      migration_bitmap_rcu = g_new0(struct BitmapRcu, 1);
>      /* Skip setting bitmap if there is no RAM */
> @@ -1997,11 +2015,12 @@ static int ram_save_init_globals(void)
>  
>  static int ram_save_setup(QEMUFile *f, void *opaque)
>  {
> +    RAMState *rs = opaque;
>      RAMBlock *block;
>  
>      /* migration has already setup the bitmap, reuse it. */
>      if (!migration_in_colo_state()) {
> -        if (ram_save_init_globals() < 0) {
> +        if (ram_save_init_globals(rs) < 0) {
>              return -1;
>           }
>      }
> @@ -2031,14 +2050,15 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
>  
>  static int ram_save_iterate(QEMUFile *f, void *opaque)
>  {
> +    RAMState *rs = opaque;
>      int ret;
>      int i;
>      int64_t t0;
>      int done = 0;
>  
>      rcu_read_lock();
> -    if (ram_list.version != last_version) {
> -        reset_ram_globals();
> +    if (ram_list.version != rs->last_version) {
> +        ram_state_reset(rs);
>      }
>  
>      /* Read version before ram_list.blocks */
> @@ -2051,7 +2071,7 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
>      while ((ret = qemu_file_rate_limit(f)) == 0) {
>          int pages;
>  
> -        pages = ram_find_and_save_block(f, false, &bytes_transferred);
> +        pages = ram_find_and_save_block(rs, f, false, &bytes_transferred);
>          /* no more pages to sent */
>          if (pages == 0) {
>              done = 1;
> @@ -2096,6 +2116,8 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
>  /* Called with iothread lock */
>  static int ram_save_complete(QEMUFile *f, void *opaque)
>  {
> +    RAMState *rs = opaque;
> +    
>      rcu_read_lock();
>  
>      if (!migration_in_postcopy(migrate_get_current())) {
> @@ -2110,7 +2132,7 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
>      while (true) {
>          int pages;
>  
> -        pages = ram_find_and_save_block(f, !migration_in_colo_state(),
> +        pages = ram_find_and_save_block(rs, f, !migration_in_colo_state(),
>                                          &bytes_transferred);
>          /* no more blocks to sent */
>          if (pages == 0) {
> @@ -2675,5 +2697,5 @@ static SaveVMHandlers savevm_ram_handlers = {
>  void ram_mig_init(void)
>  {
>      qemu_mutex_init(&XBZRLE.lock);
> -    register_savevm_live(NULL, "ram", 0, 4, &savevm_ram_handlers, NULL);
> +    register_savevm_live(NULL, "ram", 0, 4, &savevm_ram_handlers, &ram_state);
>  }
> -- 
> 2.9.3
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [Qemu-devel] [PATCH 02/31] ram: Add dirty_rate_high_cnt to RAMState
  2017-03-15 13:49 ` [Qemu-devel] [PATCH 02/31] ram: Add dirty_rate_high_cnt to RAMState Juan Quintela
@ 2017-03-16 12:20   ` Dr. David Alan Gilbert
  2017-03-16 21:32     ` Philippe Mathieu-Daudé
  2017-03-20 19:39     ` Juan Quintela
  0 siblings, 2 replies; 68+ messages in thread
From: Dr. David Alan Gilbert @ 2017-03-16 12:20 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel, amit.shah

* Juan Quintela (quintela@redhat.com) wrote:
> We need to add a parameter to several functions to make this work.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> ---
>  migration/ram.c | 23 ++++++++++++-----------
>  1 file changed, 12 insertions(+), 11 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index c20a539..9120755 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -45,8 +45,6 @@
>  #include "qemu/rcu_queue.h"
>  #include "migration/colo.h"
>  
> -static int dirty_rate_high_cnt;
> -
>  static uint64_t bitmap_sync_count;
>  
>  /***********************************************************/
> @@ -148,6 +146,8 @@ struct RAMState {
>      uint32_t last_version;
>      /* We are in the first round */
>      bool ram_bulk_stage;
> +    /* How many times we have dirty too many pages */
> +    int dirty_rate_high_cnt;
>  };
>  typedef struct RAMState RAMState;
>  
> @@ -626,7 +626,7 @@ uint64_t ram_pagesize_summary(void)
>      return summary;
>  }
>  
> -static void migration_bitmap_sync(void)
> +static void migration_bitmap_sync(RAMState *rs)
>  {
>      RAMBlock *block;
>      uint64_t num_dirty_pages_init = migration_dirty_pages;
> @@ -673,9 +673,9 @@ static void migration_bitmap_sync(void)
>              if (s->dirty_pages_rate &&
>                 (num_dirty_pages_period * TARGET_PAGE_SIZE >
>                     (bytes_xfer_now - bytes_xfer_prev)/2) &&
> -               (dirty_rate_high_cnt++ >= 2)) {
> +               (rs->dirty_rate_high_cnt++ >= 2)) {
>                      trace_migration_throttle();
> -                    dirty_rate_high_cnt = 0;
> +                    rs->dirty_rate_high_cnt = 0;
>                      mig_throttle_guest_down();
>               }
>               bytes_xfer_prev = bytes_xfer_now;
> @@ -1859,7 +1859,7 @@ int ram_postcopy_send_discard_bitmap(MigrationState *ms)
>      rcu_read_lock();
>  
>      /* This should be our last sync, the src is now paused */
> -    migration_bitmap_sync();
> +    migration_bitmap_sync(&ram_state);
>  
>      unsentmap = atomic_rcu_read(&migration_bitmap_rcu)->unsentmap;
>      if (!unsentmap) {
> @@ -1935,7 +1935,7 @@ static int ram_save_init_globals(RAMState *rs)
>  {
>      int64_t ram_bitmap_pages; /* Size of bitmap in pages, including gaps */
>  
> -    dirty_rate_high_cnt = 0;
> +    rs->dirty_rate_high_cnt = 0;
>      bitmap_sync_count = 0;
>      migration_bitmap_sync_init();
>      qemu_mutex_init(&migration_bitmap_mutex);
> @@ -1999,7 +1999,7 @@ static int ram_save_init_globals(RAMState *rs)
>      migration_dirty_pages = ram_bytes_total() >> TARGET_PAGE_BITS;
>  
>      memory_global_dirty_log_start();
> -    migration_bitmap_sync();
> +    migration_bitmap_sync(rs);
>      qemu_mutex_unlock_ramlist();
>      qemu_mutex_unlock_iothread();
>      rcu_read_unlock();
> @@ -2117,11 +2117,11 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
>  static int ram_save_complete(QEMUFile *f, void *opaque)
>  {
>      RAMState *rs = opaque;
> -    
> +

Is that undoing false spaces from the previous patch?

anyway,
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

>      rcu_read_lock();
>  
>      if (!migration_in_postcopy(migrate_get_current())) {
> -        migration_bitmap_sync();
> +        migration_bitmap_sync(rs);
>      }
>  
>      ram_control_before_iterate(f, RAM_CONTROL_FINISH);
> @@ -2154,6 +2154,7 @@ static void ram_save_pending(QEMUFile *f, void *opaque, uint64_t max_size,
>                               uint64_t *non_postcopiable_pending,
>                               uint64_t *postcopiable_pending)
>  {
> +    RAMState *rs = opaque;
>      uint64_t remaining_size;
>  
>      remaining_size = ram_save_remaining() * TARGET_PAGE_SIZE;
> @@ -2162,7 +2163,7 @@ static void ram_save_pending(QEMUFile *f, void *opaque, uint64_t max_size,
>          remaining_size < max_size) {
>          qemu_mutex_lock_iothread();
>          rcu_read_lock();
> -        migration_bitmap_sync();
> +        migration_bitmap_sync(rs);
>          rcu_read_unlock();
>          qemu_mutex_unlock_iothread();
>          remaining_size = ram_save_remaining() * TARGET_PAGE_SIZE;
> -- 
> 2.9.3
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [Qemu-devel] [PATCH 03/31] ram: move bitmap_sync_count into RAMState
  2017-03-15 13:49 ` [Qemu-devel] [PATCH 03/31] ram: move bitmap_sync_count into RAMState Juan Quintela
@ 2017-03-16 12:21   ` Dr. David Alan Gilbert
  2017-03-16 21:33     ` Philippe Mathieu-Daudé
  0 siblings, 1 reply; 68+ messages in thread
From: Dr. David Alan Gilbert @ 2017-03-16 12:21 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel, amit.shah

* Juan Quintela (quintela@redhat.com) wrote:
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  migration/ram.c | 22 +++++++++++-----------
>  1 file changed, 11 insertions(+), 11 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index 9120755..c0bee94 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -45,8 +45,6 @@
>  #include "qemu/rcu_queue.h"
>  #include "migration/colo.h"
>  
> -static uint64_t bitmap_sync_count;
> -
>  /***********************************************************/
>  /* ram save/restore */
>  
> @@ -148,6 +146,8 @@ struct RAMState {
>      bool ram_bulk_stage;
>      /* How many times we have dirty too many pages */
>      int dirty_rate_high_cnt;
> +    /* How many times we have synchronized the bitmap */
> +    uint64_t bitmap_sync_count;
>  };
>  typedef struct RAMState RAMState;
>  
> @@ -455,7 +455,7 @@ static void xbzrle_cache_zero_page(RAMState *rs, ram_addr_t current_addr)
>      /* We don't care if this fails to allocate a new cache page
>       * as long as it updated an old one */
>      cache_insert(XBZRLE.cache, current_addr, ZERO_TARGET_PAGE,
> -                 bitmap_sync_count);
> +                 rs->bitmap_sync_count);
>  }
>  
>  #define ENCODING_FLAG_XBZRLE 0x1
> @@ -475,7 +475,7 @@ static void xbzrle_cache_zero_page(RAMState *rs, ram_addr_t current_addr)
>   * @last_stage: if we are at the completion stage
>   * @bytes_transferred: increase it with the number of transferred bytes
>   */
> -static int save_xbzrle_page(QEMUFile *f, uint8_t **current_data,
> +static int save_xbzrle_page(QEMUFile *f, RAMState *rs, uint8_t **current_data,
>                              ram_addr_t current_addr, RAMBlock *block,
>                              ram_addr_t offset, bool last_stage,
>                              uint64_t *bytes_transferred)
> @@ -483,11 +483,11 @@ static int save_xbzrle_page(QEMUFile *f, uint8_t **current_data,
>      int encoded_len = 0, bytes_xbzrle;
>      uint8_t *prev_cached_page;
>  
> -    if (!cache_is_cached(XBZRLE.cache, current_addr, bitmap_sync_count)) {
> +    if (!cache_is_cached(XBZRLE.cache, current_addr, rs->bitmap_sync_count)) {
>          acct_info.xbzrle_cache_miss++;
>          if (!last_stage) {
>              if (cache_insert(XBZRLE.cache, current_addr, *current_data,
> -                             bitmap_sync_count) == -1) {
> +                             rs->bitmap_sync_count) == -1) {
>                  return -1;
>              } else {
>                  /* update *current_data when the page has been
> @@ -634,7 +634,7 @@ static void migration_bitmap_sync(RAMState *rs)
>      int64_t end_time;
>      int64_t bytes_xfer_now;
>  
> -    bitmap_sync_count++;
> +    rs->bitmap_sync_count++;
>  
>      if (!bytes_xfer_prev) {
>          bytes_xfer_prev = ram_bytes_transferred();
> @@ -697,9 +697,9 @@ static void migration_bitmap_sync(RAMState *rs)
>          start_time = end_time;
>          num_dirty_pages_period = 0;
>      }
> -    s->dirty_sync_count = bitmap_sync_count;
> +    s->dirty_sync_count = rs->bitmap_sync_count;
>      if (migrate_use_events()) {
> -        qapi_event_send_migration_pass(bitmap_sync_count, NULL);
> +        qapi_event_send_migration_pass(rs->bitmap_sync_count, NULL);
>      }
>  }
>  
> @@ -806,7 +806,7 @@ static int ram_save_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
>              ram_release_pages(ms, block->idstr, pss->offset, pages);
>          } else if (!rs->ram_bulk_stage &&
>                     !migration_in_postcopy(ms) && migrate_use_xbzrle()) {
> -            pages = save_xbzrle_page(f, &p, current_addr, block,
> +            pages = save_xbzrle_page(f, rs, &p, current_addr, block,
>                                       offset, last_stage, bytes_transferred);
>              if (!last_stage) {
>                  /* Can't send this cached data async, since the cache page
> @@ -1936,7 +1936,7 @@ static int ram_save_init_globals(RAMState *rs)
>      int64_t ram_bitmap_pages; /* Size of bitmap in pages, including gaps */
>  
>      rs->dirty_rate_high_cnt = 0;
> -    bitmap_sync_count = 0;
> +    rs->bitmap_sync_count = 0;
>      migration_bitmap_sync_init();
>      qemu_mutex_init(&migration_bitmap_mutex);
>  
> -- 
> 2.9.3
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [Qemu-devel] [PATCH 04/31] ram: Move start time into RAMState
  2017-03-15 13:49 ` [Qemu-devel] [PATCH 04/31] ram: Move start time " Juan Quintela
@ 2017-03-16 12:21   ` Dr. David Alan Gilbert
  2017-03-16 21:33     ` Philippe Mathieu-Daudé
  0 siblings, 1 reply; 68+ messages in thread
From: Dr. David Alan Gilbert @ 2017-03-16 12:21 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel

* Juan Quintela (quintela@redhat.com) wrote:
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  migration/ram.c | 20 +++++++++++---------
>  1 file changed, 11 insertions(+), 9 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index c0bee94..f6ac503 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -148,6 +148,9 @@ struct RAMState {
>      int dirty_rate_high_cnt;
>      /* How many times we have synchronized the bitmap */
>      uint64_t bitmap_sync_count;
> +    /* this variables are used for bitmap sync */
> +    /* last time we did a full bitmap_sync */
> +    int64_t start_time;
>  };
>  typedef struct RAMState RAMState;
>  
> @@ -594,15 +597,14 @@ static void migration_bitmap_sync_range(ram_addr_t start, ram_addr_t length)
>  }
>  
>  /* Fix me: there are too many global variables used in migration process. */
> -static int64_t start_time;
>  static int64_t bytes_xfer_prev;
>  static int64_t num_dirty_pages_period;
>  static uint64_t xbzrle_cache_miss_prev;
>  static uint64_t iterations_prev;
>  
> -static void migration_bitmap_sync_init(void)
> +static void migration_bitmap_sync_init(RAMState *rs)
>  {
> -    start_time = 0;
> +    rs->start_time = 0;
>      bytes_xfer_prev = 0;
>      num_dirty_pages_period = 0;
>      xbzrle_cache_miss_prev = 0;
> @@ -640,8 +642,8 @@ static void migration_bitmap_sync(RAMState *rs)
>          bytes_xfer_prev = ram_bytes_transferred();
>      }
>  
> -    if (!start_time) {
> -        start_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
> +    if (!rs->start_time) {
> +        rs->start_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
>      }
>  
>      trace_migration_bitmap_sync_start();
> @@ -661,7 +663,7 @@ static void migration_bitmap_sync(RAMState *rs)
>      end_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
>  
>      /* more than 1 second = 1000 millisecons */
> -    if (end_time > start_time + 1000) {
> +    if (end_time > rs->start_time + 1000) {
>          if (migrate_auto_converge()) {
>              /* The following detection logic can be refined later. For now:
>                 Check to see if the dirtied bytes is 50% more than the approx.
> @@ -692,9 +694,9 @@ static void migration_bitmap_sync(RAMState *rs)
>              xbzrle_cache_miss_prev = acct_info.xbzrle_cache_miss;
>          }
>          s->dirty_pages_rate = num_dirty_pages_period * 1000
> -            / (end_time - start_time);
> +            / (end_time - rs->start_time);
>          s->dirty_bytes_rate = s->dirty_pages_rate * TARGET_PAGE_SIZE;
> -        start_time = end_time;
> +        rs->start_time = end_time;
>          num_dirty_pages_period = 0;
>      }
>      s->dirty_sync_count = rs->bitmap_sync_count;
> @@ -1937,7 +1939,7 @@ static int ram_save_init_globals(RAMState *rs)
>  
>      rs->dirty_rate_high_cnt = 0;
>      rs->bitmap_sync_count = 0;
> -    migration_bitmap_sync_init();
> +    migration_bitmap_sync_init(rs);
>      qemu_mutex_init(&migration_bitmap_mutex);
>  
>      if (migrate_use_xbzrle()) {
> -- 
> 2.9.3
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [Qemu-devel] [PATCH 05/31] ram: Move bytes_xfer_prev into RAMState
  2017-03-15 13:49 ` [Qemu-devel] [PATCH 05/31] ram: Move bytes_xfer_prev " Juan Quintela
@ 2017-03-16 12:22   ` Dr. David Alan Gilbert
  2017-03-16 21:34     ` Philippe Mathieu-Daudé
  0 siblings, 1 reply; 68+ messages in thread
From: Dr. David Alan Gilbert @ 2017-03-16 12:22 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel, amit.shah

* Juan Quintela (quintela@redhat.com) wrote:
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  migration/ram.c | 13 +++++++------
>  1 file changed, 7 insertions(+), 6 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index f6ac503..2d288cc 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -151,6 +151,8 @@ struct RAMState {
>      /* this variables are used for bitmap sync */
>      /* last time we did a full bitmap_sync */
>      int64_t start_time;
> +    /* bytes transferred at start_time */
> +    int64_t bytes_xfer_prev;
>  };
>  typedef struct RAMState RAMState;
>  
> @@ -597,7 +599,6 @@ static void migration_bitmap_sync_range(ram_addr_t start, ram_addr_t length)
>  }
>  
>  /* Fix me: there are too many global variables used in migration process. */
> -static int64_t bytes_xfer_prev;
>  static int64_t num_dirty_pages_period;
>  static uint64_t xbzrle_cache_miss_prev;
>  static uint64_t iterations_prev;
> @@ -605,7 +606,7 @@ static uint64_t iterations_prev;
>  static void migration_bitmap_sync_init(RAMState *rs)
>  {
>      rs->start_time = 0;
> -    bytes_xfer_prev = 0;
> +    rs->bytes_xfer_prev = 0;
>      num_dirty_pages_period = 0;
>      xbzrle_cache_miss_prev = 0;
>      iterations_prev = 0;
> @@ -638,8 +639,8 @@ static void migration_bitmap_sync(RAMState *rs)
>  
>      rs->bitmap_sync_count++;
>  
> -    if (!bytes_xfer_prev) {
> -        bytes_xfer_prev = ram_bytes_transferred();
> +    if (!rs->bytes_xfer_prev) {
> +        rs->bytes_xfer_prev = ram_bytes_transferred();
>      }
>  
>      if (!rs->start_time) {
> @@ -674,13 +675,13 @@ static void migration_bitmap_sync(RAMState *rs)
>  
>              if (s->dirty_pages_rate &&
>                 (num_dirty_pages_period * TARGET_PAGE_SIZE >
> -                   (bytes_xfer_now - bytes_xfer_prev)/2) &&
> +                   (bytes_xfer_now - rs->bytes_xfer_prev)/2) &&
>                 (rs->dirty_rate_high_cnt++ >= 2)) {
>                      trace_migration_throttle();
>                      rs->dirty_rate_high_cnt = 0;
>                      mig_throttle_guest_down();
>               }
> -             bytes_xfer_prev = bytes_xfer_now;
> +             rs->bytes_xfer_prev = bytes_xfer_now;
>          }
>  
>          if (migrate_use_xbzrle()) {
> -- 
> 2.9.3
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [Qemu-devel] [PATCH 06/31] ram: Move num_dirty_pages_period into RAMState
  2017-03-15 13:49 ` [Qemu-devel] [PATCH 06/31] ram: Move num_dirty_pages_period " Juan Quintela
@ 2017-03-16 12:23   ` Dr. David Alan Gilbert
  2017-03-16 21:35     ` Philippe Mathieu-Daudé
  0 siblings, 1 reply; 68+ messages in thread
From: Dr. David Alan Gilbert @ 2017-03-16 12:23 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel

* Juan Quintela (quintela@redhat.com) wrote:
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

(This series could be fewer patches...)

> ---
>  migration/ram.c | 13 +++++++------
>  1 file changed, 7 insertions(+), 6 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index 2d288cc..b13d2d5 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -153,6 +153,8 @@ struct RAMState {
>      int64_t start_time;
>      /* bytes transferred at start_time */
>      int64_t bytes_xfer_prev;
> +    /* number of dirty pages since start_time */
> +    int64_t num_dirty_pages_period;
>  };
>  typedef struct RAMState RAMState;
>  
> @@ -599,7 +601,6 @@ static void migration_bitmap_sync_range(ram_addr_t start, ram_addr_t length)
>  }
>  
>  /* Fix me: there are too many global variables used in migration process. */
> -static int64_t num_dirty_pages_period;
>  static uint64_t xbzrle_cache_miss_prev;
>  static uint64_t iterations_prev;
>  
> @@ -607,7 +608,7 @@ static void migration_bitmap_sync_init(RAMState *rs)
>  {
>      rs->start_time = 0;
>      rs->bytes_xfer_prev = 0;
> -    num_dirty_pages_period = 0;
> +    rs->num_dirty_pages_period = 0;
>      xbzrle_cache_miss_prev = 0;
>      iterations_prev = 0;
>  }
> @@ -660,7 +661,7 @@ static void migration_bitmap_sync(RAMState *rs)
>  
>      trace_migration_bitmap_sync_end(migration_dirty_pages
>                                      - num_dirty_pages_init);
> -    num_dirty_pages_period += migration_dirty_pages - num_dirty_pages_init;
> +    rs->num_dirty_pages_period += migration_dirty_pages - num_dirty_pages_init;
>      end_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
>  
>      /* more than 1 second = 1000 millisecons */
> @@ -674,7 +675,7 @@ static void migration_bitmap_sync(RAMState *rs)
>              bytes_xfer_now = ram_bytes_transferred();
>  
>              if (s->dirty_pages_rate &&
> -               (num_dirty_pages_period * TARGET_PAGE_SIZE >
> +               (rs->num_dirty_pages_period * TARGET_PAGE_SIZE >
>                     (bytes_xfer_now - rs->bytes_xfer_prev)/2) &&
>                 (rs->dirty_rate_high_cnt++ >= 2)) {
>                      trace_migration_throttle();
> @@ -694,11 +695,11 @@ static void migration_bitmap_sync(RAMState *rs)
>              iterations_prev = acct_info.iterations;
>              xbzrle_cache_miss_prev = acct_info.xbzrle_cache_miss;
>          }
> -        s->dirty_pages_rate = num_dirty_pages_period * 1000
> +        s->dirty_pages_rate = rs->num_dirty_pages_period * 1000
>              / (end_time - rs->start_time);
>          s->dirty_bytes_rate = s->dirty_pages_rate * TARGET_PAGE_SIZE;
>          rs->start_time = end_time;
> -        num_dirty_pages_period = 0;
> +        rs->num_dirty_pages_period = 0;
>      }
>      s->dirty_sync_count = rs->bitmap_sync_count;
>      if (migrate_use_events()) {
> -- 
> 2.9.3
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [Qemu-devel] [PATCH 07/31] ram: Move xbzrle_cache_miss_prev into RAMState
  2017-03-15 13:49 ` [Qemu-devel] [PATCH 07/31] ram: Move xbzrle_cache_miss_prev " Juan Quintela
@ 2017-03-16 12:24   ` Dr. David Alan Gilbert
  2017-03-16 21:35     ` Philippe Mathieu-Daudé
  0 siblings, 1 reply; 68+ messages in thread
From: Dr. David Alan Gilbert @ 2017-03-16 12:24 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel

* Juan Quintela (quintela@redhat.com) wrote:
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> ---
>  migration/ram.c | 9 +++++----
>  1 file changed, 5 insertions(+), 4 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index b13d2d5..ae077c5 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -155,6 +155,8 @@ struct RAMState {
>      int64_t bytes_xfer_prev;
>      /* number of dirty pages since start_time */
>      int64_t num_dirty_pages_period;
> +    /* xbzrle misses since the beggining of the period */
                                    ^--- extra g

Other than that,
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> +    uint64_t xbzrle_cache_miss_prev;
>  };
>  typedef struct RAMState RAMState;
>  
> @@ -601,7 +603,6 @@ static void migration_bitmap_sync_range(ram_addr_t start, ram_addr_t length)
>  }
>  
>  /* Fix me: there are too many global variables used in migration process. */
> -static uint64_t xbzrle_cache_miss_prev;
>  static uint64_t iterations_prev;
>  
>  static void migration_bitmap_sync_init(RAMState *rs)
> @@ -609,7 +610,7 @@ static void migration_bitmap_sync_init(RAMState *rs)
>      rs->start_time = 0;
>      rs->bytes_xfer_prev = 0;
>      rs->num_dirty_pages_period = 0;
> -    xbzrle_cache_miss_prev = 0;
> +    rs->xbzrle_cache_miss_prev = 0;
>      iterations_prev = 0;
>  }
>  
> @@ -689,11 +690,11 @@ static void migration_bitmap_sync(RAMState *rs)
>              if (iterations_prev != acct_info.iterations) {
>                  acct_info.xbzrle_cache_miss_rate =
>                     (double)(acct_info.xbzrle_cache_miss -
> -                            xbzrle_cache_miss_prev) /
> +                            rs->xbzrle_cache_miss_prev) /
>                     (acct_info.iterations - iterations_prev);
>              }
>              iterations_prev = acct_info.iterations;
> -            xbzrle_cache_miss_prev = acct_info.xbzrle_cache_miss;
> +            rs->xbzrle_cache_miss_prev = acct_info.xbzrle_cache_miss;
>          }
>          s->dirty_pages_rate = rs->num_dirty_pages_period * 1000
>              / (end_time - rs->start_time);
> -- 
> 2.9.3
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [Qemu-devel] [PATCH 08/31] ram: Move iterations_prev into RAMState
  2017-03-15 13:49 ` [Qemu-devel] [PATCH 08/31] ram: Move iterations_prev " Juan Quintela
@ 2017-03-16 12:26   ` Dr. David Alan Gilbert
  2017-03-16 21:36     ` Philippe Mathieu-Daudé
  0 siblings, 1 reply; 68+ messages in thread
From: Dr. David Alan Gilbert @ 2017-03-16 12:26 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel

* Juan Quintela (quintela@redhat.com) wrote:
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> ---
>  migration/ram.c | 13 ++++++-------
>  1 file changed, 6 insertions(+), 7 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index ae077c5..6cdad06 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -157,6 +157,8 @@ struct RAMState {
>      int64_t num_dirty_pages_period;
>      /* xbzrle misses since the beggining of the period */
>      uint64_t xbzrle_cache_miss_prev;
> +    /* number of iterations at the beggining of period */
                                         ^  ^ 
                                         One extra g, one missing n

Other than that,
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> +    uint64_t iterations_prev;
>  };
>  typedef struct RAMState RAMState;
>  
> @@ -602,16 +604,13 @@ static void migration_bitmap_sync_range(ram_addr_t start, ram_addr_t length)
>          cpu_physical_memory_sync_dirty_bitmap(bitmap, start, length);
>  }
>  
> -/* Fix me: there are too many global variables used in migration process. */
> -static uint64_t iterations_prev;
> -
>  static void migration_bitmap_sync_init(RAMState *rs)
>  {
>      rs->start_time = 0;
>      rs->bytes_xfer_prev = 0;
>      rs->num_dirty_pages_period = 0;
>      rs->xbzrle_cache_miss_prev = 0;
> -    iterations_prev = 0;
> +    rs->iterations_prev = 0;
>  }
>  
>  /* Returns a summary bitmap of the page sizes of all RAMBlocks;
> @@ -687,13 +686,13 @@ static void migration_bitmap_sync(RAMState *rs)
>          }
>  
>          if (migrate_use_xbzrle()) {
> -            if (iterations_prev != acct_info.iterations) {
> +            if (rs->iterations_prev != acct_info.iterations) {
>                  acct_info.xbzrle_cache_miss_rate =
>                     (double)(acct_info.xbzrle_cache_miss -
>                              rs->xbzrle_cache_miss_prev) /
> -                   (acct_info.iterations - iterations_prev);
> +                   (acct_info.iterations - rs->iterations_prev);
>              }
> -            iterations_prev = acct_info.iterations;
> +            rs->iterations_prev = acct_info.iterations;
>              rs->xbzrle_cache_miss_prev = acct_info.xbzrle_cache_miss;
>          }
>          s->dirty_pages_rate = rs->num_dirty_pages_period * 1000
> -- 
> 2.9.3
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [Qemu-devel] [PATCH 09/31] ram: Move dup_pages into RAMState
  2017-03-15 13:49 ` [Qemu-devel] [PATCH 09/31] ram: Move dup_pages " Juan Quintela
@ 2017-03-16 12:27   ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 68+ messages in thread
From: Dr. David Alan Gilbert @ 2017-03-16 12:27 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel

* Juan Quintela (quintela@redhat.com) wrote:
> Once there, rename it to its actual meaning, zero_pages.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  migration/ram.c | 26 +++++++++++++++-----------
>  1 file changed, 15 insertions(+), 11 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index 6cdad06..059e9f1 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -159,6 +159,9 @@ struct RAMState {
>      uint64_t xbzrle_cache_miss_prev;
>      /* number of iterations at the beggining of period */
>      uint64_t iterations_prev;
> +    /* Accounting fields */
> +    /* number of zero pages.  It used to be pages filled by the same char. */
> +    uint64_t zero_pages;
>  };
>  typedef struct RAMState RAMState;
>  
> @@ -166,7 +169,6 @@ static RAMState ram_state;
>  
>  /* accounting for migration statistics */
>  typedef struct AccountingInfo {
> -    uint64_t dup_pages;
>      uint64_t skipped_pages;
>      uint64_t norm_pages;
>      uint64_t iterations;
> @@ -186,12 +188,12 @@ static void acct_clear(void)
>  
>  uint64_t dup_mig_bytes_transferred(void)
>  {
> -    return acct_info.dup_pages * TARGET_PAGE_SIZE;
> +    return ram_state.zero_pages * TARGET_PAGE_SIZE;
>  }
>  
>  uint64_t dup_mig_pages_transferred(void)
>  {
> -    return acct_info.dup_pages;
> +    return ram_state.zero_pages;
>  }
>  
>  uint64_t skipped_mig_bytes_transferred(void)
> @@ -718,13 +720,14 @@ static void migration_bitmap_sync(RAMState *rs)
>   * @p: pointer to the page
>   * @bytes_transferred: increase it with the number of transferred bytes
>   */
> -static int save_zero_page(QEMUFile *f, RAMBlock *block, ram_addr_t offset,
> +static int save_zero_page(RAMState *rs, QEMUFile *f, RAMBlock *block,
> +                          ram_addr_t offset,
>                            uint8_t *p, uint64_t *bytes_transferred)
>  {
>      int pages = -1;
>  
>      if (is_zero_range(p, TARGET_PAGE_SIZE)) {
> -        acct_info.dup_pages++;
> +        rs->zero_pages++;
>          *bytes_transferred += save_page_header(f, block,
>                                                 offset | RAM_SAVE_FLAG_COMPRESS);
>          qemu_put_byte(f, 0);
> @@ -797,11 +800,11 @@ static int ram_save_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
>              if (bytes_xmit > 0) {
>                  acct_info.norm_pages++;
>              } else if (bytes_xmit == 0) {
> -                acct_info.dup_pages++;
> +                rs->zero_pages++;
>              }
>          }
>      } else {
> -        pages = save_zero_page(f, block, offset, p, bytes_transferred);
> +        pages = save_zero_page(rs, f, block, offset, p, bytes_transferred);
>          if (pages > 0) {
>              /* Must let xbzrle know, otherwise a previous (now 0'd) cached
>               * page would be stale
> @@ -973,7 +976,7 @@ static int ram_save_compressed_page(RAMState *rs, MigrationState *ms,
>              if (bytes_xmit > 0) {
>                  acct_info.norm_pages++;
>              } else if (bytes_xmit == 0) {
> -                acct_info.dup_pages++;
> +                rs->zero_pages++;
>              }
>          }
>      } else {
> @@ -985,7 +988,7 @@ static int ram_save_compressed_page(RAMState *rs, MigrationState *ms,
>           */
>          if (block != rs->last_sent_block) {
>              flush_compressed_data(f);
> -            pages = save_zero_page(f, block, offset, p, bytes_transferred);
> +            pages = save_zero_page(rs, f, block, offset, p, bytes_transferred);
>              if (pages == -1) {
>                  /* Make sure the first page is sent out before other pages */
>                  bytes_xmit = save_page_header(f, block, offset |
> @@ -1006,7 +1009,7 @@ static int ram_save_compressed_page(RAMState *rs, MigrationState *ms,
>              }
>          } else {
>              offset |= RAM_SAVE_FLAG_CONTINUE;
> -            pages = save_zero_page(f, block, offset, p, bytes_transferred);
> +            pages = save_zero_page(rs, f, block, offset, p, bytes_transferred);
>              if (pages == -1) {
>                  pages = compress_page_with_multi_thread(f, block, offset,
>                                                          bytes_transferred);
> @@ -1428,7 +1431,7 @@ void acct_update_position(QEMUFile *f, size_t size, bool zero)
>  {
>      uint64_t pages = size / TARGET_PAGE_SIZE;
>      if (zero) {
> -        acct_info.dup_pages += pages;
> +        ram_state.zero_pages += pages;
>      } else {
>          acct_info.norm_pages += pages;
>          bytes_transferred += size;
> @@ -1941,6 +1944,7 @@ static int ram_save_init_globals(RAMState *rs)
>  
>      rs->dirty_rate_high_cnt = 0;
>      rs->bitmap_sync_count = 0;
> +    rs->zero_pages = 0;
>      migration_bitmap_sync_init(rs);
>      qemu_mutex_init(&migration_bitmap_mutex);
>  
> -- 
> 2.9.3
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [Qemu-devel] [PATCH 10/31] ram: Remove unused dump_mig_dbytes_transferred()
  2017-03-15 13:50 ` [Qemu-devel] [PATCH 10/31] ram: Remove unused dump_mig_dbytes_transferred() Juan Quintela
@ 2017-03-16 15:48   ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 68+ messages in thread
From: Dr. David Alan Gilbert @ 2017-03-16 15:48 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel

* Juan Quintela (quintela@redhat.com) wrote:
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Subject has a couple of typos; otherwise,

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>


> ---
>  include/migration/migration.h | 1 -
>  migration/ram.c               | 5 -----
>  2 files changed, 6 deletions(-)
> 
> diff --git a/include/migration/migration.h b/include/migration/migration.h
> index 5720c88..3e6bb68 100644
> --- a/include/migration/migration.h
> +++ b/include/migration/migration.h
> @@ -276,7 +276,6 @@ void free_xbzrle_decoded_buf(void);
>  
>  void acct_update_position(QEMUFile *f, size_t size, bool zero);
>  
> -uint64_t dup_mig_bytes_transferred(void);
>  uint64_t dup_mig_pages_transferred(void);
>  uint64_t skipped_mig_bytes_transferred(void);
>  uint64_t skipped_mig_pages_transferred(void);
> diff --git a/migration/ram.c b/migration/ram.c
> index 059e9f1..83fe20a 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -186,11 +186,6 @@ static void acct_clear(void)
>      memset(&acct_info, 0, sizeof(acct_info));
>  }
>  
> -uint64_t dup_mig_bytes_transferred(void)
> -{
> -    return ram_state.zero_pages * TARGET_PAGE_SIZE;
> -}
> -
>  uint64_t dup_mig_pages_transferred(void)
>  {
>      return ram_state.zero_pages;
> -- 
> 2.9.3
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [Qemu-devel] [PATCH 11/31] ram: Remove unused pages_skiped variable
  2017-03-15 13:50 ` [Qemu-devel] [PATCH 11/31] ram: Remove unused pages_skiped variable Juan Quintela
@ 2017-03-16 15:52   ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 68+ messages in thread
From: Dr. David Alan Gilbert @ 2017-03-16 15:52 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel

* Juan Quintela (quintela@redhat.com) wrote:
> For compatibility, we need to still send a value, but just specify it
> and comment the fact.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Note missing p in subject,

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  include/migration/migration.h |  2 --
>  migration/migration.c         |  3 ++-
>  migration/ram.c               | 11 -----------
>  3 files changed, 2 insertions(+), 14 deletions(-)
> 
> diff --git a/include/migration/migration.h b/include/migration/migration.h
> index 3e6bb68..9c83951 100644
> --- a/include/migration/migration.h
> +++ b/include/migration/migration.h
> @@ -277,8 +277,6 @@ void free_xbzrle_decoded_buf(void);
>  void acct_update_position(QEMUFile *f, size_t size, bool zero);
>  
>  uint64_t dup_mig_pages_transferred(void);
> -uint64_t skipped_mig_bytes_transferred(void);
> -uint64_t skipped_mig_pages_transferred(void);
>  uint64_t norm_mig_bytes_transferred(void);
>  uint64_t norm_mig_pages_transferred(void);
>  uint64_t xbzrle_mig_bytes_transferred(void);
> diff --git a/migration/migration.c b/migration/migration.c
> index 3dab684..c3e1b95 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -639,7 +639,8 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
>      info->ram->transferred = ram_bytes_transferred();
>      info->ram->total = ram_bytes_total();
>      info->ram->duplicate = dup_mig_pages_transferred();
> -    info->ram->skipped = skipped_mig_pages_transferred();
> +    /* legacy value.  It is not used anymore */
> +    info->ram->skipped = 0;
>      info->ram->normal = norm_mig_pages_transferred();
>      info->ram->normal_bytes = norm_mig_bytes_transferred();
>      info->ram->mbps = s->mbps;
> diff --git a/migration/ram.c b/migration/ram.c
> index 83fe20a..468f042 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -169,7 +169,6 @@ static RAMState ram_state;
>  
>  /* accounting for migration statistics */
>  typedef struct AccountingInfo {
> -    uint64_t skipped_pages;
>      uint64_t norm_pages;
>      uint64_t iterations;
>      uint64_t xbzrle_bytes;
> @@ -191,16 +190,6 @@ uint64_t dup_mig_pages_transferred(void)
>      return ram_state.zero_pages;
>  }
>  
> -uint64_t skipped_mig_bytes_transferred(void)
> -{
> -    return acct_info.skipped_pages * TARGET_PAGE_SIZE;
> -}
> -
> -uint64_t skipped_mig_pages_transferred(void)
> -{
> -    return acct_info.skipped_pages;
> -}
> -
>  uint64_t norm_mig_bytes_transferred(void)
>  {
>      return acct_info.norm_pages * TARGET_PAGE_SIZE;
> -- 
> 2.9.3
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [Qemu-devel] [PATCH 12/31] ram: Move norm_pages to RAMState
  2017-03-15 13:50 ` [Qemu-devel] [PATCH 12/31] ram: Move norm_pages to RAMState Juan Quintela
@ 2017-03-16 16:09   ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 68+ messages in thread
From: Dr. David Alan Gilbert @ 2017-03-16 16:09 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel

* Juan Quintela (quintela@redhat.com) wrote:
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  migration/ram.c | 26 ++++++++++++++------------
>  1 file changed, 14 insertions(+), 12 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index 468f042..58c7dc7 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -162,6 +162,8 @@ struct RAMState {
>      /* Accounting fields */
>      /* number of zero pages.  It used to be pages filled by the same char. */
>      uint64_t zero_pages;
> +    /* number of normal transferred pages */
> +    uint64_t norm_pages;
>  };
>  typedef struct RAMState RAMState;
>  
> @@ -169,7 +171,6 @@ static RAMState ram_state;
>  
>  /* accounting for migration statistics */
>  typedef struct AccountingInfo {
> -    uint64_t norm_pages;
>      uint64_t iterations;
>      uint64_t xbzrle_bytes;
>      uint64_t xbzrle_pages;
> @@ -192,12 +193,12 @@ uint64_t dup_mig_pages_transferred(void)
>  
>  uint64_t norm_mig_bytes_transferred(void)
>  {
> -    return acct_info.norm_pages * TARGET_PAGE_SIZE;
> +    return ram_state.norm_pages * TARGET_PAGE_SIZE;
>  }
>  
>  uint64_t norm_mig_pages_transferred(void)
>  {
> -    return acct_info.norm_pages;
> +    return ram_state.norm_pages;
>  }
>  
>  uint64_t xbzrle_mig_bytes_transferred(void)
> @@ -782,7 +783,7 @@ static int ram_save_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
>      if (ret != RAM_SAVE_CONTROL_NOT_SUPP) {
>          if (ret != RAM_SAVE_CONTROL_DELAYED) {
>              if (bytes_xmit > 0) {
> -                acct_info.norm_pages++;
> +                rs->norm_pages++;
>              } else if (bytes_xmit == 0) {
>                  rs->zero_pages++;
>              }
> @@ -821,7 +822,7 @@ static int ram_save_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
>          }
>          *bytes_transferred += TARGET_PAGE_SIZE;
>          pages = 1;
> -        acct_info.norm_pages++;
> +        rs->norm_pages++;
>      }
>  
>      XBZRLE_cache_unlock();
> @@ -888,8 +889,8 @@ static inline void set_compress_params(CompressParam *param, RAMBlock *block,
>      param->offset = offset;
>  }
>  
> -static int compress_page_with_multi_thread(QEMUFile *f, RAMBlock *block,
> -                                           ram_addr_t offset,
> +static int compress_page_with_multi_thread(RAMState *rs, QEMUFile *f,
> +                                           RAMBlock *block, ram_addr_t offset,
>                                             uint64_t *bytes_transferred)
>  {
>      int idx, thread_count, bytes_xmit = -1, pages = -1;
> @@ -906,7 +907,7 @@ static int compress_page_with_multi_thread(QEMUFile *f, RAMBlock *block,
>                  qemu_cond_signal(&comp_param[idx].cond);
>                  qemu_mutex_unlock(&comp_param[idx].mutex);
>                  pages = 1;
> -                acct_info.norm_pages++;
> +                rs->norm_pages++;
>                  *bytes_transferred += bytes_xmit;
>                  break;
>              }
> @@ -958,7 +959,7 @@ static int ram_save_compressed_page(RAMState *rs, MigrationState *ms,
>      if (ret != RAM_SAVE_CONTROL_NOT_SUPP) {
>          if (ret != RAM_SAVE_CONTROL_DELAYED) {
>              if (bytes_xmit > 0) {
> -                acct_info.norm_pages++;
> +                rs->norm_pages++;
>              } else if (bytes_xmit == 0) {
>                  rs->zero_pages++;
>              }
> @@ -981,7 +982,7 @@ static int ram_save_compressed_page(RAMState *rs, MigrationState *ms,
>                                                   migrate_compress_level());
>                  if (blen > 0) {
>                      *bytes_transferred += bytes_xmit + blen;
> -                    acct_info.norm_pages++;
> +                    rs->norm_pages++;
>                      pages = 1;
>                  } else {
>                      qemu_file_set_error(f, blen);
> @@ -995,7 +996,7 @@ static int ram_save_compressed_page(RAMState *rs, MigrationState *ms,
>              offset |= RAM_SAVE_FLAG_CONTINUE;
>              pages = save_zero_page(rs, f, block, offset, p, bytes_transferred);
>              if (pages == -1) {
> -                pages = compress_page_with_multi_thread(f, block, offset,
> +                pages = compress_page_with_multi_thread(rs, f, block, offset,
>                                                          bytes_transferred);
>              } else {
>                  ram_release_pages(ms, block->idstr, pss->offset, pages);
> @@ -1417,7 +1418,7 @@ void acct_update_position(QEMUFile *f, size_t size, bool zero)
>      if (zero) {
>          ram_state.zero_pages += pages;
>      } else {
> -        acct_info.norm_pages += pages;
> +        ram_state.norm_pages += pages;
>          bytes_transferred += size;
>          qemu_update_position(f, size);
>      }
> @@ -1929,6 +1930,7 @@ static int ram_save_init_globals(RAMState *rs)
>      rs->dirty_rate_high_cnt = 0;
>      rs->bitmap_sync_count = 0;
>      rs->zero_pages = 0;
> +    rs->norm_pages = 0;
>      migration_bitmap_sync_init(rs);
>      qemu_mutex_init(&migration_bitmap_mutex);
>  
> -- 
> 2.9.3
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [Qemu-devel] [PATCH 13/31] ram: Remove norm_mig_bytes_transferred
  2017-03-15 13:50 ` [Qemu-devel] [PATCH 13/31] ram: Remove norm_mig_bytes_transferred Juan Quintela
@ 2017-03-16 16:14   ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 68+ messages in thread
From: Dr. David Alan Gilbert @ 2017-03-16 16:14 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel

* Juan Quintela (quintela@redhat.com) wrote:
> Its value can be calculated from other exported values.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  include/migration/migration.h | 1 -
>  migration/migration.c         | 3 ++-
>  migration/ram.c               | 5 -----
>  3 files changed, 2 insertions(+), 7 deletions(-)
> 
> diff --git a/include/migration/migration.h b/include/migration/migration.h
> index 9c83951..84cef4b 100644
> --- a/include/migration/migration.h
> +++ b/include/migration/migration.h
> @@ -277,7 +277,6 @@ void free_xbzrle_decoded_buf(void);
>  void acct_update_position(QEMUFile *f, size_t size, bool zero);
>  
>  uint64_t dup_mig_pages_transferred(void);
> -uint64_t norm_mig_bytes_transferred(void);
>  uint64_t norm_mig_pages_transferred(void);
>  uint64_t xbzrle_mig_bytes_transferred(void);
>  uint64_t xbzrle_mig_pages_transferred(void);
> diff --git a/migration/migration.c b/migration/migration.c
> index c3e1b95..46645b6 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -642,7 +642,8 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
>      /* legacy value.  It is not used anymore */
>      info->ram->skipped = 0;
>      info->ram->normal = norm_mig_pages_transferred();
> -    info->ram->normal_bytes = norm_mig_bytes_transferred();
> +    info->ram->normal_bytes = norm_mig_pages_transferred() *
> +        (1ul << qemu_target_page_bits());
>      info->ram->mbps = s->mbps;
>      info->ram->dirty_sync_count = s->dirty_sync_count;
>      info->ram->postcopy_requests = s->postcopy_requests;
> diff --git a/migration/ram.c b/migration/ram.c
> index 58c7dc7..8caeb4f 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -191,11 +191,6 @@ uint64_t dup_mig_pages_transferred(void)
>      return ram_state.zero_pages;
>  }
>  
> -uint64_t norm_mig_bytes_transferred(void)
> -{
> -    return ram_state.norm_pages * TARGET_PAGE_SIZE;
> -}
> -
>  uint64_t norm_mig_pages_transferred(void)
>  {
>      return ram_state.norm_pages;
> -- 
> 2.9.3
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [Qemu-devel] [PATCH 14/31] ram: Move iterations into RAMState
  2017-03-15 13:50 ` [Qemu-devel] [PATCH 14/31] ram: Move iterations into RAMState Juan Quintela
@ 2017-03-16 20:04   ` Dr. David Alan Gilbert
  2017-03-16 21:40     ` Philippe Mathieu-Daudé
  0 siblings, 1 reply; 68+ messages in thread
From: Dr. David Alan Gilbert @ 2017-03-16 20:04 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel

* Juan Quintela (quintela@redhat.com) wrote:
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> ---
>  migration/ram.c | 12 +++++++-----
>  1 file changed, 7 insertions(+), 5 deletions(-)

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> 
> diff --git a/migration/ram.c b/migration/ram.c
> index 8caeb4f..234bdba 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -164,6 +164,8 @@ struct RAMState {
>      uint64_t zero_pages;
>      /* number of normal transferred pages */
>      uint64_t norm_pages;
> +    /* Iterations since start */
> +    uint64_t iterations;
>  };
>  typedef struct RAMState RAMState;
>  
> @@ -171,7 +173,6 @@ static RAMState ram_state;
>  
>  /* accounting for migration statistics */
>  typedef struct AccountingInfo {
> -    uint64_t iterations;
>      uint64_t xbzrle_bytes;
>      uint64_t xbzrle_pages;
>      uint64_t xbzrle_cache_miss;
> @@ -668,13 +669,13 @@ static void migration_bitmap_sync(RAMState *rs)
>          }
>  
>          if (migrate_use_xbzrle()) {
> -            if (rs->iterations_prev != acct_info.iterations) {
> +            if (rs->iterations_prev != rs->iterations) {
>                  acct_info.xbzrle_cache_miss_rate =
>                     (double)(acct_info.xbzrle_cache_miss -
>                              rs->xbzrle_cache_miss_prev) /
> -                   (acct_info.iterations - rs->iterations_prev);
> +                   (rs->iterations - rs->iterations_prev);
>              }
> -            rs->iterations_prev = acct_info.iterations;
> +            rs->iterations_prev = rs->iterations;
>              rs->xbzrle_cache_miss_prev = acct_info.xbzrle_cache_miss;
>          }
>          s->dirty_pages_rate = rs->num_dirty_pages_period * 1000
> @@ -1926,6 +1927,7 @@ static int ram_save_init_globals(RAMState *rs)
>      rs->bitmap_sync_count = 0;
>      rs->zero_pages = 0;
>      rs->norm_pages = 0;
> +    rs->iterations = 0;
>      migration_bitmap_sync_init(rs);
>      qemu_mutex_init(&migration_bitmap_mutex);
>  
> @@ -2066,7 +2068,7 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
>              done = 1;
>              break;
>          }
> -        acct_info.iterations++;
> +        rs->iterations++;
>  
>          /* we want to check in the 1st loop, just in case it was the 1st time
>             and we had to sync the dirty bitmap.
> -- 
> 2.9.3
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [Qemu-devel] [PATCH 19/31] ram: move xbzrle_overflows into RAMState
  2017-03-15 13:50 ` [Qemu-devel] [PATCH 19/31] ram: move xbzrle_overflows " Juan Quintela
@ 2017-03-16 20:07   ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 68+ messages in thread
From: Dr. David Alan Gilbert @ 2017-03-16 20:07 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel

* Juan Quintela (quintela@redhat.com) wrote:
> Once there, remove the now unused AccountingInfo struct and var.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> ---
>  migration/ram.c | 21 +++++----------------
>  1 file changed, 5 insertions(+), 16 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index 23a7317..75ad17f 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -174,23 +174,13 @@ struct RAMState {
>      uint64_t xbzrle_cache_miss;
>      /* xbzrle miss rate */
>      double xbzrle_cache_miss_rate;
> +    /* xbzrle number of overflows */
> +    uint64_t xbzrle_overflows;
>  };
>  typedef struct RAMState RAMState;
>  
>  static RAMState ram_state;
>  
> -/* accounting for migration statistics */
> -typedef struct AccountingInfo {
> -    uint64_t xbzrle_overflows;
> -} AccountingInfo;
> -
> -static AccountingInfo acct_info;
> -
> -static void acct_clear(void)
> -{
> -    memset(&acct_info, 0, sizeof(acct_info));
> -}
> -
>  uint64_t dup_mig_pages_transferred(void)
>  {
>      return ram_state.zero_pages;
> @@ -223,7 +213,7 @@ double xbzrle_mig_cache_miss_rate(void)
>  
>  uint64_t xbzrle_mig_pages_overflow(void)
>  {
> -    return acct_info.xbzrle_overflows;
> +    return ram_state.xbzrle_overflows;
>  }

That's a bit naughty, isn't it - I thought you were trying to get rid of all
the global accesses?
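
(A minimal sketch of the shape I'd expect eventually - assuming the
callers in migration.c get hold of the RAMState somehow, which they
don't today, so take this as illustration rather than a concrete ask:)

    uint64_t xbzrle_mig_pages_overflow(RAMState *rs)
    {
        return rs->xbzrle_overflows;
    }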

Dave

>  static QemuMutex migration_bitmap_mutex;
> @@ -510,7 +500,7 @@ static int save_xbzrle_page(QEMUFile *f, RAMState *rs, uint8_t **current_data,
>          return 0;
>      } else if (encoded_len == -1) {
>          trace_save_xbzrle_page_overflow();
> -        acct_info.xbzrle_overflows++;
> +        rs->xbzrle_overflows++;
>          /* update data in the cache */
>          if (!last_stage) {
>              memcpy(prev_cached_page, *current_data, TARGET_PAGE_SIZE);
> @@ -1936,6 +1926,7 @@ static int ram_save_init_globals(RAMState *rs)
>      rs->xbzrle_pages = 0;
>      rs->xbzrle_cache_miss = 0;
>      rs->xbzrle_cache_miss_rate = 0;
> +    rs->xbzrle_overflows = 0;
>      migration_bitmap_sync_init(rs);
>      qemu_mutex_init(&migration_bitmap_mutex);
>  
> @@ -1966,8 +1957,6 @@ static int ram_save_init_globals(RAMState *rs)
>              XBZRLE.encoded_buf = NULL;
>              return -1;
>          }
> -
> -        acct_clear();
>      }
>  
>      /* For memory_global_dirty_log_start below.  */
> -- 
> 2.9.3
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [Qemu-devel] [PATCH 21/31] ram: Everything was init to zero, so use memset
  2017-03-15 13:50 ` [Qemu-devel] [PATCH 21/31] ram: Everything was init to zero, so use memset Juan Quintela
@ 2017-03-16 20:15   ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 68+ messages in thread
From: Dr. David Alan Gilbert @ 2017-03-16 20:15 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel

* Juan Quintela (quintela@redhat.com) wrote:
> And then init only things that are not zero by default.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  migration/ram.c | 25 +++----------------------
>  1 file changed, 3 insertions(+), 22 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index 606e836..7f56b5f 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -588,15 +588,6 @@ static void migration_bitmap_sync_range(RAMState *rs, ram_addr_t start,
>          cpu_physical_memory_sync_dirty_bitmap(bitmap, start, length);
>  }
>  
> -static void migration_bitmap_sync_init(RAMState *rs)
> -{
> -    rs->start_time = 0;
> -    rs->bytes_xfer_prev = 0;
> -    rs->num_dirty_pages_period = 0;
> -    rs->xbzrle_cache_miss_prev = 0;
> -    rs->iterations_prev = 0;
> -}
> -
>  /* Returns a summary bitmap of the page sizes of all RAMBlocks;
>   * for VMs with just normal pages this is equivalent to the
>   * host page size.  If it's got some huge pages then it's the OR
> @@ -1915,21 +1906,11 @@ err:
>      return ret;
>  }
>  
> -static int ram_save_init_globals(RAMState *rs)
> +static int ram_state_init(RAMState *rs)
>  {
>      int64_t ram_bitmap_pages; /* Size of bitmap in pages, including gaps */
>  
> -    rs->dirty_rate_high_cnt = 0;
> -    rs->bitmap_sync_count = 0;
> -    rs->zero_pages = 0;
> -    rs->norm_pages = 0;
> -    rs->iterations = 0;
> -    rs->xbzrle_bytes = 0;
> -    rs->xbzrle_pages = 0;
> -    rs->xbzrle_cache_miss = 0;
> -    rs->xbzrle_cache_miss_rate = 0;
> -    rs->xbzrle_overflows = 0;
> -    migration_bitmap_sync_init(rs);
> +    memset(rs, 0, sizeof(*rs));
>      qemu_mutex_init(&migration_bitmap_mutex);
>  
>      if (migrate_use_xbzrle()) {
> @@ -2010,7 +1991,7 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
>  
>      /* migration has already setup the bitmap, reuse it. */
>      if (!migration_in_colo_state()) {
> -        if (ram_save_init_globals(rs) < 0) {
> +        if (ram_state_init(rs) < 0) {
>              return -1;
>           }
>      }
> -- 
> 2.9.3
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [Qemu-devel] [PATCH 22/31] ram: move migration_bitmap_mutex into RAMState
  2017-03-15 13:50 ` [Qemu-devel] [PATCH 22/31] ram: move migration_bitmap_mutex into RAMState Juan Quintela
@ 2017-03-16 20:21   ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 68+ messages in thread
From: Dr. David Alan Gilbert @ 2017-03-16 20:21 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel, amit.shah

* Juan Quintela (quintela@redhat.com) wrote:
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> ---
>  migration/ram.c | 14 +++++++-------
>  1 file changed, 7 insertions(+), 7 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index 7f56b5f..c14293c 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -178,6 +178,8 @@ struct RAMState {
>      uint64_t xbzrle_overflows;
>      /* number of dirty bits in the bitmap */
>      uint64_t migration_dirty_pages;
> +    /* protects modification of the bitmap */
> +    QemuMutex bitmap_mutex;
>  };
>  typedef struct RAMState RAMState;
>  
> @@ -223,8 +225,6 @@ static ram_addr_t ram_save_remaining(void)
>      return ram_state.migration_dirty_pages;
>  }
>  
> -static QemuMutex migration_bitmap_mutex;
> -
>  /* used by the search for pages to send */
>  struct PageSearchStatus {
>      /* Current block being searched */
> @@ -626,13 +626,13 @@ static void migration_bitmap_sync(RAMState *rs)
>      trace_migration_bitmap_sync_start();
>      memory_global_dirty_log_sync();
>  
> -    qemu_mutex_lock(&migration_bitmap_mutex);
> +    qemu_mutex_lock(&rs->bitmap_mutex);
>      rcu_read_lock();
>      QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
>          migration_bitmap_sync_range(rs, block->offset, block->used_length);
>      }
>      rcu_read_unlock();
> -    qemu_mutex_unlock(&migration_bitmap_mutex);
> +    qemu_mutex_unlock(&rs->bitmap_mutex);
>  
>      trace_migration_bitmap_sync_end(rs->migration_dirty_pages
>                                      - num_dirty_pages_init);
> @@ -1498,7 +1498,7 @@ void migration_bitmap_extend(ram_addr_t old, ram_addr_t new)
>           * it is safe to migration if migration_bitmap is cleared bit
>           * at the same time.
>           */
> -        qemu_mutex_lock(&migration_bitmap_mutex);
> +        qemu_mutex_lock(&ram_state.bitmap_mutex);
>          bitmap_copy(bitmap->bmap, old_bitmap->bmap, old);
>          bitmap_set(bitmap->bmap, old, new - old);
>  
> @@ -1509,7 +1509,7 @@ void migration_bitmap_extend(ram_addr_t old, ram_addr_t new)
>          bitmap->unsentmap = NULL;
>  
>          atomic_rcu_set(&migration_bitmap_rcu, bitmap);
> -        qemu_mutex_unlock(&migration_bitmap_mutex);
> +        qemu_mutex_unlock(&ram_state.bitmap_mutex);
>          ram_state.migration_dirty_pages += new - old;
>          call_rcu(old_bitmap, migration_bitmap_free, rcu);
>      }
> @@ -1911,7 +1911,7 @@ static int ram_state_init(RAMState *rs)
>      int64_t ram_bitmap_pages; /* Size of bitmap in pages, including gaps */
>  
>      memset(rs, 0, sizeof(*rs));
> -    qemu_mutex_init(&migration_bitmap_mutex);
> +    qemu_mutex_init(&rs->bitmap_mutex);

Hmm - this isn't new, but....
ram_state_init() is called from ram_save_setup(); I don't see any
qemu_mutex_destroy() anywhere on bitmap_mutex.
So if you migrate, fail and then try again, will you end up
calling qemu_mutex_init() twice on that bitmap_mutex without
having destroyed it? And now you'll also have memset over it without
having destroyed it (that's new).

Dave
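
(For illustration only: a minimal sketch of the lifecycle issue raised
above, assuming ram_state_init() can run once per migration attempt.
The helper name and the init_done guard are hypothetical, not part of
the patch; they only show one way to keep a repeated setup call from
re-initialising, and now memset-ing, a still-live mutex.)

    /* If setup runs again after a failed migration, destroy the old
     * mutex before wiping the state and initialising it again. */
    static void ram_state_init_sketch(RAMState *rs)
    {
        static bool init_done;                 /* hypothetical guard */

        if (init_done) {
            qemu_mutex_destroy(&rs->bitmap_mutex);
        }
        memset(rs, 0, sizeof(*rs));
        qemu_mutex_init(&rs->bitmap_mutex);
        init_done = true;
    }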

>      if (migrate_use_xbzrle()) {
>          XBZRLE_cache_lock();
> -- 
> 2.9.3
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [Qemu-devel] [PATCH 01/31] ram: move more fields into RAMState
  2017-03-16 12:09   ` Dr. David Alan Gilbert
@ 2017-03-16 21:32     ` Philippe Mathieu-Daudé
  2017-03-20 19:36     ` Juan Quintela
  1 sibling, 0 replies; 68+ messages in thread
From: Philippe Mathieu-Daudé @ 2017-03-16 21:32 UTC (permalink / raw)
  To: Dr. David Alan Gilbert, Juan Quintela; +Cc: qemu-devel

On 03/16/2017 09:09 AM, Dr. David Alan Gilbert wrote:
> * Juan Quintela (quintela@redhat.com) wrote:
>> last_seen_block, last_sent_block, last_offset, last_version and
>> ram_bulk_stage are globals that are really related together.
>>
>> Signed-off-by: Juan Quintela <quintela@redhat.com>
>> ---
>>  migration/ram.c | 136 ++++++++++++++++++++++++++++++++------------------------
>>  1 file changed, 79 insertions(+), 57 deletions(-)
>>
>> diff --git a/migration/ram.c b/migration/ram.c
>> index 719425b..c20a539 100644
>> --- a/migration/ram.c
>> +++ b/migration/ram.c
>> @@ -136,6 +136,23 @@ out:
>>      return ret;
>>  }
>>
>> +/* State of RAM for migration */
>> +struct RAMState {
>> +    /* Last block that we have visited searching for dirty pages */
>> +    RAMBlock    *last_seen_block;
>> +    /* Last block from where we have sent data */
>> +    RAMBlock *last_sent_block;
>> +    /* Last offeset we have sent data from */
>                   ^
>                   One extra e
>
> Other than that (and the minor formatting things the bot found)
>
> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
>

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

>> +    ram_addr_t last_offset;
>> +    /* last ram version we have seen */
>> +    uint32_t last_version;
>> +    /* We are in the first round */
>> +    bool ram_bulk_stage;
>> +};
>> +typedef struct RAMState RAMState;
>> +
>> +static RAMState ram_state;
>> +
>>  /* accounting for migration statistics */
>>  typedef struct AccountingInfo {
>>      uint64_t dup_pages;
>> @@ -211,16 +228,8 @@ uint64_t xbzrle_mig_pages_overflow(void)
>>      return acct_info.xbzrle_overflows;
>>  }
>>
>> -/* This is the last block that we have visited serching for dirty pages
>> - */
>> -static RAMBlock *last_seen_block;
>> -/* This is the last block from where we have sent data */
>> -static RAMBlock *last_sent_block;
>> -static ram_addr_t last_offset;
>>  static QemuMutex migration_bitmap_mutex;
>>  static uint64_t migration_dirty_pages;
>> -static uint32_t last_version;
>> -static bool ram_bulk_stage;
>>
>>  /* used by the search for pages to send */
>>  struct PageSearchStatus {
>> @@ -437,9 +446,9 @@ static void mig_throttle_guest_down(void)
>>   * As a bonus, if the page wasn't in the cache it gets added so that
>>   * when a small write is made into the 0'd page it gets XBZRLE sent
>>   */
>> -static void xbzrle_cache_zero_page(ram_addr_t current_addr)
>> +static void xbzrle_cache_zero_page(RAMState *rs, ram_addr_t current_addr)
>>  {
>> -    if (ram_bulk_stage || !migrate_use_xbzrle()) {
>> +    if (rs->ram_bulk_stage || !migrate_use_xbzrle()) {
>>          return;
>>      }
>>
>> @@ -539,7 +548,7 @@ static int save_xbzrle_page(QEMUFile *f, uint8_t **current_data,
>>   * Returns: byte offset within memory region of the start of a dirty page
>>   */
>>  static inline
>> -ram_addr_t migration_bitmap_find_dirty(RAMBlock *rb,
>> +ram_addr_t migration_bitmap_find_dirty(RAMState *rs, RAMBlock *rb,
>>                                         ram_addr_t start,
>>                                         ram_addr_t *ram_addr_abs)
>>  {
>> @@ -552,7 +561,7 @@ ram_addr_t migration_bitmap_find_dirty(RAMBlock *rb,
>>      unsigned long next;
>>
>>      bitmap = atomic_rcu_read(&migration_bitmap_rcu)->bmap;
>> -    if (ram_bulk_stage && nr > base) {
>> +    if (rs->ram_bulk_stage && nr > base) {
>>          next = nr + 1;
>>      } else {
>>          next = find_next_bit(bitmap, size, nr);
>> @@ -740,6 +749,7 @@ static void ram_release_pages(MigrationState *ms, const char *block_name,
>>   *          >=0 - Number of pages written - this might legally be 0
>>   *                if xbzrle noticed the page was the same.
>>   *
>> + * @rs: The RAM state
>>   * @ms: The current migration state.
>>   * @f: QEMUFile where to send the data
>>   * @block: block that contains the page we want to send
>> @@ -747,8 +757,9 @@ static void ram_release_pages(MigrationState *ms, const char *block_name,
>>   * @last_stage: if we are at the completion stage
>>   * @bytes_transferred: increase it with the number of transferred bytes
>>   */
>> -static int ram_save_page(MigrationState *ms, QEMUFile *f, PageSearchStatus *pss,
>> -                         bool last_stage, uint64_t *bytes_transferred)
>> +static int ram_save_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
>> +                         PageSearchStatus *pss, bool last_stage,
>> +                         uint64_t *bytes_transferred)
>>  {
>>      int pages = -1;
>>      uint64_t bytes_xmit;
>> @@ -774,7 +785,7 @@ static int ram_save_page(MigrationState *ms, QEMUFile *f, PageSearchStatus *pss,
>>
>>      current_addr = block->offset + offset;
>>
>> -    if (block == last_sent_block) {
>> +    if (block == rs->last_sent_block) {
>>          offset |= RAM_SAVE_FLAG_CONTINUE;
>>      }
>>      if (ret != RAM_SAVE_CONTROL_NOT_SUPP) {
>> @@ -791,9 +802,9 @@ static int ram_save_page(MigrationState *ms, QEMUFile *f, PageSearchStatus *pss,
>>              /* Must let xbzrle know, otherwise a previous (now 0'd) cached
>>               * page would be stale
>>               */
>> -            xbzrle_cache_zero_page(current_addr);
>> +            xbzrle_cache_zero_page(rs, current_addr);
>>              ram_release_pages(ms, block->idstr, pss->offset, pages);
>> -        } else if (!ram_bulk_stage &&
>> +        } else if (!rs->ram_bulk_stage &&
>>                     !migration_in_postcopy(ms) && migrate_use_xbzrle()) {
>>              pages = save_xbzrle_page(f, &p, current_addr, block,
>>                                       offset, last_stage, bytes_transferred);
>> @@ -925,6 +936,7 @@ static int compress_page_with_multi_thread(QEMUFile *f, RAMBlock *block,
>>   *
>>   * Returns: Number of pages written.
>>   *
>> + * @rs: The RAM state
>>   * @ms: The current migration state.
>>   * @f: QEMUFile where to send the data
>>   * @block: block that contains the page we want to send
>> @@ -932,7 +944,8 @@ static int compress_page_with_multi_thread(QEMUFile *f, RAMBlock *block,
>>   * @last_stage: if we are at the completion stage
>>   * @bytes_transferred: increase it with the number of transferred bytes
>>   */
>> -static int ram_save_compressed_page(MigrationState *ms, QEMUFile *f,
>> +static int ram_save_compressed_page(RAMState *rs, MigrationState *ms,
>> +                                    QEMUFile *f,
>>                                      PageSearchStatus *pss, bool last_stage,
>>                                      uint64_t *bytes_transferred)
>>  {
>> @@ -966,7 +979,7 @@ static int ram_save_compressed_page(MigrationState *ms, QEMUFile *f,
>>           * out, keeping this order is important, because the 'cont' flag
>>           * is used to avoid resending the block name.
>>           */
>> -        if (block != last_sent_block) {
>> +        if (block != rs->last_sent_block) {
>>              flush_compressed_data(f);
>>              pages = save_zero_page(f, block, offset, p, bytes_transferred);
>>              if (pages == -1) {
>> @@ -1008,19 +1021,20 @@ static int ram_save_compressed_page(MigrationState *ms, QEMUFile *f,
>>   *
>>   * Returns: True if a page is found
>>   *
>> + * @rs: The RAM state
>>   * @f: Current migration stream.
>>   * @pss: Data about the state of the current dirty page scan.
>>   * @*again: Set to false if the search has scanned the whole of RAM
>>   * *ram_addr_abs: Pointer into which to store the address of the dirty page
>>   *               within the global ram_addr space
>>   */
>> -static bool find_dirty_block(QEMUFile *f, PageSearchStatus *pss,
>> +static bool find_dirty_block(RAMState *rs, QEMUFile *f, PageSearchStatus *pss,
>>                               bool *again, ram_addr_t *ram_addr_abs)
>>  {
>> -    pss->offset = migration_bitmap_find_dirty(pss->block, pss->offset,
>> +    pss->offset = migration_bitmap_find_dirty(rs, pss->block, pss->offset,
>>                                                ram_addr_abs);
>> -    if (pss->complete_round && pss->block == last_seen_block &&
>> -        pss->offset >= last_offset) {
>> +    if (pss->complete_round && pss->block == rs->last_seen_block &&
>> +        pss->offset >= rs->last_offset) {
>>          /*
>>           * We've been once around the RAM and haven't found anything.
>>           * Give up.
>> @@ -1037,7 +1051,7 @@ static bool find_dirty_block(QEMUFile *f, PageSearchStatus *pss,
>>              pss->block = QLIST_FIRST_RCU(&ram_list.blocks);
>>              /* Flag that we've looped */
>>              pss->complete_round = true;
>> -            ram_bulk_stage = false;
>> +            rs->ram_bulk_stage = false;
>>              if (migrate_use_xbzrle()) {
>>                  /* If xbzrle is on, stop using the data compression at this
>>                   * point. In theory, xbzrle can do better than compression.
>> @@ -1097,13 +1111,14 @@ static RAMBlock *unqueue_page(MigrationState *ms, ram_addr_t *offset,
>>   * Unqueue a page from the queue fed by postcopy page requests; skips pages
>>   * that are already sent (!dirty)
>>   *
>> + *      rs: The RAM state
>>   *      ms:      MigrationState in
>>   *     pss:      PageSearchStatus structure updated with found block/offset
>>   * ram_addr_abs: global offset in the dirty/sent bitmaps
>>   *
>>   * Returns:      true if a queued page is found
>>   */
>> -static bool get_queued_page(MigrationState *ms, PageSearchStatus *pss,
>> +static bool get_queued_page(RAMState *rs, MigrationState *ms, PageSearchStatus *pss,
>>                              ram_addr_t *ram_addr_abs)
>>  {
>>      RAMBlock  *block;
>> @@ -1144,7 +1159,7 @@ static bool get_queued_page(MigrationState *ms, PageSearchStatus *pss,
>>           * in (migration_bitmap_find_and_reset_dirty) that every page is
>>           * dirty, that's no longer true.
>>           */
>> -        ram_bulk_stage = false;
>> +        rs->ram_bulk_stage = false;
>>
>>          /*
>>           * We want the background search to continue from the queued page
>> @@ -1248,6 +1263,7 @@ err:
>>   * ram_save_target_page: Save one target page
>>   *
>>   *
>> + * @rs: The RAM state
>>   * @f: QEMUFile where to send the data
>>   * @block: pointer to block that contains the page we want to send
>>   * @offset: offset inside the block for the page;
>> @@ -1257,7 +1273,7 @@ err:
>>   *
>>   * Returns: Number of pages written.
>>   */
>> -static int ram_save_target_page(MigrationState *ms, QEMUFile *f,
>> +static int ram_save_target_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
>>                                  PageSearchStatus *pss,
>>                                  bool last_stage,
>>                                  uint64_t *bytes_transferred,
>> @@ -1269,11 +1285,11 @@ static int ram_save_target_page(MigrationState *ms, QEMUFile *f,
>>      if (migration_bitmap_clear_dirty(dirty_ram_abs)) {
>>          unsigned long *unsentmap;
>>          if (compression_switch && migrate_use_compression()) {
>> -            res = ram_save_compressed_page(ms, f, pss,
>> +            res = ram_save_compressed_page(rs, ms, f, pss,
>>                                             last_stage,
>>                                             bytes_transferred);
>>          } else {
>> -            res = ram_save_page(ms, f, pss, last_stage,
>> +            res = ram_save_page(rs, ms, f, pss, last_stage,
>>                                  bytes_transferred);
>>          }
>>
>> @@ -1289,7 +1305,7 @@ static int ram_save_target_page(MigrationState *ms, QEMUFile *f,
>>           * to the stream.
>>           */
>>          if (res > 0) {
>> -            last_sent_block = pss->block;
>> +            rs->last_sent_block = pss->block;
>>          }
>>      }
>>
>> @@ -1307,6 +1323,7 @@ static int ram_save_target_page(MigrationState *ms, QEMUFile *f,
>>   *
>>   * Returns: Number of pages written.
>>   *
>> + * @rs: The RAM state
>>   * @f: QEMUFile where to send the data
>>   * @block: pointer to block that contains the page we want to send
>>   * @offset: offset inside the block for the page; updated to last target page
>> @@ -1315,7 +1332,7 @@ static int ram_save_target_page(MigrationState *ms, QEMUFile *f,
>>   * @bytes_transferred: increase it with the number of transferred bytes
>>   * @dirty_ram_abs: Address of the start of the dirty page in ram_addr_t space
>>   */
>> -static int ram_save_host_page(MigrationState *ms, QEMUFile *f,
>> +static int ram_save_host_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
>>                                PageSearchStatus *pss,
>>                                bool last_stage,
>>                                uint64_t *bytes_transferred,
>> @@ -1325,7 +1342,7 @@ static int ram_save_host_page(MigrationState *ms, QEMUFile *f,
>>      size_t pagesize = qemu_ram_pagesize(pss->block);
>>
>>      do {
>> -        tmppages = ram_save_target_page(ms, f, pss, last_stage,
>> +        tmppages = ram_save_target_page(rs, ms, f, pss, last_stage,
>>                                          bytes_transferred, dirty_ram_abs);
>>          if (tmppages < 0) {
>>              return tmppages;
>> @@ -1349,6 +1366,7 @@ static int ram_save_host_page(MigrationState *ms, QEMUFile *f,
>>   * Returns:  The number of pages written
>>   *           0 means no dirty pages
>>   *
>> + * @rs: The RAM state
>>   * @f: QEMUFile where to send the data
>>   * @last_stage: if we are at the completion stage
>>   * @bytes_transferred: increase it with the number of transferred bytes
>> @@ -1357,7 +1375,7 @@ static int ram_save_host_page(MigrationState *ms, QEMUFile *f,
>>   * pages in a host page that are dirty.
>>   */
>>
>> -static int ram_find_and_save_block(QEMUFile *f, bool last_stage,
>> +static int ram_find_and_save_block(RAMState *rs, QEMUFile *f, bool last_stage,
>>                                     uint64_t *bytes_transferred)
>>  {
>>      PageSearchStatus pss;
>> @@ -1372,8 +1390,8 @@ static int ram_find_and_save_block(QEMUFile *f, bool last_stage,
>>          return pages;
>>      }
>>
>> -    pss.block = last_seen_block;
>> -    pss.offset = last_offset;
>> +    pss.block = rs->last_seen_block;
>> +    pss.offset = rs->last_offset;
>>      pss.complete_round = false;
>>
>>      if (!pss.block) {
>> @@ -1382,22 +1400,22 @@ static int ram_find_and_save_block(QEMUFile *f, bool last_stage,
>>
>>      do {
>>          again = true;
>> -        found = get_queued_page(ms, &pss, &dirty_ram_abs);
>> +        found = get_queued_page(rs, ms, &pss, &dirty_ram_abs);
>>
>>          if (!found) {
>>              /* priority queue empty, so just search for something dirty */
>> -            found = find_dirty_block(f, &pss, &again, &dirty_ram_abs);
>> +            found = find_dirty_block(rs, f, &pss, &again, &dirty_ram_abs);
>>          }
>>
>>          if (found) {
>> -            pages = ram_save_host_page(ms, f, &pss,
>> +            pages = ram_save_host_page(rs, ms, f, &pss,
>>                                         last_stage, bytes_transferred,
>>                                         dirty_ram_abs);
>>          }
>>      } while (!pages && again);
>>
>> -    last_seen_block = pss.block;
>> -    last_offset = pss.offset;
>> +    rs->last_seen_block = pss.block;
>> +    rs->last_offset = pss.offset;
>>
>>      return pages;
>>  }
>> @@ -1479,13 +1497,13 @@ static void ram_migration_cleanup(void *opaque)
>>      XBZRLE_cache_unlock();
>>  }
>>
>> -static void reset_ram_globals(void)
>> +static void ram_state_reset(RAMState *rs)
>>  {
>> -    last_seen_block = NULL;
>> -    last_sent_block = NULL;
>> -    last_offset = 0;
>> -    last_version = ram_list.version;
>> -    ram_bulk_stage = true;
>> +    rs->last_seen_block = NULL;
>> +    rs->last_sent_block = NULL;
>> +    rs->last_offset = 0;
>> +    rs->last_version = ram_list.version;
>> +    rs->ram_bulk_stage = true;
>>  }
>>
>>  #define MAX_WAIT 50 /* ms, half buffered_file limit */
>> @@ -1800,9 +1818,9 @@ static int postcopy_chunk_hostpages(MigrationState *ms)
>>      struct RAMBlock *block;
>>
>>      /* Easiest way to make sure we don't resume in the middle of a host-page */
>> -    last_seen_block = NULL;
>> -    last_sent_block = NULL;
>> -    last_offset     = 0;
>> +    ram_state.last_seen_block = NULL;
>> +    ram_state.last_sent_block = NULL;
>> +    ram_state.last_offset     = 0;
>>
>>      QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
>>          unsigned long first = block->offset >> TARGET_PAGE_BITS;
>> @@ -1913,7 +1931,7 @@ err:
>>      return ret;
>>  }
>>
>> -static int ram_save_init_globals(void)
>> +static int ram_save_init_globals(RAMState *rs)
>>  {
>>      int64_t ram_bitmap_pages; /* Size of bitmap in pages, including gaps */
>>
>> @@ -1959,7 +1977,7 @@ static int ram_save_init_globals(void)
>>      qemu_mutex_lock_ramlist();
>>      rcu_read_lock();
>>      bytes_transferred = 0;
>> -    reset_ram_globals();
>> +    ram_state_reset(rs);
>>
>>      migration_bitmap_rcu = g_new0(struct BitmapRcu, 1);
>>      /* Skip setting bitmap if there is no RAM */
>> @@ -1997,11 +2015,12 @@ static int ram_save_init_globals(void)
>>
>>  static int ram_save_setup(QEMUFile *f, void *opaque)
>>  {
>> +    RAMState *rs = opaque;
>>      RAMBlock *block;
>>
>>      /* migration has already setup the bitmap, reuse it. */
>>      if (!migration_in_colo_state()) {
>> -        if (ram_save_init_globals() < 0) {
>> +        if (ram_save_init_globals(rs) < 0) {
>>              return -1;
>>           }
>>      }
>> @@ -2031,14 +2050,15 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
>>
>>  static int ram_save_iterate(QEMUFile *f, void *opaque)
>>  {
>> +    RAMState *rs = opaque;
>>      int ret;
>>      int i;
>>      int64_t t0;
>>      int done = 0;
>>
>>      rcu_read_lock();
>> -    if (ram_list.version != last_version) {
>> -        reset_ram_globals();
>> +    if (ram_list.version != rs->last_version) {
>> +        ram_state_reset(rs);
>>      }
>>
>>      /* Read version before ram_list.blocks */
>> @@ -2051,7 +2071,7 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
>>      while ((ret = qemu_file_rate_limit(f)) == 0) {
>>          int pages;
>>
>> -        pages = ram_find_and_save_block(f, false, &bytes_transferred);
>> +        pages = ram_find_and_save_block(rs, f, false, &bytes_transferred);
>>          /* no more pages to sent */
>>          if (pages == 0) {
>>              done = 1;
>> @@ -2096,6 +2116,8 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
>>  /* Called with iothread lock */
>>  static int ram_save_complete(QEMUFile *f, void *opaque)
>>  {
>> +    RAMState *rs = opaque;
>> +
>>      rcu_read_lock();
>>
>>      if (!migration_in_postcopy(migrate_get_current())) {
>> @@ -2110,7 +2132,7 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
>>      while (true) {
>>          int pages;
>>
>> -        pages = ram_find_and_save_block(f, !migration_in_colo_state(),
>> +        pages = ram_find_and_save_block(rs, f, !migration_in_colo_state(),
>>                                          &bytes_transferred);
>>          /* no more blocks to sent */
>>          if (pages == 0) {
>> @@ -2675,5 +2697,5 @@ static SaveVMHandlers savevm_ram_handlers = {
>>  void ram_mig_init(void)
>>  {
>>      qemu_mutex_init(&XBZRLE.lock);
>> -    register_savevm_live(NULL, "ram", 0, 4, &savevm_ram_handlers, NULL);
>> +    register_savevm_live(NULL, "ram", 0, 4, &savevm_ram_handlers, &ram_state);
>>  }
>> --
>> 2.9.3
>>
> --
> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
>

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [Qemu-devel] [PATCH 02/31] ram: Add dirty_rate_high_cnt to RAMState
  2017-03-16 12:20   ` Dr. David Alan Gilbert
@ 2017-03-16 21:32     ` Philippe Mathieu-Daudé
  2017-03-20 19:39     ` Juan Quintela
  1 sibling, 0 replies; 68+ messages in thread
From: Philippe Mathieu-Daudé @ 2017-03-16 21:32 UTC (permalink / raw)
  To: Dr. David Alan Gilbert, Juan Quintela; +Cc: amit.shah, qemu-devel

On 03/16/2017 09:20 AM, Dr. David Alan Gilbert wrote:
> * Juan Quintela (quintela@redhat.com) wrote:
>> We need to add a parameter to several functions to make this work.
>>
>> Signed-off-by: Juan Quintela <quintela@redhat.com>
>> ---
>>  migration/ram.c | 23 ++++++++++++-----------
>>  1 file changed, 12 insertions(+), 11 deletions(-)
>>
>> diff --git a/migration/ram.c b/migration/ram.c
>> index c20a539..9120755 100644
>> --- a/migration/ram.c
>> +++ b/migration/ram.c
>> @@ -45,8 +45,6 @@
>>  #include "qemu/rcu_queue.h"
>>  #include "migration/colo.h"
>>
>> -static int dirty_rate_high_cnt;
>> -
>>  static uint64_t bitmap_sync_count;
>>
>>  /***********************************************************/
>> @@ -148,6 +146,8 @@ struct RAMState {
>>      uint32_t last_version;
>>      /* We are in the first round */
>>      bool ram_bulk_stage;
>> +    /* How many times we have dirty too many pages */
>> +    int dirty_rate_high_cnt;
>>  };
>>  typedef struct RAMState RAMState;
>>
>> @@ -626,7 +626,7 @@ uint64_t ram_pagesize_summary(void)
>>      return summary;
>>  }
>>
>> -static void migration_bitmap_sync(void)
>> +static void migration_bitmap_sync(RAMState *rs)
>>  {
>>      RAMBlock *block;
>>      uint64_t num_dirty_pages_init = migration_dirty_pages;
>> @@ -673,9 +673,9 @@ static void migration_bitmap_sync(void)
>>              if (s->dirty_pages_rate &&
>>                 (num_dirty_pages_period * TARGET_PAGE_SIZE >
>>                     (bytes_xfer_now - bytes_xfer_prev)/2) &&
>> -               (dirty_rate_high_cnt++ >= 2)) {
>> +               (rs->dirty_rate_high_cnt++ >= 2)) {
>>                      trace_migration_throttle();
>> -                    dirty_rate_high_cnt = 0;
>> +                    rs->dirty_rate_high_cnt = 0;
>>                      mig_throttle_guest_down();
>>               }
>>               bytes_xfer_prev = bytes_xfer_now;
>> @@ -1859,7 +1859,7 @@ int ram_postcopy_send_discard_bitmap(MigrationState *ms)
>>      rcu_read_lock();
>>
>>      /* This should be our last sync, the src is now paused */
>> -    migration_bitmap_sync();
>> +    migration_bitmap_sync(&ram_state);
>>
>>      unsentmap = atomic_rcu_read(&migration_bitmap_rcu)->unsentmap;
>>      if (!unsentmap) {
>> @@ -1935,7 +1935,7 @@ static int ram_save_init_globals(RAMState *rs)
>>  {
>>      int64_t ram_bitmap_pages; /* Size of bitmap in pages, including gaps */
>>
>> -    dirty_rate_high_cnt = 0;
>> +    rs->dirty_rate_high_cnt = 0;
>>      bitmap_sync_count = 0;
>>      migration_bitmap_sync_init();
>>      qemu_mutex_init(&migration_bitmap_mutex);
>> @@ -1999,7 +1999,7 @@ static int ram_save_init_globals(RAMState *rs)
>>      migration_dirty_pages = ram_bytes_total() >> TARGET_PAGE_BITS;
>>
>>      memory_global_dirty_log_start();
>> -    migration_bitmap_sync();
>> +    migration_bitmap_sync(rs);
>>      qemu_mutex_unlock_ramlist();
>>      qemu_mutex_unlock_iothread();
>>      rcu_read_unlock();
>> @@ -2117,11 +2117,11 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
>>  static int ram_save_complete(QEMUFile *f, void *opaque)
>>  {
>>      RAMState *rs = opaque;
>> -
>> +
>
> Is that undoing false spaces from the previous patch?
>
> anyway,
> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
>

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

>>      rcu_read_lock();
>>
>>      if (!migration_in_postcopy(migrate_get_current())) {
>> -        migration_bitmap_sync();
>> +        migration_bitmap_sync(rs);
>>      }
>>
>>      ram_control_before_iterate(f, RAM_CONTROL_FINISH);
>> @@ -2154,6 +2154,7 @@ static void ram_save_pending(QEMUFile *f, void *opaque, uint64_t max_size,
>>                               uint64_t *non_postcopiable_pending,
>>                               uint64_t *postcopiable_pending)
>>  {
>> +    RAMState *rs = opaque;
>>      uint64_t remaining_size;
>>
>>      remaining_size = ram_save_remaining() * TARGET_PAGE_SIZE;
>> @@ -2162,7 +2163,7 @@ static void ram_save_pending(QEMUFile *f, void *opaque, uint64_t max_size,
>>          remaining_size < max_size) {
>>          qemu_mutex_lock_iothread();
>>          rcu_read_lock();
>> -        migration_bitmap_sync();
>> +        migration_bitmap_sync(rs);
>>          rcu_read_unlock();
>>          qemu_mutex_unlock_iothread();
>>          remaining_size = ram_save_remaining() * TARGET_PAGE_SIZE;
>> --
>> 2.9.3
>>
> --
> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
>

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [Qemu-devel] [PATCH 03/31] ram: move bitmap_sync_count into RAMState
  2017-03-16 12:21   ` Dr. David Alan Gilbert
@ 2017-03-16 21:33     ` Philippe Mathieu-Daudé
  0 siblings, 0 replies; 68+ messages in thread
From: Philippe Mathieu-Daudé @ 2017-03-16 21:33 UTC (permalink / raw)
  To: Dr. David Alan Gilbert, Juan Quintela; +Cc: amit.shah, qemu-devel

On 03/16/2017 09:21 AM, Dr. David Alan Gilbert wrote:
> * Juan Quintela (quintela@redhat.com) wrote:
>> Signed-off-by: Juan Quintela <quintela@redhat.com>
>
> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
>

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

>> ---
>>  migration/ram.c | 22 +++++++++++-----------
>>  1 file changed, 11 insertions(+), 11 deletions(-)
>>
>> diff --git a/migration/ram.c b/migration/ram.c
>> index 9120755..c0bee94 100644
>> --- a/migration/ram.c
>> +++ b/migration/ram.c
>> @@ -45,8 +45,6 @@
>>  #include "qemu/rcu_queue.h"
>>  #include "migration/colo.h"
>>
>> -static uint64_t bitmap_sync_count;
>> -
>>  /***********************************************************/
>>  /* ram save/restore */
>>
>> @@ -148,6 +146,8 @@ struct RAMState {
>>      bool ram_bulk_stage;
>>      /* How many times we have dirty too many pages */
>>      int dirty_rate_high_cnt;
>> +    /* How many times we have synchronized the bitmap */
>> +    uint64_t bitmap_sync_count;
>>  };
>>  typedef struct RAMState RAMState;
>>
>> @@ -455,7 +455,7 @@ static void xbzrle_cache_zero_page(RAMState *rs, ram_addr_t current_addr)
>>      /* We don't care if this fails to allocate a new cache page
>>       * as long as it updated an old one */
>>      cache_insert(XBZRLE.cache, current_addr, ZERO_TARGET_PAGE,
>> -                 bitmap_sync_count);
>> +                 rs->bitmap_sync_count);
>>  }
>>
>>  #define ENCODING_FLAG_XBZRLE 0x1
>> @@ -475,7 +475,7 @@ static void xbzrle_cache_zero_page(RAMState *rs, ram_addr_t current_addr)
>>   * @last_stage: if we are at the completion stage
>>   * @bytes_transferred: increase it with the number of transferred bytes
>>   */
>> -static int save_xbzrle_page(QEMUFile *f, uint8_t **current_data,
>> +static int save_xbzrle_page(QEMUFile *f, RAMState *rs, uint8_t **current_data,
>>                              ram_addr_t current_addr, RAMBlock *block,
>>                              ram_addr_t offset, bool last_stage,
>>                              uint64_t *bytes_transferred)
>> @@ -483,11 +483,11 @@ static int save_xbzrle_page(QEMUFile *f, uint8_t **current_data,
>>      int encoded_len = 0, bytes_xbzrle;
>>      uint8_t *prev_cached_page;
>>
>> -    if (!cache_is_cached(XBZRLE.cache, current_addr, bitmap_sync_count)) {
>> +    if (!cache_is_cached(XBZRLE.cache, current_addr, rs->bitmap_sync_count)) {
>>          acct_info.xbzrle_cache_miss++;
>>          if (!last_stage) {
>>              if (cache_insert(XBZRLE.cache, current_addr, *current_data,
>> -                             bitmap_sync_count) == -1) {
>> +                             rs->bitmap_sync_count) == -1) {
>>                  return -1;
>>              } else {
>>                  /* update *current_data when the page has been
>> @@ -634,7 +634,7 @@ static void migration_bitmap_sync(RAMState *rs)
>>      int64_t end_time;
>>      int64_t bytes_xfer_now;
>>
>> -    bitmap_sync_count++;
>> +    rs->bitmap_sync_count++;
>>
>>      if (!bytes_xfer_prev) {
>>          bytes_xfer_prev = ram_bytes_transferred();
>> @@ -697,9 +697,9 @@ static void migration_bitmap_sync(RAMState *rs)
>>          start_time = end_time;
>>          num_dirty_pages_period = 0;
>>      }
>> -    s->dirty_sync_count = bitmap_sync_count;
>> +    s->dirty_sync_count = rs->bitmap_sync_count;
>>      if (migrate_use_events()) {
>> -        qapi_event_send_migration_pass(bitmap_sync_count, NULL);
>> +        qapi_event_send_migration_pass(rs->bitmap_sync_count, NULL);
>>      }
>>  }
>>
>> @@ -806,7 +806,7 @@ static int ram_save_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
>>              ram_release_pages(ms, block->idstr, pss->offset, pages);
>>          } else if (!rs->ram_bulk_stage &&
>>                     !migration_in_postcopy(ms) && migrate_use_xbzrle()) {
>> -            pages = save_xbzrle_page(f, &p, current_addr, block,
>> +            pages = save_xbzrle_page(f, rs, &p, current_addr, block,
>>                                       offset, last_stage, bytes_transferred);
>>              if (!last_stage) {
>>                  /* Can't send this cached data async, since the cache page
>> @@ -1936,7 +1936,7 @@ static int ram_save_init_globals(RAMState *rs)
>>      int64_t ram_bitmap_pages; /* Size of bitmap in pages, including gaps */
>>
>>      rs->dirty_rate_high_cnt = 0;
>> -    bitmap_sync_count = 0;
>> +    rs->bitmap_sync_count = 0;
>>      migration_bitmap_sync_init();
>>      qemu_mutex_init(&migration_bitmap_mutex);
>>
>> --
>> 2.9.3
>>
> --
> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
>

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [Qemu-devel] [PATCH 04/31] ram: Move start time into RAMState
  2017-03-16 12:21   ` Dr. David Alan Gilbert
@ 2017-03-16 21:33     ` Philippe Mathieu-Daudé
  0 siblings, 0 replies; 68+ messages in thread
From: Philippe Mathieu-Daudé @ 2017-03-16 21:33 UTC (permalink / raw)
  To: Dr. David Alan Gilbert, Juan Quintela; +Cc: qemu-devel

On 03/16/2017 09:21 AM, Dr. David Alan Gilbert wrote:
> * Juan Quintela (quintela@redhat.com) wrote:
>> Signed-off-by: Juan Quintela <quintela@redhat.com>
>
> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
>

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

>> ---
>>  migration/ram.c | 20 +++++++++++---------
>>  1 file changed, 11 insertions(+), 9 deletions(-)
>>
>> diff --git a/migration/ram.c b/migration/ram.c
>> index c0bee94..f6ac503 100644
>> --- a/migration/ram.c
>> +++ b/migration/ram.c
>> @@ -148,6 +148,9 @@ struct RAMState {
>>      int dirty_rate_high_cnt;
>>      /* How many times we have synchronized the bitmap */
>>      uint64_t bitmap_sync_count;
>> +    /* this variables are used for bitmap sync */
>> +    /* last time we did a full bitmap_sync */
>> +    int64_t start_time;
>>  };
>>  typedef struct RAMState RAMState;
>>
>> @@ -594,15 +597,14 @@ static void migration_bitmap_sync_range(ram_addr_t start, ram_addr_t length)
>>  }
>>
>>  /* Fix me: there are too many global variables used in migration process. */
>> -static int64_t start_time;
>>  static int64_t bytes_xfer_prev;
>>  static int64_t num_dirty_pages_period;
>>  static uint64_t xbzrle_cache_miss_prev;
>>  static uint64_t iterations_prev;
>>
>> -static void migration_bitmap_sync_init(void)
>> +static void migration_bitmap_sync_init(RAMState *rs)
>>  {
>> -    start_time = 0;
>> +    rs->start_time = 0;
>>      bytes_xfer_prev = 0;
>>      num_dirty_pages_period = 0;
>>      xbzrle_cache_miss_prev = 0;
>> @@ -640,8 +642,8 @@ static void migration_bitmap_sync(RAMState *rs)
>>          bytes_xfer_prev = ram_bytes_transferred();
>>      }
>>
>> -    if (!start_time) {
>> -        start_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
>> +    if (!rs->start_time) {
>> +        rs->start_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
>>      }
>>
>>      trace_migration_bitmap_sync_start();
>> @@ -661,7 +663,7 @@ static void migration_bitmap_sync(RAMState *rs)
>>      end_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
>>
>>      /* more than 1 second = 1000 millisecons */
>> -    if (end_time > start_time + 1000) {
>> +    if (end_time > rs->start_time + 1000) {
>>          if (migrate_auto_converge()) {
>>              /* The following detection logic can be refined later. For now:
>>                 Check to see if the dirtied bytes is 50% more than the approx.
>> @@ -692,9 +694,9 @@ static void migration_bitmap_sync(RAMState *rs)
>>              xbzrle_cache_miss_prev = acct_info.xbzrle_cache_miss;
>>          }
>>          s->dirty_pages_rate = num_dirty_pages_period * 1000
>> -            / (end_time - start_time);
>> +            / (end_time - rs->start_time);
>>          s->dirty_bytes_rate = s->dirty_pages_rate * TARGET_PAGE_SIZE;
>> -        start_time = end_time;
>> +        rs->start_time = end_time;
>>          num_dirty_pages_period = 0;
>>      }
>>      s->dirty_sync_count = rs->bitmap_sync_count;
>> @@ -1937,7 +1939,7 @@ static int ram_save_init_globals(RAMState *rs)
>>
>>      rs->dirty_rate_high_cnt = 0;
>>      rs->bitmap_sync_count = 0;
>> -    migration_bitmap_sync_init();
>> +    migration_bitmap_sync_init(rs);
>>      qemu_mutex_init(&migration_bitmap_mutex);
>>
>>      if (migrate_use_xbzrle()) {
>> --
>> 2.9.3
>>
> --
> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
>

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [Qemu-devel] [PATCH 05/31] ram: Move bytes_xfer_prev into RAMState
  2017-03-16 12:22   ` Dr. David Alan Gilbert
@ 2017-03-16 21:34     ` Philippe Mathieu-Daudé
  0 siblings, 0 replies; 68+ messages in thread
From: Philippe Mathieu-Daudé @ 2017-03-16 21:34 UTC (permalink / raw)
  To: Dr. David Alan Gilbert, Juan Quintela; +Cc: amit.shah, qemu-devel

On 03/16/2017 09:22 AM, Dr. David Alan Gilbert wrote:
> * Juan Quintela (quintela@redhat.com) wrote:
>> Signed-off-by: Juan Quintela <quintela@redhat.com>
>
> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
>

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

>> ---
>>  migration/ram.c | 13 +++++++------
>>  1 file changed, 7 insertions(+), 6 deletions(-)
>>
>> diff --git a/migration/ram.c b/migration/ram.c
>> index f6ac503..2d288cc 100644
>> --- a/migration/ram.c
>> +++ b/migration/ram.c
>> @@ -151,6 +151,8 @@ struct RAMState {
>>      /* this variables are used for bitmap sync */
>>      /* last time we did a full bitmap_sync */
>>      int64_t start_time;
>> +    /* bytes transferred at start_time */
>> +    int64_t bytes_xfer_prev;
>>  };
>>  typedef struct RAMState RAMState;
>>
>> @@ -597,7 +599,6 @@ static void migration_bitmap_sync_range(ram_addr_t start, ram_addr_t length)
>>  }
>>
>>  /* Fix me: there are too many global variables used in migration process. */
>> -static int64_t bytes_xfer_prev;
>>  static int64_t num_dirty_pages_period;
>>  static uint64_t xbzrle_cache_miss_prev;
>>  static uint64_t iterations_prev;
>> @@ -605,7 +606,7 @@ static uint64_t iterations_prev;
>>  static void migration_bitmap_sync_init(RAMState *rs)
>>  {
>>      rs->start_time = 0;
>> -    bytes_xfer_prev = 0;
>> +    rs->bytes_xfer_prev = 0;
>>      num_dirty_pages_period = 0;
>>      xbzrle_cache_miss_prev = 0;
>>      iterations_prev = 0;
>> @@ -638,8 +639,8 @@ static void migration_bitmap_sync(RAMState *rs)
>>
>>      rs->bitmap_sync_count++;
>>
>> -    if (!bytes_xfer_prev) {
>> -        bytes_xfer_prev = ram_bytes_transferred();
>> +    if (!rs->bytes_xfer_prev) {
>> +        rs->bytes_xfer_prev = ram_bytes_transferred();
>>      }
>>
>>      if (!rs->start_time) {
>> @@ -674,13 +675,13 @@ static void migration_bitmap_sync(RAMState *rs)
>>
>>              if (s->dirty_pages_rate &&
>>                 (num_dirty_pages_period * TARGET_PAGE_SIZE >
>> -                   (bytes_xfer_now - bytes_xfer_prev)/2) &&
>> +                   (bytes_xfer_now - rs->bytes_xfer_prev)/2) &&
>>                 (rs->dirty_rate_high_cnt++ >= 2)) {
>>                      trace_migration_throttle();
>>                      rs->dirty_rate_high_cnt = 0;
>>                      mig_throttle_guest_down();
>>               }
>> -             bytes_xfer_prev = bytes_xfer_now;
>> +             rs->bytes_xfer_prev = bytes_xfer_now;
>>          }
>>
>>          if (migrate_use_xbzrle()) {
>> --
>> 2.9.3
>>
> --
> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
>

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [Qemu-devel] [PATCH 06/31] ram: Move num_dirty_pages_period into RAMState
  2017-03-16 12:23   ` Dr. David Alan Gilbert
@ 2017-03-16 21:35     ` Philippe Mathieu-Daudé
  0 siblings, 0 replies; 68+ messages in thread
From: Philippe Mathieu-Daudé @ 2017-03-16 21:35 UTC (permalink / raw)
  To: Dr. David Alan Gilbert, Juan Quintela; +Cc: qemu-devel

On 03/16/2017 09:23 AM, Dr. David Alan Gilbert wrote:
> * Juan Quintela (quintela@redhat.com) wrote:
>> Signed-off-by: Juan Quintela <quintela@redhat.com>
>
> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
>

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

> (This series could be fewer patches...)
>
>> ---
>>  migration/ram.c | 13 +++++++------
>>  1 file changed, 7 insertions(+), 6 deletions(-)
>>
>> diff --git a/migration/ram.c b/migration/ram.c
>> index 2d288cc..b13d2d5 100644
>> --- a/migration/ram.c
>> +++ b/migration/ram.c
>> @@ -153,6 +153,8 @@ struct RAMState {
>>      int64_t start_time;
>>      /* bytes transferred at start_time */
>>      int64_t bytes_xfer_prev;
>> +    /* number of dirty pages since start_time */
>> +    int64_t num_dirty_pages_period;
>>  };
>>  typedef struct RAMState RAMState;
>>
>> @@ -599,7 +601,6 @@ static void migration_bitmap_sync_range(ram_addr_t start, ram_addr_t length)
>>  }
>>
>>  /* Fix me: there are too many global variables used in migration process. */
>> -static int64_t num_dirty_pages_period;
>>  static uint64_t xbzrle_cache_miss_prev;
>>  static uint64_t iterations_prev;
>>
>> @@ -607,7 +608,7 @@ static void migration_bitmap_sync_init(RAMState *rs)
>>  {
>>      rs->start_time = 0;
>>      rs->bytes_xfer_prev = 0;
>> -    num_dirty_pages_period = 0;
>> +    rs->num_dirty_pages_period = 0;
>>      xbzrle_cache_miss_prev = 0;
>>      iterations_prev = 0;
>>  }
>> @@ -660,7 +661,7 @@ static void migration_bitmap_sync(RAMState *rs)
>>
>>      trace_migration_bitmap_sync_end(migration_dirty_pages
>>                                      - num_dirty_pages_init);
>> -    num_dirty_pages_period += migration_dirty_pages - num_dirty_pages_init;
>> +    rs->num_dirty_pages_period += migration_dirty_pages - num_dirty_pages_init;
>>      end_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
>>
>>      /* more than 1 second = 1000 millisecons */
>> @@ -674,7 +675,7 @@ static void migration_bitmap_sync(RAMState *rs)
>>              bytes_xfer_now = ram_bytes_transferred();
>>
>>              if (s->dirty_pages_rate &&
>> -               (num_dirty_pages_period * TARGET_PAGE_SIZE >
>> +               (rs->num_dirty_pages_period * TARGET_PAGE_SIZE >
>>                     (bytes_xfer_now - rs->bytes_xfer_prev)/2) &&
>>                 (rs->dirty_rate_high_cnt++ >= 2)) {
>>                      trace_migration_throttle();
>> @@ -694,11 +695,11 @@ static void migration_bitmap_sync(RAMState *rs)
>>              iterations_prev = acct_info.iterations;
>>              xbzrle_cache_miss_prev = acct_info.xbzrle_cache_miss;
>>          }
>> -        s->dirty_pages_rate = num_dirty_pages_period * 1000
>> +        s->dirty_pages_rate = rs->num_dirty_pages_period * 1000
>>              / (end_time - rs->start_time);
>>          s->dirty_bytes_rate = s->dirty_pages_rate * TARGET_PAGE_SIZE;
>>          rs->start_time = end_time;
>> -        num_dirty_pages_period = 0;
>> +        rs->num_dirty_pages_period = 0;
>>      }
>>      s->dirty_sync_count = rs->bitmap_sync_count;
>>      if (migrate_use_events()) {
>> --
>> 2.9.3
>>
> --
> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
>

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [Qemu-devel] [PATCH 07/31] ram: Move xbzrle_cache_miss_prev into RAMState
  2017-03-16 12:24   ` Dr. David Alan Gilbert
@ 2017-03-16 21:35     ` Philippe Mathieu-Daudé
  0 siblings, 0 replies; 68+ messages in thread
From: Philippe Mathieu-Daudé @ 2017-03-16 21:35 UTC (permalink / raw)
  To: Dr. David Alan Gilbert, Juan Quintela; +Cc: qemu-devel

On 03/16/2017 09:24 AM, Dr. David Alan Gilbert wrote:
> * Juan Quintela (quintela@redhat.com) wrote:
>> Signed-off-by: Juan Quintela <quintela@redhat.com>
>> ---
>>  migration/ram.c | 9 +++++----
>>  1 file changed, 5 insertions(+), 4 deletions(-)
>>
>> diff --git a/migration/ram.c b/migration/ram.c
>> index b13d2d5..ae077c5 100644
>> --- a/migration/ram.c
>> +++ b/migration/ram.c
>> @@ -155,6 +155,8 @@ struct RAMState {
>>      int64_t bytes_xfer_prev;
>>      /* number of dirty pages since start_time */
>>      int64_t num_dirty_pages_period;
>> +    /* xbzrle misses since the beggining of the period */
>                                     ^--- extra g
>
> Other than that,
> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
>

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

>> +    uint64_t xbzrle_cache_miss_prev;
>>  };
>>  typedef struct RAMState RAMState;
>>
>> @@ -601,7 +603,6 @@ static void migration_bitmap_sync_range(ram_addr_t start, ram_addr_t length)
>>  }
>>
>>  /* Fix me: there are too many global variables used in migration process. */
>> -static uint64_t xbzrle_cache_miss_prev;
>>  static uint64_t iterations_prev;
>>
>>  static void migration_bitmap_sync_init(RAMState *rs)
>> @@ -609,7 +610,7 @@ static void migration_bitmap_sync_init(RAMState *rs)
>>      rs->start_time = 0;
>>      rs->bytes_xfer_prev = 0;
>>      rs->num_dirty_pages_period = 0;
>> -    xbzrle_cache_miss_prev = 0;
>> +    rs->xbzrle_cache_miss_prev = 0;
>>      iterations_prev = 0;
>>  }
>>
>> @@ -689,11 +690,11 @@ static void migration_bitmap_sync(RAMState *rs)
>>              if (iterations_prev != acct_info.iterations) {
>>                  acct_info.xbzrle_cache_miss_rate =
>>                     (double)(acct_info.xbzrle_cache_miss -
>> -                            xbzrle_cache_miss_prev) /
>> +                            rs->xbzrle_cache_miss_prev) /
>>                     (acct_info.iterations - iterations_prev);
>>              }
>>              iterations_prev = acct_info.iterations;
>> -            xbzrle_cache_miss_prev = acct_info.xbzrle_cache_miss;
>> +            rs->xbzrle_cache_miss_prev = acct_info.xbzrle_cache_miss;
>>          }
>>          s->dirty_pages_rate = rs->num_dirty_pages_period * 1000
>>              / (end_time - rs->start_time);
>> --
>> 2.9.3
>>
> --
> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
>

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [Qemu-devel] [PATCH 08/31] ram: Move iterations_prev into RAMState
  2017-03-16 12:26   ` Dr. David Alan Gilbert
@ 2017-03-16 21:36     ` Philippe Mathieu-Daudé
  0 siblings, 0 replies; 68+ messages in thread
From: Philippe Mathieu-Daudé @ 2017-03-16 21:36 UTC (permalink / raw)
  To: Dr. David Alan Gilbert, Juan Quintela; +Cc: qemu-devel

On 03/16/2017 09:26 AM, Dr. David Alan Gilbert wrote:
> * Juan Quintela (quintela@redhat.com) wrote:
>> Signed-off-by: Juan Quintela <quintela@redhat.com>
>> ---
>>  migration/ram.c | 13 ++++++-------
>>  1 file changed, 6 insertions(+), 7 deletions(-)
>>
>> diff --git a/migration/ram.c b/migration/ram.c
>> index ae077c5..6cdad06 100644
>> --- a/migration/ram.c
>> +++ b/migration/ram.c
>> @@ -157,6 +157,8 @@ struct RAMState {
>>      int64_t num_dirty_pages_period;
>>      /* xbzrle misses since the beggining of the period */
>>      uint64_t xbzrle_cache_miss_prev;
>> +    /* number of iterations at the beggining of period */
>                                          ^  ^
>                                          One extra g, one missing n
>
> Other than that,
> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
>

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

>> +    uint64_t iterations_prev;
>>  };
>>  typedef struct RAMState RAMState;
>>
>> @@ -602,16 +604,13 @@ static void migration_bitmap_sync_range(ram_addr_t start, ram_addr_t length)
>>          cpu_physical_memory_sync_dirty_bitmap(bitmap, start, length);
>>  }
>>
>> -/* Fix me: there are too many global variables used in migration process. */
>> -static uint64_t iterations_prev;
>> -
>>  static void migration_bitmap_sync_init(RAMState *rs)
>>  {
>>      rs->start_time = 0;
>>      rs->bytes_xfer_prev = 0;
>>      rs->num_dirty_pages_period = 0;
>>      rs->xbzrle_cache_miss_prev = 0;
>> -    iterations_prev = 0;
>> +    rs->iterations_prev = 0;
>>  }
>>
>>  /* Returns a summary bitmap of the page sizes of all RAMBlocks;
>> @@ -687,13 +686,13 @@ static void migration_bitmap_sync(RAMState *rs)
>>          }
>>
>>          if (migrate_use_xbzrle()) {
>> -            if (iterations_prev != acct_info.iterations) {
>> +            if (rs->iterations_prev != acct_info.iterations) {
>>                  acct_info.xbzrle_cache_miss_rate =
>>                     (double)(acct_info.xbzrle_cache_miss -
>>                              rs->xbzrle_cache_miss_prev) /
>> -                   (acct_info.iterations - iterations_prev);
>> +                   (acct_info.iterations - rs->iterations_prev);
>>              }
>> -            iterations_prev = acct_info.iterations;
>> +            rs->iterations_prev = acct_info.iterations;
>>              rs->xbzrle_cache_miss_prev = acct_info.xbzrle_cache_miss;
>>          }
>>          s->dirty_pages_rate = rs->num_dirty_pages_period * 1000
>> --
>> 2.9.3
>>
> --
> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
>

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [Qemu-devel] [PATCH 14/31] ram: Move iterations into RAMState
  2017-03-16 20:04   ` Dr. David Alan Gilbert
@ 2017-03-16 21:40     ` Philippe Mathieu-Daudé
  0 siblings, 0 replies; 68+ messages in thread
From: Philippe Mathieu-Daudé @ 2017-03-16 21:40 UTC (permalink / raw)
  To: Dr. David Alan Gilbert, Juan Quintela; +Cc: qemu-devel

On 03/16/2017 05:04 PM, Dr. David Alan Gilbert wrote:
> * Juan Quintela (quintela@redhat.com) wrote:
>> Signed-off-by: Juan Quintela <quintela@redhat.com>
>> ---
>>  migration/ram.c | 12 +++++++-----
>>  1 file changed, 7 insertions(+), 5 deletions(-)
>
> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
>

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

>>
>> diff --git a/migration/ram.c b/migration/ram.c
>> index 8caeb4f..234bdba 100644
>> --- a/migration/ram.c
>> +++ b/migration/ram.c
>> @@ -164,6 +164,8 @@ struct RAMState {
>>      uint64_t zero_pages;
>>      /* number of normal transferred pages */
>>      uint64_t norm_pages;
>> +    /* Iterations since start */
>> +    uint64_t iterations;
>>  };
>>  typedef struct RAMState RAMState;
>>
>> @@ -171,7 +173,6 @@ static RAMState ram_state;
>>
>>  /* accounting for migration statistics */
>>  typedef struct AccountingInfo {
>> -    uint64_t iterations;
>>      uint64_t xbzrle_bytes;
>>      uint64_t xbzrle_pages;
>>      uint64_t xbzrle_cache_miss;
>> @@ -668,13 +669,13 @@ static void migration_bitmap_sync(RAMState *rs)
>>          }
>>
>>          if (migrate_use_xbzrle()) {
>> -            if (rs->iterations_prev != acct_info.iterations) {
>> +            if (rs->iterations_prev != rs->iterations) {
>>                  acct_info.xbzrle_cache_miss_rate =
>>                     (double)(acct_info.xbzrle_cache_miss -
>>                              rs->xbzrle_cache_miss_prev) /
>> -                   (acct_info.iterations - rs->iterations_prev);
>> +                   (rs->iterations - rs->iterations_prev);
>>              }
>> -            rs->iterations_prev = acct_info.iterations;
>> +            rs->iterations_prev = rs->iterations;
>>              rs->xbzrle_cache_miss_prev = acct_info.xbzrle_cache_miss;
>>          }
>>          s->dirty_pages_rate = rs->num_dirty_pages_period * 1000
>> @@ -1926,6 +1927,7 @@ static int ram_save_init_globals(RAMState *rs)
>>      rs->bitmap_sync_count = 0;
>>      rs->zero_pages = 0;
>>      rs->norm_pages = 0;
>> +    rs->iterations = 0;
>>      migration_bitmap_sync_init(rs);
>>      qemu_mutex_init(&migration_bitmap_mutex);
>>
>> @@ -2066,7 +2068,7 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
>>              done = 1;
>>              break;
>>          }
>> -        acct_info.iterations++;
>> +        rs->iterations++;
>>
>>          /* we want to check in the 1st loop, just in case it was the 1st time
>>             and we had to sync the dirty bitmap.
>> --
>> 2.9.3
>>
> --
> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
>

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [Qemu-devel] [PATCH 23/31] ram: Move migration_bitmap_rcu into RAMState
  2017-03-15 13:50 ` [Qemu-devel] [PATCH 23/31] ram: Move migration_bitmap_rcu " Juan Quintela
@ 2017-03-17  9:51   ` Dr. David Alan Gilbert
  2017-03-20 20:10     ` Juan Quintela
  0 siblings, 1 reply; 68+ messages in thread
From: Dr. David Alan Gilbert @ 2017-03-17  9:51 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel

* Juan Quintela (quintela@redhat.com) wrote:
> Once there, rename the type to be shorter.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> ---
>  migration/ram.c | 79 ++++++++++++++++++++++++++++++---------------------------
>  1 file changed, 42 insertions(+), 37 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index c14293c..d39d185 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -132,6 +132,19 @@ out:
>      return ret;
>  }
>  
> +struct RAMBitmap {
> +    struct rcu_head rcu;
> +    /* Main migration bitmap */
> +    unsigned long *bmap;
> +    /* bitmap of pages that haven't been sent even once
> +     * only maintained and used in postcopy at the moment
> +     * where it's used to send the dirtymap at the start
> +     * of the postcopy phase
> +     */
> +    unsigned long *unsentmap;
> +};
> +typedef struct RAMBitmap RAMBitmap;
> +

I'm OK with this; although I can see the argument for keeping the BitmapRcu
name, given that the actual bmap is inside it and most of the rest of the
type is just the rcu wrapper.

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
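
(For illustration only: the type really is just an rcu_head plus the two
payload pointers, and the reader/writer pattern the patch keeps relying on
looks roughly like the sketch below.  The helper name is made up; the RCU
primitives and the call_rcu() usage follow what the diff above already does.)

    /* Writer side: build a replacement RAMBitmap, publish it with
     * atomic_rcu_set(), and let call_rcu() free the old one once all
     * readers (which use atomic_rcu_read() under rcu_read_lock())
     * have left their read-side critical sections. */
    static void ram_bitmap_swap_sketch(RAMState *rs, unsigned long pages)
    {
        RAMBitmap *old = rs->ram_bitmap;
        RAMBitmap *next = g_new0(RAMBitmap, 1);

        next->bmap = bitmap_new(pages);
        bitmap_set(next->bmap, 0, pages);          /* start all-dirty */

        atomic_rcu_set(&rs->ram_bitmap, next);     /* publish new bitmap */
        call_rcu(old, migration_bitmap_free, rcu); /* retire after grace period */
    }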

>  /* State of RAM for migration */
>  struct RAMState {
>      /* Last block that we have visited searching for dirty pages */
> @@ -180,6 +193,8 @@ struct RAMState {
>      uint64_t migration_dirty_pages;
>      /* protects modification of the bitmap */
>      QemuMutex bitmap_mutex;
> +    /* Ram Bitmap protected by RCU */
> +    RAMBitmap *ram_bitmap;
>  };
>  typedef struct RAMState RAMState;
>  
> @@ -236,18 +251,6 @@ struct PageSearchStatus {
>  };
>  typedef struct PageSearchStatus PageSearchStatus;
>  
> -static struct BitmapRcu {
> -    struct rcu_head rcu;
> -    /* Main migration bitmap */
> -    unsigned long *bmap;
> -    /* bitmap of pages that haven't been sent even once
> -     * only maintained and used in postcopy at the moment
> -     * where it's used to send the dirtymap at the start
> -     * of the postcopy phase
> -     */
> -    unsigned long *unsentmap;
> -} *migration_bitmap_rcu;
> -
>  struct CompressParam {
>      bool done;
>      bool quit;
> @@ -554,7 +557,7 @@ ram_addr_t migration_bitmap_find_dirty(RAMState *rs, RAMBlock *rb,
>  
>      unsigned long next;
>  
> -    bitmap = atomic_rcu_read(&migration_bitmap_rcu)->bmap;
> +    bitmap = atomic_rcu_read(&rs->ram_bitmap)->bmap;
>      if (rs->ram_bulk_stage && nr > base) {
>          next = nr + 1;
>      } else {
> @@ -569,7 +572,7 @@ static inline bool migration_bitmap_clear_dirty(RAMState *rs, ram_addr_t addr)
>  {
>      bool ret;
>      int nr = addr >> TARGET_PAGE_BITS;
> -    unsigned long *bitmap = atomic_rcu_read(&migration_bitmap_rcu)->bmap;
> +    unsigned long *bitmap = atomic_rcu_read(&rs->ram_bitmap)->bmap;
>  
>      ret = test_and_clear_bit(nr, bitmap);
>  
> @@ -583,7 +586,7 @@ static void migration_bitmap_sync_range(RAMState *rs, ram_addr_t start,
>                                          ram_addr_t length)
>  {
>      unsigned long *bitmap;
> -    bitmap = atomic_rcu_read(&migration_bitmap_rcu)->bmap;
> +    bitmap = atomic_rcu_read(&rs->ram_bitmap)->bmap;
>      rs-> migration_dirty_pages +=
>          cpu_physical_memory_sync_dirty_bitmap(bitmap, start, length);
>  }
> @@ -1115,14 +1118,14 @@ static bool get_queued_page(RAMState *rs, MigrationState *ms, PageSearchStatus *
>           */
>          if (block) {
>              unsigned long *bitmap;
> -            bitmap = atomic_rcu_read(&migration_bitmap_rcu)->bmap;
> +            bitmap = atomic_rcu_read(&rs->ram_bitmap)->bmap;
>              dirty = test_bit(*ram_addr_abs >> TARGET_PAGE_BITS, bitmap);
>              if (!dirty) {
>                  trace_get_queued_page_not_dirty(
>                      block->idstr, (uint64_t)offset,
>                      (uint64_t)*ram_addr_abs,
>                      test_bit(*ram_addr_abs >> TARGET_PAGE_BITS,
> -                         atomic_rcu_read(&migration_bitmap_rcu)->unsentmap));
> +                         atomic_rcu_read(&rs->ram_bitmap)->unsentmap));
>              } else {
>                  trace_get_queued_page(block->idstr,
>                                        (uint64_t)offset,
> @@ -1276,7 +1279,7 @@ static int ram_save_target_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
>          if (res < 0) {
>              return res;
>          }
> -        unsentmap = atomic_rcu_read(&migration_bitmap_rcu)->unsentmap;
> +        unsentmap = atomic_rcu_read(&rs->ram_bitmap)->unsentmap;
>          if (unsentmap) {
>              clear_bit(dirty_ram_abs >> TARGET_PAGE_BITS, unsentmap);
>          }
> @@ -1440,7 +1443,7 @@ void free_xbzrle_decoded_buf(void)
>      xbzrle_decoded_buf = NULL;
>  }
>  
> -static void migration_bitmap_free(struct BitmapRcu *bmap)
> +static void migration_bitmap_free(struct RAMBitmap *bmap)
>  {
>      g_free(bmap->bmap);
>      g_free(bmap->unsentmap);
> @@ -1449,11 +1452,13 @@ static void migration_bitmap_free(struct BitmapRcu *bmap)
>  
>  static void ram_migration_cleanup(void *opaque)
>  {
> +    RAMState *rs = opaque;
> +
>      /* caller have hold iothread lock or is in a bh, so there is
>       * no writing race against this migration_bitmap
>       */
> -    struct BitmapRcu *bitmap = migration_bitmap_rcu;
> -    atomic_rcu_set(&migration_bitmap_rcu, NULL);
> +    struct RAMBitmap *bitmap = rs->ram_bitmap;
> +    atomic_rcu_set(&rs->ram_bitmap, NULL);
>      if (bitmap) {
>          memory_global_dirty_log_stop();
>          call_rcu(bitmap, migration_bitmap_free, rcu);
> @@ -1488,9 +1493,9 @@ void migration_bitmap_extend(ram_addr_t old, ram_addr_t new)
>      /* called in qemu main thread, so there is
>       * no writing race against this migration_bitmap
>       */
> -    if (migration_bitmap_rcu) {
> -        struct BitmapRcu *old_bitmap = migration_bitmap_rcu, *bitmap;
> -        bitmap = g_new(struct BitmapRcu, 1);
> +    if (ram_state.ram_bitmap) {
> +        struct RAMBitmap *old_bitmap = ram_state.ram_bitmap, *bitmap;
> +        bitmap = g_new(struct RAMBitmap, 1);
>          bitmap->bmap = bitmap_new(new);
>  
>          /* prevent migration_bitmap content from being set bit
> @@ -1508,7 +1513,7 @@ void migration_bitmap_extend(ram_addr_t old, ram_addr_t new)
>           */
>          bitmap->unsentmap = NULL;
>  
> -        atomic_rcu_set(&migration_bitmap_rcu, bitmap);
> +        atomic_rcu_set(&ram_state.ram_bitmap, bitmap);
>          qemu_mutex_unlock(&ram_state.bitmap_mutex);
>          ram_state.migration_dirty_pages += new - old;
>          call_rcu(old_bitmap, migration_bitmap_free, rcu);
> @@ -1529,7 +1534,7 @@ void ram_debug_dump_bitmap(unsigned long *todump, bool expected)
>      char linebuf[129];
>  
>      if (!todump) {
> -        todump = atomic_rcu_read(&migration_bitmap_rcu)->bmap;
> +        todump = atomic_rcu_read(&ram_state.ram_bitmap)->bmap;
>      }
>  
>      for (cur = 0; cur < ram_pages; cur += linelen) {
> @@ -1559,7 +1564,7 @@ void ram_debug_dump_bitmap(unsigned long *todump, bool expected)
>  void ram_postcopy_migrated_memory_release(MigrationState *ms)
>  {
>      struct RAMBlock *block;
> -    unsigned long *bitmap = atomic_rcu_read(&migration_bitmap_rcu)->bmap;
> +    unsigned long *bitmap = atomic_rcu_read(&ram_state.ram_bitmap)->bmap;
>  
>      QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
>          unsigned long first = block->offset >> TARGET_PAGE_BITS;
> @@ -1591,7 +1596,7 @@ static int postcopy_send_discard_bm_ram(MigrationState *ms,
>      unsigned long current;
>      unsigned long *unsentmap;
>  
> -    unsentmap = atomic_rcu_read(&migration_bitmap_rcu)->unsentmap;
> +    unsentmap = atomic_rcu_read(&ram_state.ram_bitmap)->unsentmap;
>      for (current = start; current < end; ) {
>          unsigned long one = find_next_bit(unsentmap, end, current);
>  
> @@ -1680,8 +1685,8 @@ static void postcopy_chunk_hostpages_pass(MigrationState *ms, bool unsent_pass,
>          return;
>      }
>  
> -    bitmap = atomic_rcu_read(&migration_bitmap_rcu)->bmap;
> -    unsentmap = atomic_rcu_read(&migration_bitmap_rcu)->unsentmap;
> +    bitmap = atomic_rcu_read(&ram_state.ram_bitmap)->bmap;
> +    unsentmap = atomic_rcu_read(&ram_state.ram_bitmap)->unsentmap;
>  
>      if (unsent_pass) {
>          /* Find a sent page */
> @@ -1836,7 +1841,7 @@ int ram_postcopy_send_discard_bitmap(MigrationState *ms)
>      /* This should be our last sync, the src is now paused */
>      migration_bitmap_sync(&ram_state);
>  
> -    unsentmap = atomic_rcu_read(&migration_bitmap_rcu)->unsentmap;
> +    unsentmap = atomic_rcu_read(&ram_state.ram_bitmap)->unsentmap;
>      if (!unsentmap) {
>          /* We don't have a safe way to resize the sentmap, so
>           * if the bitmap was resized it will be NULL at this
> @@ -1857,7 +1862,7 @@ int ram_postcopy_send_discard_bitmap(MigrationState *ms)
>      /*
>       * Update the unsentmap to be unsentmap = unsentmap | dirty
>       */
> -    bitmap = atomic_rcu_read(&migration_bitmap_rcu)->bmap;
> +    bitmap = atomic_rcu_read(&ram_state.ram_bitmap)->bmap;
>      bitmap_or(unsentmap, unsentmap, bitmap,
>                 last_ram_offset() >> TARGET_PAGE_BITS);
>  
> @@ -1950,16 +1955,16 @@ static int ram_state_init(RAMState *rs)
>      bytes_transferred = 0;
>      ram_state_reset(rs);
>  
> -    migration_bitmap_rcu = g_new0(struct BitmapRcu, 1);
> +    rs->ram_bitmap = g_new0(struct RAMBitmap, 1);
>      /* Skip setting bitmap if there is no RAM */
>      if (ram_bytes_total()) {
>          ram_bitmap_pages = last_ram_offset() >> TARGET_PAGE_BITS;
> -        migration_bitmap_rcu->bmap = bitmap_new(ram_bitmap_pages);
> -        bitmap_set(migration_bitmap_rcu->bmap, 0, ram_bitmap_pages);
> +        rs->ram_bitmap->bmap = bitmap_new(ram_bitmap_pages);
> +        bitmap_set(rs->ram_bitmap->bmap, 0, ram_bitmap_pages);
>  
>          if (migrate_postcopy_ram()) {
> -            migration_bitmap_rcu->unsentmap = bitmap_new(ram_bitmap_pages);
> -            bitmap_set(migration_bitmap_rcu->unsentmap, 0, ram_bitmap_pages);
> +            rs->ram_bitmap->unsentmap = bitmap_new(ram_bitmap_pages);
> +            bitmap_set(rs->ram_bitmap->unsentmap, 0, ram_bitmap_pages);
>          }
>      }
>  
> -- 
> 2.9.3
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [Qemu-devel] [PATCH 25/31] ram: Use the RAMState bytes_transferred parameter
  2017-03-15 13:50 ` [Qemu-devel] [PATCH 25/31] ram: Use the RAMState bytes_transferred parameter Juan Quintela
@ 2017-03-17  9:57   ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 68+ messages in thread
From: Dr. David Alan Gilbert @ 2017-03-17  9:57 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel

* Juan Quintela (quintela@redhat.com) wrote:
> In some places it was passed by reference; just use it from RAMState.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  migration/ram.c | 77 ++++++++++++++++++++-------------------------------------
>  1 file changed, 27 insertions(+), 50 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index f9933b2..9c9533d 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -477,12 +477,10 @@ static void xbzrle_cache_zero_page(RAMState *rs, ram_addr_t current_addr)
>   * @block: block that contains the page we want to send
>   * @offset: offset inside the block for the page
>   * @last_stage: if we are at the completion stage
> - * @bytes_transferred: increase it with the number of transferred bytes
>   */
>  static int save_xbzrle_page(QEMUFile *f, RAMState *rs, uint8_t **current_data,
>                              ram_addr_t current_addr, RAMBlock *block,
> -                            ram_addr_t offset, bool last_stage,
> -                            uint64_t *bytes_transferred)
> +                            ram_addr_t offset, bool last_stage)
>  {
>      int encoded_len = 0, bytes_xbzrle;
>      uint8_t *prev_cached_page;
> @@ -538,7 +536,7 @@ static int save_xbzrle_page(QEMUFile *f, RAMState *rs, uint8_t **current_data,
>      bytes_xbzrle += encoded_len + 1 + 2;
>      rs->xbzrle_pages++;
>      rs->xbzrle_bytes += bytes_xbzrle;
> -    *bytes_transferred += bytes_xbzrle;
> +    rs->bytes_transferred += bytes_xbzrle;
>  
>      return 1;
>  }
> @@ -701,20 +699,18 @@ static void migration_bitmap_sync(RAMState *rs)
>   * @block: block that contains the page we want to send
>   * @offset: offset inside the block for the page
>   * @p: pointer to the page
> - * @bytes_transferred: increase it with the number of transferred bytes
>   */
>  static int save_zero_page(RAMState *rs, QEMUFile *f, RAMBlock *block,
> -                          ram_addr_t offset,
> -                          uint8_t *p, uint64_t *bytes_transferred)
> +                          ram_addr_t offset, uint8_t *p)
>  {
>      int pages = -1;
>  
>      if (is_zero_range(p, TARGET_PAGE_SIZE)) {
>          rs->zero_pages++;
> -        *bytes_transferred += save_page_header(f, block,
> -                                               offset | RAM_SAVE_FLAG_COMPRESS);
> +        rs->bytes_transferred += save_page_header(f, block,
> +                                                  offset | RAM_SAVE_FLAG_COMPRESS);
>          qemu_put_byte(f, 0);
> -        *bytes_transferred += 1;
> +        rs->bytes_transferred += 1;
>          pages = 1;
>      }
>  
> @@ -745,11 +741,9 @@ static void ram_release_pages(MigrationState *ms, const char *block_name,
>   * @block: block that contains the page we want to send
>   * @offset: offset inside the block for the page
>   * @last_stage: if we are at the completion stage
> - * @bytes_transferred: increase it with the number of transferred bytes
>   */
>  static int ram_save_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
> -                         PageSearchStatus *pss, bool last_stage,
> -                         uint64_t *bytes_transferred)
> +                         PageSearchStatus *pss, bool last_stage)
>  {
>      int pages = -1;
>      uint64_t bytes_xmit;
> @@ -767,7 +761,7 @@ static int ram_save_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
>      ret = ram_control_save_page(f, block->offset,
>                             offset, TARGET_PAGE_SIZE, &bytes_xmit);
>      if (bytes_xmit) {
> -        *bytes_transferred += bytes_xmit;
> +        rs->bytes_transferred += bytes_xmit;
>          pages = 1;
>      }
>  
> @@ -787,7 +781,7 @@ static int ram_save_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
>              }
>          }
>      } else {
> -        pages = save_zero_page(rs, f, block, offset, p, bytes_transferred);
> +        pages = save_zero_page(rs, f, block, offset, p);
>          if (pages > 0) {
>              /* Must let xbzrle know, otherwise a previous (now 0'd) cached
>               * page would be stale
> @@ -797,7 +791,7 @@ static int ram_save_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
>          } else if (!rs->ram_bulk_stage &&
>                     !migration_in_postcopy(ms) && migrate_use_xbzrle()) {
>              pages = save_xbzrle_page(f, rs, &p, current_addr, block,
> -                                     offset, last_stage, bytes_transferred);
> +                                     offset, last_stage);
>              if (!last_stage) {
>                  /* Can't send this cached data async, since the cache page
>                   * might get updated before it gets to the wire
> @@ -809,7 +803,7 @@ static int ram_save_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
>  
>      /* XBZRLE overflow or normal page */
>      if (pages == -1) {
> -        *bytes_transferred += save_page_header(f, block,
> +        rs->bytes_transferred += save_page_header(f, block,
>                                                 offset | RAM_SAVE_FLAG_PAGE);
>          if (send_async) {
>              qemu_put_buffer_async(f, p, TARGET_PAGE_SIZE,
> @@ -818,7 +812,7 @@ static int ram_save_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
>          } else {
>              qemu_put_buffer(f, p, TARGET_PAGE_SIZE);
>          }
> -        *bytes_transferred += TARGET_PAGE_SIZE;
> +        rs->bytes_transferred += TARGET_PAGE_SIZE;
>          pages = 1;
>          rs->norm_pages++;
>      }
> @@ -886,8 +880,7 @@ static inline void set_compress_params(CompressParam *param, RAMBlock *block,
>  }
>  
>  static int compress_page_with_multi_thread(RAMState *rs, QEMUFile *f,
> -                                           RAMBlock *block, ram_addr_t offset,
> -                                           uint64_t *bytes_transferred)
> +                                           RAMBlock *block, ram_addr_t offset)
>  {
>      int idx, thread_count, bytes_xmit = -1, pages = -1;
>  
> @@ -904,7 +897,7 @@ static int compress_page_with_multi_thread(RAMState *rs, QEMUFile *f,
>                  qemu_mutex_unlock(&comp_param[idx].mutex);
>                  pages = 1;
>                  rs->norm_pages++;
> -                *bytes_transferred += bytes_xmit;
> +                rs->bytes_transferred += bytes_xmit;
>                  break;
>              }
>          }
> @@ -930,12 +923,10 @@ static int compress_page_with_multi_thread(RAMState *rs, QEMUFile *f,
>   * @block: block that contains the page we want to send
>   * @offset: offset inside the block for the page
>   * @last_stage: if we are at the completion stage
> - * @bytes_transferred: increase it with the number of transferred bytes
>   */
>  static int ram_save_compressed_page(RAMState *rs, MigrationState *ms,
>                                      QEMUFile *f,
> -                                    PageSearchStatus *pss, bool last_stage,
> -                                    uint64_t *bytes_transferred)
> +                                    PageSearchStatus *pss, bool last_stage)
>  {
>      int pages = -1;
>      uint64_t bytes_xmit = 0;
> @@ -949,7 +940,7 @@ static int ram_save_compressed_page(RAMState *rs, MigrationState *ms,
>      ret = ram_control_save_page(f, block->offset,
>                                  offset, TARGET_PAGE_SIZE, &bytes_xmit);
>      if (bytes_xmit) {
> -        *bytes_transferred += bytes_xmit;
> +        rs->bytes_transferred += bytes_xmit;
>          pages = 1;
>      }
>      if (ret != RAM_SAVE_CONTROL_NOT_SUPP) {
> @@ -969,7 +960,7 @@ static int ram_save_compressed_page(RAMState *rs, MigrationState *ms,
>           */
>          if (block != rs->last_sent_block) {
>              flush_compressed_data(rs, f);
> -            pages = save_zero_page(rs, f, block, offset, p, bytes_transferred);
> +            pages = save_zero_page(rs, f, block, offset, p);
>              if (pages == -1) {
>                  /* Make sure the first page is sent out before other pages */
>                  bytes_xmit = save_page_header(f, block, offset |
> @@ -977,7 +968,7 @@ static int ram_save_compressed_page(RAMState *rs, MigrationState *ms,
>                  blen = qemu_put_compression_data(f, p, TARGET_PAGE_SIZE,
>                                                   migrate_compress_level());
>                  if (blen > 0) {
> -                    *bytes_transferred += bytes_xmit + blen;
> +                    rs->bytes_transferred += bytes_xmit + blen;
>                      rs->norm_pages++;
>                      pages = 1;
>                  } else {
> @@ -990,10 +981,9 @@ static int ram_save_compressed_page(RAMState *rs, MigrationState *ms,
>              }
>          } else {
>              offset |= RAM_SAVE_FLAG_CONTINUE;
> -            pages = save_zero_page(rs, f, block, offset, p, bytes_transferred);
> +            pages = save_zero_page(rs, f, block, offset, p);
>              if (pages == -1) {
> -                pages = compress_page_with_multi_thread(rs, f, block, offset,
> -                                                        bytes_transferred);
> +                pages = compress_page_with_multi_thread(rs, f, block, offset);
>              } else {
>                  ram_release_pages(ms, block->idstr, pss->offset, pages);
>              }
> @@ -1256,7 +1246,6 @@ err:
>   * @block: pointer to block that contains the page we want to send
>   * @offset: offset inside the block for the page;
>   * @last_stage: if we are at the completion stage
> - * @bytes_transferred: increase it with the number of transferred bytes
>   * @dirty_ram_abs: Address of the start of the dirty page in ram_addr_t space
>   *
>   * Returns: Number of pages written.
> @@ -1264,7 +1253,6 @@ err:
>  static int ram_save_target_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
>                                  PageSearchStatus *pss,
>                                  bool last_stage,
> -                                uint64_t *bytes_transferred,
>                                  ram_addr_t dirty_ram_abs)
>  {
>      int res = 0;
> @@ -1273,12 +1261,9 @@ static int ram_save_target_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
>      if (migration_bitmap_clear_dirty(rs, dirty_ram_abs)) {
>          unsigned long *unsentmap;
>          if (compression_switch && migrate_use_compression()) {
> -            res = ram_save_compressed_page(rs, ms, f, pss,
> -                                           last_stage,
> -                                           bytes_transferred);
> +            res = ram_save_compressed_page(rs, ms, f, pss, last_stage);
>          } else {
> -            res = ram_save_page(rs, ms, f, pss, last_stage,
> -                                bytes_transferred);
> +            res = ram_save_page(rs, ms, f, pss, last_stage);
>          }
>  
>          if (res < 0) {
> @@ -1317,21 +1302,18 @@ static int ram_save_target_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
>   * @offset: offset inside the block for the page; updated to last target page
>   *          sent
>   * @last_stage: if we are at the completion stage
> - * @bytes_transferred: increase it with the number of transferred bytes
>   * @dirty_ram_abs: Address of the start of the dirty page in ram_addr_t space
>   */
>  static int ram_save_host_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
>                                PageSearchStatus *pss,
>                                bool last_stage,
> -                              uint64_t *bytes_transferred,
>                                ram_addr_t dirty_ram_abs)
>  {
>      int tmppages, pages = 0;
>      size_t pagesize = qemu_ram_pagesize(pss->block);
>  
>      do {
> -        tmppages = ram_save_target_page(rs, ms, f, pss, last_stage,
> -                                        bytes_transferred, dirty_ram_abs);
> +        tmppages = ram_save_target_page(rs, ms, f, pss, last_stage, dirty_ram_abs);
>          if (tmppages < 0) {
>              return tmppages;
>          }
> @@ -1357,14 +1339,12 @@ static int ram_save_host_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
>   * @rs: The RAM state
>   * @f: QEMUFile where to send the data
>   * @last_stage: if we are at the completion stage
> - * @bytes_transferred: increase it with the number of transferred bytes
>   *
>   * On systems where host-page-size > target-page-size it will send all the
>   * pages in a host page that are dirty.
>   */
>  
> -static int ram_find_and_save_block(RAMState *rs, QEMUFile *f, bool last_stage,
> -                                   uint64_t *bytes_transferred)
> +static int ram_find_and_save_block(RAMState *rs, QEMUFile *f, bool last_stage)
>  {
>      PageSearchStatus pss;
>      MigrationState *ms = migrate_get_current();
> @@ -1396,9 +1376,7 @@ static int ram_find_and_save_block(RAMState *rs, QEMUFile *f, bool last_stage,
>          }
>  
>          if (found) {
> -            pages = ram_save_host_page(rs, ms, f, &pss,
> -                                       last_stage, bytes_transferred,
> -                                       dirty_ram_abs);
> +            pages = ram_save_host_page(rs, ms, f, &pss, last_stage, dirty_ram_abs);
>          }
>      } while (!pages && again);
>  
> @@ -2046,7 +2024,7 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
>      while ((ret = qemu_file_rate_limit(f)) == 0) {
>          int pages;
>  
> -        pages = ram_find_and_save_block(rs, f, false, &rs->bytes_transferred);
> +        pages = ram_find_and_save_block(rs, f, false);
>          /* no more pages to sent */
>          if (pages == 0) {
>              done = 1;
> @@ -2107,8 +2085,7 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
>      while (true) {
>          int pages;
>  
> -        pages = ram_find_and_save_block(rs, f, !migration_in_colo_state(),
> -                                        &rs->bytes_transferred);
> +        pages = ram_find_and_save_block(rs, f, !migration_in_colo_state());
>          /* no more blocks to sent */
>          if (pages == 0) {
>              break;
> -- 
> 2.9.3
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

^ permalink raw reply	[flat|nested] 68+ messages in thread
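
To make the shape of that change concrete outside the diff context, here is a
minimal sketch of the before/after accounting pattern.  The struct is reduced
to a stand-in and the helper names (account_send_old/new) are invented for
illustration; only RAMState and bytes_transferred come from the series itself.

    #include <stddef.h>
    #include <stdint.h>

    /* Stand-in for the real RAMState in migration/ram.c */
    typedef struct RAMState {
        uint64_t bytes_transferred;   /* total number of bytes transferred */
    } RAMState;

    /* Before: every sender threaded the counter through as a pointer. */
    static void account_send_old(uint64_t *bytes_transferred, size_t len)
    {
        *bytes_transferred += len;
    }

    /* After: the counter lives in RAMState, so the extra parameter goes away. */
    static void account_send_new(RAMState *rs, size_t len)
    {
        rs->bytes_transferred += len;
    }

The same mechanical transformation is what removes the bytes_transferred
parameter from save_zero_page(), ram_save_page() and friends in the hunks
above.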

* Re: [Qemu-devel] [PATCH 27/31] ram: Move last_req_rb to RAMState
  2017-03-15 13:50 ` [Qemu-devel] [PATCH 27/31] ram: Move last_req_rb to RAMState Juan Quintela
@ 2017-03-17 10:14   ` Dr. David Alan Gilbert
  2017-03-20 20:13     ` Juan Quintela
  0 siblings, 1 reply; 68+ messages in thread
From: Dr. David Alan Gilbert @ 2017-03-17 10:14 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel

* Juan Quintela (quintela@redhat.com) wrote:
> It was on MigrationState even though it is only used inside ram.c for
> postcopy.  The problem is that we need to access it from places that
> cannot be passed RAMState directly.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> ---
>  include/migration/migration.h | 2 --
>  migration/migration.c         | 1 -
>  migration/ram.c               | 6 ++++--
>  3 files changed, 4 insertions(+), 5 deletions(-)
> 
> diff --git a/include/migration/migration.h b/include/migration/migration.h
> index 84cef4b..e032fb0 100644
> --- a/include/migration/migration.h
> +++ b/include/migration/migration.h
> @@ -189,8 +189,6 @@ struct MigrationState
>      /* Queue of outstanding page requests from the destination */
>      QemuMutex src_page_req_mutex;
>      QSIMPLEQ_HEAD(src_page_requests, MigrationSrcPageRequest) src_page_requests;
> -    /* The RAMBlock used in the last src_page_request */
> -    RAMBlock *last_req_rb;

Should this be kept together with src_page_req_mutex and src_page_requests?

Dave

>      /* The semaphore is used to notify COLO thread that failover is finished */
>      QemuSemaphore colo_exit_sem;
>  
> diff --git a/migration/migration.c b/migration/migration.c
> index 46645b6..4f19382 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -1114,7 +1114,6 @@ MigrationState *migrate_init(const MigrationParams *params)
>      s->postcopy_after_devices = false;
>      s->postcopy_requests = 0;
>      s->migration_thread_running = false;
> -    s->last_req_rb = NULL;
>      error_free(s->error);
>      s->error = NULL;
>  
> diff --git a/migration/ram.c b/migration/ram.c
> index e7db39c..50ca1da 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -197,6 +197,8 @@ struct RAMState {
>      QemuMutex bitmap_mutex;
>      /* Ram Bitmap protected by RCU */
>      RAMBitmap *ram_bitmap;
> +    /* The RAMBlock used in the last src_page_request */
> +    RAMBlock *last_req_rb;
>  };
>  typedef struct RAMState RAMState;
>  
> @@ -1190,7 +1192,7 @@ int ram_save_queue_pages(MigrationState *ms, const char *rbname,
>      rcu_read_lock();
>      if (!rbname) {
>          /* Reuse last RAMBlock */
> -        ramblock = ms->last_req_rb;
> +        ramblock = ram_state.last_req_rb;
>  
>          if (!ramblock) {
>              /*
> @@ -1208,7 +1210,7 @@ int ram_save_queue_pages(MigrationState *ms, const char *rbname,
>              error_report("ram_save_queue_pages no block '%s'", rbname);
>              goto err;
>          }
> -        ms->last_req_rb = ramblock;
> +        ram_state.last_req_rb = ramblock;
>      }
>      trace_ram_save_queue_pages(ramblock->idstr, start, len);
>      if (start+len > ramblock->used_length) {
> -- 
> 2.9.3
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [Qemu-devel] [PATCH 29/31] ram: Remove dirty_bytes_rate
  2017-03-15 13:50 ` [Qemu-devel] [PATCH 29/31] ram: Remove dirty_bytes_rate Juan Quintela
@ 2017-03-17 10:21   ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 68+ messages in thread
From: Dr. David Alan Gilbert @ 2017-03-17 10:21 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel

* Juan Quintela (quintela@redhat.com) wrote:
> It can be recalculated from dirty_pages_rate.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> ---
>  include/migration/migration.h | 1 -
>  migration/migration.c         | 5 ++---
>  migration/ram.c               | 1 -
>  3 files changed, 2 insertions(+), 5 deletions(-)
> 
> diff --git a/include/migration/migration.h b/include/migration/migration.h
> index 54a1a4f..42b9edf 100644
> --- a/include/migration/migration.h
> +++ b/include/migration/migration.h
> @@ -167,7 +167,6 @@ struct MigrationState
>      int64_t downtime;
>      int64_t expected_downtime;
>      int64_t dirty_pages_rate;
> -    int64_t dirty_bytes_rate;
>      bool enabled_capabilities[MIGRATION_CAPABILITY__MAX];
>      int64_t xbzrle_cache_size;
>      int64_t setup_time;
> diff --git a/migration/migration.c b/migration/migration.c
> index 09d02be..2f8c440 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -1107,7 +1107,6 @@ MigrationState *migrate_init(const MigrationParams *params)
>      s->downtime = 0;
>      s->expected_downtime = 0;
>      s->dirty_pages_rate = 0;
> -    s->dirty_bytes_rate = 0;
>      s->setup_time = 0;
>      s->start_postcopy = false;
>      s->postcopy_after_devices = false;
> @@ -1999,8 +1998,8 @@ static void *migration_thread(void *opaque)
>                                        bandwidth, max_size);
>              /* if we haven't sent anything, we don't want to recalculate
>                 10000 is a small enough number for our purposes */
> -            if (s->dirty_bytes_rate && transferred_bytes > 10000) {
> -                s->expected_downtime = s->dirty_bytes_rate / bandwidth;
> +            if (s->dirty_pages_rate && transferred_bytes > 10000) {
> +                s->expected_downtime = s->dirty_pages_rate * (1ul << qemu_target_page_bits())/ bandwidth;

The line got a bit long, please wrap.

Other than that,

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

>              }
>  
>              qemu_file_reset_rate_limit(s->to_dst_file);
> diff --git a/migration/ram.c b/migration/ram.c
> index 4563e3d..1006e60 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -687,7 +687,6 @@ static void migration_bitmap_sync(RAMState *rs)
>          }
>          s->dirty_pages_rate = rs->num_dirty_pages_period * 1000
>              / (end_time - rs->start_time);
> -        s->dirty_bytes_rate = s->dirty_pages_rate * TARGET_PAGE_SIZE;
>          rs->start_time = end_time;
>          rs->num_dirty_pages_period = 0;
>      }
> -- 
> 2.9.3
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

^ permalink raw reply	[flat|nested] 68+ messages in thread
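
For reference, one possible wrapping of the flagged line, taken from the hunk
above with only the line break changed (purely illustrative; the final
formatting is up to the author):

    if (s->dirty_pages_rate && transferred_bytes > 10000) {
        s->expected_downtime = s->dirty_pages_rate *
            (1ul << qemu_target_page_bits()) / bandwidth;
    }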

* Re: [Qemu-devel] [PATCH 30/31] ram: move dirty_pages_rate to RAMState
  2017-03-15 13:50 ` [Qemu-devel] [PATCH 30/31] ram: move dirty_pages_rate to RAMState Juan Quintela
@ 2017-03-17 10:45   ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 68+ messages in thread
From: Dr. David Alan Gilbert @ 2017-03-17 10:45 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel

* Juan Quintela (quintela@redhat.com) wrote:
> Treat it like the rest of ram stats counters.  Export its value the
> same way.  As an added bonus, no more MigrationState used in
> migration_bitmap_sync();
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  include/migration/migration.h |  2 +-
>  migration/migration.c         |  7 +++----
>  migration/ram.c               | 12 +++++++++---
>  3 files changed, 13 insertions(+), 8 deletions(-)
> 
> diff --git a/include/migration/migration.h b/include/migration/migration.h
> index 42b9edf..43bdf86 100644
> --- a/include/migration/migration.h
> +++ b/include/migration/migration.h
> @@ -166,7 +166,6 @@ struct MigrationState
>      int64_t total_time;
>      int64_t downtime;
>      int64_t expected_downtime;
> -    int64_t dirty_pages_rate;
>      bool enabled_capabilities[MIGRATION_CAPABILITY__MAX];
>      int64_t xbzrle_cache_size;
>      int64_t setup_time;
> @@ -269,6 +268,7 @@ uint64_t ram_bytes_remaining(void);
>  uint64_t ram_bytes_transferred(void);
>  uint64_t ram_bytes_total(void);
>  uint64_t ram_dirty_sync_count(void);
> +uint64_t ram_dirty_pages_rate(void);
>  void free_xbzrle_decoded_buf(void);
>  
>  void acct_update_position(QEMUFile *f, size_t size, bool zero);
> diff --git a/migration/migration.c b/migration/migration.c
> index 2f8c440..0a70d55 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -650,7 +650,7 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
>  
>      if (s->state != MIGRATION_STATUS_COMPLETED) {
>          info->ram->remaining = ram_bytes_remaining();
> -        info->ram->dirty_pages_rate = s->dirty_pages_rate;
> +        info->ram->dirty_pages_rate = ram_dirty_pages_rate();
>      }
>  }
>  
> @@ -1106,7 +1106,6 @@ MigrationState *migrate_init(const MigrationParams *params)
>      s->mbps = 0.0;
>      s->downtime = 0;
>      s->expected_downtime = 0;
> -    s->dirty_pages_rate = 0;
>      s->setup_time = 0;
>      s->start_postcopy = false;
>      s->postcopy_after_devices = false;
> @@ -1998,8 +1997,8 @@ static void *migration_thread(void *opaque)
>                                        bandwidth, max_size);
>              /* if we haven't sent anything, we don't want to recalculate
>                 10000 is a small enough number for our purposes */
> -            if (s->dirty_pages_rate && transferred_bytes > 10000) {
> -                s->expected_downtime = s->dirty_pages_rate * (1ul << qemu_target_page_bits())/ bandwidth;
> +            if (ram_dirty_pages_rate() && transferred_bytes > 10000) {
> +                s->expected_downtime = ram_dirty_pages_rate() * (1ul << qemu_target_page_bits())/ bandwidth;
>              }
>  
>              qemu_file_reset_rate_limit(s->to_dst_file);
> diff --git a/migration/ram.c b/migration/ram.c
> index 1006e60..b85f58f 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -193,6 +193,8 @@ struct RAMState {
>      uint64_t migration_dirty_pages;
>      /* total number of bytes transferred */
>      uint64_t bytes_transferred;
> +    /* number of dirtied pages in the last second */
> +    uint64_t dirty_pages_rate;
>      /* protects modification of the bitmap */
>      QemuMutex bitmap_mutex;
>      /* Ram Bitmap protected by RCU */
> @@ -254,6 +256,11 @@ uint64_t ram_dirty_sync_count(void)
>      return ram_state.bitmap_sync_count;
>  }
>  
> +uint64_t ram_dirty_pages_rate(void)
> +{
> +    return ram_state.dirty_pages_rate;
> +}
> +
>  /* used by the search for pages to send */
>  struct PageSearchStatus {
>      /* Current block being searched */
> @@ -624,7 +631,6 @@ static void migration_bitmap_sync(RAMState *rs)
>  {
>      RAMBlock *block;
>      uint64_t num_dirty_pages_init = rs->migration_dirty_pages;
> -    MigrationState *s = migrate_get_current();
>      int64_t end_time;
>      int64_t bytes_xfer_now;
>  
> @@ -664,7 +670,7 @@ static void migration_bitmap_sync(RAMState *rs)
>                 throttling */
>              bytes_xfer_now = ram_bytes_transferred();
>  
> -            if (s->dirty_pages_rate &&
> +            if (rs->dirty_pages_rate &&
>                 (rs->num_dirty_pages_period * TARGET_PAGE_SIZE >
>                     (bytes_xfer_now - rs->bytes_xfer_prev)/2) &&
>                 (rs->dirty_rate_high_cnt++ >= 2)) {
> @@ -685,7 +691,7 @@ static void migration_bitmap_sync(RAMState *rs)
>              rs->iterations_prev = rs->iterations;
>              rs->xbzrle_cache_miss_prev = rs->xbzrle_cache_miss;
>          }
> -        s->dirty_pages_rate = rs->num_dirty_pages_period * 1000
> +        rs->dirty_pages_rate = rs->num_dirty_pages_period * 1000
>              / (end_time - rs->start_time);
>          rs->start_time = end_time;
>          rs->num_dirty_pages_period = 0;
> -- 
> 2.9.3
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

^ permalink raw reply	[flat|nested] 68+ messages in thread
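
The accessor pattern introduced here (and used for the other counters earlier
in the series) is small enough to show on its own.  A sketch with the real
structures reduced to stand-ins; ram_state, dirty_pages_rate and
ram_dirty_pages_rate() are the names from the diff above:

    #include <stdint.h>

    /* Stand-in for the real RAMState in migration/ram.c */
    typedef struct RAMState {
        uint64_t dirty_pages_rate;  /* number of dirtied pages in the last second */
    } RAMState;

    static RAMState ram_state;

    /* ram.c keeps the counter private and only exports a read accessor, so
     * migration.c no longer needs its own dirty_pages_rate field. */
    uint64_t ram_dirty_pages_rate(void)
    {
        return ram_state.dirty_pages_rate;
    }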

* Re: [Qemu-devel] [PATCH 01/31] ram: move more fields into RAMState
  2017-03-16 12:09   ` Dr. David Alan Gilbert
  2017-03-16 21:32     ` Philippe Mathieu-Daudé
@ 2017-03-20 19:36     ` Juan Quintela
  1 sibling, 0 replies; 68+ messages in thread
From: Juan Quintela @ 2017-03-20 19:36 UTC (permalink / raw)
  To: Dr. David Alan Gilbert; +Cc: qemu-devel

"Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote:
> * Juan Quintela (quintela@redhat.com) wrote:
>> last_seen_block, last_sent_block, last_offset, last_version and
>> ram_bulk_stage are globals that are really related together.
>> 
>> Signed-off-by: Juan Quintela <quintela@redhat.com>
>> ---
>>  migration/ram.c | 136 ++++++++++++++++++++++++++++++++------------------------
>>  1 file changed, 79 insertions(+), 57 deletions(-)
>> 
>> diff --git a/migration/ram.c b/migration/ram.c
>> index 719425b..c20a539 100644
>> --- a/migration/ram.c
>> +++ b/migration/ram.c
>> @@ -136,6 +136,23 @@ out:
>>      return ret;
>>  }
>>  
>> +/* State of RAM for migration */
>> +struct RAMState {
>> +    /* Last block that we have visited searching for dirty pages */
>> +    RAMBlock    *last_seen_block;
>> +    /* Last block from where we have sent data */
>> +    RAMBlock *last_sent_block;
>> +    /* Last offeset we have sent data from */
>                   ^
>                   One extra e
>
> Other than that (and the minor formatting things the bot found)

fixed, thanks.

>
> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [Qemu-devel] [PATCH 02/31] ram: Add dirty_rate_high_cnt to RAMState
  2017-03-16 12:20   ` Dr. David Alan Gilbert
  2017-03-16 21:32     ` Philippe Mathieu-Daudé
@ 2017-03-20 19:39     ` Juan Quintela
  1 sibling, 0 replies; 68+ messages in thread
From: Juan Quintela @ 2017-03-20 19:39 UTC (permalink / raw)
  To: Dr. David Alan Gilbert; +Cc: qemu-devel, amit.shah

"Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote:
> * Juan Quintela (quintela@redhat.com) wrote:
>> We need to add a parameter to several functions to make this work.
>> 
>> Signed-off-by: Juan Quintela <quintela@redhat.com>

[...]

> Is that undoing false spaces from the previous patch?

Yes O:-)

>
> anyway,
> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

Thanks.

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [Qemu-devel] [PATCH 23/31] ram: Move migration_bitmap_rcu into RAMState
  2017-03-17  9:51   ` Dr. David Alan Gilbert
@ 2017-03-20 20:10     ` Juan Quintela
  0 siblings, 0 replies; 68+ messages in thread
From: Juan Quintela @ 2017-03-20 20:10 UTC (permalink / raw)
  To: Dr. David Alan Gilbert; +Cc: qemu-devel

"Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote:
> * Juan Quintela (quintela@redhat.com) wrote:
>> Once there, rename the type to be shorter.
>> 
>> Signed-off-by: Juan Quintela <quintela@redhat.com>
>> ---
>>  migration/ram.c | 79 ++++++++++++++++++++++++++++++---------------------------
>>  1 file changed, 42 insertions(+), 37 deletions(-)
>> 
>> diff --git a/migration/ram.c b/migration/ram.c
>> index c14293c..d39d185 100644
>> --- a/migration/ram.c
>> +++ b/migration/ram.c
>> @@ -132,6 +132,19 @@ out:
>>      return ret;
>>  }
>>  
>> +struct RAMBitmap {
>> +    struct rcu_head rcu;
>> +    /* Main migration bitmap */
>> +    unsigned long *bmap;
>> +    /* bitmap of pages that haven't been sent even once
>> +     * only maintained and used in postcopy at the moment
>> +     * where it's used to send the dirtymap at the start
>> +     * of the postcopy phase
>> +     */
>> +    unsigned long *unsentmap;
>> +};
>> +typedef struct RAMBitmap RAMBitmap;
>> +
>
> I'm OK with this; although I can see the idea of naming it BitmapRcu,
> given that the actual bmap is inside that and most of the rest of the type
> is just the rcu wrapper.

It is the type, and now it also has the unsentmap.

  atomic_rcu_read(&ram_state.bitmap_rcu)->bmap

ends up getting really long quite fast :p

> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

Thanks, Juan.

^ permalink raw reply	[flat|nested] 68+ messages in thread
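
On the "really long" point: the hunks earlier in the thread mostly keep this
readable by pulling the RCU-read pointer members into locals once, inside the
existing rcu_read_lock() sections, before using them.  A condensed sketch of
that pattern (page is a placeholder index; everything else is named as in this
patch):

    unsigned long *bitmap    = atomic_rcu_read(&rs->ram_bitmap)->bmap;
    unsigned long *unsentmap = atomic_rcu_read(&rs->ram_bitmap)->unsentmap;
    int dirty;

    dirty = test_bit(page, bitmap);
    if (unsentmap) {
        clear_bit(page, unsentmap);
    }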

* Re: [Qemu-devel] [PATCH 27/31] ram: Move last_req_rb to RAMState
  2017-03-17 10:14   ` Dr. David Alan Gilbert
@ 2017-03-20 20:13     ` Juan Quintela
  0 siblings, 0 replies; 68+ messages in thread
From: Juan Quintela @ 2017-03-20 20:13 UTC (permalink / raw)
  To: Dr. David Alan Gilbert; +Cc: qemu-devel

"Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote:
> * Juan Quintela (quintela@redhat.com) wrote:
>> It was on MigrationState even though it is only used inside ram.c for
>> postcopy.  The problem is that we need to access it from places that
>> cannot be passed RAMState directly.
>> 
>> Signed-off-by: Juan Quintela <quintela@redhat.com>
>> ---
>>  include/migration/migration.h | 2 --
>>  migration/migration.c         | 1 -
>>  migration/ram.c               | 6 ++++--
>>  3 files changed, 4 insertions(+), 5 deletions(-)
>> 
>> diff --git a/include/migration/migration.h b/include/migration/migration.h
>> index 84cef4b..e032fb0 100644
>> --- a/include/migration/migration.h
>> +++ b/include/migration/migration.h
>> @@ -189,8 +189,6 @@ struct MigrationState
>>      /* Queue of outstanding page requests from the destination */
>>      QemuMutex src_page_req_mutex;
>>      QSIMPLEQ_HEAD(src_page_requests, MigrationSrcPageRequest) src_page_requests;
>> -    /* The RAMBlock used in the last src_page_request */
>> -    RAMBlock *last_req_rb;
>
> Should this be kept together with src_page_req_mutex and src_page_requests?

Yes.

But I still have to use the global variable.

Will do for next version.

Thanks, Juan.

^ permalink raw reply	[flat|nested] 68+ messages in thread
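
If last_req_rb does end up moving together with the request queue, the grouping
Dave suggests would presumably look roughly like this inside RAMState (a sketch
only; whether src_page_req_mutex and src_page_requests move as well is exactly
what is being deferred to the next version):

    struct RAMState {
        /* ... existing fields ... */

        /* Queue of outstanding page requests from the destination */
        QemuMutex src_page_req_mutex;
        QSIMPLEQ_HEAD(src_page_requests, MigrationSrcPageRequest) src_page_requests;
        /* The RAMBlock used in the last src_page_request */
        RAMBlock *last_req_rb;
    };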

end of thread, other threads:[~2017-03-20 20:13 UTC | newest]

Thread overview: 68+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-03-15 13:49 [Qemu-devel] [PATCH 00/31] Creating RAMState for migration Juan Quintela
2017-03-15 13:49 ` [Qemu-devel] [PATCH 01/31] ram: move more fields into RAMState Juan Quintela
2017-03-16 12:09   ` Dr. David Alan Gilbert
2017-03-16 21:32     ` Philippe Mathieu-Daudé
2017-03-20 19:36     ` Juan Quintela
2017-03-15 13:49 ` [Qemu-devel] [PATCH 02/31] ram: Add dirty_rate_high_cnt to RAMState Juan Quintela
2017-03-16 12:20   ` Dr. David Alan Gilbert
2017-03-16 21:32     ` Philippe Mathieu-Daudé
2017-03-20 19:39     ` Juan Quintela
2017-03-15 13:49 ` [Qemu-devel] [PATCH 03/31] ram: move bitmap_sync_count into RAMState Juan Quintela
2017-03-16 12:21   ` Dr. David Alan Gilbert
2017-03-16 21:33     ` Philippe Mathieu-Daudé
2017-03-15 13:49 ` [Qemu-devel] [PATCH 04/31] ram: Move start time " Juan Quintela
2017-03-16 12:21   ` Dr. David Alan Gilbert
2017-03-16 21:33     ` Philippe Mathieu-Daudé
2017-03-15 13:49 ` [Qemu-devel] [PATCH 05/31] ram: Move bytes_xfer_prev " Juan Quintela
2017-03-16 12:22   ` Dr. David Alan Gilbert
2017-03-16 21:34     ` Philippe Mathieu-Daudé
2017-03-15 13:49 ` [Qemu-devel] [PATCH 06/31] ram: Move num_dirty_pages_period " Juan Quintela
2017-03-16 12:23   ` Dr. David Alan Gilbert
2017-03-16 21:35     ` Philippe Mathieu-Daudé
2017-03-15 13:49 ` [Qemu-devel] [PATCH 07/31] ram: Move xbzrle_cache_miss_prev " Juan Quintela
2017-03-16 12:24   ` Dr. David Alan Gilbert
2017-03-16 21:35     ` Philippe Mathieu-Daudé
2017-03-15 13:49 ` [Qemu-devel] [PATCH 08/31] ram: Move iterations_prev " Juan Quintela
2017-03-16 12:26   ` Dr. David Alan Gilbert
2017-03-16 21:36     ` Philippe Mathieu-Daudé
2017-03-15 13:49 ` [Qemu-devel] [PATCH 09/31] ram: Move dup_pages " Juan Quintela
2017-03-16 12:27   ` Dr. David Alan Gilbert
2017-03-15 13:50 ` [Qemu-devel] [PATCH 10/31] ram: Remove unused dump_mig_dbytes_transferred() Juan Quintela
2017-03-16 15:48   ` Dr. David Alan Gilbert
2017-03-15 13:50 ` [Qemu-devel] [PATCH 11/31] ram: Remove unused pages_skiped variable Juan Quintela
2017-03-16 15:52   ` Dr. David Alan Gilbert
2017-03-15 13:50 ` [Qemu-devel] [PATCH 12/31] ram: Move norm_pages to RAMState Juan Quintela
2017-03-16 16:09   ` Dr. David Alan Gilbert
2017-03-15 13:50 ` [Qemu-devel] [PATCH 13/31] ram: Remove norm_mig_bytes_transferred Juan Quintela
2017-03-16 16:14   ` Dr. David Alan Gilbert
2017-03-15 13:50 ` [Qemu-devel] [PATCH 14/31] ram: Move iterations into RAMState Juan Quintela
2017-03-16 20:04   ` Dr. David Alan Gilbert
2017-03-16 21:40     ` Philippe Mathieu-Daudé
2017-03-15 13:50 ` [Qemu-devel] [PATCH 15/31] ram: Move xbzrle_bytes " Juan Quintela
2017-03-15 13:50 ` [Qemu-devel] [PATCH 16/31] ram: Move xbzrle_pages " Juan Quintela
2017-03-15 13:50 ` [Qemu-devel] [PATCH 17/31] ram: Move xbzrle_cache_miss " Juan Quintela
2017-03-15 13:50 ` [Qemu-devel] [PATCH 18/31] ram: move xbzrle_cache_miss_rate " Juan Quintela
2017-03-15 13:50 ` [Qemu-devel] [PATCH 19/31] ram: move xbzrle_overflows " Juan Quintela
2017-03-16 20:07   ` Dr. David Alan Gilbert
2017-03-15 13:50 ` [Qemu-devel] [PATCH 20/31] ram: move migration_dirty_pages to RAMState Juan Quintela
2017-03-15 13:50 ` [Qemu-devel] [PATCH 21/31] ram: Everything was init to zero, so use memset Juan Quintela
2017-03-16 20:15   ` Dr. David Alan Gilbert
2017-03-15 13:50 ` [Qemu-devel] [PATCH 22/31] ram: move migration_bitmap_mutex into RAMState Juan Quintela
2017-03-16 20:21   ` Dr. David Alan Gilbert
2017-03-15 13:50 ` [Qemu-devel] [PATCH 23/31] ram: Move migration_bitmap_rcu " Juan Quintela
2017-03-17  9:51   ` Dr. David Alan Gilbert
2017-03-20 20:10     ` Juan Quintela
2017-03-15 13:50 ` [Qemu-devel] [PATCH 24/31] ram: Move bytes_transferred " Juan Quintela
2017-03-15 13:50 ` [Qemu-devel] [PATCH 25/31] ram: Use the RAMState bytes_transferred parameter Juan Quintela
2017-03-17  9:57   ` Dr. David Alan Gilbert
2017-03-15 13:50 ` [Qemu-devel] [PATCH 26/31] ram: Remove ram_save_remaining Juan Quintela
2017-03-15 13:50 ` [Qemu-devel] [PATCH 27/31] ram: Move last_req_rb to RAMState Juan Quintela
2017-03-17 10:14   ` Dr. David Alan Gilbert
2017-03-20 20:13     ` Juan Quintela
2017-03-15 13:50 ` [Qemu-devel] [PATCH 28/31] ram: Create ram_dirty_sync_count() Juan Quintela
2017-03-15 13:50 ` [Qemu-devel] [PATCH 29/31] ram: Remove dirty_bytes_rate Juan Quintela
2017-03-17 10:21   ` Dr. David Alan Gilbert
2017-03-15 13:50 ` [Qemu-devel] [PATCH 30/31] ram: move dirty_pages_rate to RAMState Juan Quintela
2017-03-17 10:45   ` Dr. David Alan Gilbert
2017-03-15 13:50 ` [Qemu-devel] [PATCH 31/31] ram: move postcopy_requests into RAMState Juan Quintela
2017-03-15 14:25 ` [Qemu-devel] [PATCH 00/31] Creating RAMState for migration no-reply
