From: zhanghailiang <zhang.zhanghailiang@huawei.com>
To: qemu-devel@nongnu.org
Cc: xiecl.fnst@cn.fujitsu.com, lizhijian@cn.fujitsu.com,
	quintela@redhat.com, yunhong.jiang@intel.com,
	eddie.dong@intel.com, peter.huangpeng@huawei.com,
	dgilbert@redhat.com,
	zhanghailiang <zhang.zhanghailiang@huawei.com>,
	arei.gonglei@huawei.com, stefanha@redhat.com,
	amit.shah@redhat.com, zhangchen.fnst@cn.fujitsu.com,
	hongyang.yang@easystack.cn
Subject: [Qemu-devel] [PATCH COLO-Frame v13 18/39] COLO: Flush PVM's cached RAM into SVM's memory
Date: Tue, 29 Dec 2015 15:09:14 +0800
Message-ID: <1451372975-5048-19-git-send-email-zhang.zhanghailiang@huawei.com>
In-Reply-To: <1451372975-5048-1-git-send-email-zhang.zhanghailiang@huawei.com>

While the VM is running, the PVM may dirty some pages; at the next checkpoint
we transfer those dirty pages to the SVM and store them in the SVM's RAM cache.
As a result, the content of the SVM's RAM cache always matches the PVM's memory
after a checkpoint.

Instead of flushing the entire RAM cache into the SVM's memory, we do this in a
more efficient way: only the pages dirtied by the PVM since the last checkpoint
are flushed. This is enough to keep the SVM's memory identical to the PVM's.

Besides, we must ensure that the RAM cache is flushed before the device state
is loaded.
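
For clarity, a minimal standalone sketch of the idea follows (plain C, not QEMU
code: the fixed page size, the helper names and the linear bitmap walk are
simplifications of the migration_bitmap_find_dirty()/migration_bitmap_clear_dirty()
loop in the actual patch below):

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define SKETCH_PAGE_SIZE 4096

/* Test whether 'page' is marked dirty in 'bitmap' and clear the bit. */
static bool test_and_clear_dirty(unsigned long *bitmap, size_t page)
{
    size_t bits = 8 * sizeof(unsigned long);
    unsigned long mask = 1UL << (page % bits);
    bool was_dirty = bitmap[page / bits] & mask;

    bitmap[page / bits] &= ~mask;
    return was_dirty;
}

/* Copy only the dirty pages from the cache into the live RAM buffer. */
static size_t flush_cache_to_ram(uint8_t *ram, const uint8_t *cache,
                                 unsigned long *bitmap, size_t nr_pages)
{
    size_t flushed = 0;

    for (size_t page = 0; page < nr_pages; page++) {
        if (test_and_clear_dirty(bitmap, page)) {
            memcpy(ram + page * SKETCH_PAGE_SIZE,
                   cache + page * SKETCH_PAGE_SIZE, SKETCH_PAGE_SIZE);
            flushed++;
        }
    }
    return flushed;
}

The real colo_flush_ram_cache() below does the same walk per RAMBlock under
RCU, reusing the migration dirty bitmap, and asserts that no dirty page is
left once the flush completes.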

Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
Signed-off-by: Gonglei <arei.gonglei@huawei.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
---
v12:
- Add a trace point at the end of colo_flush_ram_cache() (Dave's suggestion)
- Add Reviewed-by tag
v11:
- Move the place of 'need_flush' (Dave's suggestion)
- Remove unused 'DPRINTF("Flush ram_cache\n")'
v10:
- Trace the number of dirty pages that are received.
---
 include/migration/migration.h |  1 +
 migration/colo.c              |  2 --
 migration/ram.c               | 38 ++++++++++++++++++++++++++++++++++++++
 trace-events                  |  2 ++
 4 files changed, 41 insertions(+), 2 deletions(-)

diff --git a/include/migration/migration.h b/include/migration/migration.h
index 6907986..14b9f3d 100644
--- a/include/migration/migration.h
+++ b/include/migration/migration.h
@@ -336,4 +336,5 @@ PostcopyState postcopy_state_set(PostcopyState new_state);
 /* ram cache */
 int colo_init_ram_cache(void);
 void colo_release_ram_cache(void);
+void colo_flush_ram_cache(void);
 #endif
diff --git a/migration/colo.c b/migration/colo.c
index 8414feb..11d2b51 100644
--- a/migration/colo.c
+++ b/migration/colo.c
@@ -417,8 +417,6 @@ void *colo_process_incoming_thread(void *opaque)
         }
         qemu_mutex_unlock_iothread();
 
-        /* TODO: flush vm state */
-
         colo_put_cmd(mis->to_src_file, COLO_COMMAND_VMSTATE_LOADED,
                      &local_err);
         if (local_err) {
diff --git a/migration/ram.c b/migration/ram.c
index 3d5947b..8ff7f7c 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -2458,6 +2458,7 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
      * be atomic
      */
     bool postcopy_running = postcopy_state_get() >= POSTCOPY_INCOMING_LISTENING;
+    bool need_flush = false;
 
     seq_iter++;
 
@@ -2492,6 +2493,7 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
             /* After going into COLO, we should load the Page into colo_cache */
             if (ram_cache_enable) {
                 host = colo_cache_from_block_offset(block, addr);
+                need_flush = true;
             } else {
                 host = host_from_ram_block_offset(block, addr);
             }
@@ -2585,6 +2587,10 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
     }
 
     rcu_read_unlock();
+
+    if (!ret && ram_cache_enable && need_flush) {
+        colo_flush_ram_cache();
+    }
     DPRINTF("Completed load of VM with exit code %d seq iteration "
             "%" PRIu64 "\n", ret, seq_iter);
     return ret;
@@ -2657,6 +2663,38 @@ void colo_release_ram_cache(void)
     rcu_read_unlock();
 }
 
+/*
+ * Flush the content of the RAM cache into the SVM's memory.
+ * Only flush the pages that have been dirtied by the PVM, the SVM, or both.
+ */
+void colo_flush_ram_cache(void)
+{
+    RAMBlock *block = NULL;
+    void *dst_host;
+    void *src_host;
+    ram_addr_t offset = 0;
+
+    trace_colo_flush_ram_cache_begin(migration_dirty_pages);
+    rcu_read_lock();
+    block = QLIST_FIRST_RCU(&ram_list.blocks);
+    while (block) {
+        ram_addr_t ram_addr_abs;
+        offset = migration_bitmap_find_dirty(block, offset, &ram_addr_abs);
+        migration_bitmap_clear_dirty(ram_addr_abs);
+        if (offset >= block->used_length) {
+            offset = 0;
+            block = QLIST_NEXT_RCU(block, next);
+        } else {
+            dst_host = block->host + offset;
+            src_host = block->colo_cache + offset;
+            memcpy(dst_host, src_host, TARGET_PAGE_SIZE);
+        }
+    }
+    rcu_read_unlock();
+    trace_colo_flush_ram_cache_end();
+    assert(migration_dirty_pages == 0);
+}
+
 static SaveVMHandlers savevm_ram_handlers = {
     .save_live_setup = ram_save_setup,
     .save_live_iterate = ram_save_iterate,
diff --git a/trace-events b/trace-events
index 51b2305..578b775 100644
--- a/trace-events
+++ b/trace-events
@@ -1266,6 +1266,8 @@ migration_throttle(void) ""
 ram_load_postcopy_loop(uint64_t addr, int flags) "@%" PRIx64 " %x"
 ram_postcopy_send_discard_bitmap(void) ""
 ram_save_queue_pages(const char *rbname, size_t start, size_t len) "%s: start: %zx len: %zx"
+colo_flush_ram_cache_begin(uint64_t dirty_pages) "dirty_pages %" PRIu64
+colo_flush_ram_cache_end(void) ""
 
 # hw/display/qxl.c
 disable qxl_interface_set_mm_time(int qid, uint32_t mm_time) "%d %d"
-- 
1.8.3.1
