From: guangrong.xiao@gmail.com
To: pbonzini@redhat.com, mst@redhat.com, mtosatti@redhat.com
Cc: kvm@vger.kernel.org, quintela@redhat.com,
	Xiao Guangrong <xiaoguangrong@tencent.com>,
	qemu-devel@nongnu.org, peterx@redhat.com, dgilbert@redhat.com,
	wei.w.wang@intel.com, jiang.biao2@zte.com.cn
Subject: [PATCH v5 1/4] migration: do not flush_compressed_data at the end of each iteration
Date: Mon,  3 Sep 2018 17:26:41 +0800	[thread overview]
Message-ID: <20180903092644.25812-2-xiaoguangrong@tencent.com> (raw)
In-Reply-To: <20180903092644.25812-1-xiaoguangrong@tencent.com>

From: Xiao Guangrong <xiaoguangrong@tencent.com>

flush_compressed_data() needs to wait for all compression threads to
finish their work; after that, all threads sit idle until the
migration feeds new requests to them. Reducing the number of calls to
it improves throughput and uses CPU resources more effectively.

We do not need to flush all threads at the end of each iteration: the
data can be kept locally until the memory block changes or memory
migration starts over, in which case we will meet a dirtied page that
may still exist in the compression threads' ring.

Signed-off-by: Xiao Guangrong <xiaoguangrong@tencent.com>
---
 migration/ram.c | 14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/migration/ram.c b/migration/ram.c
index 79c89425a3..2ad07b5e15 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -307,6 +307,8 @@ struct RAMState {
     uint64_t iterations;
     /* number of dirty bits in the bitmap */
     uint64_t migration_dirty_pages;
+    /* last dirty_sync_count we have seen */
+    uint64_t dirty_sync_count;
     /* protects modification of the bitmap */
     QemuMutex bitmap_mutex;
     /* The RAMBlock used in the last src_page_requests */
@@ -3180,6 +3182,17 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
 
     ram_control_before_iterate(f, RAM_CONTROL_ROUND);
 
+    /*
+     * If memory migration starts over, we will meet a dirtied page which
+     * may still exist in the compression threads' ring, so we should flush
+     * the compressed data to make sure the new page is not overwritten by
+     * the old one on the destination.
+     */
+    if (ram_counters.dirty_sync_count != rs->dirty_sync_count) {
+        rs->dirty_sync_count = ram_counters.dirty_sync_count;
+        flush_compressed_data(rs);
+    }
+
     t0 = qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
     i = 0;
     while ((ret = qemu_file_rate_limit(f)) == 0 ||
@@ -3212,7 +3225,6 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
         }
         i++;
     }
-    flush_compressed_data(rs);
     rcu_read_unlock();
 
     /*
-- 
2.14.4


Thread overview: 24+ messages

2018-09-03  9:26 [PATCH v5 0/4] migration: compression optimization guangrong.xiao
2018-09-03  9:26 ` [PATCH v5 1/4] migration: do not flush_compressed_data at the end of each iteration guangrong.xiao [this message]
2018-09-03 16:38   ` Juan Quintela
2018-09-04  3:54     ` Xiao Guangrong
2018-09-04  4:00       ` Xiao Guangrong
2018-09-04  9:28       ` Juan Quintela
2018-09-03  9:26 ` [PATCH v5 2/4] migration: fix calculating xbzrle_counters.cache_miss_rate guangrong.xiao
2018-09-03 17:19   ` Juan Quintela
2018-09-03  9:26 ` [PATCH v5 3/4] migration: show the statistics of compression guangrong.xiao
2018-09-03 17:22   ` Juan Quintela
2018-09-03  9:26 ` [PATCH v5 4/4] migration: handle the error condition properly guangrong.xiao
2018-09-03 17:28   ` Juan Quintela
