From: guangrong.xiao@gmail.com
To: pbonzini@redhat.com, mst@redhat.com, mtosatti@redhat.com
Cc: kvm@vger.kernel.org, Xiao Guangrong <xiaoguangrong@tencent.com>,
    qemu-devel@nongnu.org, peterx@redhat.com, dgilbert@redhat.com,
    wei.w.wang@intel.com, jiang.biao2@zte.com.cn
Subject: [PATCH v4 08/10] migration: fix calculating xbzrle_counters.cache_miss_rate
Date: Tue, 21 Aug 2018 16:10:27 +0800
Message-ID: <20180821081029.26121-9-xiaoguangrong@tencent.com> (raw)
In-Reply-To: <20180821081029.26121-1-xiaoguangrong@tencent.com>

From: Xiao Guangrong <xiaoguangrong@tencent.com>

As Peter pointed out:

| - xbzrle_counters.cache_miss is done in save_xbzrle_page(), so it's
|   per-guest-page granularity
|
| - RAMState.iterations is done for each ram_find_and_save_block(), so
|   it's per-host-page granularity
|
| An example is that when we migrate a 2M huge page in the guest, we
| will only increase the RAMState.iterations by 1 (since
| ram_find_and_save_block() will be called once), but we might increase
| xbzrle_counters.cache_miss for 2M/4K=512 times (we'll call
| save_xbzrle_page() that many times) if all the pages got cache miss.
| Then IMHO the cache miss rate will be 512/1=51200% (while it should
| actually be just 100% cache miss).

He also suggested that, since xbzrle_counters.cache_miss_rate is the
only user of rs->iterations, we can adapt it to count target guest
pages. After that, rename 'iterations' to 'target_page_count' to
better reflect its meaning.

Suggested-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Xiao Guangrong <xiaoguangrong@tencent.com>
---
 migration/ram.c | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 1d54285501..17c3eed445 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -300,10 +300,10 @@ struct RAMState {
     uint64_t num_dirty_pages_period;
     /* xbzrle misses since the beginning of the period */
     uint64_t xbzrle_cache_miss_prev;
-    /* number of iterations at the beginning of period */
-    uint64_t iterations_prev;
-    /* Iterations since start */
-    uint64_t iterations;
+    /* total handled target pages at the beginning of period */
+    uint64_t target_page_count_prev;
+    /* total handled target pages since start */
+    uint64_t target_page_count;
     /* number of dirty bits in the bitmap */
     uint64_t migration_dirty_pages;
     /* protects modification of the bitmap */
@@ -1585,19 +1585,19 @@ uint64_t ram_pagesize_summary(void)

 static void migration_update_rates(RAMState *rs, int64_t end_time)
 {
-    uint64_t iter_count = rs->iterations - rs->iterations_prev;
+    uint64_t page_count = rs->target_page_count - rs->target_page_count_prev;

     /* calculate period counters */
     ram_counters.dirty_pages_rate = rs->num_dirty_pages_period * 1000
                 / (end_time - rs->time_last_bitmap_sync);

-    if (!iter_count) {
+    if (!page_count) {
         return;
     }

     if (migrate_use_xbzrle()) {
         xbzrle_counters.cache_miss_rate = (double)(xbzrle_counters.cache_miss -
-            rs->xbzrle_cache_miss_prev) / iter_count;
+            rs->xbzrle_cache_miss_prev) / page_count;
         rs->xbzrle_cache_miss_prev = xbzrle_counters.cache_miss;
     }
 }
@@ -1704,7 +1704,7 @@ static void migration_bitmap_sync(RAMState *rs)

     migration_update_rates(rs, end_time);

-    rs->iterations_prev = rs->iterations;
+    rs->target_page_count_prev = rs->target_page_count;

     /* reset period counters */
     rs->time_last_bitmap_sync = end_time;
@@ -3197,7 +3197,7 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
             done = 1;
             break;
         }
-        rs->iterations++;
+        rs->target_page_count += pages;

         /* we want to check in the 1st loop, just in case it was the 1st time
            and we had to sync the dirty bitmap. */
-- 
2.14.4