* Subject: [RFC PATCH v2] migration: calculate remaining pages accurately during the bulk stage
@ 2018-09-05 14:17 ` Quan Xu
0 siblings, 0 replies; 4+ messages in thread
From: Quan Xu @ 2018-09-05 14:17 UTC (permalink / raw)
To: kvm, qemu-devel; +Cc: Dr. David Alan Gilbert, quintela
From 7de4cc7c944bfccde0ef10992a7ec882fdcf0508 Mon Sep 17 00:00:00 2001
From: Quan Xu <quan.xu0@gmail.com>
Date: Wed, 5 Sep 2018 22:06:58 +0800
Subject: [RFC PATCH v2] migration: calculate remaining pages accurately
during the bulk stage
Since the bulk stage assumes in (migration_bitmap_find_dirty) that every
page is dirty, initialize the number of dirty pages at the beggining of
the iteration and then decrease it for each processed page.
Signed-off-by: Quan Xu <quan.xu0@gmail.com>
---
migration/ram.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/migration/ram.c b/migration/ram.c
index 79c8942..1a11436 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -290,6 +290,8 @@ struct RAMState {
     uint32_t last_version;
     /* We are in the first round */
     bool ram_bulk_stage;
+    /* Remaining bytes in the first round */
+    uint64_t ram_bulk_bytes;
     /* How many times we have dirty too many pages */
     int dirty_rate_high_cnt;
     /* these variables are used for bitmap sync */
@@ -1540,6 +1542,7 @@ unsigned long migration_bitmap_find_dirty(RAMState *rs, RAMBlock *rb,
     if (rs->ram_bulk_stage && start > 0) {
         next = start + 1;
+        rs->ram_bulk_bytes -= TARGET_PAGE_SIZE;
     } else {
         next = find_next_bit(bitmap, size, start);
     }
@@ -2001,6 +2004,7 @@ static bool find_dirty_block(RAMState *rs, PageSearchStatus *pss, bool *again)
         /* Flag that we've looped */
         pss->complete_round = true;
         rs->ram_bulk_stage = false;
+        rs->ram_bulk_bytes = 0;
         if (migrate_use_xbzrle()) {
             /* If xbzrle is on, stop using the data compression at this
              * point. In theory, xbzrle can do better than compression.
@@ -2513,6 +2517,7 @@ static void ram_state_reset(RAMState *rs)
     rs->last_page = 0;
     rs->last_version = ram_list.version;
     rs->ram_bulk_stage = true;
+    rs->ram_bulk_bytes = ram_bytes_total();
 }

 #define MAX_WAIT 50 /* ms, half buffered_file limit */
@@ -3308,7 +3313,7 @@ static void ram_save_pending(QEMUFile *f, void *opaque, uint64_t max_size,
         /* We can do postcopy, and all the data is postcopiable */
         *res_compatible += remaining_size;
     } else {
-        *res_precopy_only += remaining_size;
+        *res_precopy_only += remaining_size + rs->ram_bulk_bytes;
     }
 }
--
1.8.3.1
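For illustration, the accounting this patch introduces can be sketched as a small standalone C model. The struct and helper names below mirror the patch but are simplified stand-ins, not QEMU's actual types or functions:

```c
#include <stdbool.h>
#include <stdint.h>

#define TARGET_PAGE_SIZE 4096ULL

/* Simplified stand-in for QEMU's RAMState (illustration only). */
typedef struct {
    bool ram_bulk_stage;     /* still in the first pass over guest RAM */
    uint64_t ram_bulk_bytes; /* bytes not yet visited in the first pass */
} RAMState;

/* Mirrors the ram_state_reset() hunk: the bulk stage begins with all
 * guest RAM counted as pending. */
static void state_reset(RAMState *rs, uint64_t total_bytes)
{
    rs->ram_bulk_stage = true;
    rs->ram_bulk_bytes = total_bytes;
}

/* Mirrors the migration_bitmap_find_dirty() hunk: during the bulk stage
 * every page is treated as dirty, so each page sent shrinks the
 * remaining-bytes counter by one page. */
static void process_page(RAMState *rs)
{
    if (rs->ram_bulk_stage) {
        rs->ram_bulk_bytes -= TARGET_PAGE_SIZE;
    }
}

/* Mirrors the ram_save_pending() hunk: the precopy estimate now adds the
 * bytes the bulk stage has not visited yet to the tracked dirty bytes. */
static uint64_t precopy_pending(const RAMState *rs, uint64_t dirty_bytes)
{
    return dirty_bytes + rs->ram_bulk_bytes;
}
```

With 16 pages of guest RAM and 4 pages already sent, the precopy estimate reports 12 pages' worth of bytes even before the dirty bitmap reflects them, which is the inaccuracy the patch addresses.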
* Re: Subject: [RFC PATCH v2] migration: calculate remaining pages accurately during the bulk stage
2018-09-05 14:17 ` [Qemu-devel] " Quan Xu
@ 2018-09-05 16:42 ` Eric Blake
-1 siblings, 0 replies; 4+ messages in thread
From: Eric Blake @ 2018-09-05 16:42 UTC (permalink / raw)
To: Quan Xu, kvm, qemu-devel; +Cc: Dr. David Alan Gilbert, quintela
On 09/05/2018 09:17 AM, Quan Xu wrote:
> From 7de4cc7c944bfccde0ef10992a7ec882fdcf0508 Mon Sep 17 00:00:00 2001
> From: Quan Xu <quan.xu0@gmail.com>
> Date: Wed, 5 Sep 2018 22:06:58 +0800
> Subject: [RFC PATCH v2] migration: calculate remaining pages accurately
> during the bulk stage
>
> Since the bulk stage assumes in (migration_bitmap_find_dirty) that every
> page is dirty, initialize the number of dirty pages at the beggining of
s/beggining/beginning/
> the iteration and then decrease it for each processed page.
>
> Signed-off-by: Quan Xu <quan.xu0@gmail.com>
> ---
> migration/ram.c | 7 ++++++-
> 1 file changed, 6 insertions(+), 1 deletion(-)
>
--
Eric Blake, Principal Software Engineer
Red Hat, Inc. +1-919-301-3266
Virtualization: qemu.org | libvirt.org
Thread overview: 4+ messages
2018-09-05 14:17 Subject: [RFC PATCH v2] migration: calculate remaining pages accurately during the bulk stage Quan Xu
2018-09-05 14:17 ` [Qemu-devel] " Quan Xu
2018-09-05 16:42 ` Eric Blake
2018-09-05 16:42 ` [Qemu-devel] " Eric Blake