* [Qemu-devel] [PATCH v3 0/4] migration: unbreak postcopy recovery
@ 2018-06-27 13:22 Peter Xu
  2018-06-27 13:22 ` [Qemu-devel] [PATCH v3 1/4] migration: delay postcopy paused state Peter Xu
From: Peter Xu @ 2018-06-27 13:22 UTC (permalink / raw)
  To: qemu-devel; +Cc: Juan Quintela, Dr . David Alan Gilbert, peterx

v3:
- keep the recovery logic even for RDMA by dropping the 3rd patch and
  touching up the original 4th patch (current 3rd patch) to suit that [Dave]

v2:
- break the first patch into several
- fix a QEMUFile leak

Please review.  Thanks,

Peter Xu (4):
  migration: delay postcopy paused state
  migration: move incoming process out of multifd
  migration: unbreak postcopy recovery
  migration: unify incoming processing

 migration/ram.h       |  2 +-
 migration/exec.c      |  3 ---
 migration/fd.c        |  3 ---
 migration/migration.c | 44 ++++++++++++++++++++++++++++++++++++-------
 migration/ram.c       | 11 +++++------
 migration/savevm.c    |  6 +++---
 migration/socket.c    |  5 -----
 7 files changed, 46 insertions(+), 28 deletions(-)

-- 
2.17.1


* [Qemu-devel] [PATCH v3 1/4] migration: delay postcopy paused state
  2018-06-27 13:22 [Qemu-devel] [PATCH v3 0/4] migration: unbreak postcopy recovery Peter Xu
@ 2018-06-27 13:22 ` Peter Xu
  2018-06-27 13:22 ` [Qemu-devel] [PATCH v3 2/4] migration: move incoming process out of multifd Peter Xu
From: Peter Xu @ 2018-06-27 13:22 UTC (permalink / raw)
  To: qemu-devel; +Cc: Juan Quintela, Dr . David Alan Gilbert, peterx

Before this patch we first set up the postcopy-paused state and only
then clean up the QEMUFile handles.  That is racy if a very fast
"migrate-recover" command runs in parallel and observes the paused state
while the stale handles are still in place.  Fix it by switching to the
paused state only after the handles have been cleaned up.

Reported-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 migration/savevm.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/migration/savevm.c b/migration/savevm.c
index c2f34ffc7c..851d74e8b6 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -2194,9 +2194,6 @@ static bool postcopy_pause_incoming(MigrationIncomingState *mis)
     /* Clear the triggered bit to allow one recovery */
     mis->postcopy_recover_triggered = false;
 
-    migrate_set_state(&mis->state, MIGRATION_STATUS_POSTCOPY_ACTIVE,
-                      MIGRATION_STATUS_POSTCOPY_PAUSED);
-
     assert(mis->from_src_file);
     qemu_file_shutdown(mis->from_src_file);
     qemu_fclose(mis->from_src_file);
@@ -2209,6 +2206,9 @@ static bool postcopy_pause_incoming(MigrationIncomingState *mis)
     mis->to_src_file = NULL;
     qemu_mutex_unlock(&mis->rp_mutex);
 
+    migrate_set_state(&mis->state, MIGRATION_STATUS_POSTCOPY_ACTIVE,
+                      MIGRATION_STATUS_POSTCOPY_PAUSED);
+
     /* Notify the fault thread for the invalidated file handle */
     postcopy_fault_thread_notify(mis);
 
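The race this patch closes can be sketched with a toy model (hypothetical C, not QEMU code): a parallel "migrate-recover" may act as soon as the paused state becomes visible, so the stale handle has to be invalidated before the state flip.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy model of the ordering changed by this patch; the names only
 * mirror the QEMU ones.  "paused" is what a parallel migrate-recover
 * polls for; "from_src_file" stands in for the stale QEMUFile. */
typedef struct {
    bool paused;
    void *from_src_file;
} ToyMis;

/* Old order: publish PAUSED first, clean up afterwards.  A recover
 * command running right after the state flip can still observe the
 * stale handle. */
static bool stale_handle_visible_old(void)
{
    ToyMis mis = { .paused = false, .from_src_file = (void *)0x1 };

    mis.paused = true;                   /* recover may run from here */
    bool racy = mis.paused && mis.from_src_file != NULL;
    mis.from_src_file = NULL;            /* cleanup happens too late */
    return racy;
}

/* New order: invalidate the handle first, then publish PAUSED, so a
 * recover triggered by the state change never sees the old file. */
static bool stale_handle_visible_new(void)
{
    ToyMis mis = { .paused = false, .from_src_file = (void *)0x1 };

    mis.from_src_file = NULL;            /* cleanup first */
    mis.paused = true;                   /* only now can recover run */
    return mis.paused && mis.from_src_file != NULL;
}
```

In the real code the two sides run on different threads; the single-threaded model only illustrates why the store order matters.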
-- 
2.17.1


* [Qemu-devel] [PATCH v3 2/4] migration: move incoming process out of multifd
  2018-06-27 13:22 [Qemu-devel] [PATCH v3 0/4] migration: unbreak postcopy recovery Peter Xu
  2018-06-27 13:22 ` [Qemu-devel] [PATCH v3 1/4] migration: delay postcopy paused state Peter Xu
@ 2018-06-27 13:22 ` Peter Xu
  2018-06-27 13:59   ` Juan Quintela
  2018-06-27 13:22 ` [Qemu-devel] [PATCH v3 3/4] migration: unbreak postcopy recovery Peter Xu
From: Peter Xu @ 2018-06-27 13:22 UTC (permalink / raw)
  To: qemu-devel; +Cc: Juan Quintela, Dr . David Alan Gilbert, peterx

Move the call to migration_incoming_process() out of the multifd code.
It is a bit strange to invoke generic migration calls from within
multifd code.  Instead, let multifd_recv_new_channel() return a boolean
indicating whether it is ready to continue the incoming migration.

Signed-off-by: Peter Xu <peterx@redhat.com>
---
 migration/ram.h       |  2 +-
 migration/migration.c |  5 ++++-
 migration/ram.c       | 11 +++++------
 3 files changed, 10 insertions(+), 8 deletions(-)

diff --git a/migration/ram.h b/migration/ram.h
index d386f4d641..457bf54b8c 100644
--- a/migration/ram.h
+++ b/migration/ram.h
@@ -46,7 +46,7 @@ int multifd_save_cleanup(Error **errp);
 int multifd_load_setup(void);
 int multifd_load_cleanup(Error **errp);
 bool multifd_recv_all_channels_created(void);
-void multifd_recv_new_channel(QIOChannel *ioc);
+bool multifd_recv_new_channel(QIOChannel *ioc);
 
 uint64_t ram_pagesize_summary(void);
 int ram_save_queue_pages(const char *rbname, ram_addr_t start, ram_addr_t len);
diff --git a/migration/migration.c b/migration/migration.c
index e1eaa97df4..6ecea2de30 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -507,7 +507,10 @@ void migration_ioc_process_incoming(QIOChannel *ioc)
         migration_incoming_setup(f);
         return;
     }
-    multifd_recv_new_channel(ioc);
+
+    if (multifd_recv_new_channel(ioc)) {
+        migration_incoming_process();
+    }
 }
 
 /**
diff --git a/migration/ram.c b/migration/ram.c
index cd5f55117d..52167d5142 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -873,7 +873,8 @@ bool multifd_recv_all_channels_created(void)
     return thread_count == atomic_read(&multifd_recv_state->count);
 }
 
-void multifd_recv_new_channel(QIOChannel *ioc)
+/* Return true if multifd is ready for the migration, otherwise false */
+bool multifd_recv_new_channel(QIOChannel *ioc)
 {
     MultiFDRecvParams *p;
     Error *local_err = NULL;
@@ -882,7 +883,7 @@ void multifd_recv_new_channel(QIOChannel *ioc)
     id = multifd_recv_initial_packet(ioc, &local_err);
     if (id < 0) {
         multifd_recv_terminate_threads(local_err);
-        return;
+        return false;
     }
 
     p = &multifd_recv_state->params[id];
@@ -890,7 +891,7 @@ void multifd_recv_new_channel(QIOChannel *ioc)
         error_setg(&local_err, "multifd: received id '%d' already setup'",
                    id);
         multifd_recv_terminate_threads(local_err);
-        return;
+        return false;
     }
     p->c = ioc;
     object_ref(OBJECT(ioc));
@@ -899,9 +900,7 @@ void multifd_recv_new_channel(QIOChannel *ioc)
     qemu_thread_create(&p->thread, p->name, multifd_recv_thread, p,
                        QEMU_THREAD_JOINABLE);
     atomic_inc(&multifd_recv_state->count);
-    if (multifd_recv_state->count == migrate_multifd_channels()) {
-        migration_incoming_process();
-    }
+    return multifd_recv_state->count == migrate_multifd_channels();
 }
 
 /**
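
The new contract can be sketched with a toy model (hypothetical C, not QEMU code): the channel hook merely reports whether the last expected channel has arrived, and the caller, not the multifd code, starts the incoming migration.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the new multifd_recv_new_channel() contract: count the
 * channels and tell the caller when the last expected one arrives,
 * instead of starting the migration from inside multifd code. */
static int recv_count;

static bool toy_recv_new_channel(int expected_channels)
{
    recv_count++;
    return recv_count == expected_channels;
}

/* Caller-side pattern, as in migration_ioc_process_incoming(): start
 * the incoming migration exactly once, when the predicate says so. */
static int toy_accept_channels(int expected_channels)
{
    int started = 0;

    for (int i = 0; i < expected_channels; i++) {
        if (toy_recv_new_channel(expected_channels)) {
            started++;               /* migration_incoming_process() */
        }
    }
    return started;
}
```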
-- 
2.17.1


* [Qemu-devel] [PATCH v3 3/4] migration: unbreak postcopy recovery
  2018-06-27 13:22 [Qemu-devel] [PATCH v3 0/4] migration: unbreak postcopy recovery Peter Xu
  2018-06-27 13:22 ` [Qemu-devel] [PATCH v3 1/4] migration: delay postcopy paused state Peter Xu
  2018-06-27 13:22 ` [Qemu-devel] [PATCH v3 2/4] migration: move incoming process out of multifd Peter Xu
@ 2018-06-27 13:22 ` Peter Xu
  2018-06-27 14:00   ` Juan Quintela
  2018-06-27 13:22 ` [Qemu-devel] [PATCH v3 4/4] migration: unify incoming processing Peter Xu
From: Peter Xu @ 2018-06-27 13:22 UTC (permalink / raw)
  To: qemu-devel; +Cc: Juan Quintela, Dr . David Alan Gilbert, peterx

The whole postcopy recovery logic was accidentally broken.  We need to
fix it in two steps.

This is the first step: perform the recovery when needed.  The recovery
path has been bypassed since commit 36c2f8be2c.

Introduce the postcopy_try_recover() helper for the postcopy recovery
logic.  Call it in both migration_fd_process_incoming() and
migration_ioc_process_incoming().

Fixes: 36c2f8be2c ("migration: Delay start of migration main routines")
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 migration/migration.c | 23 ++++++++++++++++++-----
 1 file changed, 18 insertions(+), 5 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index 6ecea2de30..0a0db49817 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -466,7 +466,8 @@ void migration_incoming_process(void)
     qemu_coroutine_enter(co);
 }
 
-void migration_fd_process_incoming(QEMUFile *f)
+/* Returns true if recovered from a paused migration, otherwise false */
+static bool postcopy_try_recover(QEMUFile *f)
 {
     MigrationIncomingState *mis = migration_incoming_get_current();
 
@@ -491,11 +492,20 @@ void migration_fd_process_incoming(QEMUFile *f)
          * that source is ready to reply to page requests.
          */
         qemu_sem_post(&mis->postcopy_pause_sem_dst);
-    } else {
-        /* New incoming migration */
-        migration_incoming_setup(f);
-        migration_incoming_process();
+        return true;
+    }
+
+    return false;
+}
+
+void migration_fd_process_incoming(QEMUFile *f)
+{
+    if (postcopy_try_recover(f)) {
+        return;
     }
+
+    migration_incoming_setup(f);
+    migration_incoming_process();
 }
 
 void migration_ioc_process_incoming(QIOChannel *ioc)
@@ -504,6 +514,9 @@ void migration_ioc_process_incoming(QIOChannel *ioc)
 
     if (!mis->from_src_file) {
         QEMUFile *f = qemu_fopen_channel_input(ioc);
+        if (postcopy_try_recover(f)) {
+            return;
+        }
         migration_incoming_setup(f);
         return;
     }
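
The resulting control flow can be sketched with a toy model (hypothetical C, not QEMU code): each incoming entry point first tries the recovery path and only falls through to a fresh incoming setup when no migration is paused.

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Toy model of the postcopy_try_recover() split.  A paused postcopy
 * migration is resumed in place; anything else is a new migration. */
static bool postcopy_paused;

static bool toy_postcopy_try_recover(void)
{
    if (postcopy_paused) {
        postcopy_paused = false;     /* wake the paused threads instead */
        return true;
    }
    return false;
}

static const char *toy_fd_process_incoming(void)
{
    if (toy_postcopy_try_recover()) {
        return "recovered";          /* reuse the existing migration */
    }
    return "new-incoming";           /* setup + process a new migration */
}
```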
-- 
2.17.1


* [Qemu-devel] [PATCH v3 4/4] migration: unify incoming processing
  2018-06-27 13:22 [Qemu-devel] [PATCH v3 0/4] migration: unbreak postcopy recovery Peter Xu
  2018-06-27 13:22 ` [Qemu-devel] [PATCH v3 3/4] migration: unbreak postcopy recovery Peter Xu
@ 2018-06-27 13:22 ` Peter Xu
  2018-06-27 14:01   ` Juan Quintela
  2018-07-02  8:04 ` [Qemu-devel] [PATCH v3 0/4] migration: unbreak postcopy recovery Balamuruhan S
  2018-07-06  8:47 ` Dr. David Alan Gilbert
From: Peter Xu @ 2018-06-27 13:22 UTC (permalink / raw)
  To: qemu-devel; +Cc: Juan Quintela, Dr . David Alan Gilbert, peterx

This is the second patch to unbreak postcopy recovery.

Unify the migration_incoming_process() call in a single place rather
than calling it from the connection setup code.  This fixes a problem
where we would enter the new incoming migration procedure even when
trying to recover from a paused postcopy migration.

Fixes: 36c2f8be2c ("migration: Delay start of migration main routines")
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 migration/exec.c      |  3 ---
 migration/fd.c        |  3 ---
 migration/migration.c | 18 ++++++++++++++++--
 migration/socket.c    |  5 -----
 4 files changed, 16 insertions(+), 13 deletions(-)

diff --git a/migration/exec.c b/migration/exec.c
index 0bbeb63c97..375d2e1b54 100644
--- a/migration/exec.c
+++ b/migration/exec.c
@@ -49,9 +49,6 @@ static gboolean exec_accept_incoming_migration(QIOChannel *ioc,
 {
     migration_channel_process_incoming(ioc);
     object_unref(OBJECT(ioc));
-    if (!migrate_use_multifd()) {
-        migration_incoming_process();
-    }
     return G_SOURCE_REMOVE;
 }
 
diff --git a/migration/fd.c b/migration/fd.c
index fee34ffdc0..a7c13df4ad 100644
--- a/migration/fd.c
+++ b/migration/fd.c
@@ -49,9 +49,6 @@ static gboolean fd_accept_incoming_migration(QIOChannel *ioc,
 {
     migration_channel_process_incoming(ioc);
     object_unref(OBJECT(ioc));
-    if (!migrate_use_multifd()) {
-        migration_incoming_process();
-    }
     return G_SOURCE_REMOVE;
 }
 
diff --git a/migration/migration.c b/migration/migration.c
index 0a0db49817..7fc92327d7 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -511,17 +511,31 @@ void migration_fd_process_incoming(QEMUFile *f)
 void migration_ioc_process_incoming(QIOChannel *ioc)
 {
     MigrationIncomingState *mis = migration_incoming_get_current();
+    bool start_migration;
 
     if (!mis->from_src_file) {
+        /* The first connection (multifd may have multiple) */
         QEMUFile *f = qemu_fopen_channel_input(ioc);
+
+        /* If it's a recovery, we're done */
         if (postcopy_try_recover(f)) {
             return;
         }
+
         migration_incoming_setup(f);
-        return;
+
+        /*
+         * Common migration only needs one channel, so we can start
+         * right now.  Multifd needs more than one channel, we wait.
+         */
+        start_migration = !migrate_use_multifd();
+    } else {
+        /* Multiple connections */
+        assert(migrate_use_multifd());
+        start_migration = multifd_recv_new_channel(ioc);
     }
 
-    if (multifd_recv_new_channel(ioc)) {
+    if (start_migration) {
         migration_incoming_process();
     }
 }
diff --git a/migration/socket.c b/migration/socket.c
index 3456eb76e9..f4c8174400 100644
--- a/migration/socket.c
+++ b/migration/socket.c
@@ -168,12 +168,7 @@ static void socket_accept_incoming_migration(QIONetListener *listener,
     if (migration_has_all_channels()) {
         /* Close listening socket as its no longer needed */
         qio_net_listener_disconnect(listener);
-
         object_unref(OBJECT(listener));
-
-        if (!migrate_use_multifd()) {
-            migration_incoming_process();
-        }
     }
 }
 
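The unified "start now?" decision can be sketched with a toy model (hypothetical C, not QEMU code): a plain migration starts on its only channel, while multifd waits until its last channel is connected.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the start_migration decision this patch unifies in
 * migration_ioc_process_incoming(). */
static bool toy_should_start(bool first_channel, bool use_multifd,
                             bool last_multifd_channel)
{
    if (first_channel) {
        /* Common migration needs exactly one channel; multifd waits. */
        return !use_multifd;
    }
    /* Subsequent connections only exist with multifd enabled. */
    return last_multifd_channel;
}
```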
-- 
2.17.1


* Re: [Qemu-devel] [PATCH v3 2/4] migration: move incoming process out of multifd
  2018-06-27 13:22 ` [Qemu-devel] [PATCH v3 2/4] migration: move incoming process out of multifd Peter Xu
@ 2018-06-27 13:59   ` Juan Quintela
From: Juan Quintela @ 2018-06-27 13:59 UTC (permalink / raw)
  To: Peter Xu; +Cc: qemu-devel, Dr . David Alan Gilbert

Peter Xu <peterx@redhat.com> wrote:
> Move the call to migration_incoming_process() out of the multifd code.
> It is a bit strange to invoke generic migration calls from within
> multifd code.  Instead, let multifd_recv_new_channel() return a boolean
> indicating whether it is ready to continue the incoming migration.
>
> Signed-off-by: Peter Xu <peterx@redhat.com>

Reviewed-by: Juan Quintela <quintela@redhat.com>


* Re: [Qemu-devel] [PATCH v3 3/4] migration: unbreak postcopy recovery
  2018-06-27 13:22 ` [Qemu-devel] [PATCH v3 3/4] migration: unbreak postcopy recovery Peter Xu
@ 2018-06-27 14:00   ` Juan Quintela
From: Juan Quintela @ 2018-06-27 14:00 UTC (permalink / raw)
  To: Peter Xu; +Cc: qemu-devel, Dr . David Alan Gilbert

Peter Xu <peterx@redhat.com> wrote:
> The whole postcopy recovery logic was accidentally broken.  We need to
> fix it in two steps.
>
> This is the first step: perform the recovery when needed.  The recovery
> path has been bypassed since commit 36c2f8be2c.
>
> Introduce the postcopy_try_recover() helper for the postcopy recovery
> logic.  Call it in both migration_fd_process_incoming() and
> migration_ioc_process_incoming().
>
> Fixes: 36c2f8be2c ("migration: Delay start of migration main routines")
> Signed-off-by: Peter Xu <peterx@redhat.com>

Reviewed-by: Juan Quintela <quintela@redhat.com>


* Re: [Qemu-devel] [PATCH v3 4/4] migration: unify incoming processing
  2018-06-27 13:22 ` [Qemu-devel] [PATCH v3 4/4] migration: unify incoming processing Peter Xu
@ 2018-06-27 14:01   ` Juan Quintela
From: Juan Quintela @ 2018-06-27 14:01 UTC (permalink / raw)
  To: Peter Xu; +Cc: qemu-devel, Dr . David Alan Gilbert

Peter Xu <peterx@redhat.com> wrote:
> This is the second patch to unbreak postcopy recovery.
>
> Unify the migration_incoming_process() call in a single place rather
> than calling it from the connection setup code.  This fixes a problem
> where we would enter the new incoming migration procedure even when
> trying to recover from a paused postcopy migration.
>
> Fixes: 36c2f8be2c ("migration: Delay start of migration main routines")
> Signed-off-by: Peter Xu <peterx@redhat.com>

Reviewed-by: Juan Quintela <quintela@redhat.com>


* Re: [Qemu-devel] [PATCH v3 0/4] migration: unbreak postcopy recovery
  2018-06-27 13:22 [Qemu-devel] [PATCH v3 0/4] migration: unbreak postcopy recovery Peter Xu
  2018-06-27 13:22 ` [Qemu-devel] [PATCH v3 4/4] migration: unify incoming processing Peter Xu
@ 2018-07-02  8:04 ` Balamuruhan S
  2018-07-02  8:46   ` Peter Xu
  2018-07-06  8:47 ` Dr. David Alan Gilbert
From: Balamuruhan S @ 2018-07-02  8:04 UTC (permalink / raw)
  To: Peter Xu; +Cc: qemu-devel

On Wed, Jun 27, 2018 at 09:22:42PM +0800, Peter Xu wrote:
> v3:
> - keep the recovery logic even for RDMA by dropping the 3rd patch and
>   touching up the original 4th patch (current 3rd patch) to suit that [Dave]
> 
> v2:
> - break the first patch into several
> - fix a QEMUFile leak
> 
> Please review.  Thanks,
Hi Peter,

I have applied this patchset on top of upstream QEMU to test the
postcopy pause/recover feature on PowerPC.

I used a qcow2 image on NFS, shared between the source and target hosts.

source:
# ppc64-softmmu/qemu-system-ppc64 --enable-kvm --nographic -vga none \
-machine pseries -m 64G,slots=128,maxmem=128G -smp 16,maxcpus=32 \
-device virtio-blk-pci,drive=rootdisk -drive \
file=/home/bala/sharing/hostos-ppc64le.qcow2,if=none,cache=none,format=qcow2,id=rootdisk \
-monitor telnet:127.0.0.1:1234,server,nowait -net nic,model=virtio \
-net user -redir tcp:2000::22

To keep the VM busy with a workload, I ran stress-ng inside the guest:

# stress-ng --cpu 6 --vm 6 --io 6

target:
# ppc64-softmmu/qemu-system-ppc64 --enable-kvm --nographic -vga none \
-machine pseries -m 64G,slots=128,maxmem=128G -smp 16,maxcpus=32 \
-device virtio-blk-pci,drive=rootdisk -drive \
file=/home/bala/sharing/hostos-ppc64le.qcow2,if=none,cache=none,format=qcow2,id=rootdisk \
-monitor telnet:127.0.0.1:1235,server,nowait -net nic,model=virtio \
-net user -redir tcp:2001::22 -incoming tcp:0:4445

I enabled postcopy on both source and destination from the QEMU monitor:

(qemu) migrate_set_capability postcopy-ram on

From the source QEMU monitor:
(qemu) migrate -d tcp:10.45.70.203:4445
(qemu) info migrate
globals:
store-global-state: on
only-migratable: off
send-configuration: on
send-section-footer: on
decompress-error-check: on
capabilities: xbzrle: off rdma-pin-all: off auto-converge: off
zero-blocks: off compress: off events: off postcopy-ram: on x-colo: off
release-ram: off block: off return-path: off pause-before-switchover:
off x-multifd: off dirty-bitmaps: off postcopy-blocktime: off
late-block-activate: off 
Migration status: active
total time: 2331 milliseconds
expected downtime: 300 milliseconds
setup: 65 milliseconds
transferred ram: 38914 kbytes
throughput: 273.16 mbps
remaining ram: 67063784 kbytes
total ram: 67109120 kbytes
duplicate: 1627 pages
skipped: 0 pages
normal: 9706 pages
normal bytes: 38824 kbytes
dirty sync count: 1
page size: 4 kbytes
multifd bytes: 0 kbytes

I triggered postcopy from the source:
(qemu) migrate_start_postcopy

After triggering postcopy from the source, I tried to pause the
postcopy migration on the target:

(qemu) migrate_pause

On the target I see the error:
error while loading state section id 4(ram)
qemu-system-ppc64: Detected IO failure for postcopy. Migration paused.

On the source I see the error:
qemu-system-ppc64: Detected IO failure for postcopy. Migration paused.

Later I tried to recover from the target monitor:
(qemu) migrate_recover qemu+ssh://10.45.70.203/system
Migrate recovery is triggered already

but the source still remains in the postcopy-paused state:
(qemu) info migrate
globals:
store-global-state: on
only-migratable: off
send-configuration: on
send-section-footer: on
decompress-error-check: on
capabilities: xbzrle: off rdma-pin-all: off auto-converge: off
zero-blocks: off compress: off events: off postcopy-ram: on x-colo: off
release-ram: off block: off return-path: off pause-before-switchover:
off x-multifd: off dirty-bitmaps: off postcopy-blocktime: off
late-block-activate: off 
Migration status: postcopy-paused
total time: 222841 milliseconds
expected downtime: 382991 milliseconds
setup: 65 milliseconds
transferred ram: 385270 kbytes
throughput: 265.06 mbps
remaining ram: 8150528 kbytes
total ram: 67109120 kbytes
duplicate: 14679647 pages
skipped: 0 pages
normal: 63937 pages
normal bytes: 255748 kbytes
dirty sync count: 2
page size: 4 kbytes
multifd bytes: 0 kbytes
dirty pages rate: 854740 pages
postcopy request count: 374

Later I also tried to recover postcopy from the source monitor:
(qemu) migrate_recover qemu+ssh://10.45.193.21/system
Migrate recover can only be run when postcopy is paused.

It looks broken; please help me if I missed something in this test.

Thank you,
Bala
> 
> Peter Xu (4):
>   migration: delay postcopy paused state
>   migration: move income process out of multifd
>   migration: unbreak postcopy recovery
>   migration: unify incoming processing
> 
>  migration/ram.h       |  2 +-
>  migration/exec.c      |  3 ---
>  migration/fd.c        |  3 ---
>  migration/migration.c | 44 ++++++++++++++++++++++++++++++++++++-------
>  migration/ram.c       | 11 +++++------
>  migration/savevm.c    |  6 +++---
>  migration/socket.c    |  5 -----
>  7 files changed, 46 insertions(+), 28 deletions(-)
> 
> -- 
> 2.17.1
> 
> 


* Re: [Qemu-devel] [PATCH v3 0/4] migration: unbreak postcopy recovery
  2018-07-02  8:04 ` [Qemu-devel] [PATCH v3 0/4] migration: unbreak postcopy recovery Balamuruhan S
@ 2018-07-02  8:46   ` Peter Xu
  2018-07-02  9:42     ` Balamuruhan S
From: Peter Xu @ 2018-07-02  8:46 UTC (permalink / raw)
  To: Balamuruhan S; +Cc: qemu-devel

On Mon, Jul 02, 2018 at 01:34:45PM +0530, Balamuruhan S wrote:
> On Wed, Jun 27, 2018 at 09:22:42PM +0800, Peter Xu wrote:
> > v3:
> > - keep the recovery logic even for RDMA by dropping the 3rd patch and
> >   touching up the original 4th patch (current 3rd patch) to suit that [Dave]
> > 
> > v2:
> > - break the first patch into several
> > - fix a QEMUFile leak
> > 
> > Please review.  Thanks,
> Hi Peter,

Hi, Balamuruhan,

Glad to know that you are trying this stuff out on ppc.  I think the
major steps are correct, though...

> 
> I have applied this patchset with upstream Qemu for testing postcopy
> pause recover feature in PowerPC,
> 
> I used NFS shared qcow2 between source and target host
> 
> source:
> # ppc64-softmmu/qemu-system-ppc64 --enable-kvm --nographic -vga none \
> -machine pseries -m 64G,slots=128,maxmem=128G -smp 16,maxcpus=32 \
> -device virtio-blk-pci,drive=rootdisk -drive \
> file=/home/bala/sharing/hostos-ppc64le.qcow2,if=none,cache=none,format=qcow2,id=rootdisk \
> -monitor telnet:127.0.0.1:1234,server,nowait -net nic,model=virtio \
> -net user -redir tcp:2000::22
> 
> To keep the VM with workload I ran stress-ng inside guest,
> 
> # stress-ng --cpu 6 --vm 6 --io 6
> 
> target:
> # ppc64-softmmu/qemu-system-ppc64 --enable-kvm --nographic -vga none \
> -machine pseries -m 64G,slots=128,maxmem=128G -smp 16,maxcpus=32 \
> -device virtio-blk-pci,drive=rootdisk -drive \
> file=/home/bala/sharing/hostos-ppc64le.qcow2,if=none,cache=none,format=qcow2,id=rootdisk \
> -monitor telnet:127.0.0.1:1235,server,nowait -net nic,model=virtio \
> -net user -redir tcp:2001::22 -incoming tcp:0:4445
> 
> enabled postcopy on both source and destination from qemu monitor
> 
> (qemu) migrate_set_capability postcopy-ram on
> 
> From source qemu monitor,
> (qemu) migrate -d tcp:10.45.70.203:4445

[1]

> (qemu) info migrate
> globals:
> store-global-state: on
> only-migratable: off
> send-configuration: on
> send-section-footer: on
> decompress-error-check: on
> capabilities: xbzrle: off rdma-pin-all: off auto-converge: off
> zero-blocks: off compress: off events: off postcopy-ram: on x-colo: off
> release-ram: off block: off return-path: off pause-before-switchover:
> off x-multifd: off dirty-bitmaps: off postcopy-blocktime: off
> late-block-activate: off 
> Migration status: active
> total time: 2331 milliseconds
> expected downtime: 300 milliseconds
> setup: 65 milliseconds
> transferred ram: 38914 kbytes
> throughput: 273.16 mbps
> remaining ram: 67063784 kbytes
> total ram: 67109120 kbytes
> duplicate: 1627 pages
> skipped: 0 pages
> normal: 9706 pages
> normal bytes: 38824 kbytes
> dirty sync count: 1
> page size: 4 kbytes
> multifd bytes: 0 kbytes
> 
> triggered postcopy from source,
> (qemu) migrate_start_postcopy
> 
> After triggering postcopy from source, in target I tried to pause the
> postcopy migration
> 
> (qemu) migrate_pause
> 
> In target I see error as,
> error while loading state section id 4(ram)
> qemu-system-ppc64: Detected IO failure for postcopy. Migration paused.
> 
> In source I see error as,
> qemu-system-ppc64: Detected IO failure for postcopy. Migration paused.
> 
> Later from target I try for recovery from target monitor,
> (qemu) migrate_recover qemu+ssh://10.45.70.203/system

... is that URI here for libvirt only?

Normally I'll use something similar to [1] above.

> Migrate recovery is triggered already

And this means that you had already sent one recovery command
beforehand.  In the future we'd better allow the recovery command to be
run more than once (in case the first one was mistyped...).

> 
> but in source still it remains to be in postcopy-paused state
> (qemu) info migrate
> globals:
> store-global-state: on
> only-migratable: off
> send-configuration: on
> send-section-footer: on
> decompress-error-check: on
> capabilities: xbzrle: off rdma-pin-all: off auto-converge: off
> zero-blocks: off compress: off events: off postcopy-ram: on x-colo: off
> release-ram: off block: off return-path: off pause-before-switchover:
> off x-multifd: off dirty-bitmaps: off postcopy-blocktime: off
> late-block-activate: off 
> Migration status: postcopy-paused
> total time: 222841 milliseconds
> expected downtime: 382991 milliseconds
> setup: 65 milliseconds
> transferred ram: 385270 kbytes
> throughput: 265.06 mbps
> remaining ram: 8150528 kbytes
> total ram: 67109120 kbytes
> duplicate: 14679647 pages
> skipped: 0 pages
> normal: 63937 pages
> normal bytes: 255748 kbytes
> dirty sync count: 2
> page size: 4 kbytes
> multifd bytes: 0 kbytes
> dirty pages rate: 854740 pages
> postcopy request count: 374
> 
> later I also tried to recover postcopy in source monitor,
> (qemu) migrate_recover qemu+ssh://10.45.193.21/system

This command should be run on the destination side only.  The
"migrate-recover" command on the destination will open a new listening
port there, waiting for the migration to be continued.  After that
command, we need an extra command on the source to start the recovery:

  (HMP) migrate -r $URI

Here $URI should be the one you specified in the "migrate-recover"
command on the destination machine.

> Migrate recover can only be run when postcopy is paused.

I can try to fix up this error message.  Basically we shouldn't allow
this command to be run on the source machine.

> 
> Looks to be it is broken, please help me if I missed something
> in this test.

Btw, I have recently been writing a unit test for postcopy recovery,
which could be a good reference for the new feature.  Meanwhile I think
I should write up some documentation afterwards too.

Regards,

> 
> Thank you,
> Bala
> > 
> > Peter Xu (4):
> >   migration: delay postcopy paused state
> >   migration: move income process out of multifd
> >   migration: unbreak postcopy recovery
> >   migration: unify incoming processing
> > 
> >  migration/ram.h       |  2 +-
> >  migration/exec.c      |  3 ---
> >  migration/fd.c        |  3 ---
> >  migration/migration.c | 44 ++++++++++++++++++++++++++++++++++++-------
> >  migration/ram.c       | 11 +++++------
> >  migration/savevm.c    |  6 +++---
> >  migration/socket.c    |  5 -----
> >  7 files changed, 46 insertions(+), 28 deletions(-)
> > 
> > -- 
> > 2.17.1
> > 
> > 
> 

-- 
Peter Xu


* Re: [Qemu-devel] [PATCH v3 0/4] migration: unbreak postcopy recovery
  2018-07-02  8:46   ` Peter Xu
@ 2018-07-02  9:42     ` Balamuruhan S
  2018-07-02 10:18       ` Peter Xu
From: Balamuruhan S @ 2018-07-02  9:42 UTC (permalink / raw)
  To: Peter Xu; +Cc: qemu-devel

On Mon, Jul 02, 2018 at 04:46:18PM +0800, Peter Xu wrote:
> On Mon, Jul 02, 2018 at 01:34:45PM +0530, Balamuruhan S wrote:
> > On Wed, Jun 27, 2018 at 09:22:42PM +0800, Peter Xu wrote:
> > > v3:
> > > - keep the recovery logic even for RDMA by dropping the 3rd patch and
> > >   touching up the original 4th patch (current 3rd patch) to suit that [Dave]
> > > 
> > > v2:
> > > - break the first patch into several
> > > - fix a QEMUFile leak
> > > 
> > > Please review.  Thanks,
> > Hi Peter,
> 
> Hi, Balamuruhan,
> 
> Glad to know that you are trying this stuff out on ppc.  I think the
> major steps are correct, though...
> 

Thank you Peter for correcting my mistake, It works like a charm.
Nice feature!

Tested-by: Balamuruhan S <bala24@linux.vnet.ibm.com>

> > 
> > I have applied this patchset with upstream Qemu for testing postcopy
> > pause recover feature in PowerPC,
> > 
> > I used NFS shared qcow2 between source and target host
> > 
> > source:
> > # ppc64-softmmu/qemu-system-ppc64 --enable-kvm --nographic -vga none \
> > -machine pseries -m 64G,slots=128,maxmem=128G -smp 16,maxcpus=32 \
> > -device virtio-blk-pci,drive=rootdisk -drive \
> > file=/home/bala/sharing/hostos-ppc64le.qcow2,if=none,cache=none,format=qcow2,id=rootdisk \
> > -monitor telnet:127.0.0.1:1234,server,nowait -net nic,model=virtio \
> > -net user -redir tcp:2000::22
> > 
> > To keep the VM with workload I ran stress-ng inside guest,
> > 
> > # stress-ng --cpu 6 --vm 6 --io 6
> > 
> > target:
> > # ppc64-softmmu/qemu-system-ppc64 --enable-kvm --nographic -vga none \
> > -machine pseries -m 64G,slots=128,maxmem=128G -smp 16,maxcpus=32 \
> > -device virtio-blk-pci,drive=rootdisk -drive \
> > file=/home/bala/sharing/hostos-ppc64le.qcow2,if=none,cache=none,format=qcow2,id=rootdisk \
> > -monitor telnet:127.0.0.1:1235,server,nowait -net nic,model=virtio \
> > -net user -redir tcp:2001::22 -incoming tcp:0:4445
> > 
> > enabled postcopy on both source and destination from qemu monitor
> > 
> > (qemu) migrate_set_capability postcopy-ram on
> > 
> > From source qemu monitor,
> > (qemu) migrate -d tcp:10.45.70.203:4445
> 
> [1]
> 
> > (qemu) info migrate
> > globals:
> > store-global-state: on
> > only-migratable: off
> > send-configuration: on
> > send-section-footer: on
> > decompress-error-check: on
> > capabilities: xbzrle: off rdma-pin-all: off auto-converge: off
> > zero-blocks: off compress: off events: off postcopy-ram: on x-colo: off
> > release-ram: off block: off return-path: off pause-before-switchover:
> > off x-multifd: off dirty-bitmaps: off postcopy-blocktime: off
> > late-block-activate: off 
> > Migration status: active
> > total time: 2331 milliseconds
> > expected downtime: 300 milliseconds
> > setup: 65 milliseconds
> > transferred ram: 38914 kbytes
> > throughput: 273.16 mbps
> > remaining ram: 67063784 kbytes
> > total ram: 67109120 kbytes
> > duplicate: 1627 pages
> > skipped: 0 pages
> > normal: 9706 pages
> > normal bytes: 38824 kbytes
> > dirty sync count: 1
> > page size: 4 kbytes
> > multifd bytes: 0 kbytes
> > 
> > triggered postcopy from source,
> > (qemu) migrate_start_postcopy
> > 
> > After triggering postcopy from source, in target I tried to pause the
> > postcopy migration
> > 
> > (qemu) migrate_pause
> > 
> > In target I see error as,
> > error while loading state section id 4(ram)
> > qemu-system-ppc64: Detected IO failure for postcopy. Migration paused.
> > 
> > In source I see error as,
> > qemu-system-ppc64: Detected IO failure for postcopy. Migration paused.
> > 
> > Later from target I try for recovery from target monitor,
> > (qemu) migrate_recover qemu+ssh://10.45.70.203/system
> 
> ... here, is that URI for libvirt only?
> 
> Normally I'll use something similar to [1] above.
> 
> > Migrate recovery is triggered already
> 
> And this means that you have already sent one recovery command
> beforehand.  In the future we'd better allow the recovery command to
> be run more than once (in case the first one was mistyped...).
> 
> > 
> > but on the source it still remains in the postcopy-paused state,
> > (qemu) info migrate
> > globals:
> > store-global-state: on
> > only-migratable: off
> > send-configuration: on
> > send-section-footer: on
> > decompress-error-check: on
> > capabilities: xbzrle: off rdma-pin-all: off auto-converge: off
> > zero-blocks: off compress: off events: off postcopy-ram: on x-colo: off
> > release-ram: off block: off return-path: off pause-before-switchover:
> > off x-multifd: off dirty-bitmaps: off postcopy-blocktime: off
> > late-block-activate: off 
> > Migration status: postcopy-paused
> > total time: 222841 milliseconds
> > expected downtime: 382991 milliseconds
> > setup: 65 milliseconds
> > transferred ram: 385270 kbytes
> > throughput: 265.06 mbps
> > remaining ram: 8150528 kbytes
> > total ram: 67109120 kbytes
> > duplicate: 14679647 pages
> > skipped: 0 pages
> > normal: 63937 pages
> > normal bytes: 255748 kbytes
> > dirty sync count: 2
> > page size: 4 kbytes
> > multifd bytes: 0 kbytes
> > dirty pages rate: 854740 pages
> > postcopy request count: 374
> > 
> > later I also tried to recover postcopy from the source monitor,
> > (qemu) migrate_recover qemu+ssh://10.45.193.21/system
> 
> This command should be run on the destination side only.  Here the
> "migrate-recover" command on the destination will start a new
> listening port there, waiting for the migration to be continued.
> Then after that command we need an extra command on the source to
> start the recovery:
> 
>   (HMP) migrate -r $URI
> 
> Here $URI should be the one you specified in the "migrate-recover"
> command on the destination machine.
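To make the two-step flow above concrete, here is a minimal shell sketch that just builds the two monitor commands. The recover port (4446) is an assumption -- any free port on the destination works; the destination address is the one from the example setup above.

```shell
#!/bin/sh
# Build the two HMP commands for postcopy recovery.  The first is typed
# at the DESTINATION monitor (it opens a fresh listening port); the
# second at the SOURCE monitor (it reconnects to that port with -r).
recover_cmds() {
    dst_host=$1   # destination host, as reachable from the source
    port=$2       # a free port on the destination (assumed: 4446)
    echo "migrate_recover tcp:0:${port}"        # run on destination
    echo "migrate -r tcp:${dst_host}:${port}"   # then run on source
}

recover_cmds 10.45.70.203 4446
```

With the setup in this thread, the first command goes into the destination monitor (telnet port 1235) and the second into the source monitor (telnet port 1234).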
> 
> > Migrate recover can only be run when postcopy is paused.
> 
> I can try to fix up this error.  Basically we shouldn't allow this
> command to be run on the source machine.

Sure, :+1:

> 
> > 
> > It looks to be broken; please help me if I missed something
> > in this test.
> 
> Btw, I'm writing up a unit test for postcopy recovery recently, which
> could be a good reference for the new feature.  Meanwhile I think I
> should write up some documents too afterwards.

Fine, I am also working on writing test scenarios in tp-qemu using Avocado-VT
for postcopy pause/recover and multifd features.

-- Bala
> 
> Regards,
> 
> > 
> > Thank you,
> > Bala
> > > 
> > > Peter Xu (4):
> > >   migration: delay postcopy paused state
> > >   migration: move income process out of multifd
> > >   migration: unbreak postcopy recovery
> > >   migration: unify incoming processing
> > > 
> > >  migration/ram.h       |  2 +-
> > >  migration/exec.c      |  3 ---
> > >  migration/fd.c        |  3 ---
> > >  migration/migration.c | 44 ++++++++++++++++++++++++++++++++++++-------
> > >  migration/ram.c       | 11 +++++------
> > >  migration/savevm.c    |  6 +++---
> > >  migration/socket.c    |  5 -----
> > >  7 files changed, 46 insertions(+), 28 deletions(-)
> > > 
> > > -- 
> > > 2.17.1
> > > 
> > > 
> > 
> 
> -- 
> Peter Xu
> 


* Re: [Qemu-devel] [PATCH v3 0/4] migation: unbreak postcopy recovery
  2018-07-02  9:42     ` Balamuruhan S
@ 2018-07-02 10:18       ` Peter Xu
  0 siblings, 0 replies; 13+ messages in thread
From: Peter Xu @ 2018-07-02 10:18 UTC (permalink / raw)
  To: Balamuruhan S; +Cc: qemu-devel

On Mon, Jul 02, 2018 at 03:12:41PM +0530, Balamuruhan S wrote:
> On Mon, Jul 02, 2018 at 04:46:18PM +0800, Peter Xu wrote:
> > On Mon, Jul 02, 2018 at 01:34:45PM +0530, Balamuruhan S wrote:
> > > On Wed, Jun 27, 2018 at 09:22:42PM +0800, Peter Xu wrote:
> > > > v3:
> > > > - keep the recovery logic even for RDMA by dropping the 3rd patch and
> > > >   touch up the original 4th patch (current 3rd patch) to suit that [Dave]
> > > > 
> > > > v2:
> > > > - break the first patch into several
> > > > - fix a QEMUFile leak
> > > > 
> > > > Please review.  Thanks,
> > > Hi Peter,
> > 
> > Hi, Balamuruhan,
> > 
> > Glad to know that you are playing with this stuff on ppc.  I think
> > the major steps are correct, though...
> > 
> 
> Thank you, Peter, for correcting my mistake.  It works like a charm.
> Nice feature!
> 
> Tested-by: Balamuruhan S <bala24@linux.vnet.ibm.com>

Thanks!  Good to know that it worked.

> 
> > > 
> > > I have applied this patchset with upstream Qemu for testing postcopy
> > > pause recover feature in PowerPC,
> > > 
> > > I used NFS shared qcow2 between source and target host
> > > 
> > > source:
> > > # ppc64-softmmu/qemu-system-ppc64 --enable-kvm --nographic -vga none \
> > > -machine pseries -m 64G,slots=128,maxmem=128G -smp 16,maxcpus=32 \
> > > -device virtio-blk-pci,drive=rootdisk -drive \
> > > file=/home/bala/sharing/hostos-ppc64le.qcow2,if=none,cache=none,format=qcow2,id=rootdisk \
> > > -monitor telnet:127.0.0.1:1234,server,nowait -net nic,model=virtio \
> > > -net user -redir tcp:2000::22
> > > 
> > > To keep the VM under workload, I ran stress-ng inside the guest,
> > > 
> > > # stress-ng --cpu 6 --vm 6 --io 6
> > > 
> > > target:
> > > # ppc64-softmmu/qemu-system-ppc64 --enable-kvm --nographic -vga none \
> > > -machine pseries -m 64G,slots=128,maxmem=128G -smp 16,maxcpus=32 \
> > > -device virtio-blk-pci,drive=rootdisk -drive \
> > > file=/home/bala/sharing/hostos-ppc64le.qcow2,if=none,cache=none,format=qcow2,id=rootdisk \
> > > -monitor telnet:127.0.0.1:1235,server,nowait -net nic,model=virtio \
> > > -net user -redir tcp:2001::22 -incoming tcp:0:4445
> > > 
> > > enabled postcopy on both source and destination from qemu monitor
> > > 
> > > (qemu) migrate_set_capability postcopy-ram on
> > > 
> > > From source qemu monitor,
> > > (qemu) migrate -d tcp:10.45.70.203:4445
> > 
> > [1]
> > 
> > > (qemu) info migrate
> > > globals:
> > > store-global-state: on
> > > only-migratable: off
> > > send-configuration: on
> > > send-section-footer: on
> > > decompress-error-check: on
> > > capabilities: xbzrle: off rdma-pin-all: off auto-converge: off
> > > zero-blocks: off compress: off events: off postcopy-ram: on x-colo: off
> > > release-ram: off block: off return-path: off pause-before-switchover:
> > > off x-multifd: off dirty-bitmaps: off postcopy-blocktime: off
> > > late-block-activate: off 
> > > Migration status: active
> > > total time: 2331 milliseconds
> > > expected downtime: 300 milliseconds
> > > setup: 65 milliseconds
> > > transferred ram: 38914 kbytes
> > > throughput: 273.16 mbps
> > > remaining ram: 67063784 kbytes
> > > total ram: 67109120 kbytes
> > > duplicate: 1627 pages
> > > skipped: 0 pages
> > > normal: 9706 pages
> > > normal bytes: 38824 kbytes
> > > dirty sync count: 1
> > > page size: 4 kbytes
> > > multifd bytes: 0 kbytes
> > > 
> > > triggered postcopy from source,
> > > (qemu) migrate_start_postcopy
> > > 
> > > After triggering postcopy from source, in target I tried to pause the
> > > postcopy migration
> > > 
> > > (qemu) migrate_pause
> > > 
> > > In target I see error as,
> > > error while loading state section id 4(ram)
> > > qemu-system-ppc64: Detected IO failure for postcopy. Migration paused.
> > > 
> > > In source I see error as,
> > > qemu-system-ppc64: Detected IO failure for postcopy. Migration paused.
> > > 
> > > Later from target I try for recovery from target monitor,
> > > (qemu) migrate_recover qemu+ssh://10.45.70.203/system
> > 
> > ... here, is that URI for libvirt only?
> > 
> > Normally I'll use something similar to [1] above.
> > 
> > > Migrate recovery is triggered already
> > 
> > And this means that you have already sent one recovery command
> > beforehand.  In the future we'd better allow the recovery command to
> > be run more than once (in case the first one was mistyped...).
> > 
> > > 
> > > but on the source it still remains in the postcopy-paused state,
> > > (qemu) info migrate
> > > globals:
> > > store-global-state: on
> > > only-migratable: off
> > > send-configuration: on
> > > send-section-footer: on
> > > decompress-error-check: on
> > > capabilities: xbzrle: off rdma-pin-all: off auto-converge: off
> > > zero-blocks: off compress: off events: off postcopy-ram: on x-colo: off
> > > release-ram: off block: off return-path: off pause-before-switchover:
> > > off x-multifd: off dirty-bitmaps: off postcopy-blocktime: off
> > > late-block-activate: off 
> > > Migration status: postcopy-paused
> > > total time: 222841 milliseconds
> > > expected downtime: 382991 milliseconds
> > > setup: 65 milliseconds
> > > transferred ram: 385270 kbytes
> > > throughput: 265.06 mbps
> > > remaining ram: 8150528 kbytes
> > > total ram: 67109120 kbytes
> > > duplicate: 14679647 pages
> > > skipped: 0 pages
> > > normal: 63937 pages
> > > normal bytes: 255748 kbytes
> > > dirty sync count: 2
> > > page size: 4 kbytes
> > > multifd bytes: 0 kbytes
> > > dirty pages rate: 854740 pages
> > > postcopy request count: 374
> > > 
> > > later I also tried to recover postcopy from the source monitor,
> > > (qemu) migrate_recover qemu+ssh://10.45.193.21/system
> > 
> > This command should be run on the destination side only.  Here the
> > "migrate-recover" command on the destination will start a new
> > listening port there, waiting for the migration to be continued.
> > Then after that command we need an extra command on the source to
> > start the recovery:
> > 
> >   (HMP) migrate -r $URI
> > 
> > Here $URI should be the one you specified in the "migrate-recover"
> > command on the destination machine.
> > 
> > > Migrate recover can only be run when postcopy is paused.
> > 
> > I can try to fix up this error.  Basically we shouldn't allow this
> > command to be run on the source machine.
> 
> Sure, :+1:
> 
> > 
> > > 
> > > It looks to be broken; please help me if I missed something
> > > in this test.
> > 
> > Btw, I'm writing up a unit test for postcopy recovery recently, which
> > could be a good reference for the new feature.  Meanwhile I think I
> > should write up some documents too afterwards.
> 
> Fine, I am also working on writing test scenarios in tp-qemu using Avocado-VT
> for postcopy pause/recover and multifd features.

Nice!  I don't know Avocado much internally, but it'll definitely be
good if we have more tests to cover it so we'll know about its
breakage asap (and the same applies to multifd, for sure).

Regards,

-- 
Peter Xu


* Re: [Qemu-devel] [PATCH v3 0/4] migation: unbreak postcopy recovery
  2018-06-27 13:22 [Qemu-devel] [PATCH v3 0/4] migation: unbreak postcopy recovery Peter Xu
                   ` (4 preceding siblings ...)
  2018-07-02  8:04 ` [Qemu-devel] [PATCH v3 0/4] migation: unbreak postcopy recovery Balamuruhan S
@ 2018-07-06  8:47 ` Dr. David Alan Gilbert
  5 siblings, 0 replies; 13+ messages in thread
From: Dr. David Alan Gilbert @ 2018-07-06  8:47 UTC (permalink / raw)
  To: Peter Xu; +Cc: qemu-devel, Juan Quintela

Queued

* Peter Xu (peterx@redhat.com) wrote:
> v3:
> - keep the recovery logic even for RDMA by dropping the 3rd patch and
>   touch up the original 4th patch (current 3rd patch) to suit that [Dave]
> 
> v2:
> - break the first patch into several
> - fix a QEMUFile leak
> 
> Please review.  Thanks,
> 
> Peter Xu (4):
>   migration: delay postcopy paused state
>   migration: move income process out of multifd
>   migration: unbreak postcopy recovery
>   migration: unify incoming processing
> 
>  migration/ram.h       |  2 +-
>  migration/exec.c      |  3 ---
>  migration/fd.c        |  3 ---
>  migration/migration.c | 44 ++++++++++++++++++++++++++++++++++++-------
>  migration/ram.c       | 11 +++++------
>  migration/savevm.c    |  6 +++---
>  migration/socket.c    |  5 -----
>  7 files changed, 46 insertions(+), 28 deletions(-)
> 
> -- 
> 2.17.1
> 
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

